[ { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-04-24-contract-member-support/", "title": "Contract Member Support", "subtitle":"", "rank": 1, "lastmod": "2023-04-24", "lastmod_ts": 1682294400, "section": "Jobs", "tags": [], "description": "Request for services: Member Support Contractor Location: Remote\nWe’re looking for contractors able to work remotely and help us to welcome new members from around the world. There is no set schedule and contractors would bill their hours monthly.\nCrossref receives over 200 new applications every month from organisations who produce scholarly and professional materials and content.\nKey responsibilities Manage queries from applicants and members via our Zendesk support desk and potentially other channels.", "content": "Request for services: Member Support Contractor Location: Remote\nWe’re looking for contractors able to work remotely and help us to welcome new members from around the world. There is no set schedule and contractors would bill their hours monthly.\nCrossref receives over 200 new applications every month from organisations who produce scholarly and professional materials and content.\nKey responsibilities Manage queries from applicants and members via our Zendesk support desk and potentially other channels. Follow the administrative process for new applicants, such as: Check the details in application forms that come via our website. Set them up in our Customer Relationship Management (CRM) system. We use SugarCRM. Send them an invoice for the first year of membership, and once this is paid\u0026hellip; Set up and share their DOI prefix and account credentials. Ensure that the information in our CRM is kept clean and up-to-date. Work closely with the Member Experience team and our finance colleagues. About you Organized with an eye for detail Happy with data entry and maintenance Comfortable following processes and taking on new systems Friendly and clear communication skills (in English) About Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince 2000 we have grown from strength to strength and now have over 15,000 members across 140 countries, and thousands of tools and services relying on our metadata.\nHow to respond A statement of interest that includes:\nExamples of similar work (and/or your CV) References from previous work Hourly rate Please send your response, statement of interest, and resume to: jobs@crossref.org.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. 
Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Request for services: Member Support Contractor","Key responsibilities","About you","About Crossref","How to respond"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-02-08-contract-software-development/", "title": "Request for Services - Software Development Contracting", "subtitle":"", "rank": 1, "lastmod": "2023-02-08", "lastmod_ts": 1675814400, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Request for services: Software Development Contracting Location: Remote\nDuration: Until completion of the specified software.\nProject summary Our new generation of REST API features requires us to build the \u0026ldquo;Metadata Rendering Framework\u0026rdquo;. This is a subsystem that coordinates the rendering of bibliographic metadata in a variety of formats. We are looking for a contract software developer to help us build this.\nThe Rendering Framework should maintain a set of rendered metadata objects in S3.", "content": " Applications for this position are closed. Request for services: Software Development Contracting Location: Remote\nDuration: Until completion of the specified software.\nProject summary Our new generation of REST API features requires us to build the \u0026ldquo;Metadata Rendering Framework\u0026rdquo;. This is a subsystem that coordinates the rendering of bibliographic metadata in a variety of formats. We are looking for a contract software developer to help us build this.\nThe Rendering Framework should maintain a set of rendered metadata objects in S3. It should trigger content to be re-rendered when any relevant change occurs in the database. It should also provide a simple REST API interface for retrieving these S3 objects. The code that renders each format already exists, so complex data modeling is not required, though an understanding of the metadata is necessary.\nThis module will be implemented within our existing Kotlin codebase. It will integrate with other pre-existing software components written in Kotlin, Java and Clojure.\nHigh-level specifications are included here for scoping purposes. We expect an iterative approach and we will supply feedback and guidance. Code will be reviewed by Crossref developers.\nDeliverables You will report to the Head of Software Development and collaborate with a member of the Product Team.\nThe initial scope of the project should result in the following deliverables. This may evolve as we iterate on the work. There may also be subsequent projects for which specs and deliverables will be defined and agreed upon by both parties.\nExtensible rendering framework built. Initial implementations / integrations for four data formats (application/citeproc+json, application/vnd.crossref.member+json, application/vnd.crossref.matching.grant+json, application/vnd.crossref.matching.citation+json). Full tests as part of our existing BDD / Cucumber suite. Code must meet our standards (SONAR).\nSkills Understanding of bibliographic metadata formats such as Citeproc-JSON. Experience with Kotlin and Spring Boot. Experience with Clojure. Experience writing BDD tests with Cucumber. Open source software development practices. Timeline We would like responses by 15th February. Work can commence immediately. 
Because of the nature of software projects, we do not expect an estimate, but we expect this may take on the order of weeks.\nTo respond Please send a CV and a cover letter (each no longer than 2 pages) to share how you meet the requirements of the contract role, and a rate sheet or fee schedule to jobs@crossref.org.\nAbout Crossref Crossref is a non-profit membership organization that exists to make scholarly communications better. We make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nCrossref is at its core a community organization with 17,500 members across 148 countries (and counting)! We’re committed to lowering barriers for global participation in the research enterprise, we’re funded by members and subscribers, and we engage regularly with them in multiple ways from webinars to working groups.\nCrossref operates and continuously develops an impressive portfolio of services, products and features to support scholarly communication and infrastructure organisations to contribute to, maintain and preserve robust documentation of the scholarly process. From registration forms and APIs, to complex systems of linking scholarly works with references or citations, and metadata retrieval, our busy Product Team continuously develops and refines these metadata tools.\nSpecification outline The following is an indicative specification for scoping purposes. We expect the code to be iteratively specified by a BDD suite.\nBackground The Item Graph is the database that powers the next generation of Crossref services. \u0026ldquo;Items\u0026rdquo; in the graph are things such as Works, Members, Funders, etc. Items are also used to represent reified relationships (such as citations, which themselves have metadata).\nThe Item Tree Retriever is an existing module that can retrieve subgraphs from the Item Graph in connection with an Item. For example, for a Work it would retrieve citations and other assertions. It works in a generic way, following links to a given depth, using a given strategy.\nEach Item may have a number of natural representations. For example, a Work could be rendered into Citeproc-JSON for end-user consumption or a specialized representation for a search index. A Member will be rendered into our existing JSON format. We have prior code to run these translations in some cases, detailed below.\nThe Content Rendering Framework will be a module that translates Item Trees into content representations and keeps track of them.\nMedia Types The Content Rendering Framework will have a registry of Media Types (aka MIME types, per the IANA vocabulary). The initial deliverable will include:\napplication/citeproc+json application/vnd.crossref.member+json application/vnd.crossref.matching.grant+json application/vnd.crossref.matching.citation+json Representation Storage The Content Rendering Framework will store rendered representations of Items in S3 object storage. It will support the storing and retrieval of rendered content by Item ID and Media Type. It will maintain an ETag value for stored versions so we can easily detect when there is a change to the rendered representation.\nRenderer The Renderer will render Item Trees into requested Media Types. 
It will dispatch to relevant rendering code.\nTrigger The Content Rendering Framework will keep track of Items\u0026rsquo; rendered representations. For every representation it will indicate whether it is considered \u0026lsquo;stale\u0026rsquo;, i.e. potentially in need of re-rendering.\nIt will watch the Property and Relationship Assertion tables. When an assertion is made in connection with an Item, that Item is marked as being stale and needing re-rendering. For example, when the title of a Work changes, it should be marked for re-rendering. When a member name changes, every Work that\u0026rsquo;s connected to it should be marked for re-rendering.\nA continual process will re-render stale Representations. It will compare the ETag of the content with the stored item and only update it if the re-render resulted in a change.\nThis process will be based on an SQS queue. This process will be a configured profile of the running service, allowing us to scale out rendering on demand.\nCollections A Collection is a named set of Items. For example, \u0026ldquo;the set of Works\u0026rdquo;, \u0026ldquo;the set of Members\u0026rdquo;, etc. When Items are ingested, the ingester code can mark Items as belonging to a given set.\nA Collection is also associated with a configuration that indicates the set of content types that items should be rendered to.\nVarious modules that are responsible for ingesting Items will indicate that Items they ingest belong to a given set.\nVersions API functionality A simple REST API endpoint will list the versions for each Item for a given format, allowing users to see the history of a rendered item.\nA similar endpoint will be available for Collections, which will provide a list of updates for all Items in that collection.\n", "headings": ["Request for services: Software Development Contracting","Project summary","Deliverables","Skills","Timeline","To respond","About Crossref","Specification outline","Background","Media Types","Representation Storage","Renderer","Trigger","Collections","Versions API functionality"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-09-30-contract-technical-support/", "title": "Technical Support Contractor", "subtitle":"", "rank": 1, "lastmod": "2023-01-05", "lastmod_ts": 1672876800, "section": "Jobs", "tags": [], "description": "Applications for this contractor position closed 2022-12-31. Request for services: contract technical support Come and work with us as an independent Technical Support Contractor. It’ll be fun!\nLocation: Remote\nAbout the contractor role The Technical Support Contractor will work closely with our Member Experience team, part of Crossref’s Outreach team, an eighteen-strong distributed team with members across Africa, Asia, Europe, and the US. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways.", "content": " Applications for this contractor position closed 2022-12-31. Request for services: contract technical support Come and work with us as an independent Technical Support Contractor. It’ll be fun!\nLocation: Remote\nAbout the contractor role The Technical Support Contractor will work closely with our Member Experience team, part of Crossref’s Outreach team, an eighteen-strong distributed team with members across Africa, Asia, Europe, and the US. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. 
We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand, in multiple languages. We’re looking for a Technical Support Contractor to provide front-line help to our international community of publishers, librarians, funders, researchers and developers on a range of services that help them deposit, find, link, cite, and assess scholarly content.\nWe’re looking for an independent contractor able to work remotely. There is no set schedule and contractors bill hours monthly.\nScope of work Replying to and solving community queries using the Zendesk support system. Using our various tools and APIs to find the answers to these queries, or pointing users to support materials that will help them. Working with colleagues on particularly tricky tickets, escalating as necessary. Working efficiently but also kindly and with empathy with our very diverse, global community. About the team You’ll be working closely with nine other technical and membership support colleagues to provide support and guidance for people with a wide range of technical experience. You’ll help our community create and retrieve metadata records with tools ranging from simple user interfaces to robust APIs.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nWe’re a small but mighty group working with over 17,000 members from 146 countries, and we have thousands of tools and services relying on our metadata. We take our work seriously but usually not ourselves.\nHow to respond We\u0026rsquo;ve extended the deadline: responses should be submitted by 31 December and should include:\nA statement of interest that includes:\nExamples of similar work (and/or your CV) References from previous work Hourly rate Please send your response, statement of interest, and resume to: jobs@crossref.org.\n", "headings": ["Request for services: contract technical support","About the contractor role","Scope of work","About the team","About Crossref","How to respond"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-11-21-community-engagement-manager/", "title": "Community Engagement Manager", "subtitle":"", "rank": 1, "lastmod": "2022-11-21", "lastmod_ts": 1668988800, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-12-11. Community Engagement Manager Come and work with us as a Community Engagement Manager. It’ll be fun!\nJob title: Community Engagement Manager\nLocation: Remote and global (with regular working in European and some Asia Pacific time zones)\nRemuneration: €58,000 – €70,000, or local equivalent, depending on experience\nReports to: Head of Community Engagement and Communications\nApplication timeline: Advertise and recruitment in November/December, Start date Jan/Feb 2023", "content": " Applications for this position closed 2022-12-11. Community Engagement Manager Come and work with us as a Community Engagement Manager. 
It’ll be fun!\nJob title: Community Engagement Manager\nLocation: Remote and global (with regular working in European and some Asia Pacific time zones)\nRemuneration: €58,000 – €70,000, or local equivalent, depending on experience\nReports to: Head of Community Engagement and Communications\nApplication timeline: Advertise and recruitment in November/December, Start date Jan/Feb 2023\nCrossref is a non-profit membership organization that exists to make scholarly communications better. We make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nIf you’ve got a passion for equity and diversity and would like to use your engagement and organizational skills to help strengthen our community and to help shed light on the inner workings behind the progress of science, this position may be for you. Read on and apply by December 11.\nCrossref is at its core a community organization with 17,000 members across 146 countries (and counting)! We’re committed to lowering barriers for global participation in the research enterprise, we’re funded by members and subscribers, and we engage regularly with them in multiple ways from webinars to working groups.\nThe organizations in our community are involved in documenting the progress of scholarship. We provide infrastructure to preserve and curate metadata – that is data underpinning and describing scholarly outputs and processes (such as authorship, funding, modifications etc., and links between these works). Our community develops and promotes standards and best practices for such documentation in keeping with the changing world of scholarly infrastructure and communication.\nThe Community Engagement Manager’s key responsibility is management of our well-established Ambassadors program, which has a global reach and an important role in informing and engaging our membership. 
It’s been growing over the past five years and there’s an opportunity for the new manager to build upon its success, innovate and shape it for the future.\nKey responsibilities of the role: Strategically manage the Crossref Ambassadors program – you will build and weave together strong relationships with and between our volunteers, and develop activities and resources that help equip, mobilize and empower Ambassadors to engage their respective communities with Crossref’s messages and initiatives\nDesigning and coordinating activities that facilitate access to and understanding of Crossref’s membership and services, such as translations, community platforms, events, and other (as appropriate), in partnership with the other Community Engagement Manager, the Membership Team and other community partners, including the support for the nascent Publishers Learning and Community Exchange (PLACE) platform\nIdentifying and creating opportunities to listen to the sentiment and feedback of the Crossref’s community, sharing community insights with colleagues\nRepresenting Crossref and using the role to bring people together, attending and speaking at relevant industry events, online and in-person\nBuilding and managing relationships with community partners and collaborators worldwide to help progress Crossref’s mission\nCreating content – such as writing articles and blogs, creating slides and diagrams\nContribute to other outreach activities The role is based within the Community Engagement and Communications team. We work collaboratively across a variety of projects and programmes. We adopt an approachable, community-appropriate tone and style in our communications. We’re looking to re-engage with our community through face-to-face opportunities as well as online, so the post-holder will have their share of travel (accordingly with our latest thinking on travel and sustainability).\nOur primary aim is to engage colleagues from the member organizations and other stakeholders to be actively involved in capturing documentation of the scholarly progress and making it transparent. This contributes to co-creating a robust research nexus. As part of the wider Outreach department at Crossref, we seek to encourage adoption and development of best practices in scholarly publishing and communication with regards to metadata and permanence of scholarly record. Colleagues across the organization are helpful, easy-going and supportive, so if you’re open minded and ready to work as part of the team and across different teams, you will fit right in. Watch the recording of our recent Annual Meeting to learn more about the current conversations in our community.\nAbout you As scientific community engagement is an emerging profession, practical experience in this area is more important to us than traditional qualifications. 
It’s best if you can demonstrate that you have most of these characteristics:\nCollaborative attitude\nCuriosity to explore complex concepts and to learn new skills and perspectives\nAbility to translate complex ideas into accessible narratives in English\n3+ years experience of community building and management and/or of planning, executing and evaluating participatory initiatives\nDemonstrable skills in group facilitation and stakeholder relationships management\nTrack record of programme development and improvement, working to budgets\nConfidence in public speaking in-person and online, including delivery of webinars/workshops\nEvent and project management experience\nTried and tested strategies for ensuring that your programs are equitable, diverse and inclusive\nAwareness of current trends in academic culture and scholarly communications\nIt would be a plus if you also have any of the following:\nUnderstanding of matters concerning metadata\nExperience or background in communications or campaign management\nExperience of working in a multicultural setting\nExperience of moderating an online discussion forum, blogging platform or similar\nAbility to communicate in a language other than English\nWhat it’s like working at Crossref We’re about 45 staff and now ‘remote-first’ although we have optional offices in Oxford, UK, and Boston, USA. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organization. Check out the organization chart.\nWe are active supporters of ongoing professional development opportunities and promote self- learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThinking of applying? We encourage applications from excellent candidates wherever you might be in the world, especially from people with backgrounds historically under-represented in research and scholarly communications. Our team is fully remote and distributed across time zones and continents. This role will require regular working in European and Asia-Pacific Time zones. Our main working language is English, but there are many opportunities in this job to use other tongues if you’re able. If anything here is unclear, feel free to contact Kora Korzec, the hiring manager, on kkorzec@crossref.org.\nTo apply, please send a CV and a cover letter to share how you meet the requirements of the role to jobs@crossref.org. One of the best ways of offering evidence of your suitability is with an example of a relevant project you’re particularly proud of, whether from professional, voluntary or personal experience. If possible, we’d also love to see an example of content you’ve created – a link to a recording of your talk, blog post, infographic, or something else. As it’s essential for the role that you have access to reliable high speed internet connection, please indicate clearly in your letter whether that is the case.\nLastly, if you don’t meet the criteria we listed here, but are confident you’d be natural in delivering the key responsibilities of the role, please explain what strengths you would be bringing to this job.\nWe aim to start reviewing applications on December 12. 
Please strive to send your documents to jobs@crossref.org by then.\nThe role will report to Kora Korzec, Head of Community Engagement and Communications at Crossref, and she will review all applications along with Michelle Cancel, our HR Manager, and Ginny Hendricks, Director of Member \u0026amp; Community Outreach.\nWe intend to invite selected candidates to a brief first interview to talk about the role as soon as possible following review. Following those, shortlisted candidates will be invited to an interview taking place in early to mid-January. The interview will include some exercises you’ll have a chance to prepare for. All interviews will be held remotely.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you!\n", "headings": ["Community Engagement Manager","Key responsibilities of the role:","Contribute to other outreach activities","About you","What it’s like working at Crossref","Thinking of applying?","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-12-22-contract-product-communications-support/", "title": "Product Communications Support Contractor", "subtitle":"", "rank": 1, "lastmod": "2022-10-28", "lastmod_ts": 1666915200, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Request for services: Contract for Product Communications Support Come and work with us as an independent Product Communications Support Contractor. It’ll be fun!\nLocation: Remote\nDuration of contract: 12 months\nAbout the contractor role We require services for product communications to support the Community Engagement and Communications team in liaising effectively with both Product and Outreach teams, to keep our community abreast of progress, and to encourage broad adoption of existing and new tools.", "content": " Applications for this position are closed. Request for services: Contract for Product Communications Support Come and work with us as an independent Product Communications Support Contractor. It’ll be fun!\nLocation: Remote\nDuration of contract: 12 months\nAbout the contractor role We require services for product communications to support the Community Engagement and Communications team in liaising effectively with both Product and Outreach teams, to keep our community abreast of progress, and to encourage broad adoption of existing and new tools. 
You will be collaborating with Community Engagement and Communications colleagues to devise effective ways of engaging our members and other stakeholders with the ever-changing functionality of the tools we provide.\nIf you’d like to use engaging product communications to help organisations make scholarly outputs more discoverable, and support the integrity of the scholarly record, this freelance opportunity may be for you.\nScope of work Identify opportunities and create campaigns to highlight less well-known and changing functionality of existing products and services, and, working with product managers, create and manage associated communications and launch plans. Support the Events \u0026amp; Communications Manager in managing the content calendar, ensuring a balanced proportion of product communications in the context of overall outreach and engagement activities. Support Crossref in soliciting community feedback and—leveraging others’ insights across the organisation—tailor effective ways of engaging our diverse and global audiences. Translate complex technical information into actionable and mobilising written language, with deliverables such as keeping our slide library up to date with relevant stories, facts, and figures. Collaborate with subject-matter experts to update each \u0026lsquo;Service\u0026rsquo; page on our website, integrating them with project tracking tools to make these sections dynamic and current. Work with the full range of digital communications to engage our community. Note we use Act-On (email), Hugo (website/blog), Discourse (forum), Trello (org-wide roadmap), and Jira (product development and R\u0026amp;D). Deliverables Development and deployment of audience-centred communications resources to engage the Crossref community with relevant services and maximise usage as appropriate, in support of the Managed Member Journey (including but not limited to communications about metadata completeness, registering references, use of Crossmark, relationships metadata). Campaign plans and execution in support of the launch of new and updated services, such as changes to Crossref APIs and the grant registration form. User and audience research for service updates under development, such as the Relationships API and new Participation Reports. Delivery of online showcase and support events for new and updated Crossref services. About the team The contractor will work closely with Kora Korzec, Head of Community Engagement and Communications, and will collaborate with the broader Outreach group on a variety of projects and programmes to augment our product communications, using a full range of digital communications and leveraging event opportunities, to maximise adoption of tools and best practice. We adopt an approachable, community-appropriate tone and style in our communications. We’re looking to re-engage with our community through face-to-face opportunities as well as online, so some travel may be involved in this role (according to our latest thinking on travel and sustainability).\nOur primary aim is to engage colleagues from the member organizations and other stakeholders to be actively involved in capturing documentation of the scholarly progress and making it transparent. This contributes to co-creating a robust research nexus. As part of the wider Outreach group at Crossref, we seek to encourage adoption and development of best practices in scholarly publishing and communication with regards to metadata and permanence of scholarly record. 
Colleagues across the organization are helpful, easy-going and supportive, and you’d be expected to work collaboratively with different teams. You can also watch the recording of our recent Annual Meeting to learn more about the current conversations in our community.\nAbout Crossref Crossref is a non-profit membership organization that exists to make scholarly communications better. We make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nCrossref is at its core a community organization with 17,500 members across 148 countries (and counting)! We’re committed to lowering barriers for global participation in the research enterprise, we’re funded by members and subscribers, and we engage regularly with them in multiple ways from webinars to working groups.\nCrossref operates and continuously develops an impressive portfolio of services, products and features to support scholarly communication and infrastructure organisations to contribute to, maintain and preserve robust documentation of the scholarly process. From registration forms and APIs, to complex systems of linking scholarly works with references or citations, and metadata retrieval, our busy Product Team continuously develops and refines these metadata tools.\nAbout You 3+ years of experience in technical customer service/marketing/product communications Collaborative attitude, pragmatic and proactive approach Excellent communications skills in English (we’re always happy to hear if you’re able to communicate in other languages too) Ability to translate complex technical information into accessible narratives in English Experience of content creation and leveraging diverse digital engagement and communications channels to get a message across Experience of organising and executing online events Demonstrable effective project management skills Good understanding of audience segmentation; experience of doing it would be even better Good working understanding of product development life cycle Experience working in academia or scholarly communications is nice to have but not required Experience conducting customer or user research (qualitative or quantitative) would be useful but not essential How to respond If you’ve got a passion for equity and diversity and would like to use your engagement and communications skills to help our community shed light on the inner workings behind the progress of science, we encourage you to read on and respond by February 5, 2023.\nPlease send a CV and a cover letter (each no longer than 2 pages) to share how you meet the requirements of the contract role, accompanied by a brief portfolio of relevant work, and a rate sheet or fee schedule to jobs@crossref.org. One of the best ways of offering evidence of your suitability in the letter is with an example of a relevant project, highlighting your contributions to that project that showcase relevant skills. Nota bene, if you’re including hyperlinks in any of your documents to things available online, please ensure these can be accessed by third parties, and give some context as to what was your role in creating it if not clearly stated on the material itself. 
As it’s essential for this work that you have access to reliable high speed internet connection, please indicate clearly in your letter whether that is the case.\nWe intend to start reviewing responses on February 6, 2023 and contact the selected profiles shortly after to start conversations.\n", "headings": ["Request for services: Contract for Product Communications Support","About the contractor role","Scope of work","Deliverables","About the team","About Crossref","About You","How to respond"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/members-area/", "title": "Hello, members", "subtitle":"", "rank": 4, "lastmod": "2022-10-19", "lastmod_ts": 1666137600, "section": "Hello, members", "tags": [], "description": "Welcome to the Crossref member\u0026rsquo;s area Haven\u0026rsquo;t joined us yet? You need to be a member of Crossref in order to get a DOI prefix so you can create Crossref DOIs and register content. Membership allows you to connect your content with a global network of online scholarly research, currently over 17,000 other organizational members from 140 countries. It’s so much more than just getting a DOI.\nPlease note: You don’t need to be a member to just use others’ metadata - if that\u0026rsquo;s all you need, read more about our open metadata retrieval tools.", "content": "Welcome to the Crossref member\u0026rsquo;s area Haven\u0026rsquo;t joined us yet? You need to be a member of Crossref in order to get a DOI prefix so you can create Crossref DOIs and register content. Membership allows you to connect your content with a global network of online scholarly research, currently over 17,000 other organizational members from 140 countries. It’s so much more than just getting a DOI.\nPlease note: You don’t need to be a member to just use others’ metadata - if that\u0026rsquo;s all you need, read more about our open metadata retrieval tools.\nApply to become a Crossref member If you haven\u0026rsquo;t joined Crossref yet, you can read more about membership and apply to join here. New member? Start registering your content Follow our new member setup guide.\nGet help You can get help from our detailed support docs, request support from our small team, or head to the community forum to ask others (there is a specific category for new members).\nSome key documentation that you might be interested in:\nHow to construct your DOIs Web deposit form to manually register your metadata records Manage your records in our Content Registration system Verify your registration Understand the reports that we send by email Add references to your metadata Been a member for a while? Alongside the basics of registering your content, there are other services you can take advantage of:\nFunder Registry Similarity Check Cited by Crossmark Reference Linking It\u0026rsquo;s also important to keep the metadata of your existing records up to date - more on maintaining your metadata. And finally, don\u0026rsquo;t forget to include references in the content you register with us.\nMaintaining your membership There are several things you need to do in order to maintain your membership.\nPay your invoices - more on our billing FAQs page. Plan any platform migrations carefully - read our platform migration guide. Let us know if any of your contact details change Continue to meet your member obligations - you can find a reminder of these here. Respond to the reports we send you and fix any errors - more on reports. Get more involved Sign up to our bi-monthly newsletter. 
You can read the latest issue here. Vote in our board elections - more about our board and governance Attend a webinar or event - our latest events Join an advisory group or working group - find out more Help and advise other members on our community forum Cancel your membership By committing to our membership terms, you’ve committed to the long-term stewardship of your metadata and content. However, there are sometimes reasons why members need to cancel. It\u0026rsquo;s important that you tell us if you wish to cancel - otherwise we\u0026rsquo;ll continue to send you annual membership fee invoices and you\u0026rsquo;ll continue to be responsible for them. Find out more.\n", "headings": ["Welcome to the Crossref member\u0026rsquo;s area","Haven\u0026rsquo;t joined us yet?","Apply to become a Crossref member","New member?","Start registering your content","Get help","Been a member for a while?","Maintaining your membership","Get more involved","Cancel your membership"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-08-02-site-reliability-engineer/", "title": "Site Reliability Engineer", "subtitle":"", "rank": 1, "lastmod": "2022-08-03", "lastmod_ts": 1659484800, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-08-23. ## Come and work with us as a **Site Reliability Engineer**. Help us to build and run the infrastructure that underlies the global scholarly communications ecosystem.\nLocation: Remote. But we are looking for somebody in the UTC, UTC+1 Time zones (e.g., Ireland, UK, Scandinavia, Central Europe, West/Central Africa). Salary: Between 101K-125K EUR (or local equivalent) depending on experience and location. Benchmarked every two years.", "content": " Applications for this position closed 2022-08-23. ## Come and work with us as a **Site Reliability Engineer**. Help us to build and run the infrastructure that underlies the global scholarly communications ecosystem.\nLocation: Remote. But we are looking for somebody in the UTC, UTC+1 Time zones (e.g., Ireland, UK, Scandinavia, Central Europe, West/Central Africa). Salary: Between 101K-125K EUR (or local equivalent) depending on experience and location. Benchmarked every two years. Excellent benefits. Reports to: Head of Infrastructure Services. Closing date: August 23rd, 2022. About the role Crossref is looking for a talented Site Reliability Engineer to help us optimize and evolve our infrastructure services.\nWe\u0026rsquo;re looking for a new member of our technology team who can bring experience, leadership, and help us solve some interesting operations and development challenges. Crossref operates the service that connects thousands of publishers, millions of articles and research content, and serves a diverse set of communities within scholarly publishing, research and beyond.\nYou will report to the head of infrastructure services and will work closely with one systems administrator and extensively with the software development, R\u0026amp;D, and product teams also.\nKey responsibilities The infrastructure services group is primarily responsible for Crossref\u0026rsquo;s infrastructure services. That is, central, crosscutting tools and systems that are used by our software development group as the common foundation we use for delivering services to our members and the broader research community. 
In other words- you will be building, deploying, monitoring and managing tools and services used by other developers.\nYou will be responsible for ensuring that these infrastructure services are reliable and responsive as well as making sure they are able to evolve quickly to support the new requirements and new services that Crossref is developing on behalf of its membership.\nYour challenge will be to accomplish this, whilst simultaneously helping to drive the modernization of our current software stack, infrastructure, and software engineering culture. The entire technology team is undertaking a migration from a mostly self-hosted, manually-managed, and manually-tested environment, to a cloud-based system and the SRE tools, processes and culture which that entails.\nWe currently use a blend of AWS, Docker, Terraform, self-hosted VMWare, Elastic Search, Kafka and more. Most of our codebases are written in Java, Clojure, and Python.\nThere are a lot of skills that we are looking for, but we don’t expect to find a purple unicorn. Our primary criterion is that you have a track record of being able to deliver projects using a variety of languages, frameworks and development paradigms.\nBut you get double bonus points if you have experience with:\nImmutable infrastructure. Virtualization and containerization of legacy code bases. Configuration management. Security infrastructure. Automation of development. Site monitoring and alerting. Web services software development. Transitioning on-premise datacentre to the cloud. In-depth knowledge of one or more cloud providers. And it would be very useful if you had a subset of the following skills:\nContainerisation using ECS/Docker. Core AWS Infrastructure including EC2, VPC, S3, RDS, IAM, Route53 and Cloudfront. Infrastructure configuration, management and orchestration tools (such as Terraform, Kubernetes, CloudFormation, Ansible, Salt, or equivalents). Java. High proficiency in at least one other language (e.g. Python, Clojure). Extensive experience with SQL, particularly PostgreSQL, MySQL or Oracle. Elasticsearch, Solr, Lucene, or similar. Distributed logging and monitoring frameworks. Continuous Integration, continuous delivery frameworks. Modern, HTTP-based API design and implementation. Experience with open source development. Experience with agile development methodologies. Experience with XML- particularly with mixed content models. And please note that this is not a back-office position. We believe that it is vital that the entire technical team develops an understanding of our members, the broader community and their needs. Without this kind of empathy, we cannot add value to our services. As such, you will also find yourself working closely with the product and outreach teams.\nLocation \u0026amp; travel requirements This is a remote position. The technology team currently has members working in the UK, Europe and the east coast of the US. As a remote-first organization, we are not bound to a specific location.\nRemote workers should expect they will need to visit an office approximately 5 days a quarter along with the travel (possibly international) which that entails. If you work from an office you will be expected to travel internationally for ~ 5 days once a year. In either case, travel can increase should you have an interest in representing Crossref at community events.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. 
We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 12,000 members across 121 countries, and thousands of tools and services relying on our metadata.\nWe can offer the successful candidate a challenging and fun environment to work in. We’re fewer than 40 professionals but together we are dedicated to our global mission. We are constantly adapting to ensure we get there, and we don’t tend to take ourselves too seriously along the way.\nEqual opportunities commitment Crossref is an equal opportunity employer. We believe that diversity and inclusion among our staff is critical to our success as a global organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\nTo apply Send your CV and covering letter via email to:\nStewart Houten, Head of Infrastructure Services\njobs@crossref.org\n", "headings": ["About the role","Key responsibilities","Location \u0026amp; travel requirements","About Crossref","Equal opportunities commitment","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-07-29-human-resources-manager/", "title": "Human Resources Manager", "subtitle":"", "rank": 1, "lastmod": "2022-07-29", "lastmod_ts": 1659052800, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-08-23. Join Crossref as our Human Resources Manager. It\u0026rsquo;ll be fun. No, really! Location: Remote Salary: Between 75K-90K USD (or currency equivalent) depending on experience and location. Benchmarked every two years. Excellent benefits. Reports to: Director of Finance \u0026amp; Operations Application timeline: We will begin to review applications the week of August 22nd About the position Human resources is often seen solely as an operational necessity, and it is, but at Crossref it is also pivotal to our culture and our journey toward a remote-first, global, and transparent community-led organization.", "content": " Applications for this position closed 2022-08-23. Join Crossref as our Human Resources Manager. It\u0026rsquo;ll be fun. No, really! Location: Remote Salary: Between 75K-90K USD (or currency equivalent) depending on experience and location. Benchmarked every two years. Excellent benefits. Reports to: Director of Finance \u0026amp; Operations Application timeline: We will begin to review applications the week of August 22nd About the position Human resources is often seen solely as an operational necessity, and it is, but at Crossref it is also pivotal to our culture and our journey toward a remote-first, global, and transparent community-led organization. 
Sometimes it is even fun! We like to run staff activities online and occasionally in person, and we try to live by our commitments to diversity, equity, and inclusion.\nWe are looking for a Human Resources Manager with a collaborative style who will support and evolve the working experience for everyone at Crossref. Reporting to the Director of Finance and Operations, the Human Resources Manager is a central role within the organization.\nThis position helps to recruit, retain, and support our 40+ staff across the world. As a remote-first organization, this position helps to develop and implement practices that provide a consistent, equitable employment experience for our team. The ideal candidate should have experience working with staff in multiple countries.\nIn the day to day, this position helps recruit new staff, administers payroll and benefits, and troubleshoots employment issues for staff.\nWe are a small team and this position is a good fit for someone looking to work across a multinational organization. This position sits on the finance \u0026amp; operations team, but works closely with every part of the organization.\nKey responsibilities Recruiting new staff Support the new hire process, posting the job description, tracking and reviewing job applicants. Ensure a robust, inclusive recruitment process Support the interview process as needed Onboard all new hires Administering the employment experience Administer payroll (biweekly for US staff, monthly for EU, Indonesia, Ireland, Kenya, UK staff) Keep up to date with applicable employment law in countries and regions where we employ staff. As a US legal entity, we comply with US Federal employment laws. We comply with state laws where we have staff (currently 9 states in the US). We comply with regional or country level employment laws in France, Germany, Guernsey, Indonesia, Ireland, Kenya, and the UK. This list changes as staff are hired in additional locations. This position will work with additional contracted HR expertise in the UK, EU, and through the PEOs we work with in Indonesia and Kenya. Select and manage PEO contracts Administer renewal and open enrollment for US employee benefits including health, dental, FSA, HRA and vision; review communication materials; audit employee elections and respond to employee issues Administer pension and 401(k) programs, including non-discrimination testing and 5500 filings for US 401(k) Support annual financial and security audits Helping shape employment policies and supporting a healthy working culture Ensure that the policies on diversity, equity and inclusion are put into action Support the performance evaluation process by consulting with management, tracking information and updating documentation as needed Benchmark positions using compensation software and give advice to managers to ensure consistency in pay practices Maintain employee handbooks and processes identifying areas for enhancement and making recommendations for change Identify and administer required and optional training opportunities Contribute ideas to Crossref leadership to help us progress further on our journey to a global and remote-first organization Lead the charge with our commitment to open and transparent operations, creating and updating content on our website regarding HR practices About you This role is for a hands-on manager who wants to help set direction and is comfortable handling day-to-day administration. 
You might not have all the skills we list below, but we encourage you to apply if this sounds like you:\nHas experience, roughly 5-7+ years, working in an HR role. Is experienced in and knowledgeable about contemporary best practices for diversity, equity, and inclusion Has experience working with remote-first teams across various countries/regions and international payroll and benefit programs Has managed payroll and benefits Experience working in the scholarly communications space is nice to have but not required Some experience managing a budget or providing a perspective on the financial impact of a decision Enjoys helping staff find solutions to problems they may have with their employment experience. For example, this could be helping to navigate issues with benefits, or developing policies that support working from home. Attention to detail and an organized approach Problem-solving ability Excellent communicator Comfortable with digital tools and technology Location \u0026amp; travel requirements This is a remote position. The Finance \u0026amp; Operations team currently has members working on the east coast of the US, primarily in the Boston area. As a remote-first organization, we are not bound to a specific location.\nIn general, Crossref is committed to lowering its environmental impact by reducing unnecessary travel. We also recognize that some people may be unable to travel and the ability to travel is not a requirement for this position. That being said, if you are able to travel, it would be for no more than 5-10 days a year (possibly international).\nWhat it’s like working at Crossref We’re a little more than 40 staff and now ‘remote-first’ although we have optional offices in Oxford, UK, and Boston, USA. We are dedicated to an open and fair research ecosystem, and that’s reflected in our ethos and staff culture. We like to work hard, but we have fun too! We take a creative, iterative approach to our projects and believe that all team members can enrich the culture and performance of our whole organization.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation, and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nAbout Crossref Crossref makes research outputs easy to find, cite, link, and assess. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put scholarly content in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 17,000 members across 146 countries, and thousands of tools and services relying on our metadata.\nTo apply Please send a cover letter and a CV via email by August 19th, 2022 to Lucy Ofiesh via jobs@crossref.org.\nEqual opportunities commitment Crossref is an equal opportunity employer. 
We believe that diversity and inclusion among our staff is critical to our success as a global organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Join Crossref as our Human Resources Manager. It\u0026rsquo;ll be fun. No, really!","About the position","Key responsibilities","Recruiting new staff","Administering the employment experience","Helping shape employment policies and supporting a healthy working culture","About you","Location \u0026amp; travel requirements","What it’s like working at Crossref","About Crossref","To apply","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-06-17-principal-rnd-developer/", "title": "Principal R&D Developer", "subtitle":"", "rank": 1, "lastmod": "2022-06-17", "lastmod_ts": 1655424000, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-07-05. Come work at Crossref as a Principal R\u0026amp;D Developer. It\u0026rsquo;ll be fun! Help us research, prototype, and build new services for our members and the community.\nLocation: Remote. But we are looking for somebody in +/- 2 UTC Time zones (e.g. Brazil, Ireland, UK, Scandinavia, Central Europe, West/Central Africa) Salary: Between 80K-124K EUR (or equivalent) depending on experience and location. Benchmarked every two years.", "content": " Applications for this position closed 2022-07-05. Come work at Crossref as a Principal R\u0026amp;D Developer. It\u0026rsquo;ll be fun! Help us research, prototype, and build new services for our members and the community.\nLocation: Remote. But we are looking for somebody in +/- 2 UTC Time zones (e.g. Brazil, Ireland, UK, Scandinavia, Central Europe, West/Central Africa) Salary: Between 80K-124K EUR (or equivalent) depending on experience and location. Benchmarked every two years. Benefits: Competitive. Reports to: Director of Technology and Research. Closing date: July 5, 2022 About the position We are hiring a Principal R\u0026amp;D Developer to help prototype and develop new web-based tools and services.\nCrossref operates a service that connects thousands of scholarly publishers, millions of research articles, and research content and serves an increasingly diverse set of communities within scholarly publishing, research, and beyond.\nThe Crossref R\u0026amp;D team focuses on the kinds of research projects that have allowed Crossref to make transformational technology changes, launch innovative new services, and engage with entirely new constituencies. 
Some illustrious projects that had their origins in the R\u0026amp;D group include:\nDOI Content Negotiation Similarity Check (originally CrossCheck) ORCID (originally Author DOIs) Crossmark The Open Funder Registry The Crossref REST API Linked Clinical Trials Event Data Grant registration ROR We\u0026rsquo;re looking for a developer who will thrive on taking messy, vague, and often contradictory requirements and working with the community to refine those requirements into a practical implementation plan. This process involves a lot of listening, writing, and prototyping. It also requires a lot of iteration.\nYou will report to the Director of Technology and Research and work on a team that includes the Head of Strategic Initiatives and another Principal R\u0026amp;D developer.\nAbout you There are a lot of skills that we are looking for, but we don\u0026rsquo;t expect to find a purple unicorn. Instead, our primary criterion is that you have a track record of working with communities to deliver innovative projects using a variety of tools, languages, frameworks, and development paradigms. We are looking for someone who:\nIs an expert in one or more programming languages (e.g. Python, Kotlin, Java, Clojure). Wants to learn new skills and work with a variety of technologies. Relishes working with metadata. Has experience delivering web-based applications using agile methodologies. Groks mixed-content model XML. Groks RDF. Groks REST. Understands relational databases (MySQL, Oracle). Is self-directed, a good manager of their own time, with the ability to focus. Enjoys working with a small, geographically dispersed team. Can see a solo project through or collaborate on a larger team. Has deployed and maintained Linux-based systems. Bonus points for: Experience with data science techniques and tools Experience with machine learning, deep learning, natural language processing Experience building tools for online scholarly communication. Experience with a variety of programming language paradigms (OO, Functional, Declarative). Experience contributing to open-source projects that are not their own. Experience with Elasticsearch, Solr, or Lucene. Experience with front-end development (HTML, CSS, React, Angular or similar). Has served on standards bodies. Experience with public speaking or willingness to build this skill. Responsibilities The Principal R\u0026amp;D Developer will report to the Director of Technology \u0026amp; Research. They will be responsible for prototyping and developing new Crossref initiatives and applying new Internet technologies to further Crossref’s mission to make research outputs easy to find, cite, link, assess, and reuse.\nThe post holder will work with the Head of Strategic Initiatives, Head of Development, Head of Infrastructure, and Director of Product to develop and implement new services – taking ideas from concept to prototype and, where appropriate, creating and deploying production services.\nThe Principal R\u0026amp;D Developer may represent Crossref at conferences and in industry activities and projects. They will also play an active role in developing industry and community technical standards and help develop technical guidelines for Crossref members.\nWorking with the Head of Development and the Head of Infrastructure, the Principal R\u0026amp;D Developer will ensure that new services are designed with a robust and sustainable architecture. 
The Principal R\u0026amp;D Developer will actively engage with technical representatives from Crossref’s membership, the library community, scholarly researchers, and broader Internet initiatives.\nAnd please note that this is not a back-office position. On the contrary, we believe that it is vital that the entire technical team develops an understanding of our members, the broader community, and their needs. Without this kind of empathy, we cannot add value to our services. As such, you will also find yourself working closely with the product and outreach teams.\nWhat it\u0026rsquo;s like working at Crossref We\u0026rsquo;re about 40 staff and now \u0026lsquo;remote-first\u0026rsquo; although we have optional offices in Oxford, UK, and Boston, USA. We are dedicated to an open and fair research ecosystem, and that\u0026rsquo;s reflected in our ethos and staff culture. We like to work hard, but we have fun too! We take a creative, iterative approach to our projects and believe that all team members can enrich the culture and performance of our whole organization.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation, and we only continue to grow. While we won\u0026rsquo;t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nLocation \u0026amp; travel requirements This is a remote position. The R\u0026amp;D team currently has members working in the US (Brooklyn), Ireland (Dublin), and France (Nîmes). As a remote-first organization, we are not bound to a specific location. Ideally, for this role, we are looking for someone based in time zones +/- 2 UTC.\nIn general, Crossref is committed to lowering its environmental impact by reducing unnecessary travel.\nWe also recognize that some people may be unable to travel and the ability to travel is not a requirement for this position.\nThat being said, if you are able to travel, it would be for no more than 7-14 days a year (possibly international).\nSalary Between 80K-124K EUR (or equivalent) depending on experience and location. Benchmarked every two years. Excellent benefits.\nTo apply Send a cover letter and a CV via email by July 5th, 2022 to:\nLindsay Russell\njobs@crossref.org\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\n", "headings": ["Come work at Crossref as a Principal R\u0026amp;D Developer. 
It\u0026rsquo;ll be fun!","About the position","About you","Bonus points for:","Responsibilities","What it\u0026rsquo;s like working at Crossref","Location \u0026amp; travel requirements","Salary","To apply","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-03-15-head-of-infrastructure/", "title": "Head of Infrastructure Services", "subtitle":"", "rank": 1, "lastmod": "2022-03-15", "lastmod_ts": 1647302400, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-03-30. ## Come and work with us as **The Head of Infrastructure Services**. Help us build and run the infrastructure that underlies the global scholarly communications ecosystem.\nLocation: Remote. But we are looking for somebody in the UTC, UTC+1 Time zones (e.g., Ireland, UK, Scandinavia, Central Europe, West/Central Africa) Salary: Between 128-174K EUR (or equivalent) depending on experience and location. Benchmarked every two years. Benefits: Competitive.", "content": " Applications for this position closed 2022-03-30. ## Come and work with us as **The Head of Infrastructure Services**. Help us build and run the infrastructure that underlies the global scholarly communications ecosystem.\nLocation: Remote. But we are looking for somebody in the UTC, UTC+1 Time zones (e.g., Ireland, UK, Scandinavia, Central Europe, West/Central Africa) Salary: Between 128-174K EUR (or equivalent) depending on experience and location. Benchmarked every two years. Benefits: Competitive. Reports to: Director of Technology and Research. Closing date: March 30, 2022 About the role Crossref is looking for a Head of Infrastructure to lead our infrastructure services team.\nThis is a senior role, and it\u0026rsquo;s crucial to Crossref\u0026rsquo;s mission and future ability to deliver on our strategy. In addition, it is an opportunity to push an entire organization\u0026rsquo;s infrastructure and way of working with it forward. And all in the service of helping scholarly researchers communicate more openly, efficiently, and effectively.\nWe\u0026rsquo;re looking for a new member of our technology team who can bring leadership experience and help steer us through some interesting operations, development, and cultural challenges. Crossref operates the service that connects thousands of scholarly publishers, millions of research articles, and research content and serves an increasingly diverse set of communities within scholarly publishing, research, and beyond.\nYou will report to the Director of Technology and Research and will lead a group of one developer and one system administrator. You will also work extensively with the software development, R\u0026amp;D, and product teams.\nKey responsibilities The infrastructure services group is primarily responsible for Crossref\u0026rsquo;s infrastructure services. 
That is, central, crosscutting tools and systems that are used by our software development group as the common foundation we use for delivering services to our members and the broader research community.\nIn other words, you will be leading the team that is responsible for building, deploying, and managing tools and services used by other developers.\nYou will be responsible for ensuring that these infrastructure services are reliable and responsive, and for making sure they can evolve quickly to support the new requirements and new services that Crossref is developing on behalf of its membership.\nYour challenge will be to accomplish this whilst simultaneously helping to drive the modernization of our current software stack, infrastructure, and software engineering culture. The entire technology team is migrating from a mostly self-hosted, manually-managed, and manually-tested environment to a cloud-based system and SRE tools and processes.\nThis is both a cultural change and a technological one, so we are looking for someone experienced in helping teams navigate and adapt to new ways of thinking and doing things.\nWe currently use a blend of AWS, Docker, Terraform, self-hosted VMware, Elasticsearch, Kafka, and more. Most of our codebases are written in Java, Clojure, and Python, with growing Kotlin and TypeScript codebases. All the code we write is open source.\nThere are a lot of skills that we are looking for, but we don\u0026rsquo;t expect to find a purple unicorn. Instead, our primary criterion is that you have a track record of leading teams through change and delivering projects using a variety of tools, languages, frameworks, and development paradigms.\nBut you get double bonus points if you have experience with:\nLeading DevOps, system administration, or SRE teams. Transitioning an on-prem data center to the cloud. In-depth knowledge of one or more cloud providers. Immutable infrastructure. Virtualization and containerization of legacy code bases. Configuration management. Security infrastructure. Automation of development. Site monitoring and alerting. Web services software development. And it would be very useful if you had a subset of the following skills:\nContainerisation using ECS/Docker. Core AWS Infrastructure including EC2, VPC, S3, RDS, IAM, Route53, and CloudFront. Infrastructure configuration, management, and orchestration tools (such as Terraform, Kubernetes, CloudFormation, Ansible, Salt, or equivalents). Java. High proficiency in at least one other language (e.g. Python, Clojure). Extensive experience with SQL, particularly PostgreSQL and Oracle. GitLab. Elasticsearch, Solr, Lucene, or similar. Distributed logging and monitoring frameworks. Continuous Integration, continuous delivery frameworks. Modern, HTTP-based API design and implementation. Experience with open source development. Experience with agile development methodologies. Experience with XML, particularly with mixed content models. And please note that this is not a back-office position. On the contrary, we believe that it is vital that the entire technical team develops an understanding of our members, the broader community, and their needs. Without this kind of empathy, we cannot add value to our services. As such, you will also find yourself working closely with the product and outreach teams.\nWhat it\u0026rsquo;s like working at Crossref We\u0026rsquo;re about 40 staff and now \u0026lsquo;remote-first\u0026rsquo; although we have optional offices in Oxford, UK, and Boston, USA. 
We are dedicated to an open and fair research ecosystem, and that\u0026rsquo;s reflected in our ethos and staff culture. We like to work hard, but we have fun too! We take a creative, iterative approach to our projects and believe that all team members can enrich the culture and performance of our whole organization. This means that while this is a senior role, it is also a hands-on role, like all roles at Crossref. Check out the organization chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation, and we only continue to grow. While we won\u0026rsquo;t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nLocation \u0026amp; travel requirements This is a remote position. The technology team currently has members working in the US (Lynnfield, MA, New York City, NY), UK (Oxford, Sheffield), Jersey, Ireland (Dublin), and France (Nîmes). We are looking for somebody on the Eastern side of the Atlantic for this position. Ideally +/- 1-2 hours UTC.\nIn normal, non-pandemic circumstances (assuming they ever return), technology staff should expect they will need to travel 7-14 days a year (possibly international).\nTo apply Send a cover letter and a CV via email to:\nLindsay Russell\njobs@crossref.org\nPlease apply by 30th March 2022.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you!\n", "headings": ["About the role","Key responsibilities","What it\u0026rsquo;s like working at Crossref","Location \u0026amp; travel requirements","To apply","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2022-03-09-technical-community-manager/", "title": "Technical Community Manager", "subtitle":"", "rank": 1, "lastmod": "2022-03-09", "lastmod_ts": 1646784000, "section": "Jobs", "tags": [], "description": "Applications for this position closed March 2022. Join us as our brand new Technical Community Manager to expand the adoption and integration of ROR (Research Organization Registry) throughout the global scholarly communications ecosystem. We are looking for a full-time Technical Community Manager to expand the adoption and integration of ROR throughout the global scholarly communications ecosystem.\nROR is a community-led initiative to develop an open, sustainable, usable, and unique identifier for every research organization in the world.", "content": " Applications for this position closed March 2022. Join us as our brand new Technical Community Manager to expand the adoption and integration of ROR (Research Organization Registry) throughout the global scholarly communications ecosystem. 
We are looking for a full-time Technical Community Manager to expand the adoption and integration of ROR throughout the global scholarly communications ecosystem.\nROR is a community-led initiative to develop an open, sustainable, usable, and unique identifier for every research organization in the world. It is jointly managed by DataCite, Crossref, and California Digital Library. Each of these operating organizations provides input on decisions and strategies that support the growth and sustainability of ROR. Our goal is to address the problem of tracking affiliations in research communications. The Technical Community Manager will be a key driver of that change.\nThe Technical Community Manager will work closely with the small and committed core ROR team, staff from the three operating organizations, and the broader ROR community, to promote and support the adoption of ROR in systems used throughout research and scholarly communications workflows. This includes engaging with new and existing ROR adopters and other community stakeholders to understand their workflows and systems, and to guide their implementations and integrations.\nThis position will be employed by Crossref as a full-time staff member and included in all Crossref staff activities. It is fully remote, so location and hours are flexible but overlap with the US Pacific timezone will be necessary. As pandemic circumstances allow, we expect to resume a small amount of international travel for meetings and events.\nResponsibilities Build strategies to drive adoption and technical implementation Identify opportunities and challenges for adoption and integration of ROR in key research and scholarly communications systems and workflows Develop and implement strategies to encourage and support the adoption and integration of ROR into scholarly communication systems and workflows, including showcasing exemplars and promoting best practices Develop measures/benchmarks to assess and communicate adoption progress both internally and to the ROR community Lead technical community engagement efforts Organize regular meetings, webinars, and other events for new and current integrators Engage with the ROR community to develop and encourage integration best practices Build and maintain tools and documentation that meet the needs of key communities Engage with the ROR community to identify and build consensus around evolving needs Provide first-line support and troubleshooting help to adopters Cultivate and manage relationships with adopters Introduce ROR to potential adopters you have identified, such as specific publishers, funders, repositories, research institutions, and the service providers and developers that offer platforms to those organizations Maintain ongoing communications with adopters through regular check-ins to ensure their integration work is well-supported Consult with adopters to recommend ROR integration approaches for their particular system/use case, collaborating with other ROR team members as needed Communicate feedback about adopter needs to the ROR team Contribute to the development and implementation of overall ROR strategies Based on community needs, identify specifications for improvements and new features Collaborate with ROR team on product development strategy Work with ROR and Crossref teams to develop and implement strategies that support wider adoption Skills and experience Community management experience, particularly in an international environment. 
An understanding that community management needs a mix of interpersonal, technical, program management, program development, and communication skills. The CSCCE skills wheel is a good resource to explore. While we don’t ask for a specific number of years\u0026rsquo; experience, entry-level candidates are unlikely to be successful in the role. Sufficient technical skills to advise adopters on integrations, including experience with making and troubleshooting requests to RESTful APIs and familiarity with XML and JSON data structures (or technical aptitude and a desire to learn!) Deep knowledge of research and scholarly communications systems and workflows, and familiarity with the academic research environment Familiarity with research infrastructure and the open science landscape Familiarity with a not-for-profit environment and the transparency that entails Ability to work remotely with small distributed teams across global time zones Strong, compelling, and clear written, oral, and visual communication Self-motivated to succeed, take initiative, and seek continuous improvement Working at ROR \u0026amp; Crossref As a young start-up initiative, ROR is a dynamic place to work as we are growing quickly and laying the groundwork for long-term sustainability. We are a fun community (we do actually roar sometimes 🦁) but we also take our work seriously! ROR’s three operating organizations work closely together and everyone contributing to ROR balances the needs of ROR with those of their home organization. We also work closely with ROR adopters and community stakeholders through working groups and advisory boards and aim for all of these activities to be open and transparent, in line with the Principles of Open Scholarly Infrastructure (POSI).\nROR has a Project Lead based at CDL and a Metadata Curation Lead contracted with Crossref. Our previous Adoption Lead has moved over to become our new full-time Technical Lead based at DataCite. We’re now reshaping the previous adoption role on the ROR team as this Technical Community Manager. This is a full-time role and you will be employed at Crossref. This means you will need to balance being part of two teams: ROR; and Crossref.\nCrossref is committed to supporting ongoing professional development opportunities and promoting self-learning for its 40+ people. Crossref—and ROR—are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture.”\nThinking of applying? We especially encourage applications from people with backgrounds historically underrepresented in research and scholarly communications.\nThe role will be accountable to the ROR operations team and within Crossref will report to Ginny Hendricks who will review applications along with Project Lead Maria Gould and Technical Lead Liz Krznarich. Candidates who meet the qualifications will be invited to a 30-minute screening call. Those subsequently shortlisted will be invited to a 90-minute online interview which will include an exercise you’ll have a chance to prepare for.\nTo apply, please send a CV and covering letter explaining how your skills match ROR’s goals to jobs@ror.org, by 16th March 2022. 
Interviews will take place in late March/early April.\nEqual opportunities commitment Crossref and ROR are committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref and ROR will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you!\n", "headings": ["Join us as our brand new Technical Community Manager to expand the adoption and integration of ROR (Research Organization Registry) throughout the global scholarly communications ecosystem.","Responsibilities","Skills and experience","Working at ROR \u0026amp; Crossref","Thinking of applying?","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-12-08-head-community-engagement-communications/", "title": "Head of Community Engagement and Communications", "subtitle":"", "rank": 1, "lastmod": "2021-12-08", "lastmod_ts": 1638921600, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-01-20. Join us as our brand new Head of Community Engagement and Communications and help advance open research Crossref is a not-for-profit membership organization that exists to make scholarly communications better by providing metadata and services that make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.", "content": " Applications for this position closed 2022-01-20. Join us as our brand new Head of Community Engagement and Communications and help advance open research Crossref is a not-for-profit membership organization that exists to make scholarly communications better by providing metadata and services that make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nCrossref is at its core a community organization. We’re committed to lowering barriers for global participation in the research enterprise, we’re funded by members and subscribers, and we engage regularly with them in multiple ways from webinars to working groups. The Head of Community Engagement and Communications (CEC) is an exciting and newly-created leadership position, which reflects an evolution of our former Marketing and Communications (‘marcomms’) approach to be more in line with our community-focused mission. We are looking for someone who understands that community-led not-for-profit organizations need a different approach — co-creating, listening, and facilitating conversations — rather than simply broadcasting.\nWith 16,000 members across 146 countries (and counting), this is a role where you’ll be driving communications strategy and building relationships from day one. 
You’ll combine big picture thinking about our engagement strategy with attention to detail in coordinating community programming, content creation, and communications. And you’ll ensure that we continue to work responsively in a manner that supports diverse, global participation.\nAs scientific community engagement is an emerging profession, practical experience in this area is more important to us than traditional qualifications. The successful person may in fact have a technical background in scholarly communications but can show strong community management and communications skills. We prefer candidates who can show some familiarity or affinity with the work of the CSCCE and knowledge of the dynamics of scholarly communications.\nKey responsibilities This is a senior role and it\u0026rsquo;s crucial to Crossref\u0026rsquo;s mission and future ability to deliver on our strategy. It provides the exciting opportunity to:\nLead a team of two community engagement managers and one communications and events manager, and guide their work with publishers, funders, infrastructure organizations, and research institutions. Work with internal and external data sources and partners to build out our country-level community engagement strategy and extend activities that lower barriers to participation in Crossref, measuring successful outcomes such as new members joining (new constituencies, new countries). Oversee our Sponsors program, which is the primary route to membership for emerging countries, and our Ambassadors program. You’ll be building relationships with key Sponsors and Ambassadors around the world, and ensuring we are working optimally together for our members’ benefit. Create and oversee product communications plans including messaging, roll-out schedules, and adoption campaigns, keeping in regular touch with product/outreach colleagues to plan for key developments and articulate how they will be of use and to which sectors of our community. Be or become an advocate for community engagement as a core set of skills in the work of open scholarly infrastructure organizations. Facilitate and participate in committees and advisory groups led by other initiatives, and bring insights back to share with colleagues to inform and adapt our own priorities and organizational strategy. Refresh our content strategy using your knowledge of the changing dynamics in research communications, and an understanding of our recent repositioning to work toward the Principles of Open Scholarly Infrastructure. You’ll lead the coordination of content for all our activities - including identifying opportunities to create together with the community and to curate and archive essential resources so that they remain discoverable and reusable long after their initial publication. Establish a program management approach to all interactions such as advisory groups, webinars, newsletters, social, blog, website, documentation, discussion forums, and conference participation, including developing our multilingual and multi-time-zone programming. Oversee the calendar of activities and plan ahead for key events such as our annual board election, annual report, conferences, and other community meetings. Data-driven, you’ll interrogate analytics (website, email, CRM, etc.) to understand and optimize the reach and effectiveness of engagement programs. Develop a library of creative resources and refine and add materials such as animations, diagrams, and slide libraries. 
Package them for different purposes such as onboarding, or different constituencies such as research funders, or different use cases such as API querying. Involve the community in co-creating such resources where possible, being mindful of serving an international audience. Some international travel will likely be appropriate when it’s safe to do so, for example to in-person meetings with colleagues, members, and sponsors. We’re looking for the right candidate—who can be based anywhere—while being aware that many of our engagement activities include the Asia Pacific timezones.\nYour skills and experience Interpersonal Ability to build trusted relationships with colleagues and community members\u0026mdash;even remotely\u0026mdash;including brokering outcomes where everyone gets what they need A truly global and inclusive perspective; we have 16,000 member organizations from 146 countries across all time zones and hundreds of languages A desire to bring people together, listening for emergent needs, and responsively providing training and other solutions A coaching style of leadership that is empathetic and unconcerned with hierarchy Technical Technical resourcefulness to get hands-on with our websites (built on Hugo, using markdown and Git, with Matomo for analytics) and our systems, managing our email and content platforms and their integrations A good understanding of APIs and metadata, unfazed by technical jargon or digging into metadata for reporting A strong creative edge and an appreciation for minimalist design with some experience of media production Program management Strategic thinking while loving the detail; ability to get from an ambiguous idea to achievable chunks of work Equally comfortable facilitating and chairing meetings and working groups as well as coaching other staff in leading their own Hands-on with event management and hosting from logistics to reporting back and follow-up Program development Demonstrable experience managing communities, including developing and guiding programming activities Formalizing collaborations as long-term partnerships including drafting MOUs, and overseeing relationships with and management of our Sponsors Knowledge of current priority topics within scholarly communications and publishing Understanding of mission-driven or community-led organizations and their sustainability and governance challenges Communication Concise writing and editing skills Experience with content planning, creation, and curation including archiving Experience with value message creation and product roll-out/adoption plans Always on the look-out for opportunities to present at external events, ability to create consistent stories that align with our strategy, and being comfortable speaking and presenting yourself Demonstrated experience with digital communications strategy including social media and community forums Languages other than English would be a plus What it’s like working at Crossref We’re about 40 staff and now ‘remote-first’ although we have optional offices in Oxford, UK, and Boston, USA. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organization. This means that while this is a senior role, like all roles at Crossref, it is also a hands-on one. 
Check out the organization chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThinking of applying? We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nTo apply, please send a CV along with a covering letter to Ginny Hendricks at jobs@crossref.org.\nPlease strive to get applications in by the first week of January 2022. [EDIT 13th Jan: We are now keeping the role open until Thursday 20th January]\nThe role will report to Ginny, Director of Member \u0026amp; Community Outreach at Crossref, and she will review all applications along with Lindsay Russell, our HR Manager, and Lou Woodley, Executive Director of CSCCE. Candidates who have been shortlisted will be invited to a first interview which will include some exercises you’ll have a chance to prepare for. And then there will be follow-up meetings for the final candidates to meet the team - Rosa, Susan, and Vanessa. We aim to make an offer before the end of January.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you!\n", "headings": ["Join us as our brand new Head of Community Engagement and Communications and help advance open research","Key responsibilities","Your skills and experience","Interpersonal","Technical","Program management","Program development","Communication","What it’s like working at Crossref","Thinking of applying?","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-12-03-member-support-specialist/", "title": "Member Support Specialist", "subtitle":"", "rank": 1, "lastmod": "2021-12-03", "lastmod_ts": 1638489600, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2022-March-01. Come and work with us as one of our Member Support Specialists. It’ll be fun! ​​Do you want to help make scholarly communications better? Come and join the world of open scholarly infrastructure and be part of improving the creation of and access to knowledge for all. It’s a serious job but we don’t take ourselves too seriously.\nThis role serves our global membership in all countries but will be home-based in Indonesia or nearby", "content": " Applications for this position closed 2022-March-01. Come and work with us as one of our Member Support Specialists. It’ll be fun! ​​Do you want to help make scholarly communications better? Come and join the world of open scholarly infrastructure and be part of improving the creation of and access to knowledge for all. 
It’s a serious job but we don’t take ourselves too seriously.\nThis role serves our global membership in all countries but will be home-based in Indonesia or nearby\nAbout Crossref We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nCrossref sits at the heart of the global exchange of research information, and our job is to make it possible—and easier—to find, cite, link, assess, and reuse research, from journals and books to preprints, data, and grants. Through partnerships and collaborations we engage with members in 146 countries (and counting) and it’s very important to us to make sure that everyone who wants to participate, can.\nWe are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. Read more about our strategy.\nWe like to work hard but we have fun too!\nAbout the role Member Support Specialist is a pivotal role in the Member Experience team. The \u0026ldquo;MemX team\u0026rdquo;, for short, is part of Crossref’s Outreach team, which is at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways and enhancing Crossref\u0026rsquo;s understanding of trends in scholarly communications. The Outreach team is currently 14 people dispersed across North America, East Africa, and Western Europe. Check out our organisation chart.\nThe Member Support Specialist is a full time global role reporting to Amanda Bartell (Head of Member Experience). The work is a mix of involved consultations with applicants together with detailed systems and administrative work. You’ll need to have an understanding of the academic and scientific communications process, great attention to detail, the ability to ask probing questions of applicants, and a logical, systematic approach.\nYou\u0026rsquo;ll be working primarily with new journal publishers to set them up as members, or to determine if they have other needs. Along with another Member Support Specialist (plus two contractors), you\u0026rsquo;ll take these publishers through our application process, setting them up carefully in our CRM and other systems, paying extremely close attention to data quality. Once they’re members, you’ll continue to work with them closely - answering their questions via email, social media, and our community forum. You’ll help them take on new Crossref services, navigate platform migrations, and understand how to set up service providers to work with us on their behalf. It’s a very diverse role and is a great opportunity to get wide-ranging experience within Crossref and the global open scholarly infrastructure and communications community.\nKey responsibilities Work with new applicants to understand their internal structures and help them understand the various membership options available to them. Own and drive the administrative process for new applicants - ensuring we have all the information we need to help them get started and setting them up accurately in our central systems. Broker conversations between members, sponsors, platforms, and service providers to ensure the member is able to fulfill their aims while still meeting Crossref membership obligations. Manage queries from applicants and members via email, social media, our community forum and other channels. 
Ensure that the data in our CRM system is kept clean and up-to-date. Work closely with the support and finance teams to solve problems and ensure a smooth experience for members. Location Indonesia or surrounding region\nWe’re about 40 staff and now ‘remote-first’, although we have optional offices in Oxford (UK) and Boston (USA).\nWhile we seek someone in Indonesia, they will need to be able to liaise with colleagues in Western Europe, East Africa, and North America. As Indonesia has emerged as the largest Open Access research-producing country in the world (which is reflected in Crossref’s huge membership there) there will be occasional opportunities to meet and engage with our members at conferences and other events in the region.\nAbout you We’re looking for a motivated person who will take initiative, highlight things that seem inefficient, and be able to dig into things with our diverse membership to really get to the bottom of their needs.\nYou\u0026rsquo;ll need to follow processes precisely and maintain accuracy while at the same time being comfortable with ambiguity; our community and environment is changing rapidly and we won’t always have a clear answer for everything. It keeps things interesting!\nIn addition\u0026hellip;\nAble to balance a very busy role while still paying close attention to detail and keeping member experience at the forefront. Experience in helping customers and solving problems in creative and unique ways. Strong written and verbal communication skills with the ability to communicate clearly - able to use open questions to get to the bottom of things when members may not seem to make sense. A truly global perspective - we have 16,000 member organizations from 146 countries across numerous time zones. Be comfortable taking the initiative to lead conversations with people at all levels. Quick learner of new systems and processes and can rapidly pick up new techniques. Extremely organized and attentive to detail. Experience with Zendesk or similar support system is ideal, as is familiarity with CRM systems such as Sugar. Familiar with the scholarly publishing process, with a bonus being some knowledge of XML and metadata. Thinking of applying? Even if you don’t think you have all the specific experience, we’re looking for someone with the right approach who is keen to jump in and learn. Practical experience is more important to us than traditional qualifications.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nWe especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nTo apply, please send your cover letter and resume to Lindsay Russell at jobs@crossref.org.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. 
Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\n", "headings": ["Come and work with us as one of our Member Support Specialists. It’ll be fun!","About Crossref","About the role","Key responsibilities","Location","About you","Thinking of applying?","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-04-30-contract-member-support/", "title": "Member Support Contractor", "subtitle":"", "rank": 1, "lastmod": "2021-04-30", "lastmod_ts": 1619740800, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Request for services: Member Support Contractor Location: Remote\nWe’re looking for contractors able to work remotely and help us to welcome new members from around the world. There is no set schedule and contractors would bill their hours monthly.\nCrossref receives over 220 new applications every month from organisations who produce scholarly and professional materials and content.\nKey responsibilities Manage queries from applicants and members via our Zendesk support desk and potentially other channels.", "content": " Applications for this position are closed. Request for services: Member Support Contractor Location: Remote\nWe’re looking for contractors able to work remotely and help us to welcome new members from around the world. There is no set schedule and contractors would bill their hours monthly.\nCrossref receives over 220 new applications every month from organisations who produce scholarly and professional materials and content.\nKey responsibilities Manage queries from applicants and members via our Zendesk support desk and potentially other channels. Follow the administrative process for new applicants, such as: Check the details in application forms that come via our website. Set them up in our Customer Relationship Management (CRM) System (CRM). We use SugarCRM. Send them an invoice for the first year of membership, and once this is paid\u0026hellip; Set up and share their DOI prefix and account credentials. Ensure that the information in our CRM is kept clean and up-to-date. Work closely with the Member Experience team and our finance colleagues. About you Organized with an eye for details Happy with data entry and maintenance Comfortable following processes and taking on new systems Friendly and clear communication skills (in English) About Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. 
It’s as simple—and as complicated—as that.\nSince 2000 we have grown from strength to strength and now have over 15,000 members across 140 countries, and thousands of tools and services relying on our metadata.\nTo apply Please send a cover letter and your CV to Lindsay Russell at jobs@crossref.org.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Request for services: Member Support Contractor","Key responsibilities","About you","About Crossref","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/display-guidelines/", "title": "Display guidelines", "subtitle":"", "rank": 4, "lastmod": "2021-04-21", "lastmod_ts": 1618963200, "section": "Display guidelines", "tags": [], "description": "1 September 2022 These 2017 guidelines are not changing but we’ve added a recommendation to improve accessibility for Crossref links on landing pages. Please see our recent call for comments for more information. This page will be updated when the recommendation has been finalized.\nDisplay guidelines for Crossref DOIs - effective from March 2017 Cite as “Crossref Display Guidelines (March 2017)\u0026quot;, retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy\nIt\u0026rsquo;s really important for consistency and usability that all members follow these guidelines.", "content": "1 September 2022 These 2017 guidelines are not changing but we’ve added a recommendation to improve accessibility for Crossref links on landing pages. Please see our recent call for comments for more information. This page will be updated when the recommendation has been finalized.\nDisplay guidelines for Crossref DOIs - effective from March 2017 Cite as “Crossref Display Guidelines (March 2017)\u0026quot;, retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy\nIt\u0026rsquo;s really important for consistency and usability that all members follow these guidelines. We rarely have to change them and usually only do so for very good reasons. Please note that this is for display of Crossref DOIs, not anyone else\u0026rsquo;s persistent links, as, for example, not all DOIs are made equal.\nThe goals of the guidelines are to:\nMake it as easy as possible for users without technical knowledge to cut and paste or click to share Crossref DOIs (for example, using right-click to copy a URL). Get users to recognize a Crossref link as both a persistent link and a persistent identifier, even if they don\u0026rsquo;t know what a Crossref DOI is. Enable points 1 and 2 above by having all Crossref members display DOIs in a consistent way. Enable robots and crawlers to recognize Crossref DOIs as URLs. When linking to a research work, use its Crossref DOI link rather than its URL. If the URL changes, the publisher will update the metadata in Crossref with the new URL, so that the link will always take you to the correct location of the work.\nHow to display a Crossref link When displaying DOIs, it’s important to follow these display guidelines. 
Crossref DOIs should:\nalways be displayed as a full URL link in the form https://0-doi-org.libus.csd.mu.edu/10.xxxx/xxxxx not be preceded by doi: or DOI: not use dx in the domain name part of DOI links and we recommend HTTPS (rather than HTTP). Here is an example of canonical DOI display:\n[image: canonical DOI display] Changes to guidelines in March 2017 These guidelines introduce two important changes that differ from the previous guidelines:\nwe have dropped the dx from the domain name portion of Crossref links we recommend you use the secure HTTPS rather than HTTP Note this change is backwards compatible, so DOIs such as http://0-dx-doi-org.libus.csd.mu.edu/ and http://0-doi-org.libus.csd.mu.edu/ which conform to older guidelines will continue to work indefinitely.\nWhere to apply the display guidelines Crossref links should be displayed as the full URL link wherever the bibliographic information about the content is displayed.\nAn obligation of membership is that Crossref persistent links must be displayed on members’ landing pages. We recommend that Crossref links also be displayed or distributed in the following contexts:\nTables of contents Abstracts Full-text HTML and PDF articles, and other scholarly documents Citation downloads to reference management systems Metadata feeds to third parties \u0026ldquo;How to Cite This\u0026rdquo; instructions on content pages Social network links Anywhere users are directed to a permanent, stable, or persistent link to the content. Crossref members should not use proprietary, internal, or other non-Crossref links in citation downloads, metadata feeds to third parties, or in instructions to researchers on how to cite a document. The membership terms stipulate that Crossref persistent identifier links must be the default.\nCrossref links in reference lists and bibliographies Linking references in journal articles using Crossref DOIs is a condition of membership. This means including the link for each item in your reference list. We strongly encourage members to link references for other record types too. Because there are space constraints even in online reference lists, Crossref DOIs can be displayed in several ways, depending on the publisher’s preference and publication style. We recommend the following options:\nuse the Crossref DOI URL as the permanent link. Example: Soleimani N, Mohabati Mobarez A, Farhangi B. Cloning, expression and purification flagellar sheath adhesion of Helicobacter pylori in Escherichia coli host as a vaccination target. Clin Exp Vaccine Res. 2016 Jan;5(1):19-25. https://0-doi-org.libus.csd.mu.edu/10.7774/cevr.2016.5.1.19 display the text Crossref with a permanent DOI link behind the text. Example: Galli, S.J., and M. Tsai. 2010. Mast cells in allergy and infection: versatile effector and regulatory cells in innate and adaptive immunity. Eur. J. Immunol. 40:1843–1851. Crossref. Learn more about how to link your references.\nShortDOI The DOI Foundation created the ShortDOI service as an open system that creates shortcuts to DOIs. DOIs can be long, so this service aimed to do the same thing as URL shortening services. ShortDOIs are not widely used and are not really actual DOIs themselves, which is confusing. We recommend simply creating shorter DOIs in the first place. Learn more about constructing your DOIs.\nFrequently Asked Questions (FAQs) about the March 2017 changes Can we make the display changes now or do we need to wait? These guidelines are now in effect. 
We set the date as March 2017, after giving our members six months\u0026rsquo; notice to make the changes.\nWhy does a Crossref DOI have to be displayed as a link on the page that it links to? Some members have reported resistance from colleagues to displaying the Crossref DOI on the landing page as a link (they say the link in that location appears superfluous as it appears to link to itself). However, the Crossref DOI must be displayed as a link, because it is both an identifier and a persistent link. It is also part of the membership terms agreed to when members join Crossref. It is easier for users when members display the DOI as a full link because they can copy it easily. Also, many users don’t know what a DOI is, but they know what a link is. We want to encourage the DOI to be used as a persistent link, and to be shared and used in other applications (such as reference management tools). A fully linked DOI enables this, wherever it appears.\nDo we need to redeposit our metadata to update the DOI display? No - there is no need to redeposit metadata. These guidelines cover how you display DOIs on your website, not how to register them with us.\nWhy not use doi: or DOI:? When Crossref was founded in 2000, we recommended that DOIs be displayed in the format doi:10.NNNN/doisuffix and many members still use doi:[space][doinumber], DOI: [space][doinumber], or DOI[space][doinumber]. At the time that the DOI system was launched in the late 1990s, it was thought that doi: would become native to browsers and automatically resolve DOIs, like http:. This did not happen, and so doi:/DOI: is not a valid way of displaying or linking Crossref DOIs.\nAdvantages to changing the display to a resolvable URL (even on the page the DOI itself resolves to) include:\nA Crossref DOI is both a link and an identifier. Users will more easily recognize them as an actionable link, regardless of whether they know about the infrastructure behind it. Users who do not know how to right-click on the link and choose Copy link will still be able to easily copy the DOI URL. Machines and programs (such as bots) will recognize the Crossref DOI as a link, thereby increasing discoverability and usage. Why not use dx as in http://0-dx-doi-org.libus.csd.mu.edu/? Originally the dx separated the DOI resolver from the International DOI Foundation (IDF) website, but this changed a few years ago and the IDF recommends http://0-doi-org.libus.csd.mu.edu as the preferred form for the domain name in DOI URLs.\nWhy should we use HTTPS? Providing the central linking infrastructure for scholarly publishing is something we take seriously. Because we form the connections between publisher content all over the web, it’s important that we do our bit to enable secure browsing from start to finish. In addition, HTTPS is now a ranking signal for Google, which gives sites using HTTPS a small ranking boost.\nThe process of enabling HTTPS on publisher sites will be a long one, and given the number of members we have, it may take a while before everyone’s made the transition. But by using HTTPS we are future-proofing scholarly linking on the web.\nSome years ago we started the process of making our new services available exclusively over HTTPS. The Crossref API is HTTPS enabled, and Crossmark and our Assets CDN use HTTPS exclusively. In 2015 we collaborated with Wikipedia to make all of their DOI links HTTPS. 
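To make the display rules above concrete, here is a minimal Python sketch (an illustration only, not part of the Crossref guidelines; the function name and regular expressions are hypothetical) that normalizes the legacy forms discussed in this FAQ (a doi: prefix, the deprecated dx host, a plain http:// link, or a bare DOI string) into the recommended full HTTPS display link on the canonical doi.org host:

```python
import re

# Hypothetical helper, not from the guidelines themselves: normalize however a
# DOI was captured (doi: prefix, legacy dx host, bare 10.xxxx/... string) into
# the recommended display form: a full HTTPS link on the doi.org resolver host.
def doi_display_form(doi: str) -> str:
    s = doi.strip()
    # Drop a leading "doi:" or "DOI:" label; the guidelines say not to display one.
    s = re.sub(r"^doi:\s*", "", s, flags=re.IGNORECASE)
    # Drop any existing scheme and resolver host, including the deprecated dx. form.
    s = re.sub(r"^https?://(dx\.)?doi\.org/", "", s, flags=re.IGNORECASE)
    if not s.startswith("10."):
        raise ValueError(f"does not look like a DOI: {doi!r}")
    # Always display the DOI as a full, secure URL link.
    return "https://doi.org/" + s

# All of these legacy forms normalize to https://doi.org/10.7774/cevr.2016.5.1.19
for legacy in ("doi:10.7774/cevr.2016.5.1.19",
               "http://dx.doi.org/10.7774/cevr.2016.5.1.19",
               "10.7774/cevr.2016.5.1.19"):
    print(doi_display_form(legacy))
```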
We hope that we’ll start to see more of the scholarly publishing industry doing the same.\n", "headings": ["Display guidelines for Crossref DOIs - effective from March 2017 ","Cite as","How to display a Crossref link ","Changes to guidelines in March 2017 ","Where to apply the display guidelines ","Crossref links in reference lists and bibliographies ","ShortDOI ","Frequently Asked Questions (FAQs) about the March 2017 changes ","Can we make the display changes now or do we need to wait? ","Why does a Crossref DOI have to be displayed as a link on the page that it links to? ","Do we need to redeposit our metadata to update the DOI display? ","Why not use doi: or DOI:? ","Why not use dx as in http://0-dx-doi-org.libus.csd.mu.edu/? ","Why should we use HTTPS? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/editors/", "title": "For editors", "subtitle":"", "rank": 5, "lastmod": "2017-01-18", "lastmod_ts": 1484697600, "section": "Get involved", "tags": [], "description": "Your decisions influence what research is communicated and how. Demonstrate your editorial integrity with tools that help you assess a paper’s originality, and properly label and connect updates, corrections, and retractions.\nGet discovered Using our services provides a way for editors to maximize the discoverability of the content they publish.\nRegistering DOIs and metadata with us means that content can be found and used alongside that of our members. We also ask our members to link their references using the DOI as this makes sure that content will be linked persistently to the work it cites, so that links to your publications won’t deprecate over time and readers can find them long-term.", "content": "Your decisions influence what research is communicated and how. Demonstrate your editorial integrity with tools that help you assess a paper’s originality, and properly label and connect updates, corrections, and retractions.\nGet discovered Using our services provides a way for editors to maximize the discoverability of the content they publish.\nRegistering DOIs and metadata with us means that content can be found and used alongside that of our members. We also ask our members to link their references using the DOI as this makes sure that content will be linked persistently to the work it cites, so that links to your publications won’t deprecate over time and readers can find them long-term. You can encourage your authors to add DOIs to the reference lists of the papers they submit to you, and we have tools to help with that.\nOther things that will help your content get discovered include collecting information on who funded the research behind the content you publish and the ORCID iDs of your authors, so that it can be found by readers searching on those criteria too. If you collect that information, make sure it’s being deposited with Crossref so that it can be used. Not sure if that\u0026rsquo;s the case? Let us know.\nIntegrity and ethics As an editor, we know you’re on the front-line to ensure the quality and integrity of what you publish. We provide the Similarity Check service which gives members a tool to help editors and publishers ensure that work is original (and references other work properly) by checking it against a growing database of academic publications and general web content. Questions about Similarity Check? Get in touch.\nEven with thorough processes in place, changes may happen to a work after it has been published which may affect how it should be interpreted or credited. 
Perhaps some supplementary information has been added or the work needs to be corrected or retracted. It’s important that your readers know these changes have happened to research they want to read or cite. Crossmark provides a way to communicate this information in a standard way across publishers so that researchers don’t miss it (even on a PDF), and also gives you a way to showcase additional publication information e.g. funding, license and peer review details.\nWant to find out more? Check out our webinars, blog or come and speak to us in person at an event - we’re eager to hear from you!\n", "headings": ["Get discovered","Integrity and ethics"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-09-07-director-programs-services/", "title": "Director of Programs & Services", "subtitle":"", "rank": 1, "lastmod": "2024-09-07", "lastmod_ts": 1725667200, "section": "Jobs", "tags": [], "description": "Develop services and initiatives that help progress open science worldwide.", "content": " Applications for this position closed October 4, 2024. Do you want to drive the development of services and initiatives that help progress open science worldwide? Come and join the world of open scholarly infrastructure and metadata as our new Director of Programs \u0026amp; Services.\nLocation: Remote and global Type: Full-time Remuneration: 135K USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer. Reports to: Chief Program Officer, Ginny Hendricks Timeline: Advertise and recruit September-October, offer in November About the role We have created a new role for a Director of Programs and Services to be a key member of the senior management team at Crossref to help deliver on our strategic agenda. The role is responsible for planning and driving the success of the tools and services that make up our primary programs such as metadata discovery, metadata sources, community integrations, and ensuring the integrity of the scholarly record—all programs that help us help our members meet the vision of an open and connected research nexus and enable a better open research ecosystem.\nThe successful candidate will be critical to Crossref’s transformation away from a more traditional software-focused structure and culture, and towards a more appropriate non-profit and community-guided structure, leading a global and collaborative approach to prioritizing and meeting our mission. While the work is similar to traditional product work (and we assume some candidates may have a background in that area) this role is newly envisioned in the context of the vast scale and growth in our membership and users and the need to bring them more closely into the process. Whatever your background, if the CSCCE Community Participation Model1 looks like something you have done or could do, we want to hear from you. Success in this role requires an intrinsically open approach, experience in enabling co-creation at scale, the ability to listen to and act on evolving member needs, and the practical capabilities to bring cross-organisation teams together to plan and establish clear processes to ensure a clear and measurable path to implementation and delivery of real-world impact for our community members.\nCSCCE Participation Model\nCenter for Scientific Collaboration and Community Engagement. (2020) The CSCCE Community Participation Model – A framework for member engagement and information flow in STEM communities. 
Woodley and Pratt.\nKey responsibilities Leadership and Management: Bring structure to our cross-organisational planning, working with the Operations group and members of the senior management team to translate our vision into concrete projects and deliverables. Develop a new program-wide approach and manage a team of seven to extend beyond traditional product management, promote the value and role of program design and project management. Be responsible for what Crossref can commit to; estimate, resource, and organize work across existing and new programs and projects and ensure everyone who needs to be is involved. Help enhance our culture, which is based on remote working and open communication. Create an open decision-making culture for program development at Crossref. Work with and establish strong links with all teams and all levels of the organisation. Program Strategy: Develop and deliver on our new vision for evolving product management, designing program and project approaches from design, management, delivery, and measurement. Develop and guide the team to optimally manage the key programs, which include: The integrity of the scholarly record (services and tools like Crossmark and Retraction) Metadata retrieval and discovery (REST API and developing our Search tool) Metadata sources (member metadata, improving registration forms, as well as incorporating new or partner data sources) Metadata development (activities like input and output schema evolution, implementing mapping and matching projects such as for funders and affiliations) Modern operations (membership automation, infrastructure optimization, cloud migration, as well as fee and resourcing projects) Integrations and interfaces (participation reports, main admin interface, as well as integrations with key platform partners like Open Journal Systems) Centre metadata and the community in everything we do. Oversee and support the metadata development process from consultation and design through to delivery and developing best practices. In support of the strategic agenda, conduct program design for any new programs, plan resources and milestones along the way, and determine and report on the expected outcomes. Contribute to the design and management of a new Research Nexus fund for community tools, data sources, and new initiatives. Work with the Operations group to ensure that all development work delivers value to our members. Build and maintain the roadmap, ensuring all areas of the organisation are involved in prioritization and plans and actively share these with our community stakeholders. Be responsible for the team in scoping and planning new feature development, running pilots and beta test phases, and facilitating internal project teams and external Working Groups to deliver on time. Community Focus: Work closely with the Community and Membership Directors to set development priorities based on user needs. Research community needs and engage people through Advisory Groups for each key area of our service as well as Working Groups for new initiatives. Develop introduction plans to roll out new features through open community consultation and co-creation. Be a visible part of Crossref in the community e.g. speaking at events, being directly accessible to members and setting expectations with the community, engaging on social media and the community forum, and blogging about our services and plans. Represent Crossref on others’ working groups and advising other community groups on strategic initiatives. 
About you We are looking for a proactive, communicative, analytical, and highly organized person to help take our product function to the next level of community co-creation and program management. The successful candidate will likely possess the following attributes and experience:\nCommunity-minded with a background in non-profit, social impact, open data, and/or open-source software. Driven by seeing real outcomes and impact for community members. Strong program design and implementation skills. Systems thinking and experience bringing large and dispersed groups together asynchronously. Willingness to adapt and be flexible based on new insights or data. A highly communicative and transparent way of working and sharing information. Curiosity and tendency to listen (you will never have all the answers). Strong written and public speaking skills. Experience engaging users and partners in product development processes. Experience with product methodologies and best practices in open-source software development. Adept at planning and launching features and services in an open and transparent way. Tech-savvy, comfortable with API use, metadata formats, and databases. Analytical, highly organized, and process-focused. Demonstrated experience improving operational processes and systems. Love of data, keen to track usage and participation trends to inform decisions and measure success. Experience working globally across time zones and with diverse groups stakeholders and cultures. Experience managing budgets, external consultants, and oversight of project management. About Crossref and the team We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide— which we take a bit more seriously than ourselves. 
We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nWe can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart and view our Annual Reports and financial information here.\nHow to apply To apply, please submit a CV and cover letter, detailing how you fulfil the role description and personal specification to Perrett Laver’s application page quoting reference 7556. The deadline for applications is Friday, October 4, 2024.\nThis is a remote position and the successful candidate can be based most anywhere as long as they are prepared to adapt their hours to European and East Coast US time zones. A moderate amount of travel is expected.\nAnticipated salary for this role is approximately $135,000 USD, or equivalent amount paid in local currency. Crossref offers competitive compensation, a rich benefits package, flexible work arrangements, professional development opportunities, and a supportive work environment. As a non-profit organisation, we prioritize mission over profit.\nThe selection committee will together review all candidates’ applications and agree on a longlist for the role. Longlisted candidates will be invited to discuss the position with Perrett Laver in greater detail. The selection committee will subsequently meet to decide upon a final shortlist to be invited to the formal interview stage.\nProtecting your personal data is of the utmost importance to Perrett Laver and we take this responsibility very seriously. Any information obtained by our trading divisions is held and processed in accordance with the relevant data protection legislation. The data you provide us with is securely stored on our computerized database and transferred to our clients for the purposes of presenting you as a candidate and/or considering your suitability for a role you have registered interest in.\nPerrett Laver is a Data Controller and a Data Processor, and our legal basis for processing your personal data is ‘Legitimate Interests’. You have the right to object to us processing your data in this way. For more information about this, your rights, and our approach to Data Protection and Privacy, please see our Privacy Statement.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","Leadership and Management:","Program Strategy:","Community Focus:","About you","About Crossref and the team","How to apply","Equal opportunities commitment","Thanks for your interest in joining Crossref. 
We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/faq/", "title": "FAQs", "subtitle":"", "rank": 4, "lastmod": "2024-04-17", "lastmod_ts": 1713312000, "section": "FAQs", "tags": [], "description": "Here are answers to some general questions. If you don\u0026rsquo;t find answers here, please review our detailed documentation, or contact us for help.\nGeneral help Am I eligible for membership? How do I get a DOI for my paper? When (and how) do I pay for my DOIs? How do I find the DOI for a particular article? How do I handle title transfers? I\u0026rsquo;ve found a problem with your metadata - how do I get you to fix it?", "content": "Here are answers to some general questions. If you don\u0026rsquo;t find answers here, please review our detailed documentation, or contact us for help.\nGeneral help Am I eligible for membership? How do I get a DOI for my paper? When (and how) do I pay for my DOIs? How do I find the DOI for a particular article? How do I handle title transfers? I\u0026rsquo;ve found a problem with your metadata - how do I get you to fix it? How do I update my contact information? Content Registration How do I register my content? What is my DOI suffix / is my suffix OK? What types of content can I register? I messed up - how do I correct my metadata record? What does this error message mean? I registered a DOI but it is not working - what do I do? Updating and maintaining metadata records My content has moved, how do I update my resource resolution URLs? How do I tell you about my title change? Do I need to create a new DOI if I’ve missed something or provided incorrect metadata? Can I delete records and/or DOIs? Reports How do I access reports? I hate reports, can you stop emailing them to me? What does this email mean? I have questions about my DOI error report, resolution report, conflict report, Schematron report, or depositor report Crossref services I have questions about Crossmark I have questions about Similarity Check I have questions about Cited-by Multiple resolution What is multiple resolution? How do I update my multiple resolution URLs? I want my links to go to one place - how do I turn off multiple resolution? How does multiple resolution affect my resolution statistics? What if I want to do multiple resolution but sometimes want to direct people to a single URL? What if I want to use different URLs based on where the user is coming from - do you support country codes? Co-access What is Co-access? What problem does Co-access solve? Who is Co-access for and how does it work? How much does Co-access cost? How do I participate in Co-access? What is the difference between Multiple Resolution and Co-access? Can Co-access be used for journal content DOIs too? Doesn’t Co-access violate the “uniqueness” rule? What about citation splitting? How does Co-access affect resolution reports? How are Co-access relationships represented in Crossref metadata? How are Co-Access groups defined What if a deposit is made by a party not included in a Co-access agreement? I’m a book publisher, how do I know which aggregators have registered me for Co-access? How are Co-Access groups defined? Can I opt-out a single title in a Co-access group? Is Co-access a long term solution? General help Am I eligible for membership? If you publish scholarly content online or represent organizations who publish, you are eligible to become a member. You also must be able to commit to our member terms. How do I get a DOI for my paper? 
We don\u0026rsquo;t supply DOIs ad-hoc. If the publisher of your paper is a member, they\u0026rsquo;ll register your article on your behalf. When (and how) do I pay for my DOIs? Your Content Registration fees will be invoiced quarterly. Invoices can be paid through our payment portal. In addition to credit card payments we also accept wires and checks. Questions about logging in or billing in general can be emailed to our finance team. How do I find a DOI for a particular article? To look up a single DOI use our Metadata Search interface. If you want to look up metadata records or DOIs in volume, read more about metadata retrieval. How do I handle title transfers? If you\u0026rsquo;ve acquired a title from another member, you need to let us know about the transfer and provide confirmation from the disposing publisher. We\u0026rsquo;ll accept transfers posted to the Enhanced Transfer Alerting Service (ETAS). If you don\u0026rsquo;t participate in Transfer, your confirmation may be a forwarded email from the disposing publisher to the acquiring publisher acknowledging the transfer. See our title and record ownership transfer documentation for more details.\nI\u0026rsquo;ve found a problem within your metadata - how do I get you to fix it? While we aren\u0026rsquo;t able to correct the metadata provided by our members, report any metadata issues to our support staff and we\u0026rsquo;ll contact the responsible member and ask them to make corrections.\nHow do I update my contact information? Please contact our membership specialist with any changes to your contact information.\n", "headings": ["General help","Content Registration","Updating and maintaining metadata records","Reports","Crossref services","Multiple resolution","Co-access","General help","Am I eligible for membership?","How do I get a DOI for my paper?","When (and how) do I pay for my DOIs?","How do I find a DOI for a particular article?","How do I handle title transfers?","I\u0026rsquo;ve found a problem within your metadata - how do I get you to fix it?","How do I update my contact information?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/funders/", "title": "Funders", "subtitle":"", "rank": 5, "lastmod": "2020-12-01", "lastmod_ts": 1606780800, "section": "Get involved", "tags": [], "description": "How Crossref fits with funders We collect and share metadata about research published in articles, books, preprints, reviews, and more\u0026mdash;such as licenses, clinical trials, and retractions\u0026mdash;all of which helps funders measure reach and return.\nOur members register DOIs and bibliographic metadata with us for journal articles, preprints, books, peer reviews, and more. We have a growing number of funder members, and encourage funders to become involved in our committees, advisory groups and overall governance.", "content": "How Crossref fits with funders We collect and share metadata about research published in articles, books, preprints, reviews, and more\u0026mdash;such as licenses, clinical trials, and retractions\u0026mdash;all of which helps funders measure reach and return.\nOur members register DOIs and bibliographic metadata with us for journal articles, preprints, books, peer reviews, and more. 
We have a growing number of funder members, and encourage funders to become involved in our committees, advisory groups and overall governance.\nCrossref metadata is the bedrock for many thousands of platforms and services from search and discovery to research and assessment tools.\nWe make this information available via a funder search interface and via our public REST API so that it can be seamlessly integrated into downstream systems.\nThis helps report on:\nPublished outputs: where and how are researchers publishing, when, which institutions are they from, who is funding them? Data sharing and citation: has underlying/related data been linked to or made available? The citation and sharing of these published outputs. The openness of this information is a key part of Crossref’s support of the Principles of Open Scholarly Infrastructure, which our board voted to adopt in November 2020. The principles are a set of guidelines by which open scholarly infrastructure organizations and initiatives that support the research community can be run and sustained. We think that these principles help all those that support scholarly communications and research hold each other accountable, and ensure that vital open infrastructure can be stakeholder-governed, transparent and maintained for as long as it serves its purpose.\nFunding data and the funder registry Watch the video below to find out more about funding data and the registry:\nAs the video shows, we take this largely unstructured grant data, and help the research community match products and people to funding. In turn, funders use this metadata to track the impacts of their investment, and understand similar investments made by other funders.\nThis metadata will always be open and freely available but we also offer a more dedicated service with more predictable API response times, through our Plus service.\nRegistering research grants Reporting at the funder-level can be useful, but we’ve been working with our Funder Advisory Group to support the registration of grant metadata so that funders can see published outputs connected to specific research grants. You can find out more about why funders are registering research grants and how you can get involved in this initiative by visiting our dedicated grants page.\nIf you have any questions about membership or registering grants, then our membership team can help.
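As a rough illustration of the kind of downstream integration this enables, the sketch below queries the public REST API for works that acknowledge a particular funder. The funder ID and contact address are placeholders, and parameter details should be checked against the REST API documentation.

```python
import requests  # third-party: pip install requests

# Placeholder Funder Registry ID - look up the real one, e.g. via https://api.crossref.org/funders?query=...
FUNDER_ID = "10.13039/xxxxxxxxx"

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "filter": f"funder:{FUNDER_ID}",   # works that acknowledge this funder
        "rows": 5,                         # just a small sample
        "mailto": "you@example.org",       # identifies the caller for polite usage
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["message"]["items"]:
    print(item.get("DOI"), (item.get("title") or ["(no title)"])[0])
```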
Our technical support specialists can also help with questions about the funder registry, our APIs or grant registration.\n", "headings": ["How Crossref fits with funders","Funding data and the funder registry","Registering research grants"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/preprints/", "title": "Preprints", "subtitle":"", "rank": 4, "lastmod": "2020-11-09", "lastmod_ts": 1604880000, "section": "Get involved", "tags": [], "description": "Our members asked for the flexibility to register content at different points in the publishing lifecycle, so we extended our infrastructure to support members who want to register early versions such as preprints or working papers.\nOur custom support for preprints ensures that links to these outputs persist over time, that they are connected to the full history of the shared research results, and that the citation record is clear and up-to-date.", "content": "Our members asked for the flexibility to register content at different points in the publishing lifecycle, so we extended our infrastructure to support members who want to register early versions such as preprints or working papers.\nOur custom support for preprints ensures that links to these outputs persist over time, that they are connected to the full history of the shared research results, and that the citation record is clear and up-to-date.\nPublishing preprints is about more than simply getting a DOI Crossref can help you to clearly label content as a preprint using a preprint-specific schema. It’s not advisable to register preprints as data, components, articles, or anything else, because a preprint is not any of those things. Our service allows you to ensure the relationships between preprints and any eventual article are asserted in the metadata, and accurately readable by both humans and machines.\nWe have designed a schema together with a working group that included preprint advisors from bioRxiv and arXiv, along with members including PLOS, Elsevier, AIP, IOP, ACM. The schema lays out what metadata is specifically important for preprint content. We also developed a notification feature to alert preprint creators of any matches with journal articles, so they can link to future versions from the preprint.\nSince November 2016, members have been registering hundreds of thousands of preprints with us, and thousands of those in turn already have matches with journal articles too (requires a JSON viewer). These relationships in the Crossref metadata, available through our APIs, are relied upon by many parties - from researchers to funders - to discover, track and evaluate the preprint journey.\nBenefits of our custom support for preprints Persistent identifiers for preprints to ensure successful links to the scholarly record over the course of time The preprint-specific metadata we ask for reflects researcher workflows from preprint to formal publication Support for preprint versioning by providing relationships between metadata for different iterations of the same document. 
Notification of links between preprints and formal publications that may follow (such as journal articles, monographs) Reference linking for preprints, connecting up the scholarly record to associated literature Auto-update of ORCID records to ensure that preprint contributors are acknowledged for their work Preprints include funding data so people can report research contributions based on funder and grant identification Discoverability: we make the metadata available for machine and human access, across multiple interfaces (including our REST API, OAI-PMH, and Metadata Search). What to be aware of when registering preprints Members registering preprints need to make sure they:\nRegister content using the posted content metadata schema (see examples in the posted content markup guide) Respond to our match notifications that an author\u0026rsquo;s accepted manuscript (AAM) or version of record (VOR) has been registered, and link to that within seven days. You should designate a specific contact with us who will receive these alerts (it can be your existing technical contact) Clearly label the manuscript as a preprint, above the fold on the preprint landing page, and ensure that any link to the AAM or VOR is also prominently displayed above the fold. Other considerations:\nReferences will be flagged as belonging to a preprint in our Cited-by service The preprint is treated as one item only without components for its constituent parts Each version should be assigned a new DOI, and the versions should be associated via a relationship with type isVersionOf - learn more about relationships Preprints are not currently able to participate in Crossmark. Registering preprints: joining as a member Preprint owners who would like to use our preprint service should apply to join as a member. We have a dedicated fee structure for registering each preprint, and volume discounts offered for both backfile and current content. Learn more about our fees.\nRegistering preprints: existing members Are you an existing Crossref member who wants to assign preprint DOIs? Let’s talk about getting started or migrating any existing mis-labelled content over to the dedicated preprint deposit schema. You can also give us a specific contact who will receive match notifications that an author\u0026rsquo;s accepted manuscript or version of record (AAM or VOR) has been registered. Get in touch with our membership team and they’ll be able to walk you through the process.\nLearn more about registering preprints in our Education documentation.\n", "headings": ["Publishing preprints is about more than simply getting a DOI ","Benefits of our custom support for preprints ","What to be aware of when registering preprints","Registering preprints: joining as a member","Registering preprints: existing members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/publishers/", "title": "For publishers", "subtitle":"", "rank": 4, "lastmod": "2020-11-02", "lastmod_ts": 1604275200, "section": "Get involved", "tags": [], "description": "We work with thousands of publishers from all over the world. No matter what your size, subject area or business model, it doesn’t limit your ability to connect your content with the global network of online scholarly research. Each Crossref member gets to cast their vote to create a board that represents all types of organizations, and members can also stand for election to the board.
We have elections each year and have designated seats for different sizes of members.", "content": "We work with thousands of publishers from all over the world. No matter what your size, subject area or business model, it doesn’t limit your ability to connect your content with the global network of online scholarly research. Each Crossref member gets to cast their vote to create a board that represents all types of organizations, and members can also stand for election to the board. We have elections each year and have designated seats for different sizes of members.\nIf you publish one journal or thousands, you’re welcome to join our growing community.\nApply Find out more about what becoming a member involves. You may want to join Crossref directly, or you can also consider joining via one of our Sponsors, who can provide technical, billing, language and administrative support to members (some may charge extra for this).\nOur membership team can also help with any questions you may have about joining.\nParticipate Register your content Our members join us to register their content with us via human or machine interfaces. The metadata we collect supports a variety of record types, to effectively support the different scholarly content members want to register. By sending us metadata and identifiers related to your publications, you’re making it available to numerous systems and organizations that together help credit and cite the work, report impact of funding, track outcomes and activity, and more.\nBecause of this, providing robust, accurate metadata helps make your content more discoverable. You can easily track what metadata you have registered by visiting our Participation Reports and entering your organization name. These reports give a clear picture of the metadata registered by a member and are open to all.\nLink references Crossref is all about rallying the scholarly community to work together. Because of this, reference linking is an obligation for all Crossref members and for all current journal content. Reference linking means hyperlinking to Crossref DOIs when you create your citation list. This makes it possible for readers to follow a DOI link from the reference list of a published work to the location of the full-text document on a member’s publishing platform, building a network infrastructure that enhances scholarly communications on the web.\nOther services From helping members check content for originality, finding out who has cited the work they have published, to providing a consistent way to show readers the latest status of an article (or any other research object), we’ve developed a growing range of services that support and enhance the specific needs of our members and how they work with their content.\nHow your metadata is used All of the metadata that Crossref collects helps our members’ content be more discoverable. We make it available in a variety of formats so that anyone can come to one place to get information from our thousands of diverse members. 
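For a concrete sense of what that looks like in practice, anyone can pull the registered metadata for a single DOI from the public REST API. The short Python sketch below uses only the standard library and the example DOI from the display guidelines above; field names shown are illustrative and should be checked against the REST API documentation.

```python
import json
import urllib.request

# The example DOI from the display guidelines above; swap in one of your own registered DOIs.
doi = "10.7774/cevr.2016.5.1.19"

with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}") as resp:
    record = json.load(resp)["message"]

print(record["type"])                    # e.g. "journal-article"
print(record["title"][0])                # the registered title
print(record.get("reference-count", 0))  # how many references were deposited with the record
```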
Information about your publications is being shared by and used in search engines, collaborative editing and authoring tools, discovery platforms, library databases, by publishers themselves and many, many other places.\nYou can contact our membership specialist with any questions or to get set up, or you can get in touch with our technical support specialists for any technical or troubleshooting questions.\n", "headings": ["Apply","Participate","Register your content","Link references","Other services","How your metadata is used"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-09-07-director-technology/", "title": "Director of Technology", "subtitle":"", "rank": 1, "lastmod": "2024-09-07", "lastmod_ts": 1725667200, "section": "Jobs", "tags": [], "description": "Deliver next-level open infrastructure for global open science.", "content": " Applications for this position closed October 4, 2024. Do you want to take the lead in delivering next-level infrastructure for global open science? Come and join the global world of open research metadata as our new Director of Technology.\nLocation: Remote and global Type: Full-time Remuneration: 160K USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer. Reports to: Chief Operations Officer, Lucy Ofiesh Timeline: Advertise and recruit September-October, offer in November About the role Crossref is seeking a Director of Technology to play a key role in the leadership team and to develop and execute Crossref’s technical strategy. Reporting to the Chief Operations Officer, the Director of Technology will lead a talented team of software developers to build and maintain a robust, scalable, and innovative open scholarly infrastructure. As part of the leadership team, they will contribute to setting Crossref’s organisational strategy, develop and implement the organisation’s technology strategy, report to colleagues and the board, lead the technology team, and collaborate with other open scholarly infrastructure organisations.\nTechnology at Crossref helps us fulfil our vision of a rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nWe maintain a database of 161+ million metadata records registered by our nearly 20,000 members over the past 25 years. Our system acts as the backbone for preserving the scholarly record. We offer a wide array of services to ensure that scholarly research metadata is registered, linked, and distributed. When members register their content with us, we collect both bibliographic and non-bibliographic metadata. We process it so that connections can be made between publications, people, organisations, and other associated outputs. We preserve the metadata we receive as a critical part of the scholarly record. We also make it available across a range of interfaces and formats so that the community can use it and build tools with it.\nTechnology is a critical enabler of our mission and scalability and modernisation are at the heart of our strategy. 
By transitioning to a fully cloud-based infrastructure and modernising our systems and services, we will enhance our ability to meet the evolving needs of our community and keep pace with the growth in the amount and complexity of our metadata, the rapid changes in scholarly research and communications, and the development of Open Research globally. When done right, our technology can:\nenable best practices in scholarly communications; solve problems shared by our 20,000 global members (we act on behalf of our members to advance their interests); strengthen Crossref as a comprehensive, reliable, and secure infrastructure upon which our community can build tools and services; adapt to how the scholarly community evolves to keep pace with our scale of growth; and model openness, ensuring that our data and software are available for reuse or inspection as a public good whenever possible, and support collaborative community development of services when possible. The Director of Technology will lead an experienced team of 11 people and work closely with the leadership and senior management teams. The right candidate will be experienced at managing a team, setting priorities, and defining practices.\nThis role will work closely with the Programs group to understand the community’s needs and design solutions. Although Crossref is not a traditional software company, we provide critical open infrastructure with a mission to deliver innovative, cloud-based solutions for our members and the broader scholarly communications community. To do that, we need to centre the community in our development and hire technology leadership that focuses on excellence in architecture and delivery.\nKey responsibilities Technology Leadership: Develop and execute a comprehensive technology strategy that aligns with Crossref’s mission and goals, fostering innovation and continuous improvement and building out the research nexus vision. Collaborate with the leadership team to integrate technology initiatives into the organisation’s overall strategy. Lead the modernization of Crossref’s systems and infrastructure. Identify and assess new technologies and trends to inform decision making and ensure Crossref remains at the forefront of scholarly infrastructure innovation. Report to the board of directors and collaborate with the leadership team to make recommendations to the board. Work closely with cross-functional teams, including product management, finance and operations, and membership and community outreach. Technical Operations and the Crossref System: Oversee the design, development, and maintenance of Crossref’s systems, services and technical infrastructure. Ensure high availability, security, and scalability of systems, including APIs, databases, and web services. Develop and manage technology budgets, vendor relationships, and effective resource allocation. Identify opportunities for process optimization, automation, and efficiency gains. Collaborate closely with stakeholders across the organisation to understand requirements and deliver technology solutions that meet their needs. Team Management and Mentorship: Lead and inspire a high-performing technology team, fostering a collaborative and inclusive culture that values diversity and professional growth. Lead the technology team in adopting best practices, methodologies, and industry standards to ensure high-quality, scalable, and secure systems.
Provide strategic guidance, mentorship, and support to team members, encouraging their professional development and career advancement. Promote a culture of continuous learning, knowledge sharing, and cross-functional collaboration within the technology team and across the organisation. Collaboration and Community Engagement: Collaborate with adopters of the Principles of Open Scholarly Infrastructure and other open infrastructure organisations to enhance interoperability and data sharing. Engage with the scholarly community, attending conferences, workshops, sprints, and forums. Represent Crossref in technical discussions and contribute to open standards and protocols. Actively encourage community participation from the team and encourage co-creation and open-source contributions within the community. About you The ideal candidate will be a strategic thinker with experience in big data, architecting systems, implementing change and leading technology teams.\nKey professional experiences: Minimum of 10 years of progressive experience in technology leadership roles, with a proven track record of leading and managing high-performing teams. Proven ability to lead and manage change, foster innovation, and drive continuous improvement in technology initiatives. Proven track record of technology leadership in complex, mission-driven organisations. Strong understanding of open-source technologies, cloud computing, distributed systems architectures, APIs, and web services. Experience balancing technical excellence with practical business needs. Extensive knowledge of software development methodologies, project management, and technology stack selection. Demonstrated experience in metadata management, data integration, and interoperability standards within scholarly communications or related domains. Experience leading cybersecurity efforts and knowledge of best practices in cybersecurity. Familiarity with scholarly publishing, research workflows, and metadata standards (e.g., DOI, ORCID) is a plus. Bachelor’s or master’s degree in computer science, information technology, or a related field. Key Skills: Strong strategic thinking and problem-solving abilities. Excellent communication and collaboration skills with the ability to articulate technical concepts to non-technical audiences. People management. Financial management and budgeting. Passion for Crossref\u0026rsquo;s mission and commitment to open scholarly infrastructure and research integrity. About Crossref and the team We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. 
We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide— which we take a bit more seriously than ourselves. We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nWe can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart and view our Annual Reports and financial information here.\nHow to apply To apply, please submit a CV and cover letter, detailing how you fulfil the role description and personal specification to Perrett Laver’s application page quoting reference 7465. The deadline for applications is Friday, October 4, 2024.\nThis is a remote position and the successful candidate can be based most anywhere as long as they are prepared to adapt their hours to European and East Coast US time zones. A moderate amount of travel is expected.\nAnticipated salary for this role is approximately $160,000 USD, or equivalent amount paid in local currency. Crossref offers competitive compensation, a rich benefits package, flexible work arrangements, professional development opportunities, and a supportive work environment. As a non-profit organisation, we prioritize mission over profit.\nThe selection committee will together review all candidates’ applications and agree on a longlist for the role. Longlisted candidates will be invited to discuss the position with Perrett Laver in greater detail. The selection committee will subsequently meet to decide upon a final shortlist to be invited to the formal interview stage.\nProtecting your personal data is of the utmost importance to Perrett Laver and we take this responsibility very seriously. Any information obtained by our trading divisions is held and processed in accordance with the relevant data protection legislation. The data you provide us with is securely stored on our computerized database and transferred to our clients for the purposes of presenting you as a candidate and/or considering your suitability for a role you have registered interest in.\nPerrett Laver is a Data Controller and a Data Processor, and our legal basis for processing your personal data is ‘Legitimate Interests’. You have the right to object to us processing your data in this way. 
For more information about this, your rights, and our approach to Data Protection and Privacy, please see our Privacy Statement.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","Technology Leadership:","Technical Operations and the Crossref System:","Team Management and Mentorship:","Collaboration and Community Engagement:","About you","Key professional experiences:","Key Skills:","About Crossref and the team","How to apply","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-05-21-metadata-manager/", "title": "Metadata Manager", "subtitle":"", "rank": 1, "lastmod": "2024-05-21", "lastmod_ts": 1716249600, "section": "Jobs", "tags": [], "description": "APPLICATIONS ARE CLOSED EFFECTIVE JUNE 11, 2024 Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our new **Metadata Manager**. Location: Remote and global (with 3-hour overlap with UTC 15:00 -18:00) Type: Full-time Remuneration: Approx. 70-80K USD or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer.", "content": " APPLICATIONS ARE CLOSED EFFECTIVE JUNE 11, 2024 Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our new **Metadata Manager**. Location: Remote and global (with 3-hour overlap with UTC 15:00 -18:00) Type: Full-time Remuneration: Approx. 70-80K USD or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer. Reports to: Head of Metadata, Patricia Feeney; this role is full time at Crossref and works closely with a cross-organizational team running ROR, with colleagues based at California Digital Library and DataCite. Timeline: Advertise and recruit in May-June/hire in July About the role We\u0026rsquo;re looking for a new full-time Metadata Manager. This role will be based at Crossref and responsibilities will be split between ROR (75%) and Crossref (25%). This role will be responsible for day-to-day metadata curation activities for the ROR registry, including coordinating ongoing registry updates, working with ROR’s curation advisors and other community stakeholders, and maintaining ROR’s curation policies and practices. 
This role will also collaborate with Crossref’s metadata team in developing and strengthening Crossref metadata.\nWe are a geographically distributed, remote-first team with flexible working hours.\nKey responsibilities For ROR\nCoordinate community-based curation processes:\nTriage incoming requests to prepare for community review and/or work with Crossref support colleagues to optimize triaging Oversee community review process to make sure requests are reviewed in a timely and accurate manner Optimize curation workflow as needed to improve experience for requesters and community curators Maintain guidance and documentation for community curators Schedule and facilitate regular curator meetings Onboard and offboard community curators Provide training and assign tasks to contract staff Coordinate ROR registry updates\nMaintain regular schedule of registry updates Identify which updates will be in a given release Prepare metadata for records being added or updated Work with curators handling metadata records Ensure metadata records pass validation/QA Review and test release candidates Deploy changes to production via Github-based workflow Publish release notes and announce releases to users Publish data dump on Zenodo Gather data and generate reports on curation processes to track volume, turnaround times, types of requests, etc. Metadata management and QA:\nMaintain documentation of metadata policies and inclusion criteria, as well as schema documentation Analyze current registry data to identify opportunities for metadata QA and future improvements Work with ROR’s development team on schema updates and metadata clean-up Community engagement:\nRespond to support questions about data issues, curation policies, and release timelines Support strategic initiatives and integrations Provide updates at community calls and webinars Collect feedback from community to inform curation policies and process For Crossref:\nParticipate in community-led efforts to expand and refine Crossref metadata by assisting with working groups and other community interaction Help with input (XML) and output (JSON) metadata modeling and testing Help maintain documentation About you We don\u0026rsquo;t expect a successful candidate to tick all of these boxes right away! If you have any questions, please get in touch.\nQualities\nComfortable collaborating with colleagues or stakeholders in the community Comfortable being part of a distributed team Self-motivated to succeed and take initiative and seek continuous improvement Familiarity with scholarly research infrastructures and the open science landscape Skills\nStrong written and verbal communication skills, able to communicate clearly, simply, and effectively Experience in metadata curation Experience in data analysis Working knowledge of a scripting language, such as Python Experience in workflow development and optimization Experience facilitating community groups/collaborations Experience with Github and Markdown Experience working with RESTful APIs and related web services/technologies Experience or familiarity with XML and JSON About the team The Crossref team is distributed across the world. The ROR team is based in the USA.\nWe work fully remotely, but try to meet in person at least once a year. This is a full-time position, but working hours are flexible. The applicant should expect they will need to travel internationally to work with colleagues for about 5-10 days a year.
If you have any questions we would be happy to discuss.\nYou can be based anywhere in the world where we can employ staff, either directly or through an employer of record.\nAbout Crossref We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nCrossref sits at the heart of the global exchange of research information, and our job is to make it possible—and easier—to find, cite, link, assess, and reuse research, from journals and books, to preprints, data, and grants. Through partnerships and collaborations we engage with members in 148 countries (and counting) and it’s very important to us to nurture that community.\nWe’re about 46 staff and remote-first. This means that we support our teams working asynchronously and to flexible hours. Some international travel will likely be appropriate, for example to in-person meetings with colleagues and members, but in line with our travel policy. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nWE ARE NO LONGER ACCEPTING APPLICATIONS. Equal opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["APPLICATIONS ARE CLOSED EFFECTIVE JUNE 11, 2024","About the role","Key responsibilities","About you","About the team","About Crossref","WE ARE NO LONGER ACCEPTING APPLICATIONS.","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-03-11-product-manager-2023-03-11/", "title": "Product Manager", "subtitle":"", "rank": 1, "lastmod": "2023-03-11", "lastmod_ts": 1678492800, "section": "Jobs", "tags": [], "description": "Applications for this position is closed. 
Product Manager Location: Remote and global (with minimum overlap with the US Eastern time zone)\nRemuneration: €80,000 – €105,000 or local equivalent, depending on experience\nReports to: Director of Product\nApplication timeline: We will receive applications until March 31st.\nCrossref is a not-for-profit membership organization that exists to make scholarly communications better by providing metadata and services that make research objects easy to find, cite, link, assess, and reuse.", "content": " Applications for this position are closed. Product Manager Location: Remote and global (with minimum overlap with the US Eastern time zone)\nRemuneration: €80,000 – €105,000 or local equivalent, depending on experience\nReports to: Director of Product\nApplication timeline: We will receive applications until March 31st.\nCrossref is a not-for-profit membership organization that exists to make scholarly communications better by providing metadata and services that make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nWe are a small team with a big impact, and we’re looking for a creative, technically-oriented Product Manager to join us in improving scholarly communications. Reporting to the Director of Product, Product Managers at Crossref are expected to work across multiple products and services based on organisational goals (/strategy) and manage projects across the different teams in the organization and with community partners like the Public Knowledge Project.\nUnlike traditional SaaS Product Management, being community-led means the Product Manager is not always the expert in what they are developing, so the ability to plan, research, convene, listen, and facilitate consensus-based decisions is a key factor in being successful in this role at Crossref.\nSome examples of the diverse set of tools, services, features and functionality we support include ORCID auto-update, simple API-driven tools to help members register records with us, API endpoints and interfaces to help the community retrieve and use the metadata we store, Crossmark and the Funder Registry, and we have more in the pipeline supported by our R\u0026amp;D, engineering, and infrastructure services teams. Does this sound like your sort of thing?\nKey responsibilities Manage all aspects of the life cycle for one or more key areas within the Crossref ecosystem Coordinate work across teams at Crossref by communicating ideas and writing project plans, gathering and assessing feedback and using that to drive decision-making Seek, absorb and articulate community input through advisory groups, convening meetings, and reviewing data. 
Integrate usability studies, user research, system investigations, and ongoing community feedback into requirements Influence the product development strategy by communicating priorities based on organisational needs and community feedback Define goals, methods and metrics for the adoption of features and functionality to track success Coordinate and direct working groups made up of community members and users Promote adoption directly with the community as well as develop relationships with key community influencers \u0026amp; strategic partners Working with R\u0026amp;D to test concepts internally and with the community to inform and support Crossref’s service(s) Evangelize areas of focus to rally people and resources behind ideas and ambitions critical to success About you You think in terms of the big picture, but can work closely with others to explain context and deliver on the details You can turn a range of inputs into solid action plans and achievable chunks of work You have an understanding and experience of complex workflow systems, writing clear specifications and working with APIs You care about open infrastructure and want to make scholarly communications better You do whatever it takes to make your product and community successful and love to problem solve, whether that means writing a QA plan, tracking down the root cause of a user’s frustration or working with our R\u0026amp;D team to spin up and test a POC in response to an idea You are passionate about understanding community needs, working transparently, eliciting advice and feedback openly and advising on community calls You communicate with empathy and exceptional precision You can convey and encapsulate strategic (and technical) concepts in presentations verbally, visually, and textually. You have experience working with developers. You are technical enough to discuss with engineers critical questions about architecture and product choices You are motivated to continually improve products based on community feedback You are self-motivated with a collaborative and can-do attitude and enjoy working with a small team across multiple time zones You’re comfortable facilitating and chairing meetings and working groups You maintain order in a dynamic environment, managing multiple priorities You have 5+ years of product management experience with internet technologies and/or equivalent experience in the research publishing arena What it’s like working at Crossref We’re about 45 staff and now ‘remote-first’ although we have optional offices in Oxford, UK, and Boston, USA. This means that we support our teams working asynchronously and to flexible hours. Some international travel will likely be appropriate when it’s safe to do so, for example to in-person meetings with colleagues and members, but in line with our travel policy. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThinking of applying? 
We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nClick here to apply. Please submit your CV/Resume and Cover Letter. We require both documents in order to be considered for the role.\nPlease strive to submit your application by March 31st, 2023.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you!\n", "headings": ["Product Manager","Key responsibilities","About you","What it’s like working at Crossref","Thinking of applying?","Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-09-28-accounts-payable-specialist/", "title": "Accounts Payable Specialist", "subtitle":"", "rank": 1, "lastmod": "2021-09-28", "lastmod_ts": 1632787200, "section": "Jobs", "tags": [], "description": "Applications for this position closed 2021-11-29. Come and work with us as our Accounts Payable Specialist. It’ll be fun! ​​Crossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organization, we ingest and distribute metadata from our 14000+ member organizations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals for the benefit of society.", "content": " Applications for this position closed 2021-11-29. Come and work with us as our Accounts Payable Specialist. It’ll be fun! ​​Crossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organization, we ingest and distribute metadata from our 14000+ member organizations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals for the benefit of society. This is a time of considerable change for the Crossref community, and you can help shape our future.\nWe are a small team with a big impact, and we’re looking for a detail-oriented self-starter to join the team as Part-time Accounts Payable Specialist.\nAbout the Role Reporting to the Supervising Accountant, the Accounts Payable Specialist is a key role within the Finance team. The Accounts Payable Specialist is responsible for full-cycle Accounts Payable accounting for both USA and UK-based offices, assuring proper recording within the accounting system. This position is the lead contact for vendor relations and the internal expense reporting application. 
The position requires a skill set and personality type capable of performing a broad range of duties and responsibilities with minimal supervision and a high degree of accuracy and thoroughness.\nAbout You The successful candidate will possess the following:\nThe ability to organize work, set priorities, follow up and work proactively and accurately Excellent oral, written, data entry, and communication skills The ability to work collaboratively and independently A self-starter and problem solver with exceptional attention to detail Able to adapt and succeed in a fluid and flexible environment Be motivated, self-directed, and detail-oriented Have experience in a multi-currency environment (GBP) Key Responsibilities Responsible for the full cycle AP function for both the UK and USA offices, including entering invoices into Intacct, obtaining payment approvals, and facilitating payment processing (checks/wires/direct debits/ACHs) Responsible for managing corporate credit cards, including reviewing and reconciling to statements monthly Responsible for the Expensify expense reporting platform, including maintaining knowledge of updates and enhancements and troubleshooting Responsible for the yearly 1099/1096 filing and vendor reporting Act as backup for other Finance Team staff Responding to Zendesk inquiries and assisting in collections as needed Assist with monthly and quarterly financial reporting Assist with audit Other ad hoc financial and operational projects Qualifications 2-5 years of accounting experience Solid experience using cloud-based/accounting applications (Intacct) Solid experience using Microsoft Excel and other tools (Gmail/Google Docs, etc.) Bachelor’s Degree in Accounting/Business or equivalent business experience preferred This position is full-time (30 hours). The Crossref team is geographically distributed in Europe, North America and Africa, and we fully support working from home. We have two small offices (Lynnfield, MA, USA and Oxford, UK) that are temporarily closed due to the pandemic. It would be good to have a minimum 3-hour overlap with the US Eastern time zone.\nTo Apply To apply, please send your cover letter and resume to Lindsay Russell at jobs@crossref.org. Even if you don’t think you have all of the right experience, we’re really excited to hear from you.\nCrossref is an equal opportunity employer. We believe that diversity and inclusion among our staff are critical to our success as a global organization. Therefore, we seek to recruit, develop, and retain the most talented people from a diverse candidate pool.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\n", "headings": ["Come and work with us as our Accounts Payable Specialist. 
It’ll be fun!","About the Role","About You","Key Responsibilities","Qualifications","To Apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-06-15-technical-support-specialist/", "title": "Technical Support Specialist", "subtitle":"", "rank": 1, "lastmod": "2021-06-15", "lastmod_ts": 1623715200, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Technical Support Specialist Location: Remote (Africa, West Asia) Closing date: Thursday, 2021 July 15 About Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organisation, we ingest and distribute metadata from our 14000+ member organisations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals, for the benefit of society.", "content": " Applications for this position are closed. Technical Support Specialist Location: Remote (Africa, West Asia) Closing date: Thursday, 2021 July 15 About Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organisation, we ingest and distribute metadata from our 14000+ member organisations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals, for the benefit of society. This is a time of considerable change for the Crossref community, and you can help shape our future.\nAbout the role Reporting to our Technical Support Manager, the full-time Technical Support Specialist is an important role in our Member Experience team, part of Crossref’s Outreach group, a fourteen-strong distributed team with colleagues across the US and Europe. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand, in multiple languages.\nWe’re looking for a Technical Support Specialist to provide first-line help to our international community of publishers, librarians, funders, researchers, and developers on a range of services that help them deposit, find, link, cite, and assess scholarly content. You’ll be working closely with five other technical and membership support colleagues to provide support and guidance for people with a wide range of technical experience. The strongest candidates will not necessarily be from a technical background, but they’ll have interest and initiative to grow their technical skills while communicating the complexity of our products and services in straightforward and easy-to-understand terms. You’ll help our community both create and retrieve metadata records with tools ranging from simple user interfaces to APIs and integrations.\nCrossref is a distributed team serving members and users around the world. We are seeking candidates to bolster our team, in accordance with our 2025 strategic agenda, to work remotely from Africa or West Asia (where Crossref is administratively able to support employees). 
We work a flexible schedule; for training and synchronous problem-solving, we also ask that candidates have availability between 13:00 and 15:00 UTC.\nKey responsibilities Replying to and solving community queries using the Zendesk support system Using our various tools and APIs to find the answers to these queries, or pointing users to support materials that will help them Working with colleagues on particularly tricky tickets, escalating as necessary Working efficiently but also kindly and with empathy with our very diverse, global community About you We are looking for a proactive candidate with a unique blend of customer service skills, analytical troubleshooting skills, and a passion to help others. You’ll have an interest in data and technology and will be a quick learner of new technologies. You’ll be able to build relationships with our community members and serve their very diverse needs - from assisting those with basic queries to really digging into some knotty technical queries. Because of this, you’ll also be able to distill those complex and technically challenging queries into easy-to-follow guidance.\nEssential\nThe ability to clearly communicate complex technical information to technical and non-technical users, using open questions to get to the bottom of things when queries don’t seem to make sense Quick learner of new technologies; can rapidly pick up new processes and systems; and have interest and initiative to grow your own technical skills Extremely organized and can bring order to chaos, independently manage multiple priorities Able to balance a very diverse role, wearing a lot of different hats and providing a wide range of support Proactive in asking questions and making suggestions for improvements Process-driven but able to cope with occasional ambiguity and lack of clarity - open to feedback and adaptable when things change quickly A truly global perspective - we have over 15,000 member organizations from 140 countries across numerous time zones Nice to have\nExperience helping customers and solving problems in creative and unique ways Experience with or interest in XML, metadata, and Crossref as well as scholarly research and information science Experience with Zendesk and GitLab or similar support and issue management software To apply To apply, please send your cover letter and resume to Lindsay Russell at jobs@crossref.org. Even if you don’t think you have all of the right experience, we’re really excited to hear from you.\nCrossref is an equal opportunity employer. We believe that diversity and inclusion among our staff is critical to our success as a global organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. 
Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Technical Support Specialist","About Crossref","About the role","Key responsibilities","About you","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-02-08-senior-software-developer/", "title": "Senior Software Developer", "subtitle":"", "rank": 1, "lastmod": "2021-02-08", "lastmod_ts": 1612742400, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Senior Software Developer Location: Remote Closing date: Friday 5th March 2021 Salary: Competitive\nCrossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organization, we ingest and distribute metadata from our 15,000+ member organizations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals, for the benefit of society.", "content": " Applications for this position are closed. Senior Software Developer Location: Remote Closing date: Friday 5th March 2021 Salary: Competitive\nCrossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organization, we ingest and distribute metadata from our 15,000+ member organizations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals, for the benefit of society. This is a time of considerable change for the Crossref community, and you can help shape our future.\nThe software development team is responsible for maintaining, operating and building the services that enable the workflows of our members and the wider research community. We have a deep understanding not only of technology, but also the needs of our diverse community.\nYou will contribute primarily to our Java and Kotlin codebases, with the option of also contributing to our Clojure codebases. We don\u0026rsquo;t expect you to be an expert in all of these, but you should have in-depth knowledge of at least one compiled, JVM, or Functional language (Java, Clojure, Kotlin, Scala, C#, Go etc). You will build primarily back-end services and APIs.\nYou will specify, design and implement improvements, features and services. You will have a key voice in discussions about technical approaches and architecture. You will always keep an eye on software quality and ensure that the code you and your colleagues produce is maintainable, well tested, and of high quality.\nKey responsibilities Understand Crossref’s mission and how we support it with our services. Collaborate with external stakeholders when needed. Pursue continuous improvement across legacy and green-field codebases. Work flexibly in multi-functional project teams, especially in partnership with the Product team, to design and develop services. Ensure that solutions are reliable, responsive, and efficient. Produce well-scoped, testable, software specifications. Implement and test solutions using Kotlin, Java and other relevant technologies. Work closely with the Head of Software Development to solve problems, maintain and improve our services and execute technology changes. Provide code reviews and guidance to other developers regarding development practices and help maintain and improve our development environment. 
Identify vulnerabilities and inefficiencies in our system architecture and processes, particularly regarding cloud operations, metrics and testing. Communicate proactively with membership and technical support colleagues ensuring they have all the information and tools required to serve our users. Openly document and share development plans and workflow changes. Be an escalation point for technical support; investigate and respond to occasional but complex user issues; help minimize support demands related to our systems; be part of our on-call team responding to service outages. About you We don\u0026rsquo;t expect a successful candidate to tick all of these boxes right away!\nAn expert senior developer with experience in Java, Kotlin, Clojure or related languages. Experience in Spring or similar frameworks. Have a proven track record of picking up new technologies. Experienced with continuous integration, testing and delivery frameworks, and cloud operations concepts and techniques. Familiar with AWS, containerization and infrastructure management using tools like Terraform. Some experience with Python, JavaScript or similar scripting languages. Experience working on open source projects. Able to quickly understand, refactor and improve legacy code and fix defects. Experience with, or a working understanding of, XML and document-oriented systems such as Elastic Search. Experience building tools for online scholarly communication or related fields such as Library and information science, etc. Comfortable collaborating with colleagues or stakeholders in the community. Ability to create and maintain a project plan. Self-directed, a good manager of your own time, with the ability to focus. Comfortable being part of a distributed team. Curious and tenacious at learning new things and getting to the bottom of problems. Strong at written and verbal communication skills, able to communicate clearly, simply, and effectively. Outstanding at interpersonal relations and relationship management. Comfortable collaborating with colleagues across the organisation. Assuming that international travel ever becomes possible again, the applicant should expect they will need to travel internationally to work with colleagues for about 5-10 days a year. About the team The software development team is distributed. Most issue tracking and all new code is open source. We strongly believe in open scholarly infrastructure and openness at all stages of the software development lifecycle. As a membership organization we keep closely in touch with our users, and encourage our developers to be familiar with our community. The Development, Product and Infrastructure teams are tightly knit and we work in 2 week sprints.\nTo apply This position is full time and, as for all Crossref employees, location is flexible. The Crossref team is geographically distributed in Europe and North America and we fully support working from home. It would be good to have a minimum 3-hour overlap with the UTC-0 time zone.\nTo apply, please send your cover letter and resume to Lindsay Russell at jobs@crossref.org. Even if you don’t think you have all of the right experience, we’re really excited to hear from you.\nCrossref is an equal opportunity employer. 
We believe that diversity and inclusion among our staff is critical to our success as a global organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Senior Software Developer","Key responsibilities","About you","About the team","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2021-01-19-product-manager-2021-01-19/", "title": "Product Manager", "subtitle":"", "rank": 1, "lastmod": "2021-01-19", "lastmod_ts": 1611014400, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Product Manager Location: Remote\nCrossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organization, we ingest and distribute metadata from our 13000+ member organizations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals, for the benefit of society. This is a time of considerable change for the Crossref community, and you can help shape our future.", "content": " Applications for this position are closed. Product Manager Location: Remote\nCrossref makes research objects easy to find, cite, link, assess, and reuse. As an open infrastructure organization, we ingest and distribute metadata from our 13000+ member organizations worldwide, ensuring community collaboration in everything that we do. Our work helps achieve open research and open metadata goals, for the benefit of society. This is a time of considerable change for the Crossref community, and you can help shape our future.\nWe are a small team with a big impact, and we’re looking for a creative, technically-oriented Product Manager to join us in improving scholarly communications. Reporting to the Director of Product, Product Managers at Crossref are expected to work across multiple products and services. 
This role, at least initially, is focused around ‘scholarly stewardship’ and includes Similarity Check, Funder Registry and registering research grants, as well as others.\nKey responsibilities Manage all aspects of the product life-cycle for one or more key services or products within the Crossref ecosystem Seek, absorb and articulate community input through advisory groups, convening meetings, and reviewing data Influence product strategy focusing on business priorities and user experience Integrate usability studies, user research, system investigations, and ongoing community feedback into product requirements Define goals, methods and metrics for product adoption to track success Coordinate and direct working groups made up of community members and users Promote product adoption directly with the community as well as develop relationships with key community influencers and strategic partners Evangelize products and values to rally people and resources behind ideas and ambitions critical to success Advance the concepts of research integrity within the scholarly community About you You think in terms of the big picture, but deliver on the details You have a nose for great products and advocate for new features using qualitative and quantitative reasoning You can turn incomplete, conflicting, or ambiguous inputs into solid action plans You will, ideally, have an understanding and experience of complex workflow systems You care about open infrastructure and want to make scholarly communications better You do whatever it takes to make your product and community successful, whether that means writing a QA plan or tracking down the root cause of a user’s frustration You are passionate about understanding community needs, working transparently, eliciting advice and feedback openly and advising on community calls You communicate with empathy and exceptional precision You can convey and encapsulate strategic (and technical) concepts in presentations verbally, visually, and textually. You are comfortable working with developers. You are technical enough to discuss with engineers critical questions about architecture and product choices You obsess about continuous product improvement You are self-motivated with a collaborative and can-do attitude and enjoy working with a small team across multiple time zones You maintain order in a dynamic environment, managing multiple priorities You are committed to agile best practices You have 5+ years of product management experience with internet technologies and/or equivalent experience in the research publishing arena This position is full time and, as for all Crossref employees, location is flexible. The Crossref team is geographically distributed in Europe and North America and we fully support working from home. We have two small offices (Lynnfield, MA, USA and Oxford, UK) that are temporarily closed due to the pandemic. It would be good to have a minimum 3-hour overlap with the US Eastern time zone.\nTo apply, please send your cover letter, resume, and at least one sample of an effective product specification or epic/story development to Lindsay Russell at jobs@crossref.org. Even if you don’t think you have all of the right experience, we’re really excited to hear from you.\nCrossref is an equal opportunity employer. 
We believe that diversity and inclusion among our staff is critical to our success as a global organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Product Manager","Key responsibilities","About you"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2020-11-26-senior-front-end-developer/", "title": "Senior Front-end Developer", "subtitle":"", "rank": 1, "lastmod": "2020-11-26", "lastmod_ts": 1606348800, "section": "Jobs", "tags": [], "description": "Applications for this position are closed About the role You will play a key role in the software development team, being the technical lead for the front-end components of a number of services that are critical to thousands of publishers in the global scholarly community. You will design and build infrastructure that serves our newer smaller members as well as large high-volume publishers.\nYou will contribute to our JavaScript and Java codebases, building new functionality using Vue.", "content": " Applications for this position are closed About the role You will play a key role in the software development team, being the technical lead for the front-end components of a number of services that are critical to thousands of publishers in the global scholarly community. You will design and build infrastructure that serves our newer smaller members as well as large high-volume publishers.\nYou will contribute to our JavaScript and Java codebases, building new functionality using Vue.js and maintaining our React.js codebase.\nAs a technical lead on our front-end you will collaborate with the Product and Infrastructure teams to specify, design and implement our new features and services as part of our full stack. You will have a key voice in discussions about technical approaches to front-end architecture and how they relate to our back-end, infrastructure and operational considerations. You will always keep an eye on software quality and ensure that the code you and your colleagues produce is maintainable, well tested and of high standard.\nKey responsibilities Understand Crossref’s mission, our role in the scholarly community, and how we support it with our services. Understand and help to guide our data models, and translate them into usable, accessible interfaces. Pursue continuous improvement and quality. Work flexibly in multi-functional project teams to design and develop services and ensure that our systems are reliable, responsive, and efficient. Work closely with the Head of Software Development to solve strategic problems, maintain and improve our services and execute technology changes. Help to guide our legacy front-end migration strategy. Maintain expertise in front-end technologies. Provide code reviews and guidance to other developers regarding development practices and help maintain and improve our development environment. 
Identify vulnerabilities and inefficiencies in our system architecture and processes, particularly regarding cloud operations, metrics and testing. Communicate proactively with membership and technical support colleagues ensuring they have all the information and tools required to serve our users. Openly document and share development plans and workflow changes. Be an escalation point for technical support; investigate and respond to occasional but complex user issues; help minimize support demands related to our systems; be part of our on-call team responding to service outages. About you We don\u0026rsquo;t expect a successful candidate to tick all of these boxes right away!\nAn interest in scholarly communication and open infrastructure. An expert senior developer with experience in JavaScript and other front-end technologies, preferably Vue.js, and have a proven track record of picking up new technologies. Familiarity with Java-based back-end technology. A focus on the particular needs of our diverse users. Experienced with continuous integration, testing and delivery frameworks, and cloud operations concepts and techniques. Experience with Python, preferably including Jupyter notebooks. Experience with static site generators such as Hugo. Experience working on open source code. Familiar with AWS, Docker and infrastructure management using Terraform. Able to quickly pick up, understand and improve legacy code. Self-directed, a good manager of your own time, with the ability to focus. Curious and tenacious at learning new things and getting to the bottom of problems. Strong at written and verbal communication skills, able to communicate clearly, simply, and effectively. Outstanding at interpersonal relations and relationship management. Comfortable collaborating with colleagues across the organisation. Assuming that international travel ever becomes possible again, the applicant should expect they will need to travel internationally to work with colleagues for about 5-10 days a year. You can find more about our latest plans from our recent LIVE Annual event: https://0-doi-org.libus.csd.mu.edu/10.13003/5gq8v1q.\nThis position is full time and, as for all Crossref employees, location is flexible - you can work remotely with a 2 to 3-hour overlap with UTC-0. We provide a competitive benefits package.\nAbout the team Our colleagues are spread across Europe and North America. The software development team can be found in the US east-coast, the UK, Ireland and France.\nWe build and maintain services for the Crossref community. Our DOI registration, metadata pipeline, reference matching, search and querying play a part in the operations of 15,000 members, who have registered the metadata for over 100 million content items. Our systems have evolved over our 20 year history, and we\u0026rsquo;re continuing to proactively update them. New code and services are written in modern Java, Clojure and JavaScript and run in AWS, making use of Kafka and Elastic Search.\nIssue tracking and all new code is open source. We strongly believe in open scholarly infrastructure and openness at all stages of the software development lifecycle. As a membership organization we keep closely in touch with our users, and encourage our developers to be familiar with our community. The Development, Product and Infrastructure teams are tightly knit and we work in 2 week sprints.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. 
We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 12,000 members across 120 countries and thousands of tools and services relying on our metadata.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2020-11-17-contract-technical-support/", "title": "Technical Support Contractor", "subtitle":"", "rank": 1, "lastmod": "2020-11-17", "lastmod_ts": 1605571200, "section": "Jobs", "tags": [], "description": "Applications for this position are closed. Request for services: Technical Support Contractor Location: Remote\nClosing date: Wednesday 2021 May 12 About the role The Technical Support Contractor will work closely with our Member Experience team, part of Crossref’s Outreach team, a fifteen-strong distributed team with members across the US and Europe. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand, in multiple languages.", "content": " Applications for this position are closed. Request for services: Technical Support Contractor Location: Remote\nClosing date: Wednesday 2021 May 12 About the role The Technical Support Contractor will work closely with our Member Experience team, part of Crossref’s Outreach team, a fifteen-strong distributed team with members across the US and Europe. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand, in multiple languages. We’re looking for a Technical Support Contractor to provide front-line help to our international community of publishers, librarians, funders, researchers and developers on a range of services that help them deposit, find, link, cite, and assess scholarly content.\nWe’re looking for contractors able to work remotely. There is no set schedule and contractors bill their hours monthly.\nKey responsibilities Replying to and solving community queries using the Zendesk support system. Using our various tools and APIs to find the answers to these queries, or pointing users to support materials that will help them. Working with colleagues on particularly tricky tickets, escalating as necessary. 
Working efficiently but also kindly and with empathy with our very diverse, global community. This position is for an independent contractor.\nAbout the team You’ll be working closely with six other technical and membership support colleagues to provide support and guidance for people with a wide range of technical experience. You’ll help our community create and retrieve metadata records with tools ranging from simple user interfaces to robust APIs.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that. Since January 2000 we have grown from strength to strength and now have over 15,000 members across 140 countries, and thousands of tools and services relying on our metadata.\nTo apply Please send a cover letter, and your resume to: jobs@crossref.org.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["Request for services: Technical Support Contractor","About the role","Key responsibilities","About the team","About Crossref","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2020-09-07-senior-software-developer/", "title": "Senior Software Developer", "subtitle":"", "rank": 1, "lastmod": "2020-09-01", "lastmod_ts": 1598918400, "section": "Jobs", "tags": [], "description": "Applications for this position are closed About the role You will play a key role in the software development team, being the technical lead for a number of services that are critical to thousands of publishers in the global scholarly community. You will design and build infrastructure that serves our newer smaller members as well as large high-volume publishers.\nYou will contribute to our Clojure and Java codebases. You don\u0026rsquo;t need to know both, but you should be strong in at least one, and have experience with similar languages.", "content": " Applications for this position are closed About the role You will play a key role in the software development team, being the technical lead for a number of services that are critical to thousands of publishers in the global scholarly community. You will design and build infrastructure that serves our newer smaller members as well as large high-volume publishers.\nYou will contribute to our Clojure and Java codebases. You don\u0026rsquo;t need to know both, but you should be strong in at least one, and have experience with similar languages.\nAs a technical lead you will collaborate with the Product and Infrastructure teams to specify, design and implement our new features and services. You will have a key voice in discussions about technical approaches and architecture. 
You will always keep an eye on software quality and ensure that the code you and your colleagues produce is maintainable, well tested and of high quality.\nKey responsibilities Understand Crossref’s mission and how we support it with our services. Pursue continuous improvement. Work flexibly in multi-functional project teams to design and develop services and ensure that our systems are reliable, responsive, and efficient. Work closely with the Head of Software Development to solve problems, maintain and improve our services and execute technology changes. Provide code reviews and guidance to other developers regarding development practices and help maintain and improve our development environment. Identify vulnerabilities and inefficiencies in our system architecture and processes, particularly regarding cloud operations, metrics and testing. Communicate proactively with membership and technical support colleagues ensuring they have all the information and tools required to serve our users. Openly document and share development plans and workflow changes. Be an escalation point for technical support; investigate and respond to occasional but complex user issues; help minimize support demands related to our systems; be part of our on-call team responding to service outages. About you We don\u0026rsquo;t expect a successful candidate to tick all of these boxes right away!\nAn expert senior developer with experience in Java and/or Clojure, and have a proven track record of picking up new technologies. Experienced with continuous integration, testing and delivery frameworks, and cloud operations concepts and techniques. Some experience with Python. Experience working on open source projects. Familiar with AWS, containerization and infrastructure management using tools like Terraform. Able to quickly pick up, understand and improve legacy code. Experience with, or a working understanding of, XML and document-oriented systems such as Elastic Search. Experience building tools for online scholarly communication. Self-directed, a good manager of your own time, with the ability to focus. Curious and tenacious at learning new things and getting to the bottom of problems. Strong at written and verbal communication skills, able to communicate clearly, simply, and effectively. Outstanding at interpersonal relations and relationship management. Comfortable collaborating with colleagues across the organisation. Assuming that international travel ever becomes possible again, the applicant should expect they will need to travel internationally to work with colleagues for about 5-10 days a year. About the team Our colleagues are spread across Europe and North America. The software development team can be found in the US east-coast, the UK, Ireland and France.\nWe build and maintain services for the Crossref community. Our DOI registration, metadata pipeline, reference matching, search and querying play a part in the operations of 12,000 publishers, who have registered the metadata for over 100 million content items. Our systems have evolved over our 20 year history, and we\u0026rsquo;re continuing to proactively update them. New code and services are written in modern Java and Clojure and run in AWS, making use of Kafka and Elastic Search.\nIssue tracking and all new code is open source. We strongly believe in open scholarly infrastructure and openness at all stages of the software development lifecycle. 
As a membership organization we keep closely in touch with our users, and encourage our developers to be familiar with our community. The Development, Product and Infrastructure teams are tightly knit and we work in 2 week sprints.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 12,000 members across 120 countries and thousands of tools and services relying on our metadata.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2020-06-22-senior-software-developer/", "title": "Senior Software Developer", "subtitle":"", "rank": 1, "lastmod": "2020-07-10", "lastmod_ts": 1594339200, "section": "Jobs", "tags": [], "description": "Applications for this position are closed About the role You will play a key role in the software development team, being the technical lead for a number of services. You will contribute directly to the systems that form a critical part of thousands of scholarly publishers\u0026rsquo; workflows, and help us to better serve our newer smaller members in the global academic community.\nAs a technical lead you will collaborate with the Product and Infrastructure teams to specify, design and implement our new features and services.", "content": " Applications for this position are closed About the role You will play a key role in the software development team, being the technical lead for a number of services. You will contribute directly to the systems that form a critical part of thousands of scholarly publishers\u0026rsquo; workflows, and help us to better serve our newer smaller members in the global academic community.\nAs a technical lead you will collaborate with the Product and Infrastructure teams to specify, design and implement our new features and services. You will have a key voice in discussions about technical approaches and architecture. You will always keep an eye on software quality and ensure that the code you produce and review is maintainable, well tested and of high quality.\nKey responsibilities Understand Crossref’s mission and how we support it with our services. Pursue continuous improvement. Work flexibly in multi-functional project teams to scope, specify design and develop services and ensure that our systems are reliable, responsive, and efficient. Work closely with the Head of Software Development to solve problems, maintain and improve our services and execute technology changes. 
Provide code reviews and guidance to other developers regarding coding practices and help maintain and improve our development environment. Identify vulnerabilities and inefficiencies in our system architecture and development processes, particularly regarding cloud operations, metrics and testing. Communicate proactively with membership and technical support colleagues ensuring they have all the information and tools required to serve our users. Openly document and share development plans and workflow changes. Be an escalation point for technical support; investigate and respond to occasional but complex user issues; help minimize support demands related to our systems; be part of our on-call team responding to service outages. About you We don\u0026rsquo;t expect a successful candidate to tick all of these boxes right away!\nAn expert senior developer with experience in Java or Clojure, and have a proven track record of picking up new technologies. Experienced with continuous integration, testing and delivery frameworks, and cloud operations concepts and techniques. Some experience with Python. Experience working on open source projects. Familiar with AWS, containerization and infrastructure management using tools like Terraform. Able to quickly pick up, understand and improve legacy code. Experience with, or a working understanding of, XML and document-oriented systems such as Elasticsearch. Experience building tools for online scholarly communication. Self-directed, a good manager of your own time with the ability to focus. Curious and tenacious at learning new things and getting to the bottom of problems. Strong at written and verbal communication skills, able to communicate clearly, simply, and effectively. Outstanding at interpersonal relations and relationship management, and comfortable working with other developers, product management, outreach, membership, and technical support teams. Assuming that international travel ever becomes possible again, the applicant should expect they will need to travel internationally to work with colleagues for about 5-10 days a year. About the team Like the rest of Crossref, the software development team is distributed. We can be found in the US east-coast, the UK, Ireland and France. We build and maintain the services for the Crossref community. Our metadata pipeline, reference matching, search and querying play a part in the operations of 12,000 publishers, who have registered the metadata for over 100 million content items. Our systems have evolved over our 20 year history, and we\u0026rsquo;re continuing to proactively update them. New code and services are written in modern Java and Clojure and run in AWS ECS, making use of Kafka and Elasticsearch.\nIssue tracking and all new code is open source. We strongly believe in open scholarly infrastructure and openness at all stages of the software development lifecycle. As a membership organization we keep closely in touch with our users, and encourage our developers to be familiar with our community. The Development, Product and Infrastructure teams are tightly knit and we work in 2 week sprints.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that. 
Since January 2000 we have grown from strength to strength and now have over 12,000 members across 120 countries and thousands of tools and services relying on our metadata.\nCrossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, color, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities, in accordance with applicable law.\n", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2019-12-20-product-manager/", "title": "Product Manager", "subtitle":"", "rank": 1, "lastmod": "2019-12-20", "lastmod_ts": 1576800000, "section": "Jobs", "tags": [], "description": "Applications for this position are closed Crossref makes scholarly content easy to find, cite, link, assess and reuse by collecting and sharing metadata from our 11000+ member organisations worldwide. Our services and value now extend well beyond persistent identifiers and reference linking, and our connected open infrastructure benefits our membership as well as all those involved in scholarly research. This is a time of considerable change at Crossref and you can help shape our future.", "content": " Applications for this position are closed Crossref makes scholarly content easy to find, cite, link, assess and reuse by collecting and sharing metadata from our 11000+ member organisations worldwide. Our services and value now extend well beyond persistent identifiers and reference linking, and our connected open infrastructure benefits our membership as well as all those involved in scholarly research. 
This is a time of considerable change at Crossref and you can help shape our future.\nWe are a small team with a big impact, and we’re looking for a creative, technically-oriented Product Manager to join us in improving scholarly communications.\nKey responsibilities Manage all aspects of the product life cycle for one or more key services or products within the Crossref ecosystem Articulate and influence product strategy focusing on business objectives and user experience Integrate usability studies, user research, system analysis and community feedback into product requirements Define objectives, methods and metrics for product adoption to track success Co-ordinate and direct working groups made up of external stakeholders Promote product adoption externally as well as liaise with and maintain relations with key market stakeholders \u0026amp; strategic partners Evangelize products and value stories internally to rally people and resources behind ideas and ambitions critical to success About you You can think in terms of the big picture, but deliver on the details You have a nose for great products and advocate for new features using qualitative and quantitative reasoning You will, ideally, have an understanding and experience of complex workflow systems You do whatever it takes to make your product and team successful, whether that means writing a QA plan or hunting down the root cause of a user’s frustration You can turn incomplete, conflicting, or ambiguous inputs into solid action plans You communicate with empathy and exceptional precision You’re comfortable working with developers. You are technical enough to ask engineers good questions about architecture and product decisions alike You obsess about continuous product improvement You are self-driven with a collaborative and can-do attitude and enjoy working with a small team across multiple time zones You maintain order in a dynamic environment, independently managing multiple priorities You champion agile best practices You are adept at communicating technical systems to non-technical audiences through writing, small group settings, and conference talks 5+ years of product management experience with internet technologies and/or equivalent experience in the research publishing arena You can find more about our plans by viewing our latest annual report - a Fact File for 2019. Crossref Annual Report \u0026amp; Fact File 2018-19, https://0-doi-org.libus.csd.mu.edu/10.13003/y8ygwm5.\nThis position is full time and, as for all Crossref employees, location is flexible - you can work remotely or be based out of either of our Crossref offices (Lynnfield, MA and Oxford, UK), with a minimum 3-hour overlap with US Eastern time zone. We provide a competitive benefits package.\nTo apply, please send your cover letter, resume, and at least one sample of an effective product specification or epic/story development to Lindsay Russell at jobs@crossref.org.\n", "headings": ["Key responsibilities","About you"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2019-08-22-infrastructure-services-developer/", "title": "Infrastructure Services Software Developer", "subtitle":"", "rank": 1, "lastmod": "2019-08-21", "lastmod_ts": 1566345600, "section": "Jobs", "tags": [], "description": "Applications for this position are closed Come and work with us as an Infrastructure Services Software Developer. It’ll be fun! Location: Wherever the best candidate is. Salary: Competitive. Benchmarked every two years. Benefits: Competitive. 
Reports to: Head of Infrastructure Services. About the role Crossref is looking for a talented developer to help us optimize and evolve our infrastructure services.\nWe\u0026rsquo;re looking for a new member of our technology team who can bring experience and leadership, and help us solve some interesting operations and development challenges.", "content": " Applications for this position are closed Come and work with us as an Infrastructure Services Software Developer. It’ll be fun! Location: Wherever the best candidate is. Salary: Competitive. Benchmarked every two years. Benefits: Competitive. Reports to: Head of Infrastructure Services. About the role Crossref is looking for a talented developer to help us optimize and evolve our infrastructure services.\nWe\u0026rsquo;re looking for a new member of our technology team who can bring experience and leadership, and help us solve some interesting operations and development challenges. Crossref operates the service that connects thousands of publishers, millions of articles and research content, and serves a diverse set of communities within scholarly publishing, research and beyond.\nYou will report to the head of infrastructure services and will work in a group of two developers and one system administrator. You will also work extensively with the software development, R\u0026amp;D, and product teams.\nKey responsibilities The infrastructure services group is primarily responsible for Crossref\u0026rsquo;s infrastructure services. That is, central, crosscutting tools and systems that are used by our software development group as the common foundation we use for delivering services to our members and the broader research community. In other words, you will be building, deploying and managing tools and services used by other developers.\nYou will be responsible for ensuring that these infrastructure services are reliable and responsive as well as making sure they are able to evolve quickly to support the new requirements and new services that Crossref is developing on behalf of its membership.\nYour challenge will be to accomplish this, whilst simultaneously helping to drive the modernization of our current software stack, infrastructure, and software engineering culture. The entire technology team is undertaking a migration from a mostly self-hosted, manually-managed, and manually-tested environment, to a cloud-based system and the SRE tools, processes and culture which that entails.\nWe currently use a blend of AWS, Docker, Terraform, self-hosted VMWare, Elasticsearch, Kafka and more. Most of our codebases are written in Java, Clojure, and Python.\nThere are a lot of skills that we are looking for, but we don’t expect to find a purple unicorn. Our primary criterion is that you have a track record of being able to deliver projects using a variety of languages, frameworks and development paradigms.\nBut you get double bonus points if you have experience with:\nImmutable infrastructure. Virtualization and containerization of legacy code bases. Configuration management. Security infrastructure. Automation of development. Site monitoring and alerting. Web services software development. Transitioning an on-prem datacenter to the cloud. In-depth knowledge of one or more cloud providers. And it would be very useful if you had a subset of the following skills:\nContainerisation using ECS/Docker. Core AWS Infrastructure including EC2, VPC, S3, RDS, IAM, Route53 and Cloudfront. 
Infrastructure configuration, management and orchestration tools (such as Terraform, Kubernetes, CloudFormation, Ansible, Salt, or equivalents). Java. High proficiency in at least one other language (e.g. Python, Clojure). Extensive experience with SQL, particularly MySQL and Oracle. Elasticsearch, Solr, Lucene, or similar. Distributed logging and monitoring frameworks. Continuous Integration, continuous delivery frameworks. Modern, HTTP-based API design and implementation. Experience with open source development. Experience with agile development methodologies. Experience with XML- particularly with mixed content models. And please note that this is not a back-office position. We believe that it is vital that the entire technical team develops an understanding of our members, the broader community and their needs. Without this kind of empathy, we cannot add value to our services. As such, you will also find yourself working closely with the product and outreach teams.\nLocation \u0026amp; travel requirements Crossref has offices in the US (Lynnfield, Massachusetts) and the UK (Oxford). We also support remote work. The technology team currently has members working in the US (Lynnfield, MA, New York City, NY), UK (Oxford), Ireland (Dublin), and France (Dinan).\nRemote workers should expect they will need to visit an office approximately 5 days a quarter along with the travel (possibly international) which that entails. If you work from an office you will be expected to travel internationally for ~ 5 days once a year. In either case, travel can increase should you have an interest in representing Crossref at community events.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 12,000 members across 120 countries, and thousands of tools and services relying on our metadata.\nWe can offer the successful candidate a challenging and fun environment to work in. We’re fewer than 40 professionals but together we are dedicated to our global mission. We are constantly adapting to ensure we get there, and we don’t tend to take ourselves too seriously along the way.\nTo apply Send cover letter and a CV via email to:\nJoe Aparo, Head of Infrastructure Services\njobs@crossref.org\n", "headings": ["Come and work with us as an Infrastructure Services Software Developer. It’ll be fun!","About the role","Key responsibilities","Location \u0026amp; travel requirements","About Crossref","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2019-08-19-member-support-specialist-0819/", "title": "Member Support Specialist", "subtitle":"", "rank": 4, "lastmod": "2019-08-19", "lastmod_ts": 1566172800, "section": "Jobs", "tags": [], "description": "Applications for this position are closed Come and work with us as our Member Support Specialist. It’ll be fun! Location: Flexible - Crossref has members globally and offices in Oxford, UK and Lynnfield, MA, USA but being office-based is not necessary. Reports to: Amanda Bartell, Head of Member Experience Salary and benefits: Competitive Do you want to make scholarly communications better? 
Are you a customer support specialist or editorial assistant who’s keen to have more in-depth conversations with publishers across the globe?", "content": " Applications for this position are closed Come and work with us as our Member Support Specialist. It’ll be fun! Location: Flexible - Crossref has members globally and offices in Oxford, UK and Lynnfield, MA, USA but being office-based is not necessary. Reports to: Amanda Bartell, Head of Member Experience Salary and benefits: Competitive Do you want to make scholarly communications better? Are you a customer support specialist or editorial assistant who’s keen to have more in-depth conversations with publishers across the globe?\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nAbout the role This busy role is a real mix of involved member consultations and detailed systems and administrative work. You’ll need to have an understanding of the academic publishing community, great attention to detail, the ability to ask probing questions of applicants and a logical, systematic approach to work. You’ll be working with publishers to really understand their approach and workflows, and then recommending the best Crossref membership option for them. You’ll take them through our application process, setting them up carefully in our CRM and other systems paying extremely close attention to detail. Once they’re members, you’ll continue to work with them closely - answering their questions via email, Twitter and our community forum. You’ll help them take on new Crossref services, navigate platform migrations, and understand how to set up service providers to work with us on their behalf. It’s a very diverse role and is a great opportunity to get wide-ranging experience within Crossref and scholarly communications.\nThis is a pivotal role in the Member Experience team, part of Crossref’s Outreach team, a fourteen-strong team split between offices in Boston and Oxford, plus dispersed team members across the US and Europe. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re embarking on a new onboarding program for the thousands of publishers that join as members every year and currently rolling out an educational program for existing members and users. And we’re aiming for a more open approach to having conversations with people all around the world, in multiple languages.\nKey responsibilities Work with new applicants to understand their internal structures and help them understand the various membership options available to them. Own and drive the administrative process for new applicants - ensuring we have all the information we need to help them get started and setting them up correctly in our central systems. Support existing members in meeting the obligations necessary for taking on new services and get them set up for these services. Broker conversations between publishers, platforms and service providers to ensure the member is able to fulfill their aims while still meeting their membership obligations. Manage queries from applicants and members via email, Twitter, our community forum and other channels. 
Ensure that the information in our CRM is kept clean and up-to-date. Work closely with the billing team. About you We’re looking for a smart, savvy person who’s able to work with our global, diverse membership to really get to the bottom of their needs. You’ll need to adapt quickly within a changing environment while still maintaining accuracy. You’ll be a quick learner of new technologies and enjoy improving systems and processes, but you’ll also be able to build relationships with our members and serve their very diverse needs - from handholding those with basic queries to really digging into some knotty organizational relationships.\nAble to balance a very busy role while still paying close attention to detail and keeping member experience at the forefront of everything you do. Experience in helping customers and solving problems in creative and unique ways. Strong written and verbal communication skills with the ability to communicate clearly - able to use open questions to get to the bottom of things when members don’t seem to make sense. A truly global perspective - we have 10,000 member organizations from 118 countries across numerous time zones. Quick learner of new technologies and can rapidly pick up new programs and systems. Extremely organized and attentive to detail. Bachelor\u0026rsquo;s degree. Experience with Zendesk or similar support system ideal. Familiar with the publishing industry with knowledge of XML, metadata, scholarly research or information science a bonus. Other spoken languages will help. To apply If you’d like to join the Crossref team, and contribute to our mission, please send a cover letter and your CV to jobs@crossref.org. We can\u0026rsquo;t wait to read all about you!\n", "headings": ["Come and work with us as our Member Support Specialist. It’ll be fun!","About Crossref","About the role","Key responsibilities","About you","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2019-07-15-director-of-finance/", "title": "Director of Finance & Operations", "subtitle":"", "rank": 4, "lastmod": "2019-07-15", "lastmod_ts": 1563148800, "section": "Jobs", "tags": [], "description": "Applications for this position closed August 2019. Location: Flexible - Crossref has members globally and offices in Oxford, UK and Lynnfield, MA, USA but being office-based is not necessary. Reports to: Ed Pentz, Executive Director Salary and benefits: Competitive\nApplications for this position will close on 9 August 2019. In a nutshell Crossref faces an exciting future as we grow further internationally beyond our existing 120 countries and our staff becomes increasingly distributed.", "content": " Applications for this position closed August 2019. Location: Flexible - Crossref has members globally and offices in Oxford, UK and Lynnfield, MA, USA but being office-based is not necessary. Reports to: Ed Pentz, Executive Director Salary and benefits: Competitive\nApplications for this position will close on 9 August 2019. In a nutshell Crossref faces an exciting future as we grow further internationally beyond our existing 120 countries and our staff becomes increasingly distributed. We\u0026rsquo;re seeking a globally-minded and resourceful person with the skills, vision, and drive to help us achieve our mission. That means putting our community first and making it easier to work with us across the board - from implementing modern systems that work for all our members\u0026rsquo; many languages and currencies, to providing personable support. 
This is not just your average finance role; everyone at Crossref wears many hats, can articulate our value and purpose, and has a naturally collaborative and communicative nature. Come and show your financial prowess and operational flair at Crossref. It\u0026rsquo;ll be a challenge. But it\u0026rsquo;ll be fun!\nAbout the role Reporting to the Executive Director, the Finance and Operations Director is a key member of Crossref\u0026rsquo;s leadership team and has strategic and managerial responsibility for all aspects of Crossref\u0026rsquo;s finances, HR, legal, and governance. You will work closely with the Executive Director, the senior leadership team, the Finance and Operations Team and the board to instil a culture of transparency, collaboration, and dynamic leadership across the organization - and to ensure that all activities support Crossref\u0026rsquo;s mission and strategic priorities and foster continuous improvement and innovation while mitigating operational risks.\nAs part of the leadership team at Crossref you will contribute to our strategic planning and help define and develop our organization\u0026rsquo;s culture. This role is responsible for planning, preparing, monitoring, reporting, and analyzing all aspects of the organization\u0026rsquo;s finance functions, providing guidance and support with financial reporting, budgets, forecasts, investment portfolios, payroll, and benefits. The role also manages all commercial insurance policies (general liability, Directors and Officers, cybersecurity, etc.). You will provide key metrics, insights, and recommendations on all financial, HR, legal, and governance matters to the senior team, Executive Director, Treasurer, and board.\nKey responsibilities Financial planning and Leadership Provide leadership in the development and continuous evaluation of short and long-term strategic financial objectives; Provide timely and accurate analysis of budgets, financial reports and financial trends in order to assist the Executive Director, the board and other staff directors in performing their responsibilities; Evaluate and advise on the financial impact of long range planning, new partnerships/alliances, and the introduction of new programs/strategies; Establish and maintain strong relationships with staff directors to identify their needs; Provide strategic financial input and leadership on decision making issues affecting the organization; i.e., evaluation of potential alliances, new services, and investments; Work with external audit firm overseeing financial statements and internal control audits; Maintain positive and effective banking relationships, and oversee all treasury related functions; Manage the relationship with external counsel and oversee work on contracts, legal compliance, and governance matters. Organizational management and leadership Provide clear, accurate reporting and forecasting of Crossref’s performance in the context of its strategic direction and long term mission. 
Attend and present at board meetings and support the Executive Director in developing effective relationships with board members; provide support to the Treasurer; Provide financial analyses and reports for Executive Director, Treasurer, and board and adhere to all regulatory reporting requirements and deadlines; Manage a team of six staff including the Controller (Head of Finance), Head of Accounting, and HR Manager; Lead the finance/accounting team to ensure timely and accurate reporting, forecasting, budgeting, risk management, tax returns, government forms, and financial audits; Work with our insurance company to ensure we are compliant; Oversee and provide data for the efficient operation of financial systems and internal financial controls. You will ensure compliance with all relevant regulations such as GAAP, 501c (6) organizations, IRS, and local and state reporting requirements; Work with the Directors to translate strategy and communicate financial information and develop long term growth plans and financial projections; Serve as staff representative on the board’s audit and any ad hoc finance committees, contributing and providing reports as needed; Provide leadership of the HR functions, managing benefits and compensation and all regulatory compliance, oversee the payroll process, administration of the 401K plan and overseas plans; Serve as Secretary of the Corporation (as defined in the Bylaws); Oversee the annual board of directors election. About you 15+ years in progressively responsible financial or operational leadership roles with proven leadership in nonprofit finance, administration and operations; A strategic thinker with a rich understanding of how finances affect the needs and goals of a mission-driven nonprofit organization and how to manage costs while driving revenue growth; A strong leader with the ability to coach, mentor and develop staff into a well-functioning team, assessing strengths and weaknesses that will help you lead and guide the team; Experience working with an international organization and dealing with financial issues across multiple countries; A strong team player with a commitment to creating a positive and engaging work environment; Demonstrated ability to balance financial goals against organizational mission; Knowledge of nonprofit financial environment, policies and procedures; Knowledge of US GAAP and accounting theories and practices; Knowledge of databases and accounting systems with strong general ledger, AP, AR, payroll, income tax and working knowledge of banking and investment; A self-directed leader who is a good manager of your own time with the ability to focus despite competing demands on your time; Strong written and verbal communication skills, able to communicate clearly, simply, and effectively; Outstanding interpersonal relations and relationship management, and comfortable working with other teams; Travel required to board meetings and Director face-to-face meetings. Process \u0026amp; timeline Prospective candidates interested in applying should contact our search partners Perrett Laver. 
For an informal discussion about the role, contact Daniel Flynn to hear more or address any questions you may have by August 9th, 2019.\nCompleted applications comprised of CV and cover letter can be uploaded at http://www.perrettlaver.com/candidates quoting reference number 4216 or sent to Daniel Flynn directly at Daniel.Flynn@perrettlaver.com. The deadline for applications will be Friday, August 9.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 12,000 members across 120 countries, and thousands of tools and services relying on our metadata.\nWe can offer the successful candidate a challenging and fun environment to work in. We’re fewer than 40 professionals but together we are dedicated to our global mission. We are constantly adapting to ensure we get there, and we don’t tend to take ourselves too seriously along the way.\n", "headings": ["In a nutshell","About the role","Key responsibilities","Financial planning and Leadership","Organizational management and leadership","About you","Process \u0026amp; timeline","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/datacite/", "title": "DataCite", "subtitle":"", "rank": 4, "lastmod": "2019-07-05", "lastmod_ts": 1562284800, "section": "Get involved", "tags": [], "description": "The basics ‘One size fits all’ never quite works, does it? This is why there are different DOI Registration Agencies to serve the needs of different interest groups. Crossref and DataCite constitute two of these Registration Agencies, but we overlap more than most in terms of our missions and our communities.\nWe understand why it might be confusing trying to decide who to join, or whether to join both. We want to help, so that you can get the services that are the best fit for your organization and the type of content you want to register.", "content": "The basics ‘One size fits all’ never quite works, does it? This is why there are different DOI Registration Agencies to serve the needs of different interest groups. Crossref and DataCite constitute two of these Registration Agencies, but we overlap more than most in terms of our missions and our communities.\nWe understand why it might be confusing trying to decide who to join, or whether to join both. We want to help, so that you can get the services that are the best fit for your organization and the type of content you want to register.\nCrossref makes research outputs easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that. DataCite’s mission is to be the world\u0026rsquo;s leading provider of persistent identifiers for research. Through our portfolio of services, we provide the means to create, find, cite, connect, and use research. We seek to create value and develop community-driven, innovative, open, integrated, useable, and sustainable services for research. 
If you’ve been following the work we’ve been doing, you’ll know that we’ve been making joint announcements for some time. We also collaborate on numerous initiatives that aim to provide foundational infrastructure for research outputs.\nWho to join Here are some things to think about, to help you decide which organization is the right one for you.\nWhat type of organization are you? Thinking about how you classify your organization can be helpful; what type of organization/department/initiative/project are you? Do you see your organization as one that sits within a certain community?\nCrossref members are organizations who publish content, or register research grants. These include publishers, research institutions, university presses, societies and funders. In order to become a member, register content and deposit metadata and DOIs, you’ll need to meet the criteria set out in our governing by-laws. Membership in Crossref is open to organizations that produce professional and scholarly materials and content. In addition, applicants should be able to meet the terms and conditions of membership.\nWith DataCite, membership is open to all organisations that share their mission. DataCite’s members work with data centers, stewards, libraries, archives, universities, publishers and research institutes that host repositories and who have responsibility for managing, holding, curating, and archiving data and other research outputs. Members agree with the DataCite statutes. You can see current DataCite members here.\nSpecific services It’s not just about ‘getting a DOI’. Crossref and DataCite have expertise and services that support and enhance the specific needs of our communities and how they work with their content.\nCrossref provides services like:\nReference Linking: Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things. Cited-by: Cited-by shows how work has been received by the wider community; displaying the number of times it has been cited, and linking to the citing content. Similarity Check: A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism. DataCite provides services like:\nDOI Fabrica: DOI management platform, perfect for manual DOI creation or for human curation of automatically-created DOIs. Register your first DOI in less than a minute. Link checker: Automatically checks your DOIs to make sure they are still resolving correctly. Data metrics badge: Embed citation metrics for any DataCite DOI on your own website. Importantly, both organizations make the metadata you register available via APIs. This metadata is used by thousands of different tools and services. So if you’re registering content with an organization that isn’t the best fit for your content, then you might wonder why it isn’t appearing in specific databases e.g. Google Dataset Search for data, Dimensions for articles. This is why we have specific metadata schemas for different record types, to fit the communities we work with, so that an organization can work with us to get access to all of the data they’re interested in, in a standard format and in one place.\nYour role What role do you have in relation to the content being registered?\nDo you follow a publishing workflow? I.e. are there editorial processes involved in selecting and stewarding the content e.g. issuing updates like corrections. 
We know there are lots of definitions of a publisher so it’s hard to be exact, but this might help you think about how you work. If this sounds like you then you should explore joining Crossref. Or are you depositing content? This could be different types of research outputs using different kinds of platforms, e.g. researchers posting content to an institutional repository. In that case, the DataCite community is a good fit for you. Joining both We’re also working with an increasing number of organizations who have record types that are best served by being members of both organizations, such as a university that has both a publishing program and an institutional repository.\nThe Center for Open Science provides a good example of a member who works with both of our organizations to meet the needs of their community:\n“We hear from lots of users about how important Digital Object Identifiers (DOIs) are to their work. DOIs ensure persistent links to content and enhance discoverability of one’s research. At the behest of our users, we began issuing DOIs in 2015, first to public registrations, then to public projects in September 2016, and recently to preprints in July 2017. In all, over 22,000 DOIs were registered for content on the OSF. The DOIs issued on the OSF have historically been registered with DataCite, through the California Digital Library’s EZID. Earlier this year, we learned that EZID’s services are evolving, and COS was faced with the choice of a new registration agency for DOIs. This has given us the opportunity to explore how best to support our users and the diverse research outputs they share via OSF. Ultimately, COS decided to pursue registering DOIs with two separate agencies to provide users with services tailored to their needs: registering DOIs for preprints with Crossref and DOIs for projects and registrations with DataCite.”\nAvoid assigning multiple DOIs to one research object For identifiers and metadata to work to their full potential, both Crossref and DataCite require that only one identifier be assigned to a research object. Multiple persistent identifiers for an object reduce the chance of it being identified and discovered, and can split usage and citation information across multiple instances of a work.\nIf a persistent identifier for an object already exists, you should continue to use that identifier for the content and not reassign another identifier for that object under a different prefix.\nWe’re also happy to discuss specific needs directly. Contact the Crossref membership team or DataCite support with information on what you’re trying to achieve and we’ll help you!\n", "headings": ["The basics","Who to join","What type of organization are you?","Specific services","Your role","Joining both","Avoid assigning multiple DOIs to one research object"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2019-05-23-director-of-product/", "title": "Director of Product", "subtitle":"", "rank": 4, "lastmod": "2019-05-23", "lastmod_ts": 1558569600, "section": "Jobs", "tags": [], "description": "Applications for this position closed on 10th June 2019. Come and work with us as our Director of Product. It’ll be fun! 
Location: Flexible - Crossref has offices in Oxford, UK and Lynnfield, MA, USA but being office-based is not a requirement.\nReports to: Ed Pentz, Executive Director\nSalary and benefits: Competitive\nAbout the role The Director of Product is a key member of the senior leadership team at Crossref and responsible for the strategy, delivery, adoption, and success of the Crossref service.", "content": " Applications for this position closed on 10th June 2019. Come and work with us as our Director of Product. It’ll be fun! Location: Flexible - Crossref has offices in Oxford, UK and Lynnfield, MA, USA but being office-based is not a requirement.\nReports to: Ed Pentz, Executive Director\nSalary and benefits: Competitive\nAbout the role The Director of Product is a key member of the senior leadership team at Crossref and responsible for the strategy, delivery, adoption, and success of the Crossref service. The role centers around a deep understanding of community needs balanced with technical credibility and credentials.\nKey responsibilities The director manages and leads the product team (3 Product Managers, 1 UX Designer), planning and managing the development of new services and the enhancement of existing ones. The role will also work closely with:\nThe Technology \u0026amp; Research team to agree on architecture and development approaches, and technical resource allocation. The Outreach team including technical support, to research and gather insights and ensure projects stay community-led; openly communicating through planning and running projects, working groups, and collaborations. Product strategy Create and maintain the product strategy, which supports and advances the organization\u0026rsquo;s strategic agenda. Work with the Technology \u0026amp; Research team to ensure that all software development work is product-focused. Build and maintain the product roadmap, ensuring all areas of the organization are up-to-date on plans and timelines. Be responsible for the product team\u0026rsquo;s work in scoping and planning new feature development, running pilots and beta test phases, and facilitating internal project teams and external Working Groups to deliver on time. Develop introduction plans to roll out and embed new features with Outreach, Support, Billing, and Technology maintenance. Maintain a balance between developing new things and maintaining existing features and services. Community focus Research community needs and engage people through Advisory Groups for each key area of our service as well as Working Groups for new initiatives. Work closely with the Outreach Director to set development priorities based on user needs. Enable members and users to comment on and contribute to our roadmap so that they can plan for their own adoption of new features. Be an evangelist for Crossref as open scholarly infrastructure and its services at conferences and events, engage on social media and the community forum, and blog about our services and plans. Leadership and management Manage and develop three Product Managers and one UX/UI Designer. Reinforce and promote the value and role of product management within Crossref. Ensure that the product team culture reflects and exemplifies the Crossref culture (dedicated, open, transparent). Contribute to developing Crossref\u0026rsquo;s overall strategy as part of the senior leadership team of staff directors. Report to the board regularly. Write papers for consideration by the leadership team, board committees, and the board. 
About you We are looking for an experienced product person and strategic thinker to join the senior leadership team and contribute to the ongoing success of the organization. Our transparency principle, What you see, what you get, means that using open methodologies is a must. We are also looking for someone with:\nUnderstanding of current trends in scholarly communications. Strong technical understanding, ability to establish credibility with software developers. A background in product management for mission-driven organizations and/or open-source tech. Experience planning and launching new features and services with buy-in from a community of users. Experience working globally across time zones and within multicultural groups both internally and externally. Experience working remotely and managing remote staff. Leadership and management experience. Experience overseeing and improving internal processes and systems. Ability to be a hands-on technical product manager when needed. Experience developing wireframes and product specifications. Awareness of new approaches to product management and development methodologies. Creativity, being resourceful as needed for a small non-corporate organization. Collaboration skills and the confidence to accomplish goals while engaging the community. Excellent communication with strong writing and public speaking skills. Experience managing budgets, external consultants, and oversight of project management. Process \u0026amp; timeline Please email a cover letter and CV to jobs@crossref.org by June 10th, 2019.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 12,000 members across 120 countries, and thousands of tools and services relying on our metadata.\nWe can offer the successful candidate a challenging and fun environment to work in. We’re fewer than 40 professionals but together we are dedicated to our global mission. We are constantly adapting to ensure we get there, and we don’t tend to take ourselves too seriously along the way.\n", "headings": ["Come and work with us as our Director of Product. It’ll be fun!","About the role","Key responsibilities","Product strategy","Community focus","Leadership and management","About you","Process \u0026amp; timeline","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2019-03-01-member-support-specialist/", "title": "Member Support Specialist", "subtitle":"", "rank": 4, "lastmod": "2019-03-01", "lastmod_ts": 1551398400, "section": "Jobs", "tags": [], "description": "Applications for this position are closed as of 2019-04-15 Come and work with us as our Member Support Specialist. It’ll be fun! Are you a customer support specialist looking to expand your technical skills? Maybe you’re an editorial assistant keen to improve scholarly communications?\nLocation: Flexible. 
Either office-based (Lynnfield, MA or Oxford, UK) or remote/home-based in North America or Europe Reports to: Amanda Bartell, Head of Member Experience Salary and benefits: Competitive About the role This role is focused on Similarity Check, one of our key services which helps our members promote editorial integrity.", "content": " Applications for this position are closed as of 2019-04-15 Come and work with us as our Member Support Specialist. It’ll be fun! Are you a customer support specialist looking to expand your technical skills? Maybe you’re an editorial assistant keen to improve scholarly communications?\nLocation: Flexible. Either office-based (Lynnfield, MA or Oxford, UK) or remote/home-based in North America or Europe Reports to: Amanda Bartell, Head of Member Experience Salary and benefits: Competitive About the role This role is focused on Similarity Check, one of our key services which helps our members promote editorial integrity. It’s a very diverse role and is a great opportunity to get wide-ranging experience within Crossref and scholarly communications. You’ll be working closely with three other technical support staff, our membership coordinator, our billing team, and the product manager for this service - plus publishers across the globe. You’ll be helping members understand their technical obligations for participating in Similarity Check, navigate the application process, and learn how to use the service. You’ll make sure that our support materials meet the needs of the members, and be the “voice of the user” in conversations with the developers and partners who run the service.\nThis is an important new role in the Member Experience team, part of Crossref’s Outreach team, a fourteen-strong team split between offices in Boston and Oxford, plus dispersed team members across the US and Europe. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re embarking on a new onboarding program for the thousands of publishers that join as members every year and currently rolling out an educational program for existing members and users. And we’re aiming for a more open approach to having conversations with people all around the world, in multiple languages.\nKey responsibilities Support Similarity Check users and applicants via email, Twitter and other channels. Own and drive the administrative process for members taking on the service. Support members in meeting the technical obligations for participating in the service. Manage the onboarding process, working with our Education Manager and the Product Manager to ensure that documentation and other support materials meet member needs. Providing support for members using the Similarity Check service through iThenticate. Provide support for queries about Similarity Check billing. Analyze, communicate and triage support issues to the product manager and technology providers to inform future development. Work closely with the technology providers’ support team to proactively communicate with members about technical issues, outages, and planned maintenance etc. Provide wider technical and membership support for other services as required. About you We\u0026rsquo;re looking for a smart, savvy person who can wear many hats, has a truly global outlook, a collaborative style, and an inner drive to make things better. You’ll be a quick learner of new technologies, and ideally have an understanding of editorial processes. 
You’ll have a unique blend of analytical troubleshooting skills, customer service skills, and a passion to help others. You’ll be able to build relationships with our members and serve their very diverse needs - from hand holding those with basic queries to really digging into some knotty technical queries. You are:\nIdeally familiar with manuscript submission and review process. Able to balance a very diverse role, wearing a lot of different hats and providing a wide range of support. Experience helping customers and solving problems in creative and unique ways. Able to communicate technical issues to a non-technical audience and use open questions to get to the bottom of things when members don’t seem to make sense. Strong written and verbal communication skills with the ability to communicate clearly. A truly global perspective - we have 10,000 member organizations from 118 countries across numerous time zones. Quick learner of new technologies and can rapidly pick up new programs and systems. Extremely organized and attentive to detail. Bachelor\u0026rsquo;s degree. Experience with Zendesk and Jira or similar support and issue management software will be helpful. Knowledge of XML, metadata, scholarly research or information science a bonus. Other languages a bonus, particularly Russian. About Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 11,000 members across 118 countries, and thousands of tools and services relying on our metadata.\nTo apply If you’d like to join the Crossref team, and contribute to our mission, please send a cover letter and your CV to jobs@crossref.org. We can\u0026rsquo;t wait to read all about you!\n", "headings": ["Come and work with us as our Member Support Specialist. It’ll be fun!","About the role","Key responsibilities","About you","About Crossref","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/user-stories/", "title": "Who relies on Crossref metadata?", "subtitle":"", "rank": 5, "lastmod": "2019-02-24", "lastmod_ts": 1550966400, "section": "Find a service", "tags": ["Metadata", "APIs", "Metadata retrieval", "REST API", "Research Nexus"], "description": "Over 100 million unique scholarly works are distributed into systems across the research enterprise 24/7 via our metadata APIs, at a rate of around 607 million queries a month. This is a collection of brief user stories from some of the people who rely on Crossref metadata.\nWe use Crossref metadata to… Add funding info from publications to researchers’ portfolios, and report the publications as arising from the grant; to validate the data provided by universities; and we use the license and embargo period information to help understand the open access status of publications.", "content": "Over 100 million unique scholarly works are distributed into systems across the research enterprise 24/7 via our metadata APIs, at a rate of around 607 million queries a month. 
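For a concrete sense of what querying the metadata APIs looks like, here is a minimal sketch (assuming Python with the requests library; not an official Crossref client) that fetches the record for a single DOI from the public REST API, which requires no sign-up. The DOI shown is the annual-report DOI cited elsewhere in this document.

```python
# Minimal sketch: look up one work in the public Crossref REST API.
# Assumes the requests library is installed; the DOI is illustrative.
import requests

API_BASE = "https://api.crossref.org"

def fetch_work(doi: str) -> dict:
    """Return the metadata record ("message") for a single registered work."""
    response = requests.get(f"{API_BASE}/works/{doi}", timeout=10)
    response.raise_for_status()
    return response.json()["message"]

if __name__ == "__main__":
    record = fetch_work("10.13003/y8ygwm5")
    print(record.get("title"), record.get("publisher"))
```

The same record is what downstream tools and services consume at scale; this sketch simply shows the shape of a single request.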
This is a collection of brief user stories from some of the people who rely on Crossref metadata.\nWe use Crossref metadata to… Add funding info from publications to researchers’ portfolios, and report the publications as arising from the grant; to validate the data provided by universities; and we use the license and embargo period information to help understand the open access status of publications.\n\u0026ndash; Gavin Reddick, Researchfish\nLink references on our journal platforms, pull citations statistics for our Article-Level Metrics and ensure we are publishing unique science. Crossref metadata is vital to our everyday operations and the discovery of the research we publish.\n\u0026ndash; Polina Grinbaum, PLOS\nEnhance and correct the metadata delivered to us, just with a correct DOI.\n\u0026ndash; Ulf Kronman, National Library of Sweden\nVerify and correct references, retrieve Funder Registry IDs, and include Cited-by links in published content, and so much more. As a small publisher, discoverability is of utmost importance, and Crossref is a discoverability hub. The inclusion of metadata in Crossref strengthens the content we publish.\n\u0026ndash; Rob O’Donnell, Rockefeller University Press\nKnow about our possible ‘universe’ of articles.\n\u0026ndash; Christian Herzog, Daniel Hook, Simon Porter, Dimensions, Digital Science\nConnect authors with research articles similar to their own, and help them decide where to submit their manuscripts for the best chance of success.\n\u0026ndash; Damian Pattinson, ResearchSquare\nIndex preprints alongside traditional journal publications and we plan to: provide another means to access and discover preprints; help explore the role of preprints in the publishing ecosystem; support their inclusion in processes such as grant reporting and credit attribution systems.\n\u0026ndash; Michael Parkin, Europe PMC\nLook across the world at research outputs and understand how institutions and communities are making them more accessible. We\u0026rsquo;re using Crossref metadata as the central reference point to handle objects from different sources and to have a consistent set of metadata on things like publication date that covers the full set.\n\u0026ndash; Cameron Neylon, Centre for Culture and Technology, Curtin University\nAllow researchers to immediately interact with the readers of their works, if the publishers provide ORCID iDs of authors in their Crossref metadata.\n\u0026ndash; Alexander Naydenov, PaperHive\nCheck for the availability of articles associated with our data packages, and to verify some of our metadata. Being able to do this programmatically has revolutionized our data publication workflow.\n\u0026ndash; Elizabeth Hull, Dryad\nRetrieve Cited-by counts for a DOI so we can include them as part of the ‘basket of metrics’ we provide to our researchers. They can then understand the performance of their publications in context, and see the correlation between actions and results.\n\u0026ndash; David Sommer, Kudos\nEnable authors to search and add references to their papers.\n\u0026ndash; Alberto Pepe, Authorea\nEnrich the metadata of our hosted preprints and link them to the manuscript or version of record. 
We also perform various analyses regarding the completeness and discoverability of our records.\n\u0026ndash; Mark Hahnel, Figshare\nStreamline research communication in a global, interconnected and evolving scholarly domain in order to support researchers even better and facilitate research in general.\n\u0026ndash; Henning Schoenenberger, Springer Nature\nIdentify articles unambiguously as we preserve scholarly literature in our system. Also, in the rare instances when we ‘trigger’ journals for open access, we want the reference-linking functionality to work, and we work with Crossref to point the URLs to our site for resolution.\n\u0026ndash; Craig van Dyck, CLOCKSS\nHelp Unpaywall identify and disambiguate over 20 million Open Access articles, and it is absolutely essential to our work. We 😍❤️🤗 Crossref!\n\u0026ndash; Jason Priem, ImpactStory\nYou don\u0026rsquo;t have to sign up to anything in order to use our REST API. That means we don\u0026rsquo;t necessarily know who is using it, although we see millions of hits every day. If you are using it in your projects and would like to share, please let us know and we\u0026rsquo;ll feature you on this page.\n", "headings": ["We use Crossref metadata to…"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/2018-agreement/", "title": "2018 membership agreement", "subtitle":"", "rank": 1, "lastmod": "2018-11-11", "lastmod_ts": 1541894400, "section": "Become a member", "tags": [], "description": "This page shows the agreement that was used up to and including 2018. But it has been superceded by new membership terms approved by the Board in July 2018. Former membership agreement Updated September 2017\nThis membership agreement, version 5.5, and any duly executed addenda and any other attachments hereto (\u0026ldquo;Agreement\u0026rdquo;) sets forth the terms and conditions under which a qualified institution becomes a member of The Publishers International Linking Association, Inc.", "content": " This page shows the agreement that was used up to and including 2018. But it has been superceded by new membership terms approved by the Board in July 2018. Former membership agreement Updated September 2017\nThis membership agreement, version 5.5, and any duly executed addenda and any other attachments hereto (\u0026ldquo;Agreement\u0026rdquo;) sets forth the terms and conditions under which a qualified institution becomes a member of The Publishers International Linking Association, Inc. (“PILA”), a nonprofit corporation organized under the laws of New York, and doing business as Crossref, subject to the approval of PILA. Membership in PILA is open to publishers of scholarly and professional content who have rights to transfer, manage and otherwise fulfill the obligations of this Agreement with respect to the content’s “Metadata” and, to the extent necessary, the content itself. Additional criteria for qualifying institutions, incorporated by reference, are available at https://0-www-Crossref-org.libus.csd.mu.edu or successor sites (“PILA Site”). The Agreement is by and between PILA and the party below (the “PILA Member”) and shall be deemed effective upon execution by both parties (the \u0026ldquo;Effective Date\u0026rdquo;).\nIntroduction. 
Under the mark Crossref®, PILA manages and maintains a database of regularly updated information (collectively, \u0026ldquo;Metadata\u0026rdquo;) that describes and identifies substantially non-derivative publishable works (“Original Works”), as well as of digital identifiers (“Digital Identifiers”) that point to the location of certain Original Works on the Internet. As described below, PILA also facilitates the deposit and retrieval of Metadata and Digital Identifiers to enable and promote persistent and reliable linking among and discovery of Original Works on the Internet through their embedded reference citations, as well as other online information management services. The “PILA System” (occasionally, the “Crossref System”) refers to all of the foregoing, including associated software and know-how.\nPILA Membership. By accepting all of the terms of this Agreement and paying the requisite fees a qualified institution becomes a member of PILA entitled to all of the benefits and subject to all of the responsibilities of being a member of PILA, as governed by the bylaws of PILA (\u0026ldquo;Bylaws\u0026rdquo;).\na). Benefits. Provided that the PILA Member is in full compliance with the terms of the Agreement, it may use the PILA System under the terms and conditions of this Agreement, participate in the governance of PILA by voting for members of the board of directors of PILA (the \u0026ldquo;Board\u0026rdquo;) and on various issues, and recommend one or more representatives (if desired) to certain of the PILA working committees through which policy recommendations are made (the Board shall retain the authority to appoint and remove committee members in accordance with the Bylaws).\nb). Obligations. The PILA Member must promptly pay all membership dues and any charges or fees as established by the Board from time to time and set forth on the PILA Site. The PILA Member must nominate a business, technical and billing contact for purposes of PILA administration, and keep such contact information up to date.\nc). Terms and Conditions. At all times, the PILA Member may exercise any authority over the Board, individually or collectively with other members of PILA, expressly granted by the Bylaws, as amended from time to time. The Board shall have the power to modify the terms of this Agreement by publishing amended versions that will automatically supersede prior versions, and shall further establish or amend supplemental policies and procedures governing membership from time to time. The PILA Member agrees to periodically review the membership terms and conditions at a designated location on the PILA Site for revisions and modifications. PILA will use its reasonable discretion in deciding if a modification is material, and if so will provide written notice to the PILA Member’s representative (designated above) of material changes in terms and conditions of membership by email or postal service. Continued acceptance of all terms and conditions pertaining to membership is a condition of remaining a member of PILA.\nPILA Operations. 
Subject to the limitations and restrictions set forth herein, and through the use of Digital Identifiers, the PILA Member agrees to permit other members of PILA or other qualified users of the PILA System to at all times link their Original Works and/or other qualified content to the Original Works or other qualified content of the PILA Member; to actively maximize and maintain Digital-Identifier enabled links from within its own Original Works to those of other members of PILA or other qualified users of the PILA System; and to otherwise cooperate with the implementation or operation of other PILA online information management services. a). Initiation of Cross-Linking. The process of linking among Original Works in electronic form is known as \u0026ldquo;Cross-Linking\u0026rdquo;. As part of being a member of PILA, the PILA Member is required to do the following:\ni) Depositing Data. As soon as reasonably practicable after electronic publication of each Original Work, the PILA Member shall deposit into the PILA System the Metadata corresponding to said Original Work (\u0026ldquo;Deposited Metadata\u0026rdquo;). From time to time, PILA shall specify certain fields, parameters and other criteria that Deposited Metadata must contain. For example (and not by way of limitation), each single set of Deposited Metadata includes various reference citations and fields designated by PILA (e.g., title, author, etc.) that describes and identifies the corresponding Original Work. In addition, the PILA Member shall ensure that its Deposited Metadata conforms to the PILA technical documentation standards, as amended by PILA from time to time. For example (and not by way of limitation), the PILA Member is responsible for maintaining the accuracy of Deposited Metadata.\nii) Digital Identifiers. The PILA Member shall assign or re-assign (as the case may be) a Digital Identifier (as provided by PILA technical specifications, as may be modified from time to time) to each of its Original Works, and shall provide the same to PILA as a Crossref Member, and PILA shall register the same within the PILA System and elsewhere consistent with its business practice.\niii) Retrieving Data. As soon as practicable, the PILA Member shall use the PILA System to retrieve the Digital Identifier(s) corresponding to each reference citation within said Original Work for which a Digital Identifier is available, and embed the same as set forth immediately below.\niv) Cross-Linking. The PILA Member may maintain reference links that are not based on Digital Identifiers. However, other than for citations to Original Works, all of which said cited-Original Works are collectively hosted on a common hosting system or platform controlled by the PILA Member or its agent (\u0026ldquo;Internal Citations\u0026rdquo;), the Crossref Member shall use Digital Identifiers (if a Digital Identifier has been registered for the cited item) for reference linking in the same manner(s) it may provide, offer or support all other (i.e., non-Digital-Identifier-based) reference linking. The PILA Member may not divert, interrupt or otherwise interfere or delay the resolution of said Digital Identifier-enabled reference citation links to the “Response Page” (defined below), and shall display the same to end-users (i.e., readers) in a manner that is no less prominent or immediate than other reference links (if any). 
For avoidance of doubt, the PILA Member is encouraged but not required to use Digital Identifiers for Internal Citations.\nb) Accessibility of Content. The PILA Member must maintain each Digital Identifier assigned to it or for which it is otherwise responsible such that said Digital Identifier continuously resolves to a response page (\u0026ldquo;Response Page\u0026rdquo;) containing no less than complete bibliographic information about the corresponding Original Work (including without limitation the Digital Identifier), visible on the initial page, with reasonably sufficient information detailing how the Original Work can be acquired and/or a hyperlink leading to the Original Works itself (collectively, “Accessibility Standards”). The PILA Member shall use the Digital Identifier as the permanent URL link to the Response Page. The PILA Member shall register the URL for the Response Page with Crossref, shall keep it up-to-date and active, and shall promptly correct any errors or variances noted by Crossref. The members of PILA may support enhanced levels of accessibility to Original Work in their sole discretion. For the avoidance of doubt, the Board may modify the Accessibility Standards from time to time.\nc) No Fees. The members of PILA may not charge fees for Cross-Linking. Subject to the foregoing sentence, each member of PILA shall control access to its systems and shall have discretion to establish pricing and other terms of access to its Original Work (and other content) beyond the Response Page.\nd) Archives. The PILA Member will use commercially reasonable efforts to establish and maintain arrangements whereby Original Works will be preserved and made available through an authorized archive (\u0026ldquo;Authorized Archive\u0026rdquo;) in the event that the PILA Member or a successor ceases to host such Original Works. In the event that an agreement is entered into between the PILA Member and the Authorized Archive (an “Archive Agreement”) and a “trigger event” as defined in such Archive Agreement occurs, the PILA Member authorizes PILA to enter into an appropriate agreement with such Authorized Archive or other subsequent authorized host of the content to ensure the persistence of links to the Original Work.\nGeneral License. Subject to the terms and conditions of this Agreement, the PILA Member hereby grants to PILA and its agents a fully-paid, non-exclusive, worldwide license for any and all rights necessary to use, reproduce, transmit, distribute, display and sublicense the Deposited Metadata and Digital Identifiers in the discretion of PILA in connection with the PILA System, including without limitation all aspects of Cross-Linking and online information management services.\nMetadata Rights and Limitations. The PILA Member shall not acquire or retain, and may not provide or transfer, any rights (including all related copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed) in any Metadata belonging to another member of PILA. Except as set forth herein and specifically without limitation to section 4 (General License) above, PILA shall not use, or acquire or retain any rights (including all related copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed) in the Deposited Metadata of the PILA Member.\nPILA’s Intellectual Property. 
The PILA Member acknowledges that nothing shall enlarge or restrict the rights of PILA or its agents to acquire, develop and maintain any Metadata and any collective rights therein. The PILA Member acknowledges that, as between itself and PILA, PILA has all right, title and interest in and to the PILA System, including all related copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed, with the exception of rights in the Deposited Metadata as set forth in section 5 (Metadata Rights and Limitations), or as expressly provided elsewhere in writing. The PILA Member shall accurately maintain and not delete or modify any of PILA’s copyright notices on documents, electronic text or programs that PILA may prepare or enable for the use or display by members of PILA.\nPermissive Use of the PILA System by PILA Members. Subject to the payment of corresponding fees if any, the PILA Member may (i) confirm Metadata about the identity, description and location of Original Works of other members of PILA (\u0026ldquo;Clean-Up\u0026rdquo;), (ii) submit Digital Identifiers to retrieve the corresponding Metadata (“Reverse Look-Up”) and (iii) retrieve and display Digital Identifiers and corresponding Metadata for Original Works of other members of PILA to enable “cited-by” links in published content (“Cited By Linking”) where both PILA Members participate in Cited By Linking.. Notwithstanding the general limitations in section 5 (Metadata Rights and Limitations), as part of its use of Clean-Up, Reverse Look-Up and Cited By Linking and other PILA services, the PILA Member may from time to time transfer, copy or display Metadata of other members of PILA; provided however that the PILA Member may not use the Metadata of other members of PILA to create a system that directly competes with the PILA System. For the avoidance of doubt, (i) PILA reserves the right to provide and modify guidelines governing Clean-Up, Reverse Look-Up, Cited By Linking and other PILA services from time to time.; and (ii) nothing herein shall be deemed to limit the rights that the PILA Member may have, if any, to use the Metadata of other PILA members as a member of the general public.\nCaching and Transfer. Providing that the PILA Member is not in violation of the Agreement, subject to certain restrictions that PILA shall provide and amend from time to time, and accordance with PILA technical guidelines, a member of PILA may cache Digital Identifiers obtained through the PILA System. However, other than incidentally to the copying or transfer of Original Works containing embedded Digital Identifiers enabling the reference citation links, no member of PILA may provide, copy or transfer for value any Digital Identifier (cached or otherwise).\nSharing of Metadata by PILA.\na) Local Hosting. Subject to the payment of local hosting fees and costs, and compliance with other PILA local hosting terms and conditions as set forth in a separate agreement between PILA and the local hosting entity, PILA may authorize the PILA Members and affiliate members of PILA (\u0026ldquo;PILA Affiliates\u0026rdquo;) to locally host Metadata and Digital Identifiers from the PILA System, which PILA shall provide directly, solely to facilitate use of DOIs for linking to Original Works, subject to all other restrictions on the use of Metadata and Digital Identifiers. 
PILA reserves the right, upon reasonable notice, to audit the local hosting activity to ensure the proper functioning of the local-host system, and compliance with all applicable Crossref guidelines and agreements.\nb) Other Metadata Services. Subject to compliance by the entity receiving the Metadata and Digital Identifiers with terms and conditions established by PILA for the particular service through which access is provided, PILA may authorize third parties to receive and use Metadata and Digital Identifiers from PILA, which PILA shall provide directly to such third parties.\nPromotion. PILA and the PILA Member may each use the other’s name(s) and mark(s) to identify the status of the PILA Member as a member of PILA. The PILA Member may use a print version of the Crossref mark, as it appears at https://0-www-crossref-org.libus.csd.mu.edu/brand, in its print publications subject to PILA approval not to be unreasonably withheld. The PILA Member shall use commercially reasonable efforts to place the Crossref mark in electronic form, by referencing the code provided at https://0-www-crossref-org.libus.csd.mu.edu/brand, as a link to the PILA Site in a prominent location on Web pages of the PILA Member related to its Original Works. The PILA Member may otherwise use the PILA name(s) or mark(s) only with the prior written consent of PILA. Notwithstanding any of the foregoing, PILA reserves the right to reasonably regulate or restrict use of the PILA name(s) and mark(s) by its members in press releases, advertising, client lists or marketing materials.\nTerm, PILA-Member Termination. This Agreement shall commence upon the Effective Date and shall continue through December 31 of the current year (\u0026ldquo;Initial Term\u0026rdquo;), and thereafter shall automatically be renewed according to the terms of the then-most recent version for consecutive twelve (12) month periods (each a “Term”) unless terminated in accordance with the Agreement. The PILA Member may terminate this Agreement upon ninety (90) days prior written notice, but shall not be entitled to a refund of any fees that have been paid or waiver of any fees that have accrued. Termination by any party shall have no adverse effect on PILA’s intellectual property rights in any Metadata or upon any related licenses then in effect, subject only to the following section 12 (Actions Following Termination).\nActions Following Termination. Following termination or expiration of its membership in PILA, the PILA Member shall have no further obligation to deposit Metadata with PILA or to assign Digital Identifiers to its Original Works, and PILA shall have no further obligation to register such Digital Identifiers. With respect to Metadata deposited and Digital Identifiers registered prior to such termination or expiration: (i) PILA shall have the right to keep, maintain and use such Metadata and Digital Identifiers within the PILA System, including without limitation in deliveries of metadata made pursuant to Section 9 above unless the terminating PILA Member indicates otherwise in writing as of the Effective Date of Termination; and (ii) the obligations of the PILA Member set forth in section 3(b), (c), and (d) will survive. PILA may substitute a general PILA response page where a Digital Identifier ceases to resolve to an Original Work.\nEnforcement. PILA has the right but not the obligation to enforce the terms of this Agreement against any of its members. 
PILA shall not be obligated to take any action with respect to any Metadata that is the subject of an intellectual property dispute, but nonetheless reserves the right, in its sole discretion, to remove or suspend access from, to or through it and/or its associated Original Work(s), or to take any other action it deems appropriate. Without limiting the foregoing, PILA reserves the right to terminate or restrict access by the PILA Member to the PILA System and related services (including Cross-Linking) for just cause as PILA determines in its reasonable good faith discretion. The PILA Member agrees to hold PILA harmless from any consequences of any of the foregoing, provided PILA does not willfully, recklessly or with gross negligence violate its obligations. PILA’s executive committee as defined in the Bylaws (\u0026ldquo;Executive Committee\u0026rdquo;) shall review and ratify any PILA decision permanently terminating the PILA Member’s membership in PILA, as provided in the Bylaws, or any significant membership benefit (e.g., blocking access to or removing significant amounts of Deposited Metadata for many Original Works for an extended period) of the PILA Member within 10 days of implementation. As part of such review, the PILA Member shall have an opportunity to be heard under such reasonable procedures as the Board may determine in its good faith. PILA or the PILA Member may petition PILA’s Executive Committee to review and ratify any PILA decision temporarily restricting the PILA Member’s access to or use of the PILA System for a limited period, and the PILA Executive Committee shall decide whether it wishes to exercise its authority in its sole and complete discretion. Any decision by PILA to terminate or restrict the access of a party that is not a member of PILA to the PILA System or any portion of it shall not be subject to the foregoing Executive Committee automatic review provisions.\nDisputes. The PILA Member agrees to abide by the terms and conditions of the following dispute resolution procedures, which PILA may amend in its discretion from time to time (\u0026ldquo;Dispute Policies\u0026rdquo;).\na) Choice of Law, Jurisdiction. This Agreement shall be interpreted, governed and enforced under the laws of New York, without regard to its conflict of law rules. All claims, disputes and actions of any kind arising out of or relating to the Agreement shall be settled in New York, New York.\nb) Alternative Dispute Resolution. The PILA Member shall be responsible for promptly notifying PILA of any claim, dispute or action, whether against other members of PILA or PILA, related to this Agreement or any Digital Identifiers or Deposited Metadata. Pursuant to the Commercial Arbitration Rules of the American Arbitration Association, a single arbitrator reasonably familiar with the publishing and Internet industries shall settle all claims, disputes or actions of any kind arising out of or relating to the subject matter of this Agreement, including the interpretation of all Dispute Policies, between PILA and the PILA Member or among members of PILA (\u0026ldquo;ADR Procedures\u0026rdquo;). The decision of the arbitrator shall be final and binding on the parties, and may be enforced in any court of competent jurisdiction. 
Without limiting the application of any of the foregoing, any claim, dispute or action arising out of or relating to this Agreement that is not otherwise within the scope of these ADR Procedures shall be settled before a federal court located in New York, New York.\nc) Injunctive Relief. Notwithstanding the foregoing subsection 14(b) (Alternative Dispute Resolution), no party shall be prevented from seeking injunctive or preliminary relief in anticipation, but not in any way in limitation, of arbitration, before any court located in New York, New York and pursuant to the Civil Practice Law and Rules of New York. The PILA Member acknowledges that the unauthorized use of Metadata would cause the owner or PILA as a beneficial owner thereof irreparable harm that could not be compensated by monetary damages. The PILA Member therefore agrees that PILA and affected members of PILA may seek injunctive and preliminary relief to remedy any actual or threatened unauthorized use of Metadata without the posting of a bond, and otherwise as consistent with the Dispute Policies.\nd) Actions between Members. The PILA Member agrees that any member of PILA may bring and maintain an action arising out of the subject matter of this Agreement directly against any other member of PILA to enforce rights and seek remedies for misuse of its Deposited Metadata, which shall be subject to the Dispute Policies. The foregoing sentence shall not limit the moving party’s other rights and remedies at law or in equity relating to any violation of its intellectual property rights, breach of contract or other cause of action that is merely incidental to its activities or assets as a member of PILA and does not otherwise arise out of or relate to this Agreement.\ne) Limitations. The PILA Member may not seek to impel PILA to act against any other member of PILA, and agrees not to join PILA in any action between itself and another member of PILA (except if PILA is required to be joined for just adjudication, consistent with the standards set forth in the Federal Rules of Civil Procedure, R. 19, and provided that the joining party indemnifies PILA as PILA may reasonably require) or to bring any related cause of action against PILA directly or indirectly for such purpose(s). PILA agrees, however, to use commercially reasonable efforts to seek to enforce any final judgment of a competent tribunal that PILA reasonably believes to be enforceable, subject to the receipt of sufficient indemnities by the PILA Member seeking enforcement. Nothing in this subsection shall limit the PILA Member’s right to bring an action against PILA for a direct violation of this Agreement subject to the Dispute Policies.\nWarranty. Each party warrants and represents that it has the full power and complete authority to enter into this Agreement, that it has conducted a review of the rights granted herein according to documented internal policies and procedures, and that the rights granted by the respective parties herein will not infringe the rights of any third party. The PILA Member agrees only to deposit or register Metadata in the PILA System corresponding to Original Work for which it has electronic rights, including the right to use such Original Work as part of the PILA System including Cross-Linking. The PILA Member shall be exclusively responsible for maintaining the accuracy of data associated with each Digital Identifier and the validity and operation of the corresponding URL(s) containing the Response Page and related pages.\nIndemnification. 
To the extent authorized by law, and subject to the terms of the Agreement, the PILA Member agrees to indemnify and hold harmless PILA, and its agents and affiliates, and their directors, officers and employees (\u0026ldquo;PILA\u0026rdquo;), as well as other members of PILA, from and against any and all liability, damage, loss, cost or expense, including reasonable attorney\u0026rsquo;s fees, costs, and other expenses arising out of any activity undertaken by the PILA Member, its agent(s) or representatives, pursuant to this Agreement or its subject matter, or which if true would be a violation of any PILA Member warranty, obligation or third-party intellectual property right.\nLimitations of Liability. NEITHER PARTY SHALL BE LIABLE TO THE OTHER FOR ANY INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, CONSEQUENTIAL DAMAGES OR LOST PROFITS ARISING OUT OF OR RELATING TO THIS AGREEMENT, EVEN IF IT HAS BEEN INFORMED IN ADVANCE OF THE POSSIBILITY OF SUCH DAMAGES. NEITHER PARTY SHALL BE LIABLE TO THE OTHER FOR (I) ANY LOSS, CORRUPTION OR DELAY OF DATA OR (II) ANY LOSS, CORRUPTION OR DELAY OF COMMUNICATIONS WITH OR CONNECTION TO RELATED PRODUCT OR CONTENT.\nTaxes. The PILA Member is responsible for all sales and use taxes imposed, if any, with respect to the services rendered or products provided to the PILA Member hereunder, other than taxes based upon or credited against PILA’s income.\nNo Waiver. The parties agree that no delay or omission by either party hereto, or by any member of PILA, to exercise any right or power hereunder shall impair such right or power or be construed to be a waiver thereof. A waiver by either of the parties of any of the covenants to be performed by the other or any breach thereof shall not be construed to be a waiver of any succeeding breach thereof or of any other covenant contained herein.\nNo Partnership. Neither party to the Agreement is an agent, representative, or partner of the other party, except insofar as PILA rules and regulations expressly provide that PILA may act on behalf of the PILA Member. The PILA Member shall not have any right, power or authority to enter into any agreement for or on behalf of, or incur any obligation or liability of, or to otherwise bind, PILA.\nNo Third-Party Beneficiaries. Except as expressly set forth herein, neither party intends that this Agreement shall benefit, or create any right or cause of action in or on behalf of, any person or entity other than PILA or the PILA Member.\nNo Assignment. The PILA Member may not assign, subcontract or sublicense this Agreement without the prior written consent of PILA, which consent shall not be unreasonably withheld, delayed, conditioned or denied.\nNotices. Written notice under this Agreement shall be effective if sent to the party’s address as follows: (i) by personal service on the same day, or (ii) by internationally recognized courier (e.g., FedEx, UPS) on the next business day following the scheduled delivery date.\nIf to PILA:\nMr. Edward Pentz, Executive Director Crossref 50 Salem Street Lynnfield, MA 01940, USA, (fax) +1-781-295-0077 If to the PILA Member, to the name and address designated by the PILA Member as the Business Contact in the membership application, with a copy to \u0026ldquo;General Counsel/Legal Department\u0026rdquo; at the same address. The Business Contract may be changed by the PILA Member by giving notice as provided in this section.\nSurvival. 
Sections (and the corresponding subsections, if any) 5, 6, 12, 13, 14, 15, 16, 17, 19, 20, 21, 23, 24, 26, 27 and any rights to payment shall survive the expiration or termination of this Agreement for any reason.\nHeadings. The headings of the sections and subsections used in this Agreement are included for convenience only and are not to be used in construing or interpreting this Agreement.\nSeverability. If any provision of this Agreement is held to be invalid, illegal, or unenforceable, such invalidity, illegality, or unenforceability will be reformed to be enforceable to the maximum extent permitted under applicable law, and whether or not it may be so reformed, it will not affect any other provision of this Agreement, unless the unenforceability of the applicable provision would materially impair either party\u0026rsquo;s ability to obtain substantial performance of the other party.\nEntire Agreement. The terms and conditions of this Agreement and any exhibits supersede all prior oral and written agreements between the parties with respect to the subject matter of this Agreement and shall constitute the entire agreement between the parties with respect to the matters contained herein. This Agreement shall not be modified or amended except through Board action or in writing duly executed by authorized representatives of the parties.\nCounterparts; Electronic Signature. This Agreement and any amendments may be executed in one or more counterparts, each of which shall be deemed an original, but all of which shall constitute one agreement. EACH PARTY MAY USE A HARD COPY (INK ON PAPER) OR ELECTRONIC SIGNATURE, EACH OF WHICH SHALL BE DEEMED TO BE AUTHENTIC AND EQUALLY ENFORCEABLE.\nAgency Authorization Addendum (Optional) This page only needs to be completed if you are authorizing a third-party vendor to interact with PILA on your behalf. For example, if you are working with a hosting provider.\nThe PILA Member authorizes __________________________ to be the exclusive agent (\u0026ldquo;Agent\u0026rdquo;) for itself and its designated content for purposes of interacting with PILA and the PILA System, and accepts responsibility for the Agent’s acts and omissions on behalf of the PILA Member as if such acts were the PILA Member’s own. Without limiting the foregoing, and for avoidance of doubt, it is understood that:\nA) The Agent, in consultation with the Business Contact for the PILA Member, will assign DOIs using standard methods and will register with Crossref on the PILA Member’s account/login the DOI, the URL corresponding to each DOI, and the required metadata identifying each content item;\nB) The Agent will query the Crossref Metadata Database on the PILA Member’s account/login to obtain and insert links into the PILA Member organization’s registered content for all references that are contained in the Crossref Database;\nC) The Agent is not permitted to cache any Metadata or DOIs on behalf of the PILA Member without signing a separate Affiliate Agreement;\nD) Although either the PILA Member or the Agent may pay Crossref the established membership and Content Registration fees for all activity performed on behalf of the PILA Member’s registered content using its account/login, the PILA Member remains responsible for such fees as they accrue.\nIf you would like to apply to join please visit our membership page which describes the obligation and leads to an application form. 
Please contact our membership specialist with any questions.\n", "headings": ["Former membership agreement","Agency Authorization Addendum (Optional)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2018-10-05-senior-software-developer/", "title": "Senior Software Developer", "subtitle":"", "rank": 4, "lastmod": "2018-10-05", "lastmod_ts": 1538697600, "section": "Jobs", "tags": [], "description": "Applications for this position are closed Come and work with us as our Senior Software Developer. It’ll be fun! Location: Flexible. Either office-based (Lynnfield, MA or Oxford, UK) or remote/home-based in North America or Europe Reports to: Chuck Koscher, Director of Technology Salary and benefits: Competitive About the role This role provides a technology cornerstone to our development team responsible for the ongoing development, maintenance, and operation of our Content Registration system.", "content": " Applications for this position are closed Come and work with us as our Senior Software Developer. It’ll be fun! Location: Flexible. Either office-based (Lynnfield, MA or Oxford, UK) or remote/home-based in North America or Europe Reports to: Chuck Koscher, Director of Technology Salary and benefits: Competitive About the role This role provides a technology cornerstone to our development team responsible for the ongoing development, maintenance, and operation of our Content Registration system. Metadata is critically important to Crossref’s mission to make research communications better, and the registration system is where it all starts. You will report to the director of technology, joining a team of three developers and one system administrator. You will also work extensively with all other teams across our small but impactful organization.\nOur stack is mainly a backend system written in Java on Spring and utilizing SQL on MySQL and Oracle. We’re still running our own hardware but are moving to AWS and already use S3, RDS, and other Amazon services. You will be responsible for ensuring that the Content Registration system is reliable and responsive as well as making sure it is able to evolve quickly to support the new requirements and new services that we are developing for our membership and metadata subscribers. As such, you will need to work closely with product management and the strategic initiatives teams.\nThis position also provides programming and workflow guidance to the entire team by guiding concept formulation, design, and implementation. Our processes are built on Jira, SVN, Zendesk, and Git and we’re starting to use agile methods. We communicate via Slack and Google apps. You will help improve our quality control initiatives, review methodologies and help develop a culture of continuous testing and deployment.\nYour challenge will be to accomplish this, whilst simultaneously driving the modernization of our current software stack, infrastructure, and software engineering culture.\nKey responsibilities Understand Crossref’s mission and how it applies to the Content Registration service. Work in multi-functional project teams to scope, specify, design, and develop services and ensure that the Content Registration system is reliable, responsive, and efficient. Work very closely with the Director of Technology to solve problems, maintain and improve the registration service. Recommend and execute technology changes, for example upgrading to Java 8, or other tools and off-the-shelf solutions that might improve operations, visibility or monitoring.
Provide guidance to other developers regarding coding practices and help maintain and improve our development environment. Identify vulnerabilities and inefficiencies in our system architecture and development processes, particularly regarding DevOps procedures, unit and regression testing. Communicate proactively with membership and technical support colleagues ensuring they have all the information and tools required to serve our users. Openly document and share development plans and workflow changes. Be an escalation point for technical support; investigate and respond to occasional but complex user issues; help minimize support demands related to the Content Registration system; be part of our on-call team responding to service outages. About you You are:\nAn expert Java developer with a solid understanding of Spring and with a lot of SQL and MySQL experience at the application level and with infrastructure level issues dealing with JDBC, connection pooling, table optimization, index construction, charset, and driver issues. Proficient in at least one other language, with expert scripting skills. Experienced with the full backend stack (Java, Spring, MySQL, Tomcat), continuous testing/delivery frameworks, and DevOps concepts and techniques. Experience with or a working understanding of XML and document-oriented systems such as MongoDB, Solr, and Elasticsearch. Experience with AWS services, containerization with tools like Docker, and infrastructure management using tools like Terraform. Very much self-directed; must be a good manager of your own time and have the ability to focus even when other things compete for your time. Curious and tenacious at learning new things and getting to the bottom of problems. Strong written and verbal communication skills, able to communicate clearly, simply, and effectively. Outstanding at interpersonal relations and relationship management, and comfortable working with other developers, product management, outreach, membership, and technical support teams. If remote, able to travel occasionally to meet with colleagues at either the Lynnfield, MA or Oxford, UK office. About Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have over 11,000 members across 118 countries, and thousands of tools and services relying on our metadata.\nTo apply If you’d like to join the Crossref team and contribute to our mission, please send a cover letter and your resume to jobs@crossref.org. We can\u0026rsquo;t wait to read all about you!\n", "headings": ["Come and work with us as our Senior Software Developer. It’ll be fun!","About the role","Key responsibilities","About you","About Crossref","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2018-02-01-support-manager/", "title": "Support Manager", "subtitle":"", "rank": 4, "lastmod": "2018-02-01", "lastmod_ts": 1517443200, "section": "Jobs", "tags": [], "description": "Applications for this position are closed as of 2018-04-01 Come and work with us as our Support Manager. It\u0026rsquo;ll be fun!
- **Location:** Remote/home-based: anywhere from the Pacific to Eastern timezones\n- **Reports to:** Head of Member Experience\n- **Benefits:** Competitive\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.", "content": " Applications for this position are closed as of 2018-04-01 Come and work with us as our Support Manager. It\u0026rsquo;ll be fun! - **Location:** Remote/home-based: anywhere from the Pacific to Eastern timezones\n- **Reports to:** Head of Member Experience\n- **Benefits:** Competitive\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nSince January 2000 we have grown from strength to strength and now have almost 10,000 members across 114 countries and thousands of tools and services relying on our metadata.\nAbout the team We have big ambitions in the member and community outreach group. We’re thirteen-strong, (soon to be sixteen), and split between Boston, New York, London, and Oxford. We are at the forefront of Crossref’s growth, building relationships with new audiences in new markets in new ways. We cover member experience and support, marketing communications, outreach, business development, and metadata education. We’re embarking on a new onboarding program for the thousands of (mostly research publishers) that join as members every year. There are plans for an educational program for existing members and users. And we’re aiming for a more open approach to having conversations with people all around the world, in multiple languages. We are fortunate to have strong product management, finance, and technology teams to work closely with to achieve our objectives.\nAbout the role This is a key role in the Member Experience section of our Member and Community Outreach team. In this role you’ll be working closely with two Support Specialists to handle the most technical support queries, ensure that the member experience team has the tools and processes to effectively support members and users, work closely with the DevOps and Product teams on bug fixes and new developments to support users, and communicate with members and users on service issues, both 1:1 and in public through e.g. Twitter and GitHub.\nKey responsibilities Handling the most technical support queries Answering member and user queries\u0026mdash;using Zendesk, Twitter, Discourse, and GitHub\u0026mdash;owning the problem through to resolution. Being the escalation point for other members of the support team, handling the most complex customer support issues. Managing and adjusting publication title information within our metadata system. Monitoring conflict reports and working with members to resolve. Monitoring DOI crawler reports and contacting publishers who do not maintain their DOIs. Tools and processes Managing the support systems, setting KPIs and ensuring regular reporting is accurate and actionable. 
Identifying peaks and troughs in support queries and finding ways to smooth them out. Implementing support through new channels as we move to a philosophy of \u0026ldquo;open support\u0026rdquo;. Ensuring that the member experience and outreach teams have everything they need to work efficiently and effectively. Assisting other Crossref staff in understanding metadata and schema issues. Bug fixes and new developments Identifying problems/opportunities resulting from customer issues. Working closely with the technical team on issues impacting members, running regular technical review meetings with the development team. Feeding into service development conversations to ensure support overhead is kept to a minimum. Leading or participating in community working groups. Communicating with members on support issues Managing outbound communications regarding service outages through multiple channels. Monitoring and responding to external or internal reporting systems that indicate the health of the DOI and Crossref systems. Suggesting measures to improve visibility into quality conditions and ways to better assist members in working with us. About you This important role in the Member Experience team provides support to our diverse member and user base with very different levels of technical knowledge and many different languages. It’s also the key bridge between our community and our own technical teams. You’ll need:\nExperience in providing technical support/troubleshooting with the ability to organize and prioritize a very busy helpdesk. Critical thinking and problem solving skills, with a high level of attention to detail and be comfortable digging into unfamiliar and complex technical issues. We need someone who is a problem-solver\u0026mdash;curious and tenacious at learning new things and getting to the bottom of problems. Strong written and verbal communication skills with the ability to communicate clearly, simply and effectively. Able to communicate technical issues to less technical audiences and use open questions to get to the bottom of things when the question doesn\u0026rsquo;t seem to make sense. Strong interpersonal and relationship management skills. A passionate customer service orientation with experience in managing multiple stakeholders. Ability to work with colleagues in different teams and at different levels. Experience with XML-based publishing systems (ideal) or just XML with exposure to metadata vocabularies. A philosophy of transparency in everything you do with strong experience providing support publicly e.g. through discussion forums, technical repositories, and social media. To apply If you are considering joining Crossref and contributing to our mission, please send your cover letter and resume to: Amanda Bartell\n", "headings": ["Come and work with us as our Support Manager. It\u0026rsquo;ll be fun!","About Crossref","About the team","About the role","Key responsibilities","Handling the most technical support queries","Tools and processes","Bug fixes and new developments","Communicating with members on support issues","About you","To apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2018-01-05-rnd-programmer/", "title": "R&D Programmer", "subtitle":"", "rank": 1, "lastmod": "2018-01-05", "lastmod_ts": 1515110400, "section": "Jobs", "tags": [], "description": "Applications for this position are closed as of 2018-02-28 Come work at Crossref as an R\u0026amp;D Programmer. It\u0026rsquo;ll be fun! - **Location:** Based in Oxford, UK. 
Remote work possible\n- **Reports to:** Director of Strategic Initiatives\n- **Benefits:** Competitive\nAbout the position We are hiring an R\u0026amp;D Programmer to help prototype and develop new web-based tools and services.\nWe are looking for someone who: Is expert in one or more programming languages (Java, Clojure, Python, Ruby).", "content": " Applications for this position are closed as of 2018-02-28 Come work at Crossref as an R\u0026amp;D Programmer. It\u0026rsquo;ll be fun! - **Location:** Based in Oxford, UK. Remote work possible\n- **Reports to:** Director of Strategic Initiatives\n- **Benefits:** Competitive\nAbout the position We are hiring an R\u0026amp;D Programmer to help prototype and develop new web-based tools and services.\nWe are looking for someone who: Is expert in one or more programming languages (Java, Clojure, Python, Ruby). Wants to learn new skills and work with a variety of technologies. Relishes working with metadata. Has experience delivering web-based applications using agile methodologies. Enjoys working with a small, geographically dispersed team. Groks mixed-content model XML. Groks RDF. Groks REST. Can see a solo project through or collaborate in a larger team. Has deployed and maintained Linux-based systems. Understands relational databases (MySQL, Oracle). Tests first. Bonus points for: Experience building tools for online scholarly communication. Experience with a variety of programming language paradigms (OO, Functional, Declarative). Experience with ElasticSearch, Solr or Lucene. Having contributed to open source projects. Experience with front-end development (HTML, CSS, React, Angular or similar). Having worked on standards bodies. Experience with public speaking. Responsibilities The R\u0026amp;D Programmer will report to the Director of Strategic Initiatives. They will be responsible for prototyping and developing new Crossref initiatives and applying new Internet technologies to further Crossref’s mission to make research outputs easy to find, cite, link, assess, and reuse.\nThe post holder will work with the Director of Strategic Initiatives and the Director of Technology to research, develop and implement new services – taking ideas from concept to prototype and, where appropriate, creating and deploying production services.\nThe R\u0026amp;D Programmer may represent Crossref at conferences and in industry activities and projects. They will also play an active role in developing industry and community technical standards and help develop technical guidelines for Crossref members.\nWorking with the Director of Strategic Initiatives and Director of Technology, the R\u0026amp;D Programmer will ensure that new services are designed with a robust and sustainable architecture. The R\u0026amp;D Programmer will actively engage with technical representatives from Crossref’s membership, the library community, scholarly researchers and broader Internet initiatives.\nLocation \u0026amp; travel requirements Crossref has offices in the US (Lynnfield, Massachusetts) and the UK (Oxford). We also support remote work.
This position is in the R\u0026amp;D team, which is currently split between the UK and France.\nRemote workers should expect they will need to visit an office approximately 5 days a quarter, along with the (possibly international) travel which that entails.\nIf you work from an office you will be expected to travel internationally for ~5 days once a year.\nIn either case, travel can increase should you have an interest in representing Crossref at industry events.\nSalary Competitive depending on skills and expertise. Excellent benefits.\nTo apply Send a cover letter and a CV via email to:\nGeoffrey Bilder\ngbilder@crossref.org\nAbout Crossref Crossref is a not-for-profit membership organization representing members of all stripes (commercial publishers, scientific societies, university presses, open access, etc.). Crossref currently has just over 30 staff in the US and UK and France, yet it combines the small, intimate atmosphere of a technical startup, with the financial stability and strong international presence of a major commercial organization. We do important stuff, but we have a lot of fun doing it.\nPlease contact Geoffrey Bilder with any questions.\n", "headings": ["Come work at Crossref as an R\u0026amp;D Programmer. It\u0026rsquo;ll be fun!","About the position","We are looking for someone who:","Bonus points for:","Responsibilities","Location \u0026amp; travel requirements","Salary","To apply","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2017-12-19-tech-pm/", "title": "Product Manager - APIs", "subtitle":"", "rank": 4, "lastmod": "2017-12-20", "lastmod_ts": 1513728000, "section": "Jobs", "tags": [], "description": "Applications for this position are closed as of 2018-05-04 Come work with us as a Product Manager for APIs. It\u0026rsquo;ll be fun! - **Location:** Oxford, UK; Lynnfield, MA; or remote\n- **Reports to:** Director of Product Management\n- **Benefits:** Competitive\nSummary Crossref makes research objects easy to find, cite, link, assess, and reuse.\nWe’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.", "content": " Applications for this position are closed as of 2018-05-04 Come work with us as a Product Manager for APIs. It\u0026rsquo;ll be fun! - **Location:** Oxford, UK; Lynnfield, MA; or remote\n- **Reports to:** Director of Product Management\n- **Benefits:** Competitive\nSummary Crossref makes research objects easy to find, cite, link, assess, and reuse.\nWe’re a not-for-profit membership organization that exists to make scholarly communications better.
We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nIt’s as simple—and as complicated—as that.\nWe are a small team that makes a big impact, and we’re looking for a creative, technically-oriented Product Manager to join us in improving scholarly communications, building “roads” to make the next big research discovery possible.\nKey responsibilities Manage all aspects of the product life cycle for Crossref’s suite of APIs through product requirements, feature prioritization, implementation, and measurement Articulate and influence product strategy focusing on business objectives and user experience Represent the voice of the Crossref API users in the continued improvement of these APIs and bring back the insight to our engineering and outreach teams Identify, grow, and manage our developer relations program for the community of Crossref API users About you You can think in terms of the big picture, but deliver on the details You have a nose for great products, and advocate for new features with qualitative and quantitative reasoning You do whatever it takes to make your product and team successful, whether that means writing a QA plan or hunting down the root cause of a user’s frustration You can break down large projects into granular milestones and track progress on them You can turn incomplete, conflicting, or ambiguous inputs into solid action plans You communicate with empathy and exceptional precision You’re comfortable working with developers. You are technical enough to ask engineers good questions about architecture and product decisions alike Beyond just shipping new products, you obsess about continuous product improvement You can maintain order in a dynamic environment, independently managing multiple priorities You are self-driven with a collaborative and can-do attitude and enjoy working with a small team across multiple time zones You champion agile development best practices and are able to teach, mentor, and coach teams in adopting this methodology. You have experience with JSON and can develop a simple application in any modern language such as Python, Clojure, etc. Bonus: you have experience developing APIs and/or web applications that use APIs Salary: Competitive depending on skills and expertise. Excellent benefits.\nTo apply To apply, please send your cover letter, resume, and at least one sample of an effective product specification or epic/story development to Jennifer Lin.\nAbout Crossref Crossref is a not-for-profit membership organization representing members of all stripes (commercial publishers, scientific societies, university presses, open access, etc.). Crossref currently has just over 30 staff in the US and UK and France, yet it combines the small, intimate atmosphere of a technical startup, with the financial stability and strong international presence of a major commercial organization. We do important stuff, but we have a lot of fun doing it.\nPlease contact Jennifer Lin with any questions.\n", "headings": ["Come work with us as a Product Manager for APIs. 
It\u0026rsquo;ll be fun!","Summary","Key responsibilities","About you","To apply","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/projects-using-the-crossref-rest-api/metadata-api-users/", "title": "REST API users", "subtitle":"", "rank": 1, "lastmod": "2014-11-03", "lastmod_ts": 1414972800, "section": "Labs", "tags": [], "description": "Open Tree of Life (NSF) http://blog.opentreeoflife.org/ Maintained by: Jim Allman @jimallman Using: Work queries https://pbs.twimg.com/media/B2bYcKWIUAETSf2.png Enriched Biodiversity http://bdj.pensoft.net/articles.php?id=1125 @rmounce Journalhub showcase journal http://journalhub.io/journals/acta-dermatovenerol-apa Maintained by: Jure Using: Citation linking and json DOI query Also elife lens and metypeset PLOS Extensions to Wikimedia Visual Editor http://cdn.substance.io/ve/ Maintained by: http://www.adamhyde.net/plos/ Using: Work queries Kudos http://growkudos.com Maintained by: Lou Peck, David Sommer, Leigh Dodds Using: Work queries doimgr https://github.com/dotcs/doimgr Maintained by: dotcs Using: Work queries, work filtering bibby https://github.", "content": "\rOpen Tree of Life (NSF) http://blog.opentreeoflife.org/ Maintained by: Jim Allman @jimallman Using: Work queries https://pbs.twimg.com/media/B2bYcKWIUAETSf2.png Enriched Biodiversity http://bdj.pensoft.net/articles.php?id=1125 @rmounce Journalhub showcase journal http://journalhub.io/journals/acta-dermatovenerol-apa Maintained by: Jure Using: Citation linking and json DOI query Also elife lens and metypeset PLOS Extensions to Wikimedia Visual Editor http://cdn.substance.io/ve/ Maintained by: http://www.adamhyde.net/plos/ Using: Work queries Kudos http://growkudos.com Maintained by: Lou Peck, David Sommer, Leigh Dodds Using: Work queries doimgr https://github.com/dotcs/doimgr Maintained by: dotcs Using: Work queries, work filtering bibby https://github.com/jdherman/bibby Maintained by: Jon Herman, Cornell Using: Conneg, work queries CHORUS Search http://search.chorusaccess.org Maintained by: propulsion.io Using: Funder work queries, filters, faceting CHORUS Dashboards http://dashboard.chorusaccess.org Maintained by: propulsion.io Using: Funder work queries, filters Crossref Search [http://search.crossref.org][1] Maintained by: Karl Ward Crossref Funder Search http://0-search-crossref-org.libus.csd.mu.edu/funding Maintained by: Karl Ward PLoS Enhanced Citations Experiment Maintained by: Adam Becker, PLoS Using: Work queries and filters PLoS ALM / Crossref DET**** http://0-det-labs-crossref-org.libus.csd.mu.edu Maintained by: Martin Fenner Using: Work queries, update and publication date filters PKP OJS Crossref Plugin Maintained by: Juan, James, Bozana, PKP Using: deposits (xml deposits) Crossref Crossmark Statistics Maintained by: Joe Wass Using: Work queries, date filters, significant update filters, update policy filter Crossref Metadata Participation Maintained by: Joe Wass Using: Publisher routes, publisher feature coverage values pdfextract Maintained by: Karl Ward Using: Work queries, work metadata transforms Cambia patent to scholarly literature citation matching Maintained by: Doug Ashton Using: Work queries, deposits (patent citation deposits) ", "headings": ["Open Tree of Life (NSF)","Enriched Biodiversity","Journalhub showcase journal","PLOS Extensions to Wikimedia Visual Editor","Kudos","doimgr","bibby","CHORUS Search","CHORUS Dashboards","Crossref Search","Crossref Funder Search","PLoS Enhanced Citations Experiment","PLoS ALM / Crossref 
DET****","PKP OJS Crossref Plugin","Crossref Crossmark Statistics"," Crossref Metadata Participation","pdfextract","Cambia patent to scholarly literature citation matching"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/strategy/archive-2018/", "title": "Strategic agenda 2018-2020", "subtitle":"", "rank": 1, "lastmod": "2019-10-31", "lastmod_ts": 1572480000, "section": "Strategic agenda and roadmap", "tags": [], "description": "This is our strategic agenda from 2018-2020 and it\u0026rsquo;s now archived, please visit the main strategy page for the most up-to-date version. Crossref makes research objects easy to find, cite, link, assess, and reuse.\nWe’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services\u0026mdash;all to help put scholarly content in context.", "content": " This is our strategic agenda from 2018-2020 and it\u0026rsquo;s now archived, please visit the main strategy page for the most up-to-date version. Crossref makes research objects easy to find, cite, link, assess, and reuse.\nWe’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services\u0026mdash;all to help put scholarly content in context.\nIt’s as simple\u0026mdash;and as complicated\u0026mdash;as that.\nRally\nGetting the community working together to make scholarly communications better\nTag\nStructuring, processing, and sharing metadata to reveal relationships between research outputs\nRun\nOperating a shared, open infrastructure that is community-governed and evolves with changing needs\nPlay\nEngaging in debate and experimenting with technology to solve our members’ problems\nMake\nCreating tools and services to enable connections and give context\nThe strategic landscape Scholarly communications is changing, and putting research outputs into context is becoming more complicated. Our membership is part of a community that values and exchanges metadata between themselves as well as with a broader community.\nSome of our existing members no longer classify themselves as “publishers”, and some of our newer members have never classified themselves as “publishers”. Governments, funders, institutions, and researchers—parties who once had tangential involvement in scholarly publishing—are taking a more direct role in shaping how research is registered, certified and disseminated. Additionally, low income (but emerging) countries increasingly see it as a strategic imperative that they own and manage a research communication system that reflects their regional research priorities.\nResearchers are increasingly insisting that new kinds of research outputs, like data, software, preprints, and peer reviews form a critical part of the scholarly record. New players (e.g. sharing networks, alt-metrics services, and Current Research Information Systems) are becoming critical elements of the research landscape. New technologies like ML and AI promise to change the way in which research is produced, assessed, and consumed.\nFor Crossref and its membership to stay sustainable in this new environment, we need to adapt, do and encourage new things. But we have limited resources. 
So in order to adapt and do new things, we also need to also make sure that we are currently doing the right things efficiently. Hence, our strategic plan is a combination of consolidation and expansion:\nSimplify and enrich existing services Adapt to expanding constituencies Improve our metadata Collaborate and partner Simplify and enrich existing services The characteristics of our members and users continue to diversify—to scholar publishers, library publishers, and other emerging organizations. Furthermore, the use of our APIs has grown significantly in recent years as Crossref becomes better known as a source of metadata. Users are therefore asking for a more predictable service-based option in addition to the public options. We have and will continue to develop service-level guarantees in order to meet this growing demand, which will strengthen Crossref\u0026rsquo;s position as a way for the wider community to centrally access information from 10,000+ publishers.\nA focus on user experience will allow us to make it easier for all of them to participate in Crossref as fully as possible, irrespective of their depth of need or their level of technical skill.\nWe are also focusing our efforts on ensuring there is broad support for systems in accessing Crossref metadata so that reuse reaches its fullest potential across the entire research ecosystem. This necessary evolution of Crossref services will ensure that we can support the changing needs and priorities of all involved in research.\nWe do not want to add resources infinitum so we must make sure that we are performing our existing functions efficiently. To this end, we are streamlining processes to improve member experience, modernize infrastructure, and upgrade tools and data provision capabilities. These activities will achieve efficiencies for members, metadata users, as well as staff.\nRecently completed Similarity Check service transition Metadata Manager for journal articles Reference-matching improvements (phase 1) Transition from GitHub and Jira to GitLab In focus Pending publication (in Beta) Event Data Incident response process refinements Automated monitoring \u0026amp; status updates Support documentation re-write and migration to website API ElasticSearch migration Enhanced JATS support DevOps automation Research Organizations Registry (ROR) Scheduled 2020 REST API improvements Similarity Check v2 Address technical debt Pending Metadata Plus sync\nCloud migration for Content Registration infrastructure\nCrossmark reports\nConsolidated Member Center\nSelf-repairing DOIs\nJoint DataCite \u0026amp; Crossref Search (with FREYA)\nStandard Crossref DOI display/status widget\nR\u0026amp;D Image manipulation detection Auto-classification of journal types Citation classification Improve our metadata Metadata provided by our members is the foundation of all our services. Crossref membership is a collective benefit. The more metadata a member is able to put in—and the greater adherence to best practice—the easier it is for other members and community users downstream to find, cite, link, assess, and reuse their content. Furthermore, the more discoverable and more trusted is the content. Better quality metadata improves the system for each member and all of Crossref\u0026rsquo;s other members and stakeholders.\nExisting Crossref members may have joined Crossref when only providing minimal bibliographic metadata was required for reference linking. 
But, increasingly, Crossref is becoming a hub which the community relies on to get both complete bibliographic metadata and non-bibliographic metadata (e.g. funding information, license information, clinical trial information, etc.) We need to help our existing members meet the new metadata expectations. Our objectives are to better communicate what metadata best practice is, equip members with all the data and tools they need to meet best practice and achieve closer cooperation from service providers.\nWe will focus on expanding the links between scholarly objects to all their associated research outputs. We will also expand support for new record types to ensure that they integrated into the scholarly record and can be discovered. At the other extreme, some new Crossref members have little technical infrastructure for creating and maintaining quality metadata. We need to help provide them with tools to ensure that we don’t dilute the Crossref system with substandard and/or incomplete metadata.\nBut metadata quality is a strategic focus across the entire Crossref membership. While we improve this across our entire membership by implementing stronger validation measures internally in our deposit processes, we will also employ mechanisms that engage the broader community to fill in gaps and correct metadata with a clear provenance trail of every metadata assertion in the Crossref system.\nRecently completed Metadata Manager for journal articles Reference-matching improvements (phase 1) Improvements to OJS integration (with PKP) Research grants deposit In focus Metadata \u0026lsquo;health checks\u0026rsquo; Support documentation re-write and migration to website Research Organizations Registry (ROR) support Data citations Improving JATS support Research grants retrieval Conference IDs Metadata Practitioners Interest Group Scheduled 2020 Metadata schema enhancements Multiple resolution improvements (\u0026amp; decommission co-access) Pending Metadata Principles and Best Practices New Service Providers program Emerging Publisher Education Coalition Crossmark reports Revised relations taxonomy Improvement for bulk updates of metadata Standard Crossref DOI display/status widget R\u0026amp;D Participation reports (phase 2) Automating metadata extraction, preflight checking Metadata profiling Public feedback channel for metadata quality issues Adapt to expanding constituencies Members are at the heart of the Crossref community. Scholarly publishers are geographically expanding at a rapid pace and we currently have members in 140 countries. With that comes the need to increasingly and proactively work with emerging regions as they start to share research outputs globally. To this end, we will expand our geographic support through concerted efforts in international outreach, working with government education/science ministries and local Sponsors and Ambassadors, and developing as much localized content as we can.\nFurthermore, funders and research institutions are increasingly involved in the scholarly publishing process. As the research landscape changes, we need to respond and ensure our relevance by evolving in a way that better reflects these shifts. 
Our overarching objective is to expand our value proposition to convince these new constituents of Crossref’s relevance, getting them into our system and using our infrastructure.\nRecently completed In focus Sponsors program LIVE local educational events Research managers outreach Forum introduction (community.crossref.org) Ambassador program Multi-language webinars Scheduled 2020 Funder outreach Emerging Publisher Education Coalition Law journals Pending Non-English language documentation Non-English language interfaces DOI linking in mainstream media R\u0026amp;D Collaborate and partner Crossref faces a tension. We want to—where possible—take advantage of existing organizations, services, tools and technologies. We aim to do more, more efficiently, by focusing on expanding existing infrastructure and organizations rather than creating things from scratch. We don’t want to reinvent the wheel.\nSo that our alliances with others have the greatest impact, we align our strategic plans for scholarly infrastructure with others, and ensure that the community has the most up-to-date and accurate information.\nThis is part and parcel of our role as an community-wide infrastructure provider as we achieve our mission by supporting the entire research ecosystem. But at the same time, we take care not to introduce risky dependencies for the entire community. Hence, the bulk of our collaborations are with open initiatives.\nSome are led and driven by Crossref. Others are not.\nRecently completed ROR Registry launch In focus Value proposition for DOI Foundation Persistent identifier infrastructure through FREYA project Advocacy for richer metadata through Metadata 2020 Use of persistent identifiers in references with Wikimedia Research Organizations Registry (ROR) with Digital Science, CDL, and DataCite Distributed usage logging (DUL) with COUNTER Data citation with Scholix, RDA, STM Association, DataCite, and Make Data Count OJS development with Public Knowledge Project Open Funder Registry with Elsevier Similarity Check with Turnitin Joint value proposition with DataCite Foundational infrastructure with ORCID and DataCite PIDapalooza festival of open persistent identifiers Scheduled 2020 Pending Emerging Publisher Education Coalition with DOAJ, COPE, and INASP Joint search with DataCite R\u0026amp;D DOIs for static website generators Reference implementation for open platforms ", "headings": ["This is our strategic agenda from 2018-2020 and it\u0026rsquo;s now archived, please visit the main strategy page for the most up-to-date version.","The strategic landscape","Simplify and enrich existing services","Adapt to expanding constituencies","Improve our metadata","Collaborate and partner","Simplify and enrich existing services","Recently completed","In focus","Scheduled","2020","Pending","R\u0026amp;D","Improve our metadata","Recently completed","In focus","Scheduled","2020","Pending","R\u0026amp;D","Adapt to expanding constituencies","Recently completed","In focus","Scheduled","2020","Pending","R\u0026amp;D","Collaborate and partner","Recently completed","In focus","Scheduled","2020","Pending","R\u0026amp;D"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/dashboard/", "title": "Dashboard", "subtitle":"", "rank": 3, "lastmod" : "", "lastmod_ts" : 0, "section": "Dashboard", "tags": [], "description": "The dashboard is currently unavailable as we\u0026rsquo;re working to improve the way the data is presented. 
Apologies for any inconvenience.", "content": " The dashboard is currently unavailable as we\u0026rsquo;re working to improve the way the data is presented. Apologies for any inconvenience.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/", "title": "Labs", "subtitle":"", "rank": 1, "lastmod": "2023-02-20", "lastmod_ts": 1676851200, "section": "Labs", "tags": [], "description": "Crossref Labs is the dedicated R\u0026amp;D arm of Crossref. In late 2021 we announced that we are re-energizing Labs after a period of working mostly on development tasks.\nWhat\u0026rsquo;s our focus? The division between what the R\u0026amp;D group does at the group-level, and what the wider organisation does will always be more of a gradient than a line. But at the highest level we\u0026rsquo;d say that R\u0026amp;D will focus on projects that:", "content": "\rCrossref Labs is the dedicated R\u0026amp;D arm of Crossref. In late 2021 we announced that we are re-energizing Labs after a period of working mostly on development tasks.\nWhat\u0026rsquo;s our focus? The division between what the R\u0026amp;D group does at the group-level, and what the wider organisation does will always be more of a gradient than a line. But at the highest level we\u0026rsquo;d say that R\u0026amp;D will focus on projects that:\nAddress new constituencies. Involve fundamentally new approaches (technology or process or both). Are exploratory with no clear product application yet. And that a \u0026ldquo;strategic initiative\u0026rdquo; (as opposed to a new feature, service, or product) is something that:\nInvolves something we\u0026rsquo;ve never done before. Involves potential changes to our membership model and fees. Would require a large investment of resources outside of normal budget. We’re certainly not the only group at Crossref who experiment, build proof of concepts (POCs), and do research, but we hope to support other groups who do - both inside our organization and in the wider research ecosystem.\nSound right up your street? Interested in collaborating on something you’re working on? Let us know.\nWhat are we working on now? All the projects we are working on can be browsed here. We also have a list of Labs ideas.\nA few current highlights are:\nMeasuring how good our community is at preservation. Dashboard at https://the-vault.fly.dev/ and these are displayed on the Labs prototype (see next\u0026hellip;!) Playing around with how we might evolve our Participation Reports. Looking at running a \u0026ldquo;labs\u0026rdquo; version of our API where we can experiment with exposing new functionality and metadata in order to get quick feedback from the community. Making the Retraction Watch data openly available via the Labs API (and in .csv format) so we can get feedback on it before integrating it into our REST API. Looking at how we can extend the classification information we currently make available via our API across more journal titles. Exploring how we currently do citation matching with a view to evolving this approach i.e. making it better, more transparent and open to community contributions. Creating a sampling framework that can be used to extract samples of works and make them publicly available. Building POC tools to help our members and support team more easily accomplish common tasks. 
A flavor of Labs research and experiments Matching Grants registered with Crossref to other research outputs: read Dominika\u0026rsquo;s blog to explain the methodology we used to match grants to other works registered in Crossref, using grant identifier metadata. Setting up our test journal on Open Journal Systems (OJS): Many of our members use OJS so having a test site helps us help them with queries and our own testing. DOI Popup Proof of Concept: what would it look like if we displayed Crossref metadata in a handy work-level popup? DOI Chronograph: which websites do people come from when they click on a DOI? Live DOI event stream: Hypnotising. A a demo designed to exhibit the stream of data that is flowing through Crossref Event Data at any given time. Funder Registry reconciliation service: designed to help members (or anybody) more easily clean-up their funder data and map it to the Funder Registry. pdfmark and pdfstamp: Open source command line tools to add Crossref metadata to a PDF (pdfmark) and to automate the application of linked images to PDFs (pdfstamp). These never graduated from Labs so we aren\u0026rsquo;t supporting their active maintenance and development. If you\u0026rsquo;re a keen user however, let us know. Open source search-based reference matchers Java version/Python version. Continuing adventures in reference matching: Comparing approaches Adding unstructured reference strings And structured references Automatic citation style classifier Detective work on duplicate DOIs Documents Recommendations on RSS Feeds for Scholarly Publishers Unixref Reference (the schema used in Crossref OpenURL results) Disambiguation without de-duplication: Modeling authority and trust in the ORCID system Graduated to production services (or became part of them) Crossref Metadata Search PatentCite LicenseRef DOI content negotiation Text and data mining services Citation formatting Randoim TOI DOI Crossref Contributor ID Reverse domain lookup Funder Registry widget Linking data and publications Crossref REST API Member participation browser Lots of tools and demos that became Event Data Retired QR Code generator WordPress \u0026amp; Moveable Type plugins Ubiquity plugin Linked periodical data InChI lookup Family name detector pmid2doi PDF-extract Taxonomy interest group URL to DOI Labs Group Esha Datta Paul Davis Martin Eve Dominika Tkaczyk Alumni Rachael Lammey Karl Ward Joe Wass ", "headings": ["What\u0026rsquo;s our focus?","What are we working on now?","A flavor of Labs research and experiments","Documents","Graduated to production services (or became part of them)","Retired","Labs Group ","Alumni"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/tips-for-using-the-crossref-rest-api/", "title": "Tips for using the Crossref REST API", "subtitle":"", "rank": 4, "lastmod": "2021-09-02", "lastmod_ts": 1630540800, "section": "Documentation", "tags": [], "description": "The REST API documentation is useful, but it doesn\u0026rsquo;t tell you much about best practices. It also doesn\u0026rsquo;t tell you how you can best avoid getting blocked. 
With this document we hope to remedy that.\nPlease read this entire document carefully\u0026ndash;it includes some advice that may seem counter-intuitive.\nIt also includes some advice that might be obvious to professional programmers, but that might not be obvious to researchers or others who are just starting out with scripting or interactive research notebooks (e.", "content": "The REST API documentation is useful, but it doesn\u0026rsquo;t tell you much about best practices. It also doesn\u0026rsquo;t tell you how you can best avoid getting blocked. With this document we hope to remedy that.\nPlease read this entire document carefully\u0026ndash;it includes some advice that may seem counter-intuitive.\nIt also includes some advice that might be obvious to professional programmers, but that might not be obvious to researchers or others who are just starting out with scripting or interactive research notebooks (e.g. Jupyter).\nInstability in our APIs is almost always tracked down to a particular user who has created a script that either:\nperforms needlessly complex and inefficient requests performs requests that repeatedly trigger server errors (sometimes related to the above) polls the API too quickly (we rate limit at the IP level, but some users run distributed systems coming in on multiple IPs) performs redundant requests These are not malicious actions. And they are easily correctable. And, in almost all cases, when we advise users that they need to fix their process, they do. And we are never concerned with them again. But, we can’t help everyone with every use case.\nOur advice is split into three sections:\nPick the right service. Understand the performance characteristics of the REST API. Optimize your requests and pay attention to errors. Consider not using the API What? No, seriously\u0026ndash;Crossref periodically releases a public data file of all the metadata that is available via our public API. You can download this file from Academic Torrents with your favorite torrent client and do a lot of work locally if you want to. At the very least, having the local file will allow you to focus on using the API only for metadata records that have been updated since the last public data file was released.\nIf you cannot use torrent files (some organizations don\u0026rsquo;t allow employees to use BitTorrent at all \u0026ndash; shame on them), see the subsection below titled \u0026ldquo;reduce the chance of cursor timeouts by segmenting your requests.\u0026rdquo;\nPick the right service level. Consider using our “Polite” or “Plus” versions of the REST API.\nWhat does this mean?\nThere are three ways to access the REST API, in increasing levels of reliability and predictability. They are:\nAnonymously (aka Public) With self identification (aka Polite) With authentication (aka Plus) Why three ways? Because Crossref is committed to providing free, open and as-anonymous-as-possible access to scholarly metadata. We are committed to this because research can often involve controversial subjects. And what is considered “controversial” can vary widely across time and jurisdictions. It is extremely difficult to provide truly “anonymous” access to an internet service. We will always, for example, be able to tie a request to an IP address and we keep this IP information for 90 days. 
The best we can do is make sure that some people who have the need for extra precautions can access the API without needing to authenticate or explicitly identify themselves.\nBut this semi-anonymous access is also hard for us to manage. Because the “Public” version of the REST API is free, traffic patterns can vary wildly. Because the service is semi-anonymous it makes it very hard for us to contact people who are causing problems on the system.\nSo we offer a compromise as well. If you do not have a requirement for anonymity, you can also self-identify by including contact information in your requests. The service is still open and free, but this way we can quickly get in touch with you if your scripts are causing problems. And in turn for providing this contact information, we redirect these requests to a specific “Polite” pool of servers. These servers are generally more reliable because we are more easily able to protect them from misbehaving scripts.\nNote that, in asking you to self-identify, we are not asking you to completely give up privacy. We do not sell (or give) this contact information to anybody else and we only use it to contact users who are causing problems. Also, any contact information that you provide in your requests will only stay in our logs for 90 days.\nAnd finally, if you are using our REST API for a production service that requires high predictability, you should really consider using our paid-for “Plus” service. This service gets you an authentication token which, in turn, directs your request to a reserved pool of servers that are extremely predictable.\nUnderstand the performance characteristics of REST API queries. If you are using the API for simple reference matching, and are not doing any post validation (e.g. your own ranking of the returned results), then just ask for the first two results (rows=2). This allows you to identify the best result and ignore any where there is a tie in score on the first two results (e.g. an inconclusive match). If you are analyzing and ranking the results yourself, then you can probably get away with just requesting five results (rows=5). Anything beyond that is very unlikely to be a match. In either case- restricting the number of rows returned will be more efficient for you and for the API.\nFor matching references (either complete or partial), use the query.bibliographic parameter and minimize the number of other parameters, filters and facets. Most additional parameters, filters and facets will make the query slower and less accurate. You might be surprised at this advice as it seems counterintuitive, but we assure you the advice is backed up by many millions of tests.\nSpecifically, do not construct your queries like this:\nhttp://api.crossref.org/works?query.author=\u0026#34;Josiah Carberry\u0026#34;\u0026amp;filter=from-pub-date:2008-08-13,until-pub-date:2008-08-13\u0026amp;query.container-title=\u0026#34;Journal of Psychoceramics\u0026#34;\u0026amp;query=\u0026#34;Toward a Unified Theory of High-Energy Metaphysics\u0026#34;\u0026amp;order=score\u0026amp;sort=desc The above is a massively expensive and slow query. If it doesn’t time-out, you are likely to get a false negative anyway.\nAnd also don’t do this:\nhttp://api.crossref.org/works?query=\u0026#34;Toward a Unified Theory of High-Energy Metaphysics, Josiah Carberry 2008-08-13\u0026#34; Using the plain query parameter will search the entire record, including funder and other non-bibliographic elements. 
This means that it will also match any record that includes the query text in these other elements, resulting in many false positives and distorted scores.\nIf you are trying to match references, the simplest approach is the best. Just use the query.bibliographic parameter. It restricts the matching to the bibliographic metadata, and the default sort order and scoring mechanism will reliably list the best match first. Restricting the number of rows to 2 allows you to check whether there is an ambiguous match (e.g. a “tie” in the scores of the first two items returned; see the tip above). So the best way to do the above queries is like this:\nhttp://api.crossref.org/works?query.bibliographic=\u0026#34;Toward a Unified Theory of High-Energy Metaphysics, Josiah Carberry 2008-08-13\u0026#34;\u0026amp;rows=2 Don\u0026rsquo;t use rows and offsets to page through the /works route. They are very expensive and slow. Use cursors instead. We implemented rows and offsets early in the development of the API and regretted it immediately. So we implemented cursors instead and kept rows and offsets so as not to break existing scripts. But they are not recommended.\nReduce the chance of cursor timeouts by segmenting your requests into subsets If you are trying to download large amounts of data from the API, our first suggestion is: don\u0026rsquo;t. Instead use the public data file that we mention above.\nBut if, for some reason, you cannot use the public data file, then you need to know how to use cursors efficiently.\nThe problem is this: if you are doing a long sequence of cursor requests, and the API (or your script) becomes unstable in the middle of the sequence, and you get an error, you will have to start from scratch with a new cursor.\nFor example, if you were attempting to download the entire corpus, then (as of mid-2022) you would need 136,000 successful requests of 1000 rows. In doing this, you would run a high risk of an unsuccessful request and, consequently, you may never be able to download everything.\nThe best strategy to use here is to segment your requests so that you can download smaller subsets of the data. That way, if you encounter an error, you just need to retry downloading the subset, not the entire set. For example, you might choose to segment by Crossref member or by a time period (e.g., week, month, day), as in the sketch below.\nOptimize your requests and pay attention to errors. If you have an overall error (4XX + 5XX) rate \u0026gt;= 10%, please stop your script and figure out what is going on. Don’t just leave it hammering the API and generating errors\u0026ndash;you will just be making other users (and Crossref staff) miserable until you fix your script.\nIf you get a 404 (not found) when looking up a DOI, do not just endlessly poll Crossref to see if it ever resolves correctly. First check to make sure the DOI is a Crossref DOI. If it is not a Crossref DOI, you can stop checking it with us and try checking it with another registration agency’s API. You can check the registration agency to which a DOI belongs as follows:\nhttps://api.crossref.org/works/{doi}/agency Adhere to rate limits. We limit both requests per second and concurrent requests by IP. 
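To make the segmenting, back-off, and Polite-pool advice above concrete, here is a minimal sketch in Python. It assumes the widely used third-party requests library; the contact address, member ID, and date range are placeholders rather than recommendations:

import time
import requests

API = "https://api.crossref.org/works"
MAILTO = "you@example.org"  # placeholder contact address for the Polite pool

def fetch_segment(filters, rows=1000, max_tries=6):
    """Cursor through one small segment of /works, backing off on errors."""
    cursor = "*"
    while True:
        params = {"filter": filters, "rows": rows, "cursor": cursor, "mailto": MAILTO}
        for attempt in range(max_tries):
            response = requests.get(API, params=params, timeout=60)
            if response.status_code == 200:
                break
            time.sleep(2 ** attempt)  # back off exponentially on 429s and 5XX errors
        else:
            raise RuntimeError("giving up on segment " + filters)
        message = response.json()["message"]
        if not message["items"]:
            return
        yield from message["items"]
        cursor = message["next-cursor"]

# Segment by member and by week so that a failure only costs one small re-download.
for work in fetch_segment("member:12345,from-index-date:2022-06-01,until-index-date:2022-06-07"):
    pass  # process each metadata record here

If a segment fails even after backing off, you only have to repeat that one small segment rather than restarting the whole download.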
There can be good reasons to run your scripts on multiple machines with different IPs, but if you do please continue to respect the overall rate limit by restricting each process to working at an appropriate sub-rate of the overall rate limit.\nConcurrent requests are requests that arrive at the same time or overlap in execution while a client is waiting for a response. Running parallel processes against the API, or multiple users using the same IP may result in a temporary block until the number of concurrent requests falls below the current limit.\nRequests that make use of complex queries or filters may result in extended response times. If too many subsequent requests are made before a response is returned, you may run into the concurrent request rate limit. Individual DOI lookups are less likely to trigger the concurrent request rate limit.\nWe generally limit the number of concurrent requests to 5, though we reserve the right to adjust this in order to keep the API operational for all users.\nWhen rate limits are exceeded, requests will receive a 429 HTTP response code and the offending user will be blocked for 10 seconds. If you continue to make requests while blocked, you will be blocked for another 10 seconds. Please monitor HTTP response codes and utilize automated backoff and retry logic in your application.\nAt any time we may change how we apply and enforce rate limits.\nCheck your errors and respond to them. If you get an error\u0026ndash;particularly a timeout error, a rate limit error (429), or a server error (5XX)\u0026ndash;do not just repeat the request or immediately move onto the next request, back-off your request rate. Ideally, back-off exponentially. There are lots of libraries that make this very easy. Since a lot of our API users seem to use Python, here are links to a few libraries that allow you to do this properly:\nBackoff Retry But there are similar libraries for Java, Javascript, R, Ruby, PHP, Clojure, Golang, Rust, etc.\nMake sure you URL-encode DOIs. DOIs can contain lots of characters that need to be escaped properly. We see lots of errors that are simply the result of people not taking care to properly encode their requests. We advise you URL-encode DOIs.\nCache the results of your requests. We know a lot of our users are extracting DOIs from references or other sources and then looking up their metadata. This means that, often, they will end up looking up metadata for the same DOI multiple times. We recommend that, at a minimum, you cache the results of your requests so that subsequent requests for the same resource don’t hit the API directly. Again, there are some very easy ways to do this using standard libraries. In Python, for example, the following libraries allow you to easily add caching to any function with just a single line of code:\nRequests-cache Diskcache Cachew There are similar libraries for other languages.\nIf you are using the Plus API, make sure that you are making intelligent use of the snapshots. Only use the API for requesting content that has changed since the start of the month, and use the metadata already in the snapshot for everything else.\nManaging the snapshot can be cumbersome as it is inconveniently large. Remember that you do not have to uncompress and unarchive the snapshot in order to use it. Most major programming languages have libraries that allow you to open and read files directly from a compressed archive. 
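In Python, for instance, the standard-library tarfile module (which the text goes on to mention) can stream records straight out of the compressed snapshot without unpacking it. This is only a minimal sketch: the snapshot file name and the assumption that each archived file holds a batch of records under an "items" key are illustrative, not a documented format.

import json
import tarfile

SNAPSHOT = "all.json.tar.gz"  # placeholder name for the snapshot you downloaded

# Stream members directly from the compressed archive; no need to unpack it first.
with tarfile.open(SNAPSHOT, mode="r:gz") as archive:
    for member in archive:
        if not member.name.endswith(".json"):
            continue
        batch = json.load(archive.extractfile(member))
        for work in batch.get("items", []):  # assumed per-file layout
            pass  # load each record into your own database here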
For example:\ntarfile If you parallelize the process of reading data from the snapshot and loading it into your database, you should be able to scale the process linearly with the number of cores you are able to take advantage of.\n", "headings": ["Consider not using the API","Pick the right service level.","Understand the performance characteristics of REST API queries.","Don\u0026rsquo;t use rows and offsets to page through the /works route . They are very expensive and slow. Use cursors instead.","Reduce the chance of cursor timeouts by segmenting your requests into subsets","Optimize your requests and pay attention to errors."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/services-chinese/", "title": "Crossref Services - Simplified Chinese", "subtitle":"", "rank": 3, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "Find a service", "tags": [], "description": "Find a service Content Registration Similarity Check Reference Linking Crossmark Metadata Search Metadata APIs Cited-by Content Registration ", "content": "Find a service Content Registration Similarity Check Reference Linking Crossmark Metadata Search Metadata APIs Cited-by Content Registration ", "headings": ["Find a service"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2024-slate/", "title": "Board election 2024 candidates", "subtitle":"", "rank": 2, "lastmod": "2024-09-16", "lastmod_ts": 1726444800, "section": "Board & governance", "tags": [], "description": "Help elect the incoming class of 2025 Crossref board members", "content":["Help","elect","the","incoming","class","of","2025","Crossref","board","members"], "headings": ["Tier 1, Small and mid-sized members (electing two seats)","Tier 2, Large members (electing two seats)","Tier 1, Small and mid-sized members (electing two seats)","Katharina Rieck, Open Science Manager, Austrian Science Fund (FWF)","Austria\n","Personal statement ","Organization statement","Lisa Schiff, Associate Director, Publishing, Archives, \u0026amp; Digitization, California Digital Library","United States\n","Personal statement ","Organization statement","Ejaz Khan, Editor in Chief \u0026amp; Professor, Health Services Academy, Pakistan Journal of Public Health","Pakistan\n","Personal statement ","Organization statement","Karthikeyan Ramalingam, Editorial Manager \u0026amp; Associate Dean, MM Publishers","India\n","Personal statement ","Organization statement","Tier 2, Large members (electing two seats)","Aaron Wood, Head, Product \u0026amp; Content Management, American Psychological Association","United States\n","Personal statement ","Organization statement","Dan Shanahan, Publishing Director, Public Library of Science (PLOS)","United States\n","Personal statement ","Organization statement","Amanda Ward, Director of Open Research, Taylor and Francis","United Kingdom\n","Personal statement ","Organization statement"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2023-slate/", "title": "Board election 2023 candidates", "subtitle":"", "rank": 2, "lastmod": "2023-08-21", "lastmod_ts": 1692576000, "section": "Board & governance", "tags": [], "description": "From 87 applications, our Nominating Committee put forward the following 11 candidates to fill the seven seats open for election to the Crossref Board of Directors. We will elect two large member seats and five small/midsized member seats. 
Please read their candidate statements below.\nIf you are a voting member of Crossref your ‘voting contact’ will receive an email the week of September 25th. Please follow the instructions in that email which includes links to the relevant election process and policy information.", "content":["From","87","applications,","our","Nominating","Committee","put","forward","the","following","11","candidates","to","fill","the","seven","seats","open","for","election","to","the","Crossref","Board","of","Directors.","We","will","elect","two","large","member","seats","and","five","small/midsized","member","seats.","Please","read","their","candidate","statements","below.\nIf","you","are","a","voting","member","of","Crossref","your","‘voting","contact’","will","receive","an","email","the","week","of","September","25th.","Please","follow","the","instructions","in","that","email","which","includes","links","to","the","relevant","election","process","and","policy","information."], "headings": ["Tier 1, Small and mid-sized members (electing five seats)","Tier 2, Large members (electing two seats)","Tier 1, Small and mid-sized members (electing five seats)","Wendy Patterson, Scientific Director, Beilstein-Institut","Germany\n","Personal statement ","Organization statement","Olu Joshua, Director, Lujosh Ventures Limited","Nigeria\n","Personal statement ","Organization statement","Kihong Kim, President, Korean Council of Science Editors","South Korea\n","Personal statement ","Organization statement","Mike Schramm, Managing Director, NISC Ltd","South Africa\n","Personal statement ","Organization statement","Marin Dacos, Senior Advisor, OpenEdition","France\n","Personal statement ","Organization statement","Dr. Ivan Suazo, Vice Rector of Research, Universidad Autónoma de Chile","Chile\n","Personal statement ","Organization statement","Vincas Grigas, Head of Scholarly Journals, Vilnius University","Lithuania\n","Personal statement ","Organization statement","Tier 2, Large members (electing two seats)","Scott Delman, Director of Publications, Association for Computing Machinery (ACM)","United States\n","Personal statement ","Organization statement","James Phillpotts, Director of Content Transformation \u0026amp; Standards, Oxford University Press","United Kingdom\n","Personal statement ","Organization statement","Dan Shanahan, Publishing Director, Public Library of Science (PLOS)","United States\n","Personal statement ","Organization statement","Ashley Towne, Journals Director, University of Chicago Press","United States\n","Personal statement ","Organization statement"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2022-slate/", "title": "Board election 2022 candidates", "subtitle":"", "rank": 2, "lastmod": "2022-09-16", "lastmod_ts": 1663286400, "section": "Board & governance", "tags": [], "description": "If you are a voting member of Crossref your ‘voting contact’ will receive an email the week of September 19th. Please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom about 40 applications, our Nominating Committee put forward the following seven candidates to fill the five seats open for election to the Crossref Board of Directors. 
Please read their candidate statements below.", "content":["If","you","are","a","voting","member","of","Crossref","your","‘voting","contact’","will","receive","an","email","the","week","of","September","19th.","Please","follow","the","instructions","in","that","email","which","includes","links","to","the","relevant","election","process","and","policy","information.\nFrom","about","40","applications,","our","Nominating","Committee","put","forward","the","following","seven","candidates","to","fill","the","five","seats","open","for","election","to","the","Crossref","Board","of","Directors.","Please","read","their","candidate","statements","below."], "headings": ["Tier 1, Small and mid-sized members","Tier 2, Large members","Tier 1, Small and mid-sized members (electing one seats)","Damian Pattinson, eLife, UK\n","Personal statement ","Organization statement","Déclaration personnelle du candidat","Déclaration organisationnelle","Declaração pessoal do candidato","Declaração da organização","Declaración personal del candidato","Declaración de la organización","후보자의 개인 소개","단체 소개","Oscar Donde, Pan Africa Science Journal, Kenya\n","Personal statement\n","Organization statement","Déclaration personnelle du candidat","Déclaration organisationnelle","Declaração pessoal do candidato","Declaração da organização","Declaración personal del candidato","Declaración de la organización","후보자의 개인 소개","단체 소개","Tier 2, Large members (electing four seats)","Christine Stohn, Clarivate, US\n","Personal statement ","Organization statement ","Déclaration personnelle de la candidate","Déclaration organisationnelle","Declaração pessoal da candidata","Declaração da organização","Declaración personal de la candidata","Declaración de la organización","후보자의 개인 소개","단체 소개","Rose L\u0026rsquo;Huillier, Elsevier, Netherlands\n","Personal statement","Organizational statement ","Déclaration personnelle de la candidate","Déclaration organisationnelle","Declaração pessoal da candidata","Declaração da organização","Declaración personal de la candidata","Declaración de la organización","후보자의 개인 소개","단체 소개","Nick Lindsay, The MIT Press, US\n","Personal statement","Organizational statement ","Déclaration personnelle du candidat","Déclaration organisationnelle","Declaração pessoal do candidato","Declaração da organização","Declaración personal del candidato","Declaración de la organización","후보자의 개인 소개","단체 소개","Anjalie Nawaratne, Springer Nature, UK\n","Personal statement","Organizational statement ","Déclaration personnelle de la candidate","Déclaration organisationnelle","Declaração pessoal da candidata","Declaração da organização","Declaración personal de la candidata","Declaración de la organización","후보자의 개인 소개","단체 소개","Allyn Molina, Wiley, US\n","Personal statement","Organizational statement ","Déclaration personnelle de la candidate","Déclaration organisationnelle","Declaração pessoal da candidata","Declaração da organização","Declaración personal de la candidata","Declaración de la organización","후보자의 개인 소개","단체 소개"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2021-slate/", "title": "Board election 2021 candidates", "subtitle":"", "rank": 2, "lastmod": "2021-08-11", "lastmod_ts": 1628640000, "section": "Board & governance", "tags": [], "description": "If you are a voting member of Crossref your ‘voting contact’ will receive an email in late September, please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 
61 applications, our Nominating Committee put forward the following eight candidates to fill the five seats open for election to the Crossref Board of Directors. Please read their candidate statements below.\nTier 1, Small and mid-sized members Member organization Candidate standing Title Org type Country California Digital Library, University of California Lisa Schiff Associate Director, Publishing, Archives, and Digitization Library US Center for Open Science Nici Pfeiffer Chief Product Officer Researcher service US Melanoma Research Alliance Kristen Mueller Senior Director, Scientific Program Research funder US Morressier Sebastian Rose Head of Data Data repository Germany NISC Mike Schramm Managing Director Publisher South Africa Tier 2, Large members Member organization Candidate standing Title Org type Country AIP Publishing Penelope Lewis Chief Publishing Officer Society US American Psychological Association (APA) Jasper Simons Chief Publishing Officer Society US Association for Computing Machinery Scott Delman Director of Publications Society US Tier 1, Small and mid-sized members (electing three seats) Lisa Schiff, University of California, USA", "content":["If","you","are","a","voting","member","of","Crossref","your","‘voting","contact’","will","receive","an","email","in","late","September,","please","follow","the","instructions","in","that","email","which","includes","links","to","the","relevant","election","process","and","policy","information.\nFrom","61","applications,","our","Nominating","Committee","put","forward","the","following","eight","candidates","to","fill","the","five","seats","open","for","election","to","the","Crossref","Board","of","Directors.","Please","read","their","candidate","statements","below.\nTier","1,","Small","and","mid-sized","members","Member","organization","Candidate","standing","Title","Org","type","Country","California","Digital","Library,","University","of","California","Lisa","Schiff","Associate","Director,","Publishing,","Archives,","and","Digitization","Library","US","Center","for","Open","Science","Nici","Pfeiffer","Chief","Product","Officer","Researcher","service","US","Melanoma","Research","Alliance","Kristen","Mueller","Senior","Director,","Scientific","Program","Research","funder","US","Morressier","Sebastian","Rose","Head","of","Data","Data","repository","Germany","NISC","Mike","Schramm","Managing","Director","Publisher","South","Africa","Tier","2,","Large","members","Member","organization","Candidate","standing","Title","Org","type","Country","AIP","Publishing","Penelope","Lewis","Chief","Publishing","Officer","Society","US","American","Psychological","Association","(APA)","Jasper","Simons","Chief","Publishing","Officer","Society","US","Association","for","Computing","Machinery","Scott","Delman","Director","of","Publications","Society","US","Tier","1,","Small","and","mid-sized","members","(electing","three","seats)","Lisa","Schiff,","University","of","California,","USA"], "headings": ["Tier 1, Small and mid-sized members","Tier 2, Large members","Tier 1, Small and mid-sized members (electing three seats)","Lisa Schiff, University of California, USA\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Nici Pfeiffer, Center for Open Science, USA\n","Organization statement","Personal statement\n","Déclaration de l’organisation","Déclaration personnelle","Declaração 
da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Kristen Mueller, Melanoma Research Alliance, USA\n","Organization statement","Personal statement\n","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Sebastian Rose, Morressier, Germany\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Mike Schramm, NISC, South Africa\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Tier 2, Large members (electing two seats)","Penelope Lewis, AIP Publishing, USA\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Jasper Simons, American Psychological Association (APA), USA\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Scott Delman, Association for Computing Machinery (ACM), USA\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/crossref-annual-meeting/archive/", "title": "Annual meetings archive", "subtitle":"", "rank": 2, "lastmod": "2020-09-01", "lastmod_ts": 1598918400, "section": "Crossref Annual Meeting", "tags": [], "description": "An archive of every annual Crossref meeting since 2002, now called Crossref LIVE. Jump to a year to review the programs and speakers, and link to some recordings and slides.\n2024 2023 2022 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2024 annual meeting October 29 | Online | Twitter Hashtag: #Crossref2024\nPlease see information from #Crossref2024 below, and cite the outputs as #Crossref2024 Annual Meeting and Board Election, 29 October 2024 retrieved [date], https://doi.", "content": "An archive of every annual Crossref meeting since 2002, now called Crossref LIVE. 
Jump to a year to review the programs and speakers, and link to some recordings and slides.\n2024 2023 2022 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2024 annual meeting October 29 | Online | Twitter Hashtag: #Crossref2024\nPlease see information from #Crossref2024 below, and cite the outputs as #Crossref2024 Annual Meeting and Board Election, 29 October 2024 retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/1KJ1GBDA9B:\nYouTube recording Slides Posters Board election results 2023 annual meeting October 31 | Online | Twitter Hashtag: #Crossref2023\nPlease see information from #Crossref2023 below, and cite the outputs as #Crossref2023 Annual Meeting and Board Election, October 31, 2023 retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/h3yygefpyf\nYouTube recording Google slides or pdf slides #Crossref2023 Mastodon stream #Crossref2023 Twitter stream Posters from community guest speakers Board election results 2022 annual meeting October 26 | Online | Twitter Hashtag: #CRLIVE22\nTheme: The Research Nexus, the importance of relationships in metadata and community stories of building the research nexus together.\nPlease check out the materials from LIVE22 below, and cite the outputs as Crossref Annual Meeting LIVE22, October 26, 2022 retrieved [date], [https://0-doi-org.libus.csd.mu.edu/10.13003/i3t7l9ub7t]:\nYouTube recording Recording transcript Zoom Q\u0026amp;A transcript Google slides or pdf slides Twitter thread #CRLIVE22 Posters from community guest speakers Board election results\n2021 annual meeting 9 November | Online | Twitter Hashtag: #CRLIVE21\nPlease check out the materials from LIVE21 below, and cite the outputs as Crossref Annual Meeting LIVE21, November 9, 2021 retrieved [date], [https://0-doi-org.libus.csd.mu.edu/10.13003/s0slxfq]:\nYouTube recording Recording transcript Zoom Q\u0026amp;A transcript Google slides or pdf slides Board election results\n2020 annual meeting 10 November | Online | Twitter Hashtag: #CRLIVE20\nCrossref turned 20 in 2020 and despite the challenges in the world lately, it felt like a natural milestone to take our annual meeting, LIVE20, online.\nPlease check out the materials from LIVE20 below, and cite the outputs as Crossref Annual Meeting LIVE20, November 10, 2020, retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/5gq8v1q:\nYouTube recording Recording transcript Zoom Q + A transcript Google slides or pdf slides Board election results\n2019 annual meeting 13 - 14 November, 2019 | Tobacco Theater, Amsterdam | Twitter Hashtag: #CRLIVE19\nTheme: Have your say where and it included an afternoon of scene-setting talks from Ed on our strategy and trends, and Ginny on research into the value of Crossref.\nDay 1 - Recording\nRecorded sessions from November 13, 2019\nSlides Ed Pentz and Ginny Hendricks - \u0026quot;Welcome\u0026quot;, \u0026quot;Perceived value of Crossref\u0026quot;, \u0026quot;Strategic scene-setting\u0026quot; (PDF) Catriona Maccallum of Hindawi - “In our own words: Hindawi” (PDF) Todd Toler of Wiley - “Crossref’s Value in an era of Open Science” (PDF) Anna Danilova of Ukrinformauka - “Cooperation of SA “Ukrinformnauka” with Crossref (PDF) Christian Gutknecht of the Swiss National Science Foundation - “Crossref from a funder perspective” (PDF) Ludo Waltman of CWTS at Leiden University - “Researcher and metadata user view” (PDF) Day 2 - Recording\nRecorded sessions November 14, 2019\nSlides Slides for Workshop 1 Slides for Workshop 2 Slides 
for Workshop 2 Slides for the Workshop 'rollup' Get the feel of the event by listening to talks from the session entitled “In their own words\u0026hellip;” from: Todd Toler of Wiley, Catriona Maccallum of Hindawi, Anna Danilova of Ukrinformauka, Christian Gutknecht of the Swiss National Science Foundation, and Ludo Waltman of CWTS at Leiden University. The podcast also includes some of our guest’s comments about the value of Crossref.\nPlease note there will be no in-person event around the November 2020 annual election, which will instead be announced virtually. We want to take stock of the feedback from LIVE19, gather more from others throughout 2020, and following up on the value research outputs we\u0026rsquo;ll be ready to report progress in 2021. Please check to see if there will be a LIVE local near you instead.\n2018 annual meeting 13 - 14 November, 2018 | Toronto Reference Library, Canada | Twitter Hashtag: #CRLIVE18\nTheme: How good is your metadata? where Crossref staff and invited speakers inspired us with their metadata tales of woe and wonder. It was two full days packed with a mixture of plenary sessions, the results of our members\u0026rsquo; newly elected board members, and interactive activities.\nCrossrefLIVE18.sched.com agenda #CRLIVE18 Twitter stream Recordings from Day 1 Recordings from Day 2 2017 annual meeting 14 - 15 November, 2017 | 8:00AM - 5:45PM | Hotel Fort Canning, Singapore | Twitter Hashtag: #CRLIVE17\nTheme: We focused on the theme of Metadata + Relations + Infrastructure = Context. We had our fullest program yet, with the broadest representation yet - researchers, editors, journal and book people, publishing execs, librarians, and funders. We talked a lot about relationships: between metadata; between research outputs; and between the scholarly community.\nAgenda Tuesday, November 14\n08:00 Registration 09:00 Ed Pentz: Year in Review \u0026amp; Strategy Introduction Video Recording 09:30 Jennifer Lin: Metadata for the Scholarly Map Video Recording 10:00 Theodora Bloom: Handling the Dynamic Nature of Scholarly Communications Video Recording 10:30 Break 11:00 Amanda Bartell, Susan Collins, Ginny Hendricks: How to Win at Being a Crossref Membern Video Recording Vanessa Fairhurst, Rachael Lammey: Reaching our International Community Video Recording Madeleine Watson and Patricia Feeney: Relations, Translations, \u0026amp; Versions - Oh My! Video Recording Madeleine Watson and Chuck Koscher: This New Metadata Manager Will Change Your Life Video Recording 12:30 Lunch 13:30 Yin-Leng Theng: ARIA: A Scholarly Metrics Information System for Universities Video Recording Mark Patterson: I4OC: The Initiative for Open Citations Video Recording Nicholas Bailey: What does data science tell us about social challenges in scholarly publishing? Video Recording Jennifer Kemp and Madeleine Watson: Exploring relationships with Event Data Video Recording 15:00 Break 15:30 P Showraiah, Venkatraman Anandalkshmn, Brandon Koh, Alisha Ramos: Singapore Three Minute Thesis (3MT) competition finalists Video Recording 15:45 Casey Greene: Research and Literature Parasites in a Culture of Sharing Video Recording 16:25 John Chodacki: Metadata 2020: What Could Richer Metadata Enable? 
Video Recording 16:45 Lisa Hart Martin: Annual Meeting \u0026amp; Board Election 17:15 Ed Pentz, P Showraiah, Jean Christophe \u0026amp; We Are CONfidence: Drinks Reception \u0026amp; Entertainment\nWednesday, November 15 08:30 Coffee 09:00 Geoffrey Bilder: Information Trust: Metadata = provenance; provenance = key to trust Video Recording 09:30 Doreen Liu: Scholarly Publishing in Asia today Video Recording 10:00 Trevor Lane: Introduction to COPE \u0026amp; Research Integrity Video Recording 10:20 Miguel Escobar Varela: Digital Humanities in Singapore: some thoughts for the future Video Recording 10:40 Break 11:10 Geoffrey Bilder: You\u0026rsquo;ll never guess who uses Crossref metadata the most Video Recording 11:30 Drop-in: Trevor Lane: Bring Your Ethics Issues to Discuss with COPE Drop-in: Gareth Malcolm and Madeleine Watson: our Similarity Check Questions Answered Drop-in: Jennifer Lin: Bring your Crossmark questions around updates and corrections 12:30 Lunch 13:30 Jennifer Kemp: Introducing the Crossref Plus service Video Recording 13:50 Alan Rutledge: Anatomy of a personalized research recommendation engine Video Recording 14:00 Kuansan Wang: Democratize access to scholarly knowledge with AI Video Recording 14:15 Lenny Teytelman: Call to Reduce Random Collisions with Information Video Recording 14:45 Break 15:15 Ed Pentz: Crossref Strategic Outlook Video Recording 15:40 Paul Peters: The OI Project: Disambiguating affiliation names Video Recording 15:55 Adam Hyde: Developing an open source publishing ecosystem founded on community collaboration Video Recording 16:15 Liz Allen: New models of publishing research outputs: the importance of infrastructure Video Recording 16:40 Ed Pentz: Wrap-up \u0026amp; Key Takeaways\n2016 annual meeting 2 November, 2016 | 8:30AM - 6PM | The Royal Society, London, UK | Twitter Hashtag: #LIVE16\nTheme: Smart alone, brilliant together\nAgenda 08:30 – 10:00 Registration and Breakfast 10:00 – 10:30 Dario Taraborelli, Wikimedia: Citations for the sum of all human knowledge (as linked open data) Video Recording 10:30 – 11:00 Ian Calvert, Digital Science: \u0026ldquo;You don\u0026rsquo;t have metadata Video Recording 11:00 – 11:30 Break 11:30 – 11:50 Ed Pentz: Crossref’s outlook \u0026amp; key priorities Video Recording 11:50 – 12:10 Ginny Hendricks: A vision for membership Video Recording 12:10 – 12:30 Lisa Hart Martin: The meaning of governance Video Recording 12:30 – 13:00 Business meeting \u0026amp; Election results, with Crossref\u0026rsquo;s Chair and Treasurer. Moderators: Lisa Hart Martin 13:00 – 14:00 Join your colleagues for a hot lunch 14:00 – 14:20 Geoffrey Bilder: The case of the missing leg Video Recording 14:20 – 14:40 Jennifer Lin: New territories in the Scholarly Research Map Video Recording 14:40 – 15:00 Chuck Koscher: Relationships and other notable things Video Recording 15:00 – 15:30 Break 15:30 – 16:00 Carly Strasser, Gordon and Betty Moore Foundation: Funders and Publishers as Agents of Change Video Recording 16:00 – 16:30 April Hathcock, New York University: Opening Up the Margins Video Recording 16:30 – 16:45 Closing Remarks 16:45 – 18:00 Reception\nGet a flavor of the event by listening to three of the most popular talks from 2016. 
The podcast also includes some of our guest\u0026rsquo;s comments about the talks, including Crossref\u0026rsquo;s Geoffrey Bilder on our plans for a new organization identifier registry, Dario Taraborelli from the Wikimedia Foundation, and April Hathcock of New York University.\n2015 annual meeting 18 November, 2015 | 8:30AM - 5PM | The Taj Hotel, Boston, MA, USA | Twitter Hashtag: #crossref15\nAgenda 08:30 Registration and Breakfast\n09:00 (Optional) Taxonomies Interest Group\n09:00 (Optional) Business Meeting\n09:30 Registration and Breakfast\n09:55 Welcome\n10:00 Marc Abrahams, Improbable Research : Improbable research, the Ig Nobel Prizes, and you: Presentation | Video Recording\n10:45 Juan Pablo Alperin, Public Knowledge Project and Crossref: Two P\u0026rsquo;s in a Cross Presentation | Video Recording\n11:30 Break\n12:00 Ed Pentz: Executive Update Presentation | Video Recording\n12:20 Ginny Hendricks: Outreach \u0026amp; Brand Presentation | Video Recording\n12:45 Lunch\n13:45 Jennifer Lin: Products \u0026amp; Services Presentation | Video Recording\n14:15 Chuck Koscher: The Metadata Engine Presentation | Video Recording\n14:35 Geoffrey Bilder: Strategic Initiatives Presentation | Video Recording\n15:00 Break\n15:30 Scott Chamberlain, rOpenSci: Thinking programmatically Video Recording\n16:15 Martin Eve, Birbeck, University of London: Open Access \u0026amp; The Humanities Presentation | Video Recording\n17:00 Ed Pentz: Closing Remarks\n17:05 Champagne Reception\n2014 annual meeting 12 November, 2012 | 8:30AM - 6:30PM | The Royal Society, London, UK | Twitter Hashtag: #crossref14\nAgenda 8:30 - 10:00 Registration and Breakfast 9:15 - 9:45 Corporate Annual Meeting for Members and Board Election (Ian Bannerman, Chair, Board of Directors, Bernard Rous, Treasurer, Ed Pentz, Executive Director, Lisa Hart, Secretary) 10:00 - 10:20 Ed Pentz, Executive Director: Main Open Meeting, Introduction and CrossRef Overview Presentation | Video Recording 10:20 - 10:40 Chuck Koscher, Director of Technology: System Update Presentation | Video Recording 10:40 - 11:00 Geoffrey Bilder, Director of Strategic Initiatives: Strategic Initiatives Update Video Recording 11:00 - 11:30 Break 11:30 - 12:15 Branding, Carol Anne Meyer, Business Development and Marketing: CrossRef Flash Update Presentation | Video Recording Rachael Lammey, Product Manager: CrossCheck \u0026amp; CrossRef Text and Data Mining Presentation | Video Recording Kirsty Meddings, Product Manager: CrossMark \u0026amp; FundRef Presentation | Video Recording Karl Ward, R\u0026amp;D Programmer: CrossRef Metadata Search Presentation | Video Recording Ed Pentz, Executive Director: ORCID Presentation | Video Recording 12:15 - 13:15 Lunch 13:15 - 14:15 Keynote: Laurie Goodman, PhD., GigaScience: Ways and Needs to Promote Rapid Data Sharing Presentation | Video Recording 14:15-14:45 Break 14:45-16:00 Moderator: Carol Anne Meyer, Business Development and Marketing: Improving Peer Review Panel Adam Etkin, PRE: Securing Trust \u0026amp; Transparency in Peer Review Presentation | Video Recording bioRxiv: the preprint server for biology: Richard Sever, Cold Spring Harbor Laboratory Press Presentation | Video Recording Mirjam Curno, Frontiers: Frontiers’ Collaborative Peer Review Presentation | Video Recording Janne-Tuomas Seppänen, Peerage of Science: Do it once, do it well – questioning submission and peer review traditions Presentation | Video Recording 16:00-17:00 Richard A. 
Jefferson, Cambia: Innovation Cartography: Creating impact from scholarly research requires maps not metrics Video Recording 17:00-17:15 Wrap Up 17:15-18:15 Cocktail reception\n2013 annual meeting 13 November, 2013 | 8:30AM - 5PM | The Charles Hotel, Cambridge, MA, USA | Twitter Hashtag: #crossref13\nAgenda 8:30 - 10:00 Registration and Breakfast 9:15 - 9:45 Corporate Annual Meeting for Members and Board Election –– Bernie Rous, Treasurer, Board of Directors –– Ed Pentz, Executive Director –– Lisa Hart, Secretary 10:00 - 10:20 Ed Pentz, Executive Director: Main Open Meeting: Introduction and CrossRef Overview Video recording \u0026amp; presentation 10:20 - 10:40 Chuck Koscher, Director of Technology: System Update Video recording \u0026amp; presentation 10:40 - 11:00 Geoffrey Bilder, Director of Strategic Initiatives : Strategic Initiatives Update Video recording \u0026amp; presentation 11:00 - 11:30 Break 11:30 - 12:15 CrossRef Flash Update –– Carol Anne Meyer, Business Development and Marketing: Branding Video recording \u0026amp; presentation –– Rachael Lammey, Product Manager: CrossCheck \u0026amp; CrossMark Video recording \u0026amp; presentation –– Kirsty Meddings, Product Manager: FundRef Video recording \u0026amp; presentation –– Karl Ward, R\u0026amp;D Programmer: CrossRef Metadata Search Video recording \u0026amp; presentation –– Ed Pentz, Executive Director: ORCID Video recording \u0026amp; presentation 12:15 - 13:15 Lunch 13:15 - 14:15 Keynote: Heather Piwowar, co-founder Impactstory: Building skyscrapers with our scholarship Video recording \u0026amp; presentation 14:15 - 14:45 Kristen Fisher Ratan, PLOS: Agile Publishing: responding to the changing requirements in scholarly communication Video recording \u0026amp; presentation 14:45 - 15:15 Break 15:15 - 16:00 Walter Warnick, Director of the U.S. Department of Energy Office of Scientific and Technical Information (OSTI): How CrossRef has Accelerated Science and Its Promise for the Future: A Federal Perspective Video recording \u0026amp; presentation 16:00 - 16:45 CLOCKSS and Portico, Randy S. 
Kiefer, CLOCKSS and Kate Wittenberg, PORTICO: Archiving Publishing Panel: United on Preservation Video recording \u0026amp; presentation 16:45 - 17:00 Wrap Up\n2012 annual meeting 14 November, 2012 | 8:30AM - 6:30PM | The Royal Society, London, UK | Twitter Hashtag: #crossref12\nAgenda 8:30 - 10:00 Registration and Breakfast 9:15 - 9:45 Corporate Annual Meeting for Members and Board Election –– Linda Beebe, Chair, Board of Directors –– Ian Bannerman, Treasurer, Board of Directors –– Lisa Hart, Secretary –– Ed Pentz, Executive Director 10:00 - 10:20 Ed Pentz, Executive Director: Main Open Meeting, Introduction and CrossRef Overview Presentation | Video Recording 10:20 - 10:40 Chuck Koscher, Director of Technology: System Update Video Recording 10:40 - 11:00 Geoff Bilder, Director of Strategic Initiatives: Strategic Initiatives Update Presentation | Video Recording 11:00 - 11:30 Break 11:30 - 12:00 Laure Haak, ORCID: The Role of ORCID in the Research Community Presentation | Video Recording 12:00 - 13:00 Lunch 13:00 - 14:00 Jason Scott, Archive Team: CITIES ON THE EDGE OF NEVER: Life in the Trenches of the Web in 2012 Video Recording 14:00-15:00 Carol Anne Meyer, Business Development and Marketing: Global Publishing Panel: Perspectives of using CrossRef from publishers in Lithuania, Brazil, South Korea and China ––Eleonora Dagiene, Vilnius Gediminas University Press Presentation | Video Recording ––Edilson Damasio, Universidade Estadual de Maringá – UEM - Eduem Presentation | Video Recording ––Choon Shil Lee, Sookmyung Women’s University, KAMJE Presentation | Video Recording ––Yan Shuai, Tsinghua University Press (TUP) Presentation | Video Recording 15:00 - 15:30 FundRef: In response to the need to standardize the collection and display of funding information for scholarly publications, CrossRef officially launched the FundRef project in March of 2012. Four funding agencies and seven publishers are working together to carry out a pilot project, with the goal of developing and demonstrating a community-wide solution. The pilot group plans to issue recommendations for full integration of funding information in early 2013. 
––Fred Dylla, American Institute of Physics (AIP) Presentation | Video Recording ––Kevin Dolby, Wellcome Trust Presentation | Video Recording 15:30 - 16:00 Break 16:00 - 16:30 Rachael Lammey, Product Manager: CrossCheck and CrossMark Update Presentation | Video Recording 16:30 - 17:00 Virginia Barbour, PLOS, Committee on Publications Ethics (COPE): Plagiarism as seen from the editors’ perspective Presentation | Video Recording 17:00 - 17:15 Wrap Up 17:15 - 18:30 Cocktail Reception\n2011 annual meeting 15 November, 2011 | 8:30AM - 6:30PM | The Charles Hotel, Cambridge, MA, USA | Twitter Hashtag: #crossref11\nAgenda 8:30 - 10:00 Registration and Breakfast 9:00 - 9:45 Corporate Annual Meeting for Members and Board Election –– Linda Beebe, Chair, Board of Directors –– Ian Bannerman, Treasurer, Board of Directors –– Ed Pentz, Executive Director 10:00 - 10:20 Main Open Meeting, Introduction and Crossref Overview, Ed Pentz, Executive Director Presentation | Video Recording 10:20 - 10:40 System Update, Chuck Koscher, Director of Technology Presentation | Video Recording 10:40 - 11:00 Strategic Initiatives Update, Geoff Bilder, Director of Strategic Initiatives Video Recording 11:00 - 11:30 Break 11:30 - 11:45 Crossref Member Obligations (including Display Guidelines), Carol Anne Meyer, Business Development and Marketing Presentation | Video Recording 11:45 - 12:15 CrossMark Update Evan Owens, American Institute of Physics [Presentation] | Video Recording Kirsty Meddings, Product Manager Presentation | Video Recording 12:15 - 12:45 ORCID Update, Howard Ratner, Nature Publishing Group Presentation | Video Recording 12:45 - 13:15 DataCite: the Perfect Complement to Crossref, James Mullins, Purdue University Presentation | Video Recording 13:15 - 14:15 Lunch 14:15 - 15:15 Sex and the Scientific Publisher: How Journals and Journalists Collude (despite their best intentions) to Mislead the Public, Ellen Ruppel Shell, Boston University Center for Science \u0026amp; Medical Journalism Presentation | Video Recording 15:15 - 15:45 The Persistence of Error: A Study of Retracted Articles on the Internet, Phil Davis, Publishing Consultant Presentation | Video Recording\n15:45 - 16:15 Break 16:15 - 16:45 Results from global journal editor survey on detecting plagiarism, Helen (Y.H) ZHANG, JZUS (Journal of Zhejiang University-SCIENCE) Video Recording 16:45 - 17:15 The Good, the Bad and the Ugly: What Retractions Tell Us About Scientific Transparency, Ivan Oransky, Retraction Watch Presentation | Video Recording 17:15 - 17:30 Wrap up 17:30 - 18:30 Cocktail Reception\n2010 annual meeting 16 November, 2010 | 8:30AM - 6PM | One Great George Street, London, UK | Twitter Hashtag: #crossref10\nCredibility in Scholarly Communications\nAgenda 8:30 - 9:30 Registration and Breakfast 9:00 - 9:45 Corporate Annual Meeting for Members and Board Election –– Bob Campbell, Chair, Crossref Board of Directors –– Linda Beebe, Treasurer, Crossref Board of Directors Video Presentation –– Ed Pentz, Executive Director Video Presentation 10:00 - 10:15 Main Open Meeting, Introduction and Crossref Overview, Ed Pentz, Executive Director Video Presentation 10:15 - 10:30 System Update, Chuck Koscher, Director of Technology Video Presentation 10:30 - 10:45 Strategic Initiatives Update, Geoff Bilder, Director of Strategic Initiatives Video Presentation 10:45 - 11:00 CrossCheck Kirsty Meddings, Product Manager Video Presentation 11:00 - 11:30 Break 11:30 - 11:35 Transparency in Funding Sources, H.
Frederick Dylla, American Institute of Physics Video Presentation 11:35 - 12:10 CrossMark Prototype Demo, Carol Anne Meyer, Business Development and Marketing Video Presentation 12:10 - 12:30 ORCID Update, Howard Ratner, Nature Publishing Group Video Presentation 12:30 - 1:30 Lunch 13:30 - 14:30 Making sense of science and evidence, Tracey Brown, Sense About Science Video Presentation 14:30 - 15:00 Which scientists can we trust? Christine Ottery, Science Journalist Video Presentation 15:00 - 15:30 Break 15:30 - 16:00 Scholarly eBooks - Improving discoverability and usage, Carol Anne Meyer, Business Development and Marketing Manager Video Presentation 16:00 - 17:00 Publishing Data alongside Analysis: a case study from OECD, Toby Green, Organisation for Economic Co-operation and Development (OECD) Video Presentation Communicating Data: New Roles for Researchers, Publishers and Libraries, MacKenzie Smith, Massachusetts Institute of Technology Libraries Video Presentation 17:00 - 17:15 Wrap up 17:15 - 18:00 Cocktail Reception\n2009 annual meeting 10 November, 2009 | 8:30AM - 5PM | The Charles Hotel, Cambridge, MA, USA Twitter Hashtag: #crossref09\nAgenda 8:30 - 9:00 Registration and Breakfast 9:00 - 10:00 Corporate Annual Meeting, Board Election and Strategy Overview Bob Campbell, Chair, Crossref Board of Directors Linda Beebe, Treasurer, Crossref Board of Directors Presentation Ed Pentz, Executive Director Presentation 10:00 - 10:20 System Update, Chuck Koscher, Director of Technology Presentation 10:20 - 10:50 Break 10:50 - 11:15 Strategic Initiatives Update, Geoff Bilder, Director of Strategic Initiatives Presentation 11:15 - 12:00 Crossref DOIs in Use and Branding Guidelines, Carol Anne Meyer, Marketing and Business Development Manager Presentation 12:00 - 13:15 Lunch 13:15 - 14:15 Trust, Communication and Academic Publication, Professor Onora O\u0026rsquo;Neill, Faculty of Philosophy, University of Cambridge (Baroness O\u0026rsquo;Neill of Bengarve) 14:15 - 15:30 CrossCheck: Views from the Field. Kirsty Meddings, Product Manager Presentation Panelists: Phillip E. Canuto, Executive Editor, Cleveland Clinic Journal of Medicine Presentation Cathy Griffin, Journal of Bone and Joint Surgery Presentation Howard Ratner, Nature Publishing Group Presentation 15:30 - 16:00 Break 16:00 - 16:45 Plagiarism in the Academy: Now What Do We Do?, T. 
Scott Plutchak, Director, Lister Hill Library of the Health Sciences, University of Alabama at Birmingham Presentation 16:45 - 17:00 Wrap up\n2008 annual meeting 18 November, 2008 | 9AM - 6PM | Lenox Hotel, Boston, MA\nTheme: Towards the Future of Scientific Communication\nAgenda 9:00 - 10:00 Registration and coffee/tea 9:30 - 10:00 Corporate Annual Meeting, Board Election, and Strategy Overview - Bob Campbell, Chair, Crossref Board of Directors; Linda Beebe, Treasurer, Crossref Board of Directors; Ed Pentz, Executive Director 10:00 - 10:20 Intro to Annual Member Meeting and Crossref Mission, Ed Pentz, Executive Director 10:20 - 10:40 System Update, New Services, MyCrossref, Chuck Koscher, Crossref Technology Director 10:40 - 11:00 CrossCheck Update, Ed Pentz, Executive Director 11:00 - 11:30 Coffee and tea break 11:30 - 12:15 New Strategic Initiatives, Geoff Bilder, Director of Strategic Initiatives 12:15 - 13:30 Lunch 13:30 - 14:15 Karen Hunter, Senior Vice President, Global Academic and Customer Relations, Elsevier, on Opportunities for Cooperation 14:15 - 15:10 Jonathan Zittrain, Co-Founder and Faculty Co-Director of the Berkman Center for Internet \u0026amp; Society, on his book The Future of the Internet and How to Stop It 15:10 - 15:30 Coffee and tea break 15:30 - 16:15 John Wilbanks, VP of Science at Creative Commons, on Building a Knowledge Network 16:15 - 17:15 Natalie Angier, Pulitzer Prize-winning author of, most recently, The Canon, will provide a fun and lively look at science literacy and how publishers aid and abet the cause 17:15 - 17:30 Wrap up 17:30 - 18:30 Cocktail reception\n2007 annual meeting 1 November, 2007 | 9AM - 6:30PM | The Royal College of Surgeons of England, London, UK\nAgenda 9:00 - 9:30 Registration and new member coffee 9:30 - 10:00 Corporate Annual Meeting, Board Election, and Strategy Overview Anthony Durniak, Chair, Crossref Board of Directors Bob Campbell, Vice-Chairman, Crossref Board of Directors Ed Pentz, Executive Director 10:00 - 10:20 System Update, New Services, Chuck Koscher, Crossref Technology Director 10:20 - 10:40 Strategic Initiatives Update, Geoff Bilder, Crossref Director of Strategic Initiatives 10:40 - 11:00 Coffee and tea break 11:00 - 11:40 Dr. Kieron O\u0026rsquo;Hara. Electronic Publishing and Public Trust in Science. Senior Research Fellow, School of Electronics and Computer Science, University of Southampton. 11:40 - 12:20 Alex Frost. Sermo as a model for online information sharing: relations to open science, open review, and post-publication review. Vice President for Research Initiatives, Sermo 12:20 - 13:30 Lunch 13:30 - 14:10 Dr. Ben Goldacre. On Popular Misunderstanding of Science. Medical Doctor who writes the Bad Science column in the Guardian. 14:10 - 14:50 Richard Kidd. Project Prospect - Introducing Semantics into Chemical Science Publishing. Project Manager, Royal Society of Chemistry. 14:50 - 15:10 Coffee and tea break 15:10 - 15:50 Pritpal S Tamber. Faculty of 1000 Medicine: Post Publication Peer Recommendation. Managing Director, Faculty of 1000 Medicine. 15:50 - 16:30 Edward Wates. Trustworthiness: Does the publisher have a role to play? UK Journal Production Director, Blackwell Publishing. 16:30 - 17:30 Sally Morris. Quality and Trust in Scholarly Publishing. Editor-in-Chief, Learned Publishing, ALPSP. Audience questions and discussion.
17:30 - 18:30 Cocktail reception\n2006 annual meeting 1 November, 2006 | 9AM-6PM | The Charles Hotel, Cambridge, MA, USA\nBuilding on Success\nAgenda 9:30 - 10:00 Registration and new member coffee 10:00 - 10:30 Corporate Annual Meeting, Board Election, and Strategy Overview, Anthony Durniak, Chair, Crossref Board of Directors; Robert Campbell, Treasurer, Crossref Board of Directors; Ed Pentz, Executive Director 10:30 - 10:50 System Update, New Services, and Data Quality Initiative Chuck Koscher, Crossref Technology Director 10:50 - 11:10 Board Committee Updates, with introduction by Amy Brand, Director of Business and Product Development; Howard Ratner, Chair, CWS Committee; Bernard Rous, Chair, Institutional Repositories Committee 11:10 - 11:30 Coffee and tea break 11:30 - 12:15 KEYNOTE: Tim Berners-Lee, Director of the World Wide Web Consortium Identifying and describing things on the web 12:15 - 13:40 Lunch Changes in Research and their Impact on Publishing 13:40 - 14:20 DOIs for Biological Databases, Phil Bourne, Protein Data Bank 14:00 - 14:40 Developments in Author Identification, Niels Weertman, Scopus; Taking the guesswork out of author searching, James Pringle, ISI; Smith, Lee, and Hirano T: How and Why to Find Authors 14:40 - 15:00 Coffee and tea break 15:00 - 15:40 The Future of Archiving, starting with the present, Michael Keller, Stanford 15:40 - 17:30 \u0026ldquo;Building on Success\u0026rdquo; (moderated session) 15:40 - 16:00 Introduction by Tony Durniak, Chair of Crossref 16:00 - 17:00 Panel of Crossref members discussing possible future Crossref developments: Terry Hulbert, Institute of Physics Publishing; Richard Cave, Public Library of Science; Michael Krot, JSTOR; Carol Richman, Sage; Greg Suprock, Nature Publishing Group; Mark Doyle, American Physical Society 17:00 - 17:30 Audience questions and discussion; wrap up 17:30 - 18:30 Cocktail reception\n2005 annual meeting 15 November, 2005 | 9:30AM - 6PM | IOP, Portland Place, London, UK\nAgenda 09:30 - 10:00 Registration, new members coffee 10:00 - 10:45 Corporate annual meeting, board election 10:45 - 11:30 Operational and strategic overview; Crossref Search update 11:30 - 12:00 System report and new services 12:00 - 13:00 Lunch 13:15 - 14:45 Digital preservation panel (Lynne Brindley, Johan Steenbakkers, Eileen Fenton) 14:45 - 15:15 Coffee break 15:15 - 15:45 Book DOI case study 15:45 - 16:15 Developments in academic search tools 16:15 - 16:45 Innovation in scientific publishing (Vitek Tracz) 16:45 - 17:00 Closing remarks 17:00 - 18:00 Cocktail reception\n2004 annual meeting 9 November, 2004 | 10AM - 6PM | The Charles Hotel, Cambridge, MA, USA\nAgenda 9:30 - 10:00 Registration, \u0026ldquo;New Members Coffee\u0026rdquo;, an opportunity for new members to meet staff, Board members and other members 10:00 - 10:45 Corporate Annual Meeting/Board Election and Reports from Chair, Treasurer, and Executive Director 10:45 - 11:30 Operational \u0026amp; Strategic Overview (The year in review; reports from committees) 11:30 - 12:00 System Review \u0026amp; New Developments (Multiple Resolution, Stored Queries, Forward Linking) 12:00 - 12:30 Crossref Search Update and Discussion 12:30 - 13:30 Lunch at Rialto Restaurant (in Charles Hotel) 13:45 - 14:30 A Crossref Case Study: DOIs and the secondary publisher - a match made in heaven? 
(Andrea Powell) 14:30 - 15:00 Changing Routes to Content and Content Preservation in the Digital Age (Dale Flecker, Harvard University) 15:00 - 15:30 Coffee Break 15:30 - 16:00 The California Digital Library\u0026rsquo;s eScholarship Program (Catherine Candee, CDL) 16:00 - 16:30 The Semantic Web Initiative and its Implications for Publishing (Eric Miller, MIT) 16:30 - 17:00 Intellectual Property Issues in Publishing Today (Allan Ryan Jr., Harvard Business School Publishing) 17:00 Closing remarks 17:00 - 18:00 Cocktail reception at Noir (in Charles Hotel lobby)\n2003 annual meeting 16 September, 2003 | 9AM - 4PM | IEE, Savoy Place, London, UK\nAgenda 8:30 - 9:00 Registration, \u0026ldquo;New Members Coffee\u0026rdquo;, an opportunity for new members to meet staff, Board members and \u0026ldquo;older\u0026rdquo; members 9:00 - 10:00 Corporate Annual Meeting/Board Election 10:00 - 12:30 Member Only Session 10:00 - 10:20 Revised Membership Agreement and Business Development Review 10:20 - 10:40 Crossref Search Recap and Crossref Financial Review 10:40 - 11:05 Forward Linking 11:05 - 11:25 Coffee Break 11:25 - 11:50 Technical Update 11:50 - 12:30 Strategic Discussion 12:30 - 13:30 Lunch 13:30 - 16:00 Open session, all welcome 13:30 - 13:50 IDF Update (Norman Paskin) 13:50 - 14:10 The library perspective: \u0026ldquo;The value of Crossref in an open access world\u0026rdquo; (Fred Friend, UCL) 14:10 - 14:40 Developments at the British Library: \u0026ldquo;Preserving our Digital Heritage: the British Library Strategy and Plan for the 21st Century\u0026rdquo; (Richard Boulderstone) 14:40 - 15:00 Coffee break 15:00 - 15:30 DOI Case Study: Nature Publishing Group (Howard Ratner) 15:30 - 16:00 Publisher Case Study: Blackwell Publishing (Jill Cousins): \u0026ldquo;Information Objects Are Hot, Documents Are Not\u0026rdquo;\n2002 annual meeting 25-26 September, 2002 | Fairmont Copley Plaza Hotel, Boston, MA, USA\nAgenda, 25 September, 2002 8:30 - 9:00 Registration, Coffee 9:00 - 10:00 Corporate Annual Meeting Call to Order - Eric Swanson, Chair, Board of Directors, Crossref Appointment of Inspector of Elections Opening Remarks - Eric Swanson, John Wiley \u0026amp; Sons Report from Executive Director - Ed Pentz, Executive Director, Crossref New Business 10:00 Close of Corporate Annual Meeting 10:00 - 10:45 Main Session - Overview of New System - Chuck Koscher, Technical Director, Crossref, and Representatives from Atypon 10:45 - 11:00 Coffee Break 11:00 - 12:00 Crossref Search Prototype Demo and Feedback Forum - Ed Pentz, Executive Director, Crossref 12:00 - 13:30 Lunch 13:30 - 16:00 Open Session 13:15 - 13:45 Introduction to New System, Chuck Koscher, Crossref, and representatives from Atypon 13:45 - 14:30 Speaker, Jim Neal, Columbia University, member of the Crossref Library Advisory Board. \u0026ldquo;What does the academic community want from Crossref?\u0026rdquo; 14:30 - 14:45 Coffee Break 14:45 - 15:15 Speaker, Jerry Cowhig, IOPP, Publisher Case Study 15:15 - 16:00 Panel, \u0026ldquo;The Article Economy\u0026rdquo;, with Wes Crews, Infotrieve, and Simon Inger, consultant, Simon Inger \u0026amp; Associates, on behalf of Ingenta\nAgenda, 26 September, 2002 Implementation Workshop \u0026ndash; Chuck Koscher, Crossref Implementing Reference Linking \u0026ndash; Mark Doyle, The American Physical Society Deposit Schema 2.0 \u0026ndash; Bruce D.
Rosenblum, Inera Incorporated\nPlease contact our outreach team with any questions.\n", "headings": ["2024 2023 2022 2021","2020 2019 2018 2017 2016","2015 2014 2013 2012 2011","2010 2009 2008 2007 2006","2005 2004 2003 2002 2001","2024 annual meeting","2023 annual meeting","2022 annual meeting","2021 annual meeting","2020 annual meeting","2019 annual meeting","2018 annual meeting","2017 annual meeting","Agenda","2016 annual meeting","Agenda","2015 annual meeting","Agenda","2014 annual meeting","Agenda","2013 annual meeting","Agenda","2012 annual meeting","Agenda","2011 annual meeting","Agenda","2010 annual meeting","Agenda","2009 annual meeting","Agenda","2008 annual meeting","Agenda","2007 annual meeting","Agenda","2006 annual meeting","Agenda","2005 annual meeting","Agenda","2004 annual meeting","Agenda","2003 annual meeting","Agenda","2002 annual meeting","Agenda, 25 September, 2002","Agenda, 26 September, 2002"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2020-slate/", "title": "Board election 2020 candidates", "subtitle":"", "rank": 2, "lastmod": "2020-09-01", "lastmod_ts": 1598918400, "section": "Board & governance", "tags": [], "description": "If you are a voting member of Crossref your ‘voting contact’ will receive an email in late September, please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 72 applications, our Nominating Committee proposed the following eight candidates to fill the six seats open for election to the Crossref Board of Directors.\nTier 1, Small and mid-sized members Member organization Candidate standing Country Beilstein-Institut Wendy Patterson Germany Korean Council of Science Editors Kihong Kim South Korea OpenEdition Marin Dacos France Scientific Electronic Library Online (SciELO) Abel Packer Brazil The University of Hong Kong Jesse Xiao Hong Kong Tier 2, Large members Member organization Candidate standing Country AIP Publishing Jason Wilde USA Oxford University Press James Phillpotts UK Taylor \u0026amp; Francis Liz Allen UK Jason Wilde, AIP Publishing, USA", "content":["If","you","are","a","voting","member","of","Crossref","your","‘voting","contact’","will","receive","an","email","in","late","September,","please","follow","the","instructions","in","that","email","which","includes","links","to","the","relevant","election","process","and","policy","information.\nFrom","72","applications,","our","Nominating","Committee","proposed","the","following","eight","candidates","to","fill","the","six","seats","open","for","election","to","the","Crossref","Board","of","Directors.\nTier","1,","Small","and","mid-sized","members","Member","organization","Candidate","standing","Country","Beilstein-Institut","Wendy","Patterson","Germany","Korean","Council","of","Science","Editors","Kihong","Kim","South","Korea","OpenEdition","Marin","Dacos","France","Scientific","Electronic","Library","Online","(SciELO)","Abel","Packer","Brazil","The","University","of","Hong","Kong","Jesse","Xiao","Hong","Kong","Tier","2,","Large","members","Member","organization","Candidate","standing","Country","AIP","Publishing","Jason","Wilde","USA","Oxford","University","Press","James","Phillpotts","UK","Taylor","\u0026amp;","Francis","Liz","Allen","UK","Jason","Wilde,","AIP","Publishing,","USA"], "headings": ["Tier 1, Small and mid-sized members","Tier 2, Large members","Jason Wilde, AIP Publishing, USA\n","Organization statement","Personal statement ","Déclaration de 
l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Wendy Patterson, Beilstein-Institut, Germany\n","Organization statement","Personal statement\n","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Kihong Kim, Korean Council of Science Editors, South Korea\n","Organization statement","Personal statement\n","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Marin Dacos, OpenEdition, France\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","James Phillpotts, Oxford University Press, UK\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Abel Packer, Scientific Electronic Library Online (SciELO), Brazil\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","Liz Allen, F1000, Taylor \u0026amp; Francis, UK\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","Declaración personal","단체소개서","자기소개서","Jesse Xiao, The University of Hong Kong Libraries, Hong Kong\n","Organization statement","Personal statement ","Déclaration de l’organisation","Déclaration personnelle","Declaração da organização","Declaração pessoal","Declaración de la organización","단체소개서","자기소개서"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/about/history/", "title": "History", "subtitle":"", "rank": 2, "lastmod": "2020-02-02", "lastmod_ts": 1580601600, "section": "About us", "tags": [], "description": "Crossref was born of the radical changes in the 1990s brought on by the spread of the Internet and development of the World Wide Web and other technologies (HTML, SGML, XML). Everything started moving online, including research and scholarly communications.\nOur roots go back to 1996 when the Enabling Technologies Committee of the Association of American Publishers put out a call for a persistent identifier system for online content and the Corporation for National Research Initiatives (CNRI) answered the call with the Handle system.", "content": "Crossref was born of the radical changes in the 1990s brought on by the spread of the Internet and development of the World Wide Web and other technologies (HTML, SGML, XML). Everything started moving online, including research and scholarly communications.\nOur roots go back to 1996 when the Enabling Technologies Committee of the Association of American Publishers put out a call for a persistent identifier system for online content and the Corporation for National Research Initiatives (CNRI) answered the call with the Handle system. 
A couple of years of work and discussions led to the founding of the International DOI Foundation to develop and govern the DOI (Digital Object Identifier) System, which was the application of the Handle System to the digital content space.\n1999 was the year things came together enabling the formation of Crossref. It was a big year: there was the Y2K/Millennium bug\u0026ndash;an example of infrastructure causing problems\u0026ndash;and there was the launch of Napster signaling massive disruption for the music industry. On more positive notes, 1999 also saw the release of version 1.0 of the Bluetooth specification, which enables connectivity across millions of devices, and the formation of the Wi-Fi Alliance, a nonprofit association tasked with promoting the adoption and development of wireless technologies across all vendors. Bluetooth and Wi-Fi have been incredibly successful standards backed up by non-profit organizations that foster collaboration, adoption and development to benefit a wide range of stakeholders. Remind you of anyone in the scholarly communications space?\nWith respect to Crossref’s formation, things really kicked off in 1999. A prototype project by Academic Press, Wiley, and the DOI-X project created the technical foundations for reference linking based on centralized metadata and the assignment of Digital Object Identifiers (DOIs). The prototype system was demonstrated at the Frankfurt Book Fair in 1999. Publishers quickly rallied around and in December 1999 a working group of 12 organizations met and decided to form Crossref as an independent, not-for-profit organization. Crossref was incorporated in January 2000 as Publishers International Linking Association, Inc. (PILA). The Crossref system, the first collaborative reference linking system, went live in June 2000.\nCrossref has grown steadily over the years - from the original 12 founding members to the over 17,000 organizations who are currently members of Crossref. To see more details of Crossref’s history and developments over the years please see our Annual Reports.\nIn celebration of Crossref\u0026rsquo;s 10th Anniversary in 2009, Crossref commissioned The Formation of Crossref: A Short History.
A Japanese translation is also available.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2019-slate/", "title": "Board election 2019 candidates", "subtitle":"", "rank": 2, "lastmod": "2019-08-23", "lastmod_ts": 1566518400, "section": "Board & governance", "tags": [], "description": "If you are a voting member of Crossref your ‘voting contact’ will receive an email in late September, please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 52 applications, our Nominating Committee proposed the following seven candidates to fill the five seats open for election to the Crossref Board of Directors.\nMember organization Candidate standing Country Clarivate Analytics Nandita Quaderi USA eLife Melissa Harrison UK Elsevier Chris Shillum Netherlands IOP Publshing Graham McCann UK Springer Nature Reshma Shaikh UK The Royal Society Stuart Taylor UK Wiley Todd Toler USA Nandita Quaderi, Clarivate Analytics, USA", "content": "If you are a voting member of Crossref your ‘voting contact’ will receive an email in late September, please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 52 applications, our Nominating Committee proposed the following seven candidates to fill the five seats open for election to the Crossref Board of Directors.\nMember organization Candidate standing Country Clarivate Analytics Nandita Quaderi USA eLife Melissa Harrison UK Elsevier Chris Shillum Netherlands IOP Publshing Graham McCann UK Springer Nature Reshma Shaikh UK The Royal Society Stuart Taylor UK Wiley Todd Toler USA Nandita Quaderi, Clarivate Analytics, USA\nThe Web of Science Group provides the tools and resources researchers, publishers, institutions and research funders need to monitor, measure, and make an impact in the world of research. Like Crossref, we want to “make scholarly communications better.”\nGuided by the legacy of Dr Eugene Garfield, inventor of the world’s first citation index, our goal is to provide guaranteed quality, impact and neutrality through world-class research literature and meticulously captured metadata and citation connections. We devised and curate the largest, most authoritative citation database available, with over 1 billion cited reference connections indexed from high quality peer reviewed journals, books and proceedings from 1900 to the present.\nOur products include ScholarOne, which provides comprehensive workflow management systems for scholarly journals, books and conferences; Publons, the home of the peer review community; and Kopernio, a free web browser plug-in which enables one-click access to full text PDFs. We too “make tools and services – all to help put scholarly content in context.”\nThe existing Crossref Board is comprised of publishers, libraries, non-profit organisations looking to increase access, and technology providers. As an entirely publisher-independent organisation, we would complement the current membership, and as a hugely diverse organisation ourselves, we would add to the diverse nature of the board. 
Our presence on the Crossref board would enhance our organisations’ shared goals of revealing meaningful connections among research, collaborators, and data as we strive to enable researchers to accelerate discovery, evaluate impact, and benefit society worldwide.\nPersonal statement Firstly, as a former researcher myself, I would bring a deep understanding of the challenges academics and institutions face in the research communication process. In my post-academia career, I have consistently championed excellence in diversity, accessibility, and open science.\nAfter receiving my PhD in Molecular Genetics, and completing a post-doctoral fellowship in Italy, I ran a Wellcome Trust-funded developmental biology lab at Kings College London. This experience provided me with a nuanced understanding of the technological infrastructures supporting research, peer review, and publishing.\nCrossref’s commitment to a culture of openness and transparency mirrors my own post-academic pursuit of universally accessible research, making sure that our industry provides places where people can come to find trusted, high-quality content. I managed the Open Access journal portfolios at Nature Research and BMC. I have experience successfully transitioning journals from hybrid to OA models, leading new OA journals from conception to launch, and promoting open research among my teams and the wider community.\nIn my current role as Editor in Chief for the Web of Science, I have overall editorial responsibility for the content on Web of Science and oversee the team of in-house editors that select content for the Web of Science Core Collection. My team’s principles are objectivity, selectivity and collection dynamics; we look to promote and protect the Web of Science’s reputation for excellence, while remaining publisher-independent and innovating to meet the needs of the community.\nAs a woman of Bangladeshi heritage in STEM, in all my roles I have worked to build environments where diversity, equality, and gender parity are championed and valued. At the Web of Science Group, I have been honoured to facilitate insightful discussions for International Women’s Day, taking parts in events to encourage women in STEM. I am part of the Women@Clarivate group, an initiative to help women across our parent company grow in their careers.\nI love metadata, love data, and love science. If I were elected to the Crossref board, I would help to increase the impact of existing Crossref initiatives. I aim to bring fresh ideas, based on the diversity and experience of the Web of Science Group to the board. I would promote diversity, open science and excellence in research.\nMelissa Harrison, eLife, UK\neLife is an open-access publishing initiative launched by three major biomedical funders, the Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust (and joined more recently by the Wallenberg Foundation). eLife publishes a single open-access journal that covers important new findings in the life and biomedical sciences, designs and builds new open-source products and infrastructure to support publishing, and works with the scientific community to advocate for improvement in the way that science is evaluated and communicated. An important aspect of the work at eLife is our commitment to sharing our findings and resources as effectively as possible, so that others can reuse, build on and improve on our work. 
eLife is often early to engage or propose new ideas related to openness and improving scientific communication and several eLife staff are involved in community initiatives including OASPA, FORCE11, DORA and JATS4R as well as open-source technology collaborations such as COKO and The Substance Consortium.\neLife has served one term on the Board of Crossref and we would be delighted to continue to serve on the board because of Crossref’s role in providing sustainable and shared infrastructure supporting critical aspects of research communication.\nMelissa Harrison (http://orcid.org/0000-0003-3523-4408)\nPersonal statement\nI am Head of Production Operations at eLife, where I manage the production process as well as content and metadata delivery to other services. I have served as eLife’s alternate representative on the Crossref Board for the past three years.\nI have worked in publishing for over 20 years in a variety of organisations:, beginning with small independent publishers, followed by the Royal College of Psychiatrists and then the BMJ Group. I also worked for an Indian production services vendor before joining eLife in 2012. I, therefore, have substantial and in-depth knowledge of publishing and the importance of high-quality metadata. I chair JATS4R and actively serve on subgroups in the Metadata2020 and FORCE11 communities. As a member of multiple communities, I have a broad perspective on the opportunities for improvement in general publishing infrastructure and processes. Like Crossref, eLife is also committed to open source technologies and re-use. I initiated the development of an open source tool to convert JATS XML to Crossref and PubMed XML and eLife is working collaboratively on community open source tools to support the complete publishing process.\nThe services and infrastructure that Crossref provide are of great value to the publishing community and there are many opportunities to develop them. There is also an expanding range of organisations, including institutions and funders, that Crossref has the potential to serve. With my broad experience in publishing and community engagement, I can offer understanding and insight into the opportunities and challenges faced by Crossref and would relish the chance to continue to represent eLife as part of the Crossref Board.\nAlternate statement: Mark Patterson (orcid.org/0000-0001-7237-0797) is the Executive Director of eLife. Previously, Mark was the Director of Publishing at PLOS where he worked for 9 years and helped to launch several of the PLOS Journals including PLOS Biology and PLOS ONE. Mark began his career as a researcher in yeast and human genetics before moving into scientific publishing in 1994 first as the Editor of Trends in Genetics and later as one of the launch editors for NPG’s Nature Reviews Journals. Mark has served as the representative for eLife on the Crossref Board for the past three years (in which role he has chaired the Nominations Committee for two years and currently serves on the Audit Committee), and is one of the founding directors of the Open Access Scholarly Publishers Association.\nChris Shillum, Elsevier, Netherlands\nAt Elsevier, we believe that to meet the ever-growing needs of researchers and scholars, the information system supporting research must be founded upon the principles of source neutrality, interoperability and transparency, placing researchers under full control of their information. 
As the fastest-growing open access publisher, we provide authors with a choice of publication models, and are investing in tools to support Open Science such as SSRN and Mendeley Data.\nToday, a vibrant array of companies, non-profits, universities, and researchers provide scholarly tools and information resources. Common standards and shared infrastructure play a vital role in enabling innovation and fostering competition towards finding ever better ways for researchers and their institutions to manage the unprecedented amount of knowledge and data that is available today. As an industry leader, Elsevier has a strong track record of supporting many of the standards and infrastructure services we rely on today: We were charter members of the International DOI Foundation, have been involved with Crossref since its start, and are founding members of ORCID and CHORUS.\nAs the largest financial contributor to Crossref, we are proud of the role we play in sustaining this vital shared resource, and as a large organization, we are also able to commit significant staff time to the development of new initiatives such as Funding Data, Access and Licensing Indicators and Distributed Usage Logging. We continue to support the maintenance and enhancement of the Crossref Funder Registry as a free service to the community.\nPersonal statement\nI am an industry veteran, having spent 25 years in scholarly publishing, starting my career during the exciting first wave of the move to online publishing. As well as my day job, where I currently look after Identity and Platform Strategy for Elsevier, I am fortunate to have served in leadership positions at the International DOI Foundation, Crossref, ORCID and NISO. Over the years, I’ve gained experience in most aspects of scholarly communication technology, as well as expertise in strategic management, financial planning and industry collaboration. I am passionate about the role that industry organizations such as Crossref play in enabling innovation and accelerating the development of new ways to help researchers and scholars; I consider it both a privilege and a responsibility to use my skills to help develop, lead and guide such organizations. Philippe Terheggen, Elsevier’s designated alternate, is Managing Director of our STM Journals publishing group.\nCrossref is at a pivotal moment in its development, as it seeks to expand its constituency, develop new services and embrace the opportunities of new technology such as AI, whilst at the same time updating its aging core infrastructure. It’s also vital that Crossref maintains the support of its members by delivering value for money and operating its services as efficiently as possible. Crossref will need experienced, disciplined leadership with representation from all sectors of the community to successfully navigate the complex challenges ahead. Should I be fortunate enough to be re-elected to the board, I believe I have the right combination of experience, technical, and strategic skills to help Crossref make the best decisions for its members, its future, and for the community of researchers and scholars that we all serve.\nGraham McCann, IOP Publishing, UK\nIOP Publishing is one of the world’s leading scholarly publishers. It has partnered with Crossref since it was founded and shares its mission to make research outputs easy to find, cite, link to and assess. IOP Publishing is a subsidiary of the Institute of Physics, a leading scientific society promoting physics for the benefit of all.
Through its worldwide membership of more than 50,000, the Institute works to advance physics research, application and education, and engages with policy makers and the public to develop awareness and understanding of physics. The revenue generated by IOP Publishing is used by the Institute to support science and scientists in both the developed and developing world. IOP Publishing would be proud to continue representing publishers on the Board of Crossref and further its work to solve industry-wide problems and help shape the future of research.\nPersonal statement Graham McCann is currently Chair of Crossref’s Membership and Fees Committee, which has responsibility for reviewing the value that members receive from Crossref services. He has more than 25 years’ experience in STM publishing, initially in editorial development and then in product management. Currently, Graham manages IOP Publishing’s electronic content and customer-facing platforms for journals and books. Graham is involved in a number of new industry initiatives and integrations. As a continuing member of the Board, he would be particularly keen to support a refresh of Crossref’s core DOI registration technologies. With his mix of publishing and technical experience and expertise, Graham is well placed to help advise Crossref on development of their services and he remains committed to Crossref’s goal of supporting both researchers and science as a whole.\nReshma Shaikh, Springer Nature, UK\nSpringer Nature is one of the founding members of Crossref and has been a very active and loyal board member since. We understand that Crossref needs tangible support from the membership and as a large organization we take the responsibility to deliver such support. We admire the role Crossref plays in operating the basic infrastructure of content linking for academic communities while also playing a leading role in the development of new identifier and metadata initiatives. Supporting innovative projects to (further) develop open science is a major reason for Springer Nature to be active in Crossref. Springer Nature is committed to working together with Crossref and its membership to implement new processes and functionalities.\nAs a network of 12,667 members from 118 countries, Crossref represents a truly global movement to ensure the world’s knowledge is understood and consumed within the context it was intended. The organisation’s success and increasing impact derive from its commitment to openness and transparency while staying true to its original mission of metadata at the center of all it does.\nThe world of academic publishing is currently going through the largest disruption since the invention of the Open Access business model. The use of technology and the ubiquity of startups in this space will continue to challenge the status quo for the foreseeable future. In these times, organisations such as Crossref, with an established reputation as one of the original pan-publishing organisations and one that is digital by birth, have a responsibility to help shepherd the industry as it goes through its transformation, while also ensuring it keeps itself relevant. The vision of Metadata 2020 and the desire to ensure Funders are able to really understand the impact of their funding are important goals that require strong execution and oversight.\nAs a technology transformation specialist, I have used Agile and Lean principles, combined with bundles of creativity to persuade companies to try things differently and embrace Digital.
While at Springer Nature, I have directed large automation programmes, championed innovation and led departments of 170+ people over three different locations, set up digital and data capabilities and reinvented an innovative global content delivery platform. Having worked in North America, Europe and Asia and with an Asian background, I believe that I can offer the board a diversity of background, knowledge and experience.\nStuart Taylor, The Royal Society, UK\nThe Royal Society is the national academy of science for the UK. We provide science policy advice to government, we recognise excellence with Fellowships, medals and awards and we fund research. We are also a publisher of journals and although modest in scale (ten journals) we launched the world’s first science journal in 1665. Our publishing is financially strong and we are entirely independent of any commercial publishing organisation. We are very active in the learned society publishing sector and have a seat at the table at many of the key groups and committees. We are in close contact with many of the learned society publishers and we understand and share the issues and challenges they see in the evolving publishing system. Most recently we have been instrumental in setting up the Society Publishers’ Coalition in response to Plan S. Our journal publishing workflow is based on full text JATS XML, we use a continuous publication model and were the first publisher to make ORCID iD mandatory for submitting authors. We seek to adapt and innovate constantly particularly in terms of open science. Two of our ten journals are fully open access with CC-BY licence, we operate open peer review on four journals, we have mandatory open data on all journals, we permit text and data mining for both commercial and non-commercial purposes and we support zero embargo green open access. We have been strongly supportive of Crossref from the outset and we use Crossmark, funder registry IDs and Crossref similarity checking. We are also participants in the i4OC open references project. We also support preprints by encouraging our authors to share early versions of their work and by appointing a dedicated preprints editor for our flagship biology journal.\nPersonal statement Dr Stuart Taylor is the Publishing Director at the Royal Society. He has responsibility for the Royal Society’s publishing operation which consists of a staff of 30 who publish the Society’s ten journals. He joined the Society in 2006 after working as a Publisher at Blackwell Science (now Wiley) in Oxford where he was responsible for postgraduate book and journal acquisitions in clinical medicine. He is a keen advocate of open science and believes that the scholarly communication system should genuinely serve science and do so far more effectively and efficiently than it does at present. He is a member of FORCE11, is also on the Board of Directors of the Open Access Scholarly Publishers’ Association (OASPA) and works in several other open science groups. He is a strong supporter of cross-stakeholder collaborative solutions in general and Crossref in particular as the single most ambitious and successful of them. He would welcome the opportunity to input into Crossref’s strategy and decision making. He feels it is essential that the voice of the smaller and scholar-led publishers is represented as they face some very distinct challenges as the publishing landscape continues to evolve. 
Crossref is in a unique position with the trust, breadth of influence and technical competence to continue to be a key actor at the centre of the rapidly developing scholarly communication system.\nHe has an MA in chemistry and a DPhil in psychopharmacology from the University of Oxford, and has published 25 peer reviewed scientific papers in neuroscience. He has also published a number of articles on journal publishing and scholarly communication. In 2015, he organised a four-day conference as part of the Royal Society’s celebration of 350 years of science publishing entitled The Future of Scholarly Scientific Communication. He has served on the Crossref Board previously (2010 - 2013).\nORCID iD: http://orcid.org/0000-0003-0862-163X\nTodd Toler, Wiley, USA\nFounded in 1807, John Wiley and Sons (Wiley) publishes on behalf of more academic, professional and scholarly societies and associations than any other publisher.\nWiley was a founding member of Crossref, as was Blackwell, which merged with Wiley in 2007. We see Crossref as a core component of the research publishing ecosystem and are committed to helping Crossref as an organization meet its goals and objectives. Overall, Wiley strives to take a leadership role in scholarly publishing, and through initiatives such as CHORUS and ORCID (among others) we work with stakeholders across the industry to help develop infrastructure that supports researchers and scholarly publishing globally.\nWe are a global company, with major publishing centres in the United States, the United Kingdom, Germany, Australia, China, and Japan. Wiley colleagues are actively engaged in numerous industry membership associations, such as the International Association of Scientific, Technical and Medical Publishers (STM), the Association of Learned and Professional Society Publishers (ALPSP), the Publishers Association (PA) and the Association of American Publishers (AAP). Wiley is also a co-chair of Project DARE, which is working to address skills gaps in the Asia-Pacific region.\nWe recognize that the market for scholarly publishing is changing rapidly, creating opportunities to develop new ways to describe, share, and disseminate research advances; we are committed to supporting the transition to Open Science in all manifestations, and to managing the transition in a sustainable manner for all stakeholders.\nPersonal statement I am currently Vice President, Product Strategy \u0026amp; Partnerships, for Wiley’s Research business, where I work across business units to help develop the strategic direction for Wiley’s products and services for researchers. In addition, I represent Wiley in strategic external relations with industry and trade groups, government organizations and scientific communities.\nI am currently Wiley’s alternate on the STM board, am a founding member of RA-21 (working towards standards of seamless, secure access to scholarly content) and frequently meet with industry peers on a wide variety of matters related to improving scholarly communications standards and infrastructure.\nI have been at Wiley for 12 years, originally joining the company as its first ever Director of User Experience in 2007. I’ve also held the titles of Director \u0026amp; Publisher, Wiley Online Library and Vice President of Digital Product Management. My background and expertise are in the fields of interaction design, instructional design, product management, and digital strategy.
I love working in the field of research communication and feel we are only at the beginning of what’s possible for the field. I am particularly passionate about the future of linked data and reproducible science, and am currently working on a next-generation journal workflow for Wiley that is committed to open data, open web standards, and open annotation.\nCrossref is the most successful example in our industry of what’s possible through collaboration and infrastructure sharing. It provides vital technology and serves as an important forum to navigate a rapidly evolving technology landscape for content providers and other stakeholders across a wide variety of business models. I would be honored to serve on the board, and my alternate would be Duncan Campbell, Senior Director, Global Sales Partnerships, who has previously sat on the Crossref board representing Wiley since 2015.\n", "headings": ["Nandita Quaderi, Clarivate Analytics, USA\n","Personal statement ","Melissa Harrison, eLife, UK\n","Personal statement\n","Chris Shillum, Elsevier, Netherlands\n","Personal statement\n","Graham McCann, IOP Publishing, UK\n","Personal statement ","Reshma Shaikh, Springer Nature, UK\n","Stuart Taylor, The Royal Society, UK\n","Personal statement ","Todd Toler, Wiley, USA\n","Personal statement "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2018-slate/", "title": "Board election 2018 candidates", "subtitle":"", "rank": 2, "lastmod": "2018-08-15", "lastmod_ts": 1534291200, "section": "Board & governance", "tags": [], "description": "If you are a voting member of Crossref your ‘voting contact’ will receive an email in late September, please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 26 applications, our Nominating Committee proposed the following seven candidates to fill the five seats open for election to the Crossref Board of Directors.\nMember organization Candidate standing Country African Journals Online Susan Murray South Africa American Psychological Association Jasper Simons USA Association for Computing Machinery Scott Delman USA California Digital Library Catherine Mitchell USA Hindawi Paul Peters UK SAGE Richard Fidczuk USA Wiley Duncan Campbell USA Susan Murray, African Journals Online (AJOL), South Africa", "content": "If you are a voting member of Crossref your ‘voting contact’ will receive an email in late September, please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 26 applications, our Nominating Committee proposed the following seven candidates to fill the five seats open for election to the Crossref Board of Directors.\nMember organization Candidate standing Country African Journals Online Susan Murray South Africa American Psychological Association Jasper Simons USA Association for Computing Machinery Scott Delman USA California Digital Library Catherine Mitchell USA Hindawi Paul Peters UK SAGE Richard Fidczuk USA Wiley Duncan Campbell USA Susan Murray, African Journals Online (AJOL), South Africa\nUnlike in Europe and North America with the giant commercial academic publishers there, journals publishing from developing countries are usually stand-alone journals run by subject-specific experts on a Non Profit basis. 
This is why hosting platforms like African Journals Online (AJOL) in Africa, other (INASP supported) country-level Journal Online (JOL) platforms around the world, and our “big sister” SciELO in Latin America have emerged… to provide technical support, quality improvements, and increased visibility of peer-reviewed Southern-published research content on a Non Profit basis.\nMany journals don’t have the resources or the IT capacity to get online in isolation, and a small journal website is not very visible to search engines, so AJOL provides free highly visible aggregated hosting, software updates, high machine-readability of metadata, and various other free services like monthly usage statistics and free Digital Object Identifiers (DOIs), as a Sponsoring Member of Crossref. We were the first Non Profit Organisation to have successfully negotiated a discounted rate from Crossref for DOIs for journals publishing from Low and Lower Middle Income Countries, and we absorb the costs of membership and DOI assignation on behalf of our journal partners.\nWe are familiar with Crossref and have had a long-standing collaboration with your organisation. Additionally, AJOL supports the subject experts running the journals to become more aware of the quality norms and standards of robust research publishing, and assists them in implementing these. Hence the development of the Journal Publishing Practices \u0026amp; Standards (JPPS) framework and assessment process. This work is important so that Southern research journals can be credible and trusted by authors and readers.\nBased in South Africa, AJOL hosts 521 journals from 32 African countries, across all academic disciplines, with over 150,000 full-text articles available on its platform, more than half of which are free to read and download. There are 350 more journals that have applied for inclusion and are waiting to be assessed, but this has been put on hold while a new journal publishing practices framework and assessment has been developed by AJOL and INASP (www.journalquality.info). Once all current journal partners are assessed, we will start working through new applicants’ assessments again.\nThere is a lot of demand for AJOL’s services by journals that have heard of and observed the benefits of AJOL inclusion, and in the long term, we hope to include all qualifying peer-reviewed, African-published journals on the continent.\nAJOL is a long-standing Non Profit Organisation (started in 1998 and run independently from South Africa since 2005). AJOL\u0026rsquo;s strategy, implementation and finances are overseen by a Board drawn from AJOL\u0026rsquo;s main stakeholder groups; we are annually audited, and we are a well-known and trusted platform internationally. AJOL\u0026rsquo;s users have been tracked with Google Analytics. The primary users 10 years ago were European and North American, but user numbers from Africa have increased substantially over the years to the point that in 2017 there were nearly a quarter of a million repeat African-based users of our platform. This is a vital achievement for us… making African research output available and accessed/used by African researchers! Globally, there were over 8.5 million full-text article downloads from AJOL during 2017, so we know that AJOL is accomplishing the goal of increased accessibility of the research outputs on our website. AJOL also offers support to journals via email, as well as in-person training workshops, which have been held in countries all over the continent. 
We at AJOL understand the contexts, challenges, and needs of the small, usually stand-alone or scholarly association-based journals in developing countries, as well as the intricacies of implementing DOIs for hundreds of journals as a Crossref Sponsoring Member.\nPersonal statement\nFor the past 11 years, I have been the Executive Director of African Journals OnLine (www.ajol.info). AJOL is a South African Non Profit Organisation working toward increased visibility and quality of African-published research journals. AJOL hosts the world’s largest online collection of peer-reviewed, African-published scholarly journals and is a sponsoring member of Crossref. I am also a member of the Advisory Committee of the Directory of Open Access Journals (DOAJ www.doaj.org), a member of the Advisory Board for the Public Knowledge Project’s current study on Open Access Publishing Cooperatives, and a member of the Advisory Board of CODESRIA’s new African Citation Index initiative.\nI have an academic background in Development Economics and have an abiding interest in the role that increased access to research outputs can play in low-income and emerging economies, as well as the practicalities of attaining this. AJOL hosts 521 journals, of which 440 have active DOIs assigned. At last count there were more than 84 000 articles with DOIs on AJOL, including roughly 56 000 with DOIs assigned by AJOL (prefix 10.4314), and 28 000 with DOIs assigned by the journal itself or the relevant publisher. As AJOL has no control over the DOIs assigned by entities other than ourselves, we have found in the past that conflicts can arise when changes take place in journals\u0026rsquo; publishers and in the ownership of DOIs. Hence we run a periodic verification process in order to ascertain that the DOIs we display on www.ajol.info are accurate, during which we fetch the latest valid DOI for each article directly from Crossref, regardless of publisher/owner. In order to avoid conflicts, any DOIs added to the site in the interim period do not get displayed until the next verification process. This causes delays in the display of DOIs on AJOL. The verification process happens approximately every 2 months. Even though the process itself is automated, it takes about two days, during which it requires manual monitoring to ensure all goes smoothly, and therefore we are not always able to execute it more often.\nWe would very much like to discuss a different pricing model for the Similarity Check aspect of Crossref\u0026rsquo;s services for developing country Sponsoring Members to make this essential service more practicable for developing countries. I believe that our deep knowledge of the (sometimes heterogeneous) needs of our journal partners in Africa can contribute to a more balanced decision-making process of the Crossref Board as regards working towards offsetting the inequalities of scholarly research outputs between the Global North and Global South, helping boost the verified credibility, sharing and preservation of quality research from developing countries.\nMy colleague at AJOL, Kate Snow, who is the Content \u0026amp; Communications Manager, would likely be the person to attend Crossref meetings if I could not. 
Kate deals with implementing DOIs together with our partner journals every day, and has an intimate knowledge of the specifics of that work, as well as an overall understanding of AJOL\u0026rsquo;s work, organisational ethos and the nature and needs of our partner journals.\nJasper Simons, American Psychological Association (APA), USA\nWith more than 115,000 members, it is APA’s mission to advance the creation, communication and application of psychological knowledge to benefit society and improve people\u0026rsquo;s lives. An important part of achieving this mission is our vibrant publishing program, which includes a portfolio of 80+ journals published on behalf of both communities within APA and related scholarly societies worldwide, a scholarly books program, a children’s book imprint, digital learning solutions and a suite of databases, including the discovery platform PsycINFO®, the most trusted and comprehensive library of psychological science in the world.\nFor decades, APA Publishing has been a leader in developing knowledge solutions and setting standards for scholars in social and behavioral sciences. The APA Style and our new Journal Article Reporting Standards (JARS) for quantitative and qualitative research impact the reporting of scholarly work well beyond APA’s journals or the field of psychology. APA’s partnership with the Center for Open Science further demonstrates our commitment to collaborating across the industry to improve research outcomes.\nAPA has been a member of Crossref and on the Crossref board since its early days. Together with fellow publishers, exceptional Crossref staff and industry partners, APA has helped to structure, process, and share metadata to reveal relationships among research outputs across the world. APA is ambitious about the future and excited about the role Crossref can play to make research outputs easy to find, cite, link, assess, and reuse. It is important that the Crossref board has representation from a trusted Society publisher in the social and behavioral sciences so that the needs of this vibrant community are reflected in the organization’s important work.\nPersonal statement\nAs the Chief Publishing Officer of APA, Jasper is responsible for driving strategy, establishing editorial policies, producing content, developing products and overseeing the related sales and marketing services. He manages a team of almost 200 expert staff in APA’s Washington, DC office. In 2017, he led the collaboration with the Center for Open Science to support Open Science and Reproducibility in psychology. The collaboration advances the integration between the content in the Open Science Framework and the peer-reviewed content of the APA.\nJasper served on the Crossref board from 2015 through 2017 and was the Chair of Crossref’s Nominating Committee in 2015-2016. He also served on the Executive Committee of the AAP/PSP board, and is a member of the Standards \u0026amp; Technology Committee of the STM Association. Jasper has 20 years of experience in scholarly publishing, working at leading publishing organizations such as Elsevier, SAGE Publications and Thomson Reuters.\n(Alternate) Tony Habash, Chief Information Officer: Tony F. 
Habash, DSc, has been the Chief Business Integration Officer and the Chief Information Officer of APA since 2007.\nScott Delman, Association for Computing Machinery (ACM), USA\nACM is a scientific society devoted to furthering computing research and education and supporting the professional needs of its 110,000 members around the world. As a non-profit publisher whose governance is composed primarily of members of the scientific community, ACM has a unique perspective on many of the changes we are experiencing in the scholarly publishing industry, in that every decision we make needs to consider both the short- and long-term effects on computer scientists, students, and educators as its primary concern and generating income as an important but secondary consideration.\nACM is by most standards an incredibly lean and efficiently managed organization. As a society publisher, our publications program supports the research community through its various high-impact publications, but our publications also fund a wide variety of educational and practitioner-oriented good works programs around the world, as well as underwrite public policy initiatives that provide a leading independent and non-partisan voice on U.S. and European public policy issues relating to computing and information technology, such as innovation, artificial intelligence, privacy, big data and analytics, security, accessibility, intellectual property, and technology law. As a standalone organization, ACM is neither big nor small in comparison to many of our peers, and service to both the computer science community and the scholarly publishing community is embedded in our DNA.\nFor the past two decades, ACM has been committed to serving the interests of the scholarly publishing community as a founding and active board member of many of the industry’s most important publishing technology initiatives, including Crossref, CHORUS, ORCID, and Portico. ACM has served on the Board of Directors of Crossref since inception, and I currently serve on Crossref’s Executive Committee as Board Treasurer.\nI believe Crossref is at a critical moment in its history and now, perhaps more than any other time in the past, requires strategic and disciplined leadership from its Board of Directors, and leadership that can help to provide an important balance to the strong and sometimes dominant voices and special interests of the large commercial publishers on the organization’s Board and to Crossref’s highly skilled and professional staff, who are constantly looking at the scholarly publishing landscape and proposing new ways for Crossref to play a role. As a result, Crossref is rapidly expanding its menu of services to the publishing community and is for the first time considering expanding its membership beyond publishers to research institutions and funders to drive growth for the organization. At the same time, key technologies, such as Machine Learning, have the potential to impact Crossref’s core DOI registration service, which the organization relies so heavily on.\nThe decisions Crossref takes over the next few years, in terms of its own growth and where it chooses to invest its staff and financial resources, and more generally how it responds to the changing technology landscape, will have a transformational impact on the organization. 
As a technology-focused organization that faces many of the same challenges that Crossref is facing, ACM, as represented by its Director of Publications, is in a good position to offer sound, consistent, and strategic leadership on the Board.\nPersonal Statement\nDuring my tenure on the Board, I have served as both a board alternate and member, chaired and served on the Nominations Committee, chaired and served on the Membership \u0026amp; Fees Committee, and currently serve on Crossref’s Executive Committee as Board Treasurer. If elected to another term on the Board, I am committed to working closely with Crossref’s excellent staff and leadership to ensure Crossref’s future success, but also to ensure that Crossref’s growth is managed carefully and in a way that supports both the scholarly publishing community and the scientific community.\nMy alternate is Bernadette Shade, Print Production Manager, ACM.\nCatherine Mitchell, California Digital Library (CDL), USA\nSince its founding in 1997, the California Digital Library (CDL) at the University of California has been engaged in building a library-based publishing program that supports emerging disciplines and scholars, explores new publishing models, and seeks to reach professionals in applied fields beyond academia.\nCDL provides open access publishing services for UC-affiliated departments, research units, publishing programs, and individual scholars who seek to publish or edit original work - as well as comprehensive repository services for a wide range of scholarship including working papers, conference proceedings, electronic theses and dissertations, and data sets.\nNow with over 80 journals published on its eScholarship platform, CDL is an established library publisher with robust infrastructure, deep knowledge of the myriad publishing practices and needs across the academy, and a strong interest in the discoverability challenges faced, in particular, by open access materials; the organization regularly grapples with the complexity of rationalizing metadata across distinct research outputs, identifying new mechanisms to ensure global access to this research, and encouraging authors to license their work in ways that support sharing and reuse while at the same time retaining their own copyright.\nAs a major library publisher and a member of the Library Publishing Coalition, CDL would help broaden Crossref’s understanding of the unique needs and opportunities within this burgeoning space, particularly in the context of DOI registration, metadata requirements associated with new research output formats, aggregated metrics, and the Event Data initiative. As the consortial digital library for the ten University of California campuses, CDL would represent a public institution of great breadth and ambition that has declared its commitment to open scholarship and continues to seek new ways to make good on that promise. 
And, as a current member of Crossref and ORCID, as well as a founding member of DataCite, CDL would bring its dedication to supporting community-led efforts to enhance the discoverability, interconnectedness and broader contextualization of scholarly communication around the world.\nPersonal statement\nAs the Director of Publishing \u0026amp; Special Collections at the California Digital Library, I am responsible for overseeing the strategic planning, development, and operational management of CDL’s suite of library publishing services for the ten University of California campuses, including an open access publishing platform (80+ active journals), a research information management system, and an institutional repository. I am also Operations Director of UC’s Office of Scholarly Communication and, in this capacity, am particularly engaged in questions of use, value, authorship, and professional legitimacy. And finally, I have served as the President of the Board for the Library Publishing Coalition for the past two years (June 2016 - June 2018), and will remain on the Board this year as Immediate Past-President.\nAs someone with over a decade of leadership experience within scholarly communication and, in particular, library publishing, I am well positioned to bring a new perspective to the Crossref Board that reflects a growing community of academic publishers who operate outside the commercial and university press world but are, nonetheless, responsible for the production and distribution of a significant and growing number of scholarly research publications.\nI would contribute to Crossref as a board member by representing the substantial commitment of libraries as partners in establishing an open scholarly communication environment and the unique needs of and challenges faced by library publishers worldwide in ensuring appropriate visibility and “credit” for open access publications. As a community, we have worked with DOIs for articles but are keen to see the standard evolve in ways that can better accommodate the full complement of scholarly research output, from data sets and pre-prints to blog posts and micro-publications. Crossref is uniquely positioned, as a globally engaged organization, to solve these problems and help create a standards-driven, hyper-connected scholarly communication environment that transcends siloed publishing systems and national boundaries to maximize the availability of contextualized and relevant information.\nPrior to joining CDL, I earned an AB in English from the University of Chicago and a PhD in English Literature from the University of California, Berkeley.\nI propose John Kunze, Identifier Systems Architect at CDL, as our alternate.\nPaul Peters, Hindawi, UK\nAs one of the leading independent publishers of Open Access journals, Hindawi has played an important role in representing the interests of OA publishers on the Crossref Board for the past 9 years. Hindawi\u0026rsquo;s commitment to develop open infrastructure to support scholarly communications is highly aligned with Crossref\u0026rsquo;s mission, as are Hindawi\u0026rsquo;s efforts to support universities and funding agencies in their efforts to become more closely involved in the scholarly communications system.\nIn addition to the contributions that Hindawi has made to Crossref, it has also been an important contributor to OASPA, I4OC, JATS4R, and many similar initiatives. 
Hindawi has also built strong partnerships with some of the most well-established scholarly publishers, including Wiley and AAAS, in order to work together in expanding the open access publishing activities of these organizations.\nPersonal statement\nIn my 9 years serving on the Crossref Board I have contributed to a number of important initiatives within Crossref, served on a number of Crossref\u0026rsquo;s working groups and committees, and most recently served as the Chair of Crossref\u0026rsquo;s Board. I have a very detailed understanding of Crossref\u0026rsquo;s existing services, as well as the projects that are currently under development.\nIn terms of my vision for the Crossref community, I believe that one of the greatest opportunities and challenges for Crossref in the years ahead will be to expand the organization\u0026rsquo;s current membership to include new categories of members while continuing to serve the needs of the existing membership. I also believe that Crossref has a great opportunity to expand the constituencies it serves by building closer relationships with other organizations that share Crossref\u0026rsquo;s mission but cater to constituencies that are not currently represented in Crossref\u0026rsquo;s membership.\nIf Hindawi is re-elected to the Crossref Board, Craig Raybould (Hindawi\u0026rsquo;s Director of Operations) would be Hindawi\u0026rsquo;s alternate Board Member. In addition to currently serving as a member of Crossref\u0026rsquo;s Board, Craig is also involved in several closely-aligned initiatives including JATS4R and Metadata 2020.\nRichard Fidczuk, SAGE, UK\nFounded in 1965, SAGE is a leading independent, academic and professional publisher of innovative, high-quality content. SAGE publishes journals, books, digital publications and online resources across the STEM and HSS fields, and has been heavily involved with Crossref since its inception, with members on the Executive, Nominating and Finance Committees in recent years.\nWe are uniquely positioned because of our size and independent status to bridge the gap between the large commercial publishers and smaller publishers and to represent and understand the interests of both in a way no other organization can. SAGE publishes in partnership with a large number of society partners and so is intimately connected with the academic community and its changing needs, which SAGE can feed into its work with Crossref.\nBecause of the great breadth of its publishing, including journals (STM, HSS, and Open Access), college textbook publishing, library products and data, SAGE is intimately familiar with the large range of requirements and challenges faced by Crossref and therefore well situated to ensure that it remains at the forefront of developments, whether it is in the use and deployment of new services, identifiers, metadata, or new forms of content. The breadth of SAGE’s use of DOIs and the spectrum of coverage is vast when thinking about the use and place of Crossref services and the importance of quality identifiers and metadata services in our industry.\nSAGE can also call on a wide range of experts within the organization beyond the proposed Board members if necessary to assist in projects: SAGE staff already participate in a number of Crossref ad hoc technical working groups.\nPersonal statement\nI am Global Journals Production Director at SAGE Publishing, where I am currently responsible for the production of over 1000 journals, both traditional and open access. 
I have worked previously at IOP Publishing, Prentice Hall, and Blackwell Publishing, and I’ve been in the publishing industry for over 30 years, managing production operations of both books and journals. I was involved right at the start of the publication of online journals in the 1990s.\nI have served as SAGE representative for NISO (the National Information Standards Organization) and on the ALPSP (Association of Learned and Professional Society Publishers) Board, where I was Chair of their Research Committee. I am also a Chartered Director and Fellow of the UK Institute of Directors.\nAs a result of this wide experience I have a deep comprehension of the crucial role that Crossref plays as the underpinning of so much of the infrastructure of scholarly communications today. It’s important that Crossref continues to innovate as the needs of the scholarly community continue to evolve, and to aid this I can contribute and advise in both the technical aspects required to understand and to set Crossref’s vision and future strategy, and the commercial aspects needed to ensure that Crossref continues to be successful and to thrive into the future.\nThe alternate for me on the Board of Crossref would be John Shaw, Vice President, Publishing Technologies. This would reverse the current situation where John is the main Board member and I am his alternate.\nDuncan Campbell, Wiley, USA\nFounded in 1807, John Wiley and Sons (Wiley) publishes on behalf of more academic, professional and scholarly societies and associations than any other publisher.\nWiley was a founding member of Crossref, as was Blackwell, which merged with Wiley in 2007. We see Crossref as a core component of the research publishing ecosystem and are committed to helping Crossref as an organization meet its goals and objectives. Overall, Wiley strives to take a leadership role in scholarly publishing, and through initiatives such as CHORUS and ORCID (among others) we work with stakeholders across the industry to help develop infrastructure that supports researchers and scholarly publishing globally.\nWe are a global company, with major publishing centres in the United States, the United Kingdom, Germany, Australia, China, and Japan. Wiley colleagues are actively engaged in numerous industry membership associations, such as the International Association of Scientific, Technical and Medical Publishers (STM), the Association of Learned and Professional Society Publishers (ALPSP), the Publishers Association (PA) and the Association of American Publishers (AAP). Wiley is also a co-chair of Project DARE, which is working to address skills gaps in the Asia-Pacific region.\nWe recognize that the market for scholarly publishing is changing rapidly, creating opportunities to develop new ways to describe, share, and disseminate research advances; we are committed to supporting the transition to Open Science in all manifestations, and to managing the transition in a sustainable manner for all stakeholders.\nPersonal statement\nI am currently Director, Global Sales Partnerships, for Wiley’s Research business, where I am responsible for licensing, agent relations and copyright \u0026amp; permissions for Wiley’s academic journal and database content. In addition, I am engaged in developing Wiley’s strategies and policies in areas such as government affairs, content sharing/syndication and text \u0026amp; data mining. 
I am co-chair of the CLOCKSS digital archive, a not-for-profit joint venture between the world\u0026rsquo;s leading academic publishers and research libraries, and am also a member of the International Publishers’ Rights Organization (IPRO) board, and a non-executive director of Seren Books, a non-profit literary publisher based in Wales.\nI have represented Wiley on the Crossref board since 2015, and am currently a member of the Audit Committee and the Membership \u0026amp; Fees Committee. Crossref is a key component of the scholarly publishing ecosystem, and plays a major role in the development of the standards and infrastructure that we all depend on for our day-to-day publishing activities. If we want our industry to survive (and thrive) in the future, we need to work together as Crossref members and stakeholders to build a robust and sustainable open infrastructure for scholarly publishing that supports the continuing development of innovative products and services, irrespective of business models.\nWiley’s alternate is Sophia Joyce, Vice President, Content Strategy \u0026amp; Management.\n", "headings": ["Susan Murray, African Journals Online (AJOL), South Africa\n","Personal statement\n","Jasper Simons, American Psychological Association (APA), USA\n","Personal statement\n","Scott Delman, Association for Computing Machinery (ACM), USA\n","Personal Statement\n","Catherine Mitchell, California Digital Library (CDL), USA\n","Personal statement\n","Paul Peters, Hindawi, UK\n","Personal statement\n","Richard Fidczuk, SAGE, UK\n","Personal statement\n","Duncan Campbell, Wiley, USA\n","Personal statement\n"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/2017-slate/", "title": "Board election 2017 candidates", "subtitle":"", "rank": 2, "lastmod": "2017-09-29", "lastmod_ts": 1506643200, "section": "Board & governance", "tags": [], "description": "If you are a voting member of Crossref your \u0026lsquo;voting contact\u0026rsquo; will have received an email on September 28, 2017. If you are the designated voting contact please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 25 applications, our Nominating Committee proposed the following nine candidates to fill the six seats open for election to the Crossref Board of Directors. For the first time, we have more candidates than available seats:", "content": "If you are a voting member of Crossref your \u0026lsquo;voting contact\u0026rsquo; will have received an email on September 28, 2017. If you are the designated voting contact please follow the instructions in that email which includes links to the relevant election process and policy information.\nFrom 25 applications, our Nominating Committee proposed the following nine candidates to fill the six seats open for election to the Crossref Board of Directors. 
For the first time, we have more candidates than available seats:\nMember organization Candidate standing Country American Institute of Physics Jason Wilde USA F1000 Research Liz Allen UK Institute of Electrical and Electronics Engineers Gerry Grenier USA The Institution of Engineering and Technology Vincent Cassidy UK Massachusetts Institute of Technology Press Amy Brand USA OpenEdition Marin Dacos France SciELO Abel Packer Brazil SPIE Eric Pepper USA Vilnius Gediminas Technical University Press Eleonora Dagiene Lithuania Jason Wilde, American Institute of Physics (AIP), USA\nFor more than 80 years, AIP Publishing has championed the needs of the global research community as a wholly owned, not-for-profit subsidiary of the American Institute of Physics (AIP). Our mission is to support the charitable, scientific and educational purposes of AIP through scholarly publishing activities across the physical sciences.\nFrom being an instigator in the development of CHORUS to taking a lead role in enabling public access to research content, AIP Publishing has played a pivotal role in shaping our industry. Through our role on the boards of key industry bodies, Crossref, International Association of Scientific, Technical and Medical Publishers (STM), CHORUS, and Professional and Scholarly Publishing (PSP) Executive Council, AIP Publishing continues to ensure that the voice of small, scholarly publishers is clearly heard.\nAt a time when a small number of large commercial publishers are taking dominant roles in scientific publishing, there is an ongoing need for strong representation of self-published not-for-profit societies, such as AIP Publishing, across our industry and, most importantly, on the Crossref board.\nPersonal statement\nAs the Chief Publishing Officer (CPO) for AIP Publishing, Jason is responsible for the leadership, vision, and growth of the publishing program. Jason currently serves as a board member for Crossref (2014–2017) and previously served from 2007–2013, when he was with Nature. Jason was a member of the Crossref Nominating Committee in 2011, 2012, and 2015, and in 2017 was elected to Crossref’s Executive Committee.\nWith 20 years’ experience in both large for-profit and smaller not-for-profit organizations and significant involvement in the Crossref organization, Jason has a unique understanding of our industry and the vital role that Crossref plays. Crossref is not only the core infrastructure that links research and aids discoverability, but it provides the framework and data on which many other industry tools and services are built. As publishing continues to evolve, there is opportunity for Crossref to increasingly serve as a focal point for the industry, fostering collaboration among publishers, and developing solutions that serve the needs of our customers – researchers, librarians, funders and governments.\nBefore joining AIP Publishing, Jason was Business Development Director for Nature Publishing Group (NPG). During his twelve years with NPG, Jason oversaw the creation of the Nature-branded physical science research journals and NPG’s Open Access publications Nature Communications, Scientific Reports and Scientific Data. 
Prior to his career in publishing, Jason earned degrees from Dundee University (BEng in Electrical and Electronic Engineering), Cambridge University (PGCE in Balanced Science and Physics) and Durham University (PhD in Molecular Electronics).\nJason also serves on the board of the International Association of Scientific, Technical and Medical Publishers (STM).\nThe alternate for Jason on the board of Crossref would be Dr. John Haynes, CEO AIP Publishing.\nLiz Allen, F1000, UK\nF1000 Research would be an important and strategic addition to the Crossref Board at this time. Faculty of 1000 is a long-standing open research innovator and since 2013 has been working with a range of partners across the world to provide online, open research publishing solutions, to accelerate access to research findings outside of traditional publishing outlets, and to support the associated use and potential impact of research (e.g. Wellcome Open Research; Gates Open Research; F1000Research).\nUltimately Crossref exists to build an infrastructure to bring about efficiencies in how we deliver, discover and use knowledge and scholarly insights. F1000 is therefore well placed to specifically advise and play an active role in Crossref Board and strategic discussions about how best to ensure that newer research publishing models and outlets are considered in specific initiatives and broader strategy and projects (e.g. Event Data; Funding Registry; Crossmark). In addition, through its direct work with funding agencies, research institutions and Learned Societies, F1000 would bring a broad, cross-sector perspective and experience to the Crossref Board as it seeks to evolve its scholarly infrastructure and metadata requirements and services.\nPersonal Statement\nI am currently Director of Strategic Initiatives at F1000, and involved in seeking opportunities and shaping new initiatives particularly to promote and foster open research. I have spent much of my career thus far involved in projects and initiatives that aim to improve the understanding of how science progresses and how knowledge can be used - essentially to accelerate access to and the potential impact of research. In 2015 I became a Visiting Senior Research Fellow in the Policy Institute at King’s College London and continue to advise on academic projects that seek to understand research impact.\nPrior to joining F1000 in 2015, I spent over a decade as Head of Evaluation at the Wellcome Trust (a major biomedical research funding agency), with a particular specialism in impact assessment and the development of science-related indicators, serving as an adviser on the 2015 UK government commissioned Independent review of the role of research metrics in research assessment https://www.hefce.ac.uk/rsrch/metrics/. I understand the vital importance of building a data infrastructure to connect science, scientists and associated research outputs. I was a Board Director of ORCID between 2010-2015 and helped to mandate the adoption of ORCID for all Wellcome grantees. 
While at Wellcome I also co-led the development of project CRediT (Contributor Roles Taxonomy - http://www.casrai.org/CRediT) and continue to serve on the CASRAI CRediT committee.\nDuring my time at Wellcome I played an active role in helping Crossref to shape what is now the Funding Registry and worked with funding agencies to encourage the adoption of consistent ways to classify and capture research-related data – though there is lots more to be done to consolidate and develop this valuable information.\nI am very supportive of Crossref’s metadata drive and think that my knowledge and cross-sector experience and networks would make me a valuable addition to the Crossref Board at this time. If I were appointed to the Crossref Board, my alternate would be Michaela Torkar, PhD, Publishing Director at F1000.\nGerry Grenier, Institute of Electrical and Electronics Engineers (IEEE), USA\nThe Institute of Electrical and Electronics Engineers (IEEE) is the world’s leading technical membership organization, with over 420,000 members and a mission to advance technology for the benefit of society. It inspires a global community of scholars and industry professionals to innovate for a better tomorrow through highly-cited publications, conferences, technology standards, and professional education services.\nA member and supporter of Crossref since its inception, IEEE sees its contribution to the organization as an opportunity to influence the future of scholarly publishing through continuing development of innovative and shared collaborative services for the industry. IEEE as a publisher and a not-for-profit organization is at the forefront of numerous initiatives to serve authors, reviewers, readers, and customers and brings a community-centric approach to its work with Crossref. It will seek to balance the needs of Crossref’s global consumers and its diverse member base, advocating sustainable, fiscally sound management of the organization.\nPersonal statement\nGerry is Senior Director of Publishing Technologies \u0026amp; Content Management, a role he has held since joining IEEE in 1999. He manages the development and operation of the IEEE’s publishing ecosystem, including the systems used to create, enhance, distribute, and archive IEEE’s intellectual property. He has been a member of the Crossref Board since 2004, a member of its Executive Committee since 2013, and its Treasurer since 2016.\nGerry’s career in the STM business began in 1980 with editorial and production roles for life sciences publisher Alan R. Liss, Inc. He joined John Wiley \u0026amp; Sons with its acquisition of Liss in 1987 and served in editorial, production, and technology roles \u0026ndash; culminating in his role as Director of Publishing Technology. He was a key member of the team that developed Wiley’s initial online digital library, Wiley Interscience.\nHe currently serves on the NISO Board of Directors, is a past Board Member of International STM, a Past Chair of the New Jersey Chapter of the American Society for Information Science, and is a member of the Association for Computing Machinery.\n(Alternate) Michael Forster, Managing Director, Publications\nMichael is Managing Director for Publications at IEEE, a position he has held since late 2015, and has more than two decades of experience in educational, professional, and research publishing. 
Working for both Elsevier and Wiley prior to IEEE, he has served in senior management positions in the UK, the US, and Germany, with expertise in strategic planning, M\u0026amp;A and portfolio analysis, product management, and content development.\nAs a leader within Wiley for nearly a decade, he was responsible for journal, book, magazine, database, and scientific workflow product portfolios, led development of new products in researcher workflow, and introduced the first semantic enrichment products to Wiley’s online platform. Michael began his career with publisher Butterworth-Heinemann, then worked as Publishing Director for Engineering and Computer Science in Elsevier’s Science and Technology Books business, where he led innovative projects in online learning and eBook products and worked to transform business models for the textbook market.\nMichael received his B.A. (Hons) and M.Eng. in Engineering Science from the University of Oxford.\nVincent Cassidy, Institution of Engineering and Technology (IET), UK\nThe Institution of Engineering and Technology will contribute to the Crossref board in two ways. Firstly, as a learned and professional society publishing a broad range of content (journals, books, standards, multi-media), we will represent the interests of a significant section of Crossref’s core membership. Learned membership bodies are developing their own distinct approaches to the open science and open data debate and, being well networked within the world of learned societies, we will be able to reflect the role of the learned society in the changing landscape of scholarly discourse.\nIn addition, through the process of semantically enriching Inspec, our flagship A\u0026amp;I service, we appreciate the opportunities and concerns inherent in managing distributed big-data assets and the evolution of new service models and relationships in the scholarly network. Inspec supports a metadata service business and gives the IET an interesting perspective on the challenges and opportunities facing Crossref.\nPersonal statement\nI have 30 years’ experience in scholarly and professional publishing and, having held leadership positions at Academic Press, Thomson Reuters, Elsevier Health Sciences, British Standards (BSI) and the IET, I have experience of many information markets and sectors across Crossref’s membership base. I have worked closely with Crossref and DOIs from the birth of the organisation (in my time working with Academic Press\u0026rsquo;s Ideal project) and have continued to support and promote Crossref through my career, including leading the adoption of DOIs by the standards community during my period at British Standards.\nI understand the key role of standards in our industry and Crossref’s value and potential in the networked information economy. It is easy to take Crossref\u0026rsquo;s central role in facilitating scholarly discourse for granted, and it is instructive to remember the high degree of competition between the major commercial publishers that was the original spur for Crossref\u0026rsquo;s creation. Crossref has facilitated the development of a system of networked content that has improved the process of scholarly communication.\nMy vision is that our content-centric industry is evolving new service dimensions that will enhance not only the content output but the process of research and development itself. 
Crossref can continue to add value by providing the standards and the institution to connect this increasingly complex stakeholder group.\nI propose Sara Killingworth (Head of Marketing, IET) as our alternate.\nAmy Brand, Massachusetts Institute of Technology Press (MIT Press), USA\nOne of the largest and most distinguished university presses in the world, the MIT Press is known for bold content and design, creative technology, and its commitment to re-imagining university-based publishing. By electing the MIT Press to the Crossref Board, the Crossref community will have the representation not only of a leading university press, but also of a technically progressive press that spans STEM and HSS fields within its book and journal programs. The MIT Press is resolutely focused on pushing the boundaries of digital publication across the board, and is committed to integrating DOIs into all aspects of digital publication. Core to our vision is enriching the metadata environment for academic research, and interlinking as extensively as possible with text, data and nontraditional objects, and with persistently identified organizations, stakeholders, and roles that comprise a more articulated scholarly communication infrastructure. With a seat on the Crossref Board, the MIT Press will be able to help grow adoption of DOIs for books and other digital objects within the university press community, many members of which have only recently embarked on digital publishing.\nPersonal statement\nAmy Brand worked at Crossref for several years in the early days of the organization, serving as Director of Business and Product Development from 2001-2007. Hence, she would bring to the Crossref Board a uniquely deep perspective on how the organization functions, how members of our community work together to move a shared agenda forward, and what the opportunities are for continuing to develop and improve Crossref services.\nBecause her career has afforded her experience in several regions of the scholarly communication landscape — academic research, university administration, open access, analytics, entrepreneurship, in addition to book and journal publishing — Amy brings valuable insight into diverse perspectives.\nShe is unusually well qualified to help build the consensus that makes Crossref work so well. In recent years, she helped launch the CRediT contributor role taxonomy initiative, and she’s currently involved in a new community-wide project to grow the adoption of peer review badges. Both of these projects produce metadata that can and should be integrated into the Crossref system. Amy was also a founding member of the ORCID Board of Directors and currently serves on the National Academies Board on Research Data and Information, the Duraspace Board of Directors, and the advisory board for altmetric.com.\nThe MIT Press alternate to the Crossref board, should we be elected, would be our Journals Director, Nick Lindsay. Nick has successfully run the MITP journals program since 2009, having brought forward several significant technology and business model changes that positively transformed the division. Prior to his move to the MIT Press, Nick worked at the University of California Press in their journals department. 
He has served on several committees within the Society for Scholarly Publishing and the Association of American University Presses, and chaired the scholarly journals committee for AAUP for two years.\nMarin Dacos, OpenEdition, France\nOpenEdition is a major European platform for HSS, which provides publishing services for 450 journals, 4000 books and more than 2000 academic blogs. We are based in Marseille and Paris. We are pursuing long-term projects in Italy (OpenEdition Italia), Portugal (LusOpenEdition), Germany (de.hypotheses.org), Spain (es.hypotheses.org) and the Netherlands (OpenEdition is the co-founder of the DOAB Foundation, with the OAPEN Foundation). Our content is published in five European languages: French, English, German, Spanish and Italian. We are in the process of establishing a European network of all players involved in open digital publishing. This is OPERAS http://operas.hypotheses.org, which brings together 20 partners, 9 countries and two rounds of H2020 funding. Together, those involved have published 800,000 documents by 250,000 authors.\nOpenEdition’s presence on the board will show that disciplines whose scholarly communication is plurilingual have a place within the organization. It will also underline the inclusion of non-STM platforms and publishers as integral parts of the digital scholarly publishing sector. This will help Crossref to embrace the cultural and organizational diversity of its communities.\nPersonal statement\nWith 20 years of experience as the founding director of OpenEdition, I am very familiar with the diversity of the HSS publishing ecosystem in Europe. I have been at the forefront of the digitalization of scholarly communication, supporting editors, publishers and academics in shifting from paper to electronic formats. This ongoing revolution has deeply transformed the HSS publishing industry, with some embracing it and inventing new formats and economic models, and others remaining reluctant and seeing it as only a secondary market and communication channel.\nIn the face of this diversity, I have developed tools and adapted standards in order to include as many publishing actors as possible and to drive them to the most open solutions. As DOIs and Crossref have become key services for open science, it is necessary to take into account small publishers and platforms, in order to provide a comprehensive service for the whole industry, and not only to major players and international initiatives. To do so, the diversity of languages and disciplines should be acknowledged in future developments of Crossref services.\nI have been using Crossref’s services as a publisher for almost a decade and know them fairly well. 
I hope that these services will expand even more substantially in the future, particularly in terms of covering minor forms of scientific communication such as blogs, as well as by offering functionalities specific to fine-grained forms of publishing (particularly critical editions of historical sources).\nProposed alternate: Pierre Mounier (EHESS, OpenEdition).\nAbel Packer, SciELO, Brazil\nMy organization is the Scientific Electronic Library Online (SciELO) Program, an international collaboration on scholarly communication that aims to improve the quality, visibility and credibility of nationally edited journals and the research they communicate.\nSciELO was launched in 1998 in Brazil and it is implemented through the SciELO Network, composed of 15 national collections of selected open access, peer-reviewed journals from Latin American countries, Portugal, Spain and South Africa. The network is fully decentralized, using the same principles and methodology, and publishes about 1,000 journals from all disciplines and 50,000 new articles per year, with an accumulated repository of over 700,000 articles. The network serves over one million downloads per day according to the COUNTER methodology.\nSciELO is probably the most important international cooperation program on scientific communication among developing countries.\nPersonal statement\nWith more than 20 years of experience in scientific information management in Latin America, my application aims to enrich the Crossref Board with the voice and demands of Latin America and other developing regions on the adoption of Crossref products and services.\nMy alternate will be Fabio Batalha, SciELO\u0026rsquo;s leader for systems development and information technology infrastructures. Fabio has extensive experience in the management of Crossref services.\nEric Pepper, SPIE, USA\nI have worked in publishing at SPIE, the international society for optics and photonics, for thirty-five years, the past twenty-four of those as Director of Publications, with responsibilities encompassing editorial, production, technology, and business aspects of all our publishing activities. In this time I have seen and been directly involved in enormous changes in our industry and in how we think about, produce, and disseminate scholarly information.\nCrossref is of course an important outcome of this transformation. SPIE’s first connection to Crossref dates back to 2000 when our journals were hosted on the AIP platform and we participated in Crossref via an agency agreement with AIP. SPIE joined as an independent member a few years after that.\nPersonal statement\nI have been SPIE’s liaison to Crossref during this entire period and in addition to basic DOI registration and linking was instrumental in adding Similarity Check, Funder Registry, and now Cited-by to the suite of Crossref services we use. SPIE greatly values these Crossref services and the organization that developed and enables them, and will appreciate an opportunity to have a voice in Crossref’s governance and strategic direction.\nAs our representative, I believe that my experience helping to develop and manage a diverse portfolio of scholarly publications in a variety of print and digital formats has given me the perspective needed to understand the needs of publishers as well as the research and educational communities that provide and utilize the content we publish. 
I have worked with many other organizations, vendors, and publishing partners over the years and through that have developed an understanding of broader industry needs and priorities, which I can bring to my role in Crossref leadership should I have the opportunity.\nEleonora Dagiene, VGTU Press, Lithuania\nIn recent years, many small publishers have joined Crossref. VGTU Press has been representing this particular and growing group of members on the Board for three years. We are an innovative small publisher functioning within a university structure, which means we are close to the academic community and aware of everyday issues faced by researchers, librarians, and administrators.\nThe services offered by Crossref enable small publishers to keep up with the rapidly changing scholarly communication trends. However, many small publishers find it difficult to implement some of the services because they have fewer staff and limited technical expertise. The young, promising, and curious team of VGTU Press helps Crossref understand these challenges, and we share with staff and the rest of the Board the reasons and causes of such obstacles. VGTU Press seeks to implement as many Crossref innovations as possible, so we have firsthand experience with almost every initiative that Crossref is involved in, actively working with staff to share information and help with improvements.\nAs a small publisher supporting Crossref initiatives, VGTU is a good example for other small publishers, showing the feasibility of implementing not only cornerstone services, such as DOI registration or the use of Similarity Check, but also other services offered by Crossref. As the academic community continues to seek new forms and models of publishing, VGTU Press, as one of the most advanced non-profit academic publishers in the Baltics, would be an excellent choice as a representative of small publishers, an area of the world never previously represented on the Crossref Board, which is heavily monopolized by large US and UK publishers.\nPersonal statement\nAs someone in the modern academic publishing industry with over ten years’ experience, and for whom work in scholarly communication is a personal passion, I would be a great candidate to re-elect to Crossref\u0026rsquo;s board. Serving on the Board for the term 2015–2017 was not only a privilege and an excellent driver of my professional development but also a great pleasure. It also means I can offer some continuity as I fully understand the key issues that currently face Crossref and the Board. As Director of VGTU Press and President of the Association of Lithuanian Serials, I have gained considerable expertise in scholarly publishing, especially familiarity with the challenges faced by small publishers. I have learned from my experience that a person running a small press and making it successful and contemporary should have knowledge of different areas and the ability to think creatively.\nTo manage processes efficiently, it is important not only to gain a thorough understanding of everyday activities in a press but also to take an interest in innovations, which are usually related to technology. 
This is especially true when descriptions of current innovations can sound like magic words, demanding more time to gain the required level of understanding.\nLiaising with researchers/authors, journal editors, and university employees is a part of my daily agenda, which helps me to keep my finger on the pulse of needs and expectations arising from the academic community. I am also a researcher, author and peer-reviewer, working in the field of scholarly communication.\nMy three years of experience as a member of the Crossref Board provided me with an excellent opportunity to enhance my understanding of Crossref services. Moreover, I have valuable knowledge of various innovations and it is clear to me what problems current scholarly communication faces.\nFurthermore, I firmly believe that many publishing-related improvements would have been impossible without the services provided by Crossref. It would be an honour to be selected for an additional term. In any case, I would like to take this opportunity to say that I feel inspired by Crossref activities and will continue to promote them actively.\n", "headings": ["Jason Wilde, American Institute of Physics (AIP), USA\n","Personal statement\n","Liz Allen, F1000, UK\n","Personal Statement\n","Gerry Grenier, Institute of Electrical and Electronics Engineers (IEEE), USA\n","Personal statement\n","Vincent Cassidy, Institution of Engineering and Technology (IET), UK\n","Personal statement\n","Amy Brand, Massachusetts Institute of Technology Press (MIT Press), USA\n","Personal statement\n","Marin Dacos, OpenEdition, France\n","Personal statement\n","Abel Packer, SciELO, Brazil\n","Personal statement\n","Eric Pepper, SPIE, USA\n","Personal statement\n","Eleonora Dagiene, VGTU Press, Lithuania\n","Personal statement\n"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/crossref-annual-meeting/2023-annual-meeting/", "title": "Crossref annual meeting and board election 2023", "subtitle":"", "rank": 6, "lastmod": "2023-07-27", "lastmod_ts": 1690416000, "section": "Crossref Annual Meeting", "tags": [], "description": "#Crossref2023 online, October 31, 2023 Our annual meeting, #Crossref2023, was held online on October 31 from 9:30 AM UTC to 4:30 PM UTC (coordinated universal time). We invited all our members from 170+ countries, and everyone in our community, to hear the results of our board election and team updates.\nPlease see information from #Crossref2023 below, and cite the outputs as #Crossref2023 Annual Meeting and Board Election, October 31, 2023 retrieved [date], https://doi.", "content": "#Crossref2023 online, October 31, 2023 Our annual meeting, #Crossref2023, was held online on October 31 from 9:30 AM UTC to 4:30 PM UTC (coordinated universal time). We invited all our members from 170+ countries, and everyone in our community, to hear the results of our board election and team updates.\nPlease see information from #Crossref2023 below, and cite the outputs as #Crossref2023 Annual Meeting and Board Election, October 31, 2023 retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/h3yygefpyf:\nUpdate on the Research Nexus The latest on the Research Nexus by Ginny Hendricks, Crossref Updates from the community Making data citations available at scale: The Global Open Data Citation Corpus by Iratxe Puebla, DataCite Who cares? 
Defining Citation Style in Scholarly Journals by Vincas Grigas, Vilnius University DOI registration for scholarly blogs by Martin Fenner, Front Matter Grant DOIs at the AU Publications Office by Izabela Szprowska, OP and European Commission; Nikolaos Mitrakis, European Commission; and Paola Mazzucchi, mEDRA Demos, Experiments and Q\u0026amp;A Registration form for journal content by Lena Stoll, Crossref PKP to demo the latest plugin by Erik Hanson, PKP Data citations matching by Martyn Rittman, Crossref Using Crossref API by Luis Montilla, Crossref RW data in Labs API by Rachael Lammey \u0026amp; Martin Eve, Crossref Digital preservation reports by Martin Eve, Crossref DOIs for static site generators by Esha Datta, Crossref Panel Discussion What do we still need to build the Research Nexus? Panel discussion with Ginny Hendricks, Crossref; Patricia Feeney, Crossref; Matt Buys, DataCite; Kevin Stranack, PKP; Ludo Waltman, CWTS Leiden University; Mercury Shitindo, St. Paul\u0026rsquo;s University, Kenya; Ran Dang, Atlantis Press. Recording\nAnnual meeting and board election and Crossref strategy Crossref Strategy by Ed Pentz, Crossref Thanks to partners and advocates by Johanssen Obanda Member governance and board election led by Lucy Ofiesh, Crossref Spotlight on community initiatives Enhancing Research Connections through Metadata: A Case Study with AGU and CHORUS by Tara Packer (CHORUS), Sara Girard (CHORUS), Shelley Stall (AGU), Kristina Vrouwenvelder (AGU) Index Crossref Integrity, Professional, and Institutional Development by Engjellushe Zenelaj, Reald University College Brazilian retractions in the Retraction Watch Database by Edilson Damasio, Maringá State University / Crossref Ambassador Now that you\u0026rsquo;ve published, what do you do with metadata? by Joann Fogleson, American Society of Civil Engineers ROR / Open Funder Registry Overlap by Amanda French, Crossref Other Outputs Presentation: Google slides or pdf slides #Crossref2023 Mastodon stream #Crossref2023 Twitter stream Posters from community guest speakers Board election results LIVE22 online, October 26, 2022 Our annual meeting, LIVE22, was held online on October 26 at 4:00 PM UTC (universal coordinated time). We invited all our members from 140+ countries, and everyone in our community, to hear the results of our board election and team updates.\nHere are some of the outputs from the full session:\nEd Pentz spoke about our vision, mission, strategic goals, update on our efforts toward POSI, and our role in ISR (inclusive scholarly record). Vanessa Fairhurst and Isaac Farley highlighted contributors from those in our community who help make our work possible. Kora Korzec led our community speaker session, \u0026ldquo;Building the Research Nexus together: flash talks\u0026rdquo;, with presentations by Hans de Jonge, Bianca Kramer, Javier Arias, Julie Lambert, Lettie Y. Conrad, and Edilson Damasio. Amanda Bartell and Patricia Feeney talked about the state of membership and the metadata members register with us. Dominika Tkaczyk talked about our work around linking grants to research outputs. Lucy Ofiesh led our annual meeting and board election results, and looked at our financial performance and the 2023 draft budget. Led by our ambassadors, four in-person \u0026lsquo;LIVE\u0026rsquo; satellite events took place in Lithuania, Brazil, Turkey, and Kenya. Some included a watch party of the meeting, and all included good talks and discussions about metadata and the scholarly record.
Please check out the materials from LIVE22 below, and cite the outputs as Crossref Annual Meeting LIVE22, October 26, 2022 retrieved [date], [https://0-doi-org.libus.csd.mu.edu/10.13003/i3t7l9ub7t]:\nYouTube recording Recording transcript Zoom Q\u0026amp;A transcript Google slides or pdf slides #CRLIVE22 Twitter stream Posters from community guest speakers Board election results\nThe annual meeting archive Browse our archive of annual meetings with agendas and links to previous presentations from 2001 through 2015. It\u0026rsquo;s a real trip down memory lane!\nPlease contact us with any questions.\n", "headings": ["#Crossref2023 online, October 31, 2023","Update on the Research Nexus","Updates from the community","Demos, Experiments and Q\u0026amp;A","Panel Discussion","Annual meeting and board election and Crossref strategy","Spotlight on community initiatives","Other Outputs","LIVE22 online, October 26, 2022","The annual meeting archive"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/brand/", "title": "Brand & logos", "subtitle":"", "rank": 3, "lastmod": "2025-02-07", "lastmod_ts": 1738886400, "section": "Brand & logos", "tags": [], "description": "Typeface We use the Inter typeface family. It is free and open source, and the fonts are optimised for accessibility on computer screens. They are also great for our global community as they feature over 2000 glyphs covering 147 languages.\nLogos Please reference our logos on your site using the snippets below, rather than copying locally. This will ensure that if/when the logos change in future, they will automatically be updated across the web.", "content": "Typeface We use the Inter typeface family. It is free and open source, and the fonts are optimised for accessibility on computer screens. They are also great for our global community as they feature over 2000 glyphs covering 147 languages.\nLogos Please reference our logos on your site using the snippets below, rather than copying locally. This will ensure that if/when the logos change in future, they will automatically be updated across the web. Contact our community team if you’d like a variation or to request special use.\nStacked organisational logo Please reference this default logo to represent Crossref, the organisation, and use this snippet: \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/crossref-logo-200.svg\u0026quot; width=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref logo stacked\u0026quot;/\u0026gt;.\nLandscape logo The landscape version of our logo can be used when space is tight, with the snippet \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/crossref-logo-landscape-200.svg\u0026quot; width=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref landscape logo\u0026quot;\u0026gt;.\n\u0026lsquo;Metadata-from\u0026rsquo; logo If you use our metadata, the research community might like to know your sources, so here\u0026rsquo;s a logo that you can use, with the snippet \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/metadata-from-crossref-logo-200.svg\u0026quot; width=\u0026quot;200\u0026quot; alt=\u0026quot;Metadata from Crossref logo\u0026quot;\u0026gt;.\nDisplaying links Crossref identifiers are also persistent links and should be displayed as full HTTPS links within reference lists and anywhere else on your website that you use Crossref DOIs.
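As an illustrative sketch only (not an official Crossref snippet), a reference-list entry could mark up a DOI as a full HTTPS link like this, using the example identifier shown below: \u0026lt;a href=\u0026quot;https://doi.org/10.13003/5jchdy\u0026quot;\u0026gt;https://doi.org/10.13003/5jchdy\u0026lt;/a\u0026gt;.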
Optionally, you can add the Crossref icon (the \u0026lsquo;zigzag\u0026rsquo;) prepended to the link, like this:\nhttps://doi.org/10.13003/5jchdy\nPlease review the official Crossref display guidelines for full details.\nBadges We know that many of you like to reference Crossref in your communications, promoting how you are contributing to scholarly infrastructure. To support this, we have created a set of Account badges. If you currently use the Crossref logo on your website, you might like to replace it with or add your Account Type badge.\nColour palette Blue - Hex #3eb1c8 Sand - Hex #d8d2c4 Grey - Hex #4f5858 Red - Hex #ef3340 Dark Blue - Hex #005f83 Secondary palette - use sparingly as an accent only Yellow - Hex #ffc72c Green - Hex #00ab84 Mocha - Hex #a39382 Gold - Hex #ffa300 Orange - Hex #fd8332 Dark Red - Hex #a6192e Brand guide This brand guide was produced in 2016 and is somewhat out of date, e.g. we changed typefaces to Inter. But the general rules about the logo still apply!\nGet in touch with the Community team with any questions.\n", "headings": ["Typeface","Logos","Stacked organisational logo","Landscape logo","\u0026lsquo;Metadata-from\u0026rsquo; logo","Displaying links","Badges","Colour palette","Secondary palette - use sparingly as an accent only","Brand guide"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/preprints-meeting-notes/", "title": "Preprint AG meeting notes", "subtitle":"", "rank": 1, "lastmod": "2024-05-03", "lastmod_ts": 1714694400, "section": "Working groups", "tags": [], "description": "Preprint Advisory Group meeting notes On this page you can find a summary of meetings held by the preprint advisory group.\nAG meeting, 25 April 2024 Agenda Update on metadata schema changes. Update on new preprint matching method. iThenticate removal requests. Actions Schema changes will be open for community consultation in the coming weeks. Work continues on implementation of the new matching method, although it will be delayed by other high-priority development work.", "content": "Preprint Advisory Group meeting notes On this page you can find a summary of meetings held by the preprint advisory group.\nAG meeting, 25 April 2024 Agenda Update on metadata schema changes. Update on new preprint matching method. iThenticate removal requests. Actions Schema changes will be open for community consultation in the coming weeks. Work continues on implementation of the new matching method, although it will be delayed by other high-priority development work. AG meeting, 22 February 2024 Agenda Development work starting on implementing preprint matching in production. Proposal for schema changes to support preprints. Follow-up discussion on modifying titles for withdrawn preprints. Actions Crossref to continue implementation of new preprint matching method. Continued feedback from the group about proposed schema changes. Crossref to seek community feedback on schema changes. AG meeting, 16 November 2023 Agenda Update on preprint matching: public dataset of matches from the whole corpus available, see blog post. Discussion with Anurag Acharya (Google Scholar) about identifying withdrawn preprints. Actions Members to consider whether to modify titles to indicate withdrawn status. AG meeting, 24 August 2023 Agenda Presentation of results on preprint/article matching prototype (the code is open). Actions Plan for production implementation of the best strategy for matching. AG meeting, 22 June 2023 Agenda Crossref schema change survey and progress.
Crossref investigations into improving preprint/article matching. Discussion of the eLife editorial process. Actions Priority list for schema changes at Crossref. Continue experiments in preprint matching. AG meeting, 23 February 2023 Agenda Welcome to new members. Update on schema changes: upcoming survey of Crossref members to prioritize planned changes. Introduction to the new eLife editorial workflow and how metadata is handled. Introduction to Docmaps. AG meeting, 27 October 2022 Agenda Upcoming changes to preprint/journal-article notification emails. Discussion of feedback on preprint metadata recommendations: Origin of withdrawals. Representation of editorial progress in preprint metadata. Work of related groups, including NISO and ASAPbio. Interoperability of Crossref metadata with other platforms. Actions No suggested changes to recommendations as a result of feedback. Review forthcoming report on AG discussions. AG meeting, 25 August 2022 Agenda Adding summary of AG meetings to the Crossref website. Update on feedback recommendations. Forthcoming COAR/ASAPbio recommendations. Crossmark documentation to support registration of DOIs for preprints. Actions Take feedback on documentation to colleagues looking at website design. Revise the subject line of preprint/journal-article notification emails. AG meeting, 23 June 2022 Agenda Publication of AG recommendations. Discussion of withdrawal processes. Notify Project from COAR. Actions Public posting of AG recommendation document. AG meeting, 26 April 2022 Agenda Multilingualism in preprints. Actions Improve Crossref documentation to explain which fields can have multilingual content and how to deposit the metadata. Clearer guidance from Crossref about how to register DOIs for lay summaries in different languages. AG meeting, 17 March 2022 Agenda Feedback from recent Crossref Board meeting. Discussion on preprint recommendations: Relationships. Versioning. Process retrospective. Actions Crossref can consider matching preprints to articles and adding them directly to metadata provided the false-positive rate can be quantified and is very low. Crossref should provide more guidance on when to use types of relationships. Recommendation discussion around a \u0026lsquo;citable DOI\u0026rsquo; removed from recommendations document. Seek public feedback on recommendations. AG meeting, 16 February 2022 Agenda Updates to preprint/journal-article notification emails. Working group summaries. Metadata recommendation discussions: Withdrawals and removals. Preprints as an article type. Preprint relationships. Actions Discuss Crossref\u0026rsquo;s fee structure in a future meeting. Keep the term \u0026lsquo;preprint\u0026rsquo; despite its limitations if it\u0026rsquo;s taken too literally. Crossref should consider notifying interested parties when a preprint is withdrawn. Asynchronous discussion, October 2022 Agenda The balance between best practice and capturing current practice. Follow-up topics from previous prioritization exercise. Actions Make recommendations optional, do not force specific practices. Come back to language metadata in a future meeting. AG meeting, 13 September 2021 Agenda Initial discussion outcomes of prioritization exercise: Version number. Withdrawal/removal and Crossmark metadata. Establish questions to research, look at current practice. Actions Condense discussions into a list of recommendations. AG meeting, 26 July 2021 Agenda Outcomes of prioritization exercise. Preprints as an article type. Relationships to/from preprints.
Actions Continue discussion on other selected topics in next meeting. AG meeting, 7 June 2021 Agenda Scope of the group. Review of current metadata practice for preprints. Actions Prioritize topics around preprint metadata. Appoint chair. ", "headings": ["Preprint Advisory Group meeting notes","AG meeting, 25 April 2024","Agenda","Actions","AG meeting, 22 February 2024","Agenda","Actions","AG meeting, 16 November 2023","Agenda","Actions","AG meeting, 24 August 2023","Agenda","Actions","AG meeting, 22 June 2023","Agenda","Actions","AG meeting, 23 February 2023","Agenda","AG meeting, 27 October 2022","Agenda","Actions","AG meeting, 25 August 2022","Agenda","Actions","AG meeting, 23 June 2022","Agenda","Actions","AG meeting, 26 April 2022","Agenda","Actions","AG meeting, 17 March 2022","Agenda","Actions","AG meeting, 16 February 2022","Agenda","Actions","Asynchronous discussion, October 2022","Agenda","Actions","AG meeting, 13 September 2021","Agenda","Actions","AG meeting, 26 July 2021","Agenda","Actions","AG meeting, 7 June 2021","Agenda","Actions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/our-ambassadors/africa/", "title": "Meet our ambassadors in Africa", "subtitle":"", "rank": 4, "lastmod": "2023-08-01", "lastmod_ts": 1690848000, "section": "Get involved", "tags": [], "description": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.", "content": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.\nSee who is in Africa:\nAlgeria - Younes Saaid Younes SAAID is in charge of Scientific and Technological Information at the National Centre for Research in Social and Cultural Anthropology (CRASC) in Oran, Algeria. He is a skilled researcher with a master\u0026rsquo;s degree in Linguistics and Language Contact and is currently pursuing a Ph.D. in the Didactics of English for Specific Purposes at the University of Oran 2. His expertise includes language contact, language teaching methodologies, and English for Specific Purposes. Younes actively contributes to the field by indexing academic journals and enhancing research visibility. 
He also plays a vital role in the Algerian Scientific Journals Platform (ASJP), promoting Algerian research and managing the Open Journal Systems (OJS) for various journals.\nيونس سعيد مكلف بالإعلام العلمي والتكنولوجي في المركز الوطني للبحث في الأنثروبولوجيا الاجتماعية والثقافية (كراسك) في وهران، الجزائر. هو باحث ماهر حاصل على الماستر في اللسانيات وتواصل اللغات، وهو حالياً يحضر لنيل درجة الدكتوراه في تعليم الإنجليزية لأغراض محددة في جامعة وهران 2. تشمل خبرته اتصال اللغات، مناهج تعليم اللغات، والإنجليزية لأغراض محددة. يساهم يونس بشكل فعال في هذا المجال من خلال فهرسة المجلات الأكاديمية وتعزيز رؤية البحث. كما يلعب دورًا مهمًا في منصة المجلات العلمية الجزائرية، حيث يروج للبحث الجزائري ويدير منصة نظم المجلات المفتوحة لعدة مجلات. Cameroon - Audrey Kenni Nganmeni I am Audrey Kenni Nganmeni, editor at the Pan African Medical Journal in charge of legal affairs and focal person of the journal with Crossref, a journal based in Cameroon, Africa. I am very glad to be part of Crossref ambassadors where I will learn more about Crossref services, benefit of diverses training and help Crossref with my French skills.\nJe suis Audrey Kenni Nganmeni, éditrice au Pan African Medical Journal en charge des affaires juridiques et point focal de la revue avec Crossref. Je suis basée au Cameroun en Afrique. Je suis très heureuse de faire partie des ambassadeurs Crossref où je pourrai en savoir plus sur les services Crossref, bénéficier de diverses formations et aider Crossref avec mes compétences en français. Egypt - Ahmed Moustafa I’m an Academic Publishing professional who has wide expertise in the Scholarly Publishing industry. I have a B.Sc. in Chemistry/Geology from Cairo University in Egypt, and now I’m the Production Manager at Knowledge E in the UAE. I started my career by joining the leading open access publisher, Hindawi Publishing Corporation. Afterwards, I had an enriching journey at Andromeda Publishing and Academic Services as an Executive Director. I support openness and integrity of research, as well as integration and collaboration between the different societies of the publishing community. I’m a member of the Creative Commons Global Network and of the Industry Partnership Committee of the International Society of Managing and Technical Editors.\nأحمد مصطفى، متخصص في النشر الأكاديمي ولديه خبرة واسعة في صناعة النشر أحمد مصطفى، متخصص في النشر الأكاديمي ولديه خبرة واسعة في صناعة النشر العلمي. حصل على درجة البكالوريوس في الكيمياء/الجيولوجيا من جامعة القاهرة في مصر وهو الآن مدير الإنتاج بنولدج اي في الإمارات العربية المتحدة. بدأ حياته المهنية بالانضمام إلى رائد النشر مفتوح الوصول \"غير مقيد الوصول إليه\"، مؤسسة هنداوي للنشر ثم خاض تجربة غنية بشركة اندروميدا للنشر والخدمات الأكاديمية في المملكة المتحدة كمديراً تنفيذياً. يدعم أحمد انفتاح ونزاهة البحث، فضلاً عن التكامل والتعاون بين مختلف مؤسسات مجتمع النشر. وهو عضو في شبكة المشاع الإبداعي العالمية ولجنة الشراكة الصناعية إحدي لجان الجمعية الدولية للمحررين الإداريين والتقنيين. Ghana - Richard Bruce Lamptey Richard Bruce Lamptey is the Librarian of the College of Science Library and a Deputy Librarian in the KNUST Library System. He is knowledgeable in digital libraries, data curation, digital repositories, information management, and open access / open data issues. Very much results-driven, go-getter, follow transformational leadership principles. Has supported national and institutional open access awareness raising and advocacy workshops that have resulted in several open access repositories in the country. 
Through his work, the first open access mandate in the country was introduced by Kwame Nkrumah University of Science and Technology. He holds PhD, MPhil, MA and Diploma (Library and Information Studies). He is very passionate about Knowledge sharing, interested in equity in scholarly communications and research, alternative metrics, grey literature and open access.\nKenya - Mercury Shitindo With over 18 years of management experience, Ms. Shitindo is a researcher and bioethicist. She currently chairs the Africa Bioethics Network, is an editor at the African Journal of Bioethics, serves on the BCA-WA-ETHICS II Project Advisory Board, and is a Technical Expert for Global Impact. She is an alumnus of the WCG IRB International Fellows Program and trains Open Peer Reviewers. She promotes human rights and human dignity in African society through research, writing, and capacity-building. She volunteers at Africans Rising as a Regional Resource Mobilizer and is a Crossref Ambassador. Her research aims to promote equitable access to necessities, education, and health by endorsing ethical research conduct and upholding human rights and dignity.\nAkiwa na uzoefu wa usimamizi wa zaidi ya miaka 18, Bi. Shitindo ni mtafiti na mtaalamu wa maadili. Kwa sasa ni mwenyekiti wa Mtandao wa Maadili ya Kibiolojia Afrika, ni mhariri katika Jarida la Afrika la Maadili ya Kibiolojia, anahudumu katika Bodi ya Ushauri ya Mradi wa BCA-WA-ETHICS II, na ni Mtaalam wa Kiufundi wa Athari za Kiulimwengu. Yeye ni mhitimu wa Mpango wa Kimataifa wa Wenzake wa WCG IRB na hufunza Wakaguzi Huria wa Rika. Anakuza haki za binadamu na utu katika jamii ya Kiafrika kupitia utafiti, uandishi, na kujenga uwezo. Anajitolea katika Africans Rising kama Mhamasishaji wa Rasilimali za Kanda na ni Balozi wa Crossref. Utafiti wake unalenga kukuza upatikanaji sawa wa mahitaji, elimu, na afya kwa kuidhinisha mwenendo wa utafiti wa kimaadili na kuzingatia haki za binadamu na utu.\nSenegal - Oumy Ndiaye Dr. Oumy Ndiaye, a certified gender and ethics expert, is a health economist at Cheikh Anta Diop University, Dakar. Her areas of expertise include health financing, inequalities in access to care, sexual and reproductive health, and children’s health. She has led numerous projects and programs in this field, and has collaborated with the United Nations and the international NGO FIND as an international consultant. Dr. Ndiaye has published several articles and a significant work on demographic dividend and development. Currently, she is spearheading a project on violations of the rights of domestic workers in French-speaking West Africa.\nLe Dr Oumy Ndiaye, experte certifiée en genre et en éthique, est économiste de la santé à l\u0026rsquo;Université Cheikh Anta Diop de Dakar. Ses domaines d’expertise incluent le financement de la santé, les inégalités d’accès aux soins, la santé sexuelle et reproductive et la santé des enfants. Elle a dirigé de nombreux projets et programmes dans ce domaine, et a collaboré avec les Nations Unies et l\u0026rsquo;ONG internationale FIND en tant que consultante internationale. Le Dr Ndiaye a publié plusieurs articles et un ouvrage important sur le dividende démographique et le développement. 
Actuellement, elle mène un projet sur les violations des droits des travailleuses domestiques en Afrique de l’Ouest francophone.\nSouth Africa - Sidney Engelbrecht Sidney is a Senior Research Compliance Specialist at King Abdullah University of Science and Technology in Saudi Arabia with 15 years of experience in research ethics and integrity. He is an accredited Research Management Professional by the International Professional Recognition Council. He is the recipient of the Award for Distinguished Contribution to the Research Management Profession and co-recipient of the Anderson-Kleinert Diversity Award. He is a Research Group Fellow (with distinction) of the Center for AI and Digital Policy (US) and participates in the EdSafe Catalyst Fellowship Programme. He is currently pursuing a PhD in AI Ethics.\nSidney is \u0026rsquo;n Senior Navorsingsnakomingsspesialis by King Abdullah Universiteit van Wetenskap en Tegnologie in Saoedi-Arabië met 15 jaar ondervinding in navorsingsetiek en integriteit. Hy is \u0026rsquo;n geakkrediteerde Navorsingsbestuursprofessiel deur die International Professional Recognition Council. Hy is die ontvanger van die Toekenning vir Uitnemende Bydrae tot die Navorsingsbestuursberoep en mede-ontvanger van die Anderson-Kleinert Diversiteitstoekenning. Hy is \u0026rsquo;n Navorsingsgroepgenoot (met lof) van die Sentrum vir KI en Digitale Beleid (VS) en neem deel aan die EdSafe Catalyst Fellowship-program. Hy is tans besig met \u0026rsquo;n PhD in KI-etiek.\nTanzania - Baraka Manjale Ngussa Baraka Manjale NGUSSA holds a PhD in Education (Curriculum and Teaching) from the University of Eastern Africa Baraton, Kenya. He is an experienced educator, researcher, and administrator in higher learning institutions particularly at the University of Arusha in Tanzania where he currently serves as the Director of Human Resources and Administration. He has been the President for Tanzania Adventist Authors and Writers Association (TAAWA) since 2019. He has a wide experience in teaching, publication and supervision and has authored over 60 publications including books, journal articles, encyclopedia sections, and book chapters. Baraka is the founder, CEO and Chief Editor of the East African Journal of Education and Social Sciences (EAJESS) which is indexed by African Journals Online (AJOL). His research areas include curriculum and teaching, educational management, and leadership. As Crossref Ambassador, Baraka’s passion is to provide his expertise in supporting academic journals in Africa to acquire and maintain high quality standards.\nBaraka Manjale NGUSSA ana Shahada ya Uzamivu katika Elimu (Mitaala na Ufundishaji) ya Chuo Kikuu cha Afrika Mashariki, Baraton kilichoko Kenya. Baraka ana uzoefu mwingi katika ufundishaji, utafiti na uongozi katika Elimu ya Juu hasa Katika Chuo Kikuu cha Arusha ambako kwa sasa ni Mkurugenzi wa Rasilimali watu na Utawala. Amekuwa Rais wa Chama cha Watunzi wa Waandhishi wa Kiadventista nchini Tanzania tangu mwaka 2019. Ana uzoefu mpana katika kufundisha, uandhishi na usimamizi wa tafiti, na ana machapisho zaidi ya 60 ikiwa ni pamoja na vitabu, sura za vitabu pamoja na makala mbalimbali. Baraka ni muasisi na mhariri mkuu wa jarida la Afrika Mashariki la Elimu na Sayansi Jamii ambalo limewekwa katika African Journals Online (AJOL). Maeneo yake ya utafiti ni katika mitaala na ufundishaji, utawala wa elimu pamoja na uongozi. 
Akiwa Balozi wa Crossref, lengo lake ni kutoa uzoefu wa kitaalamu ili kuwezesha majarida yaliyoko Afrika Kushiriki katika mtandao wa Crossref wa kiulimwengu wa majarida ya kitaaluma.\n", "headings": ["Algeria - Younes Saaid","Cameroon - Audrey Kenni Nganmeni","Egypt - Ahmed Moustafa","Ghana - Richard Bruce Lamptey","Kenya - Mercury Shitindo","Senegal - Oumy Ndiaye","South Africa - Sidney Engelbrecht","Tanzania - Baraka Manjale Ngussa"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/our-ambassadors/americas/", "title": "Meet our ambassadors in Americas", "subtitle":"", "rank": 4, "lastmod": "2023-08-01", "lastmod_ts": 1690848000, "section": "Get involved", "tags": [], "description": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.", "content": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.\nSee who is in the Americas:\nArgentina - Sandra Gisela Martín Sandra Gisela Martín holds a PhD in Library Science and Documentation, a Master\u0026rsquo;s degree in Digital Documentation, a Bachelor\u0026rsquo;s degree in Computer Science and a Bachelor\u0026rsquo;s degree in Library Science and Documentation. Since 2003 she has been Director of the Library System of the Catholic University of Córdoba. She teaches in the Bachelor\u0026rsquo;s Degree in Library Science at the National University of Córdoba, Argentina. She also teaches in doctoral, master\u0026rsquo;s and specialization courses on information technology, information search and retrieval, scientific production, scientific journals, citations and bibliographic references in academic writing. She is a member of the IFLA Bibliography Section, the OCLC Global Council and the Editorial Board of re3data. Author of several scientific articles and numerous conference presentations.\nSandra Gisela Martín es Doctora en Bibliotecología y Documentación, Máster en Documentación Digital, Licenciada en Informática y Licenciada en Bibliotecología y Documentación. Desde el año 2003 es dirige del Sistema de Bibliotecas de la Universidad Católica de Córdoba. Ejerce la docencia en la Licenciatura en Bibliotecología de la Universidad Nacional de Córdoba, Argentina. 
Además, es profesora en carreras de doctorado, maestrías y especializaciones sobre temas de tecnologías de la información, búsquedas y recuperación de la información, producción científica, revistas científicas, citas y referencias bibliográficas en la escritura académica. Es miembro de la Sección Bibliografía de la IFLA, del Consejo Global de OCLC y del Consejo Editorial de re3data. Autora de diversos artículos científicos y con numerosas presentaciones a congresos.\nBrazil - Edilson Damasio Edilson Damasio has been a librarian since 1995, holding a PhD in Information Science from the Federal University of Rio de Janeiro-UFRJ/IBICT. He currently works in the Department of Mathematics Library of State University of Maringá-UEM, Brazil. With 20 years’ experience in scientific metadata and publishing, his expertise is wide-ranging, including knowledge of scientific communication, Crossref services, research integrity, misconduct prevention in science, publishing in Latin America, biomedical information, OJS-Open Journal Systems, Open Access journals, scientific journal quality and indexing, and scientific bibliographical databases. He is enthusiastic about presenting and disseminating information about Crossref services to his community in Brazil and working within the wider community, exchanging ideas and experience. You can contact Edilson via Twitter @edilsondamasio or on LinkedIn.\nEu sou bibliotecário desde 1995, Doutor em Ciência da Informação pela Universidade Federal do Rio de Janeiro-UFRJ/convênio IBICT. Eu trabalho na Biblioteca do Departamento de Matemática da Universidade Estadual de Maringá-UEM. Com 20 anos de experiência em metadados científicos e editoração, entre outros. Meus conhecimentos são diversos sobre comunicação científica, cientometria, metadados XML, serviços Crossref, integridade em pesquisa, prevenção de más condutas na ciência, editoração, editoração na América Latina, informação biomédica, OJS-Open Journal Systems, revistas de Acesso Aberto, qualidade de periódicos científicos e indexação, bases de dados bibliográficas. Gosto de disseminar meu conhecimento a outras regiões e pessoas e de trabalhar em comunidade junto as instituições e outros países, de planejar novas apresentações, de trocar experiências como palestrante ou convidado e trabalhar na disseminação do conhecimento para todos.\nBrazil - Bruna Erlandsson Bruna graduated in 2012 in Editorial Production and since then has been actively involved in scientific publication. As co-owner of the company Linceu Editorial, Bruna has supported Brazilian journals in improving their quality by teaching them the best way of presenting rich metadata. She acted as a facilitator in the ABEC-IBICT-Crossref partnership, supporting hundreds of journals to get their own DOI prefix. She has been elected to join the council of the Brazilian Association of Scientific Editors, as well as the editorial board of the publication ‘Science Editors’ from the Council of Science Editors (CSE), and is currently in the final stage of the CSE Publication Certificate Program. She expects, as an Ambassador, to improve dissemination of knowledge and teaching on how to explore and use all the tools offered by Crossref.\nBruna graduou-se em 2012 em Produção Editorial e, desde então, tem se envolvido em publicação científica de forma bastante atuante. Sócia-proprietária da empresa Linceu Editorial, tem apoiado periódicos na melhoria da qualidade por meio da adoção de padrões internacionais para apresentação de metadados.
Atuou como facilitadora na parceria ABEC-IBICT-Crossref, a qual possibilitou a atribuição de DOI a centenas de periódicos no Brasil. Faz parte do Comitê Editorial da revista Science Editors publicada pelo Council of Science Editors (CSE) e é membro do conselho deliberativo da Associação Brasileira de Editores Científicos (ABEC). Participa também do CSE Publication Certificate Program, encontrando-se na fase final da certificação. Espera, como embaixadora, disseminar o conhecimento e ensinar como explorar e usar todas as ferramentas oferecidas pela Crossref.\nColombia - Nicolás Mejía Torres Nicolás Mejía Torres is a professional social communicator, who specializes in editorial production and bibliometrics. Since 2020 he has been an associate editor for Palabra Clave, a social science communications journal. He has been working with the Universidad de La Sabana as Scientific Journal Coordinator, a role that has given him the chance to work with Open Journal Systems (OJS). He has also worked with different high-impact databases for scientific journals, such as Scopus, SciVal, and Web of Science, and has been using Crossref as a tool to improve the value of the metadata of all publications of his institution. He likes to discover new technologies and tools, useful for both his life and work. He is an amateur enthusiast for bibliometric data and wants to use it to explore new paths for the journals he manages and the disciplines they impact.\nNicolás Mejía Torres es profesional en comunicación social. Se ha especializado en producción editorial y bibliometría. Desde 2020 es editor asociado de Palabra Clave, una revista de comunicación en ciencias sociales. Trabaja con la Universidad de La Sabana como el coordinador de revistas científicas. En ese rol ha podido trabajar con Open Journal Systems (OJS); también, trabaja con bases de datos de alto impacto para revistas científicas, como Scopus, SciVal y Web of Science. Se han involucrado con Crossref como una herramienta que mejora el valor de los metadatos de las publicaciones en su institución. Le gusta descubrir nuevas tecnologías y herramientas que le sirvan en su vida y trabajo. Es un entusiasta por la bibliometría y le gusta explorar con datos nuevos caminos y comportamientos de revistas científicas de su institución y las disciplinas que impactan con lo que publican.\nColombia - Arley Soto Arley Soto is a professional in Information systems, Librarianship and Archives. He is also the co-founder of BITECA Ltda, a company that provides services for libraries and publishers since 2006, and his current role as innovation manager is to explore new products, services and tools that contribute to enhance the scholarly communication processes in Latin America. He is constantly exploring new technologies, methodologies and practices that improve the quality and dissemination of scholarly content, like new functionality for OJS, OCS, OMP, XML, DOI, ORCID, Crossmark and other editorial activities. He has also completed The European Master in Digital Libraries from the University of Oslo and Akershus, Tallin University and University of Pharma in 2016, with a thesis about the digital preservation of Colombian academic journals. Currently he is exploring the field of Web Archiving as well as Mobile Human-Computer Interaction in Scholarly Contexts.\nArley Soto es profesional en Sistemas de información, bibliotecología y archivística. 
En 2016, completó el Máster Europeo en Bibliotecas Digitales de la Universidad de Oslo y Akershus, la Universidad de Tallin y la Universidad de Pharma, con una investigación sobre la preservación digital para revistas académicas colombianas. Es cofundador de BITECA Ltda., una compañía que brinda servicios a bibliotecas y editoriales desde el año 2006. Su rol actual como gerente de innovación es explorar y gestionar nuevos productos, servicios y herramientas que contribuyan a mejorar los procesos de comunicación académica en América Latina. Estudia y examina nuevas tecnologías, metodologías y herramientas que permitan mejorar la calidad y difusión del contenido académico, tales como las funcionalidades para OJS, OCS, OMP, XML, DOI, ORCID, Crossmark y otras actividades editoriales. Adicionalmente, se encuentra interesado en analizar temas relacionados con la preservación de sitios web y la interacción con dispositivos móviles en contextos académicos.\nColombia - Juan Felipe Vargas Martínez Juan Felipe Vargas Martínez, Systems Engineer and Senior Management Specialist, has worked for more than 10 years contributing to the editorial work of scientific publications. Previously, he was component coordinator of ‘Sistema Nacional de Acceso Abierto al Conocimiento’ (National System of Open Access to Knowledge) in Colombia and has served as assistant and editorial advisor for various scientific publications in Colombia, Ecuador and Spain. Currently, Juan Felipe works as Co-Founder and Director of Journals \u0026amp; Authors, a company that supports scientific publications in improving their editorial quality and scientific dissemination through the optimization of editorial processes. Journals \u0026amp; Authors holds a regional meeting of academic journal editors, through which it provides training in editorial work and generates an integrated space within academia to address and discuss the new challenges of scientific publishing. Juan Felipe also acts as coordinator of journal management processes in Open Journals System, metadata deposit in Crossref, Crossmark, and databases.\nJuan Felipe Vargas Martínez, Ingeniero de Sistemas, Especialista en Alta Gerencia, ha trabajado por más de 10 años contribuyendo a la labor editorial de las publicaciones científicas. Fue coordinador de componente del Sistema Nacional de Acceso Abierto al Conocimiento (Colombia) y se ha desempeñado como asistente y asesor editorial para diversas publicaciones científicas en Colombia, Ecuador y España. Es cofundador y actualmente director de Journals \u0026amp; Authors, empresa que por más de 5 años viene apoyando las publicaciones científicas en el mejoramiento de la calidad editorial y la difusión científica a través de la creación de metodologías que permitan la optimización de los procesos editoriales. Journals \u0026amp; Authors realiza un encuentro regional de editores de revistas académicas, a través del cual busca capacitar en la labor editorial y generar espacios de integración entre la academia para abordar y discutir los nuevos retos de la edición científica. Coordinador de procesos de gestión de revistas en Open Journals System, depósito de metadatos en Crossref, Crossmark y bases de datos.\nMexico - Amanda Falcone Amanda Falcone is an editor and translator. She holds a B.A. in English Studies and is particularly interested in literature and postcolonial theory.
She also holds a degree in Reading Promotion—keen to know the reading habits of university students, she carried out a research stay on digital reading at the University of Salamanca, Spain. Since 2014 she has worked for the Publishing House of the University of Veracruz in the acquisition of foreign rights and recently, she has focused on the display of scientific works in open access. She speaks Spanish, English, and Italian.\nAmanda Falcone es editora y traductora. Estudió la licenciatura en Lengua Inglesa con interés particular por la literatura y la teoría poscolonial. También tiene una especialización en Promoción de la Lectura, motivada por conocer los hábitos lectores de los estudiantes universitarios, lo que la llevó a realizar una estancia de investigación sobre lectura digital en la Universidad de Salamanca, España. Desde 2014 ha trabajado para la Editorial de la Universidad Veracruzana en la gestión de derechos de autor y recientemente se ha enfocado en la puesta a disposición de las obras científicas en acceso abierto. Habla español, inglés e italiano.\nMexico - Guillermo Chávez Guillermo Chávez, Deputy Director of Academic Journals and Digital Publications at UNAM, oversees strategies to bolster academic and scientific publications. He has led multiple projects on digitization, repositories, and information systems. With over 15 years of experience, he\u0026rsquo;s an advocate for open access publishing, the use of open-source platforms like OJS, OMP, and DSPACE, and the adoption of interoperable standards. He also coordinates the LATINDEX System and teaches at UNAM\u0026rsquo;s School of Librarianship.\nGuillermo Chávez, subdirector de Revistas Académicas y Publicaciones Digitales de la UNAM, supervisa las estrategias para impulsar las publicaciones académicas y científicas. Ha liderado múltiples proyectos sobre digitalización, repositorios y sistemas de información. Con más de 15 años de experiencia, es un defensor de la publicación de acceso abierto, el uso de plataformas de código abierto como OJS, OMP y DSPACE, y la adopción de estándares interoperables. También coordina el Sistema LATINDEX y es docente en la Escuela de Biblioteconomía de la UNAM.\nMexico - Maria Ramos-Escamilla Dr. Maria Ramos-Escamilla, a PhD in Economics from the Instituto Politécnico Nacional, has trained over 5000 postgraduates worldwide and authored over 200 works in international economics and fractal modelling. She has participated in numerous international research groups, earning recognition in economics and finance. With 25 years of experience, she has edited over 100 indexed journals across continents and is currently the General Director of ECORFAN-MEXICO, S.C. At Crossref, her mission is to aid researchers in globally connected written science.\nLa Dra. Maria Ramos-Escamilla, Doctora en Economía del Instituto Politécnico Nacional, ha capacitado a más de 5000 posgraduados en todo el mundo y es autora de más de 200 trabajos en economía internacional y modelado fractal. Ha participado en numerosos grupos de investigación internacionales, obteniendo reconocimiento en economía y finanzas. Con 25 años de experiencia, ha editado más de 100 revistas indexadas en todos los continentes y actualmente es la Directora General de ECORFAN-MEXICO, S.C. En Crossref, su misión es ayudar a los investigadores en la ciencia escrita conectada globalmente.\nUSA - Lauren Lissaris Lauren Lissaris has dedicated much of her career to the dissemination of valuable content on a robust platform. 
She takes pride in her achievements as the Digital Content Manager at JSTOR. JSTOR provides access to more than 10 million academic journal articles, books, and primary sources in 75 disciplines. JSTOR is part of ITHAKA, a not-for-profit organization helping the academic community use digital technologies to preserve the scholarly record and to advance research and teaching in sustainable ways.\nLauren successfully works with all aspects of journal content to effectively assist publishers with their digital content. This includes everything from XML markup, content registration/multiple resolution, and HTML website updates. Lauren has been involved in hosting current content on JSTOR since the program\u0026rsquo;s launch in 2010. She continues to collaborate with organizations to successfully contribute to the evolution of digital content. The natural spread from journals to books has set Lauren up for developing and planning the book Content Registration program for JSTOR. She is a member of the Crossref Books Advisory Group and she helped successfully pilot Crossref\u0026rsquo;s new Co-access book deposit feature.\n", "headings": ["Argentina - Sandra Gisela Martín","Brazil - Edilson Damasio","Brazil - Bruna Erlandsson","Colombia - Nicolás Mejía Torres","Colombia - Arley Soto","Colombia - Juan Felipe Vargas Martínez","Mexico - Amanda Falcone","Mexico - Guillermo Chávez","Mexico - Maria Ramos-Escamilla","USA - Lauren Lissaris"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/our-ambassadors/asia/", "title": "Meet our ambassadors in Asia", "subtitle":"", "rank": 4, "lastmod": "2023-08-01", "lastmod_ts": 1690848000, "section": "Get involved", "tags": [], "description": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.", "content": "\nThe Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.\nSee who is in Asia:\nAzerbaijan - Iltifat Ibrahimov Iltifat Ibrahimov, is a leading specialist for the core applications and integrations at ADA University in Azerbaijan. He also works as senior consultant for digitization and library technologies at Azerbaijan Technical University. 
Iltifat also represent Follett School Solutions on Destiny Library Manager system in Azerbaijan exclusively and provides consultation and support for local institutions. Iltifat works as volunteer actively in EIFL.net as country coordinator and licensing coordinator in Azerbaijan. In addition, he was the Open Access coordinator of EIFL.net previously. He has over ten years’ experience in library management systems, RIFD systems, Open Access initiatives, digital repository systems, digitization, web design, academic databases, and discovery search tools. Iltifat also organizes workshops and seminars to disseminate OA initiatives, help digitization, build digital repositories, and to assist academic publishing in OA journals. In his new appointment as Crossref Ambassador in Azerbaijan he is ready to help organizations and researchers to provide the necessary information and consultation on how to use and benefit from Crossref services.\nİltifat İbrahimov, Azərbaycanda ADA Universitetində əsas tətbiqlər və inteqrasiyalar üzrə aparıcı mütəxəssisdir. O, həmçinin Azərbaycan Texniki Universitetində rəqəmsallaşdırma və kitabxana texnologiyaları üzrə baş məsləhətçi vəzifəsində çalışır. İltifat həm də Azərbaycanda müstəsna olaraq Destiny Library Manager sistemi üzrə Follett School Solutions şirkətini təmsil edir və yerli qurumlara məsləhət və dəstək verir. İltifat eyni zamanda EIFL.net-də aktiv könüllü olaraq Azərbaycan üzrə ölkə koordinatoru və lisenziyalaşdırma üzrə koordinator kimi çalışır. Bundan əlavə, o, daha öncə EIFL.net-in Open Access üzrə ölkə koordinatoru vəzifəsini icra edirdi. O, kitabxana idarəetmə sistemləri, RFID sistemləri, Open Access (OA) təşəbbüsləri, rəqəmsal repozitariya sistemləri, rəqəmsallaşdırma, veb dizayn, akademik verilənlər bazaları və axtarış sistemləri sahəsində on ildən artıq təcrübəyə malikdir. İltifat həmçinin OA təşəbbüslərini yaymaq, rəqəmsallaşdırma və rəqəmsal repozitariyaların qurulmasına və Open Access jurnallarda akademik məqalə nəşrinə kömək etmək üçün seminarlar verir və təlimlər keçirir. İltifat, Crossref-in Azərbaycandakı Səfiri olaraq bu yeni təyinatında Crossref-in xidmətlərindən istifadə və faydalanmaq haqqında lazımi məlumat və məsləhətlərin verilməsində təşkilatlara və tədqiqatçılara kömək etməyə hazırdır.\nBangladesh - Md Jahangir Alam Dr. Md Jahangir Alam is an academician and researcher currently affiliated with the Department of Japanese Studies, Faculty of Social Sciences, University of Dhaka, Bangladesh. He completed his Ph.D from the Graduate School of International Cooperation Studies, Kobe University, Japan. Dr. Alam’s experiences embrace collaborating with international organizations, especially Japan International Cooperation Agency (JICA), International Labor Organization (ILO), International Organization for Migration (IOM), United Nations Development Programme (UNDP), Hiroshima Peacebuilders Center (HPC), Citizenship \u0026amp; Immigration Canada (CIC) and the Japan Foundation. In 2019, Dr. Alam received several international awards for his outstanding research contributions to international education development. Dr. Alam is enthusiastic about disseminating knowledge and transmitting familiarity with Crossref services to his community in Bangladesh. He supports journals and publishers in joining Crossref and using the tools and services offered by Crossref. Dr. Alam speaks Bengali, English, and Japanese.\nড. মো. 
জাহাঙ্গীর আলম একজন শিক্ষাবিদ ও গবেষক। বর্তমানে তিনি ঢাকা বিশ্ববিদ্যালয়ের সামাজিক বিজ্ঞান অনুষদের জাপানিজ স্টাডিজ বিভাগে শিক্ষকতা করছেন। তিনি জাপানের কোবে বিশ্ববিদ্যালয়ের গ্র্যাজুয়েট স্কুল অব ইন্টারন্যাশনাল কো-অপারেশন স্টাডিজ থেকে পিএইচডি সম্পন্ন করেছেন। ড. আলম বিভিন্ন আন্তর্জাতিক সংস্থা- বিশেষ করে জাপান ইন্টারন্যাশনাল কো-অপারেশন এজেন্সি (জাইকা), ইন্টারন্যাশনাল লেবার অর্গানাইজেশন (আইএলও), ইন্টারন্যাশনাল অর্গানাইজেশন ফর মাইগ্রেশন (আইওএম), ইউনাইটেড নেশনস ডেভেলপমেন্ট প্রোগ্রাম (ইউএনডিপি), হিরোশিমা পিসবিল্ডার্স সেন্টার (এইচপিসি), সিটিজেনশিপ, ইমিগ্রেশন কানাডা (সিআইসি) এবং জাপান ফাউন্ডেশনের সাথে কাজ করেছেন। তিনি আন্তর্জাতিক শিক্ষা উন্নয়নে তাঁর অসামান্য গবেষণা অবদানের জন্য ২০১৯ সালে বেশ কয়েকটি আন্তর্জাতিক পুরস্কার লাভ করেছেন। জ্ঞানের আলো ছড়াতে এবং বাংলাদেশে নিজ সমাজের কাছে ক্রসরেফ পরিষেবাগুলির পরিচিতি প্রেরণে উৎসাহী ড. আলম। তিনি জার্নাল এবং প্রকাশকদের ক্রসরেফে যোগদান করতে ও ক্রসরেফের দেওয়া সরঞ্জাম এবং পরিষেবাগুলি ব্যবহার করতে সহায়তা করেন। তিনি বাংলা, ইংরেজি এবং জাপানি ভাষায় পারদর্শী। ড.\nBangladesh - Shaharima Parvin Shaharima Parvin is Assistant Librarian at East West University, Dhaka, Bangladesh with more than 10 years’ experience in information science and library management. Her current role includes managing the acquisitions life cycle of electronic resources including subscriptions, access, troubleshooting, usage analysis, and budgeting. She obtained a BA and MA in Information Science and Library Management from University of Dhaka, Bangladesh. Shaharima is an independent researcher with interests in Open Access, Open Education, Creative Commons, the Open Science Framework and Open Data. She has held numerous diverse positions including SIG-USE Recruitment/Membership Officer of Association for Information Science and Technology (ASIS\u0026amp;T), Country Ambassador of The Center for Open Science, USA, Country Ambassador of CORE, UK, and Country Ambassador of International Librarians Network (ILN). Shaharima is keen to promote the benefits of Crossref services not only among her university community but also among library and information science (LIS) professionals in her country. She is enthusiastic about travel, reading and writing. 
She loves working with diverse groups of people and appreciates taking on new challenges and exploring unique experiences.\nশাহারিমা পারভীন বর্তমানে ইস্ট ওয়েস্ট ইউনিভার্সিটি, ঢাকা, বাংলাদেশের সহকারী গ্রন্থাগারিক হিসেবে কর্মরত। তথ্য বিজ্ঞান এবং গ্রন্থাগার ব্যবস্থাপনা বিষয়ে তার দশ বছরেরও বেশি অভিজ্ঞতা রয়েছে। সহকারী গ্রন্থাগারিক হিসাবে তার বর্তমান দায়িত্ব হল ইলেকট্রনিক রিসোর্স নির্বাচন, সাবস্ক্রিপশন, অ্যাক্সেস, ট্রাবলশুটিং, ব্যবহার বিশ্লেষণ এবং বাজেট সহ যাবতীয় বিষয় তদারকি করা। তিনি ঢাকা বিশ্ববিদ্যালয় থেকে তথ্য বিজ্ঞান এবং গ্রন্থাগার ব্যবস্থাপনা বিষয়ে স্নাতক এবং স্নাতকোত্তর ডিগ্রি অর্জন করেন। শাহারিমা একজন স্বাধীন গবেষক যার আগ্রহ ওপেন অ্যাক্সেস, ওপেন এডুকেশন, ক্রিয়েটিভ কমন্স, ওপেন সায়েন্স ফ্রেমওয়ার্ক এবং ওপেন ডেটা। তিনি SIG-USE মেম্বারশিপ অফিসার, অ্যাসোসিয়েশন ফর ইনফরমেশন সায়েন্স অ্যান্ড টেকনোলজি (ASIS\u0026amp;T), কান্ট্রি অ্যাম্বাসেডর, দ্য সেন্টার ফর ওপেন সায়েন্স, মার্কিন যুক্তরাষ্ট্র, কান্ট্রি অ্যাম্বাসেডর, CORE, যুক্তরাজ্য এবং ইন্টারন্যাশনাল লাইব্রেরিয়ান নেটওয়ার্কের কান্ট্রি অ্যাম্বাসেডর সহ অসংখ্য বিভিন্ন পদে অধিষ্ঠিত হয়েছেন। শাহরিমা কেবল তার বিশ্ববিদ্যালয় সম্প্রদায়ের মধ্যেই নয়, তার দেশের গ্রন্থাগার এবং তথ্য বিজ্ঞান (LIS) পেশাদারদের মধ্যেও Crossref এর পরিষেবার সুবিধাগুলি প্রচার করতে আগ্রহী। তিনি ভ্রমণ, পড়া এবং লেখার বিষয়ে উত্সাহী। তিনি বিভিন্ন গোষ্ঠীর মানুষের সাথে কাজ করতে পছন্দ করেন, নতুন চ্যালেঞ্জ গ্রহণ এবং অনন্য অভিজ্ঞতা অন্বেষণ করতে পছন্দ করেন।\nChina - Ran Dang Ran Dang is Editorial Director of Atlantis Press, Springer Nature. She is in full management of Atlantis Press imprint within book division to drive the growth of open access conference proceedings within all major STM and HSS disciplines globally, by leading the team and overseeing and managing the strategic and day-to-day publishing activities, in collaboration with learnt societies, institutions in academia and departments within the whole group. She is also location lead of Zhengzhou office in Springer Nature China. Prior to joining Springer Nature, she gained extensive experience in academic publishing, including Managing Director China at Atlantis Press, Senior Managing Editor \u0026amp; Section Leader at MDPI, Publishing Support Manager at Elsevier, and Project/General Manager for MLS Journals. With over ten years’ experience in the STM publishing industry, Ran has successfully managed over 90 international journals, conference proceedings series and maintained strong external relationships in academia and industry. Ran is a passionate Open Access and Open Science advocator, who currently serves as volunteer Associate Editor of DOAJ (Directory of Open Access Journals). Ran would like to disseminate information about the value of Crossref services to her community in China, including how the wider global community can make the best use of metadata in Mandarin. In her spare time, Ran likes walking with her family and her puppy “Max”, as well as volunteering at “Tree Hole Rescue Team” using AI to proactively identify and help potential suicide victims who post messages online asking for help. 
You can contact Ran on LinkedIn.\n党冉女士现任施普林格自然集团旗下Atlantis Press的编辑总监。在图书部,她负责Atlantis Press独立品牌的全方位管理,包括战略部署和合作、团队领导以及日常出版活动。通过与学协会及集团内部各部门的合作,推动全球所有主要STM和HSS学科的开放获取会议论文集的发展。她同时是施普林格-自然中国郑州办公室的负责人。在加入施普林格-自然之前,她在学术出版领域积累了丰富的经验,包括Atlantis Press的中国区董事总经理、MDPI的高级编辑和部门负责人、爱思唯尔的出版经理以及MLS期刊的项目/总经理。在超过十年的STM出版工作经历中,党冉女士,成功管理/合作出版过超过90种国际期刊和会议论文集系列,并在学术界和工业界保持着强大的外部关系。与此同时,作为一名积极的开放获取和开放科学的倡导者,党冉女士目前也在担任DOAJ(开放获取期刊目录)的志愿副主编。党冉女士希望传播有关Crossref的服务对她所在的中国社区的价值的信息,包括更广泛的全球社区如何最好地利用普通话元数据。在业余时间,党冉女士喜欢和家人以及她的拉布拉多犬Max一起散步,并在“树洞救援团”做志愿者工作,利用人工智能技术,主动寻找和帮助那些在网上发布信息寻求帮助的潜在自杀者。你可以在LinkedIn上联系Ran。\nIndia - Anjum Sherasiya Anjum Sherasiya has been Editor-in-Chief of Veterinary World since 2008 and the International Journal of One Health since 2015. With 11 years’ experience in scientific publishing, he has extensive experience in Open Access (OA) and scholarly publishing. He was the first to bring the idea of an OA Veterinary journal to India and made Veterinary World the first open access journal among Veterinary journals of India in 2011. He is enthusiastic about disseminating information about Crossref services to his community and throughout the world. His journals benefit from Crossref services such as Content Registration, Similarity Check, and Cited-By.\nDr. Anjum Sherasiya is a passionate advocate of the importance of plagiarism checking in academia. However, in his experience, few universities have adopted this practise in Southeast Asian countries. In addition to English, Dr. Anjum Sherasiya speaks Hindi and Gujarati. He is from Wankaner, Gujarat, India.\nIndia - Babu Balraj Dr. Babu Balraj had earned his Ph.D. in 2017 from Bharathiar University, India. Later, he worked as a postdoctoral fellow at the Department of Physics at the National Chung Hsing University in Taiwan (2018–2019) and the Department of Electrical Engineering at the National Tsing Hua University in Taiwan (2019–2021). Dr. Babu is the founder of the Asian Research Association and the Publishing Director of IOR Press. In India, he initiated the use of DOI numbers in publishing Tamil scholarly articles and books, and he created awareness among Tamil scholars. Dr. Babu is now extending the same to the other Eighth Schedule Languages of the Indian Constitution. Furthermore, he is enthralled by the Crossref\u0026rsquo;s Crossmark service.\nமுனைவர் பாபு பால்ராஜ் அவர்கள், 2017 ஆம் ஆண்டு இந்தியாவிலுள்ள பாரதியார் பல்கலைக்கழகத்தில் முனைவர் பட்டம் பெற்றவர். பின்னர் தைவானில் அமைந்துள்ள புகழ்பெற்ற கல்வி நிறுவனமான தேசிய சுங் ஹ்சிங் பல்கலைக்கழகத்தின் இயற்பியல் துறையிலும் (2018-2019) தேசிய சிங் ஹுவா பல்கலைக்கழகத்தின் மின் பொறியியல் துறையிலும் (2019- 2021) முதுமுனைவராக பணியாற்றி உள்ளார். தற்போது பதிவு பெற்ற ஆராய்ச்சி நிறுவனமான ஆசிய ஆராய்ச்சி சங்கத்தின் நிறுவனராகவும், ஐ ஓ ஆர் பன்னாட்டுப் பதிப்பகத்தின் வெளியீட்டு இயக்குனராகவும் இருந்துவருகிறார். தமிழாய்வுக் கட்டுரைகள் மற்றும் நூல்வெளியீட்டில் எண்ணிமப் பொருள் அடையாளங்காட்டி (DOI) பயன்பாட்டை நடைமுறைப்படுத்தியதில் குறிப்பிடத்தக்கவர். தமிழ்மொழிக்கு மட்டுமல்லாமல் இந்திய அரசியலமைப்பில் எட்டாவது அட்டவணையில் உள்ள பிற இந்திய மொழிகளுக்கும் இதே முறையை நடைமுறைப்படுத்தும் நோக்கில் செயல்பட்டுக்கொண்டிருக்கிறார். அதுமட்டுமல்லாமல் கிராஸ்ரெப்ஃபரன்ஸ் இன் கிராஸ்மார்க் (Crossref\u0026rsquo;s Crossmark) சேவைகளில் மிகவும் ஆர்வமும் ஈடுபாடும் உள்ளவர்.\nIndia - Sumit Narula Dr. Sumit Narula, Ph.D. in Electronic Media and Conflict Resolution, serves as the Deputy Dean Research at Amity University Madhya Pradesh and heads Amity School of Communication. 
He holds multiple postgraduate degrees, has published two books, edited seven, and written numerous research papers. Notably, one of his books is indexed by Clarivate’s Web of Science. His research focuses on media research, fake news detection, and conflict resolution. He is the Editor in Chief of the SCOPUS indexed Journal of Content Community and Communication and chairs the Centre of Excellence for Detection of Fake News at AUMP. An expert in various research software, he has conducted over 400 workshops on identifying fraudulent academic journals and has launched an app called \u0026lsquo;Amity Quality Journal Checker\u0026rsquo; to combat such fraud. His work has earned recognition both in print media and digital platforms like YouTube and AAJTAK.\nIndia - Sushil Kumar Dr. Sushil Kumar is Deputy Dean of journals and publications at Chitkara University Publications, Chandigarh, India. He has organized several national and international academic events as well as presenting his own research. He is founding editor of the \u0026lsquo;Journal of Nuclear Physics, Material Sciences, Radiation and Applications\u0026rsquo; since 2013. Dr. Sushil set up the Open Journal Systems (OJS) platform for Chitkara University Publications. This role includes managing indexing, formatting, publication policies, membership, digital marketing, SEO of research articles and journal management for all Open Access journals. He is a strong advocate of the Open Access movement and all Chitkara University Publications are available via platinum Open Access. Additionally, Sushil aims to increase awareness about publication and research ethics among the scientific community, along with providing technical support and information about indexing, repositories, preprint archives, Open Access rights and licenses, and persistent Identifiers. Dr. Sushil is also a scientific blogger and has his own YouTube channel, with more than 73,500 subscribers, to share his learning and related information with the wider academic community.\nडॉ0 सुशील कुमार, चितकारा विश्वविधालय प्रकाशन, चंडीगढ़, भारत में शोध पत्रिकाओं और प्रकाशन का कार्यभार संभालते हैं। उन्होंने कई राष्ट्रीय और अंतर्राष्ट्रीय शैक्षणिक कार्यक्रम आयोजित किए हैं, और हिस्सा लिया है । वह 2013 में शुरू किए गए भौतिकी जर्नल के संस्थापक संपादक हैं। डॉ0 सुशील, ने चितकारा विश्वविद्यालय प्रकाशनों के लिए ओपन जर्नल सिस्टम (OJS) की स्थापना की है, और इसकी अनुक्रमण, लेआउट, प्रकाशन नीतियों, सदस्यता, डिजिटल मार्केटिंग का प्रबंधन किया है। सभी खुली पहुंच वाली पत्रिकाओं के लिए शोध लेख और जर्नल मैनेजर की भूमिका वहन कर रहे हैं । वह अनुसंधान पत्रों और पुस्तकों के प्रकाशन के लिए ओपन एक्सेस मॉडल का समर्थन और वकालत करते है। सभी जर्नल प्लैटिनम ओपन एक्सेस जर्नल हैं। इसके अलावा, वह शिक्षा और शोध समुदाय के बीच \u0026ldquo;प्रकाशन और अनुसंधान नैतिकता\u0026rdquo; के बारे में जागरूकता पैदा कर रहै हैं। साथ ही अनुक्रमण, रिपॉजिटरी, प्रीप्रिंट डायरेक्ट्रीज , खुले उपयोग के अधिकार और लाइसेंस, और लगातार पहचानकर्ताओं की तकनीकी जानकारी को युवा शोधार्थियों तक पहुंचाने का कार्य भी कर रहैं है। डॉ0 सुशील एक विज्ञानं ब्लॉगर और यूट्यूब चैनल संभालते है, जिसमें 73500 से अधिक व्यक्तियों के साथ शोध और प्रकाशन संबंधित जानकारी को साझा करते हैं।\nIndonesia - Zulidyana D Rusnalasari Zulidyana D Rusnalasari is a researcher and scientific journal editor. Teaching is not only her primary career but also her hobby. Zulidyana is a strong advocate of Open Science and believes that information, particularly related to public knowledge and science, should be available openly and reliably. 
As a lecturer and trainer, she campaigns for Open Science to her students and trainees. Her interest in this area began in 2010 when she researched the Open Source Community as part of her postgraduate studies in Cultural Studies at the University of Indonesia. Although Zulidyana is a junior lecturer, she works to improve her colleagues\u0026rsquo; knowledge regarding scientific publication literacy and its relation to Open Science Movements. Currently, she is finalizing her dissertation to complete her doctoral degree in Literature and Cultural Education. Zulidyana believes that education is the key to improving human quality of life. Being a Crossref Ambassador supports Zulidyana in her mission to improve scholarly communications among the communities and NGOs that she is involved in. She is also keen to help academic researchers know how to better use Crossref metadata. You can interact with her on her Twitter account @zulidyana.\nZulidyana D. Rusnalasari, peneliti sekaligus editor di beberapa jurnal ilmiah di Indonesia. Sebenarnya, mengajar adalah hobi sekaligus profesi utamanya, sebagai dosen maupun trainer, dia senantiasa mengkampanyekan Sains Terbuka pada mahasiswa maupun peserta pelatihan. Zulidyana optimis dan percaya bahwa informasi terutama yang berkaitan dengan ilmu pengetahuan/sains seharusnya tersedia secara terbuka dan dapat dipercaya. Ketertarikannya pada Sains Terbuka dimulai sejak dia meneliti komunitas Open Source ketika dia menyeleseikan studi S2-nya di Universitas Indonesia dalam bidang Cultural Studies pada tahun 2010. Walaupun masih sebagai dosen junior di kampus, dia selalu berusaha meningkatkan literasi koleganya mengenai literasi publikasi ilmiah dan kaitannya dengan Sains Terbuka. Saat ini, dia sedang menyeleseikan disertasinya untuk menuntaskan pendidikan doktoralnya di bidang Pendidikan Sastra dan Budaya. Meskipun Pendidikan adalah domain baru baginya, namun dia selalu percaya bahwa Pendidikan adalah hal paling esensial untuk meningkatkan kualitas manusia. Anda dapat berinteraksi dengannya di akun twitter-nya @zulidyana.\nIraq - Salwan Abdulateef Dr. Salwan M. Abdulateef currently works as Scientific Research Director, at the University of Anbar. Abdulateef specializes in animal behavior and endocrinology research. He is a member of the Advisory Board of QS World Universities Rankings. Dr. Abdulateef also works as Director of the Academic Journal’s Unit and Managing Editor of the Anbar Journal of Agricultural Sciences and is a member of the Scientific Sobriety Committee for Scientific Research and Publications. Abdulateef has authored more than 44 scientific papers in his field of specialization. He supports journals to join Crossref and expects, as an Ambassador, to improve the dissemination of knowledge and teaching on how to explore and use the tools and services offered by Crossref.\nيعمل الدكتور سلوان محمد عبداللطيف حالياً مديراً للبحث العلمي في جامعة الأنبار. عبد اللطيف متخصص في علم الغدد الصماء وسلوك وفسلجة الحيوان. وهو عضو في المجلس الاستشاري لتصنيفات الجامعات العالمية. يعمل الدكتور عبداللطيف أيضًا كمدير لوحدة المجلة الأكاديمية ومدير التحرير في مجلة الأنبار للعلوم الزراعية وعضوًا في لجنة الامانة العلمية للبحث العلمي والنشر الرصين. قام عبد اللطيف بتأليف أكثر من أربعين بحث علمي في مجال تخصصه. وهو يدعم العديد من المجلات في الحصول على بادئة المعرف الرقمي الخاصة بها ويتوقع، كسفير، تحسين نشر المعرفة والتعليم حول كيفية استكشاف واستخدام جميع الأدوات التي تقدمها كروسريف، من خلال إجراء العديد من ورش العمل التدريبية والمحاضرات والندوات حول كروسريف. 
Japan – Christopher Magor Christopher Magor is a Japan-based science editor with approximately two decades of experience, who specializes in helping researchers for whom English is a secondary language find a global readership for their work. Christopher has worked with scholars from diverse fields, both in Japan and other countries. He is acutely aware of the distinct challenges these authors face, both linguistically and during the publication process. A passionate advocate for open science, Christopher is dedicated to ensuring that language barriers do not impede the dissemination of valuable research.\nクリストファー・メイゴーは、日本に在住し約20年の経験を持つサイエンス・エディターであり、英語が母国語でない研究者が彼らの研究を世界中に広める手助けを専門としています。彼は、日本国内外の幅広い分野の学者と協力してきており、他言語や出版プロセスで直面する特有の課題を深く理解しています。オープンサイエンスの熱心な支持者として、彼は言語の壁が貴重な研究の普及を妨げないように献身的です。\nKazakhstan – Gulzhaina Kuralbaevna Kassymova Ph.D., Dr. in Ed. Gulzhaina Kuralbaevna Kassymova is a teacher in adult education and a foreign-language linguist. She started her career in 2008 as a flight attendant in the aviation industry and then transitioned to the educational sector. Currently, she teaches educational psychology at Abai University and Suleyman Demirel University in Kazakhstan. During her bilateral doctoral studies in Indonesia and Kazakhstan, she published several research articles at the international level within a short period (ORCID ID: 0000-0001-7004-3864). In her spare time, she loves diving and reading scientific investigations. Besides teaching students, she likes working with metadata and is responsible for the Crossref membership of the Institute of Metallurgy and Ore Beneficiation. She is also responsible for promoting Crossref in the Republic of Kazakhstan.\nPh.D., білім беру ғыл. док. Гулжайна Куралбаевна Касымова – ЖОО-да педагог және шет тілі бойынша лингвист. Ол өзінің еңбек жолын 2008 жылы авиация саласында стюардесса болып бастады, содан кейін мансабын білім беру саласына ауыстырды. Қазіргі уақытта ол Қазақстандағы Абай университетінде және Сүлеймен Демирел университетінде білім беру психологиясынан сабақ береді. Ол Индонезияда және Қазақстанда екіжақты докторантурада оқу кезінде қысқа уақыт ішінде халықаралық деңгейде бірнеше ғылыми мақалаларды басып шығара алды, ORCID ID: 0000-0001-7004-3864. Ол бос уақытында суға түсуді және ғылыми зерттеулерді оқығанды ұнатады. Cабақ берумен қатар, ол DOI-мен жұмыс істегенді ұнатады және Crossref-пен Металлургия және кен байыту институтының арасындағы қарым-қатынасқа жауапты. Ол сонымен қатар Қазақстан Республикасында Crossref DOI ілгерілетуіне жауапты.\nMongolia - Gantulga Lkhagva Gantulga Lkhagva has over 19 years\u0026rsquo; experience working in research and academic institutions and public libraries in Mongolia. Gantulga’s work focuses on improving the publication quality of Mongolian scholarly communities, providing advice on scientific publishing processes, and promoting Mongolian knowledge in the digital age. Gantulga is the Founder and CEO of Mongolian Digital Knowledge Solutions (MDKS), LLC, and MongoliaJOL. He manages Mongolia Journals Online (MongoliaJOL) – a journal platform which was established through collaboration with INASP. MongoliaJOL is a Crossref member and also participates in additional services such as Similarity Check.\nGantulga is happy to support the introduction and promotion of Crossref services in Mongolia. He enjoys collaborating with others and exchanging ideas and experiences. 
Gantulga also plans to disseminate knowledge and provide training to authors, journal editors, as well as work with other stakeholders. You can contact Gantulga via Twitter @cybermongol or on LinkedIn\nNepal - Niranjan Koirala Dr. Niranjan Koirala, a Ph.D. holder in Biochemistry (Pharmaceutical) from Sun Moon University in the Republic of Korea, is a highly skilled biochemist. He completed postdoctoral research fellowships at the Universidad Nacional Autonoma de Mexico and the University of Macau in Macau SAR-China. Dr. Koirala specializes in genome-guided mining, gene isolation, cloning, protein purification and natural products drugs discovery. His work has been cited over 2500 times by the international research community, and he has received numerous awards for his research, including the Nepal Bidhya Bhushan “A” medal, Science and Technology Youth Award and Dean’s Choice award.\nडा. निरञ्जन कोइराला, कोरिया गणतन्त्रको सन मुन विश्वविद्यालयबाट जीव रसायन (औषधि विज्ञान) मा पएच.डी. गरेका एक उच्च कुशल बायोकेमिस्ट हुन् । उनले युनिभर्सिडेड नेशनल अटोनोमा डे मेक्सिको र मकाउ SAR-चीनमा रहेको मकाउ विश्वविद्यालयमा पोस्टडक्टोरल अनुसन्धान फेलोशिपहरू पूरा गरे। डा. कोइराला जीनोम निर्देशित खनन, जीन आइसोलेसन, क्लोनिङ, प्रोटिन शुद्धिकरण र प्राकृतिक उत्पादन औषधि खोजमा विशेषज्ञ छन्। उहाँको कामलाई अन्तर्राष्ट्रिय अनुसन्धान समुदायले 2500 पटक बढी उद्धृत गरेको छ, र उहाँले आफ्नो अनुसन्धानका लागि नेपाल विद्या भूषण \u0026ldquo;ए\u0026rdquo; पदक, विज्ञान र प्रविधि युवा पुरस्कार र डीन च्वाइस पुरस्कार सहित धेरै पुरस्कारहरू प्राप्त गर्नुभएको छ।\nPakistan - Amber Osman Amber Osman is a passionate expert in open science and a research enthusiast. Over the last decade, she has been actively involved in different international academic, research \u0026amp; publishing organizations and with the Higher Education Commission (Govt. of Pakistan). She has been an award-winning journal editor for advancing the publishing process by adopting innovative research and publishing solutions. Amber advocates for best practices in open access scholarly content.\nامبر عثمان اوپن سائنس کی ایک پرجوش ماہر اور تحقیق کی دلدادہ ہیں۔ پچھلی دہائی کے دوران، وہ مختلف بین الاقوامی تعلیمی، تحقیقی اور اشاعتی اداروں اور ہائر ایجوکیشن کمیشن (حکومت پاکستان) کے ساتھ سرگرم عمل رہی ہیں۔ وہ اختراعی تحقیق اور اشاعت کے حل کو اپنا کر اشاعت کے عمل کو آگے بڑھانے کے لیے ایک ایوارڈ یافتہ جریدے کی ایڈیٹر رہی ہیں۔ امبر کھلی رسائی کے علمی مواد میں بہترین طریقوں کی وکالت کرتی ہے۔ Pakistan - Muhammad Imtiaz Subhani Dr. Subhani, with a PhD in Financial Econometrics and a Postdoc in open science, is a passionate advocate for open science and ethical initiatives in scholarly communication. He serves as a DOAJ Ambassador, a Crossref Ambassador, and a Director of FORCE11, focusing on promoting open-access publishing, best practices, and liaising with key stakeholders. He\u0026rsquo;s an education committee member of the Society for Scholarly Publishing (SSP), a lead of Creative Commons Pakistan, and has received multiple grants to promote open science. He\u0026rsquo;s a member of various global networks and task forces, a scientific publishing consultant for the Higher Education Commission, Pakistan, and an author of numerous scientific articles. 
He currently holds prominent positions at ILMA University and is an Editor of PLOS ONE.\nڈاکٹر سبحانی، فنانشل اکانومیٹرکس میں پی ایچ ڈی اور اوپن سائنس میں پوسٹ ڈاک کے ساتھ، علمی ابلاغ میں کھلی سائنس اور اخلاقی اقدامات کے پرجوش وکیل ہیں۔ وہ DOAJ سفیر، ایک Crossref سفیر، اور FORCE11 کے ڈائریکٹر کے طور پر کام کرتا ہے، کھلی رسائی کی اشاعت، بہترین طریقوں، اور کلیدی اسٹیک ہولڈرز کے ساتھ رابطہ قائم کرنے پر توجہ مرکوز کرتا ہے۔ وہ سوسائٹی فار اسکالرلی پبلشنگ (SSP) کے ایک تعلیمی کمیٹی کے رکن ہیں، جو Creative Commons Pakistan کی قیادت ہے، اور اوپن سائنس کو فروغ دینے کے لیے متعدد گرانٹس حاصل کر چکے ہیں۔ وہ مختلف عالمی نیٹ ورکس اور ٹاسک فورسز کے رکن ہیں، ہائر ایجوکیشن کمیشن، پاکستان کے لیے سائنسی اشاعت کے مشیر، اور متعدد سائنسی مضامین کے مصنف ہیں۔ وہ فی الحال ILMA یونیورسٹی میں نمایاں عہدوں پر فائز ہیں اور PLOS ONE کے ایڈیٹر ہیں۔ Singapore - Woei Fuh Wong Over a period of three decades, Woei Fuh Wong has gone through a career transformation from a researcher to an engineer and, later, an information specialist. During his 10 years with Web of Science (in Thomson Reuters) since 2004, he actively engaged with researchers and academics in the Asia Pacific region on bibliometric insights and research assessment. In 2015, Woei Fuh started a consulting firm based in Singapore and he founded a research program, Research 123, helping younger researchers with research skill sets. Since then, he has been a strong advocate of research best practices ranging from authoring and publishing to outreach. His consultancy work focuses on research communication, and he has delivered numerous workshops to university researchers on research impact, research storytelling, and research clustering \u0026amp; collaboration. In his recent consulting work, he has helped local journal publishers improve their visibility through a research outreach program for their authors. He holds a Ph.D. in Polymer Science \u0026amp; Technology from the University of Manchester, UK, and an MBA from Louisville University, USA.\n超过三十年的职业生涯,黄伟富博士从研究者转型成为工程师与企业管理者,目前则是资讯科学专家。在汤森路透旗下专售Web of Science部门的十年间,黄博士积极参与亚太地区书目计量与研究评量相关的各项活动。2015年于新加坡成立以研究传播为焦点的顾问团队,并致力推广为年轻研究者建立研究核心能力的计划Research 123,积极提倡从写作、出版、到行销全方位的研究最佳实践。黄博士经常至学术机构进行演讲与工作坊,主题包含研究的社会影响力、研究口语化、协同合作等。在近期的顾问计画中,他协助地区性的期刊出版社利用研究行销计画提高期刊与作者的能见度。黄博士于英国曼彻斯特大学高分子科学取得博士学位,并为美国路易斯维尔大学企管硕士。\nSingapore - Xiaofeng Guo Ms. Xiaofeng Guo boasts over 16 years of expertise in Persistent Identifiers (PIDs). As the Director of the Chinese DOI Registration and Service Centre, she has spearheaded research, software development, and services based on the DOI/Handle system since its launch in 2007. Serving as the sole Crossref Sponsoring Organisation in Mainland China since 2013, the Center actively promotes Crossref DOI services in China and Asia. Recognizing PIDs as fundamental to Open Science infrastructure, Guo is dedicated to advancing PID standards and services. A FREYA Ambassador since 2019, she joined the Executive Committee of DOIF in 2023 and now serves as the first Crossref Sponsoring Organisation in Singapore. Guo\u0026rsquo;s extensive background includes software engineering, data management, scholarly publishing, and advocacy for Open Access and Open Science. 
Connect with her on LinkedIn.\n郭晓峰女士在持久性标识符(PIDs)领域拥有超过16年的专业经验。作为中国DOI注册与服务中心主任,自2007年以来,她一直领导着基于DOI/Handle系统的研究、软件开发和服务推广工作。自2013年以来,该中心作为Crossref在中国大陆的唯一赞助机构,积极在中国和亚洲推广Crossref DOI服务。由于认识到PIDs是开放科学基础设施的基础,郭女士致力于推进PID标准和服务。她在2019年担任FREYA项目大使,并于2023年加入DOIF执行委员会,目前服务于新加坡第一个Crossref赞助机构。郭女士的广泛背景包括软件工程、数据管理、学术出版等,并积极倡导开放获取和开放科学。请在 LinkedIn 上与她联系。\nSouth Korea - Yera Hur My side hustle has transformed me from a full-time medical professor to a flower designer, research ethics lecturer, family therapist, and character traits instructor five years ago. But my involvement as a researcher (Hallym University College of Medicine), board member and reviewer for SCIE level journals continues to this day. During my experience as an associate editor of two major medical education journals in Korea (KJME and JEEHP), I discovered the excellent tools offered by Crossref. In light of how the two journals have benefited from Crossref’s diverse programs, I am looking forward to my role as an ambassador of Crossref. I always find that anything volunteered that does not involve money is always the most fun!\n안녕하세요. N잡러, “꽃만지는 상담사” 허예라입니다. 5년전 경력전환을 하여 전임 의과대학 교수에서 현재는 한림의대 연구원, 연구윤리 강사, SCIE급 저널 심사위원과 편집위원, 플라워디자이너, 가족상담사, 그리고 도형심리 교육강사로 가장 활발히 활동하고 있습니다. Korean Journal of Medical Education의 부편장과 편집위원의 봉사 경력과, 지난 7년간 Journal of Educational Evaluation for Health Professions (JEEHP)의 부편집장으로 일하면서 Crossref를 접하게 되었습니다. JEEHP에서는 Crossref의 모든 서비스를 활용하고 있으며 이로 인해 학술지의 국제화에 큰 혜택을 보고 있습니다. Crossref ambassador 활동을 통해 다양한 프로그램 기획에 참여하고 싶고, 세계적인 네트워크뿐만 아니라, 즐거운 배움과 봉사활동도 함께 기대합니다. 자발적이면서 돈 버는 일이 아닌 것은 언제나 가장 즐거우니까요!\nSri Lanka - Lasith Gunawardena Lasith Gunawardena is a Professor in Information Technology at the University of Sri Jayewardenepura, Sri Lanka. At present, he spearheads the University\u0026rsquo;s Innovation Arm - Invention, Innovation and Venture Creation Council (IIVCC) as it\u0026rsquo;s Co-Chair. He has been a founder steering committee member of the ICTer (International Journal on Advances in ICT for Emerging Regions) Journal and is a Mendeley Advisor. Currently, he has been elected as a fellow of BCS, the Chartered Institute for IT, UK, and a Senior Member of the Institution of Electrical and Electronics Engineers, USA as well as a Senior Member of the Association for Computing Machinery, USA. He also serves in the Researcher Advisory Council of ORCID Inc. and Advisory Board of STEMUp Foundation.\nමහාචාර්ය ලසිත් ගුණවර්ධන, ශ්‍රී ජයවර්ධනපුර විශ්වවිද්‍යාලයේ තොරතුරු තාක්ෂණ අධ්‍යනාංශයේ වර්තමාන අධ්‍යනාංශ ප්‍රධාන ලෙස කටයුතු කරයි . තවද ඔහු එම විශ්ව විද්‍යාලයේ නව නිපැයුම්, නවෝත්පාදන සහ ව්‍යාපාර නිර්මාණ සභාවේ සම සභාපතිවරයා ලෙසද කටයුතු කරයි. ඔහු ICTer (International Journal on Advances in ICT for Emerging Regions) ශාස්ත්‍රීය සඟරාවේ ආරම්භක මෙහෙයුම් කමිටු සාමාජිකයෙකු වන අතර මෙන්ඩේලි (Mendeley) උපදේශකවරයෙකි. දැනට, ඔහු එක්සත් රාජධානියේ තොරතුරු තාක්ෂණ සඳහා වරලත් ආයතනය (BCS) හි ජ්‍යෙෂ්ඨ සාමාජිකයෙකු ලෙසත්, ඇමරිකා එක්සත් ජනපදයේ විදුලි හා ඉලෙක්ට්‍රොනික ඉංජිනේරු ආයතනයේ (IEEE )ජ්‍යෙෂ්ඨ සාමාජිකයෙකු ලෙසත්, ඇමරිකා එක්සත් ජනපදයේ පරිගණක යන්ත්‍රෝපකරණ සංගමයේ (ACM) සාමාජිකයෙකු ලෙසත් තේරී පත් වී ඇත. එසේම ඔහු ORCID Inc. හි පර්යේෂක උපදේශක කවුන්සිලයේ සහ STEMUp පදනමේ උපදේශක මණ්ඩලයේ ද සේවය කරයි.\nTaiwan - Iris Hsu Iris Hsu has acquired an in-depth knowledge of scholarly publishing ecosystem progressively at different stages of her 15-year career. 
With the foundation of a Bachelor\u0026rsquo;s Degree in Library \u0026amp; Information Science from National Taiwan University, she took several leadership roles in iGroup, the leading regional information provider for science and education in Asia Pacific. That has given her the opportunity to collaborate with global premier publishers like ACS (American Chemical Society). Currently Iris Hsu is the key consultant based in Taipei for ies Research, an innovative startup from iGroup helping researchers to improve their research visibility across different disciplines and in alignment with SDG (Sustainable Development Goal) Impacts. She is also the Chief Editor of ACCESS@LibraryLearningSpace.com, Asia’s newspaper on information products and services, sponsored by iGroup.\n徐惠玲女士服務於學術出版相關產業多年, 自臺灣大學圖書資訊學系畢業後, 任職於亞太區規模最大之學術資訊提供商 iGroup, 其間經歷商品開發、行銷、教育訓練等不同職位, 與知名學術出版社如ACS (American Chemical Society, 美國化學會) 等維持著長期的合作關係。目前徐女士在iGroup旗下新創團隊ies Research中擔任諮詢角色, 協助研究者提高其研究能見度, 並結合SDG (Sustainable Development Goal, 聯合國永續發展指標) 以提升研究的社會影響力。徐女士同時亦擔任ACCESS@LibraryLearningSpace.com 的主編, 該網站以報導亞太區圖書資訊與數位出版動態為主要任務\nTurkey - Ahmet Müngen Ahmet A. Müngen is an academic and entrepreneur who holds bachelor’s, master’s, and doctoral degrees in computer science. Since 2020, he has been a founding partner of INSERES, where he develops innovative software for academic networks in the fields of artificial intelligence, data mining, and citation analysis. In the same year (2020), he began serving as an Assistant Professor in the Software Engineering Department at OSTIM Technical University. As the Crossref Ambassador, he provides technical support to researchers and institutions, organizes training sessions and online seminars, and promotes the effective use of Crossref services. His work focuses on AI-driven analyses and knowledge discovery within academic information.\nAhmet A. Müngen, bilgisayar bilimleri alanında lisans, yüksek lisans ve doktora derecelerini tamamlamış bir akademisyen ve girişimcidir. 2020 yılından bu yana INSERES\u0026rsquo;in kurucu ortağı olarak akademik ağlarda yapay zekâ, veri madenciliği ve atıf analizi alanlarında yenilikçi yazılımlar geliştirmektedir. Aynı yıldan (2020) itibaren OSTİM Teknik Üniversitesi\u0026rsquo;nde Yazılım Mühendisliği Bölümü\u0026rsquo;nde Dr. Öğretim Üyesi olarak görev yapmaktadır. Crossref Elçisi sıfatıyla araştırmacılar ve kurumlar için teknik destek sağlamakta, eğitimler ve çevrimiçi seminerler düzenleyerek Crossref hizmetlerinin etkin kullanımını teşvik etmektedir. Çalışmaları, akademik bilgi içinde yapay zekâ analizleri ve bilgi keşfi üzerine odaklanmıştır.\nTurkey - Ramazan Turgut Ramazan Turgut is a faculty member at Mardin Artuklu University in Türkiye. He holds a Ph.D. in the History of Religion and has conducted research at Radboud University and the University of Chicago. As a multilingual scholar fluent in Kurdish, Turkish, English, and Dutch, Ramazan actively collaborates with academics from around the globe. In addition to his role as Crossref Ambassador in Türkiye, Ramazan serves as an Ambassador and Managing Editor for the Directory of Open Access Journals (DOAJ), overseeing activities in Türkiye, Azerbaijan, and the Netherlands. He is also deeply committed to integrating AI technologies into academic research and publishing. Beyond his professional life, Ramazan is a proud father of three, an avid reader, and a passionate cook. 
If he could be a character in Middle Earth, he would have chosen Boromir.\n", "headings": ["Azerbaijan - Iltifat Ibrahimov","Bangladesh - Md Jahangir Alam","Bangladesh - Shaharima Parvin","China - Ran Dang","India - Anjum Sherasiya","India - Babu Balraj","India - Sumit Narula","India - Sushil Kumar","Indonesia - Zulidyana D Rusnalasari","Iraq - Salwan Abdulateef","Japan – Christopher Magor","Kazakhstan – Gulzhaina Kuralbaevna Kassymova","Mongolia - Gantulga Lkhagva","Nepal - Niranjan Koirala","Pakistan - Amber Osman","Pakistan - Muhammad Imtiaz Subhani","Singapore - Woei Fuh Wong","Singapore - Xiaofeng Guo","South Korea - Yera Hur","Sri Lanka - Lasith Gunawardena","Taiwan - Iris Hsu","Turkey - Ahmet Müngen","Turkey - Ramazan Turgut"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/our-ambassadors/europe/", "title": "Meet our ambassadors in Europe", "subtitle":"", "rank": 4, "lastmod": "2023-08-01", "lastmod_ts": 1690848000, "section": "Get involved", "tags": [], "description": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.", "content": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.\nSee who is in Europe:\nBelarus - Alexey Skalaban (currently inactive) Alexey graduated from the Belarusian State University of Culture, specializing in automated library and information systems. From 2009 to August 2017, he served as director of the Scientific Library of the Belarusian National Technical University. Prior to that, he was engaged in the acquisition of electronic resources in the National Library of Belarus. Specialist in the field of creating institutional open access repositories, managing and providing access to scientific electronic information resources, PIDs. Alexey is an author of more than 30 publications in professional journals and collections and trained in the largest libraries in the USA, Poland, Sweden, Germany.\nОкончил Белорусский государственный университет культуры, специализация — автоматизированные библиотечно-информационные системы. С 2009 г. по август 2017 г. занимал должность директора Научной библиотеки Белорусского национального технического университета. 
До этого занимался комплектованием электронных ресурсов в Национальной библиотеке Беларуси. Специалист в области создания институциональных репозиториев открытого доступа, управлению и обеспечению доступа к научным электронным информационным ресурсам, постоянным идентификаторам. Автор более 30 публикаций в профессиональных журналах и сборниках. Стажировался в крупнейших библиотеках США, Польши, Швеции, Германии.\nFrance - Frédéric Lefrançois Frédéric is a researcher/lecturer at the University of the French West Indies. While practising research and lecturing in English Studies and Visual/Performing Arts, he has developed a keen interest in cultural anthropology. Since obtaining his Ph.D. in English Studies, his research has focused on the relationship between aesthetics and social sculpture in a variety of Transamerican diasporic contexts: visual arts, cinema, drama, and performance. He has authored two books and edited three international journal issues. He believes that research accessibility and visibility are key to fostering excellent scientific cooperation, hence his engagement with Crossref.\nFrédéric est chercheur/enseignant à l’Université des Antilles Françaises. Tout en pratiquant la recherche et en enseignant en études anglaises et en arts visuels/du spectacle, il a développé un vif intérêt pour l\u0026rsquo;anthropologie culturelle. Après l\u0026rsquo;obtention de son doctorat. en études anglaises, ses recherches se sont concentrées sur la relation entre l\u0026rsquo;esthétique et la sculpture sociale dans divers contextes diasporiques transaméricains : arts visuels, cinéma, théâtre et performance. Il est l\u0026rsquo;auteur de deux livres et a édité trois numéros internationaux de revues. Il estime que l’accessibilité et la visibilité de la recherche sont essentielles pour favoriser une excellente coopération scientifique, d’où son engagement auprès de Crossref.\nItaly - Eleonora Colangelo An Open Science and academic publishing professional, she obtained her PhD in Classics in 2020. Since January 2023, she has been a Publishing Specialist at Frontiers, contributing to science outreach and research dissemination. Formerly a Project Manager at a leading Italian software house, she played a pioneering role in introducing Crossref services to Italy. Qualified as a Maître de conférences by the French National University Council, she mentors at STM, the University of Pisa, and the Society for Scholarly Publishing. Additionally, she collaborates with the Council of Science Editors as a member of the Education Committee.\nEsperta nel campo della scienza aperta e dell\u0026rsquo;editoria scientifica, ha conseguito il dottorato in Storia Antica nel 2020. Dal gennaio 2023, ricopre il ruolo di Publishing Specialist presso Frontiers, contribuendo attivamente alla divulgazione scientifica e alla diffusione della ricerca. In precedenza, ha svolto un ruolo chiave nella promozione dei servizi di Crossref in Italia come Project Manager presso una nota software house, partner tecnologico globale di importanti università, centri di ricerca e university press italiane. Qualificata come Maître de conférences dal Consiglio Nazionale delle Università francese, è mentore presso STM, l\u0026rsquo;Università di Pisa e la Society for Scholarly Publishing. Inoltre, collabora con il Council of Science Editors come membro del suo Education Committee.\nRomania - Nicoleta-Roxana Dinu Nicoleta-Roxana Dinu holds a PhD in Library and Information Science from the University of Bucharest. 
She works at the National Library of Romania, Institutional Development Department. She has been responsible for international relations for the Profesional de la información Journal and the webmaster of the mentioned journal. She is currently editor of the Infonomy Journal (Spain), Scientific advisor of the Journal of Creative Industries and Cultural Studies (JOCIS), Portugal and member of the Advisory Board of the Central European Library and Information Science Review (CeLISR) Journal, Hungary. She is editor of e-LIS for Romania and Moldova. She has published articles on digital information, metadata, trends in scientific journals, use of repositories and social networks.\nNicoleta-Roxana Dinu este doctor în Științele informării și documentării, în cadrul Universității din București. Își desfășoară activitatea la Biblioteca Națională a României, în serviciul Dezvoltare instituțională. A lucrat în Biroul de Relații internaționale al revistei Profesional de la información și a fost webmaster-ul aceleași reviste. În prezent, este editor al revistei Infonomy (Spania), consilier științific al revistei Journal of Creative Industries and Cultural Studies (JOCIS), Portugalia, și membru în Consiliul Consultativ al revistei Central European Library and Information Science Review (CeLISR), Ungaria. Este editor e-Lis pentru România și Republica Moldova. A publicat articole despre informații digitale, metadate, tendințe în revistele științifice, dar și despre utilizarea depozitelor digitale și rețelelor de socializare.\nRussia - Maxim Mitrofanov (currently inactive) Current position \u0026ndash; Not-for-Profit Partnership National Electronic Information Consortium (NP NEICON). NEICON provides all kinds of services for the editors, universities, libraries, etc., such as publishing, consulting, ensuring access to international databases and many more. I was born in Moscow, Russia April 8, 1977, graduated from the University of Foreign Relations and then worked for the Ministry of Foreign Affairs of Russia for nine years. Within that term I spent six years on different positions in Russian Embassies to Canada and then Ghana.\nAfter I left the Ministry in 2007 I worked for the Russian largest Exhibition company, Expocentre and then joined NEICON in 2014. Besides the regular activities in 2015 I was invited to become a DOAJ Ambassador and Associate Editor and I\u0026rsquo;ve held this position since then. Within my activities I was one of the first in Russia who started to promote Crossref services and advise editors about their importance to the editorial process and science information exchange at personal meetings, conferences and seminars. NEICON is a Crossref Sponsor in Russia as well. As a result of the work and the recognition of DOIs by the wide Russian editorial audience I receive a number of applications for Crossref services daily and provide the applicants with the necessary information. Since 2018 Crossref Ambassador in Russia.\nМаксим Митрофанов. В настоящее время Руководитель партнерских программ в Некоммерческом партнерстве Национальный электронно-информационный консорциум (НП НЭИКОН). Основная задача НЭИКОН \u0026ndash; предоставление полного спектра услуг для научных организаций России \u0026ndash; вузов, университетов, библиотек, издательств научной литературы, это и помощь в издании научной периодики, обеспечение подпиской на международные ресурсы, консультации и прочее. Родился в Москве 8 апреля 1977 г. 
Закончил Московский государственный институт международных отношений после чего девять лет работал в МИД России, включая различные позиции в Посольствах России в Канаде и Гане.\nС 2007 г. работал в крупнейшей российской выставочной компании Экспоцентр. Пришел в НЭИКОН в 2014 г. Помимо основной работы с 2015 г. являюсь представителем и редактором DOAJ в России. В 2014 г. в рамках НЭИКОН одним из первых в России начал пропагандировать использование цифровых идентификаторов для научного контента, пояснять правила и технологии использования doi на встречах, в ходе конференций и семинаров. C 2018 г. являюсь представителем Crossref в России.\nRussia - Angela P. Maltseva (currently inactive) As a Doctor of Philosophy, Associate Professor, Chief Researcher, and Professor in the Department of Philosophy and Culturology, Angela Maltseva conducts lectures and seminars in Political Science, Sociology, Sociology of Trust, Modern Political Structure of Western countries, and History and Philosophy of Science. She has been Editor-In-Chief of the scientific journal \u0026ldquo;Volga Region Pedagogical Search\u0026rdquo; since 2017. The journal is interested in the ways and means of creating trusted environments, the social and educational influences on country development, citizen well-being and dignity, and articles about the role of educational institutions and children-adult communities in the accumulation of social capital. Since 2018, her university (UlSPU named after I.N. Ulyanov) has been the first Crossref Sponsor in the Volga region.\nМальцева Анжела Петровна. Будучи доктором философских наук и главным научным сотрудником, Мальцева А.П. в настоящее время читает лекции по социологии, политологии, социологии доверия, современному политическому устройству стран Запада, истории и философии науки. С 2017 года Анжела Петровна является главным редактором научного журнала «Поволжский педагогический поиск», публикующего материалы о доверительных средах и путях их создания, о роли образовательных институтов и взросло-детских сообществ в накоплении социального капитала, о влиянии социальных и образовательных институтов на благополучие граждан, чувство их собственного достоинства. C 2018 года УлГПУ им. И.Н. Ульянова функционирует как первый в Поволжье представитель Crossref и спонсор организаций, осознающих всю важность цифровой идентификации своего контента.\nSerbia - Lazar Stošić Prof. Dr. Lazar Stošić is a university professor at the Faculty of Informatics and Computer Science, University Union—Nikola Tesla, Belgrade, Serbia, and the President of the Association for the Development of Science, Engineering and Education in Serbia. He is also a leading researcher at the Center for Scientific Competence of DSTU, Department of Scientific and Technical Information and Scientific Publications, Don State Technical University, Russia. Lazar has over ten years of experience in scholarly publishing. His expertise includes editorial workflow management, conference organization, web technologies, web design, indexing, XML production, SEO, digital marketing, and new media technologies.\nLazar is Editor in Chief of the International Journal of Cognitive Research in Science, Engineering and Education and a member and reviewer of many international scientific journals, providing technical support for submitting metadata to Crossref and using Crossref tools. 
His main aim as an ambassador is to help organizations and researchers understand the benefits of Crossref membership and the services that Crossref provides.\nProf. dr Lazar Stošić, je univerzitetski profesor na Fakultetu za informatiku i računarstvo, Univerzitet Union - Nikola Tesla, Beograd, Srbija i predsednik Udruženja za razvoj nauke, inženjerstva i obrazovanja u Srbiji. Takođe radi kao vodeći naučni istraživač u Centru za naučne kompetencije DSTU, Odeljenje za naučne i tehničke informacije i naučne publikacije, Donskog državnog tehničkog univerziteta, Rostov na Donu, Rusija. Lazar ima više od deset godina iskustva u naučnom izdavaštvu. Njegova stručnost obuhvata upravljanje procesom rada u izdavaštvu, organizovanje konferencija, veb tehnologije, veb dizajn, indeksiranje, XML produkciju, SEO, digitalni marketing, i nove medijske tehnologije.\nLazar je glavni i odgovorni urednik WoS i Scopus indeksiranog časopisa International Journal of Cognitive Research in Science, Engineering and Education, član i recenzent mnogih međunarodnih naučnih časopisa gde pruža tehničku podršku za dostavljanje metapodataka u Crossref-u kao i za korišćenje drugih alata koje pruža Crossref. Njegov glavni cilj kao ambassadora je da pomogne istraživačkim organizacijama i istraživačima da shvate prednosti članstva u Crossref-u i usluga koje Crossref nudi.\nUkraine - Anna Danilova Anna Danilova, assistant to the Head of subscription agency \u0026ldquo;Ukrinformnauka\u0026rdquo;, is a leading specialist of Publishing House \u0026ldquo;Academperiodyka\u0026rdquo;, founded under the National Academy of Sciences (NAS) of Ukraine. Her primary focus is maintaining online platforms for the scientific journals of NAS of Ukraine; since 2014 she has been providing technical support for submission of the scientific publication metadata to Crossref. Anna takes active part in the organization and holding of conferences, seminars and workshops throughout Ukraine, where all attendees have an opportunity to get detailed information about Digital Object Identifiers (DOIs), additional Crossref services and guidance about submission of scientific publications to the database. She is an author of a number of articles, educational and training materials.\nДанілова Анна - заступник директора передплатного агентства «Укрінформнаука», провідний спеціаліст Видавничого дому «Академперіодика» Національної академії наук України. Сфера її діяльності включає в себе роботу з інтернет-ресурсами наукових періодичних видань НАН України, а з 2014 року – забезпечення технічної підтримки процесів депонування метаданих наукових статей і монографій в базу даних Crossref. Анна бере активну участь в підготовці й проведенні конференцій, семінарів та майстер-класів в різних містах України, на яких усі бажаючі мають можливість отримати детальну інформацію про цифрові ідентифікатори DOI, додаткові сервіси Crossref, підготовку ресурсів наукових видань до депонування. Окрім цього, вона є автором низки статей та навчально-методичних посібників.\n", "headings": ["Belarus - Alexey Skalaban (currently inactive)","France - Frédéric Lefrançois","Italy - Eleonora Colangelo","Romania - Nicoleta-Roxana Dinu","Russia - Maxim Mitrofanov (currently inactive)","Russia - Angela P. 
Maltseva (currently inactive)","Serbia - Lazar Stošić","Ukraine - Anna Danilova"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/our-ambassadors/oceania/", "title": "Meet our ambassadors in Oceania", "subtitle":"", "rank": 4, "lastmod": "2023-08-01", "lastmod_ts": 1690848000, "section": "Get involved", "tags": [], "description": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.", "content": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.\nSee who is in Oceania:\nAustralia - Melroy Almeida Melroy currently works at the Australian Access Federation (AAF) as their ORCID Technical Support Analyst. AAF is the consortium lead for the Australian ORCID Consortium and as part of his day to day work Melroy works with the Australian ORCID Consortium members on their ORCID implementations as well as assists them in planning their communication and engagement strategy. As part of his work with ORCID, Melroy occasionally gets questions about DOIs, metadata and discoverability. \u0026ldquo;My aim is to help research organisations and researchers understand the benefits of PIDs, why it is needed and how it helps within the scholarly research lifecycle\u0026rdquo;. In addition to English, Melroy also speaks Hindi and Marathi. In his spare time after work and family commitments, Melroy can be found playing/coaching football (soccer) or sitting on the couch reading a good book.\n", "headings": ["Australia - Melroy Almeida"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-04-20-technical-community-manager/", "title": "Technical Community Manager", "subtitle":"", "rank": 1, "lastmod": "2023-04-20", "lastmod_ts": 1681948800, "section": "Jobs", "tags": [], "description": "Applications for this position closed May 22nd, 2023. Do you want to help improve research communications in all corners of the globe? 
Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our brand-new Technical Community Manager, working with our API users, service providers, and other metadata integrators.\nLocation: Remote and global (with regular working in European timezones) Salary: Approx.", "content": " Applications for this position closed May 22nd, 2023. Do you want to help improve research communications in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our brand-new Technical Community Manager, working with our API users, service providers, and other metadata integrators.\nLocation: Remote and global (with regular working in European timezones). Salary: Approx. EUR 60,000-72,000 or the local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local benchmarking will take place before the final offer. Reports to: Head of Community Engagement and Communications. See team and org chart. Application timeline: Advertise in April, interviews in May, and offer by end of May/early June\nThe organisations that make up the Crossref community are involved in documenting the process of scholarship and the progress of knowledge. We provide infrastructure to curate, share, and preserve metadata, which is information that underpins and describes all research activities (such as funding, authorship, dissemination, attention, etc., and relationships between these activities). This enables a rich network of relationships and data underpinning scholarship that improves the discoverability of individual works and supports efforts to increase the openness and integrity of research.\nAs the scholarly communications landscape is dynamically changing, the Technical Community Manager’s key responsibility is to engage with tools and organisations that integrate Crossref metadata solutions into their operations – whether these are service providers enabling others to register and maintain their metadata, or organisations making use of our API and metadata – within their own processes, or developing tools and resources that embed them.\nThis is a new role, taking advantage of the progress we’ve seen towards the research nexus vision, supporting the multiple integrations that rely on Crossref, and maximising the awareness of current and future metadata and API developments.\nKey responsibilities Proactively build and maintain relationships with existing and new community integrators. Research and define their needs, involve them with changes to our services, and bring insights to colleagues to help prioritise service improvements and new developments. Design and implement activities to engage all metadata users and grow the usage of our API, ensuring the community is aware of the possibilities of the ‘research nexus’ and can integrate with us in robust and sustainable ways. Create opportunities for testing, co-development, and mutual learning (e.g. through sprints or working groups), working closely with R\u0026amp;D and Product teams. Provide consultation to all metadata users and facilitate advanced support, such as for query efficiency at enterprise levels. Improve and manage documentation and create demonstrations and materials to engage existing and potential metadata users. 
Redesign and grow the Plus program to align with improvements in our (cloud-based) infrastructure, automating the onboarding experience and managing terms of use and service level agreements. Develop our Service Provider program to include all the third-party tools and plugins that our community uses to participate in the Crossref infrastructure. Work with Service Providers such as grant/manuscript/repository platforms and systems to help them understand Crossref’s services, policies, and plans. Explore accreditation options and ensure that information about Service Providers’ offerings is transparent so that the community can assess the different options available to them. Represent Crossref and use the role to bring people together, attending and speaking at relevant community meetings and participating in working groups, hosting workshops and sprints, online and in-person. Build and manage relationships with community partners and collaborators worldwide to help progress Crossref’s mission, especially with other adopters of the Principles of Open Scholarly Infrastructure (POSI). Create content and materials such as writing articles and blogs, managing website content and documentation, creating slides, videos, demos, and diagrams. Contribute to other outreach and communications activities.\nThe role is based within the Community Engagement and Communications team. We work collaboratively across a variety of projects and programmes. We adopt an approachable, community-appropriate tone and style in our communications. We’re looking to re-engage with our community through face-to-face opportunities as well as online, so the post-holder will have their share of travel (in line with our latest thinking on travel and sustainability).\nOur primary aim is to engage colleagues from the member organisations and other stakeholders to be actively involved in capturing documentation of scholarly progress and making it transparent. This contributes to co-creating a robust research nexus. As part of the wider Outreach department at Crossref, we seek to encourage wider adoption and development of best practices in scholarly publishing and communication with regard to metadata and the permanence of the scholarly record. Colleagues across the organisation are helpful, easy-going and supportive, so if you’re open-minded and ready to work as part of the team and across different teams, you will fit right in. Watch the recording of our recent Annual Meeting to learn more about the current conversations in our community.\nAbout you As scientific community engagement is an emerging profession, practical experience in this area is more important to us than traditional qualifications. Also, as this is a new and varied role, the list of requirements is long, but we don’t expect candidates to meet all of them. It’s best if you can demonstrate that you have most of these characteristics:\nA collaborative attitude. Ability to translate complex ideas into accessible narratives in English. Ability to engage technical audiences on topics related to research metadata, including discussing best practices. Experience working with Git, RESTful APIs, JSON metadata, and API interfaces such as Postman. Some basic programming skills, with the ability to write short snippets of code for interacting with APIs and for data manipulation (see the short illustrative snippet further below). Familiarity with interactive development environments like JupyterLab and Google Colab. 
Ability to demonstrate APIs, monitor and interpret usage statistics, and advise on querying in a compelling manner. Demonstrable skills in group facilitation and building strong relationships with communities or customers. Track record of programme development and improvement, working to budget and timelines. Confidence in public speaking in-person and online, including delivery of webinars/workshops. Understanding and commitment to the highest standards of equity, diversity and inclusiveness.\nIt would be a plus if you also have any of the following:\nUnderstanding of research communications operations such as publishing/repository workflows. Data visualisation skills. Technical sales or contract management experience. Experience working in global or multicultural settings. Ability to communicate in languages other than English.\nAbout Crossref Crossref is a non-profit membership organisation that exists to make scholarly communications better. We make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nCrossref is, at its core, a community organisation with 18,000 members across 150 countries. We work with the community to prototype and co-create solutions for broad benefit, and we’re committed to lowering barriers to global participation in the research enterprise. We’re funded by members and subscribers, and we forge deep collaborations with many like-minded partners, especially those who are equally committed to the POSI Principles.\nWhat it’s like working at Crossref We’re about 45 staff and now ‘remote-first’, although we have optional offices in Oxford, UK, and Boston, USA. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe actively support ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThinking of applying? We encourage applications from excellent candidates wherever you might be in the world, especially from people with backgrounds historically under-represented in research and scholarly communications. Our team is fully remote and distributed across time zones and continents. This role will require regular work in European time zones. Our main working language is English, but there are many opportunities in this job to use other tongues if you’re able. If anything here is unclear, please contact Kora Korzec, the hiring manager, at kora@crossref.org.\nPlease apply via this form, which allows us to sort your application materials into neat folders for a faster review. We have provided space in the form for you to describe an example of how you use an API. In your cover letter, please feel free to include some examples of relevant projects that you\u0026rsquo;re proud of, perhaps content you\u0026rsquo;ve created or talks you\u0026rsquo;ve given. 
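For illustration only, here is the kind of short snippet we have in mind when we mention interacting with our REST API: a minimal sketch in Python that fetches a handful of works from the public api.crossref.org endpoint and prints their DOIs and titles. The search terms and the mailto address are placeholders, not requirements of the role.
    # Minimal sketch: query the public Crossref REST API for a few works
    # and print a DOI and title for each result.
    import requests

    params = {
        "query.bibliographic": "open scholarly infrastructure",  # placeholder search terms
        "rows": 5,                     # keep the result set small for a quick demo
        "mailto": "you@example.org",   # placeholder contact; identifies polite-pool usage
    }
    response = requests.get("https://api.crossref.org/works", params=params, timeout=30)
    response.raise_for_status()

    for work in response.json()["message"]["items"]:
        title = work.get("title") or ["(untitled)"]
        print(work["DOI"], "-", title[0])
The same pattern extends to other routes, filters, and facets of the REST API; it is offered here only as a sketch of the sort of snippet an applicant might describe, not as a required exercise.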
We would particularly welcome mentions of collaborative work where you led a group or a community through implementing or improving technical solutions. This is a great way for you to show evidence of your suitability for this role.\nNote that if you don’t meet the majority of the criteria we listed here, but are confident you’d be natural in delivering the key responsibilities of the role, we encourage your interest and would still like to hear what strengths you would bring.\nWe aim to start reviewing applications on May 22nd. Please strive to send us your documents by then.\nThe role will report to Kora Korzec, Head of Community Engagement and Communications at Crossref, and she will review all applications along with Michelle Cancel, our HR Manager, and Ginny Hendricks, Director of Member \u0026amp; Community Outreach.\nWe intend to invite selected candidates to a brief initial call to discuss the role as soon as possible following an initial review. Following those, shortlisted candidates will be invited to an interview taking place in late April. The interview will include some exercises you’ll have a chance to prepare for. All interviews will be held remotely on Zoom.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["Key responsibilities","About you","About Crossref","What it’s like working at Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. 
We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/xml-samples/", "title": "Example XML metadata", "subtitle":"", "rank": 2, "lastmod": "2021-08-27", "lastmod_ts": 1630022400, "section": "Example XML metadata", "tags": [], "description": "Here are some example XML files to help you get started with Content Registration.\nBooks Type Version Input Output Book / monograph 5.3.0 XML JSON (book) JSON (chapter) Book / monograph 4.8.0 XML Book series 5.3.0 XML JSON (series) JSON (book) JSON (chapter) Book set 5.3.0 XML JSON (set) JSON (book) JSON (chapter) Conference Proceedings Type Version Input Output Single proceeding with papers 5.3.0 XML JSON (proceeding) JSON (paper) Single proceeding with papers 4.", "content": "Here are some example XML files to help you get started with Content Registration.\nBooks Type Version Input Output Book / monograph 5.3.0 XML JSON (book) JSON (chapter) Book / monograph 4.8.0 XML Book series 5.3.0 XML JSON (series) JSON (book) JSON (chapter) Book set 5.3.0 XML JSON (set) JSON (book) JSON (chapter) Conference Proceedings Type Version Input Output Single proceeding with papers 5.3.0 XML JSON (proceeding) JSON (paper) Single proceeding with papers 4.8.0 XML JSON (proceeding) JSON (paper) Conference proceeding series 5.3.0 XML JSON (series) JSON (proceeding) JSON (paper) Conference proceeding series 4.8.0 XML JSON (series) JSON (proceeding) JSON (paper) Components Type Version Input Output Component 5.3.0 XML JSON Component 4.8.0 XML JSON Datasets Type Version Input Output Dataset 5.3.0 XML JSON (database) JSON (dataset) Dissertations Type Version Input Output Dissertation 5.3.0 XML JSON Dissertation 4.8.0 XML JSON Grants Type Version Input Output Grant 0.1.1 XML Journals Type Version Input Output Journal with articles 5.3.0 XML JSON (journal title) JSON (article) Journal with articles 4.8.0 XML JSON (journal title) JSON (article) Article with translation 4.8.0 XML JSON Journal title, volume, issue 5.3.0 XML JSON (journal title) JSON (volume) JSON (issue) Journal title 4.8.0 XML JSON Peer reviews Type Version Input Output Peer review 5.3.0 XML JSON Posted content (includes preprints) Type Version Input Output Posted content 5.3.0 XML JSON Posted content 4.8.0 XML JSON Reports and working papers Type Version Input Output Report 5.3.0 XML JSON Standards Type Version Input Output Standard 5.3.0 XML JSON Resource Examples Some metadata segments can be added to an existing record using resource XML.\nType Version Input Clinical trial 4.4.2 XML Crossmark 4.4.2 XML Funding 4.4.2 XML License and full text URL 4.4.2 XML Multiple resolution secondary URLs w/unlock flag 4.4.2 XML Multiple resolution secondary URLs only 4.4.2 XML Multiple resolution unlock only 4.4.2 XML References 4.4.2 XML Relationships 4.4.2 XML Similarity Check URLs 4.4.2 XML Please consult other users on our forum community.crossref.org or open a ticket with our technical support specialists if you have any questions.\n", "headings": ["Books","Conference Proceedings","Components","Datasets","Dissertations","Grants","Journals","Peer reviews","Posted content (includes preprints)","Reports and working papers","Standards","Resource Examples"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/brand/badges/", "title": "Account badges", "subtitle":"", "rank": 5, "lastmod": "2018-11-23", "lastmod_ts": 1542931200, "section": "Brand & logos", "tags": [], "description": "Do you know which type of account you have with us?. 
If you\u0026rsquo;d like to share that on your website or in your communications, we\u0026rsquo;ve made some standard badges to do so. Note that we recommend using .svg versions online at 200px wide for the sharpest display.\nWhen placing your badge on your website we ask that you reference (not download) them, using the snippets given below. Please copy the code exactly so that if we update our badges, you’ll automatically get the correct file.", "content": "Do you know which type of account you have with us?. If you\u0026rsquo;d like to share that on your website or in your communications, we\u0026rsquo;ve made some standard badges to do so. Note that we recommend using .svg versions online at 200px wide for the sharpest display.\nWhen placing your badge on your website we ask that you reference (not download) them, using the snippets given below. Please copy the code exactly so that if we update our badges, you’ll automatically get the correct file.\nReferencing your badge Account badge Use this code \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/member-badges/member-badge-member.svg\u0026quot; width=\u0026quot;200\u0026quot; height=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref Member Badge\u0026quot;\u0026gt; \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/member-badges/member-badge-metadata-user.svg\u0026quot; width=\u0026quot;200\u0026quot; height=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref Metadata User Badge\u0026quot;\u0026gt; \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/member-badges/member-badge-service-provider.svg\u0026quot; width=\u0026quot;200\u0026quot; height=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref Service Provider Badge\u0026quot;\u0026gt; \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/member-badges/member-badge-sponsored-member.svg\u0026quot; width=\u0026quot;200\u0026quot; height=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref Sponsored Member Badge\u0026quot;\u0026gt; \u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/member-badges/member-badge-sponsoring-organization.svg\u0026quot; width=\u0026quot;200\u0026quot; height=\u0026quot;200\u0026quot; alt=\u0026quot;Crossref Sponsor Badge\u0026quot;\u0026gt; ", "headings": ["Referencing your badge"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/orcid/", "title": "ORCID auto-update", "subtitle":"", "rank": 3, "lastmod": "2017-01-13", "lastmod_ts": 1484265600, "section": "Get involved", "tags": [], "description": "An ORCID iD is a persistent identifier for individual researchers and scholarly contributors. It allows everyone (authors, publishers, funders, and research institutions) to uniquely identify the work that they do, and accurately attribute it.\nEnter once, when you submit a paper, then watch as your ORCID record is automatically updated as your work is published, registered with Crossref, and enters the global citation network!\nWho is it for? Authors: save yourself time and reduce the burden of manual data entry, and easily keep your ORCID record up-to-date Publishers: help automate processes for your authors and enhance the discoverability of their work and your content Funders, research administrators, librarians and anyone else interested in tracking research outputs.", "content": "An ORCID iD is a persistent identifier for individual researchers and scholarly contributors. 
It allows everyone (authors, publishers, funders, and research institutions) to uniquely identify the work that they do, and accurately attribute it.\nEnter once, when you submit a paper, then watch as your ORCID record is automatically updated as your work is published, registered with Crossref, and enters the global citation network!\nWho is it for? Authors: save yourself time and reduce the burden of manual data entry, and easily keep your ORCID record up-to-date Publishers: help automate processes for your authors and enhance the discoverability of their work and your content Funders, research administrators, librarians and anyone else interested in tracking research outputs. How does it work? Crossref’s Auto Update is a classic example of open scholarly infrastructure at work. Registering and sharing metadata and persistent identifiers—such as ORCID iDs and Digital Object Identifiers—means systems can communicate with each other to save everyone a lot of time and effort.\nWhen Crossref members register their content with us, we encourage the inclusion of the ORCID iDs belonging to the individual(s) who contributed to that publication—whether they are authors, peer reviewers, or editors. When ORCID iDs are included in the metadata provided to Crossref along with other information about the work such as title, date, and DOI, we can automatically update the associated ORCID record(s) upon publication (with permission from the record holder(s)).\nHow do I get started? Authors: register for an ORCID iD, and be sure to supply it when you submit your article. Once your article is published, you will receive a notification from ORCID asking you to grant Crossref permission to update your record. Once granted, Crossref will automatically update your record with any future publications that contain your ORCID iD. If you change your mind, you can revoke Crossref’s permission at any time—you can do this directly from your ORCID record (learn more). As an ORCID record holder, you are in control of your record and can grant and revoke access of trusted parties, including Crossref, at any time. By providing your ORCID iD when you submit a paper, you can be sure that as your work takes on a life of its own, you will always be credited. Publishers: Encourage your authors to sign up and submit their ORCID iDs when submitting papers. Let authors know that by doing this, the article will automatically be added to their ORCID record once published, and that they should look out for a notification if/when their paper is published to prompt them to grant permission for ORCID to add their publication to their ORCID record. They can grant long-lived permissions so that they enable the automatic addition of any further publications registered with Crossref to their record. Build awareness among editors of the importance of collecting this persistent identifier. Please consult other users on our forum community.crossref.org or open a ticket with our technical support specialists with any questions.\n", "headings": ["Who is it for?","How does it work?","How do I get started?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/researchers/", "title": "For researchers", "subtitle":"", "rank": 5, "lastmod": "2017-01-06", "lastmod_ts": 1483660800, "section": "Get involved", "tags": [], "description": "Find other researchers’ work and let them find yours. Through registering DOIs, we collect and share comprehensive information about research such as citations, mentions, and other relationships. 
Thousands of tools and services then harness this information—for search, discovery, and measurement—through our open APIs.\nPersistent linking Our members register the content they publish with us to tell us it exists - this includes bibliographic (and other) information, and persistent identifiers.\nThis helps make content discoverable by uniquely identifying the work, and giving a means to link to it long-term.", "content": "Find other researchers’ work and let them find yours. Through registering DOIs, we collect and share comprehensive information about research such as citations, mentions, and other relationships. Thousands of tools and services then harness this information—for search, discovery, and measurement—through our open APIs.\nPersistent linking Our members register the content they publish with us to tell us it exists - this includes bibliographic (and other) information, and persistent identifiers.\nThis helps make content discoverable by uniquely identifying the work, and giving a means to link to it long-term. When you click on the Digital Object Identifier (DOI) in a reference list, you’ll be reliably taken to the content you’re interested in, regardless of the publication or publisher. If the content moves to a new website, the publisher will come to us and register the new location of the content so that the link doesn’t break and researchers can continue to navigate between content without any snags.\nWe also disseminate that metadata so that other systems - search or subject databases, library systems and scholarly sharing networks - can employ it to help their users find the research they’re interested in.\nRegistering content with us helps the content you publish be found, cited, linked to, and used by other researchers. Watch the video below to find out more:\nAdd DOIs to your reference lists Adding DOIs to your reference lists means that readers can link persistently to related works.\nYou can search for DOIs via our search service or add a list of references to our query tool to have the DOIs we can find returned for those.\nYou’re also welcome to access and use our metadata via our REST API - no sign-up is required and it’s free to use.\nHave an ORCID iD? Speaking of identifiers, we’re big fans of ORCID iDs.\nIt’s simple (and free) to register for an ORCID iD, and providing your ORCID iD when you submit a paper or publish content provides more ways for people to discover your research.\nMany members collect ORCID iDs from their authors, and if they deposit them with us when they register content we can push the publications to the author’s ORCID record automatically. It’s called auto-update and means researchers can skip re-keying information and have their ORCID record show the most complete record of their publications.\nYou can also update your ORCID record by adding existing works that have a Crossref DOI by taking the following steps:\nStart from https://0-search-crossref-org.libus.csd.mu.edu/ and click “Sign in” at the top right of the screen Sign in with your ORCID account credentials, or “Register now” if you don’t yet have an ORCID iD You’ll then see the following message: “Crossref Metadata Search has asked for the following access to your ORCID Record: Add/update your research activities (works, affiliations, etc) Read your information with visibility set to Trusted Parties” Click “Authorize” Search for any of your publications that have a Crossref DOI. You can do this by the title, authors or publication. 
Click “Add to ORCID” to the right of each publication to add it to your ORCID record. Finding important updates to content If you see the Crossmark button when reading a paper, make sure you click on it!\nUsing Crossmark gives you quick and easy access to the current status of a work. With one click, you can see if content has been updated, corrected or retracted, meaning you can have confidence in the status of what you’re reading or citing. You can also see useful additional information on things like who funded the work, what licenses apply to the content and much more.\nInterested in text mining? Our REST API is designed to allow researchers to harvest full-text from participating members for the purpose of text mining.\nPublishers deposit license information and links to the full-text of the content with us, and researchers interested in mining cross-publisher content can use that information to find out where the content is located, and what they are allowed to do with it. A GET request then allows you to download the full-text from the publisher’s site (if you have access to it). We walk you through the process in our documentation.\nQuestions about any of this? Get in touch!\n", "headings": ["Persistent linking","Add DOIs to your reference lists","Have an ORCID iD?","Finding important updates to content","Interested in text mining?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-12-05-technical-support-specialist/", "title": "Technical Support Specialist", "subtitle":"", "rank": 1, "lastmod": "2024-12-05", "lastmod_ts": 1733356800, "section": "Jobs", "tags": [], "description": "Applications for this position will be closed on December 20, 2024. Do you want to work directly with Crossref members and metadata users to help progress open science worldwide? Come and join the world of open scholarly infrastructure and metadata as our next Technical Support Specialist.\nLocation: Remote and global with availability from 13:00 to 15:00 UTC Monday through Friday Type: Full-time Remuneration: 45-50K USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer.", "content": " Applications for this position will be closed on December 20, 2024. Do you want to work directly with Crossref members and metadata users to help progress open science worldwide? Come and join the world of open scholarly infrastructure and metadata as our next Technical Support Specialist.\nLocation: Remote and global with availability from 13:00 to 15:00 UTC Monday through Friday Type: Full-time Remuneration: 45-50K USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer. Reports to: Head of Participation and Support, Isaac Farley Timeline: Advertise and recruit in December/offer in January About the role Reporting to our Head of Participation and Support, the full-time Technical Support Specialist is an important role in our Membership team.\nWe’re looking for a Technical Support Specialist to provide first-line help to our international community of publishers, librarians, funders, researchers, and developers on a range of services that help them deposit metadata in order to find, link, cite, and assess scholarly content. You’ll be working closely with nine other technical and membership support colleagues to provide support and guidance for people with a wide range of technical experience. 
The strongest candidates will not necessarily be from a technical background, but they’ll have the interest and initiative to grow their technical skills while communicating the complexity of our products and services in straightforward and easy-to-understand terms. You’ll help our community both create and retrieve metadata records with tools ranging from simple user interfaces to APIs and integrations.\nCrossref is a distributed team serving members and users around the world. We are seeking candidates who will help foster a strong team. We work a flexible schedule; for training and synchronous problem-solving, we also ask that candidates have availability between 13:00 and 15:00 UTC.\nKey responsibilities Replying to and solving community queries using the Zendesk support system. Using our various tools and APIs plus knowledge of our XML schema to find the answers to these queries, or pointing users to support materials that will help them. Liaising with colleagues on particularly tricky tickets, escalating as necessary. Working efficiently, but also kindly and with empathy, with our very diverse, global community. About you We are looking for a proactive candidate with a unique blend of customer service skills, analytical troubleshooting skills, and a passion to help others. You’ll have an interest in data and technology and will be a quick learner of new technologies. You’ll be able to build relationships with our community members and serve their very diverse needs - from assisting those with basic queries to really digging into some knotty technical queries. Because of this, you’ll also be able to distill those complex and technically challenging queries into easy-to-follow guidance.\nYou’ll need:\nThe ability to clearly communicate complex technical information to technical and non-technical users, using open questions to get to the bottom of things when queries don’t seem to make sense Quick learner of new technologies; can rapidly pick up new processes and systems; and have the interest and initiative to grow your own technical skills Extremely organized and can bring order to chaos, independently manage multiple priorities Ability to balance a very diverse role, wearing many different hats and providing a wide range of support Proactive in asking questions and making suggestions for improvements Process-driven but able to cope with occasional ambiguity and lack of clarity - open to feedback and adaptable when things change quickly A truly global perspective - we have over 20,000 member organizations from 160 countries across numerous time zones Nice to have:\nExperience helping customers and solving problems in creative and unique ways Experience with or interest in XML, metadata, and Crossref as well as scholarly research and information science Experience with Zendesk and Gitlab or similar support and issue management software Experience with Discourse, similar community forums, and/or past community building About the team Our technical support team, part of Crossref’s Outreach group, a distributed team with colleagues across Africa, Asia, Europe, and the US, is critical to the ongoing success and day-to-day operations of the organization. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand.\nThis is a remote role. 
We ask that candidates have availability between 13:00 and 15:00 UTC. You can be based anywhere in the world where we can employ staff, either directly or through an employer of record. Our main working language is English, but there might be opportunities in this job to use other tongues if you are able.\nAbout Crossref We’re a nonprofit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide— which we take a bit more seriously than ourselves. We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nWe can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart, the latest Annual Meeting recordings, and our financial information here.\nThinking of applying? We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nClick here to apply!\nPlease strive to submit your application by December 20, 2024.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. 
Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-10-03-billing-support-specialist/", "title": "Billing Support Specialist, Part-time", "subtitle":"", "rank": 1, "lastmod": "2024-10-03", "lastmod_ts": 1727913600, "section": "Jobs", "tags": [], "description": "Applications for this position closed October 21st, 2024. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our Billing Support Specialist, Part-time.\nLocation: Remote and global (to overlap with colleagues in East Coast USA) Type: Part-time with 15 to 25 hours per week Remuneration: $20/hour USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer.", "content": " Applications for this position closed October 21st, 2024. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our Billing Support Specialist, Part-time.\nLocation: Remote and global (to overlap with colleagues in East Coast USA) Type: Part-time with 15 to 25 hours per week Remuneration: $20/hour USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer. Reports to: Accounts Receivable Manager, Amy Bosworth Timeline: Advertise and recruit in October/interview and offer in November About the role Reporting to the Accounts Receivable Manager, the part time Billing Support Specialist is a key support role within the Finance Team.\nThe Billing Support Specialist assists in the administration of the financial functions for the Finance Team, with a focus on supporting the billing needs for our global membership community. Crossref has more than 20,000 members in 160 countries; this role will manage inquiries from members across the world, and as such an ideal candidate will be excited to engage with a wide range of community members. The Billing Support Specialist will need to exercise judgment in selecting and applying established procedures correctly and in determining when to refer situations to other support staff.\nKey responsibilities Monitors Zendesk for billing inquiries and resolves or distributes when appropriate Oversees membership form requests Supports in collections initiatives Assists in payment application within accounting platform Aids in other ad hoc financial, administrative and operations projects About you This role provides assistance and support to the Accounts Receivable Manager, the Billing Support Specialist and Director of Finance. This position also works closely with the Membership Experience team.\nThe Billing Support Specialist will need to be organized and have exemplary communication skills. 
The Billing Support Specialist will be exposed to Accounts Receivable and other aspects of Finance where accuracy will be essential.\nYou’ll need:\n2-3 years of customer service experience Experience in Accounting Systems (Sage Intacct would be ideal) and Microsoft Excel Strong written and verbal communication skills with the ability to communicate clearly and effectively Prior experience with a globally-minded organization is a plus Critical thinking and problem-solving skills The ability to learn standard processes and apply them when appropriate The ability to track progress on initiatives and follow up on inquiries High level of attention to detail About the team The Finance Team at Crossref holds the responsibility for recording and reporting on financial transactions accurately and in a timely manner. We also support the overall membership experience, by billing members accurately and in a timely manner, and actively communicating to resolve inquiries they may have about their invoices or payment options. We follow carefully crafted financial controls to ensure that our Financial Reports are robust, accurate and up to date. We work closely with Membership, Technology, and Programs in a variety of ways. Our goal is to provide accurate and timely information so that management can make informed decisions.\nThis is a remote role that will need to overlap with the East Coast, USA. You can be based anywhere in the world where we can employ staff, either directly or through an employer of record.\nAbout Crossref We’re a nonprofit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide—which we take a bit more seriously than ourselves. 
We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nWe can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart and view our Annual Reports and financial information here.\nThinking of applying? We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nApplication deadline was October 21st, 2024\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-10-12-senior-developer/", "title": "Senior Software Developer", "subtitle":"", "rank": 1, "lastmod": "2023-10-12", "lastmod_ts": 1697068800, "section": "Jobs", "tags": [], "description": "Applications for this position closed October 27, 2023. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our new Senior Software Developer.\nLocation: Remote and global (with 3-hour overlap with the UTC-0 time zone) Type: Full-time Remuneration: Approx. EUR 90-95k/USD 128-132k/GBP 56-65k or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local benchmarking will take place before the final offer.", "content": " Applications for this position closed October 27, 2023. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our new Senior Software Developer.\nLocation: Remote and global (with 3-hour overlap with the UTC-0 time zone) Type: Full-time Remuneration: Approx. EUR 90-95k/USD 128-132k/GBP 56-65k or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local benchmarking will take place before the final offer. Reports to: Head of Software Development, Joe Wass Timeline: Advertise and recruit in October/hire in November About the role We\u0026rsquo;re looking for the next member of the Software Development team. We build, operate and maintain metadata services that enable the global research community. We\u0026rsquo;re on a journey to build features to serve the evolving needs of our changing membership, and migrate our legacy technology. 
As a team, we have a deep understanding not only of technology, but also the needs of our diverse community.\nYou will specify, design and implement improvements, features and services. You will have a key voice in discussions about technical approaches and architecture. You will always keep an eye on software quality and ensure that the code you and your colleagues produce is maintainable, well tested, and of high quality.\nOur new code is written primarily in Kotlin on the back-end and Typescript on the frontend. We have legacy code in Java. We don\u0026rsquo;t expect you to be familiar with our technologies yet, though it\u0026rsquo;s a bonus. This is primarily a back-end role.\nYou should have in-depth knowledge of at least one compiled, typed or functional language (e.g. Java, Clojure, Kotlin, Scala, C#, Go, Typescript etc) and a history of learning new languages and technologies on the job.\nWe are a geographically distributed, remote-first team with flexible working hours.\nKey responsibilities Understand Crossref’s mission and how we support it with our services Collaborate with external stakeholders when needed Pursue continuous improvement across legacy and green-field codebases Work flexibly in multi-functional project teams, especially with the Product team, to design and develop services Ensure that solutions are reliable, responsive, and efficient Produce well-scoped, testable, software specifications Implement and test solutions using Kotlin, Java and other relevant technologies Work closely with the Head of Software Development to solve problems, maintain and improve our services and execute technology changes Provide code reviews and guidance to other developers regarding development practices and help maintain and improve our development environment Identify vulnerabilities and inefficiencies in our system architecture and processes, particularly regarding cloud operations, metrics and testing Communicate proactively with membership and technical support colleagues ensuring they have all the information and tools required to serve our users Openly document and share development plans and workflow changes Be an escalation point for technical support; investigate and respond to occasional but complex user issues; help minimize support demands related to our systems; be part of our on-call team responding to service outages About you We don\u0026rsquo;t expect a successful candidate to tick all of these boxes right away! 
If you have any questions, please get in touch.\nQualities\nComfortable collaborating with colleagues or stakeholders in the community Have a proven track record of picking up new technologies Outstanding at interpersonal relations and relationship management Comfortable collaborating with colleagues across the organisation Self-directed, a good manager of your own time, with the ability to focus Comfortable being part of a distributed team Curious and tenacious at learning new things and getting to the bottom of problems Skills\nAn expert senior developer with experience in Java, Kotlin, or related languages Experience in Spring or similar frameworks Experienced with continuous integration, testing and delivery frameworks, and cloud operations concepts and techniques Familiar with AWS, containerization and infrastructure management using tools like Terraform Some experience with Python, JavaScript or similar scripting languages Experience working on open source projects Able to quickly understand, refactor and improve legacy code and fix defects A working understanding of XML and document-oriented systems such as Elasticsearch Experience building tools for online scholarly communication or related fields such as Library and information science, etc Ability to create and maintain a project plan Strong at written and verbal communication skills, able to communicate clearly, simply, and effectively About the team The Crossref team is distributed across the world. The Software Development team is based in Europe and the US. We work alongside the Product, Infrastructure and Labs teams.\nAll new code, and most issue tracking, is open source. We strongly believe in open scholarly infrastructure and openness at all stages of the software development lifecycle. We perform code reviews and practice continuous deployment for all new code.\nAs a membership organization we keep closely in touch with our users, and encourage our developers to be familiar with our community.\nWe work fully remotely, but try to meet in person at least once a year. This is a full-time position, but working hours are flexible. The applicant should expect they will need to travel internationally to work with colleagues for about 5-10 days a year. If you have any questions we would be happy to discuss.\nYou can be based anywhere in the world where we can employ staff, either directly or through an employer of record.\nAbout Crossref We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nCrossref sits at the heart of the global exchange of research information, and our job is to make it possible—and easier—to find, cite, link, assess, and reuse research, from journals and books, to preprints, data, and grants. Through partnerships and collaborations we engage with members in 148 countries (and counting) and it’s very important to us to nurture that community.\nWe’re about 45 staff and remote-first. This means that we support our teams working asynchronously and to flexible hours. Some international travel will likely be appropriate, for example to in-person meetings with colleagues and members, but in line with our travel policy. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! 
We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThinking of applying? This position is full time and, as for all Crossref employees, location is flexible. The Crossref team is geographically distributed in Europe and North America and we fully support working from home. It would be good to have a minimum 3-hour overlap with the UTC-0 time zone.\nWe especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nApplications Closed\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-09-27-finance-clerk/", "title": "Finance Clerk", "subtitle":"", "rank": 1, "lastmod": "2023-09-27", "lastmod_ts": 1695772800, "section": "Jobs", "tags": [], "description": "Applications for this position closed in October 2023. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our new Finance Clerk.\nLocation: Remote and global (to overlap with colleagues in East Coast USA) Type: Full-time Remuneration: 40-44k USD Reports to: Supervising Accountant, Maria Sullivan Timeline: Advertise and recruit in September-October/hire in October-November About the role Reporting to the Supervising Accountant, the Finance Clerk is a key role within the Finance team.", "content": " Applications for this position closed in October 2023. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure as our new Finance Clerk.\nLocation: Remote and global (to overlap with colleagues in East Coast USA) Type: Full-time Remuneration: 40-44k USD Reports to: Supervising Accountant, Maria Sullivan Timeline: Advertise and recruit in September-October/hire in October-November About the role Reporting to the Supervising Accountant, the Finance Clerk is a key role within the Finance team. The Finance Clerk is responsible for full cycle Accounts Payable, assuring financial transactions are properly recorded within the accounting system. 
This position is the lead contact for vendor relations and our internal expense reporting application. The position will also be cross-trained on cash application and membership support.\nA successful candidate in this role will be comfortable performing a broad range of duties, working remotely and independently, and paying close attention to details.\nKey responsibilities Responsible for the full cycle AP function for both the UK and USA entities including entering invoices into our accounting system, obtaining payment approvals and facilitating payment processing (checks/wires/direct debits/ACHs) Responsible for managing corporate credit cards including reviewing and reconciling to statements monthly Responsible for the Expensify expense reporting platform, including maintaining knowledge of updates and enhancements and troubleshooting Onboard/train new hires in Expensify and be the main point of contact for team members on system and reimbursement process inquiries Responsible for the yearly 1099/1096 filing and vendor reporting Act as backup for other Finance Team staff Responding to Zendesk inquiries and assisting in collections as needed Assist with cash application Assist with monthly and quarterly financial reporting Assist with audits Other ad hoc financial and operational projects About you The successful candidate will possess the following:\nThe ability to organize work, set priorities, follow-up and work proactively and accurately Excellent oral, written, data entry and communication skills Have experience in a multi-currency environment, including USD \u0026amp; GBP Equally comfortable communicating with colleagues and our members to help problem solve Experience working across timezones, with an understanding of the global nature of our work and community A self-starter and problem solver with an exceptional attention to detail Be comfortable using a variety of technology and software to communicate with a distributed staff It would be a plus if you possess the following:\n2-5 years of accounting experience; basic understanding of US GAAP accounting practices Solid experience using cloud-based/accounting applications (Intacct) Solid experience using Microsoft Excel and other tools (gmail/google docs, etc) Bachelor\u0026rsquo;s Degree in Accounting/Business or equivalent business experience About Crossref We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nCrossref sits at the heart of the global exchange of research information, and our job is to make it possible—and easier—to find, cite, link, assess, and reuse research, from journals and books, to preprints, data, and grants. Through partnerships and collaborations we engage with members in 148 countries (and counting) and it’s very important to us to nurture that community.\nWe’re about 45 staff and remote-first. This means that we support our teams working asynchronously and to flexible hours. Some international travel will likely be appropriate, for example to in-person meetings with colleagues and members, but in line with our travel policy. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! 
We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nAbout the team This role is in our Finance and Operations team which consists of 6 (soon to be 7) team members remotely sitting in the East Coast, USA. The Finance Clerk is a remote role that will need to overlap with East Coast, USA for at least 3-4 working hours. You can be based anywhere in the world where we can employ staff, either directly or through an employer of record.\nThinking of applying? We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nApplications closed\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","About you","About Crossref","About the team","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-04-13-member-experience-manager/", "title": "Member Experience Manager", "subtitle":"", "rank": 1, "lastmod": "2023-04-13", "lastmod_ts": 1681344000, "section": "Jobs", "tags": [], "description": "Applications for this position closed May 1st, 2023. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our brand new Member Experience Manager.\nLocation: Remote and global (to overlap with colleagues in Indonesia and East Coast USA) Remuneration: Approx. EUR 58,000 - 70,000 or local equivalent, depending on experience.", "content": " Applications for this position closed May 1st, 2023. Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our brand new Member Experience Manager.\nLocation: Remote and global (to overlap with colleagues in Indonesia and East Coast USA) Remuneration: Approx. EUR 58,000 - 70,000 or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local benchmarking will take place before final offer. 
Reports to: Head of Member Experience, Amanda Bartell Timeline: Advertise and recruit in April/May; offer by end of May About the role This position is a mix of community and relationship management alongside business process management, data quality, and analytics; it’s a very varied role and ideal for an experienced generalist with a passion for collaboration and transparency.\nYou’ll be managing two membership specialists to help ensure our members have a smooth experience with us, with a particular focus on a carefully-managed application and onboarding process, reducing manual tasks, and making things more efficient and transparent wherever possible. You’ll be ensuring that our members understand the role that they and Crossref play in building the vision of a shared research nexus and helping them to join and contribute the best quality and quantity of metadata about their research with the global scholarly community. You’ll be overseeing data integrity and reporting on trends to provide actionable insights. You’ll be active in the scholarly communications community, contributing to volunteer-led co-creation initiatives. And you’ll work hand-in-hand with community engagement colleagues to support key programs for members, sponsors, service providers, and metadata users.\nKey responsibilities Managing our small membership team of two member support specialists (one based in the UK, one based in Indonesia), along with three membership contractors. Managing the new member onboarding process to ensure members have all the information they need to succeed. Supporting the membership specialists by answering particularly involved or knotty questions through our support system (Zendesk), our Community Forum (Discourse), face-to-face (via zoom and in person), or on social media like our growing mastodon presence. Making our membership application process as smooth as possible for new members, while ensuring that applicants have all the information they need to get the most out of their membership. Identifying and implementing process improvements for a more efficient experience (including for our staff) by eliminating manual and intensive tasks where possible. Managing our automated onboarding email program. Working with long-term members to make the most of their membership and follow the member obligations Providing virtual (and in-person) training, support, and metadata ‘health checks’, following up and triaging issues to other expert colleagues. Identifying issues and working proactively with members to solve problems, including any who are not meeting their membership obligations. Working with the membership specialists to ensure that our member data is accurate and up-to-date, and that our CRM system (Sugar) can meet the organisation’s reporting needs. Using the CRM reports to provide actionable insights on trends. Being a key point of contact for and working with our finance team to help improve the payment and invoicing experience for everyone. Supporting meeting our openness and transparency goals (see POSI) by exposing publicly all membership operations and activities. Participating in community events and volunteer initiatives to maintain an awareness of community issues and providing guidance or co-creating shared resources. Working hand-in-hand with community engagement colleagues to support key programs for sponsors, service providers, and metadata users - as these programs support all our members. 
Helping to create and implement rollout plans for new features that will affect our community, such as new ways of logging in or interacting with our systems, changes to fees, or opportunities to participate in or test Crossref services and initiatives. About you We’re looking for a colleague who will take this opportunity and make it their own. While we have many documented processes for handling such a large membership operation, your fresh eyes will be able to highlight and improve how we work. Ideally, you will have an understanding of the dynamics within the academic research and open science environment, know what metadata is and why it matters, and have the ability to engage with and enthuse a wide range of stakeholders. You’ll have a love of data and analytics, a comfort level with metadata and databases, and a logical, systematic approach to prioritising and work in general.\nYou’ll be experienced working within the broad areas of community, process, and data. You’ll be as comfortable demonstrating online tools or explaining complex concepts (such as the ‘research nexus’) as you are with handling detailed data and systems like CRMs. And you are a persuasive presenter with excellent written English skills (with other languages highly desirable). It’s particularly important that you can explain complicated concepts and multi-step processes clearly and convincingly. You’re able to build strong relationships and collaborate with internal teams and community partners, keeping member experience at the forefront of colleagues’ activity. You’ll be comfortable taking the initiative to lead conversations with people at all levels, with team management experience with remote and international teams. You’ll need to be extremely organized and attentive to detail. You’ll be able to follow set processes carefully and accurately, but adapt when situations change and simplify processes where possible. You’ll have a passion to understand and improve member experience, and are able to build credibility, trust, and relationships with our members, sponsors, service providers, metadata users, and partners. You’re a quick learner of new technologies and can rapidly pick up new programs and systems. You’ll ideally have experience with Zendesk or similar support systems and have used CRM systems (such as Sugar) - and you might even dabble in XML or JSON (we all succumb eventually 😁). You’ll have a truly global perspective - we have 18,000 member organizations from 150 countries across numerous time zones and they engage and interact with us in numerous ways. About Crossref We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nCrossref sits at the heart of the global exchange of research information, and our job is to make it possible—and easier—to find, cite, link, assess, and reuse research, from journals and books, to preprints, data, and grants. Through partnerships and collaborations we engage with members in 148 countries (and counting) and it’s very important to us to nurture that community.\nWe’re about 45 staff and remote-first. This means that we support our teams working asynchronously and to flexible hours. Some international travel will likely be appropriate, for example to in-person meetings with colleagues and members, but in line with our travel policy. 
We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThis role is in our Member Experience team, part of our larger Community Outreach team - a fourteen-strong team split across the US, Africa, Asia and Europe.\nThinking of applying? We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","About you","About Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/strategy/archive-2021/", "title": "Strategic agenda 2021-2022", "subtitle":"", "rank": 1, "lastmod": "2021-05-28", "lastmod_ts": 1622160000, "section": "Strategic agenda and roadmap", "tags": [], "description": "This is our strategic agenda from 2020-2022 and it\u0026rsquo;s now archived, along with the previous 2018-2022 version, please visit the main strategy page for the current version. Welcome to our new strategic narrative\u0026mdash;published June 2021\u0026mdash;which sets out our agenda through 2025. It encompasses all our plans, from governance and sustainability, our work with different parts of the scholarly community, through to new and existing product development as well as strategic initiatives.", "content": " This is our strategic agenda from 2020-2022 and it\u0026rsquo;s now archived, along with the previous 2018-2022 version, please visit the main strategy page for the current version. Welcome to our new strategic narrative\u0026mdash;published June 2021\u0026mdash;which sets out our agenda through 2025. It encompasses all our plans, from governance and sustainability, our work with different parts of the scholarly community, through to new and existing product development as well as strategic initiatives.\nWe have six overarching goals, two more than the previous set, now archived; \u0026lsquo;bolster the team\u0026rsquo; and \u0026rsquo;live up to POSI\u0026rsquo; are the 2021 additions. 
Historical strategy information can be found on our blog.\nRead on to learn more about where the Crossref community is heading and let us have your thoughts by starting or joining a discussion in the strategy section of our community forum.\nBolster the team This theme is all about people, support, culture, and resilience. In recent years, Crossref has grown in every way: members, users, registered content, and staff. We see no indication of things slowing down, which is usually seen as a positive, but we need to get into a position where we can project and manage growth more purposefully, and catch up with our technical debt, which will support our staff and community in improving scholarly communications.\nAll teams have manual processes that need to be automated, and a continued priority is to ‘reduce toil’ for our staff, and therefore for our members and users. Support queries have grown as we’ve introduced new services and tools (along with the volume of content and members). Having needed to recruit contractors for technical support and membership, we need to carefully consider what resourcing Crossref will require if we continue to grow as we have been, before we achieve our other goal to radically simplify our services. We won’t be able to meet our community members in their own countries (now numbering 140), and while we can reach a breadth of people online, we miss the depth of understanding that meeting face-to-face can give. So we need new ways to hear from and understand many varied needs. We will organise our communications and engagement activities under a new community playbook where we have a programming and co-creation approach to content, events, and supporting languages other than English.\nAs our staff have become more distributed, and our organization is adapting to changes in work culture, we want to improve the way we communicate across teams and timezones, and improve how we make decisions and track progress towards the goals set out here. Our recruitment practices are being honed to better reflect our inclusive values, and we are improving the new hire ‘welcome’ experience.\nStrong leadership and organizational resilience are priorities, and activities such as knowledge transfer, documentation, succession planning, risk management, and identification of critical functions will all help to keep Crossref thriving. Live up to POSI This theme is stated because we want to be held publicly accountable to the Principles of Open Scholarly Infrastructure (POSI) standards of governance, insurance, and sustainability. Our board adopted POSI in November 2020 and we published a self-assessment soon after. We meet most of the 16 principles, and this goal keeps us working towards the others as well as maintaining those we are doing well in.\nThe one principle we meet least well is broad stakeholder governance. A stakeholder is any person or organisation that would be affected by decisions made by Crossref. That includes anyone who has ever clicked on a Crossref DOI. So\u0026mdash;starting with research funders\u0026mdash;we will actively work to broaden the governance of the organisation. In 2021, the board gave guidance to the nominating committee to add at least one funder to the election slate.\nA number of projects in this section center around transparent operations. While we have a lot of information here on our website, some of it is not easy to find and some of it we\u0026rsquo;ve never thought to publish.
So we will work to digitise things such as our financial updates and we\u0026rsquo;ve introduced a public product roadmap where we are updating information more frequently throughout the year. Governance and sustainability information will be added to the website, the practices we follow to onboard as well as \u0026lsquo;off-board\u0026rsquo; accounts, as well as recruitment policies and staff handbooks.\nOn the technical side we will continue to release public data files periodically, we’ll clarify licenses of the different kinds of metadata, services, and tools that we have. And we will aim to continue to open-source the majority of our code, bug reporting, and issue tracking. Over time, support will be lifted out of a 1:1 setting and into the open, via our community forum. All of these activities\u0026mdash;and the mindset that goes along with them\u0026mdash;will ensure that Crossref can be \u0026lsquo;forked\u0026rsquo; if anything goes badly wrong.\nEngage with expanding communities This theme centers on scale, strengthening relationships, community facilitation, and communications/content. Crossref has enjoyed incredible growth for many years and we need to be able to predict what’s coming next and project the scale of Crossref some years down the line. We know that not all content has a DOI (even with other agencies) and we know that many countries and organizations still do not find Crossref accessible. The priority activities under this goal include gap analyses work with organizations such as DOAJ to predict future growth areas. We will also be conducting a review of our revenue, its current distribution, historical trends, and capacities for growth across our services, all in line with our mission and the POSI sustainability definition.\nCrossref currently has members in 140 countries and we interact with people in 160. With that comes the need to increasingly and proactively involve emerging markets as they begin sharing research outputs globally. We are regularly reassessing the ways that we manage the long tail of new members, our relationship with and management of Sponsors (also growing in number), and how we have structured our teams to support this growth.\nWe include emerging markets in our training and events, and we will develop as much content in languages other than English as we can. We are also exploring removing barriers to participation for US-sanctioned countries. We work with government education/science ministries and local Sponsors and Ambassadors, and are increasing our efforts to do more public support via the community forum. We are also creating more bitesize and visual/audio resources and taking a programming approach to our online events.\nWe will re-establish a practice of community consultation where possible, involving people in our priorities and plans, refreshing or revising community groups, starting with one for metadata users as a key and growing group of stakeholders.\nFurthermore, funders and research institutions are increasingly interested in supporting open scholarly infrastructure and have increased their engagement with Crossref in recent years. Funder’s service providers are already supporting the registration and use of grant metadata and our outreach activities with them will expand. Library-publishing infrastructure is a growing interest and activity. 
Our overarching objective is to extend our value proposition to convince these new constituents of Crossref’s relevance, getting them into our system and using our infrastructure.\nImprove our metadata This theme involves researching and communicating the value of richer, connected, and reusable, open metadata, and incentivising people to meet best practices, while also making it possible (and easier) to do so. Increased metadata quality is an ongoing effort and will also be the result of meeting other goals, particularly engaging with new communities (to understand new workflows and record types) and simplifying our services (to provide the data and tools to make better metadata creation easier and reduce manual interventions by our staff).\nWe will commission a metadata ‘reach and return’ study to understand and analyse the outcomes of investing in quality open metadata, and the benefits to members and others. We will convene metadata users in a new user group and we are actively involved in the follow-on phase Metadata 20/20: Your Turn. We will set out the Crossref metadata strategy and create new best practice documentation, incentivising good practice through appropriate fees and actionable participation reports. We have also started a partnership program to work 1:1 with select organisations whose improvement would see the greatest improvement for everyone.\nWe are improving support for standards including JATS for Content Registration, and will provide clear guidelines for service providers. We need to ensure members (and their Service Providers and Sponsors) can register metadata more easily, and we will create a new process to crowdsource corrections and error reporting and make it easier for members to act on those reports.\nBy working to understand new workflows, and applying this knowledge in a revised community data model, we will further develop the research nexus to expand the links between more research objects and plan for new relationships and record types such as protocols, videos, and conferences. We will implement schema 5.0—for both input and output—to enable members to include CRediT and ROR for affiliations.\nCrossref is now a recognized source of scholarly metadata for thousands of services across multiple communities so we will maintain a clear provenance trail of every metadata assertion, assisted by Event Data. And we will continue to work on end-to-end metadata distribution through our public REST API, moving from solr to elasticsearch, while maintaining and growing the Metadata Plus service.\nCollaborate and partner We\u0026rsquo;ve always collaborated but we want to work even more closely with like-minded organisations to solve problems together. This theme cuts across all of our other goals like engaging with new and existing communities, simplifying services, and improving our metadata.\nCrossref did not develop its mission alone, and we cannot meet and exceed these goals alone. The essence of our vision is shared by many organizations and initiatives:\nWe envision a rich and reusable nexus of metadata and relationships connecting all research objects, organizations, people, and actions; an open scholarly infrastructure that the global community can build on forever, for the benefit of society.\nTo realize this vision, we must deepen and cement partnerships with like-minded initiatives and organisations such as DataCite, ORCID, and ROR, to name but a few, ideally reducing the burden of supporting us or our mutual stakeholders. 
We want to co-create and co-develop and encourage integrations that solve problems for the whole community.\nWe will collaborate on outreach and technical development and improve and extend integrations such as ORCID Auto-update. We will support DataCite’s ‘commons’ work for a joint discovery experience, our shared Event Data infrastructure, and the promotion of data citation. We will continue to spend time and effort co-governing ROR and ensuring its sustainability, and we will continue our partnership with the Public Knowledge Project to build plugins for the thousands of Crossref members who use OJS. Our new partnership with DOAJ means we can lower barriers to participation for small and emerging constituencies.\nIn order to invest in new and upcoming community initiatives, we are also expanding our R\u0026amp;D focus (Labs is back!).\nSimplify and enrich services This goal is all about focus. And about delivering easy-to-use tools that are critically important for our community. If we are to reduce ‘toil’ for our staff and users, handle growth and engage with new communities, expand our support for the research nexus, improve metadata, and collaborate with others\u0026hellip; we need to radically simplify our services, starting with unifying Content Registration tools and deprecating old tools and APIs as we go.\nIn order that we can plan, develop, test, deliver, maintain, and update our services faster and more reliably, we are untangling years of code and rules. We have been focusing, and will continue to focus, on knowledge transfer, documentation, and creating a robust test environment. We will extract and refactor major subsystems (Content Registration, matching, billing, monitoring, reporting) and we will build this modern and extensible architecture around a set of capabilities\u0026hellip;\nUnder a concept known as ‘My Crossref’, we will create my.crossref.org. We need to better understand our users, starting with a new authentication module for all tools and for different layers of permissions to reflect all the ways our users work with Crossref on behalf of each other. We will develop linked and actionable reports to help both users and ourselves to make decisions, improve metadata, and increase participation. We will add to the research nexus of relationships, bringing together the technologies we have for matching and linking all objects in the research nexus and extending our support for additional record types and metadata, including additional provenance tracking. We are working on creating an open crowdsourced facility for updates/corrections for metadata enrichment, an activity and notification capability, and we\u0026rsquo;ll unify designs of all Crossref UIs based on usability, accessibility, responsiveness, and internationalization.\nMy Crossref will also allow self-service for toilsome tasks such as title transfers and bulk URL updates, reducing the pressure on our teams and our community. Our new elasticsearch-backed REST API will be more closely aligned with our metadata input, and we will disseminate all metadata, starting with grant records.\nApart from Content Registration and all the related activities above, we have work to do to introduce and support Similarity Check\u0026rsquo;s iThenticate V2, including training materials, member migration, revenue planning, and a future needs analysis.\nEvent Data is an important part of our strategy as it exposes multiple relationships and extends the research nexus beyond member-defined connections.
We have stabilised and relaunched Event Data, and we\u0026rsquo;ll also add to its sources in time. We will monitor and grow our Metadata Plus service, which includes high rate limits and monthly snapshot downloads of all metadata, and with Crossmark we will start to conceptualise a more accessible widget to combine with DOI display guidelines.\n", "headings": ["This is our strategic agenda from 2020-2022 and it\u0026rsquo;s now archived, along with the previous 2018-2022 version, please visit the main strategy page for the current version.","Bolster the team","This theme is all about people, support, culture, and resilience.","Live up to POSI","This theme is stated because we want to be held publicly accountable to the Principles of Scholarly Infrastructure standards of governance, insurance, and sustainability.","Engage with expanding communities","This theme centers on scale, strengthening relationships, community facilitation, and communications/content.","Improve our metadata","This theme involves researching and communicating the value of richer, connected, and reusable, open metadata, and incentivising people to meet best practices, while also making it possible (and easier) to do so.","Collaborate and partner","We\u0026rsquo;ve always collaborated but we want to work even more closely with like-minded organisations to solve problems together.","Simplify and enrich services","This goal is all about focus. And about delivering easy-to-use tools that are critically important for our community."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/ip-blocked/", "title": "Blocked IP", "subtitle":"", "rank": 2, "lastmod": "2019-02-14", "lastmod_ts": 1550102400, "section": "", "tags": [], "description": "Hello. We\u0026rsquo;re delighted that you\u0026rsquo;re using our API and we hope you find it useful.\nUnfortunately, your usage also seems to be causing some problems with the API, which, in turn, is preventing other users from accessing it as well.\nTherefore we have temporarily blocked your access. Please contact us at support@crossref.org. We’ll be happy to help you optimize your system to make the best, unobtrusive use of our API.", "content": "Hello. We\u0026rsquo;re delighted that you\u0026rsquo;re using our API and we hope you find it useful.\nUnfortunately, your usage also seems to be causing some problems with the API, which, in turn, is preventing other users from accessing it as well.\nTherefore we have temporarily blocked your access. Please contact us at support@crossref.org. We’ll be happy to help you optimize your system to make the best, unobtrusive use of our API.\nAlso take a look at these API tips and the etiquette section of our API documentation for more tips on using the system \u0026lsquo;politely\u0026rsquo;.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/developers/", "title": "For developers", "subtitle":"", "rank": 5, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "Get involved", "tags": [], "description": "If you develop tools and software to find, cite, link, and/or assess research outputs, you can integrate our metadata about scholarly content into your project, through our open APIs.\nTrying to get data from us? We have an open REST API that provides a flexible way of accessing and using the most up-to-date metadata we have. 
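To make that concrete, here is a minimal sketch of querying the open REST API, assuming Python and the requests library; the search terms, filter values, and email address are illustrative placeholders rather than anything prescribed by this page.

```python
# Minimal sketch: searching and filtering works metadata via the public
# Crossref REST API (api.crossref.org). Query terms, filter values, and the
# email address are placeholders for illustration only.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "query.bibliographic": "metadata interoperability",         # free-text search
        "filter": "from-pub-date:2020-01-01,type:journal-article",  # narrow the results
        "rows": 5,                                                   # number of records to return
        "mailto": "you@example.org",  # identifies you for the 'polite' pool
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["(no title)"])[0]
    print(item["DOI"], "-", title)
```

The same request works in a browser or with curl, since the parameters simply become part of the URL's query string.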
We store bibliographic metadata related to currently 160,104,382 publications, coupled with funding information, license details, ORCID iDs, full-text links and much more.", "content": "If you develop tools and software to find, cite, link, and/or assess research outputs, you can integrate our metadata about scholarly content into your project, through our open APIs.\nTrying to get data from us? We have an open REST API that provides a flexible way of accessing and using the most up-to-date metadata we have. We store bibliographic metadata related to currently 160,104,382 publications, coupled with funding information, license details, ORCID iDs, full-text links and much more.\nThe API makes it easy to facet, filter, or search the metadata we hold. There\u0026rsquo;s no need to sign up, but we’d love you to tell us how you’re using it or check out some examples for inspiration.\nIf the REST API isn’t the best fit for what you’re doing, we have a number of other delivery options and are happy to discuss how to get you fixed up with one of those.\nWorking on Crossref services Implementing Crossmark? Publishers participate in Crossmark by depositing additional metadata for their content and adding a snippet of code to their DOI landing pages to generate the Crossmark button and link. Details for each of these steps are in our documentation. Interested in text mining? Our REST API is designed to allow harvesting full-text from participating members for the purpose of text mining. We walk you through the process if you’re keen to explore. Want to register content with us, upload files or do bulk queries? Stuck? Email our technical support specialists and we’ll try to help.\n", "headings": ["Trying to get data from us?","Working on Crossref services"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/webinars_old/", "title": "Webinars_old", "subtitle":"", "rank": 3, "lastmod": "2019-04-02", "lastmod_ts": 1554163200, "section": "Webinars_old", "tags": [], "description": "Our regular webinars are a great way to find out more about the various Crossref services.\nWe are now doing face-to-face meetings online. You can find our virtual events on our events page..
Look back at some recent recordings such as an update on Crossmark, the Funder Registry, getting started with Content Registration, maintaining your metadata, or the latest on Similarity Check.\nRecordings of recent webinars Our getting started series Get started with Content Registration Slides, Recording Get started with Reference Linking Slides, Recording Getting started as a new Crossref member Slides, Recording Getting started with looking up metadata Slides, Recording Our how-to webinar series Crossmark how-to Crossref Cited-by how-to Participation Reports The ins and outs of our Admin Tool About our services Content Registration maintaining metadata Crossmark update Crossref Cited-by Funding data and the Funder Registry Introduction to Crossmark Introduction to Similarity Check Research infrastructure with and for funders Similarity Check update Similarity Check Members update Webinars in Arabic Getting Started with Content Registration - in Arabic Introduction to Crossref - in Arabic Reference Linking \u0026amp; Cited-By - in Arabic ندوه عن كيفية استخدام كروس مارك باللغة العربية | Crossmark How-To Arabic webinar Webinars in Indonesian Crossref LIVE Indonesia webinar series: Introduction to Crossref, Content Registration, The Value of Crossref metadata - July 13 - 15 - Online Recordings Webinars in Portuguese Funder Registry - in Portuguese Getting started with Content Registration: Portuguese Webinar Introduction to Participation Reports webinar in Portuguese Introduction to Similarity Check webinar - in Portuguese Reference Linking \u0026amp; Cited By webinar - in Portuguese Registering content and adding to your Crossref metadata - in Portuguese Melhores Práticas para Registro de Conteúdo/Crossref Content Registration in Brazilian Portuguese Introduction to Crossmark/Crossmark: O que é e como usar Webinars in Russian Crossref and OJS - in Russian Recording Introduction to Crossmark - in Russian Slides, Recording Understanding Crossref reports - in Russian Slides, Recording Crossref LIVE, in Russian: Value \u0026amp; Use of Metadata Slides, Recording Webinars in Spanish Introduction to Crossref and Content Registration - in Spanish Reference linking and Cited-by - in Spanish Registro y actualización de contenido en Crossref / Register and update content in Crossref Crossref ‘Similarity Check’, en español Seminario web de Informes de Participación / Participation Reports Webinar, in Spanish Crossmark (en español Webinars in Turkish Introduction to Crossref: Turkish Seminar Crossref İçerik Kaydı Webinarı, Türkçe | Content Registration at Crossref , Turkish Webinar Webinars in Ukrainian Participation Reports webinar in Ukrainian Other Asia Pacific community webinar Beyond OpenURL: Making the most of Crossref metadata Crossref and DataCite joint data citation webinar Getting started with books at Crossref Introduction to ROR Maintaining your metadata Participation Reports webinar Preprints and scholarly infrastructure Proposed schema changes - have your say Using ORCID in publishing workflows Where does publisher metadata go and how is it used?", "content": "Our regular webinars are a great way to find out more about the various Crossref services.\nWe are now doing face-to-face meetings online. You can find our virtual events on our events page.. 
Look back at some recent recordings such as an update on Crossmark, the Funder Registry, getting started with Content Registration, maintaining your metadata, or the latest on Similarity Check.\nRecordings of recent webinars Our getting started series Get started with Content Registration Slides, Recording Get started with Reference Linking Slides, Recording Getting started as a new Crossref member Slides, Recording Getting started with looking up metadata Slides, Recording Our how-to webinar series Crossmark how-to Crossref Cited-by how-to Participation Reports The ins and outs of our Admin Tool About our services Content Registration maintaining metadata Crossmark update Crossref Cited-by Funding data and the Funder Registry Introduction to Crossmark Introduction to Similarity Check Research infrastructure with and for funders Similarity Check update Similarity Check Members update Webinars in Arabic Getting Started with Content Registration - in Arabic Introduction to Crossref - in Arabic Reference Linking \u0026amp; Cited-By - in Arabic ندوه عن كيفية استخدام كروس مارك باللغة العربية | Crossmark How-To Arabic webinar Webinars in Indonesian Crossref LIVE Indonesia webinar series: Introduction to Crossref, Content Registration, The Value of Crossref metadata - July 13 - 15 - Online Recordings Webinars in Portuguese Funder Registry - in Portuguese Getting started with Content Registration: Portuguese Webinar Introduction to Participation Reports webinar in Portuguese Introduction to Similarity Check webinar - in Portuguese Reference Linking \u0026amp; Cited By webinar - in Portuguese Registering content and adding to your Crossref metadata - in Portuguese Melhores Práticas para Registro de Conteúdo/Crossref Content Registration in Brazilian Portuguese Introduction to Crossmark/Crossmark: O que é e como usar Webinars in Russian Crossref and OJS - in Russian Recording Introduction to Crossmark - in Russian Slides, Recording Understanding Crossref reports - in Russian Slides, Recording Crossref LIVE, in Russian: Value \u0026amp; Use of Metadata Slides, Recording Webinars in Spanish Introduction to Crossref and Content Registration - in Spanish Reference linking and Cited-by - in Spanish Registro y actualización de contenido en Crossref / Register and update content in Crossref Crossref ‘Similarity Check’, en español Seminario web de Informes de Participación / Participation Reports Webinar, in Spanish Crossmark (en español Webinars in Turkish Introduction to Crossref: Turkish Seminar Crossref İçerik Kaydı Webinarı, Türkçe | Content Registration at Crossref , Turkish Webinar Webinars in Ukrainian Participation Reports webinar in Ukrainian Other Asia Pacific community webinar Beyond OpenURL: Making the most of Crossref metadata Crossref and DataCite joint data citation webinar Getting started with books at Crossref Introduction to ROR Maintaining your metadata Participation Reports webinar Preprints and scholarly infrastructure Proposed schema changes - have your say Using ORCID in publishing workflows Where does publisher metadata go and how is it used? Not just identifiers: why Crossref DOIs are important - Slides, Recording How to manage your metadata with Crossref Finding your way with Crossref: getting started \u0026amp; additional services SE Asia webinar series OpenCon Oxford 2020 Making the world a PIDder place: it’s up to all of us! 
(Co-hosted by DataCite, Crossref, ORCID \u0026amp; ROR) - Sep 22, 2021 05:00 PM in Amsterdam, Berlin, Rome, Stockholm, Vienna If you’re interested in viewing recordings of past LIVE events, check out our YouTube channel.\nCrossmark (en español) September 30, 2021\nRecording\nSeminario web de Informes de Participación / Participation Reports Webinar, in Spanish Webinar Recording\nIntroduction to Crossmark/Crossmark: O que é e como usar Webinar Recording\nCrossref ‘Similarity Check’, en español Webinar Recording\nOpenCon Oxford 2020 Webinar Recording\nCrossref İçerik Kaydı Webinarı, Türkçe | Content Registration at Crossref , Turkish Webinar Webinar slides\nWebinar Recording\nHow to manage your metadata with Crossref Webinar slides\nWebinar Recording\nFinding your way with Crossref: getting started \u0026amp; additional services Webinar slides\nWebinar Recording\nMelhores Práticas para Registro de Conteúdo/Crossref Content Registration in Brazilian Portuguese Date: Wednesday, October 7, 2020\nWebinar slides\nWebinar Recording\nRegistro y actualización de contenido en Crossref / Register and update content in Crossref Date: Thursday, October 1, 2020\nWebinar slides\nWebinar Recording\nندوه عن كيفية استخدام كروس مارك باللغة العربية | Crossmark How-To Arabic Date: Tuesday, September 15, 2020\nWebinar slides\nWebinar Recording\nGetting started with books at Crossref Date: Wednesday, July 22, 2020\nWebinar slides\nWebinar Recording\nIntroduction to Similarity Check Date: Thursday, April 30, 2020 Speaker: Vanessa Fairhurst, Susan Collins Webinar Slides\nWebinar Recording\nIntroduction to ROR Date: Wednesday, April 29, 2020 Webinar Recording\nThe ins and outs of our Admin tool Date: Thursday, March 5, 2020 Speakers: Isaac Farley, Paul Davis, Kathleen Luschek Webinar Recording\nParticipation Reports webinar Date: Wednesday, October 7, 2020 Speakers: Anna Tolwinska\nWebinar Slides\nWebinar Recording\nВебінар “Інструмент Participation Reports” (Українською мовою) - Participation Reports webinar in Ukrainian Date: Tuesday, August 11, 2020 Speakers: Anna Danilova and Anna Tolwinska\nWebinar Slides\nWebinar Recording\nIntroduction to Participation Reports webinar in Portuguese Date: Wednesday, October 30, 2019 Speakers: Rachael Lammey, Edilson Damasio\nWebinar Slides\nWebinar Recording\nProposed schema changes - have your say Date: Thursday, January 2, 2020 Speakers: Patrica Feeney\nWebinar Slides\nWebinar Recording\nUsing ORCID in publishing workflows Date: Monday, September 16, 2019 Speakers: Estelle Cheng / Rachael Lammey\nWebinar Recording\nResearch infrastructure with and for funders Date: Thursday, September 6, 2019 Speakers: Josh Brown / Rachael Lammey\nWebinar Recording\nUnderstanding Crossref Reports - presented in Russian Date: Thursday, April 18, 2019 Speakers: Rachael Lammey / Andrii Zolkover\nWebinar Slides\nWebinar Recording\nReference Linking \u0026amp; Cited By webinar - in Portuguese Date: Tuesday, April 16, 2019 Speakers: Edilson Damasio / Rachael Lammey\nWebinar Recording\nCrossref and DataCite joint data citation webinar Date: Monday, Feb 4, 2019\nSpeakers: Rachael Lammey, Helena Cousijn, Patricia Feeney, Robin Dasler\nWebinar Slides\nWebinar Recording\nIntroduction to Crossmark - in Russian Date: Thursday, December 6, 2018 Speakers: Maxim Mitrofanov\nWebinar Slides\nWebinar Recording\nRegistering content and adding to your Crossref metadata - in Portuguese Date: Monday, November 26, 2018 Speakers: Edilson Damasio / Rachael Lammey\nWebinar Slides\nWebinar Recording\nCrossref and OJS - 
presented in Russian Date: Thursday, November 22, 2018 Speakers: Rachael Lammey / Andrii Zolkover / Vitaliy Bezsheiko\nWebinar Recording\nReference Linking \u0026amp; Cited-By - in Arabic Date: Thursday, November 8, 2018 Speakers: Mohamad Mostafa / Vanessa Fairhurst / Rachael Lammey\nWebinar Slides\nWebinar Recording\nIntroduction to Reference Linking and Cited-by webinar - in Spanish Date: Monday, November 7, 2018 Speakers: Susan Collins / Vanessa Fairhurst / Arley Soto\nWebinar Slides\nWebinar Recording\nIntroduction to Crossref and Content Registration - in Spanish Date: Tuesday, October 24, 2018\nSpeakers: Susan Collins / Vanessa Fairhurst / Arley Soto\nWebinar Slides\nWebinar Recording\nWhere does publisher metadata go and how is it used? Date: Tuesday, September 11, 2018 Speakers: Laura J. Wilkinson, Anna Tolwinska, Stephanie Dawson, and Pierre Mounier\nWebinar Slides: Laura Wilinson\nWebinar Slides: Anna Tolwinska\nWebinar Slides: Stephanie Dawson\nWebinar Slides: Pierre Mounier\nWebinar Recording\nGetting Started with Content Registration - in Arabic Date: Monday, September 17, 2018 Speakers: Mohamad Mostafa/Rachael Lammey/Vanessa Fairhurst\nWebinar Slides\nWebinar Recording\nIntroduction to Crossref - in Arabic Date: Monday, August 6, 2018 Speaker: Rachael Lammey/Mohamad Mostafa\nWebinar recording Getting started as a new Crossref member Date: Thursday, May 24, 2018 Speaker: Rachael Lammey\nWebinar Slides\nWebinar Recording\nReference Linking Date: Date: Wednesday, May 23, 2018\nSpeaker: Patricia Feeney\nWebinar Slides\nWebinar Recording\nCrossmark how-to Date: Tuesday, May 15, 2018\nSpeaker: Kirsty Meddings\nWebinar Slides\nWebinar Recording\nMaintaining your metadata Date: Tuesday, April 24, 2018\nSpeaker: Patricia Feeney\nWebinar Slides\nVimeo recording\nWebinar Recording\nIntroduction to Similarity Check - in Portuguese Date: Tuesday, April 10, 2018 Speaker: Rachael Lammey/Edilson Damasio Webinar Slides\nQ \u0026amp; A: http://bit.ly/simcheck-portuguese\nWebinar Recording\nGetting started with looking up metadata Date: Thursday, March 8, 2018\nSpeaker: Patricia Feeney\nWebinar Slides\nWebinar Recording\nGetting started with Content Registration: Portuguese Webinar Date: Tuesday, September 5, 2017 Speaker: Rachael Lammey/Edilson Damasio\nWebinar Slides\nWebinar Recording\nCrossref Cited-by how-to Date: Tuesday, June 13, 2017\nSpeaker: Patrica Feeney\nWebinar Slides\nWebinar Recording\nBeyond OpenURL: Making the most of Crossref metadata Date: Wednesday, July 12, 2017 Speaker: Patricia Feeney\nWebinar Slides\nWebinar Recording\nCrossref Cited-by Date: Thursday, June 8. 2017\nSpeaker: Anna Tolwinska\nWebinar Slides\nWebinar Recording\nFunding data \u0026amp; the Funder Registry Date: Tuesday, April 4, 2017\nSpeaker: Patricia Feeney\nWebinar Slides\nWebinar Recording\nPreprints and scholarly infrastructure Date: Monday, January 30, 2017\nSpeaker: Rachael Lammey\nWebinar Slides\nWebinar Recording\nCrossmark update Date: Thursday, February 23, 2017\nSpeaker: Kirsty Meddings\nWebinar Slides\nWebinar Recording\nGet Started with Content Registration Date: Tuesday, October 17, 2017\nSpeaker: Patricia Feeney\nWebinar Slides\nWebinar Recording\nIntroduction to Crossmark Date: Tuesday, Nov 21, 2017\nSpeaker: Kirsty Meddings Webinar Slides\nWebinar Recording\nIntroduction to Crossref: Turkish Seminar Date: Thursday, Nov 2, 2017\nSpeaker: Rachael Lammey/Prof. Dr. 
Serkan Eryilmaz Translation: Dogan Kusmus\nWebinar Recording\nSimilarity Check update Date: Tuesday, September 20, 2016\nSpeaker: Madeleine Watson\nWebinar Slides\nFunder Registry - in Portuguese Date: Monday, September 26, 2016\nSpeaker: Rachael Lammey\nWebinar Slides\nContent Registration maintaining metadata Date: Tuesday, May 17, 2017\nSpeaker: Rachael Lammey\nWebinar Slides\nWebinar Recording Asia Pacific community webinar Date: Thursday, December 14, 2016\nWebinar Slides\nCrossref Similarity Check members update Date: Thursday, March 2\nSpeaker: Gareth Malcolm, Turnitin\nWebinar Recording (must register first to view)\nIf you have a question or would like us to hold a webinar on another topic, please contact our outreach team with your ideas.\n", "headings": ["Recordings of recent webinars","Our getting started series","Our how-to webinar series","About our services","Webinars in Arabic","Webinars in Indonesian","Webinars in Portuguese","Webinars in Russian","Webinars in Spanish","Webinars in Turkish","Webinars in Ukrainian","Other","Crossmark (en español)","Seminario web de Informes de Participación / Participation Reports Webinar, in Spanish","Introduction to Crossmark/Crossmark: O que é e como usar","Crossref ‘Similarity Check’, en español","OpenCon Oxford 2020","Crossref İçerik Kaydı Webinarı, Türkçe | Content Registration at Crossref , Turkish Webinar","How to manage your metadata with Crossref","Finding your way with Crossref: getting started \u0026amp; additional services","Melhores Práticas para Registro de Conteúdo/Crossref Content Registration in Brazilian Portuguese","Registro y actualización de contenido en Crossref / Register and update content in Crossref","ندوه عن كيفية استخدام كروس مارك باللغة العربية | Crossmark How-To Arabic","Getting started with books at Crossref","Introduction to Similarity Check","Introduction to ROR","The ins and outs of our Admin tool","Participation Reports webinar","Вебінар “Інструмент Participation Reports” (Українською мовою) - Participation Reports webinar in Ukrainian","Introduction to Participation Reports webinar in Portuguese","Proposed schema changes - have your say","Using ORCID in publishing workflows","Research infrastructure with and for funders","Understanding Crossref Reports - presented in Russian","Reference Linking \u0026amp; Cited By webinar - in Portuguese","Crossref and DataCite joint data citation webinar","Introduction to Crossmark - in Russian","Registering content and adding to your Crossref metadata - in Portuguese","Crossref and OJS - presented in Russian","Reference Linking \u0026amp; Cited-By - in Arabic","Introduction to Reference Linking and Cited-by webinar - in Spanish","Introduction to Crossref and Content Registration - in Spanish","Where does publisher metadata go and how is it used?","Getting Started with Content Registration - in Arabic","Introduction to Crossref - in Arabic","Getting started as a new Crossref member","Reference Linking","Crossmark how-to","Maintaining your metadata","Introduction to Similarity Check - in Portuguese","Getting started with looking up metadata","Getting started with Content Registration: Portuguese Webinar","Crossref Cited-by how-to","Beyond OpenURL: Making the most of Crossref metadata","Crossref Cited-by","Funding data \u0026amp; the Funder Registry","Preprints and scholarly infrastructure","Crossmark update","Get Started with Content Registration","Introduction to Crossmark","Introduction to Crossref: Turkish Seminar","Similarity Check update","Funder Registry - in 
Portuguese","Content Registration maintaining metadata","Asia Pacific community webinar","Crossref Similarity Check members update"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/librarians/", "title": "For librarians", "subtitle":"", "rank": 5, "lastmod": "2017-03-02", "lastmod_ts": 1488412800, "section": "Get involved", "tags": [], "description": "Libraries and Crossref are a winning combination. Our shared goal is to improve discoverability of content for researchers. For our part we look after our members\u0026rsquo; metadata and run a registry of persistent links. We also offer services that help systems and people to make connections.\nWe currently look after over 160,104,382 records from theses, dissertations, preprints, grants, and reports\u0026ndash; through to journal articles and books/book chapters. Enhance your metadata and connect your discovery and linking services with these records, they\u0026rsquo;re all available through open APIs and [search] (https://search.", "content": "Libraries and Crossref are a winning combination. Our shared goal is to improve discoverability of content for researchers. For our part we look after our members\u0026rsquo; metadata and run a registry of persistent links. We also offer services that help systems and people to make connections.\nWe currently look after over 160,104,382 records from theses, dissertations, preprints, grants, and reports\u0026ndash; through to journal articles and books/book chapters. Enhance your metadata and connect your discovery and linking services with these records, they\u0026rsquo;re all available through open APIs and [search] (https://0-search-crossref-org.libus.csd.mu.edu/).\nRetrieve metadata into your library discovery system using our OpenURL service If you are a librarian and you need to use OpenURL with your library link resolver e.g. Alma, an email address should be supplied in queries that the link resolver sends to Crossref. This will be configured in your link resolver. You can find more information in our OpenURL documentation.\nThe metadata we take in Our members register records with us including as much metadata as possible, they then use each others\u0026rsquo; metadata to link their references and commit to maintaining these links for the long-term.\nIncreasingly we are being asked to take in other scholarly outputs such as grants, peer review reports, videos, blogs, and other. Watch our blog for news of these additions or sign up to our newsletter via the link in the footer of this website.\nOther ways to get metadata One of our responsibilities is to make metadata available to the systems and people who need it. The benefits to libraries in using our metadata are that you can ensure persistent links, which lead to increased discoverability of your resources. And it also means you don\u0026rsquo;t have to sign individual linking agreements with each publisher or keep track of their different ways of linking.\nWe have a number of metadata retrieval options, in addition to OpenURL. The most visible is search.crossref.org (for humans!) which uses our REST API (for machines!). Both interfaces are open and free to use without registering.\nFrom search.crossref.org you can search for a DOI, an article, an author, an ORCID iD, etc. 
You can search right from our homepage on the second tab \u0026lsquo;search metadata\u0026rsquo; or go directly to search.crossref.org.\nWe have a browsable title list of all titles registered with us, by journal, book series, or conference proceeding.\nLibrary publishers A growing number of libraries are publishers themselves. Many of the approximately 180 members that join every month are, for example, a scholar-publisher or a library-publisher. If you\u0026rsquo;d like to register content and contribute metadata please take a look at the member terms and if you can meet them, apply to join.\nContact our outreach team to set up an account or ask any questions. Technical support is also available from our metadata experts.\n", "headings": ["Retrieve metadata into your library discovery system using our OpenURL service","The metadata we take in","Other ways to get metadata","Library publishers"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-04-19-community-engagement-manager/", "title": "Community Engagement Manager (editorial)", "subtitle":"", "rank": 1, "lastmod": "2023-04-19", "lastmod_ts": 1681862400, "section": "Jobs", "tags": [], "description": "Applications for this position closed May 12, 2023. Do you want to help improve research communications in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our new Community Engagement Manager for the editorial community.\nLocation: Remote and global (with regular working in European timezones) Salary: Approx. EUR 58,000-70,000 or local equivalent, depending on experience.", "content": " Applications for this position closed May 12, 2023. Do you want to help improve research communications in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our new Community Engagement Manager for the editorial community.\nLocation: Remote and global (with regular working in European timezones) Salary: Approx. EUR 58,000-70,000 or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local benchmarking will take place before final offer. Reports to: Head of Community Engagement and Communications. See org chart and team.\nApplication timeline: Advertise in April, interviews in May, and offer by end of May/start of June.\nThe organisations that make up the Crossref community are involved in documenting the process of scholarship and the progress of knowledge. We provide infrastructure to curate, share, and preserve metadata, which is information that underpins and describes all research activities (such as funding, authorship, dissemination, attention, etc., and relationships between these activities).\nIncreasingly, the community is concerned with matters of research integrity, and Crossref plays a central role with several tools and services to help record and track corrections and retractions and identify issues such as plagiarism, along with other trust indicators that metadata can provide.\nAs the scholarly communications landscape is dynamically changing, the Community Engagement Manager’s key responsibility is to engage with the global community of scholarly editors, working with publishers and partners like EASE and CSE. You will help the community leverage metadata to assert the integrity of the scholarly record. 
This is a new role, responding to a growing need identified through recent consultations with the community on these topics.\nKey responsibilities Create opportunities to engage with editors regarding the integrity of the scholarly record, and develop a programme to sustain collaboration and devise activities and resources that help equip, mobilise and empower editors to collect and leverage rich metadata Build relationships with scientific editorial groups such as EASE (Europe), ACSE (Asia), and CSE (US) and support their communities, bringing insights to shape our development and priorities Identify and create opportunities to listen to the sentiment and feedback of the Crossref community, sharing community insights with colleagues Represent Crossref and use the role to bring people together, attending and speaking at relevant industry events, online and in person Build and manage relationships with community partners and collaborators worldwide to help progress Crossref’s mission Create content, such as writing articles and blogs and creating slides and diagrams Contribute to other outreach and communications activities The role is based within the Community Engagement and Communications team. We work collaboratively across a variety of projects and programmes. We adopt an approachable, community-appropriate tone and style in our communications. We’re looking to re-engage with our community through face-to-face opportunities as well as online, so the post-holder will have their share of travel (in line with our latest thinking on travel and sustainability).\nOur primary aim is to engage colleagues from the member organisations and other stakeholders to be actively involved in documenting scholarly progress and making it transparent. This contributes to co-creating a robust research nexus. As part of the wider Outreach department at Crossref, we seek to encourage wider adoption and development of best practices in scholarly publishing and communication with regard to metadata and the permanence of the scholarly record. Colleagues across the organisation are helpful, easy-going and supportive, so if you’re open-minded and ready to work as part of the team and across different teams, you will fit right in. Watch the recording of our recent Annual Meeting to learn more about the current conversations in our community and explore our blog for a series of articles concerning the integrity of the scholarly record.\nAbout you As scientific community engagement is an emerging profession, practical experience in this area is more important to us than traditional qualifications.
It’s best if you can demonstrate that you have most of these characteristics:\nCollaborative attitude Demonstrable experience working within the scholarly editorial community Awareness of current trends in academic culture and scholarly communications Curiosity to explore complex concepts and to learn new skills and perspectives Ability to translate complex ideas into accessible narratives in English Experience of community building and management and/or of planning, executing and evaluating participatory initiatives Demonstrable skills in group facilitation and stakeholder relationships management Track record of programme development and improvement, working to budget Confidence in public speaking in-person and online, including delivery of webinars/workshops Event and project management experience Tried and tested strategies for ensuring that your engagement programs are equitable, diverse and inclusive It would be a plus if you also have any of the following: Understanding of matters concerning metadata Experience of working in global or multicultural settings Ability to communicate in languages other than English About Crossref Crossref is a non-profit membership organisation that exists to make scholarly communications better. We make research objects easy to find, cite, link, assess, and reuse. We’re passionate about providing open foundational infrastructure for the scholarly communications ecosystem - and we’re continuously evolving our tools and services in response to emerging needs.\nCrossref is, at its core, a community organisation with 18,000 members across 150 countries. We work with the community to prototype and co-create solutions for broad benefit, and we’re committed to lowering barriers to global participation in the research enterprise. We’re funded by members and subscribers, and we forge deep collaborations with many like-minded partners, especially those who are equally as committed to the POSI Principles.\nWhat it’s like working at Crossref We’re about 45 staff and ‘remote-first’. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe actively support ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nThinking of applying? We encourage applications from excellent candidates wherever you might be in the world, especially from people with backgrounds historically under-represented in research and scholarly communications. Our team is fully remote and distributed across time zones and continents. This role will require regular work in European time zones. Our main working language is English, but there are many opportunities in this job to use other tongues if you’re able. If anything here is unclear, please contact Kora Korzec, the hiring manager, on kora@crossref.org.\nPlease apply via this form which allows us to sort your application materials into neat folders for a faster review. 
One of the best ways of offering evidence of your suitability within the cover letter is with an example of a relevant project you’re particularly proud of – we would particularly welcome mentions of your work with scholarly editors. If possible, we’d also love to see an example of content you’ve created – a link to a recording of your talk, blog post, infographic, or something else. There is space to share documents and links within the application form.\nLastly, if you don’t meet the majority of the criteria we listed here, but are confident you’d be natural in delivering the key responsibilities of the role, we encourage your interest and would still like to hear what strengths you would bring.\nWe aim to start reviewing applications on May 12th. Please strive to send us your documents by then.\nThe role will report to Kora Korzec, Head of Community Engagement and Communications at Crossref, and she will review all applications along with Michelle Cancel, our HR Manager, and Ginny Hendricks, Director of Member \u0026amp; Community Outreach.\nWe intend to invite selected candidates to a brief initial call to discuss the role as soon as possible following an initial review. Following those, shortlisted candidates will be invited to an interview taking place in May. The interview will include some exercises you’ll have a chance to prepare for. All interviews will be held remotely on Zoom.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["Key responsibilities","About you","About Crossref","What it’s like working at Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/forum/", "title": "Community forum", "subtitle":"", "rank": 3, "lastmod": "2021-01-20", "lastmod_ts": 1611100800, "section": "Get involved", "tags": [], "description": "Community is at the very core of what we do and who we are. At Crossref we work with a diverse, global community of publishers, libraries, government agencies, funders, researchers, universities, ambassadors, and more from over 140 countries. We are also actively part of the larger scholarly research community, which includes other persistent identifier organizations, metadata users and aggregators, open science initiatives, and others with shared aims and values.", "content": " Community is at the very core of what we do and who we are. At Crossref we work with a diverse, global community of publishers, libraries, government agencies, funders, researchers, universities, ambassadors, and more from over 140 countries. 
We are also actively part of the larger scholarly research community, which includes other persistent identifier organizations, metadata users and aggregators, open science initiatives, and others with shared aims and values.\nWe strive to work more openly and collaboratively with you, our community, and so we have established the Crossref Community Forum - community.crossref.org - using the open source discussion platform Discourse. The forum complements our existing support process by enabling collaborative problem solving: you can post questions to be answered by Crossref staff or other community members, and share your expertise and experiences across various time zones and languages. Our goal is that you, the Crossref community, will own this space. This is a platform for you to connect and build relationships with others working in scholarly communications, to advance your work with us and shape the future of scholarly infrastructure.\nWhy join the Crossref Community Forum? Becoming a member of the Crossref Community Forum allows you to connect and interact with our vast network of members from across the globe.\nShare issues that you need some help resolving, post a question to the forum in your native language and get help from another community member. Easily navigate to FAQs and related support documentation to answer your own questions. Share what activities or projects you are working on and get input from others. Give us feedback on our plans and help us shape future developments at Crossref. Test out new tools and services. Find out about upcoming events and webinars, and share any you think are of interest to the community. Help us identify better ways of working together through Crossref and co-create new materials and projects. Make connections with other members, learn what others are working on, and identify opportunities for collaboration. Feel more actively involved in the Crossref community. How to get involved Simply head over to community.crossref.org to set up an account. There’s a useful How-To guide available in our welcome post, as well as some Community Guidelines all our members should follow.\nSome general tips are:\nTake a couple of minutes to personalise your profile - add a picture and a bit of text about yourself - as well as making sure your preferences such as your email address, language, and notification settings are correct. ‘Track’ or ‘watch’ any specific categories you are interested in so you don’t miss out on new posts. Before creating a new post, first look and see if anyone has already raised a similar topic; the answer you are looking for could already be there, or you could add to an existing conversation. When making a new post, make sure it’s in the relevant category, and provide as much information as you can so that other members of the community are able to fully understand and help you. And of course treat the forum and its members with respect. Don’t post spam or other non-relevant content and make sure that any language, links and images you post are professional. If you see a problem, flag it and we can take the appropriate action. For more information on how to use the Crossref Community, please refer to our Code of Conduct and our Terms of Service.
You can also give us feedback or ask us any questions via the forum itself or by emailing us at feedback@crossref.org.\n", "headings": ["Why join the Crossref Community Forum?","How to get involved"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/research-administrators/", "title": "For research managers", "subtitle":"", "rank": 5, "lastmod": "2017-01-10", "lastmod_ts": 1484006400, "section": "Get involved", "tags": [], "description": "Crossref helps research institutions with the management, analyses, and reporting of their research activities. Through our open metadata, institutions can:\nSupport researchers in identifying possible collaborators, partner organizations, related research and funding opportunities Locate, enhance, and analyze information about outcomes of previous projects to show impact and influence Help researchers demonstrate compliance with funder policies Enhance assessment activities by streamlining reporting on a range of published outputs Verify and group outputs related to particular grants, projects, and people to better understand the research profile of their organisation.", "content": "Crossref helps research institutions with the management, analyses, and reporting of their research activities. Through our open metadata, institutions can:\nSupport researchers in identifying possible collaborators, partner organizations, related research and funding opportunities Locate, enhance, and analyze information about outcomes of previous projects to show impact and influence Help researchers demonstrate compliance with funder policies Enhance assessment activities by streamlining reporting on a range of published outputs Verify and group outputs related to particular grants, projects, and people to better understand the research profile of their organisation. More than just bibliographic metadata The basic bibliographic metadata we collect from our members is really useful in helping the identification and discovery of content.\nHowever, there’s lots more to the metadata than that. We collect additional information that helps track and link research outputs to researchers, funders and institutions. This includes:\nInformation on who funded the research The ORCID iDs of the authors, and we can push the articles to their ORCID records automatically License information and embargo dates Author affiliation information (soon to be supported by the collection of ROR ids) Links to full text, or different versions of content e.g. accepted manuscript, version of records Third party archive arrangements Abstracts Acceptance dates We allow publishers to register DOIs upon manuscript acceptance, even if the manuscript has not been made available online. This will help notify you and your institution of impending publications by your researchers as early in the process as possible, and track them through to publication and beyond.\nUsing this information to track research outputs This information is freely available for you to use and integrate into your own systems via our REST API and search interfaces. We have a detailed guide on what this information looks like in our metadata to help you search, facet or filter the metadata.\nInterested in what others are doing? 
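To make the REST API guidance above concrete, here is a minimal sketch, not taken from the page itself, of the kind of query it describes. It uses Python with the works route and its documented filter, select, rows, and mailto parameters; the email address is a placeholder you would replace with your own.

```python
# Minimal sketch: fetch a few recent journal articles that carry funder and ORCID
# metadata, and print the funder names registered for each. The filter and select
# values are documented REST API parameters; the email is a placeholder.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "filter": "type:journal-article,has-orcid:true,has-funder:true,from-pub-date:2024-01-01",
        "select": "DOI,title,author,funder",
        "rows": 5,
        "mailto": "you@example.org",  # identifies your script for the "polite" pool
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["(no title)"])[0]
    funders = [f.get("name") for f in item.get("funder", [])]
    print(item["DOI"], "|", title, "|", funders)
```

Where members have deposited them, the same records also carry ORCID iDs, license links, and affiliation information, so the output can be joined to institutional systems in the ways described above.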
Read our case study on how the National Library of Sweden is using our REST API in two of their projects, Open APC Sweden and in their local analysis database for publication statistics used in negotiations with publishers.\nPlease get in touch if you have any questions or suggestions: we’d be happy to provide help, advice or update you on publisher uptake and our future plans.\n", "headings": ["More than just bibliographic metadata","Using this information to track research outputs"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-10-31-contract-technical-support/", "title": "Technical Support Contractor", "subtitle":"", "rank": 1, "lastmod": "2024-10-31", "lastmod_ts": 1730332800, "section": "Jobs", "tags": [], "description": "Applications for this position closed December 5, 2024. ## Request for services: contract technical support\rCome and work with us as an independent Technical Support Contractor. It’ll be fun!\nLocation: Remote and global\nAbout the contractor role The Technical Support Contractor will work closely with our Membership team, part of Crossref’s Programs team, a distributed team with members across Africa, Asia, Europe, and the US. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways.", "content": " Applications for this position closed December 5, 2024. ## Request for services: contract technical support\rCome and work with us as an independent Technical Support Contractor. It’ll be fun!\nLocation: Remote and global\nAbout the contractor role The Technical Support Contractor will work closely with our Membership team, part of Crossref’s Programs team, a distributed team with members across Africa, Asia, Europe, and the US. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand, in multiple languages. We’re looking for a Technical Support Contractor to provide front-line help to our international community of publishers, librarians, funders, researchers and developers on a range of services that help them deposit, find, link, cite, and assess scholarly content.\nWe’re looking for an independent contractor able to work remotely. There is no set schedule and contractors bill hours monthly.\nScope of work Replying to and solving community queries using the Zendesk support system. Using our various tools and APIs to find the answers to these queries, or pointing users to support materials that will help them. Working with colleagues on particularly tricky tickets, escalating as necessary. Working efficiently but also kindly and with empathy with our very diverse, global community. About the team You’ll be working closely with nine of the technical and membership support staff to provide support and guidance for people with a wide range of technical experience. You’ll help our community create and retrieve metadata records with tools ranging from simple user interfaces to robust APIs.\nAbout Crossref We’re a nonprofit membership organization that exists to make scholarly communications better. 
We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide— which we take a bit more seriously than ourselves. We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nHow to respond Responses should be submitted by December 31st, 2024:\nA statement of interest that includes:\nExamples of similar work (and/or your CV) References from previous work Hourly rate Please send your response, statement of interest, and resume to: jobs@crossref.org.\n", "headings": ["About the contractor role","Scope of work","About the team","About Crossref","How to respond"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2023-03-01-contract-technical-support/", "title": "Technical Support Contractor", "subtitle":"", "rank": 1, "lastmod": "2023-03-01", "lastmod_ts": 1677628800, "section": "Jobs", "tags": [], "description": "Request for services: contract technical support Come and work with us as an independent Technical Support Contractor. It’ll be fun!\nLocation: Remote\nAbout the contractor role The Technical Support Contractor will work closely with our Member Experience team, part of Crossref’s Outreach team, an eighteen-strong distributed team with members across Africa, Asia, Europe, and the US. We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways.", "content": "Request for services: contract technical support Come and work with us as an independent Technical Support Contractor. It’ll be fun!\nLocation: Remote\nAbout the contractor role The Technical Support Contractor will work closely with our Member Experience team, part of Crossref’s Outreach team, an eighteen-strong distributed team with members across Africa, Asia, Europe, and the US. 
We’re at the forefront of Crossref’s growth, building relationships with new communities in new markets in new ways. We’re aiming for a more open approach to having conversations with people all around the world - including within our growing community forum, which the right candidate will help us expand, in multiple languages. We’re looking for a Technical Support Contractor to provide front-line help to our international community of publishers, librarians, funders, researchers and developers on a range of services that help them deposit, find, link, cite, and assess scholarly content.\nWe’re looking for an independent contractor able to work remotely. There is no set schedule and contractors bill hours monthly.\nScope of work Replying to and solving community queries using the Zendesk support system. Using our various tools and APIs to find the answers to these queries, or pointing users to support materials that will help them. Working with colleagues on particularly tricky tickets, escalating as necessary. Working efficiently but also kindly and with empathy with our very diverse, global community. About the team You’ll be working closely with nine other technical and membership support colleagues to provide support and guidance for people with a wide range of technical experience. You’ll help our community create and retrieve metadata records with tools ranging from simple user interfaces to robust APIs.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. It’s as simple—and as complicated—as that.\nWe’re a small but mighty group working with over 17,000 members from 146 countries, and we have thousands of tools and services relying on our metadata. We take our work seriously but usually not ourselves.\nHow to respond We\u0026rsquo;re currently looking for contract help, so responses are accepted on a rolling basis:\nA statement of interest that includes:\nExamples of similar work (and/or your CV) References from previous work Hourly rate Please send your response, statement of interest, and resume to: jobs@crossref.org.\n", "headings": ["Request for services: contract technical support","About the contractor role","Scope of work","About the team","About Crossref","How to respond"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/deprecated/", "title": "This tool is no longer available", "subtitle":"", "rank": 1, "lastmod": "2024-10-23", "lastmod_ts": 1729641600, "section": "", "tags": [], "description": "You may have been redirected here from a tool that no longer exists.\nFrom time to time we need to sunset tools or services. This might be because we\u0026rsquo;ve changed our policies, developed newer technologies to replace the old, perhaps low usage means maintenance costs are too high, or sometimes the need simply passes as the community changes.\nAn archive of deprecated tools is below.\n2024-Oct-15: Removing the XML journal list (AKA mddb.", "content": "You may have been redirected here from a tool that no longer exists.\nFrom time to time we need to sunset tools or services. 
This might be because we\u0026rsquo;ve changed our policies, developed newer technologies to replace the old, perhaps low usage means maintenance costs are too high, or sometimes the need simply passes as the community changes.\nAn archive of deprecated tools is below.\n2024-Oct-15: Removing the XML journal list (AKA mddb.xml) The XML journal list provided an XML-formatted list of journal titles registered with Crossref. The XML journal list had few users and was redundant with the functionality in the Browsable Title List. In the interest of consolidating our reporting tools, we have deprecated the XML journal list.\n2024-April-23: Subject codes removed from REST API Subject codes in the REST API have always been incomplete and unreliable for a variety of reasons explained in this blog post.\nWhile we evaluate replacement systems, we wanted to remove the misleading subject codes already in the system. While the data has been removed, we\u0026rsquo;ve left the features related to subject codes in place, so this is not a breaking change.\n2023-October-31: Shutting down piped queries Piped querying has been deprecated since before this deprecated services page was created. We are now shutting down the service for good.\nWe recommend the REST API for metadata retrieval.\n2023-April-13: Removing localized linking for link resolvers We used to offer localized linking for link resolvers which involved downloading a cookie. As most link resolvers use more modern methods for this, we have removed this outdated technology. There were only two institutions still using this, and they have been informed.\n2022-September-28: Removing reference visibility functionality from REST API Following the change in our reference distribution policy noted below, we have removed all reference visibility functionality from the REST API. This includes the reference-visibility filter, the reference-visibility and public-references fields available via the /members route, and open-references coverage calculations available via the /journals and /members routes. You can read about the board vote and the membership terms change. All the members that previously had limited or closed references have now been set to open.\n2022-June-06: Removing reports of members with open/closed references Since 2017 we hosted API generated tables of members with open and closed references here on our website. The board voted in March 2022 to remove the ability for members to limit the distribution of references, to be more in line with all other metadata which is default open. You can read about the board vote and the membership terms change. All the members that previously had limited or closed references have now been set to open.\n2022-June-01: Crossmark statistics For users of Crossmark, we provided a platform to access statistics about Crossmark usage, the number of status updates, and potential domain violations. The platform received very low usage and, after surveying users of the page, we decided that maintaining it is not beneficial to Crossref or our members. Other aspects of Crossmark are not affected.\n2022-January-12: Sunsetting Simple Text Query Upload The Simple Text Query Upload (STQ Upload) service allows users to upload a text file containing references, with the results emailed back to the user in an HTML file. STQ Upload is lightly used and redundant with the functionality in the Simple Text Query service. 
In the interest of consolidating and improving our reference matching services, we have deprecated STQ Upload and plan to retire it in August 2022.\n2021-December-10: Distributed Usage Logging (DUL) key registry moves to STM Solutions Through 2020, we re-evaluated the progress of Distributed Usage Logging (DUL) and how it fits with other Crossref services. While technically we had reached the stage of releasing a proof-of-concept service, it became clear to us that Crossref is not best placed to expand the project in the future and increase participation. At the November 2020 Crossref Board meeting a motion was passed “that the Crossref Board supports another organization’s taking ownership of the Distributed Usage Logging (“DUL”) initiative.” We have therefore been seeking other partners to take DUL forward.\nWe are delighted that STM Solutions has agreed to take on a crucial part of the infrastructure for DUL: maintaining a registry of public keys that can be used to authenticate messages. From the end of 2021, the registry will be fully transferred to STM and Crossref’s version of the registry will be removed in early 2022.\nCrossref will continue to collect DUL endpoints for individual works, as part of the metadata deposited by our members.\n2021-July-07: v1 Deposit API via OJS Crossref and PKP have collaborated for some time to help publishers using Open Journal Systems (OJS) to benefit from Crossref services.\nBefore 2019 (OJS 3.1.1 and older) OJS integrated with the dedicated v1 OJS deposit API. From 2019 (OJS 3.1.2 and higher) a newer, more reliable API (v2 deposit API) was made available and v1 was deprecated and unsupported. In July 2021 we turned off the v1 deposit API. For members who are still using an older version of OJS (OJS 3.1.1 and older): you can continue to export your XML and upload it to Crossref’s systems; or you could—and probably should—upgrade your OJS instance to a supported version that makes use of the v2 deposit API (OJS 3.1.2 or higher). 2020-November-24: Click-through Service for text and data mining The Click-through Service for text and data mining was a registry of additional TDM license agreements, posted by Crossref members, which researchers could review and accept and then use the API token provided when requesting full-text from the publisher.\nGiven the low take-up of the service by both publishers and researchers, its goals are no longer being met. Therefore we will retire the service on 31 December 2020. Until that date, it will still operate for the two publishers and various researchers who use it while they finish implementing their alternative plans. For more details on this, and our continuing support for text and data mining, please read the blog post Evolving our support for text-and-data mining.\nNote, Crossref will continue to collect member-supplied TDM licensing information in metadata for individual works, and researchers can continue to find this via the Crossref APIs. 2020-November-03: Guest service query accounts Crossref provides various interfaces for query services.\nThe Crossref Query Services interfaces are:\nOpenURL HTTPS Registered Crossref Members, Libraries and Affiliates are able to use these interfaces with their previously supplied system account credentials.\nFollowing a recent change, guest users no longer need to register for a free Guest Services Query account. You must include your email address in your queries. 
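As a sketch of what such a guest query looks like in practice, here is a minimal Python example of an OpenURL metadata lookup that identifies the caller by email. The pid, id, and noredirect parameters are the ones described in the OpenURL query documentation; the email address and DOI are placeholders.

```python
# Minimal sketch: a guest OpenURL lookup that supplies a contact email, as required.
# The email is only used to flag problem queries; the DOI here is a placeholder.
import requests

resp = requests.get(
    "https://doi.crossref.org/openurl",
    params={
        "pid": "you@example.org",      # your contact email address
        "id": "doi:10.5555/12345678",  # the DOI you want metadata for
        "noredirect": "true",          # return metadata XML rather than resolving the DOI
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:500])  # beginning of the returned Crossref metadata XML
```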
The purpose of requiring an email address is simply to monitor usage to balance system demand and to identify problems.\nThis brings querying our XML API and Open URL services in line with our REST API.\nNote: if you are not a member, you still need to supply your email address in the query; this is only used to contact you if there is a problem with your query. We always give advance warning so those using the services can prepare and transition away from using it. Please check for information about such changes on our community forum or blog, or subscribe to our newsletter.\nContact our support group with any questions.\n", "headings": ["2024-Oct-15: Removing the XML journal list (AKA mddb.xml)","2024-April-23: Subject codes removed from REST API","2023-October-31: Shutting down piped queries","2023-April-13: Removing localized linking for link resolvers","2022-September-28: Removing reference visibility functionality from REST API","2022-June-06: Removing reports of members with open/closed references","2022-June-01: Crossmark statistics","2022-January-12: Sunsetting Simple Text Query Upload","2021-December-10: Distributed Usage Logging (DUL) key registry moves to STM Solutions","2021-July-07: v1 Deposit API via OJS","2020-November-24: Click-through Service for text and data mining","2020-November-03: Guest service query accounts"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/crossmark/crossmark-stats-deprecation/", "title": "Crossmark statistics is no longer available", "subtitle":"", "rank": 5, "lastmod": "2021-09-20", "lastmod_ts": 1632096000, "section": "Find a service", "tags": [], "description": "Crossmark statistics is changing\u0026hellip; We are re-evaluating Crossmark statistics and looking for your input.\nWe didn\u0026rsquo;t include Crossmark statistics in our recent authentication rollout, and the process of adding it has led us to look again at the service. It has low usage, suggesting that it either isn\u0026rsquo;t sufficiently visible or has low utility. However, we know that there may be a small number of members who find it very valuable and we would like to know more about how it is being used.", "content": "Crossmark statistics is changing\u0026hellip; We are re-evaluating Crossmark statistics and looking for your input.\nWe didn\u0026rsquo;t include Crossmark statistics in our recent authentication rollout, and the process of adding it has led us to look again at the service. It has low usage, suggesting that it either isn\u0026rsquo;t sufficiently visible or has low utility. However, we know that there may be a small number of members who find it very valuable and we would like to know more about how it is being used.\nWe are therefore seeking your feedback. To contribute, please complete our user survey.\nFor further details about how we sunset services, see our deprecation page.\n", "headings": ["Crossmark statistics is changing\u0026hellip;"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/linked-clinical-trials/", "title": "Linked clinical trials working group", "subtitle":"", "rank": 1, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "Working groups", "tags": [], "description": "The purpose of this advisory group is to advise on linked clinical trials and support staff. 
The group comprises Crossref members and non-members, and is led by a Chair and a Crossref staff facilitator.\nGroup members This group is not currently active but was chaired by Daniel Shanahan, then at BioMed Central, and by Crossref\u0026rsquo;s Kirsty Meddings.\nHow the group works (and the guidelines) Members commit to attend all meetings by conference call, and may choose to send a named proxy if they are not available.", "content": "The purpose of this advisory group is to advise on linked clinical trials and support staff. The group comprises Crossref members and non-members, and is led by a Chair and a Crossref staff facilitator.\nGroup members This group is not currently active but was chaired by Daniel Shanahan, then at BioMed Central, and by Crossref\u0026rsquo;s Kirsty Meddings.\nHow the group works (and the guidelines) Members commit to attend all meetings by conference call, and may choose to send a named proxy if they are not available. Meeting notes will be circulated to all by the facilitator. The schedule of meetings is at the discretion of the chair and facilitator and may vary depending on whether there are relevant topics for discussion, but will not be more than one per quarter.\nWith the exception of Crossref staff, the group will be limited to one representative from each participating organization, unless particular agenda items or topics call for domain expertise from specific colleagues or departments. Advisory group members are, however, free to discuss the information shared during meetings with colleagues or any external party.\nPlease contact Bryan Vickery with any questions or to apply to join the advisory group.\n", "headings": ["Group members","How the group works (and the guidelines)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/events/the-shape-of-things-to-come/", "title": "The shape of things to come - Crossref community call", "subtitle":"", "rank": 6, "lastmod": "2024-03-20", "lastmod_ts": 1710892800, "section": "Webinars and events", "tags": [], "description": "We joined together to learn about our strategies for making Crossref sustainable and how we planned for the future. We also shared exciting updates on our initiatives and tools that enhance data management, improve accessibility, and ensure metadata is complete. 
Highlights included our work on preprint matching.\nHere\u0026rsquo;s what was on the agenda:\nExamined our strategic direction \u0026amp; achievements Resourced Crossref for future sustainability Revealed our 2024 Product Roadmap Discussed the metadata development pipeline Explored Crossref for Grants Explored the latest in preprint matching Webinar was held on Wednesday, March 8, 2024.\nSlides\nRecording\nSlides Poll questions\nQ\u0026amp;A report\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/distributed-usage-logging/", "title": "DUL Project working group", "subtitle":"", "rank": 1, "lastmod": "2021-11-24", "lastmod_ts": 1637712000, "section": "Working groups", "tags": [], "description": "The distributed usage logging (DUL) working group ran until 2020 and was the community group driving the distributed usage logging project.\nThe objectives of the DUL working group were to address the following:\nDefine a way for DOIs to advertise endpoints to which event data may be submitted, including a mechanism to specify the payload schemas that the endpoint accepts. Pilot the transmission of COUNTER-usage events from platforms providing direct access to full text to publishers responsible for that full text, using the above mechanism, in a secure manner Work out the \u0026ldquo;rules of the game\u0026rdquo; for the COUNTER use cases, including message semantics, responsibility for anti-gaming mechanism, etc.", "content": "The distributed usage logging (DUL) working group ran until 2020 and was the community group driving the distributed usage logging project.\nThe objectives of the DUL working group were to address the following:\nDefine a way for DOIs to advertise endpoints to which event data may be submitted, including a mechanism to specify the payload schemas that the endpoint accepts. Pilot the transmission of COUNTER-usage events from platforms providing direct access to full text to publishers responsible for that full text, using the above mechanism, in a secure manner Work out the \u0026ldquo;rules of the game\u0026rdquo; for the COUNTER use cases, including message semantics, responsibility for anti-gaming mechanism, etc. 
What we’re working on The working group has now retired.\nGroup members The group comprised some of our members as well as some third-party platforms that were actively interested in participating in a secure exchange of usage records.\nFacilitator: Martyn Rittman, Crossref\nEsther Heuver, Elsevier (Chair) Paul Dlug, American Physical Society Lorraine Estelle, Project COUNTER John Chodacki, California Digital Library Paul Needham, Cranfield University Johannes Buchmann, De Gruyter Nicko Goncharoff, Digital Science Oliver Pesch, EBSCO Ian Hayes, Atypon Tom Beyer, PubFactory Maciej Rymarz, Mendeley Robert McGrath, ReadCube Kimberly Tryka, National Institute of Standards and Technology John Connolly, Springer Nature Jo Cross, Taylor \u0026amp; Francis Clara Brown, United States Geological Survey Greg Hargrave, Wiley Stuart Maxwell, Scholarly iQ Aaron Wood, American Psychological Association Please contact Martyn Rittman with any questions.\n", "headings": ["What we’re working on","Group members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/project-dul/", "title": "Distributed usage logging collaboration", "subtitle":"", "rank": 1, "lastmod": "2020-11-24", "lastmod_ts": 1606176000, "section": "Get involved", "tags": [], "description": "Current status In November 2020, Crossref\u0026rsquo;s board decided to scale down our involvement in the distributed usage logging (DUL) initiative and sought a new lead organisation. In our view, others were better-placed to progress the work and increase participation, building on the proof-of-concept created.\nSince December 2021, STM Solutions has maintained the public key registry. Crossref will continue to support DUL endpoints included in our members\u0026rsquo; metadata.\nBackground Researchers are increasingly using “alternative” (non-publisher) platforms to store, access and share literature, e.", "content": "Current status In November 2020, Crossref\u0026rsquo;s board decided to scale down our involvement in the distributed usage logging (DUL) initiative and sought a new lead organisation. In our view, others were better-placed to progress the work and increase participation, building on the proof-of-concept created.\nSince December 2021, STM Solutions has maintained the public key registry. Crossref will continue to support DUL endpoints included in our members\u0026rsquo; metadata.\nBackground Researchers are increasingly using “alternative” (non-publisher) platforms to store, access and share literature, e.g. via:\nInstitutional and subject repositories Aggregator platforms (EBSCOhost, IngentaConnect) Researcher-oriented networking sites (e.g. Academia.edu, ResearchGate, Mendeley) Reading environments and tools (e.g. ReadCube, Utopia Docs) Use of content on these platforms is by researchers who also have access to the same content via institutional subscription agreements. However, publishers are unable to report this use in their COUNTER-compliant reports, because it does not occur on their own platforms. This means:\nPublishers are unable to demonstrate the full value of their content to library customers. They are also unable to provide authors with a full picture of usage of their articles Because use is distributed, institutions do not have a complete picture of usage The distributed usage logging collaboration was established as an R\u0026amp;D experimental initiative between COUNTER, Crossref members, and scholarly technology \u0026amp; service providers. 
The initiative, driven by the DUL working group, has explored and implemented private peer-to-peer channels for the secure exchange and processing of COUNTER-compliant private usage records from hosting platforms to publishers. All data provided back to the original publisher is anonymized, preserving individual user privacy.\nResources COUNTER’s Distributed Usage Logging stakeholder demand report COUNTER Code of Practice v5 compliance requirements for processing and reporting data from non-publisher usage sources DUL proof of concept reference implementation of the end-to-end transaction pipeline with validation credentials For general questions about this initiative, please contact Martyn Rittman.\n", "headings": ["Current status","Background","Resources"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/co-access/", "title": "Help with Co-access", "subtitle":"", "rank": 3, "lastmod": "2017-12-20", "lastmod_ts": 1513728000, "section": "", "tags": [], "description": "What is Co-access? What problem does Co-access solve? Who is Co-access for and how does it work? How much does Co-access cost? How do I participate in Co-access? What is the difference between Multiple Resolution and Co-access? Can Co-access be used for journal content DOIs too? Doesn’t Co-access violate the “uniqueness” rule? What about citation splitting? How does Co-access affect resolution reports? How are Co-access relationships represented in Crossref metadata?", "content": " What is Co-access? What problem does Co-access solve? Who is Co-access for and how does it work? How much does Co-access cost? How do I participate in Co-access? What is the difference between Multiple Resolution and Co-access? Can Co-access be used for journal content DOIs too? Doesn’t Co-access violate the “uniqueness” rule? What about citation splitting? How does Co-access affect resolution reports? How are Co-access relationships represented in Crossref metadata? How are Co-Access groups defined What if a deposit is made by a party not included in a Co-access agreement? I’m a book publisher, how do I know which aggregators have registered me for Co-access? How are Co-Access groups defined? Can I opt-out a single title in a Co-access group? Is Co-access a long term solution? What is Co-access? Co-access allows multiple Crossref members to register content and create DOIs for the same book content; both whole titles or individual chapters. This means that there can be multiple DOIs registered for the same book content. Prefixes must be pre-registered for Co-access in order to participate. Co-access does not require a primary depositor. For Co-access registered members, we’ll match records with near identical metadata (for example, matching ISBNs and titles) and enter them into a Co-access relationship where matches are found between participating members. This means that Co-access enables better linking between different Crossref DOIs and gives book publishers and aggregators of book content greater flexibility to host content within a timeframe and in a location which suits them best.\nWhat problem does Co-access solve? Many organizations can be involved in the hosting and distribution of books. Up until now, Crossref’s Multiple Resolution (MR) functionality has worked well with journals, where content may exist on the publisher’s site and also on a third-party hosting platform. In this case the content on the publisher site gets registered first and then the URLs for the content on the third-party sites can be added after this. 
Both the publisher and the third-party sites can use the same DOI. Unlike with journal content, book publishers often outsource their content hosting to multiple aggregators/platforms with none of them considered the primary site. Additionally, in some cases, the book publisher does not deposit any metadata and DOIs with Crossref at all. Multiple Resolution requires coordination between the primary publisher and the secondary content hosts which is too burdensome to be feasible between book aggregators - especially as there is often no primary depositor (i.e. the book publisher). Instead, publishers leave depositing to secondary content hosts. This means that MR functionality is unsuitable for this use case, as book aggregators have expressed the need for a process that allows independent transactions on the part of any secondary content hosts.\nWho is Co-access for and how does it work? Co-access is for any Crossref publisher or sponsoring publisher member who faces the challenge of assigning DOIs to book content that is distributed across a number of different platforms. Crossref members who aggregate book content on behalf of other publishers can contact Crossref to request that their prefix be added to the Co-access group for each of the book publisher members they work with.\nFor items from approved prefixes, Crossref\u0026rsquo;s system will automatically look to establish a Co-access match with any other DOI where near-identical bibliographic metadata is found. When matches are found, an interim page hosted by Crossref will be displayed to end users when any matched DOI URL is followed. Users can then interact with the Co-Access interim page to choose their preferred DOI to follow in order to access the content.\nHow much does Co-access cost? Each depositor is billed for the items they register. There are no additional fees to participate in Co-access.\nHow do I participate in Co-access? You must first be registered for Co-access in order to participate. Crossref members who host/aggregate book content on behalf of other Crossref book publisher members can register their prefix, and those of the book publishers they work with, by sending the details of their Co-access relationship to support@crossref.org.\nBefore contacting our support team, aggregators should ensure they have spoken with each book publisher they work with about Co-access deposits and have collected the following details:\nAggregator\u0026rsquo;s own Crossref membership information (i.e. depositor prefix/s)\nThe name, prefix/s and contact details of each Crossref book publisher member you work with\nAggregator\u0026rsquo;s company logo file\nThe logo file for each publisher\nWhat is the difference between Multiple Resolution and Co-access? Multiple Resolution is when a single DOI has multiple resolution URLs; for example, if the same content is available in different locations. When the DOI is resolved it goes to an interim landing page that shows the different resolution URLs. Co-access is for book content and is when multiple DOIs are assigned to the same book title or chapter; each DOI is deposited by a different member and each has only one resolution URL. The DOIs are grouped together and when any of the DOIs is resolved they go to an interim landing page showing all the DOIs and the resolution URLs.\nCan Co-access be used for journal content DOIs too? No, Co-access is only for book content. 
Journal publishers do not encounter the same distributed content issues that book publishers and their aggregators face. In the rare cases where journal content is distributed, members can use our existing Multiple Resolution functionality in order to add multiple hosting locations within a single Crossref deposit.\nDoesn’t Co-access violate the “uniqueness” rule? Yes, it does. While it is Crossref policy that only one DOI be created for a given work because Crossref DOIs are citation identifiers, we are enabling multiple DOIs to be assigned for the same book content in order to solve a very specific use case identified by our book publisher members and their aggregators. Crossref offers Co-access exclusively to book publishers to address the specific structural needs inherent to their production and distribution environment. See the response above in “What problem does Co-access solve?” and below in “What about citation splitting?”.\nWhat about citation splitting? The Crossref DOI is a citation identifier. This means that we identify content to enable accurate citation for scholarly content. This is different from other identifiers, like the ISBN, which are used to identify all the different formats - hardback, paperback, ePub. Therefore, a basic Crossref principle is that content, even if it’s available in different formats, should only have one Crossref DOI. For content that is part of Co-access, there will be multiple DOIs for the same content. This could mean that, where systems and services use the DOI to track citations, not all citations will be captured, since they are spread across multiple DOIs. In addition, for a service like Crossref Event Data (which collects post-publication events), having multiple DOIs for the same content makes it harder to track activity. Crossref DOIs are increasingly being used to track activity. So Co-access involves a tradeoff: increased flexibility at the cost of accurate citation and event counting.\nHow does Co-access affect resolution reports? The DOI that was clicked initially, before the interim landing page is displayed, is the DOI resolution that is captured in our reports. If a user selects an alternative DOI from the options presented on the Co-access landing page, we do not report this as a resolution.\nHow are Co-access relationships represented in Crossref metadata? To inspect the output metadata to see what DOIs are related through Co-access, see our technical documentation for details.\nHow are Co-Access groups defined? Co-Access is enabled between prefixes. Prefixes registered for Co-access will be grouped to allow each member of the group access to a given book content item. This allows each member to deposit (and update) their own DOI for that title and for the content items (chapters) within that title. Once a Co-access group is defined, any member could create a DOI for any title owned by any member of the Co-Access group.\nWhat if a deposit is made by a party not included in a Co-access agreement? If the depositor’s prefix is not in a registered Co-access group, then the conflicting deposit will be rejected. Multiple registered items for the same book title are only possible if prefixes have been pre-registered for Co-access.\nI’m a book publisher, how do I know which aggregators have registered me for Co-access? Members can email support@crossref.org to obtain a list of all prefixes they are in Co-access with.\nHow are Co-Access groups defined? Co-Access is enabled between prefixes. 
Prefixes registered for Co-access will be grouped to allow each member of the group access to a given book content item. This allows each member to deposit (and update) their own DOI for that title and for the content items (chapters) within that title.\nCan I opt-out a single title in a Co-access group? Ideally, single title opt-out should occur before any Co-access deposits are made for that item. To do this, the title owner should contact us at support@crossref.org to request we set a Co-access exclusion flag on their title. If a title owner wants to undo Co-access deposits that have already been made and matched, they should contact our support team to ask that the title be excluded from Co-access and that any unwanted DOIs be aliased.\nIs Co-access a long term solution? No, Co-access is an interim solution to solve the current issue faced by our book publisher members and their content aggregators. In the long run, we will be exploring more effective solutions to improve how Crossref accommodates the unique challenges of the book publishing environment.\nMore questions? Review other Crossref FAQs or visit our support site to open a support ticket or review detailed technical documentation.\n", "headings": ["What is Co-access?","What problem does Co-access solve?","Who is Co-access for and how does it work?","How much does Co-access cost?","How do I participate in Co-access?","What is the difference between Multiple Resolution and Co-access?","Doesn’t Co-access violate the “uniqueness” rule?","What about citation splitting?","How does Co-access affect resolution reports?","How are Co-access relationships represented in Crossref metadata?","How are Co-Access groups defined?","What if a deposit is made by a party not included in a Co-access agreement?","I’m a book publisher, how do I know which aggregators have registered me for Co-access?","How are Co-Access groups defined?","Can I opt-out a single title in a Co-access group?","Is Co-access a long term solution?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/privacy/", "title": "Privacy", "subtitle":"", "rank": 1, "lastmod": "2024-08-13", "lastmod_ts": 1723507200, "section": "Operations & sustainability", "tags": [], "description": "We are committed to safeguarding your privacy. If you feel we need to change something, please contact us with your feedback.\nOne of our guiding principles is to use open source tools and technologies. That extends to how we communicate with you and we endeavor to use systems we think we can trust not to misuse our\u0026mdash;or your\u0026mdash;data.\nBrowsing this website We are trying to strike a balance between understanding what content people appreciate most (and least) on our site, and ensuring your information remains as private as possible.", "content": "We are committed to safeguarding your privacy. If you feel we need to change something, please contact us with your feedback.\nOne of our guiding principles is to use open source tools and technologies. That extends to how we communicate with you and we endeavor to use systems we think we can trust not to misuse our\u0026mdash;or your\u0026mdash;data.\nBrowsing this website We are trying to strike a balance between understanding what content people appreciate most (and least) on our site, and ensuring your information remains as private as possible. We therefore decided to use Matomo, which is an open source analytics program that respects user privacy. 
We look at page views, referrers, geographies, exit pages, and try to discern patterns in what content is most used. Matomo does not hold onto your data.\nSubscribing to our updates If you sign up for news and updates, the information you provide will be stored in our email service called Act-On. We will email you once a month with a newsletter containing recent blog posts and product and service information. You won\u0026rsquo;t be added to other mailing lists unless you ask. If you later decide you do not want to receive these communications, we put opt-out links on all our emails. If you just want alerts of new blogs, please use RSS.\nJoining as a member If you sign up to become a member of Crossref, we add you as an \u0026lsquo;inactive\u0026rsquo; account in our Customer Relationship Management (CRM) system, SugarCRM. Once your account is approved, and when you\u0026rsquo;ve signed your contract and paid your first year\u0026rsquo;s membership fee, we set your account to \u0026lsquo;active\u0026rsquo;. When you sign up we ask for a Primary contact (previously \u0026ldquo;business\u0026rdquo; contact) plus contacts for: billing, voting, technical, and metadata quality. The billing contact receives the invoices for annual membership fees and quarterly Content Registration fees. The voting contact receives the annual board election notices and votes on behalf of the member organization. The Primary contact agrees to the terms and receives regular service and product information. The technical contact may be your chosen agent to deposit metadata with us. And the metadata quality contact is the one we send reports to about metadata errors and would be the one to fix them.\nOver time we might ask to be in touch with additional people at your organization, such as editors, product managers, production staff, and other interested colleagues. We will ask upfront for your consent and it is always possible to opt-out later if you change your mind.\nDepending on the type of account, you may receive a series of on-boarding emails that provide information about how to get started with your participation in Crossref. We will also email you with significant news as and when things change, services are improved, or new services are introduced. We may also invite you to participate in research or surveys.\nUsing our metadata services We offer both members and non-members a range of services to access metadata. In some cases, the provided metadata may include data (or metadata derived from data) that originated from sources other than our members (\u0026ldquo;third-party sources\u0026rdquo;). We will take steps to inform all users of our metadata retrieval services about any third-party sources for the metadata. If the third-party source requires it, we will also provide their privacy policies or usage restrictions. We respect the terms and conditions, including the privacy policies, of the third-party sources and we expect the users of our metadata services to do so as well.\nIf you identify yourself in API queries (our \u0026lsquo;Polite\u0026rsquo; or \u0026lsquo;Plus\u0026rsquo; API pools), we will not under any circumstances store or use your email for any other purpose than technical troubleshooting and only if absolutely necessary. You may choose to use our \u0026lsquo;Public\u0026rsquo; API which won\u0026rsquo;t identify you at all. 
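A minimal sketch of that difference, assuming the mailto parameter and the mailto User-Agent convention described in the REST API etiquette documentation; the email address is a placeholder:

```python
# Minimal sketch: the same lookup made anonymously ("Public" pool) and with a
# contact email ("Polite" pool). The email is only used for technical troubleshooting.
import requests

DOI = "10.5555/12345678"  # placeholder DOI

# Public pool: no identifying information is sent with the request.
public = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)

# Polite pool: include an email address so Crossref can reach you about problem traffic.
polite = requests.get(
    f"https://api.crossref.org/works/{DOI}",
    params={"mailto": "you@example.org"},
    headers={"User-Agent": "ExampleScript/1.0 (mailto:you@example.org)"},
    timeout=30,
)

print(public.status_code, polite.status_code)
```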
More information can be found in the etiquette section of our REST API documentation.\nUsing our tools and user interfaces Like our website, some of our helper tools and report interfaces use Matomo, an open source analytics program that respects user privacy. We are interested in the number of page views, number of actions taken, and geographies of users in order to understand how, and how much, our tools are used. Matomo does not hold on to your data.\nIf a tool requires you to sign in with your Crossref credentials (username and password), this data is not shared with Matomo or connected with usage analytics in any way.\nAsking for support If you ask for support and raise a ticket with our support team, the information you send will be stored in our ticketing system, Zendesk. We don’t use your contact information for anything other than resolving your support ticket, and any logins or private details provided during a support conversation are secure.\nComments on our blog and community forum Conversations among the Crossref community take place on our discussion forum at community.crossref.org, which uses Discourse. Discourse is an open source platform that asks people for minimal information such as your name, alias (name you wish to display), and your email address. We may use your email address to answer your questions and feedback, but we do not store it in our email database so you won\u0026rsquo;t receive newsletters and other emails unless you specifically sign up. Discourse is the tool enabled for blog comments too.\nCookies Cookies are files with a small amount of data, which may include an anonymous unique identifier. Cookies are sent to your browser from a website and stored on your computer\u0026rsquo;s hard drive. Like many sites, we use \u0026ldquo;cookies\u0026rdquo; to collect basic information in order to allow us to serve you better and improve your experience while visiting our website. Cookies are not used to retain personal data. You can instruct your browser to refuse all cookies or to indicate when a cookie is being sent.\nEmail privacy We categorically do not under any circumstances\u0026mdash;however persuasive\u0026mdash;sell or rent email addresses to parties outside of Crossref. On rare occasions\u0026mdash;and only where not doing so would prevent you from using a service you\u0026rsquo;ve signed up for\u0026mdash;we may need to provide a contact email address to a supplier or partner in order for you to receive important information or technical support.\nRetaining your information We will keep your information as long as it is necessary to fulfill the purpose for which you gave us the information. We may delete your personal information if the information is incomplete or inaccurate. We may also keep personal information where it is necessary to comply with legal obligations, resolve disputes, and comply with or enforce agreements. While there is not a specific length of time for which we may keep your information, we keep it only as long as we have a permissible purpose for processing it.\nLinks to other sites Our website may contain links to some other sites. We can\u0026rsquo;t be responsible for the privacy policies and content of these other sites, nor whether they\u0026rsquo;ve maintained the links persistently. If you tell us that some links might be questionable, we will look into removing the link.\nSecurity Crossref has taken steps to ensure that personal information collected is secure. 
However, no method of transmission over the Internet or method of electronic storage is 100% secure. While we strive to use commercially acceptable means to protect your information, we cannot guarantee its absolute security.\nTransfer We will not share your personal information with third parties, except as we describe in this policy or as separately authorized by you. We may share some of your information with companies that perform support services to us, such as accounting or legal firms, firms that provide data hosting or database management services, and other technical support. These third parties are required to maintain the confidentiality of your personal information and to use your personal information only in providing services to us. In rare circumstances, your personal information may be disclosed to third parties to comply with applicable laws and regulations. Any other disclosure of your personal information will only be made following your express consent.\nCrossref is registered in the United States. Wherever you are located, any of the data you provide to Crossref may be transferred to the United States, where data protection and privacy laws may be different, and less rigorous, than the laws in your country.\nAccessing, correcting, and removing information Unless there is an exemption or other applicable law, you have the right to withdraw your consent to us processing your personal data, including the transfer of that information to the United States. You also have a right to request copies of your personal data and request that we correct any information that is inaccurate or out of date, or to ask that it be erased where it is no longer necessary for us to have the data. Where your data has been provided to us by another organization, such as a publisher, you should contact that organization in the first instance. 
If you have provided your information directly to us, or if you have any other questions about this notice or want additional information, you should contact us directly.\nChanges to this privacy policy This privacy policy is effective as of May 2018 and will remain in effect except with respect to any changes in its provisions in the future, which will be in effect immediately after being posted on this page.\nWe reserve the right to update or change our privacy policy at any time and you should check back periodically.\nPlease email us with any questions about privacy.\n", "headings": ["Browsing this website","Subscribing to our updates","Joining as a member","Using our metadata services","Using our tools and user interfaces","Asking for support","Comments on our blog and community forum","Cookies","Email privacy","Retaining your information","Links to other sites","Security","Transfer","Accessing, correcting, and removing information","Changes to this privacy policy"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/contact/", "title": "Contact", "subtitle":"", "rank": 1, "lastmod": "2017-11-21", "lastmod_ts": 1511222400, "section": "", "tags": [], "description": "Contact Us Please contact us through the form below and your question will be routed to the relevant person.\nYou can also contact us via our main twitter account, our support twitter account, or our facebook page.\nIf you\u0026rsquo;re looking for a particular individual, check out our people page and contact any of us directly.\nIt looks like you don't have javascript enabled so please contact us through one of the methods mentioned above.", "content": "Contact Us Please contact us through the form below and your question will be routed to the relevant person.\nYou can also contact us via our main twitter account, our support twitter account, or our facebook page.\nIf you\u0026rsquo;re looking for a particular individual, check out our people page and contact any of us directly.\nIt looks like you don't have javascript enabled so please contact us through one of the methods mentioned above.\rMandatory fields are marked with an *.\nFirst Name*\rLast Name*\rEmail Address*\rName of Company or Organization*\rYour Crossref prefix (if you have one)\rJob title (optional)\rMy Question is about*\r-- Please select --\rTechnical support - Content Registration\rTechnical support - other\rBilling\rGeneral feedback or complaint\rMedia or collaboration\rJoining Crossref, fees and membership enquiries\rServices: Cited-by\rServices: Funder Registry\rServices: Crossmark\rServices: Similarity Check\rServices: Event Data\rServices: Querying and using our Metadata\rReporting a metadata error\rPassword reset\rContent Registration method*\rWeb Deposit Form\rOJS plugin\rXML upload in admin tool\rXML via https post\rOther (please specify)\rI don't know\rOther\rSubmission ID (this will be in the email you received after you submitted your content) (optional)\rDo take a look at this page, which gives more information about the benefits and obligations of membership, how much it costs, and the application process.\rDoes this answer your question? 
If not, please pop your question in the box below, and we'll get back to you soon.\nMy Question\rOffice locations North America office Europe office Crossref, PO BOX 719, Lynnfield, MA 01940, United States of America Oxford Centre for Innovation, New Road, Oxford, OX1 1BY, United Kingdom ", "headings": ["Contact Us","Office locations"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/", "title": "Metadata best practices", "subtitle":"", "rank": 4, "lastmod": "2021-10-21", "lastmod_ts": 1634774400, "section": "Documentation", "tags": [], "description": "Best practices, like principles, are aspirational for our members, but we’ll do our best to help you meet them. Our systems, schema, and practices have evolved over time and, as with many organizations, we need to balance the decisions of the past with the needs of the future. When a best practice is not met, we try to assess that honestly with a goal of meeting the best practice in the future.", "content": "Best practices, like principles, are aspirational for our members, but we’ll do our best to help you meet them. Our systems, schema, and practices have evolved over time and, as with many organizations, we need to balance the decisions of the past with the needs of the future. When a best practice is not met, we try to assess that honestly with a goal of meeting the best practice in the future.\nCrossref metadata requirements for content registration are minimal but meeting minimum requirements only means that you have succeeded in registering your content and DOIs. Most of the optional metadata we collect is recommended to improve discoverability and connect content persistently to the scholarly record. There are nuances and best practices for both different types of content and different types of metadata.\nBest practices are available for the following:\nBest practices for key metadata elements Abstracts Bibliographic metadata Titles Contributors Dates Page numbers and article IDs Funding License Multi-language and translated content References Relationships Versioning Best practices for key record types Books and chapters Conference proceedings and papers Datasets Dissertations Grants Peer review Pending publication Posted content Reports and working papers Standards ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/abstracts/", "title": "Abstracts", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "You can include JATS-formatted abstracts with your metadata records. All abstracts registered with us will be included in the metadata distributed through our metadata outputs.\nDo:\nsupply abstracts for journal articles and beyond - most metadata models support abstracts supply multiple abstracts where applicable: if your content has multiple abstracts (different languages, or a simple and complex abstract) supply all of them as separate abstracts Use the language tag to identify the language used in each abstract Do not:", "content": "You can include JATS-formatted abstracts with your metadata records. 
All abstracts registered with us will be included in the metadata distributed through our metadata outputs.\nDo:\nsupply abstracts for journal articles and beyond - most metadata models support abstracts supply multiple abstracts where applicable: if your content has multiple abstracts (different languages, or a simple and complex abstract) supply all of them as separate abstracts Use the language tag to identify the language used in each abstract Do not:\ninclude multiple abstracts in a single abstracts tag See our Abstracts Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/books-and-chapters/", "title": "Books and chapters", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our Books Advisory Group advises on books best practices; its goals are to:\nMaximize reference linking between books and other record types including journals and conference proceedings Ensure that we collect and distribute persistent identifiers and authoritative metadata for online books Ensure that book content is part of all our services Enhance the discovery, visibility, and usage of book content. On this page, learn more about:\nBest practices for depositing metadata, linking, and DOI use for books Best practices for updates and versions Best practices for citation matching for book title queries Best practices for citation matching for book chapters or reference entry queries Best practices for DOIs in citations Best practices for books on multiple platforms Review our Books Markup Guide for XML and metadata help.", "content": "Our Books Advisory Group advises on books best practices; its goals are to:\nMaximize reference linking between books and other record types including journals and conference proceedings Ensure that we collect and distribute persistent identifiers and authoritative metadata for online books Ensure that book content is part of all our services Enhance the discovery, visibility, and usage of book content. On this page, learn more about:\nBest practices for depositing metadata, linking, and DOI use for books Best practices for updates and versions Best practices for citation matching for book title queries Best practices for citation matching for book chapters or reference entry queries Best practices for DOIs in citations Best practices for books on multiple platforms Review our Books Markup Guide for XML and metadata help.\nDepositing metadata, linking, and DOI use for books You should:\nRegister the content by depositing metadata at the time of online publication and assign DOIs at the title and chapter/entry level. Learn more about the benefits of registering book chapters. Add DOI links to references in books. Learn more about our DOI display guidelines. Deposit references from books and collect references from other members to your books via our Cited-by service. Instruct authors to cite specific chapters and entries using page numbers, chapter/entry titles and DOIs. Update your editorial guidelines - ask copyeditors to look for page numbers and chapter titles in book citations. Use our tools to check references as part of the production process so that references can be corrected and missing information added. Learn more about creating reference links.
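As an aside on checking references during production: one informal way to see whether a cited chapter already has a registered DOI is to query the public REST API. The snippet below is only a minimal sketch, not one of our tools; it uses the Python requests library, and the citation text, filter, and row count are illustrative. Candidate matches should still be reviewed by a person before a link is added.

import requests

# A free-text chapter citation, as it might appear in a reference list (illustrative only).
citation = "A. Author, Example chapter title, in: Example Book Title, 2019, pp. 45-67"

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "query.bibliographic": citation,  # fuzzy matching against bibliographic metadata
        "filter": "type:book-chapter",    # only consider chapter records
        "rows": 3,                        # the top few candidates are enough for a spot check
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["message"]["items"]:
    print(item["DOI"], "-", (item.get("title") or [""])[0])

The same pattern works for title-level checks by switching the filter to type:book.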
Updates and versions There are two types of updates:\nMajor content changes that may affect the interpretation of a work may mean a new edition with new ISBNs. Major version changes imply that the publisher will formally notify readers that content has changed (through errata, corrigenda, or new editions, which would also get a new ISBN). Minor content changes are unlikely to affect a reader’s interpretation of the work, and the publisher will not generally draw attention to the changes with a new version. Just as publishers decide when a new print edition or version is warranted, it is publishers’ responsibility to distinguish between major and minor versions in online content.\nSince a Crossref DOI is a citation identifier, a new DOI should only be issued if the new version will be cited differently. The same logic applies to differing formats, for example, the file types or containers used to present content: a distinct DOI should not be registered for different formats unless the format will be cited in a different way. This means, for example, that you should not assign one DOI to an EPUB version of a book and another DOI to the PDF version of a book if the format doesn’t affect how the book is cited. You may register a single DOI for all versions of a translated book. Distinct DOIs may also be registered for translated versions of content.\nThe recommended best practice is:\nAssign new DOIs to new major versions or editions of books, chapters and entries. This practice will preserve the scholarly citation record. Older versions should remain available online with links to the latest version. In use, a reader follows a link to the version cited and then has the option to follow a link to the current version. Do not assign new DOIs to minor new versions of books, chapters and entries. Where book content is hosted on multiple platforms (such as NetLibrary, ebrary) and publishers can enable linking from a single DOI to those platforms, they should use multiple resolution, which allows multiple URLs to be associated with one DOI. Learn more about multiple resolution. If multiple resolution doesn’t work for your circumstances, or content on your platform does not already have DOIs, please contact us to find a solution.\nCitation matching for book title queries To enable citation matching at the title level, the minimum query must include the following elements:\nbook title book author book copyright year To increase the accuracy of matching, members should also include as many of the following elements as possible in the query:\neditor (where appropriate) ISBN ISSN / DOI publisher Citation matching for book chapters or reference entry queries The metadata provided for a book title is used to identify book chapters during querying. This means that a book chapter query should include title metadata as well.
The minimum query for a book chapter must include the following elements:\nbook title title and subtitle should be separated with a colon (:) book year chapter author first page To increase the accuracy of matching, publishers should also include as many of the following elements as possible in the query:\neditor (where appropriate) publisher chapter title Combining chapter title and chapter author returns the best matches.\nDOIs in citations Following a review in 2017 of common citation style guides and publishers’ instructions to authors, this is what we recommend for the use of DOIs in citations, of any style or format:\nInclude DOIs whenever they are available (use Metadata Search to find DOIs for registered content) Display DOIs as links and follow our DOI display guidelines In your author guidelines, describe the use of DOIs in general, and for different record types (such as journals, conference proceedings) Books that have DOIs at the title and chapter levels should be cited accordingly Print materials (of any record type) may display DOIs but should not have their own print-only DOIs Provide information to us on update cycles (when possible), and contact information for keeping in touch with questions or future developments (such as new record types). You can register books and chapters using one of our helper tools (the web deposit form) or by direct deposit of XML - learn more about markup examples for books and chapters.\n", "headings": ["Depositing metadata, linking, and DOI use for books ","Updates and versions ","Citation matching for book title queries ","Citation matching for book chapters or reference entry queries ","DOIs in citations "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/conference-proceedings/", "title": "Conference proceedings and papers", "subtitle":"", "rank": 4, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Documentation", "tags": [], "description": "Our conference proceedings model supports registration of conference series, proceedings, and papers.\nConference series You can register series-level metadata for proceedings that are part of an ongoing series with an ISSN. A DOI is not required for the series information, but is recommended.\nWhen registering a conference series DO:\ninclude the series name and ISSN in your metadata register each proceedings within the series as a separate volume include series-level contributors register a DOI for the conference series - this makes the series cite-able and easy to identify include a series number (if you have one) Conference proceedings When registering a single conference proceeding DO:", "content": "Our conference proceedings model supports registration of conference series, proceedings, and papers.\nConference series You can register series-level metadata for proceedings that are part of an ongoing series with an ISSN.
A DOI is not required for the series information, but is recommended.\nWhen registering a conference series DO:\ninclude the series name and ISSN in your metadata register each proceedings within the series as a separate volume include series-level contributors register a DOI for the conference series - this makes the series cite-able and easy to identify include a series number (if you have one) Conference proceedings When registering a single conference proceeding DO:\ninclude a conference title, publisher, and publication date in the proceedings metadata include a conference name in the event metadata include proceedings-level contributors like editors register a DOI for the conference volume include the ISBN assigned to the proceedings include a conference acronym within the event metadata supply relevant event information like date, location, number, acronym, theme, and sponsor Do not:\ninclude more than one proceeding within a single conference element Conference papers When registering conference papers, DO:\ninclude a paper title include a publication date include all relevant funding, license, and relationship metadata include a language (using the language attribute on the conference_paper element) include all contributors include abstracts (recommended for all types of content, but particularly useful for conference papers) include references identify the publication type (full text or abstract) include article / elocation IDs Do not:\nregister separate records for abstract and full text Review our Conference Proceeding Markup Guide for XML and metadata help.\n", "headings": ["Conference series","Conference proceedings","Conference papers"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/bibliographic/", "title": "Bibliographic metadata", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "The bibliographic (descriptive) metadata you send us is used to display citations, match DOIs to citations, and to enhance discovery services. It is essential that this metadata is clean, complete, and accurate.\nDo:\nprovide all contributors, titles, dates, and identifiers associated with the item you are registering make sure the contributors, titles, dates, and identifiers are accurate update and correct metadata as needed Do not:\nsupply titles, names, or other metadata in all caps, even if that is how you display and store them - it makes it difficult for others to use your metadata to format citations (and link to your content) omit article identifiers, page numbers, or author names - omissions will make your metadata less discoverable or even undiscoverable force metadata into fields that aren’t a good match - it’s often better just to leave it out.", "content": "The bibliographic (descriptive) metadata you send us is used to display citations, match DOIs to citations, and to enhance discovery services.
It is essential that this metadata is clean, complete, and accurate.\nDo:\nprovide all contributors, titles, dates, and identifiers associated with the item you are registering make sure the contributors, titles, dates, and identifiers are accurate update and correct metadata as needed Do not:\nsupply titles, names, or other metadata in all caps, even if that is how you display and store them - it makes it difficult for others to use your metadata to format citations (and link to your content) omit article identifiers, page numbers, or author names - omissions will make your metadata less discoverable or even undiscoverable force metadata into fields that aren’t a good match - it’s often better just to leave it out. For example, putting subject keywords into a title Titles Your metadata should include the title used for the content when it was first published. For most types of content alternate titles and subtitles can be provided as well (see each record type markup guide for details). You are also able in most cases to provide titles in multiple languages (see translated and multi-language materials).\nDo:\nuse subtitles - subtitles are supported in a distinct subtitle element, and allow an item to be discoverable using the main title, subtitle, or both combined. supply alternate titles, abbreviated titles, and translated titles if you use them in citation recommendations. use face markup and/or MathML in titles when it impacts the meaning of the text. follow journal title best practice. Do not:\ninclude non-title metadata such as author, price, or volume numbers in a title field - this is a common error that significantly impacts discoverability and display. cram multiple titles in multiple languages in one element - see translated and multi-language materials for guidance. supply titles in ALL CAPS - our metadata is often used for display and citation formatting. Additional best practices may apply for the content you are registering; see specific record type guides for details.\nContributors Contributor metadata is expressed consistently across record types (excluding Grants), and includes contributor names, roles, identifiers, alternate names, and affiliation information. A contributor is a single person or a group of people/organization that has contributed in some way to the content being registered.\nDo include:\ncorrect names, so authors and other contributors can be matched to citations a complete contributor list so that contributors can receive credit for their work, and to help make your content more discoverable Contributor role(s) - at least one for each contributor, but supply as many as apply ORCID iDs, so that authors can be disambiguated and connected to the research they write and support Affiliations and ROR IDs so that contributor institutions can be identified and research outputs can be traced by institution Do not:\ninclude suffixes such as Jr, Sr, IV in the family name field - use the suffix element Guidance on constructing XML for contributors can be found in our Contributors Markup Guide.\nDates (publication and other) Do:\nsupply the entire date whenever possible - for most dates supplied within our metadata we allow you to supply just a year, with month and day being optional, but we encourage you to supply full dates whenever possible, particularly for online content. supply all relevant date types - for most items a publication date is required and other dates are optional but we encourage you to supply all dates that apply to the content you are registering.
This includes acceptance dates for most content, and approval and posted dates for others. Include the correct date at both the parent and child level (journal issue / article, book title / chapter) Include both online and print publication dates (if applicable) Do not:\nsupply only the most recent publication date - this is inaccurate and may impact your registration fees, as back year rates are calculated based on the publication year provided in your registration metadata. Page numbers and article identifiers Correct page number and article identifier (aka e-location ID) metadata is essential for many discovery systems.\nDo:\nbe careful with your pages - be sure each page element contains only the page number itself, not a range. This means capture the first page in first_page, the last page in last_page, and any additional page information in other_pages. If your content has pages, first_page is essential. if you use article numbers / e-location IDs, supply them as described in the markup guide Do not:\ninclude an entire page range in first_page - this is incorrect, and will throw off many matching processes (in Crossref and beyond) and cause your metadata to be displayed incorrectly wherever it is used. include extraneous text in the page field - just the page please, no ‘page 1’ or ‘1st pg’ Guidance on constructing XML for article IDs and page ranges can be found in our Article ID Markup Guide.\n", "headings": ["Titles","Contributors","Dates (publication and other)","Page numbers and article identifiers"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/datasets/", "title": "Datasets", "subtitle":"", "rank": 4, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Documentation", "tags": [], "description": "Dataset records capture information about one or more database records or collections.\nWhen registering datasets, DO:\nregister a DOI (or include a registered DOI) for a parent database - datasets must be registered as part of a collection include all relevant funding, license, and relationship metadata include all contributors include relevant dates (supported date types are creation, publication, and update dates) provide description, format, and citation metadata Review our Datasets Markup Guide for XML and metadata help.", "content": "Dataset records capture information about one or more database records or collections.\nWhen registering datasets, DO:\nregister a DOI (or include a registered DOI) for a parent database - datasets must be registered as part of a collection include all relevant funding, license, and relationship metadata include all contributors include relevant dates (supported date types are creation, publication, and update dates) provide description, format, and citation metadata Review our Datasets Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/dissertations/", "title": "Dissertations", "subtitle":"", "rank": 4, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Documentation", "tags": [], "description": "Dissertation records capture information about dissertations or theses.\nWhen registering dissertations, DO:\ninclude all relevant funding, license, and relationship metadata include contributor metadata, including ORCID iDs and
affiliation metadata include an abstract include identifiers associated with the dissertation - if a DAI has been assigned, it should be deposited in the identifier element with the id_type attribute set to \u0026quot;dai\u0026quot;. If an institution has its own numbering system, it should be deposited in item_number, and the item_number_type should be set to \u0026quot;institution\u0026quot; Do not:", "content": "Dissertation records capture information about dissertations or theses.\nWhen registering dissertations, DO:\ninclude all relevant funding, license, and relationship metadata include contributor metadata, including ORCID iDs and affiliation metadata include an abstract include identifiers associated with the dissertation - if a DAI has been assigned, it should be deposited in the identifier element with the id_type attribute set to \u0026quot;dai\u0026quot;. If an institution has its own numbering system, it should be deposited in item_number, and the item_number_type should be set to \u0026quot;institution\u0026quot; Do not:\nuse the dissertation model for versions of a dissertation published in a book or journal Review our Dissertation Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/funding/", "title": "Funding metadata", "subtitle":"", "rank": 4, "lastmod": "2022-06-01", "lastmod_ts": 1654041600, "section": "Documentation", "tags": [], "description": "Funding metadata may be supplied for most record models, and helps link funding to research.\nWhen registering funding metadata, DO:\ninclude the ROR ID of the funding organization, or the name of the funding organization and the funder identifier from the Open Funder Registry if you have not yet migrated to using the ROR registry include an award/grant number whenever possible include a funder name if a ROR or funder identifier is not available - we may be able to match the name supplied with an identifier and make the data available pay close attention to the structure of your metadata - correct nesting of funder names and identifiers is essential as it significantly impacts how funders, funder identifiers, and award numbers are related to each other Do not:", "content": "Funding metadata may be supplied for most record models, and helps link funding to research.\nWhen registering funding metadata, DO:\ninclude the ROR ID of the funding organization, or the name of the funding organization and the funder identifier from the Open Funder Registry if you have not yet migrated to using the ROR registry include an award/grant number whenever possible include a funder name if a ROR or funder identifier is not available - we may be able to match the name supplied with an identifier and make the data available pay close attention to the structure of your metadata - correct nesting of funder names and identifiers is essential as it significantly impacts how funders, funder identifiers, and award numbers are related to each other Do not:\nInclude incomplete funder names or acronyms as a funder_name, particularly if you have not supplied an accompanying funder identifier.
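As a quick way to spot-check what funding assertions actually end up in the registry after a deposit, the public REST API can list works associated with a funder. The snippet below is a minimal sketch only: it uses the Python requests library, the funder ID shown is just an illustrative Open Funder Registry identifier, and the field names reflect what the works route currently returns, so re-check them against the REST API documentation before relying on them.

import requests

FUNDER_ID = "100000001"  # illustrative Open Funder Registry ID, not a recommendation

resp = requests.get(
    f"https://api.crossref.org/funders/{FUNDER_ID}/works",
    params={"rows": 5},  # a handful of records is enough for a spot check
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["message"]["items"]:
    # A work may carry several funder assertions, each optionally with award numbers.
    for funder in work.get("funder", []):
        print(work["DOI"], funder.get("name"), funder.get("award", []))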
Some additional best practices for extracting data and working with vendors to supply funding data are available in this Best Practices for depositing funding data blog post; also review our Funding Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/grants/", "title": "Grants", "subtitle":"", "rank": 4, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Documentation", "tags": [], "description": "Grants may be registered using our grants model.\nWhen registering a grant, DO:\ninclude required project information (a project title, a funder name and identifier, and a funding type) as well as your internal grant or award number include information describing grant-funded projects such as project description, language information, investigator details (including ORCID iDs and ROR IDs within affiliations) include award amounts and currency, and project start and end dates and/or an award date include multiple titles and descriptions as well as language information; include a funding scheme, and planned project start and end dates when relevant Review our Grants Markup Guide for XML and metadata help.", "content": "Grants may be registered using our grants model.\nWhen registering a grant, DO:\ninclude required project information (a project title, a funder name and identifier, and a funding type) as well as your internal grant or award number include information describing grant-funded projects such as project description, language information, investigator details (including ORCID iDs and ROR IDs within affiliations) include award amounts and currency, and project start and end dates and/or an award date include multiple titles and descriptions as well as language information; include a funding scheme, and planned project start and end dates when relevant Review our Grants Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/journals/", "title": "Journals and articles", "subtitle":"", "rank": 4, "lastmod": "2023-10-28", "lastmod_ts": 1698451200, "section": "Documentation", "tags": [], "description": "Our journals model supports registration of records for journal titles and articles, as well as for individual volumes and issues of a journal. We recommend registering DOI records for journal titles and articles, and optionally for volumes and issues.\nJournal titles\nDo:\nbe consistent - journal title records are created from the metadata submitted when you first register your journal and articles. You determine the exact title and ISSN included in the deposit, and we record that title and ISSN in a title record in our database.", "content": "Our journals model supports registration of records for journal titles and articles, as well as for individual volumes and issues of a journal. We recommend registering DOI records for journal titles and articles, and optionally for volumes and issues.\nJournal titles\nDo:\nbe consistent - journal title records are created from the metadata submitted when you first register your journal and articles. You determine the exact title and ISSN included in the deposit, and we record that title and ISSN in a title record in our database. The title, ISSN, and title-level persistent identifier supplied in your content registration files must be consistent across submissions.
register a title-level DOI for your journal include all registered ISSN for your journal - the ISSN is crucial for identifying a serial. If you are supplying us with data for older titles that predate ISSN assignment, you should request ISSNs from your ISSN agency as they can be assigned retroactively. This isn’t only for our convenience - libraries, database providers, and other organizations using your data will welcome (and often require) an ISSN for anything defined as a journal. supply distinct ISSN and / or title DOI for each distinct version of a title. If a title changes significantly the publisher should obtain new ISSNs (both print and online). This rule is established by the ISSN International Centre, not us, but we support and enforce it. Minor title changes (such as changing ‘and’ to ‘\u0026amp;’) don’t require a new ISSN. supply all commonly used title abbreviations within the repeatable abbrev element supply a journal language using the language attribute Do not:\nregister issues and articles published under a past title under the current title - this makes it hard to match DOIs to citations and accurately identify items published over time. Some publishers consolidate all versions of a title under the most recent title. This isn’t recommended practice as it causes a lot of linking and citing confusion – you’ve essentially created two (or more) versions of a title. This is particularly confusing when volume and issue numbers overlap between title iterations. Journal titles should reflect the journal title at the time of publication, and should not be updated if the journal title changes later on. vary your journal title without obtaining a new ISSN For recommendations on displaying information about journals, see the ISSN Manual, and NISO\u0026rsquo;s recommended practice on Presentation \u0026amp; Identification of E-Journals.\nJournal issue and volume metadata You can register DOIs for volumes and issues of journals if you want to make them citeable and linked persistently. If you do not opt to register DOIs for volumes and issues you should still provide clean and complete metadata as it is needed for the article records within each issue.\nDo:\nsupply accurate issue and volume numbers for special issues, include additional metadata to make the issue identifiable and citable - this includes: editors in the contributors section issue title (as title) any special issue numbering, including text (as special_numbering) Do not:\ninclude non-essential text in the volume or issue elements Journal articles\nYou should register all research articles published by your journal, as well as other cite-able content (book reviews, case studies, editorials).\nDo:\npay close attention to title best practice - journal article metadata is used for display and discoverability, so it is vital that article titles are accurate. include all contributors and (again) pay close attention to the metadata you are sending us.
include affiliation info for each contributor include abstracts (recommended for all types of content, but particularly useful for journal articles) include all relevant funding, license, and relationship metadata include full text URLs to facilitate text and data mining include information on updates, corrections, withdrawals, and retractions via Crossmark include references include a language (using the language attribute on the journal_article element) Do not:\nregister records for items you do not intend to maintain long-term, such as advertisements supply titles in all caps - titles in article metadata are used for display and citation formatting, and if a title is supplied in all caps, it will appear in all caps wherever our metadata is used. Review our Journals and Articles Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/license/", "title": "License metadata", "subtitle":"", "rank": 4, "lastmod": "2022-06-01", "lastmod_ts": 1654041600, "section": "Documentation", "tags": [], "description": "Members registering licensing information in their metadata let researchers know when they can perform TDM and under what conditions. This license could be proprietary, or an open license such as Creative Commons.\nWhen supplying license metadata, DO:\nprovide URLs for proprietary or open licenses for your content supply dates that apply to your licenses (to support embargos, for example) and KEEP THEM UP TO DATE! make sure the URLs resolve to an active license review any metadata records you acquire from other members to make sure the license data supplied is accurate and up to date Do not:", "content": "Members registering licensing information in their metadata let researchers know when they can perform TDM and under what conditions. This license could be proprietary, or an open license such as Creative Commons.\nWhen supplying license metadata, DO:\nprovide URLs for proprietary or open licenses for your content supply dates that apply to your licenses (to support embargos, for example) and KEEP THEM UP TO DATE! make sure the URLs resolve to an active license review any metadata records you acquire from other members to make sure the license data supplied is accurate and up to date Do not:\nsupply unverified license URLs - we make sure the URLs provided are URLs, but don\u0026rsquo;t verify that they resolve to an active license Review our License information Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/multi-language/", "title": "Multi-language material and translations", "subtitle":"", "rank": 4, "lastmod": "2023-01-18", "lastmod_ts": 1674000000, "section": "Documentation", "tags": [], "description": "Much of the content in Crossref is English language, but we encourage members to register content in the appropriate language for the content being registered. We support UTF-8 encoded character sets and in many cases you will be able to supply multiple versions of titles, abstracts, and other metadata.\nIf you consider your content to be multi-language and not a translation (meaning it will be cited as a single item) Do:", "content": "Much of the content in Crossref is English language, but we encourage members to register content in the appropriate language for the content being registered.
We support UTF-8 encoded character sets and in many cases you will be able to supply multiple versions of titles, abstracts, and other metadata.\nIf you consider your content to be multi-language and not a translation (meaning it will be cited as a single item) Do:\nregister one DOI for the item include titles, abstracts, and other metadata in multiple languages in your metadata record (support for this varies by record type) pay attention to order in your input XML - if the English title is provided as the primary title in your metadata, then the English title will be displayed in citations generated from our metadata If your content is translated -\nDo:\nregister separate DOIs for each translation connect each registered record with relationship metadata (hasTranslation) include language metadata wherever possible Do not:\ncram multiple titles in multiple languages in one element - this impacts discoverability and citation formatting Review our Translated and multi-language materials Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/peer-review/", "title": "Peer review", "subtitle":"", "rank": 4, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Documentation", "tags": [], "description": "You can register peer review metadata and connect reviews to reviewed items via relationships.\nWhen registering peer reviews, DO:\ninclude a review title - if you don’t have a review-specific title convention, we recommend that you include \u0026ldquo;Review\u0026rdquo; (or your own term for review) as well as a revision and review number. For example, a review pattern of Review: title of article (Revision number/Review number) will be: Review: Analysis of the effects of bad metadata on discoverability (R2/RC3) include reviewer information with the contributor section, including information about anonymous reviewers include relevant stage, type, and recommendation metadata include license information include relationship metadata linking the review with the item being reviewed (relation type isReviewOf).", "content": "You can register peer review metadata and connect reviews to reviewed items via relationships.\nWhen registering peer reviews, DO:\ninclude a review title - if you don’t have a review-specific title convention, we recommend that you include \u0026ldquo;Review\u0026rdquo; (or your own term for review) as well as a revision and review number. For example, a review pattern of Review: title of article (Revision number/Review number) will be: Review: Analysis of the effects of bad metadata on discoverability (R2/RC3) include reviewer information with the contributor section, including information about anonymous reviewers include relevant stage, type, and recommendation metadata include license information include relationship metadata linking the review with the item being reviewed (relation type isReviewOf). 
Do not:\ninclude metadata specific to the reviewed item, like author and title - that is captured in the record of the reviewed item, which you supply via the isReviewOf relationship Other things to know:\nreferences are not currently supported for peer review records but if someone cites your reviews, those citations will be included in our Cited-by service Review our Peer Review Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/posted-content/", "title": "Posted content", "subtitle":"", "rank": 4, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Documentation", "tags": [], "description": "You can register records for preprints and other posted material using our posted content metadata model.\nWhen registering posted content, DO:\ninclude the appropriate sub-type using the type attribute (preprint, working paper, letter, dissertation, report, other) - it\u0026rsquo;s vital that preprints be identified as preprints include abstracts connect posted content to related items via relationships - this is required for preprints and important for other posted content as well Do not:", "content": "You can register records for preprints and other posted material using our posted content metadata model.\nWhen registering posted content, DO:\ninclude the appropriate sub-type using the type attribute (preprint, working paper, letter, dissertation, report, other) - it\u0026rsquo;s vital that preprints be identified as preprints include abstracts connect posted content to related items via relationships - this is required for preprints and important for other posted content as well Do not:\nregister non-preprints with the \u0026ldquo;preprint\u0026rdquo; type Other things to know:\nour Preprint Advisory Group is actively creating new recommendations and best practices for preprint metadata records Review our Posted Content Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/references/", "title": "References", "subtitle":"", "rank": 4, "lastmod": "2022-06-01", "lastmod_ts": 1654041600, "section": "Documentation", "tags": [], "description": "Registering references means submitting reference lists as part of your metadata deposit. It is optional but strongly encouraged (especially if you use our Cited-by service) and any references you include in your Crossref metadata will be made available through our APIs, including Event Data.\nDo:\nsupply DOIs with your references! This is the most effective way to identify and connect the references supplied with your content metadata. supply a full reference or complete marked up metadata for every citation that you do not have a DOI for - we do our best to match these citations to registered DOIs.", "content": "Registering references means submitting reference lists as part of your metadata deposit. It is optional but strongly encouraged (especially if you use our Cited-by service) and any references you include in your Crossref metadata will be made available through our APIs, including Event Data.\nDo:\nsupply DOIs with your references! This is the most effective way to identify and connect the references supplied with your content metadata. supply a full reference or complete marked up metadata for every citation that you do not have a DOI for - we do our best to match these citations to registered DOIs.
cite data, software, and other materials used to support and supplement the content being registered Do not:\nomit non-Crossref DOIs from your references - this eliminates data, software, and other types of citations from our records. We aren\u0026rsquo;t able to match citations to non-Crossref DOIs, but we do pass them along to our APIs and they are used by our metadata subscribers. Review our References Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/reports/", "title": "Reports and working papers", "subtitle":"", "rank": 4, "lastmod": "2022-09-08", "lastmod_ts": 1662595200, "section": "Documentation", "tags": [], "description": "Our reports model supports registration of reports and working papers within or outside of a series.\nWhen registering reports, DO:\ninclude a report title and publication date include all relevant funding metadata, license information, and relationships include all contributor info, publisher and institution details, abstracts, approval dates, as well as any edition numbers, contract numbers, and ISBN applied to your report Report registration files may include a publisher name (within publisher) and/or institution name (within institution), depending on the organization issuing the report.", "content": "Our reports model supports registration of reports and working papers within or outside of a series.\nWhen registering reports, DO:\ninclude a report title and publication date include all relevant funding metadata, license information, and relationships include all contributor info, publisher and institution details, abstracts, approval dates, as well as any edition numbers, contract numbers, and ISBN applied to your report Report registration files may include a publisher name (within publisher) and/or institution name (within institution), depending on the organization issuing the report. Reports/working papers may be deposited as a series. register DOIs for each section or chapter of a report or working paper if you want to make each section citeable, or if each section has dates, contributors, or other metadata that differs from the report itself Review our Reports and Working Papers Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/relationships/", "title": "Relationship metadata", "subtitle":"", "rank": 4, "lastmod": "2022-06-01", "lastmod_ts": 1654041600, "section": "Documentation", "tags": [], "description": "Creating relationships helps build a map of scholarly research objects that we call the research nexus. Expressing these relationships in the metadata enables the evolving infrastructure to build on this mapping.\nThese connections may consist of citations, or refer to publications which do not always exist as a single content item (its parts may be produced, curated, and published by different organizations and separate activities). Making these connections creates linked metadata, which is useful because it establishes associations and context.", "content": "Creating relationships helps build a map of scholarly research objects that we call the research nexus.
Expressing these relationships in the metadata enables the evolving infrastructure to build on this mapping.\nThese connections may consist of citations, or refer to publications which do not always exist as a single content item (its parts may be produced, curated, and published by different organizations and separate activities). Making these connections creates linked metadata, which is useful because it establishes associations and context.\nConnections between research objects can be established through your reference lists, or by asserting a relationship type.\nWe have also introduced other interlinking services that address specific types of relationships:\nComponents allow for the assignment of DOIs to the component parts of a publication (figures, tables, images) which may lead to their reuse. Updates notify the community about changes that have a material effect on the original work, including corrections and retractions. Funding data supports identifying the organization that financially supports the research behind a specific publication. Peer reviews support the host of outputs made publicly available about published scholarly content, for example: referee reports, decision letters, and author responses. These and other services create relationships between metadata records; however, they share two characteristics that restrict their ability to define relationships:\nBoth items involved in a relationship must be identified by Crossref DOIs. The types of relationships are dictated by the mission of the specific service. The following modifications and new services were developed in response to these two limitations:\nAllow non-Crossref DOIs to be deposited in an item\u0026rsquo;s (article/chapter/paper) list of citations. Support the creation of general typed relationships between items with a Crossref DOI, and other content items with a variety of identifiers. Review our Relationships Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/standards/", "title": "Standards", "subtitle":"", "rank": 4, "lastmod": "2022-07-11", "lastmod_ts": 1657497600, "section": "Documentation", "tags": [], "description": "Our standards model allows you to register records for dated and undated standards, as well as families and sets of standards.\nWhen registering a standard, DO:\ninclude all designators associated with the standard include (and keep up to date) the current publication status using the publication_status attribute Do not:\nupdate an existing DOI record with a new primary as-published designator - SDOs may opt to register DOIs for an undated standard, or to register DOIs for each version or update (as-published) of a standard.", "content": "Our standards model allows you to register records for dated and undated standards, as well as families and sets of standards.\nWhen registering a standard, DO:\ninclude all designators associated with the standard include (and keep up to date) the current publication status using the publication_status attribute Do not:\nupdate an existing DOI record with a new primary as-published designator - SDOs may opt to register DOIs for an undated standard, or to register DOIs for each version or update (as-published) of a standard.
Review our Standards Markup Guide for XML and metadata help.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/", "title": "REST API", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our publicly available REST API exposes the metadata that members deposit with Crossref when they register their content with us. And it’s not just the bibliographic metadata either: funding data, license information, full-text links, ORCID iDs, abstracts, and Crossmark updates are in members’ metadata too. You can search, facet, filter, or sample metadata from thousands of members, and the results are returned in JSON. Learn more in our REST API documentation.", "content": "Our publicly available REST API exposes the metadata that members deposit with Crossref when they register their content with us. And it’s not just the bibliographic metadata either: funding data, license information, full-text links, ORCID iDs, abstracts, and Crossmark updates are in members’ metadata too. You can search, facet, filter, or sample metadata from thousands of members, and the results are returned in JSON. Learn more in our REST API documentation.\nNumerous tools and services rely on our metadata, be they for search, annotation, sharing, or analysis. Some common uses for our REST API include:\nText and data mining Helping with auditing funder mandates Identifying author publications Many familiar organizations use our metadata through this modern machine interface. Check out the API case studies from organizations like Authorea and SHARE.\nNo sign-up is required to use the REST API, and the data can be treated as facts from members. The data is not subject to copyright, and you may use it for any purpose.\nCrossref generally provides metadata without restriction; however, some abstracts contained in the metadata may be subject to copyright by publishers or authors.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/api-versioning/", "title": "REST API Versions", "subtitle":"", "rank": 4, "lastmod": "2022-12-01", "lastmod_ts": 1669852800, "section": "Documentation", "tags": [], "description": "REST API Versions The Crossref REST API is versioned. You should always use the API version in your REST API requests.\nBreaking changes Any breaking changes will be released in a new API version. Breaking changes are changes that can potentially break an integration. Breaking changes include:\nremoving an entire operation removing or renaming a parameter removing or renaming a response field adding a new required parameter making a previously optional parameter required changing the type of a parameter or response field removing enum values adding a new validation rule to an existing parameter changing authentication or authorization requirements Non-breaking changes Any additive (non-breaking) changes will be available in all supported API versions.", "content": "REST API Versions The Crossref REST API is versioned. You should always use the API version in your REST API requests.\nBreaking changes Any breaking changes will be released in a new API version. Breaking changes are changes that can potentially break an integration. 
Breaking changes include:\nremoving an entire operation removing or renaming a parameter removing or renaming a response field adding a new required parameter making a previously optional parameter required changing the type of a parameter or response field removing enum values adding a new validation rule to an existing parameter changing authentication or authorization requirements Non-breaking changes Any additive (non-breaking) changes will be available in all supported API versions. Additive changes are changes that should not break an integration. Additive changes include:\nadding an operation adding an optional parameter adding an optional request header adding a response field adding a response header adding enum values Legacy version support When a new REST API version is released, the previous API version will be supported for 24 more months following the release of the new API version. In exceptional circumstances, we may decide to extend this.\nSpecifying an API version To be safe, you should always specify an API version in your requests.\nFor example:\nhttps://api.crossref.org/v1/works?rows=0\nHowever, as long as v1 of the API exists, requests that do not contain an API version in the request will default to v1.\nEventually, if you specify an API version that is no longer supported, you will receive a 400 error.\nSo, if version 1 of the API is ever retired, then requests to the API that do not contain a version number will fail with a 400 error.\nAgain, to be safe, you should include the API version in your requests.\nUpgrading to a new API version Before upgrading to a new REST API version, you should read the changelog of breaking changes for the new API version to understand what breaking changes are included and to learn more about how to upgrade to that specific API version. For more information, see \u0026ldquo;Breaking changes.\u0026rdquo;\nSupported API versions The following REST API versions are currently supported:\nV1 Credits: This policy is adapted from the very short, clear and reasonable GitHub API policy\n", "headings": ["REST API Versions","Breaking changes","Non-breaking changes","Legacy version support","Specifying an API version","Upgrading to a new API version","Supported API versions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/rest-api-metadata-license-information/", "title": "REST API metadata license information", "subtitle":"", "rank": 4, "lastmod": "2022-02-01", "lastmod_ts": 1643673600, "section": "Documentation", "tags": [], "description": "REST API metadata license information TL;DR You can use and redistribute any metadata you retrieve with the Crossref REST API or that is included in Crossref snapshots and public data files. Have fun.\nDetails What we colloquially call “Crossref metadata” is actually a mix of elements, some of which come from our members, some of which come from third parties, and some of which come from Crossref itself. These elements, in turn, each have different copyright implications.", "content": "REST API metadata license information TL;DR You can use and redistribute any metadata you retrieve with the Crossref REST API or that is included in Crossref snapshots and public data files. Have fun.\nDetails What we colloquially call “Crossref metadata” is actually a mix of elements, some of which come from our members, some of which come from third parties, and some of which come from Crossref itself.
These elements, in turn, each have different copyright implications.\nOn top of this, Crossref has terms and conditions for its members and terms and conditions for specific services. These grant Crossref the right to do things with some classes of metadata and not do things with other classes of metadata - regardless of copyright.\nSince 2000 Crossref has stated that it considers basic bibliographic metadata to be “facts.” And under US law (Crossref is registered in the US) these facts are not subject to copyright at all. Note also that, given that this data is not subject to copyright at all, there is no way Crossref can “waive the copyright” under CC0. In short, this metadata has no restrictions on reuse.\nMore recently, some of our members have been submitting abstracts to Crossref. These are copyrighted. In the case of subscription publishers, the copyright usually belongs to the publisher. In the case of open access publishers, the copyright most often belongs to the authors. In both cases, Crossref cannot waive copyright under CC0 because the copyright is not ours to waive. However, we are allowed to redistribute the abstracts with our metadata because that is part of the terms and conditions we have with our members.\nWe also collect Event Data, which contains mentions of research works from across the internet. We can make the majority of this data openly available under CC0; however, for a few of the sources there are additional restrictions. These might affect you if you\u0026rsquo;re planning to republish results of event data queries or store them for a long time. You can find more details on the Event Data terms of use page.\nWe also have some data that we\u0026rsquo;ve always released under CC0. This includes the Open Funder Registry and Event Data. This data is currently available through separate APIs, but will eventually be made available via the REST API as well.\nAnd this leaves us with data that is created by Crossref itself as a byproduct of our services. This data includes things like participation reports, conflict reports, member IDs, and Cited-by counts, and any aggregations of our otherwise uncopyrighted data that might, by aggregating it, be subject to sui generis database rights. We also make this latter class of data available under CC0.\nTo summarize:\nData Licence Bibliographic metadata, including references Facts, not subject to copyright Crossref-generated data, and any aggregations of our otherwise uncopyrighted data that might, by aggregating it, be subject to sui generis database rights. CC0 Open Funder Registry, Event Data CC0 Abstracts Copyright held by publisher or author, but redistributable as per Crossref membership terms. ", "headings": ["REST API metadata license information","TL;DR","Details"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/tips-for-using-public-data-files-and-plus-snapshots/", "title": "Tips for working with Crossref public data files and Plus snapshots", "subtitle":"", "rank": 4, "lastmod": "2022-02-01", "lastmod_ts": 1643673600, "section": "Documentation", "tags": [], "description": "What is this? About once a year Crossref releases a public metadata file that includes all of Crossref\u0026rsquo;s public metadata. We typically release this as a tar file and distribute it via Academic Torrents.\nUsers of Crossref\u0026rsquo;s Plus service can access similar data snapshots that we update monthly.
These are also tar files, but we distribute them via the Plus service API, and you need a Plus API token to access them.", "content": "What is this? About once a year Crossref releases a public metadata file that includes all of Crossref\u0026rsquo;s public metadata. We typically release this as a tar file and distribute it via Academic Torrents.\nUsers of Crossref\u0026rsquo;s Plus service can access similar data snapshots that we updated monthly. These are also tar files, but we distribute them via the Plus service API, and you need a Plus API token to access them.\nIn either case, these files are large and unwieldy. This document provides you with tips that should make your life easier when handling Crossref public metadata files and Plus snapshots.\nDownloading the public data file directly from AWS The first three public data files were only accessible via torrent download to keep costs manageable and to enable anonymous downloads. As an alternative, we are also making the 2023 file available via a \u0026ldquo;Requester Pays\u0026rdquo; option.\nA copy of the public data file is stored on AWS S3 in a bucket configured with the \u0026ldquo;Requester Pays\u0026rdquo; option. This means that rather than the bucket owner (Crossref) paying for bandwidth and transfer costs when downloading objects, the requester pays instead. The cost is expected to vary slightly year to year depending on variables like file size and end-user setups. The 2024 file is approximately 200 GB, and plugging that into this calculator results in an estimated cost of $18 USD. More information about \u0026ldquo;Requester Pays\u0026rdquo; can be found in the AWS documentation.\nThe bucket is called api-snapshots-reqpays-crossref. You can use either the AWS CLI or the AWS REST API to access it. There are code examples in the AWS documentation.\nUsing the AWS CLI for example, after authenticating, you could run:\n# List the objects in the bucket aws s3 ls --request-payer requester s3://api-snapshots-reqpays-crossref # Download the public data file aws s3api get-object --bucket api-snapshots-reqpays-crossref --request-payer requester --key April-2023-public-data-file-from-crossref.tar ./April-2023-public-data-file-from-crossref.tar Note that the key part of the command is --request-payer requester which is mandatory. Without that flag, the command will fail.\nHandling tar files Q: The tar file contains many files that, in turn, contain the individual DOI records. Some of these files are very large and hard to process. Could you break them out into separate files per DOI instead?\nA: Yes, we could. But that creates its own set of problems. Standard filesystems on Linux/macOS/Windows really, really don\u0026rsquo;t like you to create hundreds of millions of small files on them. Even standard command-line tools like ls choke on directories with more than a few thousand files in them. Unless you are using a specialized filesystem, formatted with custom inode settings optimized for hundreds of millions of files- saving each DOI as an individual record will bring you a world of hurt.\nQ: Gah! The tar file is large and uncompressing it takes up a ton of room and generates a huge number of files. What can we do to make this easier? Can you split the tar file so we can manage it in batches?\nA: Don\u0026rsquo;t uncompress or extract the tar file. 
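For example, with Python's tarfile module you can iterate over the archive in place - a minimal sketch, assuming the April 2023 file named in the AWS example above and the usual layout in which each member file is a JSON document with an items array:
import json
import tarfile

# Iterate over the records without extracting anything to disk.
# The filename is the public data file downloaded above; adjust as needed.
with tarfile.open("April-2023-public-data-file-from-crossref.tar", "r") as archive:
    for member in archive:
        if not member.isfile():
            continue
        with archive.extractfile(member) as fh:
            batch = json.load(fh)              # one batch of DOI records per file
        for record in batch.get("items", []):
            pass                               # process each record here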
You can read the files straight from the compressed tar file.\nQ: But won\u0026rsquo;t reading files straight from the tar file be slow?\nWe did three tests - all done on the same machine using the same tar file, which, at the time of this writing, contained 42,210 files which, in turn, contained records for 127,574,634 DOIs.\nTest 1: Decompressing and untarring the file took about 82 minutes.\nOn the other hand\u0026hellip;\nTest 2: A python script iterating over each filename in the tar file (without extracting and reading the file into memory) completed in just 29 minutes.\nTest 3: A python script iterating over each filename in the tar file and extracting and reading the file into memory completed in just 61 minutes.\nBoth of the above scripts worked in a single process. However, you could almost certainly further optimize by parallelizing reading the files from the tar file.\nIn short - the tar file is a lot easier to handle if you don\u0026rsquo;t decompress and/or extract it. Instead, it is easiest to read directly from the compressed tar file.\nDownloading and using Plus snapshots Q: How should I best use the snapshots? Can we get them more frequently than each month?\nA: The monthly snapshots include all public Crossref metadata up to and including data for the month before they were released. We make them available to seed and occasionally refresh a local copy of the Crossref database in any system you are developing that requires Crossref metadata. In most cases, you should just keep this data current by using the Crossref REST API to retrieve new or modified records. Typically, only a small percentage of the snapshot changes from month to month. So if you are downloading it repeatedly, you are just downloading the same unchanged records time and time again. Occasionally, there will be a large number of changes in a month. This typically happens when:\nA large Crossref member adds or updates a lot of records at once.\nWe add a new metadata element to the schema.\nWe change the way we calculate something (e.g. citation counts) and that affects a lot of records.\nIn these cases, it makes sense to refresh your metadata from the newly downloaded snapshot instead of using the API.\nIn short, if you are downloading the snapshot more than a few times a year - you are probably doing something very inefficient.\nQ: The snapshot is large and difficult to download. I keep having it fail and have to start the download again. Can you split the snapshot so that I can download smaller parts instead?\nA: If your download gets interrupted, you can resume the download from the point it got interrupted instead of starting over. This is easiest to do using something like wget.\nBut you can also do it with curl. 
You can try it yourself:\n\u0026gt; export TOKEN=\u0026#39;\u0026lt;insert-your-token-here\u0026gt;\u0026#39; \u0026gt; curl -o \u0026#34;all.json.tar.gz\u0026#34; --progress-bar -L -X GET https://0-api-crossref-org.libus.csd.mu.edu/snapshots/monthly/latest/all.json.tar.gz -H \u0026#34;Crossref-Plus-API-Token: ${TOKEN}\u0026#34; Wait a few minutes, then execute ctrl-c to interrupt the download.\nThen to resume it from where it left off, include the switch -C -:\ncurl -o \u0026#34;all.json.tar.gz\u0026#34; --progress-bar -L -X GET https://0-api-crossref-org.libus.csd.mu.edu/snapshots/monthly/latest/all.json.tar.gz -H \u0026#34;Crossref-Plus-API-Token: ${TOKEN}\u0026#34; -C - Then the curl command will calculate the byte offset from where it left off and continue the download from there.\nSupplementary tools and alternative formats In late 2023 we started experimenting with supplementary tools and alternative file formats meant to make our public data files easier to use by broader audiences.\nThe Crossref Data Dump Repacker is a python application that allows you to repack the Crossref data dump into the JSON Lines format.\ndoi2sqlite is a tool for loading Crossref metadata into a SQLite database.\nAnd for finding the record of a particular DOI, we\u0026rsquo;ve published a python API for interacting with the annual public data files. This tool can create an index of the DOIs in the file, enabling easier record lookups without having to iterate over the entire file, which can take hours. A torrent is available for the 2024 index in SQLite format if you do not wish to generate it yourself.\n", "headings": ["What is this?","Downloading the public data file directly from AWS","Handling tar files","Downloading and using Plus snapshots","Supplementary tools and alternative formats"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/a-non-technical-introduction-to-our-api/", "title": "A non-technical introduction to our API", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "You can query our API to find answers to questions about our metadata. Copy and paste these URLs into a browser to find out what has been registered with us.\nYou can modify the parameters in each URL, for example: if https://0-api-crossref-org.libus.csd.mu.edu/members?rows=100 gives the first 100 accounts, modify it to https://0-api-crossref-org.libus.csd.mu.edu/members?rows=200 to give the first 200. There is a maximum row limit of 2000.\nHow many accounts do we have? (This includes members and others, both inactive and active) https://api.", "content": "You can query our API to find answers to questions about our metadata. Copy and paste these URLs into a browser to find out what has been registered with us.\nYou can modify the parameters in each URL, for example: if https://0-api-crossref-org.libus.csd.mu.edu/members?rows=100 gives the first 100 accounts, modify it to https://0-api-crossref-org.libus.csd.mu.edu/members?rows=200 to give the first 200. There is a maximum row limit of 2000.\nHow many accounts do we have? (This includes members and others, both inactive and active) https://0-api-crossref-org.libus.csd.mu.edu/members?rows=0 Who are they? Let’s look at the first 100 accounts https://0-api-crossref-org.libus.csd.mu.edu/members?rows=100 And the second 100 accounts https://0-api-crossref-org.libus.csd.mu.edu/members?rows=100\u0026amp;offset=100 How many records do we have? 
https://0-api-crossref-org.libus.csd.mu.edu/works?rows=0 What record types do we have? https://0-api-crossref-org.libus.csd.mu.edu/types How many journal article DOIs do we have? https://0-api-crossref-org.libus.csd.mu.edu/types/journal-article/works?rows=0 How many conference proceedings records do we have? https://0-api-crossref-org.libus.csd.mu.edu/types/proceedings-article/works?rows=0 How can I see all the records registered under a given prefix? https://0-api-crossref-org.libus.csd.mu.edu/prefixes/10.21240/works?select=DOI\u0026amp;rows=1000 And how can I see all the works registered under a given prefix? https://0-api-crossref-org.libus.csd.mu.edu/prefixes/10.35195/works If your prefix has more than 1,000 DOIs registered, not all of them will display on one page, so it’s best to query for them using machine retrieval from the REST API But eventually you will probably want to start looking at metadata records. Let’s search for records that have the word \u0026ldquo;blood\u0026rdquo; in the metadata and see how many there are. https://0-api-crossref-org.libus.csd.mu.edu/works?query=blood\u0026amp;rows=0 Let’s look at some of the results. https://0-api-crossref-org.libus.csd.mu.edu/works?query=%22blood%22\u0026amp; Now let’s look at one of the records https://0-api-crossref-org.libus.csd.mu.edu/works/10.1155/2014/413629. Interesting. The record has ORCID iDs, full-text links, and license links. You need license and full-text links to text and data mine the content. How many works have license information? https://0-api-crossref-org.libus.csd.mu.edu/works?filter=has-license:true\u0026amp;rows=0 How many license types are there? https://0-api-crossref-org.libus.csd.mu.edu/licenses?rows=0 OK, let’s see how many records with the word \u0026ldquo;blood\u0026rdquo; in the metadata also have license information and full-text links https://0-api-crossref-org.libus.csd.mu.edu/works?filter=has-license:true,has-full-text:true\u0026amp;query=blood\u0026amp;rows=0 And here\u0026rsquo;s how to look up the XML behind a DOI https://0-api-crossref-org.libus.csd.mu.edu/works/10.5555/487hjd.xml. Remove .xml to see results in JSON https://0-api-crossref-org.libus.csd.mu.edu/works/10.5555/487hjd. Learn more about constructing API queries by combining components, identifiers, and parameters.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/text-and-data-mining/", "title": "Text and data mining", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Text and data mining (TDM) is the automatic (bot) analysis and extraction of information from large numbers of documents. TDM is more effective than screen-scraping, which is inefficient, error-prone, and fragile. Screen-scraping puts an unnecessary load on member sites (downloading html, css, javascript and other superfluous web assets), will often break if members (even slightly) redesign their websites, and typically is tied to specific members’ page layouts (and therefore need to be adapted on a member-by-member basis).", "content": "Text and data mining (TDM) is the automatic (bot) analysis and extraction of information from large numbers of documents. TDM is more effective than screen-scraping, which is inefficient, error-prone, and fragile. 
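As an aside on the query examples above: the same calls work from a script. A minimal Python sketch, assuming the requests library and a placeholder mailto address in the User-Agent header so the request is served by the polite pool (rows=0 returns only the count):
import requests

# Count works mentioning "blood" that also carry license and full-text links,
# mirroring the browser queries above.
headers = {"User-Agent": "my-tdm-tool/1.0 (mailto:name@example.org)"}  # placeholder contact
params = {"query": "blood", "filter": "has-license:true,has-full-text:true", "rows": 0}
resp = requests.get("https://0-api-crossref-org.libus.csd.mu.edu/works", params=params, headers=headers)
resp.raise_for_status()
print(resp.json()["message"]["total-results"])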
Screen-scraping puts an unnecessary load on member sites (downloading html, css, javascript and other superfluous web assets), will often break if members (even slightly) redesign their websites, and typically is tied to specific members’ page layouts (and therefore needs to be adapted on a member-by-member basis).\nUsing the DOI as the basis for TDM in a common API provides several benefits:\nAn easy way to de-duplicate documents that may be found on several sites. Processing the same document on multiple sites could easily skew TDM results and traditional techniques for eliminating duplicates (such as hashes) will not work reliably if the document in question exists in several representations (such as PDF, HTML, ePub) and/or versions (such as author’s accepted manuscript, and version of record) Persistent provenance information. Using the DOI as a key allows researchers to retrieve and verify the provenance of the items in the TDM corpus, many years into the future when traditional HTTPS URLs will have already broken An easy way to document, share, and compare corpora without having to exchange the actual documents A mechanism to ensure the reproducibility of TDM results using the source documents A mechanism to track the impact of updates, corrections, retractions, and withdrawals on corpora. Researchers are increasingly interested in performing TDM with scholarly content. This requires automated access to the full-text content of large numbers of articles. The format of the full-text content varies by member. Our metadata helps researchers get access to this content and enables members to provide it.\nHow TDM works A member deposits URLs for their full-text and license/waivers (along with other publication metadata) with us A researcher finds relevant content registered with us (such as journal articles) using a discovery service The researcher retrieves metadata for each item of registered content, including license information The researcher makes a full-text request from the member The member checks the subscription rights of the researcher and returns the full-text to them. Researchers and text miners can access content URLs and license information via our API. If you are a member and would like to begin depositing URLs and access indicators, please contact us.\n", "headings": ["How TDM works "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/text-and-data-mining-for-researchers/", "title": "Text and data mining for researchers", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our API allows researchers to easily harvest full-text documents from all participating members, regardless of whether the content is open access or subscription. The member is responsible for delivering the full-text content requested, so open access content can simply be delivered, while subscription content is available through access control systems.\nTo mine our metadata, you should have a list of DOIs for the content you want to download, and a safelist of licenses that you accept.", "content": "Our API allows researchers to easily harvest full-text documents from all participating members, regardless of whether the content is open access or subscription. 
The member is responsible for delivering the full-text content requested, so open access content can simply be delivered, while subscription content is available through access control systems.\nTo mine our metadata, you should have a list of DOIs for the content you want to download, and a safelist of licenses that you accept. You can get a list of DOIs from citations, our metadata search, our metadata API, or another source.\nFor each DOI, you should:\nUse content negotiation to get the metadata for the DOI Check to see if the DOI has license and full-text details in its metadata Check the license against your safelist of acceptable licenses If you agree to the license, follow the link and download the full-text of the content item. The absence of a license does not mean that the full-text can be used without one. Members should deposit both the license and the full-text link at the same time.\nWatch a basic introduction or a more detailed presentation on how to perform TDM using our API.\nExample using the cURL utility You should be able to integrate with the API very easily with your TDM software.\nStep 1 Fetch the metadata: at its simplest, you can issue a HTTP GET request using a Crossref DOI and use DOI content negotiation. For example, the following cURL command will retrieve the metadata for the DOI 10.5555/515151:\ncurl -L -iH \u0026#34;Accept: application/vnd.crossref.unixsd+xml\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151 This will return the metadata for the specified DOI, as well as a link header which points to several representations of the full-text on the member’s site:\nHTTP/1.1 200 OK Date: Wed, 31 Jul 2013 11:24:14 GMT Server: Apache/2.2.3 (CentOS) Link: \u0026lt;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/515151.pdf\u0026gt;; rel=\u0026#34;http://0-id-crossref-org.libus.csd.mu.edu/schema/fulltext\u0026#34;; type=\u0026#34;application/pdf\u0026#34;, \u0026lt;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/515151.xml\u0026gt;; rel=\u0026#34;http://0-id-crossref-org.libus.csd.mu.edu/schema/fulltext\u0026#34;; type=\u0026#34;application/xml\u0026#34; Vary: Accept Content-Length: 2189 Status: 200 OK Connection: close Content-Type: application/vnd.crossref.unixsd+xml;charset=utf-8 Access this full-text link information using Ruby:\nrequire \u0026#39;open-uri\u0026#39; r = open(\u0026#34;http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151\u0026#34;, \u0026#34;Accept\u0026#34; =\u0026gt; \u0026#34;application/vnd.crossref.unixsd+xml\u0026#34;) puts r.meta[\u0026#39;link\u0026#39;] Access this full-text link information using Python:\nimport urllib.request opener = urllib.request.build_opener() opener.addheaders = [(\u0026#39;Accept\u0026#39;, \u0026#39;application/vnd.crossref.unixsd+xml\u0026#39;)] r = opener.open(\u0026#39;http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151\u0026#39;) print (r.info()[\u0026#39;Link\u0026#39;]) Access this full-text link information using R:\nlibrary(httr) r = content(GET(\u0026#39;http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151\u0026#39;, add_headers(Accept = \u0026#39;application/vnd.crossref.unixsd+xml\u0026#39;))) r If present, the full-text URL will also be returned in the metadata for the DOI. 
For instance, in our unixref schema, you would also see this in the returned metadata:\nhttp://annalsofpsychoceramics.labs.crossref.org/fulltext/10.5555/515151.pdf http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/515151.xml Step 2 Deciding what to do. Members who enable mining through us need to register a stable license URL using the \u0026lt;license_ref\u0026gt; element. For example, this unixref extract shows that the DOI is licensed under the Creative Commons CC-BY license:\n\u0026lt;license_ref\u0026gt;http://creativecommons.org/licenses/by/3.0/deed.en_US But this shows that the DOI is licensed under a member’s proprietary license:\n\u0026lt;license_ref\u0026gt;http://www.annalsofpschoceramics.org/art_license.html The license that the URL points to does not have to be machine-readable. Check the license against your safelist. If you agree to it, you can proceed. If you don’t agree to it, put it in a list of licenses to review later and add to your safelist (or blacklist).\nIf a content item is under embargo, a slight complication arises: the member can use a start_date attribute on the \u0026lt;license_ref\u0026gt; element. In this example, the content item is under a proprietary license for a year after its publication date, after which it is licensed under a CC-BY license:\n\u0026lt;license_ref start_date=\u0026#34;2013-02-03\u0026#34;\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/license \u0026lt;license_ref start_date=\u0026#34;2014-02-03\u0026#34;\u0026gt;http://creativecommons.org/licenses/by/3.0/deed.en_US TDM tools can easily use a combination of the \u0026lt;license_ref\u0026gt; element(s) and the start_date attribute to determine if the content item is currently under embargo.\nIf you are not interested in receiving the metadata for the DOI, you can simply issue an HTTPS HEAD request and you will get the link header without the rest of the DOI record.\nStep 3 Fetching the full-text: you can now perform a standard GET request on the URL to download the full-text from the member’s site. Because the bulk downloading of large amounts of data may put a strain on the member’s servers, we have defined a set of rate-limiting HTTPS headers. You are not obliged to test for and act on these headers, and not all members will use them, but doing so will avoid surprises.\nAn example session using rate limiting curl -k \u0026#34;https://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/515151\u0026#34; -D - -L -O HTTP/1.1 200 OK Date: Fri, 02 Aug 2013 07:10:53 GMT Server: Apache/2.2.22 (Ubuntu) X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.13 CR-TDM-Client-Token: hZqJDbcbKSSRgRG_PJxSBA CR-TDM-Rate-Limit: 5 CR-TDM-Rate-Limit-Remaining: 4 CR-TDM-Rate-Limit-Reset: 1375427514 X-Content-Type-Options: nosniff Last-Modified: Tue, 23 Apr 2013 15:52:01 GMT Status: 200 Content-Length: 9426 Content-Type: application/pdf Problems accessing full-text URLs using our API If you are having trouble accessing the full-text text URLs returned to you in the link header, this may be because:\nYou have hit a rate limit (learn more about rate-limiting headers) You are trying to access content from a publisher that requires you to accept a TDM license; consider modifying your tools to work with such publishers\u0026rsquo; licenses. 
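Putting Steps 1-3 together, a minimal Python sketch (the DOI is the example used in Step 1, the safelist is yours to define, and the license check is deliberately simplified - a real tool would parse the <license_ref> elements and their start_date attributes from the returned XML):
import requests

doi_url = "http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151"               # example DOI from Step 1
safelist = {"http://creativecommons.org/licenses/by/3.0/deed.en_US"}  # licenses you accept

# Step 1: content negotiation returns the metadata plus a Link header with full-text URLs.
r = requests.get(doi_url, headers={"Accept": "application/vnd.crossref.unixsd+xml"})
links = requests.utils.parse_header_links(r.headers["Link"]) if "Link" in r.headers else []
fulltext = [l["url"] for l in links if l.get("rel", "").endswith("/schema/fulltext")]

# Step 2: a simplified license check against the safelist.
if not any(lic in r.text for lic in safelist):
    raise SystemExit("No safelisted license found; set this DOI aside for review")

# Step 3: fetch the full text, watching the optional CR-TDM rate-limiting headers.
for url in fulltext:
    resp = requests.get(url)
    print(url, resp.status_code, resp.headers.get("CR-TDM-Rate-Limit-Remaining"))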
", "headings": ["Example using the cURL utility ","Step 1 ","Step 2 ","Step 3 ","An example session using rate limiting ","Problems accessing full-text URLs using our API "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/text-and-data-mining-for-members/", "title": "Text and data mining for members", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Even if you already have an API, ours provides additional benefits as it\u0026rsquo;s a common, standards-based API that works across all members and records. Researchers having to learn many different member APIs for TDM projects doesn’t scale well.\nIt is up to you to decide formats for your full-text in: some offer PDF, others XML, and some plain text. Some members vary what they deliver depending on the age of the content or other variables.", "content": "Even if you already have an API, ours provides additional benefits as it\u0026rsquo;s a common, standards-based API that works across all members and records. Researchers having to learn many different member APIs for TDM projects doesn’t scale well.\nIt is up to you to decide formats for your full-text in: some offer PDF, others XML, and some plain text. Some members vary what they deliver depending on the age of the content or other variables. Our API does not provide automatic access to subscription content - access to subscription content is managed on your site using your existing access control systems.\nAs a member, you need to do two things to enable text and data mining for the metadata records that you have registered with us:\nInclude the link to full-text in the metadata for each DOI so researchers can follow it to access your content Include a license URL in the metadata for each DOI so researchers can use this to find out if they have permission to carry out TDM with your content item Register this information with us using a resource-only deposit or by uploading a .csv file containing the URLs and the related DOIs.\nIf you are concerned about the impact of automated TDM harvesters on your site performance, you may choose to implement rate-limiting headers.\nRate limiting TDM may change the volume of traffic that your servers have to handle when researchers download large numbers of files in bulk. You can mitigate performance issues with rate limiting.\nWe have defined a set of standard HTTPS headers that can be used by servers to convey rate-limiting information to automated text and data mining tools. Well-behaved TDM tools can simply look for these headers when they query member sites in order to understand how to behave so as not to affect the site’s performance. The headers allow a member to define a rate limit window - a time span, such as a minute, an hour, or a day. 
The member can then specify:\nHeader name Example value Explanation CR-TDM-Rate-Limit 1500 Maximum number of full-text downloads that are allowed to be performed in the defined rate limit window CR-TDM-Rate-Limit-Remaining 76 Number of downloads left for the current rate limit window CR-TDM-Rate-Limit-Reset 1378072800 Remaining time (in UTC epoch seconds) before the rate limit resets and a new rate limit window is started We do not provide or enforce this rate limiting - it’s up to you to implement it if required, and to define a rate limit appropriate for your servers.\nExample member site We have created TinyPub to show an implementation of our API, including rate limiting and IP-based subscription access. You can download this code for reference, but please note that it’s just to illustrate the workings of the system, and is not intended for production.\n", "headings": ["Rate limiting ","Example member site "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/providing-licensing-information-to-tdm-tools/", "title": "Providing licensing information to TDM tools", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Providing the researcher with a link to full-text of scholarly content is of limited value if the researcher has no means of automatically determining what they are allowed to do with it. Members registering licensing information in their metadata (using the \u0026lt;license_ref\u0026gt; element) lets researchers know when they can perform TDM and under what conditions. This license could be proprietary, or an open license such as Creative Commons.\n\u0026lt;program name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;license_ref\u0026gt; http://creativecommons.", "content": "Providing the researcher with a link to full-text of scholarly content is of limited value if the researcher has no means of automatically determining what they are allowed to do with it. Members registering licensing information in their metadata (using the \u0026lt;license_ref\u0026gt; element) lets researchers know when they can perform TDM and under what conditions. This license could be proprietary, or an open license such as Creative Commons.\n\u0026lt;program name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;license_ref\u0026gt; http://creativecommons.org/licenses/by/3.0/deed.en_US \u0026lt;/license_ref\u0026gt; \u0026lt;/program\u0026gt; In the \u0026lt;license_ref\u0026gt; element, you can include simple embargo information by registering licenses with different start dates. 
For example, Annals of Psychoceramics B could show that an article was embargoed for a year by listing a proprietary license with a start date of the publication date and a Creative Commons license with a start date at the end of the embargo:\n\u0026lt;program name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;license_ref start_date=\u0026#34;2013-02-03\u0026#34;\u0026gt;http://annalsofpsychoceramics.org/proprietary_license.html\u0026lt;/license_ref\u0026gt; \u0026lt;license_ref start_date=\u0026#34;2014-02-03\u0026#34;\u0026gt; http://creativecommons.org/licenses/by/3.0/deed.en_US \u0026lt;/license_ref\u0026gt; \u0026lt;/program\u0026gt; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/rest-api/providing-full-text-links-to-tdm-tools/", "title": "Providing full-text links to TDM tools", "subtitle":"", "rank": 4, "lastmod": "2022-01-03", "lastmod_ts": 1641168000, "section": "Documentation", "tags": [], "description": "Our API makes use of content negotiation (the possibility to serve different versions of a document) to provide a common mechanism for TDM users to locate the full-text of articles on a member’s site. The member is responsible for delivering the content to the user, applying any access control, and applying usage statistics data to content accessed in this way.\nNormally, when you follow a DOI link, the browser sends a signal to the web server asking for the content to be returned in HTML for display in the browser.", "content": "Our API makes use of content negotiation (the possibility to serve different versions of a document) to provide a common mechanism for TDM users to locate the full-text of articles on a member’s site. The member is responsible for delivering the content to the user, applying any access control, and applying usage statistics data to content accessed in this way.\nNormally, when you follow a DOI link, the browser sends a signal to the web server asking for the content to be returned in HTML for display in the browser. Therefore, when you access a DOI using a browser, you are shown the member’s landing page.\nWith content negotiation, you can write programs that specify that, instead of returning a human-readable HTML representation of the landing page, the server should return a machine-readable representation of the data connected with the DOI.\nTo support TDM, members need to include full-text URL(s) in their metadata for each content item with a DOI. Anybody using our API to query will be able to retrieve these URLs and follow them directly to the full-text. 
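Returning to the embargo example above, a small sketch of how a TDM tool might decide which license currently applies (the dates are hard-coded from that example; current_license is a hypothetical helper):
from datetime import date

# license_ref URLs and start dates taken from the embargo example above.
licenses = [
    ("http://annalsofpsychoceramics.org/proprietary_license.html", date(2013, 2, 3)),
    ("http://creativecommons.org/licenses/by/3.0/deed.en_US", date(2014, 2, 3)),
]

def current_license(licenses, today=None):
    # The applicable license is the one with the latest start_date that is not in the future.
    today = today or date.today()
    started = [(url, start) for url, start in licenses if start <= today]
    return max(started, key=lambda pair: pair[1])[0] if started else None

print(current_license(licenses, today=date(2013, 6, 1)))   # still under the proprietary license
print(current_license(licenses, today=date(2014, 6, 1)))   # CC BY applies once the embargo ends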
Members who want to be able to support multiple formats of the full-text of the article will be able to do so - for example, they could support the retrieval of either PDF, XML, or HTML, or just one of these formats.\nThere are two methods that members can use to register links to full-text content: for members who do (method 1) and do not (method 2) support content negotiation on their platforms.\nMethod 1: Member provides a URL which points to content negotiation resource This method is for members who support content negotiation on their platforms.\nThis section of XML illustrates how to specify a single URL end-point where the member platform supports content negotiation:\nhttp://annalsofpsychoceramics.labs.crossref.org/fulltext/10.5555/525252 In this case, the following cURL HTTP GET request:\ncurl -L -iH \u0026#34;Accept: text/turtle\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.5555/525252 will return the following in the HTTPS link header:\nLink: \u0026lt;https://0-data-crossref-org.libus.csd.mu.edu/fulltext/10.5555%2F525252\u0026gt;; rel=\u0026#34;http://0-id-crossref-org.libus.csd.mu.edu/schema/fulltext\u0026#34;; anchor=\u0026#34;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/525252\u0026#34; Example deposit file for DOI 10.55555/525252.\nMethod 2: Member provides specific URLs for each mime-type they support This method is for members who do not support content negotiation on their platforms.\nThis section of XML for the DOI 10.5555/515151 illustrates how to specify separate full-text URLs for separate mime types.\nhttp://annalsofpsychoceramics.labs.crossref.org/fulltext/10.5555/515151.pdf http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/515151.xml In the above case, the following content negotiation request (using cURL) on the DOI 10.5555/515151:\ncurl -L -iH \u0026#34;Accept: text/turtle\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151 would return the following in the HTTPS LINK header:\nLink: \u0026lt;https://0-data-crossref-org.libus.csd.mu.edu/fulltext/10.5555%2F515151\u0026gt;; rel=\u0026#34;http://0-id-crossref-org.libus.csd.mu.edu/schema/fulltext\u0026#34;; type=\u0026#34;application/pdf\u0026#34;; anchor=\u0026#34;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/515151.pdf\u0026#34;, \u0026lt;https://0-data-crossref-org.libus.csd.mu.edu/fulltext/10.5555%2F515151\u0026gt;; rel=\u0026#34;http://0-id-crossref-org.libus.csd.mu.edu/schema/fulltext\u0026#34;; type=\u0026#34;application/xml\u0026#34;; anchor=\u0026#34;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/fulltext/10.5555/515151.xml\u0026#34; which would, in turn, direct the requestor to the appropriate URLs for whatever full-text representations are supported.\nExample deposit file for DOI 10.5555/515151.\n", "headings": ["Method 1: Member provides a URL which points to content negotiation resource ","Method 2: Member provides specific URLs for each mime-type they support "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/openurl/", "title": "OpenURL", "subtitle":"", "rank": 4, "lastmod": "2023-01-27", "lastmod_ts": 1674777600, "section": "Documentation", "tags": [], "description": "Our OpenURL service is used primarily by library link resolvers but can also be used to look up metadata records. 
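As a footnote to Method 2 above, the type attribute in the Link header is what lets a client pick one representation. A minimal Python sketch, reusing the example DOI and the requests library:
import requests

# Ask for the Link header via content negotiation and keep only the PDF representation.
r = requests.get("http://0-dx-doi-org.libus.csd.mu.edu/10.5555/515151",
                 headers={"Accept": "text/turtle"})
links = requests.utils.parse_header_links(r.headers["Link"]) if "Link" in r.headers else []
pdf_links = [l for l in links if l.get("type") == "application/pdf"]
print(pdf_links)   # each entry carries the url, rel, type, and anchor attributes shown above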
Please note that OpenURL retrieval includes only bibliographic metadata.\nOn this page, learn more about:\nHow to access OpenURL OpenURL metadata queries Citation metadata parameters Other parameters OpenURL results Retrieving metadata Example metadata queries Example DOI queries NISO 0.1 or 1.0 URLs Simple reverse lookup Retrieve metadata for a DOI A journal article lookup How to access OpenURL Access to the OpenURL service is free, but it does require you to identify yourself using your email address.", "content": "Our OpenURL service is used primarily by library link resolvers but can also be used to look up metadata records. Please note that OpenURL retrieval includes only bibliographic metadata.\nOn this page, learn more about:\nHow to access OpenURL OpenURL metadata queries Citation metadata parameters Other parameters OpenURL results Retrieving metadata Example metadata queries Example DOI queries NISO 0.1 or 1.0 URLs Simple reverse lookup Retrieve metadata for a DOI A journal article lookup How to access OpenURL Access to the OpenURL service is free, but it does require you to identify yourself using your email address. You do not need to register your email address with us in advance, but you do need to include your email address in your query. Find out more.\nIf you are a librarian and you need to use OpenURL with your library link resolver, an email address should be supplied in queries that the link resolver sends to Crossref. This will be configured in your link resolver.\nProviding an email address in your queries means that we can identify and contact a user in the rare event that their queries are overloading our system or otherwise causing issues. Any contact information that you provide in your requests will only stay in our logs for 90 days. We do not give this contact information to anyone else.\nOpenURL access using an email address You need to include your email address in the pid parameter of the OpenURL request. (Please note, in this context, pid stands for personal ID and does not mean a persistent identifier such as a DOI, ROR or ORCID iD).\nFor interfaces that require a key, your email address is your key.\nUse the format below but include your own email address instead of name@someplace.com:\nhttps://doi.crossref.org/openurl?pid=name@someplace.com\u0026amp;aulast=Maas%20LRM\u0026amp;title= JOURNAL%20OF%20PHYSICAL%20OCEANOGRAPHY\u0026amp;volume=32\u0026amp;issue=3\u0026amp;spage=870\u0026amp;date=2002 OpenURL metadata queries The OpenURL query interface uses metadata to identify a matching DOI, and redirects the user to the target of the DOI.\nFor example, this query contains an author name, a journal title, volume, issue, first page, and publication year:\nhttps://doi.crossref.org/openurl?pid=email@address.com\u0026amp;aulast=Maas%20LRM\u0026amp;title= JOURNAL%20OF%20PHYSICAL%20OCEANOGRAPHY\u0026amp;volume=32\u0026amp;issue=3\u0026amp;spage=870\u0026amp;date=2002 The OpenURL query interface matches the query with a metadata record and redirects the user to the relevant persistent identifier landing page at https://0-doi-org.libus.csd.mu.edu/10.1175/1520-0485(2002)032\u0026lt;0870:CT\u0026gt;2.0.CO;2.\nThe OpenURL Query Interface can accept these parameters:\nCitation metadata parameters issn title (journal title) aulast (family name, preferably of first author) volume issue spage (first page) date (publication year YYYY) stitle (short title, which may be supplied as an alternative to title) Other parameters pid (your email address). 
Note: pid (personal id) is different from PID (persistent identifier) redirect (set to false to return the DOI in XML format instead of redirecting to the target URL. The default is true) multihit (set to true to return DOIs for more than one content item if our system does not find an exact match. The default is false) format (set to unixref to return metadata in UNIXREF format) OpenURL results By default, an OpenURL match will direct the user to the landing page registered for the matched metadata record.\nIn most instances, only a single identifier will be returned. If more than one identifier is returned, the user will be directed to a list of all available DOIs. For example, the query:\nhttps://www.crossref.org/openurl?pid=email@address.com\u0026amp;title=Science\u0026amp;aulast=Fernández\u0026amp;date=2009 will return multiple results.\nRetrieving metadata OpenURL may be used to retrieve metadata records by setting the redirect parameter to \u0026ldquo;false\u0026rdquo;. By default an OpenURL response uses the XSD XML format. The UNIXREF format may be requested by setting the format parameter to \u0026ldquo;unixref\u0026rdquo;.\nExample metadata queries This query will return an XSD-formatted XML metadata record:\nhttps://doi.crossref.org/openurl?issn=03770273\u0026amp;aulast=Walker\u0026amp;volume=54\u0026amp;spage=117\u0026amp;date=1983\u0026amp;redirect=false\u0026amp;pid=email@address.com There are multiple matches for this query; when multihit=true, the metadata record is returned for all results:\nhttps://doi.crossref.org/openurl?issn=03603016\u0026amp;volume=54\u0026amp;issue=2\u0026amp;spage=215\u0026amp;date=2002\u0026amp;multihit=true\u0026amp;pid=email@address.com Setting multihit=exact will return no matches:\nhttps://doi.crossref.org/openurl?issn=03603016\u0026amp;volume=54\u0026amp;issue=2\u0026amp;spage=215\u0026amp;date=2002\u0026amp;multihit=exact\u0026amp;pid=support@crossref.org Example DOI queries We support DOI queries formatted as OpenURL version 0.1 requests:\nOpen URL query https://0-doi-crossref-org.libus.csd.mu.edu/openurl/?pid=email@address.com\u0026amp;id=doi:10.1103/PhysRev.47.777\u0026amp;noredirect=true Crossref query https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid=email@address.com\u0026amp;id=10.1006/jmbi.2000.4282 Like metadata queries, DOI query results are returned in XML format.\nNISO 0.1 or 1.0 URLs We also support NISO 0.1 and 1.0 URLs as well as some common deviations. In general it supports the San Antonio Profile #1, including in-line, by-value, and by-reference. 
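For the metadata queries shown above, a small Python sketch of building the same request programmatically (urlencode handles the escaping; the email address is a placeholder):
from urllib.parse import urlencode

# Rebuild the first example metadata query above; redirect=false returns the matched
# record as XML instead of redirecting to the DOI's target page.
params = {
    "pid": "name@example.org",   # your email address, as required for access
    "issn": "03770273",
    "aulast": "Walker",
    "volume": "54",
    "spage": "117",
    "date": "1983",
    "redirect": "false",
}
print("https://doi.crossref.org/openurl?" + urlencode(params))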
In the presence of a url_ver= Z39.88-2004 parameter this service will operate on a info:ofi/fmt:kev:mtx:ctx context format with referent formats info:ofi/fmt:kev:mtx:journal or info:ofi/fmt:kev:mtx:book.\nSimple reverse lookup https://0-doi-crossref-org.libus.csd.mu.edu/openurl?pid=email@address.com\u0026amp;url_ver=Z39.88-2004\u0026amp;rft_id=info:doi/10.1103/PhysRev.47.777 Retrieve metadata for a DOI https://0-doi-crossref-org.libus.csd.mu.edu/openurl?pid=email@address.com\u0026amp;url_ver=Z39.88-2004\u0026amp;rft_id=info:doi/10.1361/15477020418786\u0026amp;noredirect=true A journal article lookup https://0-doi-crossref-org.libus.csd.mu.edu/openurl?pid=email@address.com\u0026amp;url_ver=Z39.88-2004\u0026amp;rft_val_fmt=info:ofi/fmt:kev:mtx:journal\u0026amp;rft.atitle=Isolation of a common receptor for coxsackie B\u0026amp;rft.jtitle=Science\u0026amp;rft.aulast=Bergelson\u0026amp;rft.auinit=J\u0026amp;rft.date=1997\u0026amp;rft.volume=275\u0026amp;rft.spage=1320\u0026amp;rft.epage=1323 ", "headings": ["How to access OpenURL ","OpenURL access using an email address ","OpenURL metadata queries ","Citation metadata parameters ","Other parameters ","OpenURL results ","Retrieving metadata ","Example metadata queries ","Example DOI queries ","Open URL query ","Crossref query ","NISO 0.1 or 1.0 URLs ","Simple reverse lookup ","Retrieve metadata for a DOI ","A journal article lookup "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/simple-text-query/", "title": "Simple Text Query", "subtitle":"", "rank": 4, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "The Simple Text Query form allows you to retrieve DOI names for journal articles, books, and chapters by cutting and pasting a reference or reference list into the query box. References are entered as a standard bibliographic entry, such as:\nClow GD, McKay CP, Simmons Jr. GM, and Wharton RA, Jr. 1988. Climatological observations and predicted sublimation rates at Lake Hoare, Antarctica. Journal of Climate 1:715-728. For best results, each reference should appear on a single line.", "content": "The Simple Text Query form allows you to retrieve DOI names for journal articles, books, and chapters by cutting and pasting a reference or reference list into the query box. References are entered as a standard bibliographic entry, such as:\nClow GD, McKay CP, Simmons Jr. GM, and Wharton RA, Jr. 1988. Climatological observations and predicted sublimation rates at Lake Hoare, Antarctica. Journal of Climate 1:715-728. For best results, each reference should appear on a single line. When submitting multiple references, you can enter them in alphabetical order, or as a numbered list. Separate each reference using a blank line.\nUsing Simple Text Query to match references with DOIs If you are a member and want to match and deposit references, please see using Simple Text Query to add references. If you just want to match references to DOIs, follow these instructions.\nGo to the Simple Text Query form and enter a reference or list of references into the search box. Optional: select List all possible DOIs per reference to return multiple results select Include PubMed IDs in results to include PubMed IDs Click Submit The system attempts to find exactly one DOI for each reference. For some citations, multiple DOIs may be deposited for an item, or the metadata in either the reference or record registered with us is not sufficient to make a single match. 
Selecting List All Possible DOIs will return multiple results which will need to be evaluated to select the appropriate DOI.\nWe want our members to match and register as many references as possible, so there are no limits on the use of this service. We provide space for 1,000 references per submission.\n", "headings": ["Using Simple Text Query to match references with DOIs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/retrieving-identifiers-for-deposited-references/", "title": "Retrieving identifiers for deposited references", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Including a content item\u0026rsquo;s references as a citation list in the item’s metadata is encouraged, especially if you participate in our Cited-by service. During the deposit process, these references are matched and the resulting DOIs are returned in the deposit log.\nReferences that do not match at the time of deposit will be remembered internally and periodically re-run. These subsequent attempts to match references contribute to Cited-by data. The member who deposited the article containing these references can use the following API call to retrieve an updated list of the article\u0026rsquo;s matched references, using role credentials:", "content": "Including a content item\u0026rsquo;s references as a citation list in the item’s metadata is encouraged, especially if you participate in our Cited-by service. During the deposit process, these references are matched and the resulting DOIs are returned in the deposit log.\nReferences that do not match at the time of deposit will be remembered internally and periodically re-run. These subsequent attempts to match references contribute to Cited-by data. The member who deposited the article containing these references can use the following API call to retrieve an updated list of the article\u0026rsquo;s matched references, using role credentials:\nhttps://doi.crossref.org/getResolvedRefs?doi=DOI\u0026amp;usr=role\u0026amp;pwd=password Or using user credentials:\nhttps://doi.crossref.org/getResolvedRefs?doi=DOI\u0026amp;usr=email@address.com/role\u0026amp;pwd=password Note that the role used in the query must have permission to view references for the deposited DOI.\nExamples of retrieving identifiers for references The key returned in the matches is the same key supplied for the corresponding citation in the reference deposit. 
These results contain both journal articles and a book series:\n{ doi: \u0026#34;10.1103/PhysRevE.91.062714\u0026#34;, matched-references: [ { key: \u0026#34;PhysRevE.91.062714Cc1R1\u0026#34;, doi: \u0026#34;10.1038/nature01609\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc2R1\u0026#34;, doi: \u0026#34;10.1016/S0301-4622(02)00177-1\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc3R1\u0026#34;, doi: \u0026#34;10.1073/pnas.1214051110\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc4R1\u0026#34;, doi: \u0026#34;10.1529/biophysj.106.093062\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc5R1\u0026#34;, doi: \u0026#34;10.1073/pnas.1833310100\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc6R1\u0026#34;, doi: \u0026#34;10.1073/pnas.0802484105\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc7R1\u0026#34;, doi: \u0026#34;10.1021/jz301537t\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc8R1\u0026#34;, doi: \u0026#34;10.1103/PhysRevX.2.031012\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc9R1\u0026#34;, doi: \u0026#34;10.1016/j.bpj.2011.03.067\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc10R1\u0026#34;, doi: \u0026#34;10.1529/biophysj.104.045344\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc11R1\u0026#34;, doi: \u0026#34;10.1021/ja970640a\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc12R1\u0026#34;, doi: \u0026#34;10.1073/pnas.0509011103\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc13R1\u0026#34;, doi: \u0026#34;10.1126/science.1057886\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc14R1\u0026#34;, doi: \u0026#34;10.1103/PhysRevA.18.255\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc15R1\u0026#34;, doi: \u0026#34;10.1103/PhysRevE.89.012144\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc16R1\u0026#34;, doi: \u0026#34;10.1140/epje/e2003-00019-8\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc17R1\u0026#34;, doi: \u0026#34;10.1021/jp061840o\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc18R1\u0026#34;, doi: \u0026#34;10.1021/jp073413w\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc20R1\u0026#34;, doi: \u0026#34;10.1142/2012\u0026#34;, type: \u0026#34;book_title\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc23R1\u0026#34;, doi: \u0026#34;10.1103/PhysRev.91.1505\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc24R1\u0026#34;, doi: \u0026#34;10.1039/b918607g\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc25R1\u0026#34;, doi: \u0026#34;10.1021/jp311028e\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc26R1\u0026#34;, doi: \u0026#34;10.1016/S0006-3495(03)74892-9\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: 
\u0026#34;PhysRevE.91.062714Cc27R1\u0026#34;, doi: \u0026#34;10.1209/epl/i2003-10158-3\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc28R1\u0026#34;, doi: \u0026#34;10.1016/j.physa.2004.12.005\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc29R1\u0026#34;, doi: \u0026#34;10.1529/biophysj.106.094052\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc30R1\u0026#34;, doi: \u0026#34;10.1529/biophysj.106.094243\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;PhysRevE.91.062714Cc31R1\u0026#34;, doi: \u0026#34;10.1007/978-3-642-61544-3\u0026#34;, type: \u0026#34;book_title\u0026#34; } ] } These results contain a match made to a DataCite DOI (via reg-agency):\n{ doi: \u0026#34;10.50505/doi_2006042312\u0026#34;, matched-references: [ { key: \u0026#34;ref1\u0026#34;, doi: \u0026#34;10.50505/test_20051231320\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;ref2\u0026#34;, doi: \u0026#34;10.50505/test_200611161351\u0026#34;, type: \u0026#34;journal_article\u0026#34; }, { key: \u0026#34;ref3\u0026#34;, doi: \u0026#34;10.5167/UZH-29884\u0026#34;, reg-agency: \u0026#34;DataCite\u0026#34; } ] } ", "headings": ["Examples of retrieving identifiers for references "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/oai-pmh/", "title": "OAI-PMH", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "We operate an OAI-PMH service for the distribution of metadata in XML. This system is based on the OAI-PMH version 2 repository framework and implements the interface as documented here.\nThe service interface can be used in different ways by public metadata users, Metadata Plus subscribers, and Crossref members.\nPublic metadata users: we allow public access to two OAI verbs, ListSets and ListIdentifiers, which allow for discovery of available information. Metadata Plus subscribers: access to OAI verbs GetRecord and ListRecords require a subscription to our Metadata Plus service.", "content": "We operate an OAI-PMH service for the distribution of metadata in XML. This system is based on the OAI-PMH version 2 repository framework and implements the interface as documented here.\nThe service interface can be used in different ways by public metadata users, Metadata Plus subscribers, and Crossref members.\nPublic metadata users: we allow public access to two OAI verbs, ListSets and ListIdentifiers, which allow for discovery of available information. Metadata Plus subscribers: access to OAI verbs GetRecord and ListRecords require a subscription to our Metadata Plus service. Users of this service are provided with tokens to identify them. Tokens are placed in HTTPS Authorization headers as: Crossref-Plus-API-Token: Bearer FullTokenHere Crossref members may also use OAI-PMH to retrieve their deposited metadata using our deposit harvester using their member account. Set hierarchy We support selective harvesting according to sets defined by the hierarchy of publisher and title. 
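Looking back at the getResolvedRefs examples above, a short Python sketch of retrieving and walking the matched references (credentials and the DOI are placeholders, and we assume the response parses as JSON as the examples suggest):
import requests

params = {
    "doi": "10.1103/PhysRevE.91.062714",   # a DOI you deposited, as in the example above
    "usr": "role",                          # or "email@address.com/role"
    "pwd": "password",
}
resp = requests.get("https://doi.crossref.org/getResolvedRefs", params=params)
for ref in resp.json().get("matched-references", []):
    # References matched to another registration agency carry a reg-agency field.
    print(ref["key"], ref.get("doi"), ref.get("reg-agency", "Crossref"))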
Setspecs are formatted as follows:\nrecord type:prefix:pubID (learn more about publication IDs) (for example: J:10.1002:4 = Journal content by the publisher Wiley, journal title Applied Organometallic Chemistry) record type:prefix (for example: J:10.1002, journals owned by publisher Wiley) The from and until dates in a request capture when a record was deposited or updated, not the published date of the item. This means a request for records from yesterday through today will return all records added or changed between then and now, regardless of the publication dates included in the records.\nSet record types are:\nJ for journals B for books, conference proceedings, dissertations, reports, and datasets S for series The default set for both ListIdentifiers and ListRecords is J (journals). A set (B for books or conference proceedings, S for series) must be specified to retrieve non-journal data.\nWith the ListSets request the set parameter is optional. Leaving off the set parameter will return only journal data which includes a list of publishers, their journal titles, and each year of publication for which we have metadata records.\nWith the ListIdentifiers request the set, from, and until parameters are optional. The from and until parameters are used to specify dates when the DOIs were registered with us and not the publication date.\nExamples of requests Request a list of DOIs registered since 2010-08-11:\nhttps://oai.crossref.org/oai?verb=ListIdentifiers\u0026amp;metadataPrefix=cr_unixsd\u0026amp;from=2010-08-11 Request all journal sets:\nhttps://oai.crossref.org/oai?verb=ListSets\u0026amp;set=J Request all sets with record type \u0026lsquo;B\u0026rsquo;:\nhttps://oai.crossref.org/oai?verb=ListSets\u0026amp;set=B Request records for title \u0026lsquo;98765\u0026rsquo; with prefix 10.1234 registered or updated on 2017-07-06:\nhttps://oai.crossref.org/oai?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=J:10.1234:98765\u0026amp;from=2017-07-06\u0026amp;until=2017-07-06 Best practice for performance We allow 3 concurrent initial OAI-PMH requests per user. There is no concurrency limit for follow-on requests (requests made with a resumption token). Due to the size of the repository, it is highly discouraged to perform a ListRecords action for the entire collection.\nThe best possible performance is had by requesting the changes made to one publication on a given date, such as:\nhttps://oai.crossref.org/oai?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=J:10.1234:98765\u0026amp;from=2017-07-06\u0026amp;until=2017-07-06 If you are harvesting a large amount of data and run up against our 3 concurrent initial request limitation, it is recommended that you request data by prefix for a short time-frame (days to a week). For example, this request will give you all journal records owned by prefix 10.1234 registered or updated between 2017-07-06 and 2017-07-09 :\nhttps://oai.crossref.org/oai?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=J:10.1234\u0026amp;from=2017-07-06\u0026amp;until=2017-07-09 Using resumption tokens with OAI-PMH Many OAI requests are too big to be retrieved in a single transaction. If a given response contains a resumption token, you must make an additional request to retrieve the rest of the data. 
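For example, a small Python harvest loop that keeps following resumption tokens (the set and dates are the example values used above; as in the examples in this section, the token is appended to the next request):
import requests
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
params = {
    "verb": "ListRecords",
    "metadataPrefix": "cr_unixsd",
    "set": "J:10.1234:98765",
    "from": "2017-07-06",
    "until": "2017-07-06",
}

while True:
    root = ET.fromstring(requests.get("https://oai.crossref.org/oai", params=params).content)
    # ... process the record elements in this page of results here ...
    token = root.find(f".//{OAI_NS}resumptionToken")
    if token is None or not (token.text or "").strip():
        break
    params["resumptionToken"] = token.text.strip()   # appended to the next request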
Resumption tokens remain viable for 48 hours.\nThe resumption token includes an expiry date of 48 hours:\n\u0026lt;resumptionToken expirationDate=\u0026#34;2015-10-28T00:00:00\u0026#34;\u0026gt;c6cafedc-ef48-42a3-847c-b682dc58b617\u0026lt;/resumptionToken\u0026gt; The token should be appended to the end of the next request:\nhttps://oai.crossref.org/oai?verb=ListSets\u0026amp;set=J:10.1007\u0026amp;resumptionToken=c6cafedc-ef48-42a3-847c-b682dc58b617 Snapshots - part of our Metadata Plus service Metadata Plus snapshots provide access to our 160,104,382-plus metadata records in a single file, providing an easy way to retrieve an up-to-date copy of our records. Snapshots are available for Metadata Plus service users.\nThe files are made available via a /snapshots route in the REST API which offers a compressed .tar file (tar.gz) containing the full extract of the metadata corpus in either JSON or XML formats.\nOAI-PMH example files An example application for harvesting Crossref OAI data\nHTTPClient.jar oaipmhRequest.class oaipmhRequest.java oaipmhRequest$FullParser.class oaipmhRequest$SAXError.class Example ListIdentifiers and ListRecords responses\ncmsEnhanced_ListIdentifiers.xml cmsEnhanced_ListRecords.xml ", "headings": ["Set hierarchy ","Examples of requests ","Best practice for performance ","Using resumption tokens with OAI-PMH ","Snapshots - part of our Metadata Plus service ","OAI-PMH example files "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/deposit-harvester/", "title": "Deposit harvester", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The deposit harvester allows you to retrieve metadata records for content that you\u0026rsquo;ve registered. The metadata retrieved is in our UNIXSD output format, which delivers the exact metadata submitted in a deposit, including any citations registered. Members (or their designated third parties) may only retrieve their own metadata.\nThe harvester uses Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to deliver the metadata. The verbs Identify, ListMetadataFormats, ListSets, ListIdentifiers, ListRecords, and GetRecord are supported.", "content": "The deposit harvester allows you to retrieve metadata records for content that you\u0026rsquo;ve registered. The metadata retrieved is in our UNIXSD output format, which delivers the exact metadata submitted in a deposit, including any citations registered. Members (or their designated third parties) may only retrieve their own metadata.\nThe harvester uses Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to deliver the metadata. The verbs Identify, ListMetadataFormats, ListSets, ListIdentifiers, ListRecords, and GetRecord are supported.\nOwnership and retrieval restrictions - who can retrieve records? The deposit harvester will only retrieve records for the authorized owner of the metadata records. Metadata ownership is established by the DOI prefix(es) associated with a user\u0026rsquo;s account (learn more about transferring responsibility for DOIs). Many members have one prefix and one account, but some members may have multiple prefixes. For example, Member A has been assigned account abcd, which is associated with prefixes 10.xxxx, 10.yyyy, and 10.zzzz. 
Member A can retrieve metadata owned by prefixes 10.xxxx, 10.yyyy, and 10.zzzz using their abcd account.\nOwnership of DOIs and titles often moves from member to member, so a title-owning prefix will not always match the prefix of the DOIs attached to the title. Retrieval permission is granted to the current owner, not the original depositor. For example, Member B registers identifier 10.5555/jfo.33425. Ownership of the journal and all identifiers is transferred to Member A with prefix 10.50505. The DOI is now \u0026ldquo;owned\u0026rdquo; by prefix 10.50505, and only Member A may harvest the metadata record for that identifier.\nSets The deposit harvester supports a hierarchy of sets. The hierarchy is in three parts: \u0026lt;work-type\u0026gt;:\u0026lt;prefix\u0026gt;:\u0026lt;publication-id\u0026gt;. For example, the set J:10.12345:6789 will return metadata for a journal (J), with prefix 10.12345, and publication id 6789. The set B will return all book metadata. The set S:10.12345 will return all the series metadata associated with the 10.12345 prefix.\nThe work-type designators are:\nJ for journals B for books and book-like works (reports, conference proceedings, standards, dissertations) S for non-journal series and series-like works. If no set is specified, the set \u0026ldquo;J\u0026rdquo; is used.\nExample requests ListSets Retrieve list of titles owned by the prefixes assigned to your account:\nhttps://oai.crossref.org/DepositHarvester?verb=ListSets\u0026amp;usr=username\u0026amp;pwd=password ListRecords Retrieve data for a prefix:\nhttps://oai.crossref.org/DepositHarvester?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=work-type:prefix\u0026amp;usr=username\u0026amp;pwd=password Retrieve data for a single title:\nhttps://oai.crossref.org/DepositHarvester?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=work-type:prefix:title ID\u0026amp;usr=username\u0026amp;pwd=password GetRecord Retrieve data for a single DOI:\nhttps://oai.crossref.org/DepositHarvester?verb=GetRecord\u0026amp;metadataPrefix=cr_unixsd\u0026amp;identifier=info:doi/DOI\u0026amp;usr=username\u0026amp;pwd=password When using GetRecord, the \u0026lt;DOI\u0026gt; value should be URL encoded.\nIdentify Use to check the status of the deposit harvester (no account needed):\nhttps://oai.crossref.org/DepositHarvester?verb=Identify ListMetadataFormats Lists available metadata formats (currently UNIXREF)\nhttps://oai.crossref.org/DepositHarvester?verb=ListMetadataFormats Request parameters work-type: J for journals, B for book or conference proceeding titles, S for series prefix: the owning prefix of the title being retrieved title ID: the title identification number assigned by us. Title IDs are included in the ListSets response described above. username and password: account details for the prefix/title being retrieved Results Results conform to Crossref\u0026rsquo;s UNIXREF format and may contain the following root elements:\njournal book conference dissertation report-paper standard sa_component database Using resumption tokens with the deposit harvester Some OAI-PMH requests are too big to be retrieved in a single transaction. If a given response contains a resumption token, the user must make an additional request to retrieve the rest of the data. You must provide the account name and password with both the initial request and subsequent resumption requests. A resumption without authentication details will fail. 
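As a worked version of the GetRecord request described above, here is a minimal Python sketch (standard library only). The username, password, and DOI are placeholders (the DOI reuses the one from the ownership example); urlencode() performs the URL encoding of the info:doi/DOI identifier that the instructions require.
import urllib.parse
import urllib.request

def get_record(doi, usr, pwd):
    # The identifier takes the form info:doi/DOI and must be URL-encoded.
    params = {
        "verb": "GetRecord",
        "metadataPrefix": "cr_unixsd",
        "identifier": "info:doi/" + doi,
        "usr": usr,
        "pwd": pwd,
    }
    url = "https://oai.crossref.org/DepositHarvester?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        return response.read()

# Placeholder credentials for illustration only.
record_xml = get_record("10.5555/jfo.33425", "username", "password")
print(record_xml[:200])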
Learn more about resumption tokens.\nInitial request\nhttps://oai.crossref.org/DepositHarvester?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=J:10.4102:83986\u0026amp;usr=username\u0026amp;pwd=password Request with resumption token\nhttps://oai.crossref.org/DepositHarvester?verb=ListRecords\u0026amp;metadataPrefix=cr_unixsd\u0026amp;set=J:10.4102:83986\u0026amp;usr=username\u0026amp;pwd=password\u0026amp;resumptionToken=01f7f30e-f692-4cc4-97b2-1eaf88b3f17f ", "headings": ["Ownership and retrieval restrictions - who can retrieve records? ","Sets ","Example requests ","ListSets ","ListRecords ","GetRecord ","Identify ","ListMetadataFormats ","Request parameters ","Results ","Using resumption tokens with the deposit harvester "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/content-negotiation/", "title": "Content negotiation", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Content negotiation means the ability to specify the format in which a record will be returned.\nCrossref and DataCite support content-negotiated DOIs via their respective DOIs. Learn more about content negotiation.\nContent negotiation responses are returned by one of our three API pools, just like any other REST API response. Requests that include mailto identification in the user agent header will be serviced by the polite pool, and requests that include a token for Metadata Plus subscribers will be serviced by the Plus pool.", "content": "Content negotiation means the ability to specify the format in which a record will be returned.\nCrossref and DataCite support content-negotiated DOIs via their respective DOIs. Learn more about content negotiation.\nContent negotiation responses are returned by one of our three API pools, just like any other REST API response. Requests that include mailto identification in the user agent header will be serviced by the polite pool, and requests that include a token for Metadata Plus subscribers will be serviced by the Plus pool. All other requests will be serviced by the public pool.\nExamples using cURL curl -D - -L -H \u0026#34;Accept: application/rdf+xml\u0026#34; \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1126/science.1157784\u0026#34; curl -D - -L -H \u0026#34;Accept: text/turtle\u0026#34; \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1126/science.1157784\u0026#34; curl -D - -L -H \u0026#34;Accept: application/atom+xml\u0026#34; \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1126/science.1157784\u0026#34; curl -D - -L -H \u0026#34;Accept: text/x-bibtex\u0026#34; \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1126/science.1157784\u0026#34; The Crossref-specific \u0026lsquo;unixref\u0026rsquo; format can be used as well:\ncurl -D - -L -H \u0026#34;Accept: application/unixref+xml\u0026#34; \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1126/science.1157784\u0026#34; Content negotiation also supports formatted citations via the text/x-bibliography record type. A list of all supported citation styles is available via our API.\n", "headings": ["Examples using cURL "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/", "title": "XML API", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The XML API supports XML-formatted querying. XML queries give you significant control over the DOI matching process. 
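Returning to the content negotiation examples above, the same request can be made from Python with only the standard library. This sketch uses the unproxied https://doi.org form of the DOI link from the cURL examples and a placeholder mailto address; including the mailto in the User-Agent header is what routes the request to the polite pool.
import urllib.request

request = urllib.request.Request(
    "https://0-doi-org.libus.csd.mu.edu/10.1126/science.1157784",
    headers={
        "Accept": "text/x-bibtex",  # any of the content types listed above works here
        "User-Agent": "ExampleHarvester/0.1 (mailto:your@email.org)",  # polite pool
    },
)
# urlopen follows the redirect chain, like the -L flag in the cURL examples.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))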
The XML API is designed to (typically) return only one DOI, the one that best fits the metadata supplied in the query, and is therefore suitable for automated matching.\nQuery results are returned in XML, and will contain a full or abbreviated metadata record for matched items, depending on the request. The query input schema file is crossref_query_input2.", "content": "The XML API supports XML-formatted querying. XML queries give you significant control over the DOI matching process. The XML API is designed to (typically) return only one DOI, the one that best fits the metadata supplied in the query, and is therefore suitable for automated matching.\nQuery results are returned in XML, and will contain a full or abbreviated metadata record for matched items, depending on the request. The query input schema file is crossref_query_input2.0.xsd - learn more in our schema documentation.\nThe XML API also supports DOI-to-metadata queries.\nTo use the XML API, you need to identify yourself. Authentication can be achieved by providing either your Crossref member account credentials or your email address into your query. Query examples that demonstrate how to supply credentials can be found with instructions for Using HTTPS to query.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/using-https-to-query/", "title": "Using HTTPS to query", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Most queries sent to our XML API will use HTTPS. For HTTPS querying you need to add either your Crossref member account credentials or an email address in the query. Use a DOI-to-metadata query when you know the DOI and want to find its associated metadata.\nHTTPS query examples DOI-to-metadata query example https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid={email@address.com}\u0026amp;id=10.1577/H02-043 DOI-to-metadata query requesting UNIXSD results https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid={email@address.com}\u0026amp;format=unixsd**\u0026amp;id=10.1577/H02-043 DOI-to-metadata queries may also be supplied using this format https://0-doi-crossref-org.libus.csd.mu.edu/search/doi?pid={email@address.com}\u0026amp;doi=10.1577/H02-043 XML query https://doi.", "content": "Most queries sent to our XML API will use HTTPS. For HTTPS querying you need to add either your Crossref member account credentials or an email address in the query. Use a DOI-to-metadata query when you know the DOI and want to find its associated metadata.\nHTTPS query examples DOI-to-metadata query example https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid={email@address.com}\u0026amp;id=10.1577/H02-043 DOI-to-metadata query requesting UNIXSD results https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid={email@address.com}\u0026amp;format=unixsd**\u0026amp;id=10.1577/H02-043 DOI-to-metadata queries may also be supplied using this format https://0-doi-crossref-org.libus.csd.mu.edu/search/doi?pid={email@address.com}\u0026amp;doi=10.1577/H02-043 XML query https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?usr={email@address.com}\u0026amp;format=unixref\u0026amp;qdata=\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt;\u0026lt;query_batch version=\u0026#34;2.0\u0026#34;... 
where:\nusr or pid is your email address format is the desired results format qdata is the query data (XML is the recommended format) id is a Crossref DOI XML querying An XML query must contain complete and valid query XML. Multiple queries may be included in a single XML file, but the \u0026lt;query\u0026gt; element is repeatable. For best results, do not exceed 5 queries per HTTPS XML query request.\nMultiple queries in a single request https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?usr=username\u0026amp;pwd=password\u0026amp;qdata=\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;2.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0 http://0-www-crossref-org.libus.csd.mu.edu/qschema/crossref_query_input2.0.xsd\u0026#34;\u0026gt;\u0026lt;head\u0026gt;\u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt;\u0026lt;doi_batch_id\u0026gt;ABC_123_fff\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;query1\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34; forward-match=\u0026#34;false\u0026#34;\u0026gt;\u0026lt;issn match=\u0026#34;optional\u0026#34;\u0026gt;15360075\u0026lt;/issn\u0026gt;\u0026lt;journal_title match=\u0026#34;exact\u0026#34;\u0026gt;American Journal of Bioethics\u0026lt;/journal_title\u0026gt;\u0026lt;author match=\u0026#34;fuzzy\u0026#34; search-all-authors=\u0026#34;false\u0026#34;\u0026gt;Agich\u0026lt;/author\u0026gt;\u0026lt;volume match=\u0026#34;fuzzy\u0026#34;\u0026gt;1\u0026lt;/volume\u0026gt;\u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt;\u0026lt;first_page\u0026gt;50\u0026lt;/first_page\u0026gt;\u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt;\u0026lt;article_title\u0026gt;The Salience of Narrative for Bioethics\u0026lt;/article_title\u0026gt;\u0026lt;/query\u0026gt;\u0026lt;query key=\u0026#34;query2\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;unstructured_citation\u0026gt;Hungate, B. A., \u0026amp;amp; Hampton, H. M. (2012). Ecosystem services: Valuing ecosystems for climate. 
Nature Climate Change, 2(3), 151-152.\u0026lt;/unstructured_citation\u0026gt;\u0026lt;/query\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/query_batch\u0026gt; HTTPS XML query example https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?usr=email@address.com\u0026amp;qdata=\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;2.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0 http://0-www-crossref-org.libus.csd.mu.edu/qschema/crossref_query_input2.0.xsd\u0026#34;\u0026gt;\u0026lt;head\u0026gt;\u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt;\u0026lt;doi_batch_id\u0026gt;ABC_123_fff\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;1178517\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34; forward-match=\u0026#34;false\u0026#34;\u0026gt;\u0026lt;issn match=\u0026#34;optional\u0026#34;\u0026gt;15360075\u0026lt;/issn\u0026gt;\u0026lt;journal_title match=\u0026#34;exact\u0026#34;\u0026gt;American Journal of Bioethics\u0026lt;/journal_title\u0026gt;\u0026lt;author match=\u0026#34;fuzzy\u0026#34; search-all-authors=\u0026#34;false\u0026#34;\u0026gt;Agich\u0026lt;/author\u0026gt;\u0026lt;volume match=\u0026#34;fuzzy\u0026#34;\u0026gt;1\u0026lt;/volume\u0026gt;\u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt;\u0026lt;first_page\u0026gt;50\u0026lt;/first_page\u0026gt;\u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt;\u0026lt;article_title\u0026gt;The Salience of Narrative for Bioethics\u0026lt;/article_title\u0026gt;\u0026lt;/query\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/query_batch\u0026gt; HTTPS XML query with an unstructured (formatted) citation Learn more about querying with an unstructured (formatted) citation\nhttps://doi.crossref.org/servlet/query?usr=email@address.com\u0026amp;format=unixref\u0026amp;qdata=\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt;\u0026lt;query_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;2.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0 http://0-www-crossref-org.libus.csd.mu.edu/qschema/crossref_query_input2.0.xsd\u0026#34;\u0026gt;\u0026lt;head\u0026gt;\u0026lt;email_address\u0026gt;your@email.org\u0026lt;/email_address\u0026gt;\u0026lt;doi_batch_id\u0026gt;01032012\u0026lt;/doi_batch_id\u0026gt;\u0026lt;/head\u0026gt;\u0026lt;body\u0026gt;\u0026lt;query key=\u0026#34;q1\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;unstructured_citation\u0026gt;Hungate, B. A., \u0026amp;amp; Hampton, H. M. (2012). Ecosystem services: Valuing ecosystems for climate. 
Nature Climate Change, 2(3), 151-152.\u0026lt;/unstructured_citation\u0026gt;\u0026lt;/query\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/query_batch\u0026gt; HTTPS XML query with URL encoding https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?usr=email@address.com\u0026amp;qdata=%3C?xml%20version=%221.0%22?%3E%3Cquery_batch%20version=%222.0%22%20xsi:schemaLocation=%22https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0%20https://0-www-crossref-org.libus.csd.mu.edu/qschema/crossref_query_input2.0.xsd%22%20xmlns=%22https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0%22%20xmlns:xsi=%22http://www.w3.org/2001/XMLSchema-instance%22%3E%3Chead%3E%3Cemail_address%3Esupport@crossref.org%3C/email_address%3E%3Cdoi_batch_id%3EABC_123_fff%3C/doi_batch_id%3E%20%3C/head%3E%20%3Cbody%3E%20%3Cquery%20key=%221178517%22%20enable-multiple-hits=%22false%22%20forward-match=%22false%22%3E%3Cissn%20match=%22optional%22%3E15360075%3Cissn%3Cjournal_title%20match=%22exact%22%3EAmerican%20Journal%20of%20Bioethics%3C/journal_title%3E%3Cauthor%20match=%22fuzzy%22%20search-all-authors=%22false%22%3EAgich%3C/author%3E%3Cvolume%20match=%22fuzzy%22%3E1%3C/volume%3E%3Cissue%3E1%3C/issue%3E%3Cfirst_page%3E50%3C/first_page%3E%3Cyear%3E2001%3C/year%3E%3Carticle_title%3EThe%20Salience%20of%20Narrative%20for%20Bioethics%3C/article_title%3E%3C/query%3E%3C/body%3E%3C/query_batch% Note that some characters must be URL-encoded:\nCharacter Name URL code ; Semi-colon %3B / Slash, virgule, separatrix, or solidus %2F ? Question mark %3F : Colon %3A @ At sign, arobase %40 = Equals sign %3D \u0026amp; Ampersand %26 lf line feed %0A ", "headings": ["HTTPS query examples ","DOI-to-metadata query example ","DOI-to-metadata query requesting UNIXSD results ","DOI-to-metadata queries may also be supplied using this format ","XML query ","XML querying ","Multiple queries in a single request ","HTTPS XML query example ","HTTPS XML query with an unstructured (formatted) citation ","HTTPS XML query with URL encoding "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/doi-to-metadata-query/", "title": "DOI-to-metadata query", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Metadata may be retrieved in JSON format using our REST API, or in XML format as documented below.\nMost DOI-to-metadata queries are done via HTTPS using synchronous HTTPS queries, but may also be submitted as asynchronous batch queries.\nExample HTTPS query https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid={email@address.com}\u0026amp;format=unixref\u0026amp;id=DOI OpenURL query We support DOI queries formatted as OpenURL version 0.1 requests. 
For complete metadata (UNIXREF) include the format=\u0026quot;unixref\u0026quot; parameter.\nhttps://www.crossref.org/openurl/?pid={myemail@crossref.org}\u0026amp;format=unixref\u0026amp;id=doi:10.1577/H02-043\u0026amp;noredirect=true Query results: xsd_xml format (default) \u0026lt;crossref_result version=\u0026#34;2.0\u0026#34; xsi:schemaLocation=\u0026#34;https://www.", "content": "Metadata may be retrieved in JSON format using our REST API, or in XML format as documented below.\nMost DOI-to-metadata queries are done via HTTPS using synchronous HTTPS queries, but may also be submitted as asynchronous batch queries.\nExample HTTPS query https://0-doi-crossref-org.libus.csd.mu.edu/servlet/query?pid={email@address.com}\u0026amp;format=unixref\u0026amp;id=DOI OpenURL query We support DOI queries formatted as OpenURL version 0.1 requests. For complete metadata (UNIXREF) include the format=\u0026quot;unixref\u0026quot; parameter.\nhttps://www.crossref.org/openurl/?pid={myemail@crossref.org}\u0026amp;format=unixref\u0026amp;id=doi:10.1577/H02-043\u0026amp;noredirect=true Query results: xsd_xml format (default) \u0026lt;crossref_result version=\u0026#34;2.0\u0026#34; xsi:schemaLocation=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0 https://0-www-crossref-org.libus.csd.mu.edu/schema/queryResultSchema/crossref_query_output2.0.xsd\u0026#34; xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;query_result\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;none\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query status=\u0026#34;resolved\u0026#34; fl_count=\u0026#34;6\u0026#34;\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1577/H02-043\u0026lt;/doi\u0026gt; \u0026lt;issn type=\u0026#34;print\u0026#34;\u0026gt;0899-7659\u0026lt;/issn\u0026gt; \u0026lt;issn type=\u0026#34;electronic\u0026#34;\u0026gt;1548-8667\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt;Journal of Aquatic Animal Health\u0026lt;/journal_title\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;contributor sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;John P.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Hawke\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ronald L.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Thune\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Richard K.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Cooper\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Erika\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Judice\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Maria\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Kelly-Smith\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; 
\u0026lt;/contributors\u0026gt; \u0026lt;volume\u0026gt;15\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;3\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;189\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;201\u0026lt;/last_page\u0026gt; \u0026lt;year media_type=\u0026#34;print\u0026#34;\u0026gt;2003\u0026lt;/year\u0026gt; \u0026lt;publication_type\u0026gt;full_text\u0026lt;/publication_type\u0026gt; \u0026lt;article_title\u0026gt;Molecular and Phenotypic Characterization of Strains ofPhotobacterium damselaesubsp.piscicidaIsolated from Hybrid Striped Bass Cultured in Louisiana, USA\u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_result\u0026gt; \u0026lt;/crossref_result\u0026gt; Query results: UNIXREF format \u0026lt;doi_records\u0026gt; \u0026lt;doi_record owner=\u0026#34;10.1080\u0026#34; timestamp=\u0026#34;2011-12-08 13:28:43\u0026#34;\u0026gt; \u0026lt;crossref\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;Journal of Aquatic Animal Health\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;Journal of Aquatic Animal Health\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;0899-7659\u0026lt;/issn\u0026gt; \u0026lt;issn media_type=\u0026#34;electronic\u0026#34;\u0026gt;1548-8667\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;09\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2003\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;15\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;3\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Molecular and Phenotypic Characterization of Strains of \u0026lt;i\u0026gt;Photobacterium damselae\u0026lt;/i\u0026gt;subsp. 
\u0026lt;i\u0026gt;piscicida\u0026lt;/i\u0026gt; Isolated from Hybrid Striped Bass Cultured in Louisiana, USA\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;John P.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Hawke\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ronald L.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Thune\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Richard K.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Cooper\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Erika\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Judice\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Maria\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Kelly-Smith\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;09\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2003\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;189\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;201\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number item_number_type=\u0026#34;sequence-number\u0026#34;\u0026gt;1\u0026lt;/item_number\u0026gt; \u0026lt;identifier id_type=\u0026#34;doi\u0026#34;\u0026gt;10.1577/H02-043\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1577/H02-043\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-tandfonline-com.libus.csd.mu.edu/doi/abs/10.1577/H02-043\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/crossref\u0026gt; \u0026lt;/doi_record\u0026gt; \u0026lt;/doi_records\u0026gt; DOI-to-metadata XML batch query DOI-to-metadata queries can also be performed using XML, allowing the use of control features available only with XML queries.\nExample XML query \u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;Sample multi resolve\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;mykey\u0026#34; expanded-results=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1006/jmbi.2000.4282\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; 
\u0026lt;/query_batch\u0026gt; ", "headings": ["Example HTTPS query ","OpenURL query ","Query results: xsd_xml format (default) ","Query results: UNIXREF format ","DOI-to-metadata XML batch query ","Example XML query "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/xml-query-examples/", "title": "XML query examples", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This section includes examples for:\na strict journal article query a less strict journal article query an unstructured citation a book chapter query a book title query a book title query without an author controlling query result XML searching all authors DOI-to-metadata query (retrieves metadata for a DOI) A strict journal article query This query is fairly strict - it is requesting a single match for the given metadata. The ISSN is provided but does not need to be used for matching (match=\u0026quot;optional\u0026quot;).", "content": "This section includes examples for:\na strict journal article query a less strict journal article query an unstructured citation a book chapter query a book title query a book title query without an author controlling query result XML searching all authors DOI-to-metadata query (retrieves metadata for a DOI) A strict journal article query This query is fairly strict - it is requesting a single match for the given metadata. The ISSN is provided but does not need to be used for matching (match=\u0026quot;optional\u0026quot;). The journal title needs to match exactly (match=\u0026quot;exact\u0026quot;), no fuzzy matching will be applied. Fuzzy matching is applied to the author (match=\u0026quot;fuzzy\u0026quot;) but only the first author will be matched.\n\u0026lt;query key=\u0026#34;1178517\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;issn match=\u0026#34;optional\u0026#34;\u0026gt;15360075\u0026lt;issn\u0026gt; \u0026lt;journal_title match=\u0026#34;exact\u0026#34;\u0026gt;American Journal of Bioethics\u0026lt;/journal_title\u0026gt; \u0026lt;author match=\u0026#34;fuzzy\u0026#34; search-all-authors=\u0026#34;false\u0026#34;\u0026gt;Agich\u0026lt;/author\u0026gt; \u0026lt;volume match=\u0026#34;fuzzy\u0026#34;\u0026gt;1\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;50\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;article_title\u0026gt;The Salience of Narrative for Bioethics\u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; A less strict journal article query The query below will return multiple matches (enable-multiple-hits=\u0026quot;true\u0026quot;) and fuzzy match the author against all deposited authors, and will do an author/article title query if the full metadata query does not produce a match.\n\u0026lt;query key=\u0026#34;1178517\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34; secondary-query=\u0026#34;author-title\u0026#34;\u0026gt; \u0026lt;journal_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;American Journal of Bioethics\u0026lt;/journal_title\u0026gt; \u0026lt;author match=\u0026#34;fuzzy\u0026#34; search-all-authors=\u0026#34;true\u0026#34;\u0026gt;Agich\u0026lt;/author\u0026gt; \u0026lt;volume match=\u0026#34;fuzzy\u0026#34;\u0026gt;1\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;50\u0026lt;/first_page\u0026gt; 
\u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;article_title\u0026gt;The Salience of Narrative for Bioethics\u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; An unstructured citation This citation has not been marked up into separate elements. We\u0026rsquo;ll try to break it up, but the results may not be as accurate.\n\u0026lt;query key=\u0026#34;q1\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;Hungate, B. A., \u0026amp;amp; Hampton, H. M. (2012). Ecosystem services: Valuing ecosystems for climate. Nature Climate Change, 2(3), 151-152. \u0026lt;/unstructured_citation\u0026gt; \u0026lt;/query\u0026gt; Book chapter query A query which will return the DOI for a single chapter in the specific title\n\u0026lt;query key=\u0026#34;MyKey1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;author\u0026gt;Casteilla\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;155\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;isbn\u0026gt;1-59259-231-7\u0026lt;/isbn\u0026gt; \u0026lt;volume_title\u0026gt;Adipose Tissue Protocol\u0026lt;/volume_title\u0026gt; \u0026lt;/query\u0026gt; Searching for individual chapters within a book may also be done by using just the author name and chapter title (author name is optional, but should be included for better results):\n\u0026lt;query key=\u0026#34;MyKey1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;author\u0026gt;Casteilla\u0026lt;/author\u0026gt; \u0026lt;article_title\u0026gt;Choosing an Adipose Tissue Depot for Sampling \u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; Book title query Book title queries should include the book title (as volume_title) and author (or editor) when available.\n\u0026lt;query key=\u0026#34;MyKey1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;author\u0026gt; Ailhaud\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;155\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;isbn\u0026gt;1-59259-231-7\u0026lt;/isbn\u0026gt; \u0026lt;volume_title\u0026gt;Adipose Tissue Protocol\u0026lt;/volume_title\u0026gt; \u0026lt;/query\u0026gt; Book title query without an author Some title-level book DOIs lack author information. If you do not have author information to include in your query or you are querying for an authorless book, for best results your query should instruct the system to ignore author by setting the author match attribute to null.\n\u0026lt;query key=\u0026#34;555-555\u0026#34;\u0026gt; \u0026lt;isbn\u0026gt;9780387791456\u0026lt;/isbn\u0026gt; \u0026lt;volume_title\u0026gt;Ordinary and Partial Differential Equations\u0026lt;/volume_title\u0026gt; \u0026lt;year\u0026gt;2009\u0026lt;/year\u0026gt; \u0026lt;author match=\u0026#34;null\u0026#34;/\u0026gt; \u0026lt;/query\u0026gt; Controlling query result XML Crossref query results can be retrieved in several formats. By default the XSD_XML format will only contain basic bibliographic metadata. 
Setting expanded-results to TRUE will also return the article title.\nThis example shows use of expanded-results=true along with enable-multiple-hits=true:\nQuery \u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;someone@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;Sample multi resolve\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;mutliResolve1\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34; expanded-results=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;issn\u0026gt;0360-3016\u0026lt;/issn\u0026gt; \u0026lt;volume\u0026gt;54\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;2\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;215\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; Query results \u0026lt;crossref_result version=\u0026#34;2.0\u0026#34; xsi:schemaLocation=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0 https://0-www-crossref-org.libus.csd.mu.edu/qrschema/crossref_query_output2.0.xsd\u0026#34;\u0026gt; \u0026lt;query_result\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;hisham@atypon.com\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;Sample multi resolve\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;mutliResolve1\u0026#34; status=\u0026#34;multiresolved\u0026#34; fl_count=\u0026#34;0\u0026#34;\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1016/S0360-3016(02)03429-6\u0026lt;/doi\u0026gt; \u0026lt;issn type=\u0026#34;print\u0026#34;\u0026gt;03603016\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt; International Journal of Radiation Oncology*Biology*Physics\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;KIM\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;54\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;2\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;215\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;publication_type\u0026gt;full_text\u0026lt;/publication_type\u0026gt; \u0026lt;article_title\u0026gt; Potential radiation sensitizing effect of SU5416 by down-regulating the COX-2 expression in human lung cancer cells \u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;query key=\u0026#34;mutliResolve1\u0026#34; status=\u0026#34;multiresolved\u0026#34; fl_count=\u0026#34;0\u0026#34;\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1016/S0360-3016(02)03428-4\u0026lt;/doi\u0026gt; \u0026lt;issn type=\u0026#34;print\u0026#34;\u0026gt;03603016\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt; International Journal of Radiation Oncology*Biology*Physics \u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;WILLIAMS\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;54\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;2\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;215\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;publication_type\u0026gt;full_text\u0026lt;/publication_type\u0026gt; 
\u0026lt;article_title\u0026gt; Effect of the administration of lovastatin on the development of pulmonary fibrosis following whole lung irradiation in a mouse model \u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_result\u0026gt; \u0026lt;/crossref_result\u0026gt; The system will return no DOIs if an ambiguity exists. Setting enable-multiple-hits to true instructs the system to return the list of DOIs.\nSearching all authors Normally the author name supplied in a query must be that of the article\u0026rsquo;s first author. First author is an optional designation made by the member when depositing a DOI\u0026rsquo;s metadata. Articles registered without a first author designation handicap queries that depend on author (for example, those which do not supply a page number). In an XML query there is a property called search-all-authors which forces the process to examine all authors associated with the article.\nThis example shows a query that would not return any results if this feature were not used:\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.com\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;Sample multi resolve\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;mutliResolve1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Advances in Applied Probability\u0026lt;/journal_title\u0026gt; \u0026lt;author search-all-authors=\u0026#34;true\u0026#34;\u0026gt;Weil\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;33\u0026lt;/volume\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; DOI-to-metadata query (retrieves metadata for a DOI) \u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;hisham@atypon.com\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;Sample multi resolve\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;mykey\u0026#34; expanded-results=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1006/jmbi.2000.4282\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; DOI-to-metadata query results \u0026lt;crossref_result version=\u0026#34;2.0\u0026#34; xsi:schemaLocation=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0 https://0-www-crossref-org.libus.csd.mu.edu/qrschema/crossref_query_output2.0.xsd\u0026#34;\u0026gt; \u0026lt;query_result\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;hisham@atypon.com\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;Sample multi resolve\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;mykey\u0026#34; status=\u0026#34;resolved\u0026#34; 
fl_count=\u0026#34;1\u0026#34;\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1006/jmbi.2000.4282\u0026lt;/doi\u0026gt; \u0026lt;issn type=\u0026#34;print\u0026#34;\u0026gt;00222836\u0026lt;/issn\u0026gt; \u0026lt;issn type=\u0026#34;electronic\u0026#34;\u0026gt;10898638\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt;Journal of Molecular Biology\u0026lt;/journal_title\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;contributor first-author=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Y\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Jiang\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;volume\u0026gt;305\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;3\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;377\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;publication_type\u0026gt;full_text\u0026lt;/publication_type\u0026gt; \u0026lt;article_title\u0026gt; Specific interaction between anticodon nuclease and the tRNALys wobble base \u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_result\u0026gt; \u0026lt;/crossref_result\u0026gt; ", "headings": ["A strict journal article query ","A less strict journal article query ","An unstructured citation ","Book chapter query ","Book title query ","Book title query without an author ","Controlling query result XML ","Query ","Query results ","Searching all authors ","DOI-to-metadata query (retrieves metadata for a DOI) ","DOI-to-metadata query results "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/xml-query-results-and-errors/", "title": "XML query results and errors", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Query results may be requested in three different formats: XSD_XML, UNIXREF, and UNIXSD.\nUNIXSD is the recommended and most robust format, but XSD_XML is the default for many services for legacy reasons.\nPossible errors returned by malformed or insufficient queries include:\nan invalid XML query will return no result (\u0026lt;body/\u0026gt;) either first page or author must be supplied either ISSN or journal title must be supplied unreasonable DOI found (an unreasonable DOI does not follow the expected DOI format (10.", "content": "Query results may be requested in three different formats: XSD_XML, UNIXREF, and UNIXSD.\nUNIXSD is the recommended and most robust format, but XSD_XML is the default for many services for legacy reasons.\nPossible errors returned by malformed or insufficient queries include:\nan invalid XML query will return no result (\u0026lt;body/\u0026gt;) either first page or author must be supplied either ISSN or journal title must be supplied unreasonable DOI found (an unreasonable DOI does not follow the expected DOI format (10.XXXX/yyy...) and/or exceeds 200 characters) either ISSN/ISBN or series/volume title must be supplied. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/querying-for-books/", "title": "Querying for books", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Querying for a book rather than a book chapter sometimes needs extra attention. 
Here are some examples.\nBook chapter queries Searching for individual chapters within a book may be done by using just the author name and chapter title (author name is optional but should be included for better results). This method is less exact than including book-level metadata in a query:\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://www.", "content": "Querying for a book rather than a book chapter sometimes needs extra attention. Here are some examples.\nBook chapter queries Searching for individual chapters within a book may be done by using just the author name and chapter title (author name is optional but should be included for better results). This method is less exact than including book-level metadata in a query:\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;someone@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;SomeTrackingID2\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;MyKey1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;author\u0026gt;Casteilla\u0026lt;/author\u0026gt; \u0026lt;article_title\u0026gt;Choosing an Adipose Tissue Depot for Sampling \u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; This query contains a page number and author of a specific chapter as well as book metadata. 
It will return the chapter with author Casteilla and page 1:\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;someone@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;SomeTrackingID2\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;MyKey1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;author\u0026gt;Casteilla\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;155\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;isbn\u0026gt;1-59259-231-7\u0026lt;/isbn\u0026gt; \u0026lt;volume_title\u0026gt;Adipose Tissue Protocol\u0026lt;/volume_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; Book title queries The deposit from the previous example also created a DOI for the book itself which can be found with a query containing the book\u0026rsquo;s editor (in the \u0026lt;author\u0026gt; element):\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;someone@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;SomeTrackingID2\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;MyKey1\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;author\u0026gt; Ailhaud\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;155\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;isbn\u0026gt;1-59259-231-7\u0026lt;/isbn\u0026gt; \u0026lt;volume_title\u0026gt;Adipose Tissue Protocol\u0026lt;/volume_title\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; Many title-level book DOIs do not have author information deposited. 
If you do not have author information to include in your query or you are querying for an authorless book, for best results your query should instruct the system to ignore author by setting the author match attribute to null.\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;2.0\u0026#34; xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34;xsi:schemaLocation=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0 https://0-www-crossref-org.libus.csd.mu.edu/qschema/crossref_query_input2.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;test@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;test\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query key=\u0026#34;555-555\u0026#34; \u0026gt; \u0026lt;isbn\u0026gt;9780387791456\u0026lt;/isbn\u0026gt; \u0026lt;journal_title\u0026gt;Ordinary and Partial Differential Equations\u0026lt;/journal_title\u0026gt; \u0026lt;year\u0026gt;2009\u0026lt;/year\u0026gt; \u0026lt;author match=\u0026#34;null\u0026#34;/\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; ", "headings": ["Book chapter queries ","Book title queries "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/controlling-query-execution/", "title": "Controlling query execution", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Some query elements have optional properties that control how a query is executed by the XML API. Our query engine operates by evaluating several logic rules in series. Each rule focuses on certain fields in the query, with the first rule processing the entire query. If any rule returns a single metadata record as its output, this DOI is taken as the result for the query, and rule processing terminates.", "content": "Some query elements have optional properties that control how a query is executed by the XML API. Our query engine operates by evaluating several logic rules in series. Each rule focuses on certain fields in the query, with the first rule processing the entire query. If any rule returns a single metadata record as its output, this DOI is taken as the result for the query, and rule processing terminates.\nControls include:\nMatch: use to specify level of fuzzy or exact matching. This attribute may be applied to several elements (ISSN, author, issue, article_title) Enable-multiple-hits: allows or prevents matches returning more than one DOI Secondary-query: instructs the system to perform a specific query if the default query mode does not produce a result. Current options include author/title, multiple hits, and author/title multiple hits. 
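Before the full table of options below, a minimal sketch in Python (standard library only) shows these controls in use. It wraps the strict journal-article query from the examples elsewhere in this documentation in a query_batch, sets the match, enable-multiple-hits, and secondary-query attributes, and submits the XML as the qdata parameter of the servlet/query endpoint described in the HTTPS querying instructions. The email address and batch ID are placeholders, and the namespace and endpoint are the unproxied forms of those shown in the examples; urlencode() handles the URL encoding of qdata.
import urllib.parse
import urllib.request

# The strict journal-article query from the examples, with explicit control attributes.
query_xml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<query_batch version="2.0" xmlns="http://www.crossref.org/qschema/2.0"'
    ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
    '<head>'
    '<email_address>your@email.org</email_address>'
    '<doi_batch_id>example-batch-1</doi_batch_id>'
    '</head>'
    '<body>'
    '<query key="q1" enable-multiple-hits="true" secondary-query="author-title">'
    '<issn match="optional">15360075</issn>'
    '<journal_title match="exact">American Journal of Bioethics</journal_title>'
    '<author match="fuzzy" search-all-authors="false">Agich</author>'
    '<volume match="fuzzy">1</volume>'
    '<issue>1</issue>'
    '<first_page>50</first_page>'
    '<year>2001</year>'
    '<article_title>The Salience of Narrative for Bioethics</article_title>'
    '</query>'
    '</body>'
    '</query_batch>'
)

params = {
    "pid": "your@email.org",  # usr or pid identifies you, as described above
    "format": "unixref",      # requested results format
    "qdata": query_xml,       # urlencode() URL-encodes the XML for the request
}
url = "https://doi.crossref.org/servlet/query?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))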
Table of query controls Element Property Value Purpose query key string to track a query to its results query enable-multiple-hits false (default) have the system reduce the query to one DOI and return nothing if it can not do so query enable-multiple-hits true have the system return one DOI for each query rule it executes query enable-multiple-hits multi_hit_per_rule have the system return many DOIs for each query rule it executes - will produce up to 50 candidate DOIs which partially match the query query enable-multiple-hits exact rule processing is disabled, all DOIs matching a simple comparison to query values are returned query forward-match false (default) no query is stored query forward-match true store this query and re-run it automatically and notify via email any matches that are found query list-components false (default) components not included in results query list-components true list the DOIs of any components that have deposited which are linked to this DOI query expanded-results false (default) will not include article title in results query expanded-results true include article title in the results (only applicable when results format=XML_XSD) query secondary-query author-title perform author-title search if metadata search fails query secondary-query multiple-hits returns multiple hits (if present) query secondary-query author-title-multiple-hits performs author/title and multiple hits search if initial search fails issn match optional value may be missing from deposited metadata issn match exact match exactly as it appears in the query journal_title match optional not required journal_title match fuzzy (default) use fuzzy string matching journal_title match exact match exactly as it appears in the query author match optional instructs the query engine that this field may be ignored author match fuzzy (default) use fuzzy string matching author match null match if author is missing in the metadata author match exact match exactly as it appears in the query issue match fuzzy (default) / exact \u0026ndash; see above \u0026ndash; first_page match fuzzy (default) / exact \u0026ndash; see above \u0026ndash; year match optional (default) / exact \u0026ndash; see above \u0026ndash; article_title match fuzzy (default) / exact \u0026ndash; see above \u0026ndash; ", "headings": ["Table of query controls "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/author-article-title-query/", "title": "Author/article title query", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "We support a query mode where only article title and/or first author surname are required. 
These queries can be performed using an XML query.\nExample: XML author/article-title query \u0026lt;query enable-multiple-hits=\u0026#34;false\u0026#34; secondary-query=\u0026#34;author-title\u0026#34; key=\u0026#34;key1\u0026#34;\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Concluding remarks\u0026lt;/article_title\u0026gt; \u0026lt;author search-all-authors=\u0026#34;true\u0026#34;\u0026gt;Somiari\u0026lt;/author\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;query enable-multiple-hits=\u0026#34;false\u0026#34; secondary-query=\u0026#34;author-title\u0026#34; key=\u0026#34;key1\u0026#34;\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Off-line Approaches\u0026lt;/article_title\u0026gt; \u0026lt;author search-all-authors=\u0026#34;true\u0026#34;\u0026gt;Gustafsson\u0026lt;/author\u0026gt; \u0026lt;/query\u0026gt; Author/article title query tips As with other metadata queries, if the system finds more than one possible match, the results are considered ambiguous, and no results are returned.", "content": "We support a query mode where only article title and/or first author surname are required. These queries can be performed using an XML query.\nExample: XML author/article-title query \u0026lt;query enable-multiple-hits=\u0026#34;false\u0026#34; secondary-query=\u0026#34;author-title\u0026#34; key=\u0026#34;key1\u0026#34;\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Concluding remarks\u0026lt;/article_title\u0026gt; \u0026lt;author search-all-authors=\u0026#34;true\u0026#34;\u0026gt;Somiari\u0026lt;/author\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;query enable-multiple-hits=\u0026#34;false\u0026#34; secondary-query=\u0026#34;author-title\u0026#34; key=\u0026#34;key1\u0026#34;\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Off-line Approaches\u0026lt;/article_title\u0026gt; \u0026lt;author search-all-authors=\u0026#34;true\u0026#34;\u0026gt;Gustafsson\u0026lt;/author\u0026gt; \u0026lt;/query\u0026gt; Author/article title query tips As with other metadata queries, if the system finds more than one possible match, the results are considered ambiguous, and no results are returned. You can override the one result rule and request that multiple hits may be returned. When performing XML queries, an author-title query may be submitted as a secondary query if the initial full metadata query is unsuccessful. Only the \u0026lt;author\u0026gt; and \u0026lt;article_title\u0026gt; elements should appear in an author/article title query - if other elements are present, the query will be performed as a full metadata query. ", "headings": ["Example: XML author/article-title query ","Author/article title query tips "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/allowing-multiple-hits/", "title": "Allowing multiple hits", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our query engine operates by processing many different rules in order. Normally if the output of any rule is a single DOI, processing terminates and that DOI is returned as the result of the query. If no rule can produce only one DOI, the query fail results (its results are an ambiguous set). 
This behavior is necessary because most processes querying Crossref are automated and are incapable of deciding what to do with multiple DOI records.", "content": "Our query engine operates by processing many different rules in order. Normally if the output of any rule is a single DOI, processing terminates and that DOI is returned as the result of the query. If no rule can produce only one DOI, the query fails (its results are an ambiguous set). This behavior is necessary because most processes querying Crossref are automated and are incapable of deciding what to do with multiple DOI records.\nHowever, in an editorial environment where a person is involved, getting multiple results may be valuable if the system is unable to reduce the match to a single item.\nTo enable Crossref to return more than one result, set the enable-multiple-hits attribute for query to true, exact, or multi_hit_per_rule for a given query item.\nTo retrieve a candidate list of DOIs, use the enable-multiple-hits attribute:\n\u0026lt;query key=\u0026#34;key\u0026#34; enable-multiple-hits=\u0026#34;true/false/multi_hit_per_rule/exact\u0026#34;\u0026gt; Each option invokes the following responses:\nTrue: Instructs the query function to return the single DOI produced by all the various rules. The output of a rule that produces more than one DOI will not be included. MULTI_HIT_PER_RULE: Instructs the query engine to return all DOIs uncovered by each rule. Exact: Disables the normal rule processing and instructs the query engine to perform a simple comparison to the query values and return all DOIs that match (up to a limit of 40). ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/secondary-queries/", "title": "Secondary queries", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our query engine operates by processing many different rules. By default, all metadata provided in a query is used to generate a match, and a result is only returned if exactly one DOI match is found. Enabling a secondary query instructs the query engine to perform specific searches when the initial search fails to find a match.\nAuthor/article title secondary query Queries by default use all metadata present in the request. Requests with only an author and title are treated as author/article title queries. 
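(Referring back to the enable-multiple-hits options documented above: a very small Python sketch of setting that attribute on a query element follows. Only the attribute values named in the documentation are used; as before, this builds the element only and does not submit it.)

# Sketch: the same <query> element with each documented enable-multiple-hits
# value. "true" keeps one DOI per rule, "multi_hit_per_rule" returns all DOIs
# found by each rule, and "exact" bypasses rule processing entirely.
import xml.etree.ElementTree as ET

for mode in ("true", "multi_hit_per_rule", "exact"):
    query = ET.Element("query", {"key": "key", "enable-multiple-hits": mode})
    print(ET.tostring(query, encoding="unicode"))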
When other fields are present, the request is submitted as a metadata search.\nSetting the secondary-query attribute to author-title causes an author/title search to be performed when the initial metadata search fails to find a match.\n\u0026lt;query key=\u0026#34;cit-3\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34; secondary-query=\u0026#34;author-title\u0026#34;\u0026gt; \u0026lt;issn\u0026gt;0360-3016\u0026lt;/issn\u0026gt; \u0026lt;volume\u0026gt;54\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;2\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;215\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;author\u0026gt;Kim\u0026lt;/author\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Potential radiation sensitizing effect of SU5416 by down-regulating the COX-2 expression in human lung cancer cells\u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; Year is included in the author/article title secondary query when present. An author/article title query will only be performed if both author and article title are included in the query.\nMultiple hits secondary query By default, the query engine will only return a result if a single DOI is found. Queries returning multiple results (or hits) are unresolved. Setting the secondary-query attribute to \u0026ldquo;multi-hit\u0026rdquo; instructs the query engine to return all available results.\n\u0026lt;query key=\u0026#34;cit-3\u0026#34; secondary-query=\u0026#34;multiple-hits\u0026#34;\u0026gt; \u0026lt;issn\u0026gt;0360-3016\u0026lt;/issn\u0026gt; \u0026lt;volume\u0026gt;54\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;2\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;215\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;author\u0026gt;Kim\u0026lt;/author\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Potential radiation sensitizing effect of SU5416 by down-regulating the COX-2 expression in human lung cancer cells\u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; This option can also be performed by setting attribute enable-multiple-hits (learn more about allowing multiple hits) to true.\nAuthor/title multiple hits secondary query Setting the secondary-query attribute to \u0026ldquo;author-title-multiple-hits\u0026rdquo; instructs the query engine to perform an author/title query and return multiple hits if the initial search fails.\n\u0026lt;query key=\u0026#34;cit-3\u0026#34; enable-multiple-hits=\u0026#34;true\u0026#34; secondary-query=\u0026#34;author-title-multiple-hits\u0026#34;\u0026gt; \u0026lt;issn\u0026gt;0360-3016\u0026lt;/issn\u0026gt; \u0026lt;volume\u0026gt;54\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;2\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;215\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;author\u0026gt;Kim\u0026lt;/author\u0026gt; \u0026lt;article_title match=\u0026#34;fuzzy\u0026#34;\u0026gt;Potential radiation sensitizing effect of SU5416 by down-regulating the COX-2 expression in human lung cancer cells\u0026lt;/article_title\u0026gt; \u0026lt;/query\u0026gt; Secondary query results Query results will specify which method was used to return the result. 
Successful metadata searches will specify metadata:\n\u0026lt;query key=\u0026#34;cit-3\u0026#34; status=\u0026#34;multiresolved\u0026#34; fl_count=\u0026#34;0\u0026#34; query_mode=\u0026#34;metadata\u0026#34;\u0026gt; Queries resolved by the secondary author/title query will specify author-title:\n\u0026lt;query status=\u0026#34;resolved\u0026#34; fl_count=\u0026#34;0\u0026#34; query_mode=\u0026#34;author-title\u0026#34;\u0026gt; Queries with multiple hits will have a status of multiresolved\n\u0026lt;query status=\u0026#34;multiresolved\u0026#34; query_mode=\u0026#34;metadata\u0026#34;\u0026gt; ", "headings": ["Author/article title secondary query ","Multiple hits secondary query ","Author/title multiple hits secondary query ","Secondary query results "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/using-the-match-attribute/", "title": "Using the match attribute", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our query engine operates by evaluating several logic rules in order. Each rule focuses on certain fields in the query, with the first rule processing the entire query. If any rule returns a single DOI as its output, this DOI is taken as the result for the query, and rule processing terminates.\nIn an XML query you can exercise some control over how the query engine works by using the match attribute that is available on many query elements.", "content": "Our query engine operates by evaluating several logic rules in order. Each rule focuses on certain fields in the query, with the first rule processing the entire query. If any rule returns a single DOI as its output, this DOI is taken as the result for the query, and rule processing terminates.\nIn an XML query you can exercise some control over how the query engine works by using the match attribute that is available on many query elements.\nfuzzy: Fuzzy matching is allowed but the field is not optional, it must be matched. optional: In most cases, optional is a default property. Using a value that excludes optional (for example, match=\u0026quot;\u0026quot;) tells the query engine that this field is not optional and must be present, and matched, for a DOI match to occur. exact: Instructs the query engine to not apply any of its fuzzy comparison logic. This is the same as match=\u0026quot;\u0026quot; (which means not optional and not fuzzy). Using exact is not compatible with optional, or fuzzy. null: Instructs the query engine to match on the query element if it is not present in the metadata. When using this option it makes no sense to provide an element value (for example, the correct style would be \u0026lt;author match=\u0026quot;null\u0026quot;/\u0026gt; Multiple values may be assigned (as in match=\u0026quot;optional fuzzy\u0026quot;). 
Typical uses include:\nmatch=\u0026quot;fuzzy\u0026quot; - instructs the query engine to apply its fuzzy comparison logic for this field match=\u0026quot;optional fuzzy\u0026quot; - a rule may drop the field altogether or use fuzzy matching match=\u0026quot;exact\u0026quot; - field must be matched and no fuzzy matching is allowed match=\u0026quot;\u0026quot; - same as match=\u0026quot;exact\u0026quot; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/retrieving-doi-info-metadata/", "title": "Retrieving DOI info-metadata", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Crossref-specific information about a DOI may be retrieved by including the appropriate parameters in the following URL:\nhttps://doi.crossref.org/search/doi?pid={email@address.com}\u0026amp;format=info\u0026amp;doi=doi This data is distinct from item metadata deposited for each DOI, and includes information such as timestamp, owner prefix, and primary/alias status.\nExample info-metadata DOI: 10.2353/jmoldx.2009.090037 CITATION-ID: 39306481 JOURNAL-TITLE: The Journal of Molecular Diagnostics JOURNAL-CITE-ID: 58207 BOOK-CITE-ID: SERIES-ID: DEPOSIT-TIMESTAMP: 20110701073812000 OWNER: 10.1016 LAST-UPDATE: 2011-07-04 21:13:59 PRIME-DOI: none", "content": "Crossref-specific information about a DOI may be retrieved by including the appropriate parameters in the following URL:\nhttps://doi.crossref.org/search/doi?pid={email@address.com}\u0026amp;format=info\u0026amp;doi=doi This data is distinct from item metadata deposited for each DOI, and includes information such as timestamp, owner prefix, and primary/alias status.\nExample info-metadata DOI: 10.2353/jmoldx.2009.090037 CITATION-ID: 39306481 JOURNAL-TITLE: The Journal of Molecular Diagnostics JOURNAL-CITE-ID: 58207 BOOK-CITE-ID: SERIES-ID: DEPOSIT-TIMESTAMP: 20110701073812000 OWNER: 10.1016 LAST-UPDATE: 2011-07-04 21:13:59 PRIME-DOI: none\n", "headings": ["Example info-metadata "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/retrieving-dois-by-title/", "title": "Retrieving DOIs by title", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "A list of DOIs by title (top-level title, such as journal or book title, not article- or chapter-level title) may be retrieved using the following format:\nhttps://doi.crossref.org/search/doi?pid={email@address.com}\u0026amp;format=doilist\u0026amp;pubid=record type pubid where:\npid = your email address record type = J for journals, B for books or conference proceedings, S for series pubid = publication ID For example:\nhttps://doi.crossref.org/search/doi?pid={email@address.com}\u0026amp;format=doilist\u0026amp;pubid=J173705 The results returned match the title detail results from the depositor report, and include a list of all DOIs for the title, the owner prefix for each DOI, the timestamp used in the most recent deposit, and the data the DOI was last updated.", "content": "A list of DOIs by title (top-level title, such as journal or book title, not article- or chapter-level title) may be retrieved using the following format:\nhttps://doi.crossref.org/search/doi?pid={email@address.com}\u0026amp;format=doilist\u0026amp;pubid=record type pubid where:\npid = your email address record type = J for journals, B for books or conference proceedings, S for series pubid 
= publication ID For example:\nhttps://doi.crossref.org/search/doi?pid={email@address.com}\u0026amp;format=doilist\u0026amp;pubid=J173705 The results returned match the title detail results from the depositor report, and include a list of all DOIs for the title, the owner prefix for each DOI, the timestamp used in the most recent deposit, and the date the DOI was last updated.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/querying-with-formatted-citations/", "title": "Querying with formatted citations", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Formatted citations are sometimes referred to as plain text references. For example, an APA-formatted citation for the DOI https://0-doi-org.libus.csd.mu.edu/10.1038/nclimate1398 is:\nHungate, B. A., and Hampton, H. M. (2012). Ecosystem services: Valuing ecosystems for climate. Nature Climate Change, 2(3), 151-152. https://0-doi-org.libus.csd.mu.edu/10.1038/nclimate1398\nCrossref has two interfaces that can be used for making formatted citation queries:\nSimple Text Query: a tool which allows a user to cut-and-paste references into an interactive form and receive instant results, or upload a text file and receive HTML format results via email.", "content": "Formatted citations are sometimes referred to as plain text references. For example, an APA-formatted citation for the DOI https://0-doi-org.libus.csd.mu.edu/10.1038/nclimate1398 is:\nHungate, B. A., and Hampton, H. M. (2012). Ecosystem services: Valuing ecosystems for climate. Nature Climate Change, 2(3), 151-152. https://0-doi-org.libus.csd.mu.edu/10.1038/nclimate1398\nCrossref has two interfaces that can be used for making formatted citation queries:\nSimple Text Query: a tool which allows a user to cut-and-paste references into an interactive form and receive instant results, or upload a text file and receive HTML format results via email. XML API: supports XML-formatted queries via HTTPS POST and GET. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/querying-with-special-characters/", "title": "Querying with special characters", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Crossref performs some fuzzy matching, but for best results, queries for metadata which includes special characters should also include special characters.\nFor example, DOI https://0-doi-org.libus.csd.mu.edu/10.1260/095830506778119452 was deposited with the metadata shown below. 
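(Returning briefly to the info-metadata and DOI-list URLs documented just above: both can be fetched with a plain HTTPS GET. The sketch below uses only the parameters shown in those URLs; the pid value is a placeholder for your own email address, and the DOI and publication ID are the documented examples.)

# Minimal sketch: retrieve info-metadata for one DOI and the DOI list for one
# publication, using the documented search/doi URL and parameters.
import urllib.parse
import urllib.request

def fetch(params):
    url = "https://doi.crossref.org/search/doi?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

info = fetch({"pid": "email@address.com", "format": "info",
              "doi": "10.2353/jmoldx.2009.090037"})
doilist = fetch({"pid": "email@address.com", "format": "doilist",
                 "pubid": "J173705"})
print(info)
print(doilist)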
Notice that the first author surname (ámon) has a numerical character entity, in this case the character á.\n\u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Twenty years after chernobyl in Hungary: the Hungarian perspective of energy policy and the role of nuclear power\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ada\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt; \u0026amp;#x00E1;mon \u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date =\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;7\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;1\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2006\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date =\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;7\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;29\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2009\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;383\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;399\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;pii\u0026#34;\u0026gt;C8612851662R7W4H\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.", "content": "Crossref performs some fuzzy matching, but for best results, queries for metadata which includes special characters should also include special characters.\nFor example, DOI https://0-doi-org.libus.csd.mu.edu/10.1260/095830506778119452 was deposited with the metadata shown below. 
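The deposited record and the no-match/match queries that follow show why the entity form matters. As an illustration only (not part of the documented API), a short Python helper can convert non-ASCII characters in a query value into the hexadecimal character-reference form used in the deposited metadata and in the "match found" query below:

def to_hex_entities(value: str) -> str:
    # Replace each non-ASCII character with a hexadecimal numeric character
    # reference, e.g. "á" becomes "&#x00E1;", matching the deposited form.
    return "".join(
        ch if ord(ch) < 128 else "&#x{:04X};".format(ord(ch))
        for ch in value
    )

print(to_hex_entities("ámon"))  # prints: &#x00E1;mon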
Notice that the first author surname (ámon) has a numerical character entity, in this case the character á.\n\u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Twenty years after chernobyl in Hungary: the Hungarian perspective of energy policy and the role of nuclear power\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ada\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt; \u0026amp;#x00E1;mon \u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date =\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;7\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;1\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2006\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date =\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;7\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;29\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2009\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;383\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;399\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;pii\u0026#34;\u0026gt;C8612851662R7W4H\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1260/095830506778119452\u0026lt;/doi\u0026gt; \u0026lt;/doi_data\u0026gt; If we formulate a query that does not include page number, we must include the author using the entity and not some other form of character, for example:\nNo match:\n\u0026lt;query key=\u0026#34;mykey\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Energy \u0026amp;amp; Environment\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;ámon\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;17\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;3\u0026lt;/issue\u0026gt; \u0026lt;year\u0026gt;2006\u0026lt;/year\u0026gt; \u0026lt;/query\u0026gt; No match:\n\u0026lt;query key=\u0026#34;mykey\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Energy \u0026amp;amp; Environment\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;amon\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;17\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;3\u0026lt;/issue\u0026gt; \u0026lt;year\u0026gt;2006\u0026lt;/year\u0026gt; \u0026lt;/query\u0026gt; Match found:\n\u0026lt;query key=\u0026#34;mykey\u0026#34; enable-multiple-hits=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Energy \u0026amp;amp; Environment\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;\u0026amp;#x00E1;mon\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;17\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;3\u0026lt;/issue\u0026gt; \u0026lt;year\u0026gt;2006\u0026lt;/year\u0026gt; \u0026lt;/query\u0026gt; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-api/retrieving-publication-ids/", "title": "Retrieving publication IDs", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Every top-level publication in Crossref is assigned a 
unique publication ID. Publication IDs are mainly used for internal purposes, but may be useful when retrieving data using OAI-PMH, or identifying a specific title. For most purposes, publication IDs are always preceded by the publication type (J, B, or S). B indicates non-journal publications including books, conference proceedings, standards, reports, dissertations, posted content, and datasets.\nPublication IDs may be retrieved via the following:", "content": "Every top-level publication in Crossref is assigned a unique publication ID. Publication IDs are mainly used for internal purposes, but may be useful when retrieving data using OAI-PMH, or identifying a specific title. For most purposes, publication IDs are always preceded by the publication type (J, B, or S). B indicates non-journal publications including books, conference proceedings, standards, reports, dissertations, posted content, and datasets.\nPublication IDs may be retrieved via the following:\nOAI-PMH: an OAI-PMH ListSets request will return titles and publication IDs for journals, books, conference proceedings, and series-level data: https://0-oai-crossref-org.libus.csd.mu.edu/OAIHandler?verb=ListSets https://0-oai-crossref-org.libus.csd.mu.edu/OAIHandler?verb=ListSets\u0026amp;set=B J (journal) is the default set, set=B must be specified to retrieve book or conference proceeding titles, and S for series-level titles. Sets may be further limited by member prefix\nThe publication ID is listed within the \u0026lt;setspec\u0026gt; element, after the set and member prefix. For example, within the following set, 24 is the publication ID for Journal of Clinical Psychology: \u0026lt;set\u0026gt; \u0026lt;setSpec\u0026gt;J:10.1002:24\u0026lt;/setSpec\u0026gt; \u0026lt;setName\u0026gt;Journal of Clinical Psychology\u0026lt;/setName\u0026gt; \u0026lt;/set\u0026gt; Browsable title list: this list includes the publication ID next to each title in the search results. Click to reveal the ID UNIXSD metadata: the UNIXSD output format includes the journal-id (J) or book-id (B). ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-output-formats/", "title": "XML output formats", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "XML output formats include:\nUNIXSD query output format | UNIXSD metadata | UNIXSD schema UNIXREF query output format | UNIXREF schema XSD XML query output format ", "content": "XML output formats include:\nUNIXSD query output format | UNIXSD metadata | UNIXSD schema UNIXREF query output format | UNIXREF schema XSD XML query output format ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/publication-ids/", "title": "Publication IDs", "subtitle":"", "rank": 4, "lastmod": "2022-01-19", "lastmod_ts": 1642550400, "section": "Documentation", "tags": [], "description": "Publication IDs Every publication in our system is assigned a unique publication ID. These are used mostly for internal purposes, but may be useful when retrieving data in bulk or identifying a specific title. 
Publication IDs may be retrieved using OAI-PMH, or from the browsable title list.\nFind publication IDs using an OAI-PMH request An OAI-PMH ListSets request will return titles and publication IDs for journals, books, conference proceedings, and series-level data:", "content": "Publication IDs Every publication in our system is assigned a unique publication ID. These are used mostly for internal purposes, but may be useful when retrieving data in bulk or identifying a specific title. Publication IDs may be retrieved using OAI-PMH, or from the browsable title list.\nFind publication IDs using an OAI-PMH request An OAI-PMH ListSets request will return titles and publication IDs for journals, books, conference proceedings, and series-level data:\nJournal data http://0-oai-crossref-org.libus.csd.mu.edu/OAIHandler?verb=ListSets Non-journal data http://0-oai-crossref-org.libus.csd.mu.edu/OAIHandler?verb=ListSets\u0026amp;set=B J (journal) is the default set, set=B must be specified to retrieve book or conference proceeding titles, and S for series-level titles. Sets may be further limited by member prefix. Learn more about OAI-PMH.\nThe publication ID is listed within the \u0026lt;setspec\u0026gt; element, after the set and member prefix. For example, within the following set, 24 is the publication ID for Journal of Clinical Psychology:\n\u0026lt;set\u0026gt; \u0026lt;setSpec\u0026gt;J:10.1002:24\u0026lt;/setSpec\u0026gt; \u0026lt;setName\u0026gt;Journal of Clinical Psychology\u0026lt;/setName\u0026gt; \u0026lt;/set\u0026gt; Find publication IDs using the browsable title list The browsable title list includes the publication ID next to each title in the search results. Select the icon to reveal the ID. For most purposes, publication IDs are always preceded by the publication type (J, B, or S for journal, book, or series).\n", "headings": ["Publication IDs ","Find publication IDs using an OAI-PMH request ","Find publication IDs using the browsable title list "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-output-formats/unixsd-query-output-format/", "title": "UNIXSD query output format", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "UNIXSD query results include member metadata as deposited (as with the UNIXREF format) as well as some Crossref-generated information about the metadata record. UNIXSD is the most comprehensive metadata output format available for our metadata records.\nUNIXSD format will return deposited references for other members. References will also be returned to members querying for their own deposited data.\nUNIXSD metadata UNIXSD results contain a sequence of Crossref produced meta-metadata including:", "content": "UNIXSD query results include member metadata as deposited (as with the UNIXREF format) as well as some Crossref-generated information about the metadata record. UNIXSD is the most comprehensive metadata output format available for our metadata records.\nUNIXSD format will return deposited references for other members. 
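The ListSets request and setSpec layout described above can be processed with a few lines of Python. This is a sketch under the assumption that only the set identifier and title are needed; it deliberately matches on tag suffixes rather than declaring the OAI-PMH namespace, and it reads only the first page of results (resumption tokens are not handled).

# Sketch: list publication IDs and titles from an OAI-PMH ListSets response,
# using the documented URL. The publication ID is the last part of setSpec,
# e.g. "J:10.1002:24" -> "24".
import urllib.request
import xml.etree.ElementTree as ET

URL = "https://0-oai-crossref-org.libus.csd.mu.edu/OAIHandler?verb=ListSets"

with urllib.request.urlopen(URL) as resp:
    tree = ET.parse(resp)

for elem in tree.iter():
    if elem.tag.endswith("set"):
        spec = name = None
        for child in elem:
            if child.tag.endswith("setSpec"):
                spec = child.text
            elif child.tag.endswith("setName"):
                name = child.text
        if spec and name:
            print(spec.split(":")[-1], name)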
References will also be returned to members querying for their own deposited data.\nUNIXSD metadata UNIXSD results contain a sequence of Crossref produced meta-metadata including:\nbook-id: a Crossref internal identifier assigned to a non-journal title (book, conference proceeding, database, standard, dissertation, or report/working paper) citation-id: a Crossref internal identifier assigned to a DOI record citedby-count: number of Cited-by matches identified by Crossref created: the date the record was created deposit-timestamp: timestamp provided in most recent submission journal-id: a Crossref internal identifier assigned to a journal title last-update: the date the record was last updated member-id: a Crossref internal identifier assigned to a member owner-prefix: the prefix that ‘owns’ (has permissions to update) the DOI record prefix-name: name associated with the prefix prime: the DOI a record is aliased to (if the record is aliased) publisher-name: member account name relation: related item, includes ‘type’ attribute to identify the identifier, and ‘claim’ to identify type of relationship series-id: a Crossref internal identifier assigned to a series title (applies to book and conference proceeding series) This meta-metadata is contained in a \u0026lt;crm-item\u0026gt; element, for example:\n\u0026lt;crm-item name=\u0026#34;citedby-count\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;0\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;owner-prefix\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;10.1353\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;citation-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;25715607\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;journal-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;48965\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;book-id\u0026#34; type=\u0026#34;number\u0026#34; /\u0026gt; \u0026lt;crm-item name=\u0026#34;series-id\u0026#34; type=\u0026#34;number\u0026#34; /\u0026gt; \u0026lt;crm-item name=\u0026#34;last-update\u0026#34; type=\u0026#34;date\u0026#34;\u0026gt;2007-08-07 15:31:43.0\u0026lt;/crm-item\u0026gt; The full metadata record as deposited by members is available as well. 
The member as-deposited XML will begin with the \u0026lt;crossref\u0026gt; element.\nUNIXSD schema UNIXSD results are generated using crossref_query_output3.0.xsd (schema | documentation)\nExample UNIXSD result \u0026lt;crossref_result xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qrschema/3.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;3.0\u0026#34; xsi:schemaLocation=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qrschema/3.0 https://0-data-crossref-org.libus.csd.mu.edu/schemas/crossref_query_output3.0.xsd\u0026#34;\u0026gt; \u0026lt;query_result\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;none\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;query status=\u0026#34;resolved\u0026#34;\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1353/aq.1998.0005\u0026lt;/doi\u0026gt; \u0026lt;crm-item name=\u0026#34;publisher-name\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;Johns Hopkins University Press\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;prefix-name\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;Muse - Johns Hopkins University Press\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;member-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;147\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;citation-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;25715607\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;journal-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;48965\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;deposit-timestamp\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;20070206205234\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;owner-prefix\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;10.1353\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;last-update\u0026#34; type=\u0026#34;date\u0026#34;\u0026gt;2007-02-13T20:56:13Z\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;created\u0026#34; type=\u0026#34;date\u0026#34;\u0026gt;2007-02-07T02:04:57Z\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;citedby-count\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;1\u0026lt;/crm-item\u0026gt; \u0026lt;doi_record\u0026gt; \u0026lt;crossref xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/xschema/1.0\u0026#34; xsi:schemaLocation=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/xschema/1.0 https://0-doi-crossref-org.libus.csd.mu.edu/schemas/unixref1.0.xsd\u0026#34;\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;American Quarterly\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;American Quarterly\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;electronic\u0026#34;\u0026gt;1080-6490\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1998\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;50\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;\u0026amp;quot;Disturbing the 
Peace: What Happens to American Studies If You Put African American Studies at the Center?\u0026amp;quot;: Presidential Address to the American Studies Association, October 29, 1997\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Mary Helen.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Washington\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1998\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;23\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1353/aq.1998.0005\u0026lt;/doi\u0026gt; \u0026lt;timestamp\u0026gt;20070206205234\u0026lt;/timestamp\u0026gt; \u0026lt;resource\u0026gt; http://0-muse-jhu-edu.libus.csd.mu.edu/content/crossref/journals/american_quarterly/v050/50.1washington.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/crossref\u0026gt; \u0026lt;/doi_record\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_result\u0026gt; \u0026lt;/crossref_result\u0026gt; ", "headings": ["UNIXSD metadata ","UNIXSD schema ","Example UNIXSD result "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-output-formats/unixref-query-output-format/", "title": "UNIXREF query output format", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our unified XML (UNIXREF) format returns all metadata submitted by the member responsible for a DOI. The UNIXREF data does not include namespaces or namespace prefixes, which are used extensively for non-bibliographic metadata.\nThe UNIXREF format will return deposited citations from other members. Citations will also be returned to members querying for their own deposited data.\nUNIXREF schema The majority of UNIXREF results use the unixref1.1.xsd schema (schema | documentation). Some results involving book or conference proceeding data deposited prior to a deposit schema change use unixref1.", "content": "Our unified XML (UNIXREF) format returns all metadata submitted by the member responsible for a DOI. The UNIXREF data does not include namespaces or namespace prefixes, which are used extensively for non-bibliographic metadata.\nThe UNIXREF format will return deposited citations from other members. Citations will also be returned to members querying for their own deposited data.\nUNIXREF schema The majority of UNIXREF results use the unixref1.1.xsd schema (schema | documentation). 
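Given a UNIXSD result like the example above, the crm-item meta-metadata can be collected into a simple dictionary. A minimal sketch, assuming the result has already been retrieved as a string (retrieval itself is not shown); namespaces are ignored by matching on the tag suffix:

import xml.etree.ElementTree as ET

def crm_items(unixsd_xml: str) -> dict:
    # Collect <crm-item name="...">value</crm-item> pairs from a UNIXSD
    # query result.
    root = ET.fromstring(unixsd_xml)
    items = {}
    for elem in root.iter():
        if elem.tag.endswith("crm-item"):
            items[elem.get("name")] = (elem.text or "").strip()
    return items

# With the example record above: items["owner-prefix"] -> "10.1353",
# items["journal-id"] -> "48965", items["citedby-count"] -> "1".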
Some results involving book or conference proceeding data deposited prior to a deposit schema change use unixref1.0.xsd (schema | documentation).\nExample UNIXREF result \u0026lt;doi_records\u0026gt; \u0026lt;doi_record owner=\u0026#34;10.1353\u0026#34; timestamp=\u0026#34;2007-02-13 15:56:13\u0026#34;\u0026gt; \u0026lt;crossref\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;American Quarterly\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;American Quarterly\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=l\u0026#34;electronic\u0026#34;\u0026gt;1080-6490\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1998\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;50\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;\u0026amp;quot;Disturbing the Peace: What Happens to American Studies If You Put African American Studies at the Center?\u0026amp;quot;: Presidential Address to the American Studies Association, October 29, 1997\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Mary Helen.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Washington\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1998\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;23\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1353/aq.1998.0005\u0026lt;/doi\u0026gt; \u0026lt;timestamp\u0026gt;20070206205234\u0026lt;/timestamp\u0026gt; \u0026lt;resource\u0026gt; http://0-muse-jhu-edu.libus.csd.mu.edu/content/crossref/journals/american_quarterly/v050/50.1washington.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/crossref\u0026gt; \u0026lt;/doi_record\u0026gt; \u0026lt;/doi_records\u0026gt; ", "headings": ["UNIXREF schema ","Example UNIXREF result "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/xml-output-formats/xsd-xml-query-output-format/", "title": "XSD XML query output format", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The XSD_XML format follows the Crossref query output schema. It provides basic citation metadata that has been processed by Crossref. 
For doi-to-metadata XML querying, some controls are available for including expanded data and/or components in results.\nXSD XML examples The default query set will return basic citation metadata only:\n\u0026lt;query key=\u0026#34;q1\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1107/s1600536814013208\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; Set expanded-results=\u0026ldquo;true\u0026rdquo; to return additional author, article title, and date information:\n\u0026lt;query list-components=\u0026#34;false\u0026#34; expanded-results=\u0026#34;true\u0026#34; key=\u0026#34;expanded\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1107/s1600536814013208\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; Set list-components=\u0026ldquo;true\u0026rdquo; to include components in results:", "content": "The XSD_XML format follows the Crossref query output schema. It provides basic citation metadata that has been processed by Crossref. For doi-to-metadata XML querying, some controls are available for including expanded data and/or components in results.\nXSD XML examples The default query set will return basic citation metadata only:\n\u0026lt;query key=\u0026#34;q1\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1107/s1600536814013208\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; Set expanded-results=\u0026ldquo;true\u0026rdquo; to return additional author, article title, and date information:\n\u0026lt;query list-components=\u0026#34;false\u0026#34; expanded-results=\u0026#34;true\u0026#34; key=\u0026#34;expanded\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1107/s1600536814013208\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; Set list-components=\u0026ldquo;true\u0026rdquo; to include components in results:\n\u0026lt;query list-components=\u0026#34;true\u0026#34; expanded-results=\u0026#34;true\u0026#34; key=\u0026#34;expanded-with-components\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1107/s1600536814013208\u0026lt;/doi\u0026gt; \u0026lt;/query\u0026gt; Example files: xsd_xml.out.xml and xsd_xml.in.xml\n", "headings": ["XSD XML examples "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/retraction-watch/", "title": "Retraction Watch", "subtitle":"", "rank": 4, "lastmod": "2025-01-19", "lastmod_ts": 1737244800, "section": "Documentation", "tags": [], "description": "Research can be modified after publication, including being corrected or retracted. This is a natural part of the research process and important for accurately reporting changes. While members can deliver this information to us, Retraction Watch has also collected a large number of retractions. Many of these have not been reported by our members.\nIn September 2023, we acquired the Retraction Watch database from the Center of Scientific Integrity and have made it publicly available.", "content": "Research can be modified after publication, including being corrected or retracted. This is a natural part of the research process and important for accurately reporting changes. While members can deliver this information to us, Retraction Watch has also collected a large number of retractions. Many of these have not been reported by our members.\nIn September 2023, we acquired the Retraction Watch database from the Center of Scientific Integrity and have made it publicly available. The database contains retractions gathered from publisher websites and is updated every working day by Retraction Watch. 
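(For the XSD_XML controls shown just above: a minimal Python sketch of a DOI-to-metadata query element that asks for expanded results and components, mirroring the documented example. As with the earlier sketches, the query batch envelope and submission are not covered here.)

import xml.etree.ElementTree as ET

# Sketch: DOI query with the expanded-results and list-components controls.
query = ET.Element("query", {
    "key": "expanded-with-components",
    "expanded-results": "true",
    "list-components": "true",
})
ET.SubElement(query, "doi").text = "10.1107/s1600536814013208"
print(ET.tostring(query, encoding="unicode"))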
Some other update types, such as expressions of concern and corrections, are also included in the data, but these are not as comprehensive as retractions. Various methods are used to find retractions, including searching scholarly databases, checking publisher websites, web searches, and reports from the community. For further details, see this document.\nAccessing the Retraction Watch Database There are two ways to access the Retraction Watch data, either via the Crossref REST API or downloading the full dataset.\nREST API Retractions are included in the update-to field of json files in the REST API. Retractions and other updates from Retraction Watch are identified by a source field, which can have a value of publisher or retraction-watch. The following query provides a list of 100 retractions:\nhttps://api.crossref.org/v1/works?filter=update-type:retraction\nCSV dataset The Retraction Watch database is available in csv format from a git repository. It is updated once per working day. Git is a widely used for sharing software code and can also be used for datasets.\nTo create a local copy of the Retraction Watch metadata file, install git and use the command git clone https://gitlab.com/crossref/retraction-watch-data. This creates a folder called retraction-watch-data. When you want to update to the most recent version, run the command git pull from this folder.\nData in the csv file is comma-separated, with lists within a single entry separated by a semicolon (such as author names or reasons for retraction). The column headings in the csv file are:\nRecord ID: An internal identifier from Retraction Watch. Title: The title of the retracted or updated content. Subject: The subject area of the publication. Institution: Author affiliations, as given in the content. Journal: The source (serial, book, etc.) in which the research was published. Publisher: The organisation responsible for publication. Country: Countries included in author affiliations. Author: A list of author names. URLS: Links to relevant pages on the Retraction Watch website, including blog posts about the retraction. ArticleType: The content type, using a list of types maintained by Retraction Watch. Note that this isn’t the same as the Crossref work type. RetractionDate: The date of the published retraction. RetractionDOI: The DOI of the published retraction, if available. If there is no DOI, the value is either blank, \u0026lsquo;unavailable\u0026rsquo;, or \u0026lsquo;Unavailable\u0026rsquo;. RetractionPubMedID: PubMED ID of the published retraction, if available. If there is no Pubmed ID, the value is either blank or 0. OriginalPaperDate: The publication date of the retracted content. OriginalPaperDOI: The DOI of the retracted publication, if available. If there is no DOI, the value is either blank, \u0026lsquo;unavailable\u0026rsquo;, or \u0026lsquo;Unavailable\u0026rsquo;. OriginalPaperPubMedID: PubMED ID of the original publication, if available. If there is no Pubmed ID, the value is either blank or 0. RetractionNature: The type of update notice, which can be Retraction, Correction, Expression of concern, or Reinstatement. Note that these are different to the list of update types in the Crossref schema. Reason: A list of reasons for retraction. This uses a controlled vocabulary maintained by Retraction Watch. Paywalled: Is a fee or paid subscription required to access the retraction notice? Note that there can be cases where this changes some time after publication of the notice. 
Notes: Additional comments about the retraction. These fields are also documented on the Retraction Watch website. Changes to the field names and vocabulary used are recorded by Retraction Watch.\n", "headings": ["Accessing the Retraction Watch Database","REST API","CSV dataset"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/dois-openurl-and-link-resolvers/", "title": "DOIs, OpenURL, and link resolvers", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "DOIs point to the authoritative version of content on the publisher\u0026rsquo;s web site and to publisher-designated resources. In an institutional context, it is often useful to direct users to other resources via their institution\u0026rsquo;s link resolver.\nCrossref acts as a source of metadata to enhance OpenURL-based local link resolvers and supports DOI redirection for localized linking within library holdings.\nHow DOIs and OpenURL work together DOIs and OpenURL work together in several ways.", "content": "DOIs point to the authoritative version of content on the publisher\u0026rsquo;s web site and to publisher-designated resources. In an institutional context, it is often useful to direct users to other resources via their institution\u0026rsquo;s link resolver.\nCrossref acts as a source of metadata to enhance OpenURL-based local link resolvers and supports DOI redirection for localized linking within library holdings.\nHow DOIs and OpenURL work together DOIs and OpenURL work together in several ways. First, the DOI resolver itself\u0026mdash;where link resolution occurs\u0026mdash;is OpenURL-enabled. This means that it can recognize a user with access to a local resolver. When a user clicks on a DOI link, it is used as a key to pull the metadata needed to create the OpenURL targeting the local link resolver out of our metadata, and redirects that DOI back to the user\u0026rsquo;s local resolver. The institutional user is then directed to appropriate resources.\nIt works this way:\nA library user clicks a DOI link within a link resolver-enabled resource. A cookie on the user’s machine alerts the DOI proxy server to redirect this DOI to the local linking server. The local linking server receives the metadata needed for local resolution, either from the source of the link or from Crossref via OpenURL. ", "headings": ["How DOIs and OpenURL work together "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/choose-content-registration-method/", "title": "Choosing a content registration method", "subtitle":"", "rank": 4, "lastmod": "2022-12-07", "lastmod_ts": 1670371200, "section": "Documentation", "tags": [], "description": "In order to get working DOIs for your content and share your metadata with the scholarly ecosystem, you need to register your content with Crossref.\nYour metadata is stored with us as XML. Some members send us XML files directly, but if you’re not familiar with writing XML files, you can use a helper tool instead. There are three helper tools available - these are online forms with different fields for you to complete, and this information is converted to XML and deposited with Crossref for you.", "content": "In order to get working DOIs for your content and share your metadata with the scholarly ecosystem, you need to register your content with Crossref.\nYour metadata is stored with us as XML. 
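Returning to the Retraction Watch access routes described above: the REST API filter query can be fetched and inspected with a few lines of Python. This is a sketch only; the message/items layout of the response and the placement of the update-to and source fields are assumptions based on the description above and general REST API conventions, so check the live response before relying on them.

# Sketch: list retractions via the documented REST API filter query.
# For the full CSV dataset instead, use:
#   git clone https://gitlab.com/crossref/retraction-watch-data
import json
import urllib.request

URL = "https://api.crossref.org/v1/works?filter=update-type:retraction"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Assumption: results are wrapped in message -> items.
for work in data.get("message", {}).get("items", []):
    for update in work.get("update-to", []):
        # source is expected to be "publisher" or "retraction-watch".
        print(work.get("DOI"), update.get("type"), update.get("source"))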
Some members send us XML files directly, but if you’re not familiar with writing XML files, you can use a helper tool instead. There are three helper tools available - these are online forms with different fields for you to complete, and this information is converted to XML and deposited with Crossref for you. A big decision to make as a new member is which of our content registration methods to use.\nHelper tools Crossref XML plugin for OJS (Open Journal Systems) - you can use this helper tool if you’re using the Open Journal Systems publishing platform. Web deposit form - you can use this form to register metadata for journals, books, conference proceedings, reports, and dissertations. Grant registration form - you can use this form to register metadata for grants Direct deposit of XML Upload JATS XML using the web deposit form Upload XML files using our admin tool XML deposit using HTTPS POST Quick guide to choosing your content registration method Use this chart to choose the best option for you:\nShow image × Which registration tools support which metadata? Record type / deposit method OJS-to-Crossref plugin Web deposit form Direct deposit of XML Books and chapters No (OJS is a journal platform) Yes Yes Conference proceedings No (OJS is a journal platform) Yes Yes Datasets No (OJS is a journal platform) No Yes Dissertations No (OJS is a journal platform) Yes Yes Journals and articles Yes Yes Yes Peer reviews No (OJS is a journal platform) No Yes Posted content (including preprints) No (OJS is a journal platform) No Yes Reports and working papers No (OJS is a journal platform) Yes Yes Standards No (OJS is a journal platform) No Yes Documentation for this deposit method Crossref XML plugin for OJS Web deposit form Direct deposit of XML Which deposit methods can I use to register pending publications? Crossref XML plugin for OJS Web deposit form XML (via the web deposit form, admin tool, or HTTPS POST) Pending publication No No Yes ", "headings": ["Helper tools ","Direct deposit of XML ","Quick guide to choosing your content registration method ","Which registration tools support which metadata? ","Which deposit methods can I use to register pending publications? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/account-credentials/", "title": "Your Crossref account credentials", "subtitle":"", "rank": 4, "lastmod": "2024-08-23", "lastmod_ts": 1724371200, "section": "Documentation", "tags": [], "description": "To register your content with us you’ll need a set of Crossref account credentials. These credentials will consist of a username and a password.\nDepending on when you joined Crossref and how you work with us, your organization might use unique and personal user credentials for each user at your organization, or alternatively, everyone at your whole organization might share a single set of role credentials. When you apply for membership, we’ll set you up with the best option based on the content registration tool you plan to use, and whether you joined Crossref through a sponsor.", "content": "To register your content with us you’ll need a set of Crossref account credentials. These credentials will consist of a username and a password.\nDepending on when you joined Crossref and how you work with us, your organization might use unique and personal user credentials for each user at your organization, or alternatively, everyone at your whole organization might share a single set of role credentials. 
When you apply for membership, we’ll set you up with the best option based on the content registration tool you plan to use, and whether you joined Crossref through a sponsor.\nOn this page, learn more about:\nPersonal user credentials Organization-wide shared role credentials Forgotten your password, or want to change your password? Password reset for personal user credentials Password reset for organization-wide shared role credentials Want to remove a set of user credentials from having access to your account? Personal user credentials Using personal user credentials to access our tools and services is the most secure and flexible option.\nIf your organization will be using personal user credentials, each individual who needs to access our system will have a unique set of credentials. Their username will be their email address, and their password will be one that they themselves set, and only they know.\nWhen we first set up a new member account with user credentials, we’ll create user credentials for the nominated technical contact only. We send the nominated technical contact an email which includes a link for them to set their personal password.\nThese personal user credentials are unique to each individual user and should not be shared with others. If there are other people at your organization (or a third party) who need to register content as well as your nominated technical contact, you will need to request that we add them as a user. This request will need to come from one of the main contacts we hold on your account to keep things secure.\nEach set of user credentials will be associated with a role – this role gives your users permission to register content on behalf of your organization. For some tools and services, the user will need to specify the role too.\nOrganization-wide shared role credentials If everyone at your organization will be using a single set of shared role credentials to access our tools and services, we’ll create a new role when we first create your account. We\u0026rsquo;ll then send an email to one person at your organization who will create a central, shared password for your organization.\nIf your organization is a direct member, the person who will set the shared password will be your Technical contact. If you are a member of Crossref through a sponsor, then your Sponsor will set the password.\nThis new role and password can then be shared with anyone who will be registering content with Crossref for your organization. Individual people will all use the same shared role as their username, and the same shared password as the password.\nForgotten your password, or want to change your password? Password reset for personal user credentials If you use your personal email address and password to access our tools and services, you can use the \u0026ldquo;forgotten password\u0026rdquo; link in the admin tool or web deposit form. These tools will send you an email with a link to reset your password.\nWhen you change your personal user credentials password, this won’t have any effect on any other users at your organization.\nPassword reset for organization-wide shared role credentials If your organization uses a central set of shared role credentials and you need to change the password, please contact us and we will be able to send a password reset email to one of the main contacts on your account. 
Crossref staff are not able to view or share the password.\nDon’t forget, if you update the password on shared role credentials, then you will need to let all your colleagues know about the new password so they can still access our tools and services.\nWant to remove a set of user credentials from having access to your account? If your organization uses unique user credentials for each person, you may want to remove one of these users from having access to your account - for example, if someone has left your organization, or if you have stopped working with a third party. If you would like to remove specific users from your account, one of the main contacts on your account can contact us to request this change.\n", "headings": ["Personal user credentials ","Organization-wide shared role credentials ","Forgotten your password, or want to change your password? ","Password reset for personal user credentials ","Password reset for organization-wide shared role credentials ","Want to remove a set of user credentials from having access to your account? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/ojs-plugin/", "title": "Crossref XML plugin for OJS", "subtitle":"", "rank": 4, "lastmod": "2024-09-06", "lastmod_ts": 1725580800, "section": "Documentation", "tags": [], "description": "Registering your DOI records using the OJS platform You can register your DOI records with us using the OJS platform with two extra plugins - the DOI plugin, and the Crossref XML plugin for OJS. We highly recommend including your references in the metadata you send to us, too - you can do this by adding the OJS references plugin.\nStep 1: Set up the DOI plugin Ask your OJS administrator to install the DOI plugin, and add the DOI prefix that we gave to you.", "content": "Registering your DOI records using the OJS platform You can register your DOI records with us using the OJS platform with two extra plugins - the DOI plugin, and the Crossref XML plugin for OJS. We highly recommend including your references in the metadata you send to us, too - you can do this by adding the OJS references plugin.\nStep 1: Set up the DOI plugin Ask your OJS administrator to install the DOI plugin, and add the DOI prefix that we gave to you. Your prefix will start with 10. 
and will be followed by other numbers.\nYou can check whether the DOI Plugin is already set up by following these steps:\nGo to ‘Settings’ on your dashboard and click ‘Website’ Switch to the ‘Plugins’ tab Show image × Search ‘Public Identifier Plugins’ and find ‘DOI’ Click the checkbox on the right side of the DOI plugin description to enable it Show image × Step 2: Set up the Crossef XML plugin for OJS To make best use of the plugin, make sure you’re using OJS version 3 or higher.\nYou can start by finding the Crossref plugin from your dashboard:\nClick ‘Tools’ Choose the ‘Import/Export’ tab Click ‘Crossref XML Export Plugin’ Show image × You can deposit content with us in one of three ways:\nRegister your content with us automatically using the OJS plugin Register the content with us manually, from the plugin interface Have the plugin create an XML file that you can then upload to our admin tool We recommend automatic deposits.\nStep 3: Enable automatic deposits Simply click the checkbox at the bottom of the DOI plugin settings to enable automatic deposits.\nShow image × You’ll then need to add information into the plugin:\nShow image × Here’s what to enter into each of the fields shown in the screenshot above:\nDepositor name - the name of the organization registering the DOIs (note: this field is not authenticated with Crossref) Depositor email - the email address of the individual responsible for registering content with Crossref (note: this field is not authenticated with Crossref) Username - this is the username element of your Crossref depositor credentials. It will be passed to us to authenticate your submission(s). Your username might be just a collection of letters (role credentials), or it might be an email address (user credentials) - there is more information on role versus user credentials below. Password - this is the password associated with your Crossref depositor credentials Note: if the combination of username and password is incorrect, OJS will return a ‘401 unauthorized status code’ error at the time of registration. This error indicates that the username and password are incorrectly entered. That is, they do not match the username and/or password set with Crossref.\nIf you are using organization-wide, shared role credentials (i.e. your username is a collection of letters), you can simply add in your shared username and password. If you are using personal user credentials that are unique to you (i.e. your username is your email address), you’ll need to add your email address and your role into the username field, and your personal password into the password field. Here’s an example of what this will look like: Username: email@address.com/role\nPassword: your password\nStep 4: Activate the OJS references plugin The OJS references plugin is available from OJS 3.1.2 onwards. The plugin will use the Crossref API to check against plain text references and locate possible DOIs for articles. The plugin will also allow the display of reference lists on the article landing page in OJS and deposit them as part of your metadata deposit. Linking references is a requirement of Crossref membership.\nTwo things need to be set up to activate the references plugin:\na) Workflow Settings\nClick ‘Settings’ and then ‘Workflow’ from your dashboard Under the ‘Submission’ tab, choose ‘Metadata’! Show image × Scroll down to the bottom and to find the ‘References’ section\nMake sure you enable references metadata by clicking the checkbox ‘Enable references metadata’. 
You also need to select the option ‘Ask the author to provide references during submission’. Click save! Show image × b) Website Settings\nThen you need to activate the references plugin on the website, too, by following the instructions here:\nClick ‘Settings’ and then ‘Website’ from your dashboard Choose the ‘Plugins’ tab. Show image × Search ‘Crossref reference linking’ Click the ‘Crossref reference linking checkbox This plugin will deposit the references that you enter into the XML deposit.\nAdditional OJS plugins for Crossref In addition to the Crossref XML plugin for OJS, there are also other important plugins that can be enabled in OJS to enrich your metadata records:\nCited-by (OJS Scopus/Crossref plugin) - as of OJS 3.2, this third-party plugin allows journals to display citations and citation counts (using article DOIs) from Scopus and/or Crossref. Funding Metadata plugin - as of OJS 3.1.2, it is possible to enable a funder registry plugin for submitting funding information to Crossref. The plugin will use the Open Funder Registry to check against existing funding agencies. The plugin will include funding information in your Crossref DOI deposits. Similarity Check plugin - if you are using OJS 3.1.2 or above, you are able to use the Similarity Check plugin. This will enable you to automatically send manuscripts to your iThenticate account to check their similarity to already published content. You will need to be subscribed to Crossref’s Similarity Check service for this to work. Getting help with OJS plugins The team at Crossref didn’t create these plugins - they were either created by the team at PKP, or by third-party developers. Because of this, we aren’t able to give in-depth help or troubleshooting on problems with these plugins.\nIf you need more help, you can learn more from PKP’s Crossref OJS Manual and PKP’s Open Journals System 3.3 How to guides on DOI \u0026amp; Crossref Plugins plus there’s a very active PKP Community Forum that has more information on how to modify your OJS instance to submit metadata and register DOIs with Crossref.\nAlternatively, you can contact the support team at PKP.\n", "headings": ["Registering your DOI records using the OJS platform","Step 1: Set up the DOI plugin","Step 2: Set up the Crossef XML plugin for OJS","Step 3: Enable automatic deposits","Step 4: Activate the OJS references plugin","Additional OJS plugins for Crossref","Getting help with OJS plugins"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/web-deposit-form/", "title": "Web deposit form", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The web deposit form is suitable for making small numbers of deposits, and you do not need any knowledge of XML to use it. You can use this form to deposit metadata for journals, books, conference proceedings, reports, and dissertations. You can also upload NLM or JATS-formatted XML using this form.\nWeb deposit form tutorial\nHow to use the web deposit form to register your content Step one: Select your record type Start at the web deposit form and select the type of content you want to register: journal, book, conference proceedings, report or dissertation.", "content": "The web deposit form is suitable for making small numbers of deposits, and you do not need any knowledge of XML to use it. You can use this form to deposit metadata for journals, books, conference proceedings, reports, and dissertations. 
You can also upload NLM or JATS-formatted XML using this form.\nWeb deposit form tutorial\nHow to use the web deposit form to register your content Step one: Select your record type Start at the web deposit form and select the type of content you want to register: journal, book, conference proceedings, report or dissertation. Different fields will appear depending on what you\u0026rsquo;ve chosen.\nStep two: Add the content you want to register Journals You can deposit a journal-level DOI only or DOIs for each article within a given issue.\nRegister DOIs for articles - you can register articles for up to one issue in any deposit. On the first screen, enter information for the relevant journal or issue, then click add articles. After you\u0026rsquo;ve added information for each article, click on Add another article to register the next article. When you are done, click Finish. Register a journal-level DOI only - on the first screen, enter the information for the relevant journal or issue and click Submit journal/issue DOI. Please note: when you register your first item, be really careful about the journal title you enter - this will create a journal title record and any future submissions will have to match this. Your journal title doesn\u0026rsquo;t have to match the title in the ISSN portal, but if you do want it to match, make sure to check what this is before you register your first item. Books You can register both book- and chapter-level information. Add you content and select Submit Book DOI to deposit a title-level DOI, or select Add Chapters to enter metadata for chapters attached to the book being registered. For series and sets, only one volume can be registered at a time.\nConference proceedings Enter event and conference paper information. Select Add Papers to enter metadata for conference papers.\nReports Select Submit Report DOI to deposit a single report DOI, or select Add Content Item if the report contains multiple chapters or papers.\nDissertations Complete the fields and click Submit dissertation.\nStep three: Login and submit your content Login with your Crossref account credentials, and then add the email address that should receive the submission log email. Even if your login username is your email address, you still need to add an email address to receive the submission log email. It can be the same or different from the email you used as your login username. Finally, click Deposit.\nYour submission is then added to our submission queue. When we\u0026rsquo;ve processed your file we\u0026rsquo;ll send you a log via email (to the address you gave us in step three). You must review this log to make sure your content was registered successfully. Learn more about error and warning messages.\nWe also send you a copy of the XML that has been generated by the web deposit form. This XML is just for your records - you don\u0026rsquo;t need to do anything with it. If changes or corrections need to be made to your metadata record, you can edit and submit the XML instead of re-entering your metadata into the form. If you do edit the XML, be sure to increment the value in the \u0026lt;timestamp\u0026gt; field to ensure a successful update.\nWeb deposit form limitations and how to work around them There are some metadata elements that you can’t currently register as part of your initial deposit in the web deposit form. However, you can add many of these to an existing deposit later on, using our other tools. 
Here’s a list of the elements that can’t currently be included in your initial deposit with the web deposit form, and your options for adding them later:\narchive locations, and article numbers or IDs can\u0026rsquo;t be registered using the web deposit form. Funding and license information, Similarity Check URLs, STM-article sharing framework (stm-asf) license information, and text and data mining URLs can be added to existing DOIs in bulk using a supplemental metadata upload. References: use Simple Text Query to match and deposit references to existing DOIs. How to use the web deposit form to upload a JATS or NLM file You can use the web deposit form to upload an XML file built according to the NLM or JATS document type definition (DTD) publishing tag set. Find out more.\nHow to use the web deposit form for supplemental metadata upload using a .csv file Supplemental metadata upload enables members to add metadata elements to existing DOIs in bulk by uploading a .csv file via the web deposit form. You can use it to add funding metadata, license metadata, funding and license metadata together, Similarity Check URLs, or STM-article sharing framework (stm-asf) license metadata.\nStep one: Prepare your CSV file Step two: Add in the relevant headings and information Step three: Upload the CSV file to the web deposit form. Step one: Prepare your CSV file Files submitted for supplemental metadata upload using a .csv file must comply with these specifications:\nFiles must be submitted in .csv format Column headings must match the headings outlined for each deposit type: see below for how to prepare your .csv file for funding, license, funding and license metadata, Similarity Check full-text URLs, and for STM-article sharing framework (stm-asf) license metadata Values must be separated by a comma (,) Don\u0026rsquo;t use commas (,) or quotation marks (\u0026quot;) within a column value Dates must be in the format: YYYY-MM-DD If metadata is not available for an item, leave the cell blank. If an entire column is not populated, you may omit it. Do not enter placeholders such as n/a or -, as this will cause your deposit to fail, or cause incorrect metadata to be attached to a DOI Files may be up to 45 MB in size. We automatically split the file into batches of 4,000 DOIs for processing. We send you two emails for each batch: a submission log, and a copy of the submitted XML, so uploading a large file may result in many emails. Step two: Add the relevant headings and information You\u0026rsquo;ll need to add different headings and information depending on what type of metadata you wish to add to your record. Here are the options:\nAdd funding metadata Add license metadata Add both funding metadata and license metadata at the same time Add Similarity Check full-text URLs Add STM-article sharing framework (stm-asf) license metadata Add funding metadata To update your deposits with Funder Registry metadata using a .csv file, your file must contain 4 columns with these headings:\nDOI: the DOI whose metadata is being updated \u0026lt;funder_name\u0026gt;: name of the funding agency as it appears in the Funder Registry. Learn more about accessing the Funder Registry. \u0026lt;funder_identifier\u0026gt;: funding agency identifier in the form of a DOI \u0026lt;award_number\u0026gt;: grant number or other fund identifier If a DOI has multiple funders, the DOI must be repeated for each funder. We recommend that all available metadata is deposited. 
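To make the column layout concrete, here is a minimal sketch of a funding-metadata file (every DOI, funder name, funder identifier, and award number below is a made-up placeholder; see the example .csv file linked below for the exact identifier form to use). The repeated DOI records a second funder for the same item, and the blank final cell shows an award number that isn't available:
DOI,<funder_name>,<funder_identifier>,<award_number>
10.5555/example.0001,Example Science Foundation,10.13039/000000001,EX-2020-001
10.5555/example.0001,Example Research Council,10.13039/000000002,
10.5555/example.0002,Example Science Foundation,10.13039/000000001,EX-2020-017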
If a piece of funding metadata is not available (for example, a grant number), the field should be left blank.\nExample .csv file for funding metadata\nYou can now go to step three: Upload the CSV into the web deposit form.\nAdd license metadata To add license metadata to your existing records, the .csv file may contain these headings (*=required):\nDOI: the DOI whose metadata is being updated* \u0026lt;license_ref applies_to=\u0026quot;vor\u0026quot;\u0026gt;: license URL for version of record \u0026lt;vor_lic_start_date\u0026gt;: start date of version of record license \u0026lt;license_ref applies_to=\u0026quot;am\u0026quot;\u0026gt;: license URL for accepted manuscript \u0026lt;am_lic_start_date\u0026gt;: start date of accepted manuscript license \u0026lt;license_ref applies_to=\u0026quot;tdm\u0026quot;\u0026gt;: license URL for text and data mining \u0026lt;tdm_lic_start_date\u0026gt;: start date of text and data mining license \u0026lt;resource content_version=\u0026quot;vor\u0026quot;\u0026gt;: item URL for version of record \u0026lt;resource content_version=\u0026quot;vor\u0026quot; mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL for version of record with MIME type \u0026lt;resource content_version=\u0026quot;am\u0026quot;\u0026gt;: item URL for author manuscript \u0026lt;resource content_version=\u0026quot;am\u0026quot; mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL for author manuscript with MIME type \u0026lt;resource\u0026gt;: item URL with no version type \u0026lt;resource mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL with no version type, with MIME type where \u0026ldquo;?\u0026rdquo; is the MIME type as defined in the deposit section of the schema.\nExample .csv file for license metadata\nYou can now go to step three: Upload the CSV into the web deposit form.\nAdd both funding and license metadata at the same time Funding and license metadata may be combined into a single file. The order is important: please include columns in the order listed below (*=required):\nDOI: the DOI whose metadata is being updated* \u0026lt;funder_name\u0026gt;: name of the funding agency* as it appears in the Funder Registry.
Learn more about accessing the Funder Registry \u0026lt;funder_identifier\u0026gt;: funding agency identifier in the form of a DOI* \u0026lt;award_number\u0026gt;: grant number or other fund identifier* \u0026lt;license_ref\u0026gt;: license URL \u0026lt;license_ref applies_to=\u0026quot;vor\u0026quot;\u0026gt;: license URL for version of record \u0026lt;vor_lic_start_date\u0026gt;: start date of version of record license \u0026lt;license_ref applies_to=\u0026quot;am\u0026quot;\u0026gt;: license URL for accepted manuscript \u0026lt;am_lic_start_date\u0026gt;: start date of accepted manuscript license \u0026lt;license_ref applies_to=\u0026quot;tdm\u0026quot;\u0026gt;: license URL for text and data mining \u0026lt;tdm_lic_start_date\u0026gt;: start date of text and data mining license \u0026lt;resource content_version=\u0026quot;vor\u0026quot;\u0026gt;: item URL for version of record \u0026lt;resource content_version=\u0026quot;vor\u0026quot; mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL for version of record with MIME type \u0026lt;resource content_version=\u0026quot;am\u0026quot;\u0026gt;: item URL for author manuscript \u0026lt;resource content_version=\u0026quot;am\u0026quot; mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL for author manuscript with MIME type where \u0026ldquo;?\u0026rdquo; is the MIME type as defined in the deposit section of the schema.\nExample .csv file for funding and license metadata\nYou can now go to step three: Upload the CSV into the web deposit form.\nAdd Similarity Check full-text URLs Download your file of DOIs that are missing Similarity Check URLs using the Similarity Check widget, or prepare your file using our supplemental-metadata-upload sample file. Open your file using spreadsheet software (such as MS Excel). Your file should contain two columns with the headings DOI and \u0026lt;item crawler=\u0026quot;iParadigms\u0026quot;\u0026gt;, where DOI is the DOI being updated and \u0026lt;item crawler=\u0026quot;iParadigms\u0026quot;\u0026gt; is the URL being submitted for Similarity Check indexing. Here is an example: DOI,\u0026lt;item crawler=\u0026#34;iParadigms\u0026#34;\u0026gt; 10.5555/test1,https://www.yoururl.com/pdf1 10.5555/test2,https://www.yoururl.com/pdf2 10.5555/test3,https://www.yoururl.com/pdf3 10.5555/test4,https://www.yoururl.com/pdf4 10.5555/test5,https://www.yoururl.com/pdf5 10.5555/test6,https://www.yoururl.com/pdf6 10.5555/test7,https://www.yoururl.com/pdf7 10.5555/test8,https://www.yoururl.com/pdf8 Replace the example DOIs (10.5555/test1) and URLs (https://www.yoururl.com/pdf1) with your DOIs and URLs. Be sure to save the file as .csv and not as .xlsx (or any other file type). Some spreadsheet programs add additional quotation marks to the column headers, such as \u0026quot;DOI\u0026quot; or \u0026quot;\u0026lt;item crawler=\u0026quot;iParadigms\u0026quot;\u0026gt;\u0026quot;. If your upload is not successful, please open your file in a text editor, and make sure the top line of the file is simply: DOI or \u0026lt;item crawler=\u0026quot;iParadigms\u0026quot;\u0026gt;. Edit if necessary, and resubmit.\nExample .csv file for Similarity Check full-text URLs\nYou can now go to step three: Upload the CSV into the web deposit form.\nAdd STM-article sharing framework (stm-asf) license metadata As of 2022, Crossref supports registration of STM-article sharing framework (stm-asf) license metadata.
The .csv file may contain these headings (*=required):\nDOI: the DOI whose metadata is being updated* \u0026lt;license_ref applies_to=\u0026quot;stm-asf\u0026quot;\u0026gt;: stm-asf license URL for version of record \u0026lt;stm-asf_lic_start_date\u0026gt;: start date of stm-asf license \u0026lt;license_ref applies_to=\u0026quot;am\u0026quot;\u0026gt;: license URL for accepted manuscript \u0026lt;am_lic_start_date\u0026gt;: start date of accepted manuscript license \u0026lt;license_ref applies_to=\u0026quot;tdm\u0026quot;\u0026gt;: license URL for text and data mining \u0026lt;tdm_lic_start_date\u0026gt;: start date of text and data mining license \u0026lt;resource content_version=\u0026quot;vor\u0026quot;\u0026gt;: item URL for version of record \u0026lt;resource content_version=\u0026quot;vor\u0026quot; mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL for version of record with MIME type \u0026lt;resource content_version=\u0026quot;am\u0026quot;\u0026gt;: item URL for author manuscript \u0026lt;resource content_version=\u0026quot;am\u0026quot; mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL for author manuscript with MIME type \u0026lt;resource\u0026gt;: item URL with no version type \u0026lt;resource mime_type=\u0026quot;?\u0026quot;\u0026gt;: item URL with no version type, with MIME type where \u0026ldquo;?\u0026rdquo; is the MIME type as defined in the deposit section of the schema.\nExample .csv file for stm-asf license metadata\nStep three: Use the web deposit form’s supplemental metadata upload option to upload your csv and add this element to your existing records Start at the web deposit form Under data type selection, choose supplemental metadata upload Log in using your Crossref account credentials in the appropriate fields Click the choose file button next to csv file information and select your .csv file for upload Enter the email address that should receive the submission log email Click upload csv data Some initial validation relating to formatting is performed upon upload. Incomplete or incorrect files will return an error message, and will not be deposited. If the file passes the initial validation, it will be converted to XML, and registered with us. Additional validation is performed upon deposit.\nYou will receive a submission log when your deposit is complete. Please review the log to be sure your DOIs have been updated successfully.\nPlease contact us with questions or comments about your .csv upload.
If you are reporting problems with a .csv upload, please attach the .csv file to your support request.\n", "headings": ["How to use the web deposit form to register your content ","Step one: Select your record type","Step two: Add the content you want to register","Journals","Books","Conference proceedings","Reports","Dissertations","Step three: Login and submit your content","Web deposit form limitations and how to work around them ","How to use the web deposit form to upload a JATS or NLM file ","How to use the web deposit form for supplemental metadata upload using a .csv file ","Step one: Prepare your CSV file ","Step two: Add the relevant headings and information ","Add funding metadata ","Add license metadata ","Add both funding and license metadata at the same time ","Add Similarity Check full-text URLs ","Add STM-article sharing framework (stm-asf) license metadata ","Step three: Use the web deposit form’s supplemental metadata upload option to upload your csv and add this element to your existing records "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/record-registration-form/", "title": "Record registration form", "subtitle":"", "rank": 4, "lastmod": "2024-12-18", "lastmod_ts": 1734480000, "section": "Documentation", "tags": [], "description": "The record registration form can be used to deposit metadata for your records. You do not need any knowledge of XML to use it. You can download your records to your local machine and re-upload them to the form later to make edits to the metadata. You can also save partial records to be used as templates in the future.\nThe form currently supports journal articles and grants, but we are planning to add support for additional record types in future.", "content": "The record registration form can be used to deposit metadata for your records. You do not need any knowledge of XML to use it. You can download your records to your local machine and re-upload them to the form later to make edits to the metadata. You can also save partial records to be used as templates in the future.\nThe form currently supports journal articles and grants, but we are planning to add support for additional record types in future.\nHow to use the record registration form Start at the record registration form and enter your Crossref account credentials. You can choose to create a new record or upload a record you’ve already created using this form. If this is the first time you’ve used this form, you’ll choose New Record.\nShow image × Create a new record Select the type of record you wish to create, then add the metadata associated with your record. Some fields are required to be filled out in order to submit your record, while others are optional. If you are submitting a journal article, you can find links to our documentation in the form for more information on what each field means.\nAt any point while filling out the form, you can use the download button to save your record to your local computer for future edits. The record will download as a .json file, which is named automatically: for grant records, it will be named the funder name and award number; for journal article records, it will be named after the journal\u0026rsquo;s e-ISSN (or p-ISSN if no e-ISSN is available) and article title. 
This file can be loaded back into the form at a later date to make changes to your record.\nSubmit your record Click Submit at the bottom of the form once you have filled out the required fields, as well as any optional metadata you want to deposit. The submission will be made immediately and a success message will appear on the screen. You can also download the record from this page, or choose to start another submission. If you have submitted a journal article record, you can choose to repeat the process for another article in the same journal and/or journal issue, which will pre-fill the appropriate metadata for you so you don\u0026rsquo;t have to re-enter it.\nShow image × If there is a problem with your submission, you will see an error message appear instead. Go to the documentation for tips on how to troubleshoot common errors from our deposit system.\nLoad a saved record If you’ve used the record registration form before to create a record, you can upload your saved copy to make edits and re-deposit the metadata. Start at the record registration form and choose Load Record. Select the appropriate .json file from your computer and click Open. Note: the record you load must be a .json file previously downloaded from the record registration form.\nOnce the form is loaded, you can make edits to your record and submit your record to update the metadata. You can also download a new version to your local machine to repeat the process later.\nCreate a template You can partially complete a form and download it for use as a template in the future. For example, if you register multiple grants, your depositor information (name, email address) and funder information (funder name, funder ID) are likely to be the same across all submissions. So you might complete just those parts of the form, download the record, and upload it each time you need to submit a new grant record.\n", "headings": ["How to use the record registration form ","Create a new record ","Submit your record ","Load a saved record ","Create a template "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/metadata-manager/", "title": "Metadata Manager (deprecated)", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nMetadata Manager (beta) offers a way to deposit and update metadata for journal articles for both single and multiple deposits.\nTake a look at the Metadata Manager tutorial to get started.", "content": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nMetadata Manager (beta) offers a way to deposit and update metadata for journal articles for both single and multiple deposits.\nTake a look at the Metadata Manager tutorial to get started.\nOverview of the Metadata Manager workspace Start from Metadata Manager, and log in using your Crossref account credentials.\nYou\u0026rsquo;ll now see your Metadata Manager workspace. This is where all deposits occur, both new deposits and updates to content you’ve already registered with Crossref. 
To return to this view at any time, click Home at the top of the screen.\nYour workspace holds your list of publications, and it will be blank when you first log in. As you add the publications you want to manage to Metadata Manager, they’ll start collecting on this screen.\nYou can add new publications and edit existing publications you have previously submitted to our system from your workspace. You can also click into each publication and add or edit articles against them.\nShow image × The home button - Return to the overview of all your publications by clicking Home. Deposit history - See your previous deposits made via Metadata Manager (excludes deposits via other deposit methods such as HTTPS POST, or the web deposit form). To deposit - Shows items for which you’ve entered information, but have not yet deposited with us. The number next to To deposit shows how many records are awaiting deposit. Your username - Shows the credential you’ve used to log in. Click the down arrow to access account functions, log out, and view a tutorial of Metadata Manager. Search publication - This search bar allows you to find and add publications to your workspace. You can search by title name or title-level DOI. New publication - This section allows you to create a new journal and add it to your workspace. ", "headings": ["Overview of the Metadata Manager workspace "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/metadata-manager/setting-up-a-new-journal-in-your-workspace/", "title": "Setting up a new journal in your workspace", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nWhen you log in using your account credentials, you’ll see a view of all the publications that have been added to your workspace.\nTo add a publication for which you have already registered metadata with Crossref, enter its title or title-level DOI into the search bar, and click Add.", "content": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nWhen you log in using your account credentials, you’ll see a view of all the publications that have been added to your workspace.\nTo add a publication for which you have already registered metadata with Crossref, enter its title or title-level DOI into the search bar, and click Add. Once added to your workspace, you can update the title record by hovering your mouse over the publication title and select Edit, which will take you to the Edit journal record screen. If your publication does not already have a title-level DOI, you will need to add one. Learn more about title-level DOIs. Provide additional metadata for the publication record if available (the blue/asterisk * mark indicates a required field).\nTo bring an article into your workspace, click into the chosen journal, and enter the article title into the Article search field.\nTo add a publication for which you have never registered metadata with Crossref, click New publication. 
On the Edit journal record screen, add details for the publication (the blue/asterisk * mark indicates a required field).\nShow image × Click Save, then Close to return to the journal list. The publication will now appear in your workspace.\nShow image × ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/metadata-manager/registering-new-articles-and-working-with-volumes-issues/", "title": "Registering new articles and working with volumes/issues", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nClick into the journal to view all of its associated articles in your workspace. You will only see previous deposits made using Metadata Manager. To see deposits made using other deposit methods, manually add them by searching for the article using Search.", "content": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nClick into the journal to view all of its associated articles in your workspace. You will only see previous deposits made using Metadata Manager. To see deposits made using other deposit methods, manually add them by searching for the article using Search.\nShow image × If your journal has volumes and/or issues, and the article is in a new volume and/or issue, go to new article in new volume and/or issue If your journal has volumes and/or issues, and the article is in an existing volume and/or issue, go to new article in existing issue and/or volume If your journal does not have volumes or issues, click Add record, select New article, and go to add article metadata. New article in new volume and/or issue If the article is part of a new volume and/or issue, click Add record and select New volume/issue. Complete the fields in the volume/issue form. The blue/asterisk * mark indicates a required field. Click Save, then click Close. The volume/issue is now added into your workspace (you only need to do this once for all articles associated with this volume/issue). The volume/issue now appears in your journal Record List - click Add article on the right of that row.\nShow image × Continue to add article metadata.\nNew article in existing issue and/or volume If the new article is part of an existing volume or issue, click on Add article by the relevant volume/issue. To add an existing volume/issue to your workspace, enter its DOI into the search bar and click Add.\nShow image × Continue to add article metadata.\nAdding article metadata Provide contributor, funding, license, references, and additional metadata by clicking on each section to open it out. The blue/asterisk * mark indicates a required field, and we recommend that you deposit as much metadata as possible for the optional fields.\nAt any time, click Continue (at the top right of the screen) and select Add to deposit, Save, or Review.\nIf you would like to know more about the metadata for each field, we provide tool tips that appear on the right side of the form. You can turn these off be selecting Off in Show help slider at the top of the form. 
For a broader overview, explore our metadata best practices.\nShow image × Metadata Manager checks your metadata to ensure that you provide the correct type of information needed for a successful deposit. You will see warnings when the metadata does not validate, which contain guidance on the type of metadata we are expecting. These do not need to be corrected until you are ready to submit the deposit.\nShow image × If you participate in Crossmark, you can also add Crossmark metadata to the article record using Metadata Manager. This section will automatically appear at the bottom section of the article form for Crossmark participants - please contact us if the section doesn’t appear for you.\n", "headings": ["New article in new volume and/or issue ","New article in existing issue and/or volume ","Adding article metadata "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/metadata-manager/save-as-draft/", "title": "Save as draft", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nIf the record you’re working on is not yet complete, you can choose to \u0026ldquo;save as draft\u0026rdquo;. Your record will be saved within Metadata Manager for you to work on later, but it won’t yet be submitted to our system.", "content": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nIf the record you’re working on is not yet complete, you can choose to \u0026ldquo;save as draft\u0026rdquo;. Your record will be saved within Metadata Manager for you to work on later, but it won’t yet be submitted to our system. This is true whether your record is completely new, or you’re using Metadata Manager to update the metadata record for a DOI you’ve already registered with Crossref. When you want to submit the changes to Crossref, you must choose add to deposit cart and resubmit your deposit.\nTo save your record for later, go to Continue, and click Save as draft.\nShow image\r×\rWhen you are ready to continue editing the record or submit it, you can find it again under your workspace.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/metadata-manager/review-and-submit/", "title": "Review and submit", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nThe Review selection provides a condensed view of all the metadata you’ve provided in the form, so you can check it before submitting the record for deposit.", "content": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. 
We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nThe Review selection provides a condensed view of all the metadata you’ve provided in the form, so you can check it before submitting the record for deposit. Click Continue at the top of the form, and select Review. You can also Review All submissions on the To deposit screen before submitting the deposit.\nSubmitting a deposit When you have finished adding article metadata and would like to deposit, click Continue from the article form, and select Add to deposit.\nYou can also do this from the Record List - select the article(s) you would like to deposit by checking the box to the left of the article title. You will then see the Action menu, and you can select Add to deposit. You can also move to, duplicate, and remove selected records using these buttons in the Action menu. If you select Remove for a record that has not been deposited, it will be erased from Metadata Manager. Records previously deposited will not be deleted from our system, only removed from the Metadata Manager workspace. If you created an article outside of a volume or issue, you can associate it with a volume or issue using Move to.\nShow image × To submit your item(s) for deposit, click To deposit at the top of the screen. Please submit a maximum of 20 articles at a time. This will reduce the chance of an error with Metadata Manager.\nHere, you can collect and review all your journal-specific records using Review all. The system will display any errors with a red flag by the respective record(s). You must correct these errors before you can deposit. If there are no errors, the Deposit button will be activated. Click Deposit and the system will immediately process your deposit request.\nShow image × ", "headings": ["Submitting a deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/metadata-manager/deposit-results/", "title": "Deposit results", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nOnce you click Deposit, we immediately process the deposit and display the results for accepted and rejected deposits. All deposit records accepted by the system have a live DOI.", "content": "The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nOnce you click Deposit, we immediately process the deposit and display the results for accepted and rejected deposits. All deposit records accepted by the system have a live DOI.\nAll deposit results are archived and available for reference on the Deposit history tab on the top menu bar.\nShow image × You can also see your deposit history in the admin tool - go to the Administration tab, then the Submissions tab. Metadata Manager deposit filenames begin with MDT. 
You can even review the XML that Metadata Manager has created on your behalf.\nUpdating existing records and failed deposits Metadata Manager also makes it easy to update existing records, even if you didn’t use Metadata Manager to make the deposit in the first place. You must add the journal to your workspace before you can update the records associated with it - learn more about setting up a new journal in your workspace.\nAccepted and Failed submissions can be updated using the respective tabs in the workspace. Click into the journal, and then click into the article. Add or make changes to the information, and then deposit.\nWhat does the status “warning” in my submission result mean? When similar metadata is registered for more than one DOI, it\u0026rsquo;s possible that the additional DOIs are duplicates. Because DOIs are intended to be unique, the potentially duplicated DOI is called a conflict. Learn more about the conflict report.\nIn Metadata Manager, if you register bibliographic metadata that is very similar to that for an existing DOI, you will see a status “warning” with your submission result. This is accurate.\nWhen you return to your journal workspace in Metadata Manager to review your list of DOIs, the DOI that returned the “warning” will display as “failed”. This is inaccurate, as you can see if you try to resolve the DOI in question. We are working on improving the wording in this part of the process to make it less confusing.\n", "headings": ["Updating existing records and failed deposits ","What does the status “warning” in my submission result mean? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/direct-deposit-xml/", "title": "Direct deposit of XML", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "If you’re sending us XML directly, you\u0026rsquo;ll need to review our input schema, and check your XML files follow the schema\u0026rsquo;s rules.\nTo deposit your XML files with Crossref, you have a choice of three methods:\nUpload JATS XML using the web deposit form Upload XML files using our admin tool XML deposit using HTTPS POST If you\u0026rsquo;re making your deposits via the admin tool or HTTPS POST, you can use our test system.", "content": "If you’re sending us XML directly, you\u0026rsquo;ll need to review our input schema, and check your XML files follow the schema\u0026rsquo;s rules.\nTo deposit your XML files with Crossref, you have a choice of three methods:\nUpload JATS XML using the web deposit form Upload XML files using our admin tool XML deposit using HTTPS POST If you\u0026rsquo;re making your deposits via the admin tool or HTTPS POST, you can use our test system.\nSpecial characters in your XML All XML submitted to our system must be UTF-8 encoded. There are two ways to include a special unicode character in a Crossref deposit XML file:\nEncode the special character using a numerical representation, by constructing an entity reference in the XML whose value is the numerical value of the character. This is the preferred approach. For example, \u0026lt;surname\u0026gt;\u0026amp;#352;umbera\u0026lt;/surname\u0026gt; includes the special character S with a háček (Š). Use a UTF-8 editor or tool when creating the XML and insert characters directly into the file, which results in a sequence of one or more bytes per character in the file. For example, an S with a háček (Š) has a decimal value of 352 which is 160hex.
This value converts to the UTF-8 sequence C5,A0 in hex. You can create a small XML file in which you insert this two-byte sequence (shown here between the \u0026lt;UTF_encoded\u0026gt; tags).\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34; ?\u0026gt; \u0026lt;start\u0026gt; \u0026lt;UTF_encoded\u0026gt;Š\u0026lt;/UTF_encoded\u0026gt; \u0026lt;/start\u0026gt; The character displays properly in a browser but if you save the XML source and try to view it in certain editors, it will not display correctly.\nCharacter entities XML based on schema does not support named character entities (sometimes referred to as html-encoded characters). For example, é or – are not allowed. To include these characters you must use their numerical representation, \u0026amp;#x0E9; or \u0026amp;#x2013; respectively. These are called numerical entities, shown by the # (hash or pound sign). The x following # indicates the value is in hex (rather than decimal if the x were omitted). All entities must end with the ; character.\nCharacter numerical values may be found in the Unicode Character Code Charts. Learn more about UTF-8 and unicode, and the ISO 8859 series of standardized multilingual graphic character sets for writing in alphabetic languages.\nUsing face markup Some style/face markup is supported by our schema but we recommend using it only when it is essential to the meaning of the text. Learn more about face markup.\n", "headings": ["Special characters in your XML ","Character entities ","Using face markup "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/direct-deposit-xml/admin-tool/", "title": "Upload XML files using our admin tool", "subtitle":"", "rank": 4, "lastmod": "2022-06-30", "lastmod_ts": 1656547200, "section": "Documentation", "tags": [], "description": "If you generate your own XML files (or export them from OJS), you can deposit these using the deposit admin tool.\nLog in with your Crossref account credentials Go to Submissions Click Upload Click Browse to locate the file you are uploading Select the file type: Metadata: content registration XML Query: XML queries DOI Query: XML-formatted DOI-to-metadata queries DOI References/Resources: resource-only deposit XML Conflict Management: conflict management .txt file Bulk URL Update: URL updates .", "content": "If you generate your own XML files (or export them from OJS), you can deposit these using the deposit admin tool.\nLog in with your Crossref account credentials Go to Submissions Click Upload Click Browse to locate the file you are uploading Select the file type: Metadata: content registration XML Query: XML queries DOI Query: XML-formatted DOI-to-metadata queries DOI References/Resources: resource-only deposit XML Conflict Management: conflict management .txt file Bulk URL Update: URL updates .txt file Click Upload. The uploaded file will be added to the system submission queue. We\u0026rsquo;ll email submission reports and query results to the email address you specified in the uploaded file.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/direct-deposit-xml/https-post/", "title": "XML deposit using HTTPS POST", "subtitle":"", "rank": 4, "lastmod": "2024-01-22", "lastmod_ts": 1705881600, "section": "Documentation", "tags": [], "description": "XML files can be POSTed to our system where they are added to the submission queue to await processing. 
You can do this yourself or make use of our Java program.\nOn this page, find out more about:\nSubmitting files - an overview Sample transactions Submission limits Submitting files - an overview Files (for deposit or for bulk queries) are submitted using HTTPS POST with the encType: multipart/form-data. The URL for all submissions is https://doi.", "content": "XML files can be POSTed to our system where they are added to the submission queue to await processing. You can do this yourself or make use of our Java program.\nOn this page, find out more about:\nSubmitting files - an overview Sample transactions Submission limits Submitting files - an overview Files (for deposit or for bulk queries) are submitted using HTTPS POST with the encType: multipart/form-data. The URL for all submissions is https://0-doi-crossref-org.libus.csd.mu.edu/servlet/deposit. You may also POST submissions to our test system using https://0-test-crossref-org.libus.csd.mu.edu/servlet/deposit. Learn more about our test system.\nThe following parameters are supported:\nForm field Description Possible values Mandatory? Default operation Depends on submission type doMDUpload: For metadata (XSD) submissions (or, (full XML) metadata upload) doDOICitUpload: For DOI citations or resources submissions (or, (resource-only) DOI resources) doQueryUpload: For query submissions doDOIQueryUpload: For DOI-to-metadata query submissions doTransferDOIsUpload: For submissions to update resource resolution URLs No doMDUpload subType subtype for metadata submissions cm: for conflict management submissions No Not applicable login_id Crossref account credentials username If using shared role credentials, add the role. If using personal user credentials, add your email address and the role in this format email@address.com/role.
The login_id value is case sensitive Yes Not applicable login_passwd Crossref account credentials password the login_passwd value is case sensitive Yes Not applicable Content parts fname Submission contents the fname value is case sensitive Yes Not applicable Sample transactions Sample transaction using curl curl -F \u0026#39;operation=doQueryUpload\u0026#39; -F \u0026#39;login_id=[username]\u0026#39; -F \u0026#39;login_passwd=[password]\u0026#39; -F \u0026#39;fname=@[filename]\u0026#39; https:// doi.crossref.org/servlet/deposit Note: several members have contacted us about use of the @ in the above sample transaction; it should be retained prior to the filename\nSample https transaction POST https://doi-crossref-org/servlet/deposit?operation=doMDUpload\u0026amp;login_id=[username]\u0026amp;login_passwd=[password] HTTP/1.1 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Accept-Language: en-us Content-Type: multipart/form-data; boundary=---------------------------7d22911b10028e User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Q312461) Host: Myhost Content-length: 1304 Pragma: no-cache -----------------------------7d22911b10028e Content-Disposition: form-data; name=\u0026#34;fname\u0026#34;; filename=\u0026#34;crossref_query.xml\u0026#34; \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.3.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.0 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; ... \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;journal\u0026gt; .... \u0026lt;/journal\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; -----------------------------7d22911b10028e-- For backward compatibility, we also accept the login_id, login_passwd, operation, and area parameters in a multi-part request:\n-----------------------------7d22911b10028e Content-Disposition: form-data; name=\u0026#34;login_id\u0026#34; atypon -----------------------------7d22911b10028e Content-Disposition: form-data; name=\u0026#34;login_passwd\u0026#34; _atypon_ -----------------------------7d22911b10028e Content-Disposition: form-data; name=\u0026#34;fname\u0026#34;; filename=\u0026#34;hisham.xml\u0026#34; ... file contents ... Submission limits We have a limit of 10,000 pending submissions per user. If there are 10,000 submissions in our queue for a given user, subsequent uploads will fail with a 429 error. You may resume POSTing when pending submissions are below 10,000.\n", "headings": ["Submitting files - an overview ","Sample transactions ","Sample transaction using curl","Sample https transaction","Submission limits "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/direct-deposit-xml/https-post-using-java-program/", "title": "POSTing files using our upload tool", "subtitle":"", "rank": 4, "lastmod": "2024-01-22", "lastmod_ts": 1705881600, "section": "Documentation", "tags": [], "description": "We provide a Java program that performs file uploads (via HTTPS POST) to Crossref. 
This program allows you to upload a single file, a list of files, or a whole directory of files.\nTo use, download crossref-upload-tool.jar and place it in /usr/local/lib.\nHow to use our upload tool In the following examples:\nuser is the username and password from your Crossref account credentials. If you are using organization-wide shared role credentials, the username is the role.", "content": "We provide a Java program that performs file uploads (via HTTPS POST) to Crossref. This program allows you to upload a single file, a list of files, or a whole directory of files.\nTo use, download crossref-upload-tool.jar and place it in /usr/local/lib.\nHow to use our upload tool In the following examples:\nuser is the username and password from your Crossref account credentials. If you are using organization-wide shared role credentials, the username is the role. If you\u0026rsquo;re using personal user credentials, the username is your email address plus the role in the following format email@address.com/role. file is the name of the file you are uploading or directory is the name of the directory containing files to upload To upload a metadata file java -jar crossref-upload-tool.jar --user [username] [password] --metadata ([filename/directory]) Using role credentials (note: in these examples, we have used the fictional role and password combination of mrcrossref and abc134):\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --metadata crdeposit234.xml Using user credentials (note: in these examples, we have used the fictional user credential, role, and password combination of: email@address.com/role, mrcrossref, and abc134):\njava -jar crossref-upload-tool.jar --user email@address.com/mrcrossref abc134 --metadata crdeposit234.xml To upload a resource-only deposit file java -jar crossref-upload-tool.jar --user [username] [password] --resources ([filename/directory]) for example:\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --resources cr_refs.xml To upload conflict files Single file:\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --conflicts ticket1234.txt Directory of files:\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --conflicts ALIAS_123 To upload bulk Resource URL updates Single file:\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --transfers ticket1234.txt Directory of files:\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --transfers ALIAS_123 To direct upload(s) to the test system If you don\u0026rsquo;t already have Crossref test system credentials configured, you\u0026rsquo;ll need to contact our technical support team in order for us to enable a test account for test.crossref.org.\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --host test.crossref.org --metadata crdeposit234.xml Dry run (test) Note that if the --metadata option is given a directory name instead of a filename then all files within the directory are uploaded.
To ensure that you are uploading what you want, use the --dry-run option and review the listing of files, e.g.:\njava -jar crossref-upload-tool.jar --user mrcrossref abc134 --metadata mydeposits/ --dry-run Additional info If your upload is successful, you will see this message:\n[\u0026hellip;] INFO uploading to https://0-doi-crossref-org.libus.csd.mu.edu/\n[\u0026hellip;] INFO uploading submission: file=myfile.xml\n[\u0026hellip;] INFO uploaded submission: file=myfile.xml\n[\u0026hellip;] INFO done\nIf the username is wrong, you will see the message:\n[\u0026hellip;] INFO uploading to https://0-doi-crossref-org.libus.csd.mu.edu/\n[\u0026hellip;] INFO uploading submission: file=myfile.xml\n[\u0026hellip;] INFO unauthorized: file=myfile.xml; user=mrcrossref\n[\u0026hellip;] INFO done\nUpload options --user name password --metadata ( file | directory ) --query ( file | directory ) --transfers ( file | directory ) --handles ( file | directory ) --resources ( file | directory ) --conflicts ( file | directory ) --address host port --protocol ( http | https ) --dry-run --help Where:\nuser: your Crossref system username (either role, for role credentials, or email@address.com/role, for user credentials) and password metadata: use for metadata deposits query: use for query deposits transfers: admin use only handles: admin use only resources: resource-only deposits conflicts: conflict resolution files address: direct to a different address (such as test.crossref.org) protocol: http or https dry-run: test uploader without uploading help: displays the above list of upload options Further examples Entry into terminal\njava -jar /usr/local/crossref-upload-tool.jar --user mrcrossref abc134 --metadata /Users/mistercrossref/Uploader/September/19/134/ Key for entry into terminal:\n/usr/local/crossref-upload-tool.jar = location of the upload program on your machine mrcrossref abc134 = Crossref role and password you are using for upload metadata = type of content being uploaded /Users/mistercrossref/Uploader/September/19/134/ = location of the XML files on your machine being uploaded ", "headings": ["How to use our upload tool ","To upload a metadata file ","To upload a resource-only deposit file ","To upload conflict files ","To upload bulk Resource URL updates ","To direct upload(s) to the test system ","Dry run (test) ","Additional info ","Upload options ","Further examples "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/direct-deposit-xml/testing-your-xml/", "title": "Testing your XML", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "It’s a good idea to verify the format and structure of your XML file before trying to register your content. You can validate your XML locally using an XML editor such as Oxygen or XMLSpy, or command line tools such as xmllint. We provide an XML parser that supports single file uploads for validation only.\nOur test version of the admin tool allows members and service providers a sandbox to test their XML submissions, before depositing in the production (live) system.", "content": "It’s a good idea to verify the format and structure of your XML file before trying to register your content. You can validate your XML locally using an XML editor such as Oxygen or XMLSpy, or command line tools such as xmllint.
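To script the same local check, here is a minimal sketch using the lxml library for Python; it assumes the schema file (crossref5.3.1.xsd is used here as an example) and the schema modules it imports have been downloaded into the working directory, and my_deposit.xml is a placeholder filename.

from lxml import etree

# load the deposit schema once; the .xsd files it imports must sit alongside it
schema = etree.XMLSchema(etree.parse("crossref5.3.1.xsd"))
doc = etree.parse("my_deposit.xml")

if schema.validate(doc):
    print("my_deposit.xml validates against the schema")
else:
    # report each validation problem with its line number
    for err in schema.error_log:
        print(err.line, err.message)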
We provide an XML parser that supports single file uploads for validation only.\nOur test version of the admin tool allows members and service providers a sandbox to test their XML submissions, before depositing in the production (live) system. The test environment works in the same way as our production admin tool, but uses a test database and does not register DOIs with Handle. You can use the test system for deposits via the admin tool and HTTPS POST, but not for deposits via the web deposit form.\nYou can also use our metadata quality check parser to check your XML before submission. The parser quickly identifies errors in the XML you uploaded.\nAny deposits you make in the test system have no effect on your resolution reports and conflict reports, which relate only to content you register in the production system. Learn more about reports.\nDifferences between test and production systems VoR/preprint match notifications: in the production system, a notification feature alerts preprint creators of any matches with journal articles, so they can link to future versions from the preprint. In the test system, you won’t be notified of matches.\nAccessing the test system We don’t automatically set up new accounts with access to the test system, but we are happy to give you this access at any time, whether during your membership application, or at any time after joining. Just contact us to request access.\nLog in to the test system with your Crossref account credentials. If your credentials do not work with the test system, please contact us so we can enable access for you. If you have forgotten your password, you can reset it - learn more about how to change your Crossref account credentials password.\nCarrying out a platform migration? Please let us know so we can update both your production and test accounts. Learn more about planning a platform migration.\n", "headings": ["Differences between test and production systems ","Accessing the test system ","Carrying out a platform migration? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/direct-deposit-xml/jats-xml/", "title": "Using JATS XML", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "We recommend creating XML directly to our schema rather than trying to convert JATS (NLM) formatted XML as we don\u0026rsquo;t currently have a reliable way to convert it.\nThere are two unreliable options available in beta.\nUpload NLM JATS-formatted XML files into our system using the web deposit form Use our basic JATS-to-Crossref XSLT conversion and then upload using the admin tool or https post. 
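For the second option, the XSLT conversion can also be run locally before you upload. A minimal sketch using Python and lxml, where jats-to-crossref.xsl stands in for the .xsl file you download from us and the input and output filenames are placeholders:

from lxml import etree

# apply the downloaded JATS-to-Crossref stylesheet to a JATS-formatted article
transform = etree.XSLT(etree.parse("jats-to-crossref.xsl"))
result = transform(etree.parse("article-jats.xml"))

with open("crossref-deposit.xml", "wb") as out:
    out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))

Check the output (including anything the stylesheet cannot supply, such as the email element noted below) before uploading it via the admin tool or HTTPS POST.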
Upload NLM JATS-formatted XML files using the web deposit form Start at the web deposit form Under data type selection, choose NLM file Log in using your Crossref account credentials Click on select file and select your NLM or JATS file Enter the email address that should receive the all-important submission log email Add the DOI in the relevant field (if your XML contains \u0026lt;article-id pub-id-type=\u0026quot;doi\u0026quot;\u0026gt; you can leave the DOI field empty) Add the URL in the relevant field (if your XML contains \u0026lt;self-uri\u0026gt; and that URI contains the URL you intend to register with your DOI, you can leave the URL field empty) Click Upload NLM Data to submit You\u0026rsquo;ll receive a submission log when your deposit is complete.", "content": "We recommend creating XML directly to our schema rather than trying to convert JATS (NLM) formatted XML as we don\u0026rsquo;t currently have a reliable way to convert it.\nThere are two unreliable options available in beta.\nUpload NLM JATS-formatted XML files into our system using the web deposit form Use our basic JATS-to-Crossref XSLT conversion and then upload using the admin tool or https post. Upload NLM JATS-formatted XML files using the web deposit form Start at the web deposit form Under data type selection, choose NLM file Log in using your Crossref account credentials Click on select file and select your NLM or JATS file Enter the email address that should receive the all-important submission log email Add the DOI in the relevant field (if your XML contains \u0026lt;article-id pub-id-type=\u0026quot;doi\u0026quot;\u0026gt; you can leave the DOI field empty) Add the URL in the relevant field (if your XML contains \u0026lt;self-uri\u0026gt; and that URI contains the URL you intend to register with your DOI, you can leave the URL field empty) Click Upload NLM Data to submit You\u0026rsquo;ll receive a submission log when your deposit is complete. Please review the log to be sure your DOIs have been updated successfully.\nNLM JATS to Crossref conversion We have a basic JATS-to-Crossref XSLT conversion that can be used to transform NLM JATS-formatted XML into Crossref-friendly XML. You may download the .xsl file for local use.\nPlease note:\nThe journal title used for Crossref deposits should be included in the \u0026lt;journal-title\u0026gt; element The DOI should be included in \u0026lt;article-id\u0026gt; with attribute pub-id-type=\u0026lsquo;doi\u0026rsquo; The DOI URL should be included in \u0026lt;self-uri\u0026gt; The JATS document type definition (DTD) does not have an appropriate place to include the email element used in Crossref deposits - this needs to be added manually. ", "headings": ["Upload NLM JATS-formatted XML files using the web deposit form ","NLM JATS to Crossref conversion "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/constructing-your-dois/", "title": "Constructing your DOIs", "subtitle":"", "rank": 4, "lastmod": "2024-02-05", "lastmod_ts": 1707091200, "section": "Documentation", "tags": [], "description": "Crossref allows citation linking using Digital Object Identifiers (DOIs) between research produced by different organizations (without the need for individual agreements between them). This ensures that citation links are persistent - that they work over long periods of time.
However, there is no purely technical solution to the problem of broken links on the web; Crossref members have to keep these links updated, along with rich metadata that everyone in the scholarly ecosystem relies on.", "content": "Crossref allows citation linking using Digital Object Identifiers (DOIs) between research produced by different organizations (without the need for individual agreements between them). This ensures that citation links are persistent - that they work over long periods of time. However, there is no purely technical solution to the problem of broken links on the web; Crossref members have to keep these links updated, along with rich metadata that everyone in the scholarly ecosystem relies on.\nAt Crossref, every metadata record that our members register for their content needs to have a unique DOI attached to it, both as a container for that record and as a locator for others to use. A DOI does not signify any value or accuracy of the thing it locates; the value lies in the record\u0026rsquo;s metadata which gives context about the object (such as contributors, funding bodies, abstract/summary) and enables connections with other entities (such as people (e.g. ORCID) or organizations (e.g. ROR)).\nDOIs include 3 parts:\nShow image × Of these three parts of the DOI, members (or their service providers) create the last part, the suffix. Because DOIs must be unique and persistent, members need a reasonable way to create and manage their suffixes, which should be opaque.\nHere, we share the rules, guidelines and some examples to help you decide how to approach your suffixes. You can also go straight to our suffix generator.\nNote, if you’re using the Crossref XML plugin for OJS, you don’t need to create your suffixes as the plugin will generate them for you automatically.\nWe’ve also got some advice for DOIs and DSpace repositories.\nFirst, a few rules Rules are shared by all DOI registration agencies.\nEach DOI must be unique Only use approved characters: DOI suffixes can be any alphanumeric string that includes combinations of the following approved characters: Letters of the Roman alphabet, A-Z (see below on case insensitivity) Numbers, 0-9 -._;()/ (hyphen, period, underscore, semicolon, parentheses, forward slash). Note that the non-breaking hyphen (U+2011), figure dash (U+2012), en dash (U+2013), and em dash (U+2014) are not approved characters. The only approved hyphen is the hyphen-minus (U+002D). Note, some older (pre-2008) DOIs contain other characters. Learn more about suffixes containing special characters. Suffixes are case insensitive, so 10.1006/abc is the same in the system as 10.1006/ABC. Note that using lowercase is better for accessibility. Guidelines for creating a DOI suffix In part because there are few rules, it can be helpful to have some guidance on how to approach suffixes. This advice applies to DOIs at all levels, whether at journal or book level (a title-level DOI), or volume, issue, article, or chapter level.\nThe most important part of creating your DOIs is to understand that because DOIs are unique, persistent and \u0026lsquo;dumb\u0026rsquo;, once they are created, they will always work.
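To make the character rules and the emphasis on short, opaque suffixes above concrete, here is a rough Python sketch; the prefix 10.5555 is the example prefix used elsewhere in this documentation, and the length and alphabet are arbitrary illustrative choices rather than Crossref requirements.

import secrets
import string

# the approved suffix characters listed above: letters, digits, and -._;()/
APPROVED = set(string.ascii_lowercase + string.digits + "-._;()/")

def make_suffix(length=8):
    # random and opaque: lowercase letters and digits keep suffixes short and easy to retype
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def uses_approved_characters(suffix):
    # suffixes are case insensitive, so check the lowercase form
    return all(ch in APPROVED for ch in suffix.lower())

suffix = make_suffix()
print("10.5555/" + suffix, uses_approved_characters(suffix))

A generated suffix still needs to be recorded against its URL and checked for duplicates before the DOI is registered, as in the suggested registration workflow.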
There is never a need to delete or update existing DOIs.\nBest practices for DOI suffixes: Suffixes are best when they include short strings that are easily displayed and typed but are ‘dumb’ - meaning, the suffixes contain no readable information, including metadata.\nUse a random approach: this ensures the DOIs are opaque or ‘dumb’ and minimizes attempts at interpretation or prediction (more on opaque suffixes below). Try our suggested DOI registration workflow, including our suffix generator. Any random generator will also work. Note, if you’re using the Crossref XML plugin for OJS, you don’t need to create your suffixes as the plugin will generate them for you automatically. Keep suffixes short. This makes them easier to read and to re-type. Remember, DOIs will appear online and in print. Best practice DOI example: 10.3390/s18020479 This example appears to be opaque because it includes no obvious information.\nAvoid the following in DOI suffixes: The function of suffixes is technical in nature so they are most problematic when they are treated as information to be read, interpreted and/or predicted. Remember, DOIs are persistent and not subject to correction or deletion.\nWhile it may be tempting, using a pattern, such as a sequence, can cause problems. Services and tools that use DOIs may, for example, try to predict future DOIs that are not registered and may never be (more on opaque suffixes below). Don’t include information like journal title (or initials), page number or date. This kind of information should be included in the metadata but can cause problems when included in suffixes for 2 main reasons: Information in the suffix that conflicts with information in the metadata is confusing. Information like journal title (or initials) may change or be found to be incorrect, as with dates, but DOIs are persistent, cannot be deleted and are not subject to correction. See more on opaque suffixes below. Example problematic DOI suffix: 10.5555/2014-04-01 This example is not opaque because it includes a date, which should be included in the metadata instead of in the suffix.\nProceed with caution in DOI suffixes: Determining how to create suffixes and manage them over time can be a challenge. We recognize that some systems have requirements that don’t follow this advice and that human readability is helpful in managing DOIs.\nIf you must use a suffix with meaning, internal system identifiers can work, with careful management. Because things like ISBNs are themselves metadata, we don’t recommend using them in suffixes. Just remember that while you and readers may recognize an ISBN, for example, the DOI system itself doesn’t and DOIs are not subject to correction or deletion.\nNo matter your approach, it’s worth taking some time to understand the emphasis on opaque suffixes.\nOnce a DOI has been registered with us, it should always be used for the same content. Even if the content moves to a new website or a new owner, the same DOI should continue to be used. Though the DOI never changes, its associated metadata is kept up-to-date by the relevant Crossref member.\nWhat if your content already has a DOI? Sometimes members may acquire a journal that already has DOIs registered for some articles. It\u0026rsquo;s important to keep and continue to use the DOIs that have already been registered and not change them - DOIs need to be persistent.\nIt doesn\u0026rsquo;t matter if the prefix on the existing DOI is different from the prefix belonging to the acquiring member.
As content can move between members, the owner of a DOI is not necessarily the same as the owner of the prefix. Read more about transferring responsibility for DOIs.\nThe importance of opaque identifiers What are opaque suffixes \u0026amp; why they are important\nSuffixes are ‘dumb numbers.’ They are essentially meaningless on their own and meant to be that way\u0026ndash;opaque. One good reason for that is because when something is meaningless, it doesn’t need to be corrected.\nDOIs should not include information that can be understood, interpreted or predicted, especially information that may change. Page numbers and dates are examples of information that shouldn’t be included in suffixes. It is particularly problematic if the suffix includes information that conflicts with the metadata associated with the DOI.\nWe’ve referred to creating ‘suffix patterns’ in the past but information that includes or implies a pattern is also problematic. A sequence of numbers, for example, lends itself to the assumption that future DOIs can be predicted.\nScraping for DOIs - or what appear to be DOIs\u0026ndash;is common, as is the likelihood that what is\u0026ndash;or appears to be\u0026ndash;a pattern will be treated as such. Just as the timing of DOI registration is important, in order to avoid unregistered DOIs, their construction is critical to avoiding interpretation.\nMore information on creating DOIs Here are a few other resources that discuss creating DOIs and the importance of using opaque suffixes.\nhttps://blog.datacite.org/cool-dois/ https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-and-matching-regular-expressions/ https://en.wikipedia.org/wiki/Handle_System ", "headings": ["First, a few rules","Guidelines for creating a DOI suffix ","What if your content already has a DOI?","The importance of opaque identifiers","More information on creating DOIs"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/constructing-your-dois/the-structure-of-a-doi/", "title": "DOIs for different levels", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "DOIs at different levels A DOI may refer to a journal or book (a title-level DOI), or to a specific article or chapter.\nShow image × Journals and DOIs Like a set of nesting dolls, a journal may be made up of volumes, each containing a number of issues, each containing a number of articles. You can assign a DOI at each level, for example:\njournal-level-DOI (sometimes called the title-level-DOI) 10.", "content": "DOIs at different levels A DOI may refer to a journal or book (a title-level DOI), or to a specific article or chapter.\nShow image × Journals and DOIs Like a set of nesting dolls, a journal may be made up of volumes, each containing a number of issues, each containing a number of articles. You can assign a DOI at each level, for example:\njournal-level-DOI (sometimes called the title-level-DOI) 10.5555/QYPF2031. Like an ISSN, it refers to the whole journal volume-level-DOI 10.5555/FFFU4804 issue-level-DOI 10.5555/QKLE5634 article-level-DOI 10.5555/CNBT7653 Show image × The role of the journal-level-DOI, volume-level-DOI, and issue-level-DOI is to link persistently to a point in the journal structure. 
These DOIs do not have any associated content, and it does not cost anything to register these DOIs.\nHowever, article-level-DOIs do have associated content, and therefore a fee applies to register these DOIs.\nBooks and DOIs Like a set of nesting dolls, a book may be made up of chapters. Again, you can assign a DOI at each level, for example:\nbook-level-DOI (sometimes called the title-level-DOI) 10.5555/ZAAR1365. Just like an ISBN, it refers to the whole book. chapter-level-DOI 10.5555/TFWD2627 Show image × Both book-level-DOIs and chapter-level-DOIs have associated content, and therefore a fee applies to register these DOIs.\nLearn more about our fees for different record types, and how to construct your DOIs.\n", "headings": ["DOIs at different levels ","Journals and DOIs ","Books and DOIs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/constructing-your-dois/suffixes-containing-special-characters/", "title": "Suffixes containing special characters", "subtitle":"", "rank": 4, "lastmod": "2021-12-03", "lastmod_ts": 1638489600, "section": "Documentation", "tags": [], "description": "You might see (or inherit responsibility for) older DOIs which contain other characters, and require special treatment in a URL:\nEncode hash or pound sign # as %23 Do not encode left bracket (or less than) \u0026lt; as \u0026amp;lt; and right bracket (or greater than) \u0026gt; as \u0026amp;gt; when resolving DOIs or retrieving metadata from our REST API to retrieve the metadata (see below) Do not encode forward slash / when resolving DOIs or retrieving metadata from our REST API For example, use the following when resolving DOIs with special characters:", "content": "You might see (or inherit responsibility for) older DOIs which contain other characters, and require special treatment in a URL:\nEncode hash or pound sign # as %23 Do not encode left bracket (or less than) \u0026lt; as \u0026amp;lt; and right bracket (or greater than) \u0026gt; as \u0026amp;gt; when resolving DOIs or retrieving metadata from our REST API to retrieve the metadata (see below) Do not encode forward slash / when resolving DOIs or retrieving metadata from our REST API For example, use the following when resolving DOIs with special characters:\nhttps://doi.org/10.1002/(SICI)1521-3951(199911)216:1\u0026lt;135::AID-PSSB135\u0026gt;3.0.CO;2-%23 instead of:\nhttps://doi.org/10.1002/(SICI)1521-3951(199911)216:1\u0026lt;135::AID-PSSB135\u0026gt;3.0.CO;2-# And, to retrieve the metadata in our REST API, using those same DOI examples use:\nhttps://api.crossref.org/works/10.1002/(SICI)1521-3951(199911)216:1%3C135::AID-PSSB135%3E3.0.CO;2-%23 instead of:\nhttps://api.crossref.org/works/10.1002/(SICI)1521-3951(199911)216:1\u0026lt;135::AID-PSSB135\u0026gt;3.0.CO;2-# ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/constructing-your-dois/suggested-doi-registration-workflow-including-suffix-generator/", "title": "Suggested DOI registration workflow, including suffix generator", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Here is a suggested workflow for depositing DOIs and metadata in order to register your content. We also provide a suffix generator tool to help you create suffix strings which comply with best practices (suffixes should be opaque, unique, and short). Download the suffix generator (macro-enabled .xlsm file). 
When you open it, choose Read-only, and Enable macros.\nCreate a spreadsheet to keep track of DOIs assigned (master list). Use one tab per prefix.", "content": "Here is a suggested workflow for depositing DOIs and metadata in order to register your content. We also provide a suffix generator tool to help you create suffix strings which comply with best practices (suffixes should be opaque, unique, and short). Download the suffix generator (macro-enabled .xlsm file). When you open it, choose Read-only, and Enable macros.\nCreate a spreadsheet to keep track of DOIs assigned (master list). Use one tab per prefix. Add columns for suffix, full DOI, and URL (you can add others if you wish) Generate suffixes using our tool (or invent them yourself), and add each suffix and full DOI to the master list Each time you create a new content item, add its URL to a new row. Display the DOI from the same row on your content item, following our DOI display guidelines Deposit the DOI and its associated metadata with Crossref. The DOI is only active once it is registered with us. Checking for duplicates: the suffix generator should be sufficiently random to avoid creating duplicates. However, if, when you deposit the metadata with Crossref, the system says the DOI has already been registered, remove it from your master list and use a new suffix.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/constructing-your-dois/dois-and-dspace-repositories/", "title": "DOIs and DSpace repositories", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "We encourage members with institutional repositories to assign DOIs to their original non-duplicative works.\nHere are some simple guidelines for repositories based on DSpace. DSpace and the DOI both use the Handle system for identifiers. When a DSpace repository is configured it must be registered with CNRI, which provides the repository with a Handle prefix (typically a sequence of numbers). This is not a DOI prefix (also a sequence of numbers which begin with 10.", "content": "We encourage members with institutional repositories to assign DOIs to their original non-duplicative works.\nHere are some simple guidelines for repositories based on DSpace. DSpace and the DOI both use the Handle system for identifiers. When a DSpace repository is configured it must be registered with CNRI, which provides the repository with a Handle prefix (typically a sequence of numbers). This is not a DOI prefix (also a sequence of numbers which begin with 10.).\nWhen constructing a Crossref DOI for your repository content, use the DSpace suffix as the DOI suffix. For example, a member with a DOI prefix of 10.1575 would construct a DSpace DOI like this: 10.1575/1912/1099, where:\n10.1575 = DOI prefix 1912 = DSpace prefix 1099 = DSpace suffix The URL registered for the DOI should be the Handle URL, which uses the form http://0-hdl-handle-net.libus.csd.mu.edu/DSpace-prefix/DSpace-suffix.
So in this example, the URL registered for the DOI is: http://0-hdl-handle-net.libus.csd.mu.edu/1912/1099, and the DOI link is: https://0-doi-org.libus.csd.mu.edu/10.1575/1912/1099.\nYou may also be interested in recommendations for using ORCID in repositories.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/creating-a-landing-page/", "title": "Creating a landing page", "subtitle":"", "rank": 4, "lastmod": "2024-04-19", "lastmod_ts": 1713484800, "section": "Documentation", "tags": [], "description": "As soon as your content is registered with Crossref, users will be able to retrieve identifiers and create links with them. Crossref DOIs must resolve to a unique landing (or response) page that you maintain.\nA landing page is a web page that provides further information to someone who has clicked on a DOI link to help them confirm that they are in the right place. It\u0026rsquo;s important that each DOI resolves to a unique landing page that is just for that specific item.", "content": "As soon as your content is registered with Crossref, users will be able to retrieve identifiers and create links with them. Crossref DOIs must resolve to a unique landing (or response) page that you maintain.\nA landing page is a web page that provides further information to someone who has clicked on a DOI link to help them confirm that they are in the right place. It\u0026rsquo;s important that each DOI resolves to a unique landing page that is just for that specific item.\nLanding pages for published research outputs The landing page for research outputs should be unique for that item and should contain:\nFull bibliographic information: so that the user can verify they have been delivered to the correct item The DOI displayed as a URL: so that if a reader wishes to cite this item, they can just copy and paste the DOI link (learn more about our DOI display guidelines) A way to access the full-text of the content: It\u0026rsquo;s acceptable for the full-text to be behind a login or paywall - this is fine as long as the landing page is accessible to everyone. A DOI can resolve to the HTML full-text of the content, and if this page includes the criteria above, a separate landing page is not necessary. It\u0026rsquo;s not good practice to link directly to a PDF however, as it will start downloading when the DOI is clicked.
Here are some examples of landing pages for published research outputs:\nhttps://doi.org/10.54825/IOLO6421 - an open-access journal article https://0-doi-org.libus.csd.mu.edu/10.11116/MTA.2.2.2 - a closed-access (i.e., paywalled) journal article https://0-doi-org.libus.csd.mu.edu/10.54957/jolas.v2i2.182 - an open-access journal article hosted on the OJS platform https://0-doi-org.libus.csd.mu.edu/10.1109/AIPR.2015.7444535 - a closed-access conference proceeding https://0-doi-org.libus.csd.mu.edu/10.5772/intechopen.106437 - an open-access book chapter https://0-doi-org.libus.csd.mu.edu/10.18574/nyu/9780814768839.001.0001 - a closed-access book https://0-doi-org.libus.csd.mu.edu/10.3133/ofr93184 - an open-access report https://0-doi-org.libus.csd.mu.edu/10.1079/cabicompendium.46299 - a closed-access dataset https://0-doi-org.libus.csd.mu.edu/10.62481/f02ed1c5 - an open-access posted content document (a scientific blog) Many publishers also include abstracts on their landing pages, especially for journal articles.\nAnd a little more for preprints As well as the criteria above, a preprint landing page (such as https://0-doi-org.libus.csd.mu.edu/10.31235/osf.io/bkx3n) should also prominently identify the content as a preprint and include a link to any AAM or VOR. This information should be above the fold.\nLanding pages for grants The landing pages for grants should be unique for that specific grant and contain:\nInformation about the grant so the user can verify they\u0026rsquo;ve been delivered to the correct item The DOI displayed as a URL - learn more about our DOI display guidelines. Here are two example landing pages for grants: https://0-doi-org.libus.csd.mu.edu/10.37717/220020589 and https://0-doi-org.libus.csd.mu.edu/10.35802/107769.\n", "headings": ["Landing pages for published research outputs","And a little more for preprints","Landing pages for grants"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/working-with-a-service-provider/", "title": "Working with a service provider", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Service providers such as hosting platforms and XML providers may provide or deposit metadata with Crossref on behalf of members.\nThese organizations provide key services to our shared community, by formatting metadata and providing visibility to registered content on hosting platforms. Working closely with service providers as our own services develop is critical to improving research communications at all points in the supply chain.\nIf you\u0026rsquo;re working with a service provider, check that they’re helping you fulfil your member obligations, and look at your Participation Report to see how your metadata is growing.", "content": "Service providers such as hosting platforms and XML providers may provide or deposit metadata with Crossref on behalf of members.\nThese organizations provide key services to our shared community, by formatting metadata and providing visibility to registered content on hosting platforms. Working closely with service providers as our own services develop is critical to improving research communications at all points in the supply chain.\nIf you\u0026rsquo;re working with a service provider, check that they’re helping you fulfil your member obligations, and look at your Participation Report to see how your metadata is growing. 
Learn more about metadata stewardship, and how our reports help you evaluate and improve your metadata. If you would like your reports to go directly to a contact at your service provider, please contact us so we can add their details.\nEven if you are not moving platforms, you might find our checklist for platform migration helpful in discussing aspects of metadata deposit with your service provider.\nIf you are looking to access rather than deposit Crossref metadata, learn more about metadata retrieval and choose the best service for your needs.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/working-with-a-service-provider/hosting-platforms/", "title": "Hosting platforms", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Hosting platforms are organizations that host publisher content. They give visibility to Crossref metadata in two key ways:\nPlatforms register content on behalf of publishers. This means that they are responsible for a significant proportion of the metadata records in Crossref. Platforms show published research to the world. They support authors, readers and search engines through the display of bibliographic metadata, links to citing articles and Cited-by counts, and updates and retractions through Crossmark.", "content": "Hosting platforms are organizations that host publisher content. They give visibility to Crossref metadata in two key ways:\nPlatforms register content on behalf of publishers. This means that they are responsible for a significant proportion of the metadata records in Crossref. Platforms show published research to the world. They support authors, readers and search engines through the display of bibliographic metadata, links to citing articles and Cited-by counts, and updates and retractions through Crossmark. Hosting platforms deliver metadata to Crossref as well as consuming it as output. Citation matching is a good example of metadata retrieval - try this out using our Metadata Search, and learn more about looking up metadata and identifiers.\nHosting platforms work with our members and help deliver Crossref services to our shared community. When we plan a change or new service, we help hosting platforms to prepare both for the change and for questions from members.\nContent sometimes moves from one member or platform to another. We aim to make these transitions as smooth as possible, so that deposits and other key activities continue uninterrupted. Learn more about transferring responsibility for DOIs.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/working-with-a-service-provider/manuscript-submission-systems/", "title": "Manuscript submission systems", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Metadata first takes shape with author manuscripts.\nDiscoverability starts here Manuscript submission systems, and the authors who use them, start metadata on its long lifecycle. 
Publishers’ instructions to authors and author-created metadata (such as keywords) are first captured by manuscript submission systems.\nAlong with other publishing service organizations such as typesetters, submission systems are a kind of service provider, and are the first step in getting content registered, often by yet another kind of service provider, hosting platforms.", "content": "Metadata first takes shape with author manuscripts.\nDiscoverability starts here Manuscript submission systems, and the authors who use them, start metadata on its long lifecycle. Publishers’ instructions to authors and author-created metadata (such as keywords) are first captured by manuscript submission systems.\nAlong with other publishing service organizations such as typesetters, submission systems are a kind of service provider, and are the first step in getting content registered, often by yet another kind of service provider, hosting platforms.\nBeyond bibliographic Learn about the importance of depositing different types of metadata.\nWe encourage the fullest metadata possible and recognize that manuscript submission systems are among those that make this possible!\n", "headings": ["Discoverability starts here ","Beyond bibliographic "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/working-with-a-service-provider/planning-a-platform-migration/", "title": "Planning a platform migration", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "We understand that migrating your hosting platform is an extremely stressful time. There’s so much to think about and plan for, and the last thing you want to be worrying about is whether your DOIs will resolve after the move and what this will mean for the quality of your metadata.\nMembers often ask us for advice on the best way to handle a platform migration. This handy guide will help you prepare for the move, and gives hints and tips for managing the actual process as smoothly as possible.", "content": "We understand that migrating your hosting platform is an extremely stressful time. There’s so much to think about and plan for, and the last thing you want to be worrying about is whether your DOIs will resolve after the move and what this will mean for the quality of your metadata.\nMembers often ask us for advice on the best way to handle a platform migration. This handy guide will help you prepare for the move, and gives hints and tips for managing the actual process as smoothly as possible. It also includes a very useful checklist to include in your request for proposal (RFP) to make sure you’re asking the right questions of your vendors and making life easier for everyone involved.\nSelecting your new service provider A key element in selecting your new service provider will be ensuring they can do everything you need them to do with Crossref.\nThere’s a big difference between just depositing skeleton metadata and depositing full, rich metadata and participating in all our services. And there’s sometimes a big difference in cost from different providers. You’ll want to be clear on what you need up front so your providers know what you’re expecting and you don’t get any surprises further down the line. This also means you’ll be able to accurately compare the different service providers, and have a smoother working relationship with your chosen service provider in the future. 
So where do you start?\nMake sure you know how you’re participating in Crossref today The first place to start is to be clear on how you’re participating with us right now. This means your starting point is a metadata audit. We know you probably won’t want to do this with everything else that’s going on with a platform move. However, members who haven’t done this up front have said that they wish they had. They ended up having to do one anyway but further down the process and with less time, adding to the stress. It’s definitely better to start this early.\nThe easiest way to see what metadata you’re already depositing with us is to use our new Participation Reports tool. It may surprise you to see what you are and aren’t currently depositing, or what your current platform provider is or isn’t depositing on your behalf.\nThink about how you want to participate in Crossref in the future Once you’re clear on how you’re participating currently, think about what you may want to do in the future.\nWhat record types are you registering with us right now, and what might you want to register in the future? Perhaps you’re only registering journal articles right now, but are planning to register preprints in the future.\nWhat metadata elements are you registering with us right now, and what may you want to register later? Are you registering references? They’re a key addition for discoverability and especially recommended if you want to use the Cited-by service in the future. Are you registering full-text URLs and license information for text and data mining? What about full-text URLs for the Similarity Check service?\nLearn more about all our services.\nAsk the right questions of your service providers up front Members in the community have told us that even when some service providers have confirmed in their contract that they \u0026ldquo;support Crossref content registration\u0026rdquo; or \u0026ldquo;deposit metadata with Crossref\u0026rdquo; there have still been surprises when working together in terms of the amount of metadata the service provider can actually deposit, or costs for doing this fully.\nTo help ask Crossref-related questions of service providers, we’ve created a checklist. This contains specific questions to ask about content registration. You can add the checklist into your RFP so you’re absolutely clear on what your prospective service providers can and can’t provide. Your metadata audit and your thinking about what you want to do in the future will help you to select the right sections from the checklist.\nPlanning the changeover process Once you’ve selected your new provider and are starting to think about the changeover process and timings, please let us know.\nConfirm with us which platform you’re moving to, and who we’ll be working with when you actually migrate. The best model for success is for us to work with a single contact. It doesn’t matter if that contact works for you or your new service provider, the key thing is that we keep the same contact consistently through your migration process.\nIf you need to update a lot of URLs (over 50,000), please let us know as soon as possible when you’re planning on doing this.
This will slow down our deposit queue for other members, and we’ll need to avoid doing any maintenance while your update is running, so we’ll need to coordinate carefully with you on this.\nThis is a great opportunity to review the other contacts we hold for you, and make sure our billing, voting, technical, metadata quality, and primary (formerly business) contacts are up-to-date.\nAccount details and permissions You need to think carefully about how your service provider will register (and update) content on your behalf.\nIf your service provider will be using their own Crossref account credentials to register your content with us, do let us know in advance so we can give them permission to update your prefix using their credentials. We also may need to remove the account credentials for your old service provider. If your new service provider will be sharing your Crossref account credentials, we may need to update passwords so that your old service provider doesn\u0026rsquo;t accidentally continue to update your records. Updating the URLs for your existing DOIs You’ll probably need to update the URLs for all your existing DOIs when you move. Updating multiple DOIs at a time in our system is a process, and can’t happen at a single moment in time. Do make sure that your platforms can coordinate redirection while this is happening.\nThe good news is that this update won’t cost you anything extra with Crossref - you’re only charged registration fees the first time you register a DOI. Although if you’ve uncovered backlist DOIs that were never registered as part of your metadata audit you will need to pay registration fees for them.\nDon’t forget that you may have other types of URLs in the metadata that will need updating as well as the main URL where the DOI resolves to. If you’ve previously registered text and data mining URLs, or full-text URLs for Similarity Check, these will also need updating.\nIf you’re planning on updating your URLs, let us know and we can give you an estimate for how long this will take and help you manage the process efficiently.\nUpdating contacts for reports We send out a range of reports to help our members manage their metadata. Please confirm with us which named contacts (and email addresses) should receive Crossref reports going forward. 
These reports include:\nConflict report DOI error report Preprint version of record report Resolution report Title report Don’t forget you may already have reports set up to go to your existing service provider, so do ask us to delete these.\nLearn more about reports.\nAfter the migration Once your migration is complete, do keep an eye on your Participation Reports to make sure everything is happening as you think it should.\n", "headings": ["Selecting your new service provider ","Make sure you know how you’re participating in Crossref today ","Think about how you want to participate in Crossref in the future ","Ask the right questions of your service providers up front ","Planning the changeover process ","Account details and permissions ","Updating the URLs for your existing DOIs ","Updating contacts for reports ","After the migration "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/working-with-a-service-provider/checklist-for-platform-migration/", "title": "Checklist for platform migration", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This checklist will help you ask the right questions of your content hosting providers when selecting a new platform. It helps ensure that you can continue to participate with us in the way you want to after a migration - without any surprises.\nYou can cut and paste the bits you need into your RFP, or use File \u0026gt; Print in your browser to save or print a copy to use in your meeting.", "content": "This checklist will help you ask the right questions of your content hosting providers when selecting a new platform. It helps ensure that you can continue to participate with us in the way you want to after a migration - without any surprises.\nYou can cut and paste the bits you need into your RFP, or use File \u0026gt; Print in your browser to save or print a copy to use in your meeting. But before you remove any sections, do check that you definitely won’t want them in the future. Even if you aren’t planning to use all the options now, it’s good to know what your chosen platform will be able to do for you in the future, and whether there will be any extra costs involved.\nRecord types Is the potential provider able to register these record types with Crossref (using the relevant schema)?\nRecord type Able to register? Extra charge? Charge (if applicable) Books, chapters and reference works Components (as part of other content) Conference proceedings and conference papers Datasets Journal articles Peer review reports Pending publications (DOIs on acceptance) Preprints Reports and working papers Standards Theses and dissertations Grants that you award Metadata elements Is the potential provider able to gather and register these metadata elements with Crossref?\nMetadata type Able to register? Extra charge? Charge (if applicable) Abstracts Award/grant numbers Full-text URLs for Similarity Check Full-text URLs for text mining Funder IDs License URLs (eg for Version of Record (VOR), Accepted Manuscript (AM), Text and Data Mining (TDM) or STM\u0026rsquo;s Article Sharing Framework) ORCID iDs References Relationships (such as data, translation, preprint) URL, publication title, authors, and dates ROR identifiers Other Crossref services Is the potential provider able to support these services?\nOther Crossref services Able to register? Extra charge? 
Charge (if applicable) Cited-by display Crossmark display Reference linking (displaying Crossref DOIs in article reference lists) Backfile content Is the potential provider able to support the deposit of metadata for backfile content?\nBackfile content Extra charge? Charge (if applicable) Will backfile content be supported? Will backfile content be migrated at the same time as current? Will backfile content registration happen at the same time? DOI obligations and best practice Is the potential provider able to support the DOI best practice and obligations?\nDOI best practice and obligations Extra charge? Charge (if applicable) DOI display guidelines compliance ", "headings": ["Record types ","Metadata elements ","Other Crossref services ","Backfile content ","DOI obligations and best practice "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/required-recommended-elements/", "title": "Required, recommended, and optional metadata", "subtitle":"", "rank": 4, "lastmod": "2021-05-13", "lastmod_ts": 1620864000, "section": "Documentation", "tags": [], "description": "Each record type we support has a unique set of requirements and recommended metadata. Contributor metadata is consistent across record types and, while not always required, has consistent recommendations if present.\nRequired metadata must be included or your submission will be rejected. Recommended metadata should be included to create a complete metadata record. Optional metadata should be included if relevant, but will not be relevant for most records.\nFind required, recommended and optional metadata for:", "content": "Each record type we support has a unique set of requirements and recommended metadata. Contributor metadata is consistent across record types and, while not always required, has consistent recommendations if present.\nRequired metadata must be included or your submission will be rejected. Recommended metadata should be included to create a complete metadata record. Optional metadata should be included if relevant, but will not be relevant for most records.\nFind required, recommended and optional metadata for:\nContributor Journal Book Conference Proceeding Database Dissertation Posted content Report / working paper Standard Peer Review Contributor metadata Required Surname Recommended given_name, suffix, affiliation, ORCID Journal and article metadata Required Journal (journal_metadata) full_title, ISSN or title-level DOI and URL Issue (issue_metadata) issue, publication_date (year) Article (article_metadata) titles, publication_date (year), doi_data Issue elements are only required if a DOI is being deposited at the issue level. 
Article elements are likewise only required for article DOI deposits.\nRecommended Journal (journal_metadata) abbrev_title, doi_data, coden, journal_issue, archive-locations (with one archive name), title-level DOI and URL Issue (issue_metadata) publication_date (month, day), journal_volume, contributors, issue, doi_data Article (article_metadata) contributors, ORCID, publication_date (day, month), pages (first_page, last_page), citation_list, funding, license, Crossmark metadata and JATS-formatted abstracts Optional publisher_item, special_numbering, component_list Book metadata Required Series titles, ISSN, volume, publication_date (year), publisher (publisher_name) Set titles, volume Book titles, publication_date (year), publisher Chapter doi_data Recommended Series doi_data,edition_number, ISBN, contributors, coden, series_number, citation_list Set contributors, ISBN, edition_number, citation_list, doi_data Book contributors, ISBN, edition_number, doi_data, citation_list, funding, license, and Crossmark metadata Chapter contributors, titles, pages, publication_date, citation_list, funding, license, and Crossmark metadata Optional publisher_item, part_number, component_number,component_list Conference Proceeding metadata Required Series titles, ISSN Proceeding level proceedings_title, publisher, publication_date (year) Conference paper contributors, titles, doi_data Recommended Series level doi_data, contributors, series_number, ISBN Conference level volume, contributors, ISBN, event_metadata (conference_date, conference_location, conference_acronym, conference_theme, conference_sponsor, conference_number), proceedings_subject Conference paper publication_date, pages, citation_list, funding, license, and Crossmark metadata Optional publisher_item, coden, component_list Dataset metadata Required Database level titles Dataset level doi_data Recommended Database level contributors, description, database_date, publisher, institution, doi_data Dataset level contributors, titles, database_date, description, format, citation_list, component_list, funding, license, and Crossmark metadata Optional publisher_item, component_list Dissertation metadata Required titles, approval_date, institution, doi_data Recommended contributors, ISBN, degree, ORCID, funding, license, and Crossmark metadata Optional citation_list, component_list Posted content metadata Required titles, posted_date, doi_data Recommended group_title, contributors, acceptance_date, institution, item_number, abstracts, doi_data, citation_list, funding, license, Crossmark metadata and JATS-formatted abstracts Optional component_list Report / Working paper metadata Required Series level titles, ISSN Report level title, publication_date (year) Recommended Series level contributors, coden, series_number, volume, doi_data, edition_number, approval_date, publisher, institution, doi_data, citation_list Report level contributors, ORCIDs, edition_number, approval_date, ISBN, publisher, institution, citation_list, funding, license, and Crossmark metadata Optional publisher_item, contract_number Standard metadata Required Standard level title, designator, approval_date, standard_body_name, standard_body_acronym Item level contributors, titles, component_number, publication_date (year), pages, publisher_item, doi_data Recommended Standard level contributors, edition_number, ISBN, institution, citation_list, funding, license, and Crossmark metadata metadata Item level citation_list Optional publisher_item, content_item, component_list Peer Review 
metadata Required title, review_date (year), relation (isReviewOf) Recommended contributors, institution, competing_interest_statement, running_number, license metadata ", "headings": ["Contributor metadata ","Journal and article metadata ","Book metadata ","Conference Proceeding metadata ","Dataset metadata ","Dissertation metadata ","Posted content metadata ","Report / Working paper metadata ","Standard metadata ","Peer Review metadata "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/", "title": "Markup guides for metadata segments", "subtitle":"", "rank": 4, "lastmod": "2023-02-23", "lastmod_ts": 1677110400, "section": "Documentation", "tags": [], "description": "For a deeper dive into each metadata segment, an introduction and markup guide for each are below:\nAbstracts Affiliations and ROR Archive locations Article IDs Contributors Face markup within titles Full-text URLs Funding information ISSN and ISBN License Information MathML Multi-language content and translations References Relationships Titles Each record model also has its own markup guides.", "content": "For a deeper dive into each metadata segment, an introduction and markup guide for each are below:\nAbstracts Affiliations and ROR Archive locations Article IDs Contributors Face markup within titles Full-text URLs Funding information ISSN and ISBN License Information MathML Multi-language content and translations References Relationships Titles Each record model also has its own markup guides.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/", "title": "Markup guides for record types", "subtitle":"", "rank": 4, "lastmod": "2022-05-20", "lastmod_ts": 1653004800, "section": "Documentation", "tags": [], "description": "For a deeper dive into each record model, an introduction and markup guide for each are below:\nBooks and chapters markup guide Components markup guide Conference proceedings markup guide Datasets markup guide Dissertations markup guide Grants markup guide Journals and articles markup guide Peer reviews markup guide Pending publications markup guide Posted content (includes preprints) markup guide Reports and working papers markup guide Standards markup guide Many segments of metadata are repeatable across record models and have their own markup guides.", "content": "For a deeper dive into each record model, an introduction and markup guide for each are below:\nBooks and chapters markup guide Components markup guide Conference proceedings markup guide Datasets markup guide Dissertations markup guide Grants markup guide Journals and articles markup guide Peer reviews markup guide Pending publications markup guide Posted content (includes preprints) markup guide Reports and working papers markup guide Standards markup guide Many segments of metadata are repeatable across record models and have their own markup guides.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/xsd-schema-quick-reference/", "title": "XSD schema quick reference", "subtitle":"", "rank": 4, "lastmod": "2021-10-05", "lastmod_ts": 1633392000, "section": "Documentation", "tags": [], "description": "We support additional schema (not listed here) for legacy purposes.\nDeposit schema Used for registering and updating DOI metadata records:\nSchema Purpose Status Further info grant_id0.2.0.xsd grant metadata recommended documentation 
crossref5.3.1.xsd full metadata deposits recommended documentation crossref4.8.1.xsd full metadata deposits recommended documentation crossref4.4.2.xsd full metadata deposits available documentation AccessIndicators.xsd license data imported relations.xsd relationships between DOIs and other identifiers imported clinicaltrials.xsd relationships between publications that report on a clinical trial imported doi_resources4.", "content": "We support additional schema (not listed here) for legacy purposes.\nDeposit schema Used for registering and updating DOI metadata records:\nSchema Purpose Status Further info grant_id0.2.0.xsd grant metadata recommended documentation crossref5.3.1.xsd full metadata deposits recommended documentation crossref4.8.1.xsd full metadata deposits recommended documentation crossref4.4.2.xsd full metadata deposits available documentation AccessIndicators.xsd license data imported relations.xsd relationships between DOIs and other identifiers imported clinicaltrials.xsd relationships between publications that report on a clinical trial imported doi_resources4.4.2.xsd used to append or update specific sets of metadata to an existing record recommended documentation Query schema Used for formatting XML queries:\nSchema Purpose Further info Crossref_query_input2.0.xsd used to input XML queries to the system documentation Metadata retrieval schema Used for retrieving our metadata:\nSchema Purpose Further info crossref_query_output2.0.xsd returns query results in xsd_xml format, used for Cited-by results documentation crossref_query_output3.0.xsd returns query results in UNIXSD format documentation unixref1.1.xsd returns query results in the UNIXML format, also used to support the data delivered by our OAI-PMH service documentation unixref1.0.xsd returns query results in the UNIXML format for some older content, also used to support the data delivered by our OAI-PMH service documentation crossref_output3.0.1.xsd used to support data files generated for bulk data distribution documentation OAI-PMH.cr.xsd accommodates differences between Crossref\u0026rsquo;s OAI-PMH implementation and the published Open Archives OAI 2.0 schema documentation ", "headings": ["Deposit schema ","Query schema ","Metadata retrieval schema "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/schema-versions/", "title": "Schema versions", "subtitle":"", "rank": 4, "lastmod": "2025-02-10", "lastmod_ts": 1739145600, "section": "Documentation", "tags": [], "description": "We support several versions of our metadata input and grants schema, as well as XML Schema Definition (XSD) schema for looking up and retrieving DOIs and metadata. A quick reference is available. The metadata input schema is used to deposit metadata for most record types, except Grants, which have their own schema.\nWe currently support versions 4.3.0 to 5.3.1 of our main metadata schema. If you are beginning to register your metadata with Crossref you should use the most recent version (currently 5.", "content": "We support several versions of our metadata input and grants schema, as well as XML Schema Definition (XSD) schema for looking up and retrieving DOIs and metadata. A quick reference is available. The metadata input schema is used to deposit metadata for most record types, except Grants, which have their own schema.\nWe currently support versions 4.3.0 to 5.3.1 of our main metadata schema. 
If you are beginning to register your metadata with Crossref you should use the most recent version (currently 5.3.1) to ensure you are able to take advantage of all metadata deposit options. We also have a resource-only deposit schema that may be used to add some pieces of metadata to an existing record.\nMetadata input schema versioning All supported schema are available in our Schema GitLab repository. Versions 4.3.0 - 4.8.1 of our schema are backwards-compatible with the exception of deposits for standards, which may only be deposited with version 4.3.6 and above.\nWe are now incrementing our input schema version numbers with each change for all updates after version 4.4.2. Note that additional schema versions are available via GitLab (5.0 - 5.2 for example) but are not documented for use, as subsequent versions were released at the same time (5.3.1).\nRecommended metadata deposit schema crossref5.3.1.xsd: adds support for ROR identifiers in affiliation metadata Recommended grants deposit schema grant_id0.2.0.xsd: adds support for ROR identifiers to identify funders; adds new funding types (APC, BPC, infrastructure) Recommended resource-only deposit schema doi_resources4.4.2.xsd Also supported crossref4.8.1.xsd: changes include support for ISBN that begin with 979, changes to the regex for the email_address field, relaxed regex for given_name to allow numbers, and schema refactoring. crossref4.4.2.xsd: adds support for pending publication, distributed usage logging (DUL), multiple dissertation authors, abstracts for all record types, support for JATS 1.2 abstracts, and adds acceptance_date element to journal article, book, book chapter, and conference papers crossref4.4.1.xsd: adds support for peer reviews crossref4.4.0.xsd: adds support for posted content (includes preprints) crossref4.3.7.xsd: adds support for linked clinical trials, journal deposits without ISSNs crossref4.3.6.xsd: expanded support for standards crossref4.3.5.xsd: supports relationships between DOIs and other objects crossref4.3.4.xsd: adds archive_location option, changes name element to depositor_name crossref4.3.3.xsd: modifications for standards crossref4.3.2.xsd: adds support for license metadata crossref4.3.1.xsd: adds support for Crossmark, funding data, ORCID iDs crossref4.3.0.xsd: revisions to handling of books doi_resources4.3.6.xsd doi_resources4.3.5.xsd doi_resources4.3.4.xsd doi_resources4.3.2.xsd doi_resources4.3.0.xsd ", "headings": ["Metadata input schema versioning","Recommended metadata deposit schema ","Recommended grants deposit schema","Recommended resource-only deposit schema ","Also supported "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/metadata-deposit-schema-5-3-1/", "title": "Metadata deposit schema 5.3.1", "subtitle":"", "rank": 4, "lastmod": "2021-10-05", "lastmod_ts": 1633392000, "section": "Documentation", "tags": [], "description": "Beginning with deposit schema version 4.4.2, all Crossref schema releases are available in our GitLab schema repository as a bundle. 
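As a practical aside, the following minimal Python sketch (an illustration, not an official Crossref tool) reads the schema version declared by an existing deposit file, so you can tell whether it already targets the recommended 5.3.1 release. The file name is hypothetical, and the namespace pattern is assumed from the schema declaration example shown in the abstracts markup guide further below.

import re
import xml.etree.ElementTree as ET

def declared_schema_version(path):
    root = ET.parse(path).getroot()
    # The root tag of a deposit file looks like "{.../schema/5.3.1}doi_batch";
    # pull the version number out of the namespace URI.
    match = re.match(r"\{.*?/schema/([0-9.]+)\}", root.tag)
    return match.group(1) if match else None

version = declared_schema_version("deposit.xml")  # hypothetical file name
print("declared schema version:", version)
if version != "5.3.1":
    print("consider moving this deposit to the recommended 5.3.1 schema")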
Bundle 0.3.1 contains schema version 5.3.1 and associated files.\nSchema: crossref5.3.1.xsd Full documentation: 5.3.1\nCrossref included schema:\ncommon5.3.1.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd External imported schema:\nMathML JATS Changes from 4.8.1\nreplace \u0026lt;affiliation\u0026gt; tag with \u0026lt;affiliations\u0026gt; tag to support new affiliations structure add \u0026lt;institution_id\u0026gt; element to support ROR and other org IDs make either \u0026lt;institution_id\u0026gt; or \u0026lt;institution_name\u0026gt; required within institution metadata relax regex rules for \u0026lt;given_name\u0026gt; element ", "content": "Beginning with deposit schema version 4.4.2, all Crossref schema releases are available in our GitLab schema repository as a bundle. Bundle 0.3.1 contains schema version 5.3.1 and associated files.\nSchema: crossref5.3.1.xsd Full documentation: 5.3.1\nCrossref included schema:\ncommon5.3.1.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd External imported schema:\nMathML JATS Changes from 4.8.1\nreplace \u0026lt;affiliation\u0026gt; tag with \u0026lt;affiliations\u0026gt; tag to support new affiliations structure add \u0026lt;institution_id\u0026gt; element to support ROR and other org IDs make either \u0026lt;institution_id\u0026gt; or \u0026lt;institution_name\u0026gt; required within institution metadata relax regex rules for \u0026lt;given_name\u0026gt; element ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/grants-schema/", "title": "Grants schema", "subtitle":"", "rank": 4, "lastmod": "2025-02-10", "lastmod_ts": 1739145600, "section": "Documentation", "tags": [], "description": "All metadata records and identifiers registered with Crossref are submitted as XML formatted using our metadata input schema. Unlike other objects registered with Crossref, grants have their own grant-specific input schema.\nVersion 0.2.0 was released in January 2025 and added support for ROR identifiers to identify funders as well as new funding types (APC, BPC, infrastructure)\nAlso supported Version 0.1.1 Version 0.1.0 ", "content": "All metadata records and identifiers registered with Crossref are submitted as XML formatted using our metadata input schema. Unlike other objects registered with Crossref, grants have their own grant-specific input schema.\nVersion 0.2.0 was released in January 2025 and added support for ROR identifiers to identify funders as well as new funding types (APC, BPC, infrastructure)\nAlso supported Version 0.1.1 Version 0.1.0 ", "headings": ["Also supported"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/metadata-deposit-schema-4-8-1/", "title": "Metadata deposit schema 4.8.1", "subtitle":"", "rank": 4, "lastmod": "2021-10-05", "lastmod_ts": 1633392000, "section": "Documentation", "tags": [], "description": "Beginning with deposit schema version 4.4.2, all Crossref schema releases are available in our GitLab schema repository as a bundle. Bundle 0.3.1 contains schema version 4.8.1 and associated files.\nSchema: crossref4.8.1.xsd Full documentation: 4.8.1\nCrossref included schema:\ncommon4.8.1.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd External imported schema:\nMathML JATS Changes from 4.4.2\nrefactoring of schema relax regex rules for email addresses allow ISBN beginning with 979 update imported JATS schema to v. 
1.", "content": "Beginning with deposit schema version 4.4.2, all Crossref schema releases are available in our GitLab schema repository as a bundle. Bundle 0.3.1 contains schema version 4.8.1 and associated files.\nSchema: crossref4.8.1.xsd Full documentation: 4.8.1\nCrossref included schema:\ncommon4.8.1.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd External imported schema:\nMathML JATS Changes from 4.4.2\nrefactoring of schema relax regex rules for email addresses allow ISBN beginning with 979 update imported JATS schema to v. 1.3 relax regex rules for \u0026lt;given_name\u0026gt; element ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/resource-only-deposit-schema-4-4-2/", "title": "Resource-only deposit schema 4.4.2", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Schema: doi_resources4.4.2.xsd Full documentation: doi_resource4.4.2\ndoi_resources4.4.2.xsd is included in bundle 0.1.0 and imports\ncommon4.4.2.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd ", "content": "Schema: doi_resources4.4.2.xsd Full documentation: doi_resource4.4.2\ndoi_resources4.4.2.xsd is included in bundle 0.1.0 and imports\ncommon4.4.2.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/metadata-deposit-schema-4-4-2/", "title": "Metadata deposit schema 4.4.2", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Beginning with deposit schema version 4.4.2, all Crossref schema releases are available in our GitLab schema repository as a bundle. Bundle 0.1.0 contains schema version 4.4.2 and associated files.\nSchema: crossref4.4.2.xsd Full documentation: 4.4.2\nCrossref included schema:\ncommon4.4.2.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd External imported schema:\nMathML JATS Changes from 4.4.1\nsupport for pending publication support for distributed usage logging (DUL) support for JATS 1.2 abstracts add abstract support to dissertations, reports, and allow multiple abstracts wherever available add support for multiple dissertation authors add acceptance_date element to journal article, books, book chapter, conference paper ", "content": "Beginning with deposit schema version 4.4.2, all Crossref schema releases are available in our GitLab schema repository as a bundle. 
Bundle 0.1.0 contains schema version 4.4.2 and associated files.\nSchema: crossref4.4.2.xsd Full documentation: 4.4.2\nCrossref included schema:\ncommon4.4.2.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd External imported schema:\nMathML JATS Changes from 4.4.1\nsupport for pending publication support for distributed usage logging (DUL) support for JATS 1.2 abstracts add abstract support to dissertations, reports, and allow multiple abstracts wherever available add support for multiple dissertation authors add acceptance_date element to journal article, books, book chapter, conference paper ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/metadata-deposit-schema-4-4-1/", "title": "Metadata deposit schema 4.4.1", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Schema: crossref4.4.1.xsd Full documentation: 4.4.1\nCrossref included schema:\ncommon4.4.1.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd External imported schema:\nMathML JATS Changes from 4.4.0\nadds support for peer reviews ", "content": "Schema: crossref4.4.1.xsd Full documentation: 4.4.1\nCrossref included schema:\ncommon4.4.1.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd External imported schema:\nMathML JATS Changes from 4.4.0\nadds support for peer reviews ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/metadata-deposit-schema-4-4-0/", "title": "Metadata deposit schema 4.4.0", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Schema: crossref4.4.0.xsd Full documentation: 4.4.0\nCrossref included schema:\ncommon4.4.0.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd External imported schema:\nMathML JATS Changes from 4.3.7\nadds support for posted content ", "content": "Schema: crossref4.4.0.xsd Full documentation: 4.4.0\nCrossref included schema:\ncommon4.4.0.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd External imported schema:\nMathML JATS Changes from 4.3.7\nadds support for posted content ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/metadata-deposit-schema-4-3-7/", "title": "Metadata deposit schema 4.3.7", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Schema: crossref4.3.7.xsd\nFull documentation: 4.3.7\nOur schema:\ncommon4.3.7.xsd fundref.xsd accessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd External imported schema:\nMathML JATS Changes from 4.3.6\nadds support for linked clinical trials adds support for journal deposits without ISSNs ", "content": "Schema: crossref4.3.7.xsd\nFull documentation: 4.3.7\nOur schema:\ncommon4.3.7.xsd fundref.xsd accessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd External imported schema:\nMathML JATS Changes from 4.3.6\nadds support for linked clinical trials adds support for journal deposits without ISSNs ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/resource-only-deposit-schema-4-3-6/", "title": "Resource-only deposit schema 4.3.6", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 
1586304000, "section": "Documentation", "tags": [], "description": "Schema: doi_resources4.3.6.xsd Full documentation: doi_resource4.3.6\nCrossref included schema:\ncommon4.3.7.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd ", "content": "Schema: doi_resources4.3.6.xsd Full documentation: doi_resource4.3.6\nCrossref included schema:\ncommon4.3.7.xsd fundref.xsd AccessIndicators.xsd clinicaltrials.xsd relations.xsd common4.3.5.xsd ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/books-and-chapters/", "title": "Introduction to books and chapters", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "Why register books and chapters Registration of books and chapters maximizes reference linking between books and other record types including journals and conference proceedings, and ensures that we collect and distribute persistent identifiers and authoritative metadata for online books. There are seven benefits for our members to register book- and chapter-level metadata:\nincreased discoverability increased usage matching author expectations author exposure usage and citations reporting supporting authors with funding compliance and reporting understanding the hot topics within your books You can read about each of these in more detail on our blog.", "content": "Why register books and chapters Registration of books and chapters maximizes reference linking between books and other record types including journals and conference proceedings, and ensures that we collect and distribute persistent identifiers and authoritative metadata for online books. There are seven benefits for our members to register book- and chapter-level metadata:\nincreased discoverability increased usage matching author expectations author exposure usage and citations reporting supporting authors with funding compliance and reporting understanding the hot topics within your books You can read about each of these in more detail on our blog.\nObligations and limitations Follow the books metadata best practices. You can register books and chapters using our web deposit form or via direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. There are volume discounts available for registration of book chapters and reference entries for a single title. Read about the fees.\nHistory Books have been supported since 2003. The Books Interest Group provides guidance and advice for developing the books metadata we collect.\nKey links Books and chapters content metadata best practice How to register content via direct deposit of XML or the web deposit form. Books and chapters markup guide ", "headings": ["Why register books and chapters","Obligations and limitations ","Fees ","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/conference-proceedings/", "title": "Introduction to conference proceedings", "subtitle":"", "rank": 4, "lastmod": "2022-05-06", "lastmod_ts": 1651795200, "section": "Documentation", "tags": [], "description": "Why register conference proceedings Conference proceedings are often one of the first ways that researchers communicate new, innovative, and emergent research to their peers and the scholarly community. 
While this record type can be a precursor to a more formal peer-reviewed journal article, conference proceedings are critical to communicating new concepts that further enrich the research nexus.\nObligations and limitations Follow the conference proceedings metadata best practices. The conference proceedings record type captures metadata about a single conference, such as date, acronym, and location.", "content": "Why register conference proceedings Conference proceedings are often one of the first ways that researchers communicate new, innovative, and emergent research to their peers and the scholarly community. While this record type can be a precursor to a more formal peer-reviewed journal article, conference proceedings are critical to communicating new concepts that further enrich the research nexus.\nObligations and limitations Follow the conference proceedings metadata best practices. The conference proceedings record type captures metadata about a single conference, such as date, acronym, and location. DOIs should be assigned to all papers associated with the conference, and a DOI may be assigned to the conference itself. Ongoing conferences published with an ISSN may be deposited as a series. You can register conference proceedings using our web deposit form or via direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. Read about the fees.\nHistory We\u0026rsquo;ve been supporting conference proceedings since 2003.\nKey links Conference proceedings metadata best practice How to register content via direct deposit of XML or the web deposit form. Conference proceeding markup guide ", "headings": ["Why register conference proceedings","Obligations and limitations ","Fees ","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/datasets/", "title": "Introduction to datasets", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "Why register datasets Datasets are important research outputs themselves and increasingly seen as first-class objects. Registration helps ensure reproducibility and further solidifies the research nexus.\nObligations and limitations Follow the datasets metadata best practices. You can register datasets via direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. Read about the fees for registering datasets.\nHistory Datasets have been supported since 2006.", "content": "Why register datasets Datasets are important research outputs themselves and increasingly seen as first-class objects. Registration helps ensure reproducibility and further solidifies the research nexus.\nObligations and limitations Follow the datasets metadata best practices. You can register datasets via direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. 
Read about the fees for registering datasets.\nHistory Datasets have been supported since 2006.\nKey links Datasets content metadata best practice How to register content via direct deposit of XML Dataset markup guide ", "headings": ["Why register datasets","Obligations and limitations ","Fees ","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/dissertations/", "title": "Introduction to dissertations", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "Why register dissertations Like conference proceedings, dissertations are excellent sources of emergent research findings and innovations in the scholarly community. Dissertations are often robust works that serve to expand the research nexus.\nObligations and limitations Follow the dissertations metadata best practices. You can register dissertations using our web deposit form or via direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. Read about the fees.", "content": "Why register dissertations Like conference proceedings, dissertations are excellent sources of emergent research findings and innovations in the scholarly community. Dissertations are often robust works that serve to expand the research nexus.\nObligations and limitations Follow the dissertations metadata best practices. You can register dissertations using our web deposit form or via direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. Read about the fees.\nHistory Dissertations have been supported since 2005.\nKey links Dissertations content metadata best practice How to register content using direct deposit of XML Dissertation markup guide ", "headings": ["Why register dissertations","Obligations and limitations ","Fees ","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/grants/", "title": "Introduction to grants", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Funders are joining Crossref to register their grants so that they can more easily and accurately track the outputs connected to the research they support.\nOnce you\u0026rsquo;re a member, registering grants with us means giving us information about each awarded grant, including a DOI which uniquely and persistently identifies each record. You can use the grant registration form or direct XML deposit methods to deposit and update grant metadata. This section focuses on grants, but research funders can also register other record types such as reports, data, and working papers.", "content": "Funders are joining Crossref to register their grants so that they can more easily and accurately track the outputs connected to the research they support.\nOnce you\u0026rsquo;re a member, registering grants with us means giving us information about each awarded grant, including a DOI which uniquely and persistently identifies each record. You can use the grant registration form or direct XML deposit methods to deposit and update grant metadata. This section focuses on grants, but research funders can also register other record types such as reports, data, and working papers.\nSomething to consider before you begin Decide which grants to register first, as you get into the swing of things. 
For example, pilot a particular country, or area of support. It’s better to start with newly-awarded grants, and then move on to older or long-running awards - these are cheaper to register, and are more likely to have produced research papers, so they’re great for demonstrating the full potential of connected research metadata.\nConstructing your identifiers (DOIs) A DOI is made up of a DOI resolver, a prefix, and a suffix. When you join Crossref as a member, we give you a DOI prefix. You combine this with a suffix of your choice to create a DOI. Although some funders choose to use their internal grant identifier as the DOI suffix, we advise you to make your suffix opaque, meaning that it does not encode or describe any information about the work. Your DOI becomes active once it is successfully registered with us. Read more about constructing your DOIs.\nGrant landing pages Your grant metadata records should link to a landing page where you can find information about the grant. Examples: https://0-doi-org.libus.csd.mu.edu/10.37717/220020589, https://0-doi-org.libus.csd.mu.edu/10.35802/107769. Read more about landing pages.\nShould a grant move to a new landing page, the URL in the grant’s metadata is updated to point to the new location. There’s no charge to update metadata for existing deposits.\nRegistering grant metadata Grants can be registered for all sorts of support provided to a research group or individual, such as awards, use of facilities, sponsorship, training, or salary awards. Here’s the section of our schema for grant metadata. If you’re working with a third-party system, such as Proposal Central or EuroPMC, they may be able to help with this piece of work.\nRegistering grant metadata using the grant registration form You can use the grant registration form to register grants, with no prior knowledge of XML. You fill out the form and the XML is created for you in the background. You enter your account credentials and the metadata is submitted directly.\nFormatting grant metadata for direct deposit If you\u0026rsquo;d prefer to work directly with XML, you may be able to map your own data and identifiers to our schema. See our example deposit file - this is a full example, and many of the fields it contains are optional, but we encourage you to provide as much information as you can. Rich metadata helps maximize reuse of the grant records you register with Crossref. This .xsd file helps explain what goes into each field, and the parameters (length, format) of what is accepted in each field. Here’s a less techy version.\nWhen you’ve created your XML files, use our checker to test them - this will show any potential errors with your files. For help with resolving problems, send your XML file and the error message to Support.\nUploading your files to Crossref Once you’re happy with your files, upload them to us using the admin tool, or submit them through HTTPS POST.\nVerify your registration, check the submission queue and log, and troubleshoot any errors.\nOnce your submission is successful, your grant DOIs are ‘live’ and ready to be used. It’s good practice to add the grant DOI to the landing page for the grant, as in this example for https://0-doi-org.libus.csd.mu.edu/10.37717/220020589:\nSpread the word about your grant identifiers Let your grant submission systems, awardees, and other parties know you are supporting Crossref grant identifiers, and that they should start collecting these identifiers too. 
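To make two of the hands-on steps above concrete (choosing an opaque DOI suffix, then submitting a finished deposit file by HTTPS POST), here is a small Python sketch. It is an illustration under stated assumptions, not an official client: the prefix, file name, and credentials are placeholders, and the endpoint and form-field names are assumptions to verify against the deposit documentation before use.

import secrets
import requests  # third-party HTTP library, assumed to be installed

PREFIX = "10.5555"  # placeholder; use the prefix Crossref assigned to you

def opaque_suffix(n_bytes=4):
    # A random, meaning-free suffix, as recommended above.
    return secrets.token_hex(n_bytes)

doi = f"{PREFIX}/{opaque_suffix()}"
print("candidate grant DOI:", doi)

# Hypothetical HTTPS POST of a prepared grant deposit file. The endpoint and
# parameter names below are assumptions, not confirmed here; substitute the
# values given in the deposit documentation and your own account credentials.
with open("grant-deposit.xml", "rb") as fh:  # hypothetical file name
    response = requests.post(
        "https://doi.crossref.org/servlet/deposit",  # assumed deposit endpoint
        data={"operation": "doMDUpload", "login_id": "ROLE/USER", "login_passwd": "PASSWORD"},
        files={"fname": fh},
    )
print(response.status_code)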
Crossref grant metadata (including grant DOIs) is made openly available through our APIs, so it can be used by third parties (including publishers, grant tracking systems) to link grants to related research outputs.\n", "headings": ["Something to consider before you begin ","Constructing your identifiers (DOIs) ","Grant landing pages ","Registering grant metadata ","Registering grant metadata using the grant registration form ","Formatting grant metadata for direct deposit ","Uploading your files to Crossref ","Spread the word about your grant identifiers "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/journals-and-articles/", "title": "Introduction to journals and articles", "subtitle":"", "rank": 4, "lastmod": "2022-04-01", "lastmod_ts": 1648771200, "section": "Documentation", "tags": [], "description": "Why register journals and articles Journals and articles remain the starting point for researchers exploring previous research, and provide a great way to discover links and relationships with other parts of the research nexus. Registering your journals and articles with us ensures that metadata about them are shared across the scholarly ecosystem with library discovery systems, scholarly sharing networks, specialist databases, metrics and analytics tools and much more.\nObligations and limitations Follow the journals metadata best practices.", "content": "Why register journals and articles Journals and articles remain the starting point for researchers exploring previous research, and provide a great way to discover links and relationships with other parts of the research nexus. Registering your journals and articles with us ensures that metadata about them are shared across the scholarly ecosystem with library discovery systems, scholarly sharing networks, specialist databases, metrics and analytics tools and much more.\nObligations and limitations Follow the journals metadata best practices. You can register journals and articles using all our content registration tools. Fees In addition to annual membership dues, most records attract a one-time registration fee. Journal title records are free of charge, but there are fees for each journal article registered. These fees are different depending on whether the article registered is current or backfile. Read about the fees.\nHistory Crossref has supported journal and journal article registration since we started registering content in 2000.\nKey links Journals metadata best practices How to register content - using Crossref XML plugin for OJS, our web deposit form or direct deposit of XML. Journals and articles markup guide ", "headings": ["Why register journals and articles","Obligations and limitations","Fees","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/peer-reviews/", "title": "Introduction to peer reviews", "subtitle":"", "rank": 4, "lastmod": "2022-02-04", "lastmod_ts": 1643932800, "section": "Documentation", "tags": [], "description": "Why register peer reviews More of our members are keen to expose evidence of the integrity of the editorial process, such as peer review. Registering peer reviews means:\nThe metadata can provide relevant information about the reviews such as whether they were part of peer review or post-publication. There is evidence of the contribution from reviewers. These reviews are connected to the original work and other related objects. 
Links to these documents persist over time for future generations.", "content": "Why register peer reviews More of our members are keen to expose evidence of the integrity of the editorial process, such as peer review. Registering peer reviews means:\nThe metadata can provide relevant information about the reviews such as whether they were part of peer review or post-publication. There is evidence of the contribution from reviewers. These reviews are connected to the original work and other related objects. Links to these documents persist over time for future generations. This metadata may also support enrichment of scholarly discussion, reviewer accountability, publishing transparency, and analysis or research on peer reviews.\nObligations and limitations Members need to follow the peer review metadata best practices. All peer reviews must include relationship metadata linking the review with the item being reviewed. The item being reviewed must have a Crossref DOI. You can\u0026rsquo;t add components to peer review records. Crossmark is not currently supported for peer review records. You can only register peer reviews by direct deposit of XML, our helper tools do not currently support this record type. Fees In addition to annual membership dues, all records attract a one-time registration fee. For peer reviews, the fees are different if they are reviews of your own records, or for another member\u0026rsquo;s records. Read about the fees.\nHistory Our members asked for the flexibility to register content for the reviews and discussions of scholarly content which they publish, so we\u0026rsquo;ve extended our infrastructure to support members who post them. We support a whole host of outputs made publicly available from the peer review history, as they vary greatly based on journal. This may include referee reports, decision letter, and author response. The overall set may include outputs from the initial submission only or those from all subsequent rounds of revisions. We also allow members to register content made up of discussions surrounding a journal article after it was published (e.g. post-publication reviews).\nThe following organizations consulted with us on the design and/or development of the peer review service:\nPublons PeerJ F1000 Research eLife BioMedCentral BMJ Copernicus EMBO Nature Communications Key links Peer review metadata best practices Direct deposit of XML Peer Review markup guide For full instructions and XML examples please visit our support article where you can also raise a ticket for any questions.\nIf you have questions please consult other users on our forum at community.crossref.org or open a ticket with our technical support team where we’ll reply within a few days.\n", "headings": ["Why register peer reviews","Obligations and limitations ","Fees ","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/pending-publications/", "title": "Introduction to pending publications", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "Pending publication is a way of creating a DOI and depositing metadata for a content item any time after a manuscript has been accepted but before it is published online. This is possible for all standard record types (such as articles, books, conference proceedings).\nBecause a pending publication has not yet been published, its DOI will resolve to a publicly-available Crossref-hosted landing page. 
Once the work is published online, this same DOI will resolve to the URL for that content.", "content": "Pending publication is a way of creating a DOI and depositing metadata for a content item any time after a manuscript has been accepted but before it is published online. This is possible for all standard record types (such as articles, books, conference proceedings).\nBecause a pending publication has not yet been published, its DOI will resolve to a publicly-available Crossref-hosted landing page. Once the work is published online, this same DOI will resolve to the URL for that content.\nThe pending publication record type serves as a temporary placeholder for your content - like a \u0026ldquo;coming soon\u0026rdquo; or preview of the great work to come. For a pending publication, you register basic metadata for your content item before registering all the formal metadata that comes with a version of record. Take care not to share a DOI before it has been deposited with us, or it will not resolve for your readers, and will lead to a failed resolution in your resolution report. Learn more about the pending publication consultation.\nUse cases for pending publication Before the pending publication record type existed, we recommended registering DOIs at the time content was published online, or shortly after. As the communication needs of our members (researchers, funders, institutions, and publishers) evolve, we have created this new solution to aid you and your work, and allow you to register DOIs before content is published online. With pending publication:\nMembers can: address timing issues related to press embargos publicly establish scholarly precedence for their articles meet the conditions in full for new funder policies and mandates, which focus on acceptance as a key event to report on ensure that institutional repositories use the DOI to link to the member-stewarded copy Researchers can provide formal evidence of all publications in employment and grant applications Funders can fully track all publications funded by their research grants Institutions can fully track the scholarly output of their faculty members Technology vendors that support scholarly research management can account for all outputs How does pending publication work? When registering your publication as pending there are two things you need to do:\nRegister a subset of the metadata (as a minimum: member name, journal title, and accepted date) under the Pending Publication record type. After you do this, the DOI will resolve to a Crossref-hosted landing page displaying your logo, a banner showing the manuscript has been accepted for publication, and the metadata you’ve provided. As with all registered content, pending publication metadata will be publicly available in our APIs (and updated as you update your metadata records). Once your work is published, you need to register the full metadata for the work - this is not an automatic process. You must update the metadata for each pending publication DOI, so that each DOI will resolve directly to the content (and not the pending publication landing page). Pending publication workflow diagram Crossmark participants please note that you can deposit Crossmark metadata at any point, but during the Beta version of the pending publication rollout, the Crossmark badge will not be displayed to readers.\nFees for pending publications Content Registration (metadata deposit) fees still apply, but there are no additional fees for using pending publication. 
So, you’ll be charged once when you register the pending publication, but any subsequent updates, including the update on publication, are not charged.\nHistory Pending publication has been supported since 2019 and was designed in response to community feedback:\nProposal PDF Original proposal blog post Community responses to proposal blog post Key links How to register content using direct deposit of XML Pending publication markup guide ", "headings": ["Use cases for pending publication ","How does pending publication work? ","Pending publication workflow diagram ","Fees for pending publications ","History ","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/posted-content-includes-preprints/", "title": "Introduction to posted content (includes preprints)", "subtitle":"", "rank": 4, "lastmod": "2024-04-01", "lastmod_ts": 1711929600, "section": "Documentation", "tags": [], "description": "Why register posted content Posted content includes preprints, eprints, working papers, reports, dissertations, and many other types of content that has been posted but not formally published. Note that accepted manuscripts are not considered posted content.\nTo qualify as posted content, an item must be posted to a host platform where it will receive some level of stewardship. We’re all about persistence, so it’s vital that everything registered with us be maintained.", "content": "Why register posted content Posted content includes preprints, eprints, working papers, reports, dissertations, and many other types of content that has been posted but not formally published. Note that accepted manuscripts are not considered posted content.\nTo qualify as posted content, an item must be posted to a host platform where it will receive some level of stewardship. We’re all about persistence, so it’s vital that everything registered with us be maintained. A preprint should remain online once it has been posted, including once it has appeared in a journal or if an updated version becomes available. If different versions become available, the preprint owner should update the preprint metadata using relations tags. In exceptional cases where a preprint is removed, such as in the case of plagiarism or other misconduct, we recommend that the DOI resolves to a page containing at least the title, authors, and a short explanation of the removal. Preprint owners should refer to good practice for journal article retraction in this case. Note that we cannot remove preprint metadata from our records.\nPublishing preprints is about more than simply getting a DOI Crossref can help you to clearly label content as a preprint using a preprint-specific schema. It’s not advisable to register preprints as data, components, articles, or anything else, because a preprint is not any of those things. Our service allows you to ensure the relationships between preprints and any eventual article are asserted in the metadata, and accurately readable by both humans and machines.\nBenefits of our custom support for preprints Persistent identifiers for preprints to ensure successful links to the scholarly record over the course of time The preprint-specific metadata we ask for reflects researcher workflows from preprint to formal publication Support for preprint versioning by providing relationships between metadata for different iterations of the same document. 
Notification of links between preprints and formal publications that may follow (such as journal articles, monographs) Reference linking for preprints, connecting up the scholarly record to associated literature Auto-update of ORCID records to ensure that preprint contributors are acknowledged for their work Preprints include funding data so people can report research contributions based on funder and grant identification Discoverability: we make the metadata available for machine and human access, across multiple interfaces (including our REST API, OAI-PMH, and Metadata Search). Obligations and limitations Follow the posted content metadata best practices. Designate a specific contact with us who will receive match notifications when an accepted manuscript (AM) or version of record (VOR) of the posted content has been registered. Link your posted content record to that other record within seven days of receiving an alert. Clearly label the manuscript as a preprint above the fold on its landing page, and ensure that any link to the AM or VOR is also prominently displayed above the fold. Ensure that each version is assigned a new DOI, and associate the versions via a relationship with type isVersionOf - learn more about structural metadata and declaring relationship types Crossmark is not currently supported for posted content. You can\u0026rsquo;t add components to posted content. How to register content - using the Crossref Deposit Plugin for Open Preprint Systems (OPS) or direct deposit of XML. Fees In addition to annual membership dues, all records attract a one-time registration fee. There are volume discounts available for posted content. Read about the fees.\nKey links Posted content metadata best practices How to register content via the Crossref Deposit Plugin for Open Preprint Systems (OPS) or direct deposit of XML Posted content markup guide Associating preprints with later published outputs Once a research object has been published from the posted content and a DOI has been assigned to it, the preprint publisher will update their metadata to associate the posted content with the DOI of the accepted manuscript (AM) or version of record (VOR).\nWe will notify the member who deposited metadata for the posted content when we find a match between the title and first author of two publications, so that the potential relationship can be reviewed. The posted content publisher must then update the preprint metadata record by declaring the AM/VOR relationship. The notification is delivered by email to the technical contact on file. Please contact us if you need the email notifications to be sent to a different address.\nDisplaying and labeling on posted content publications You must make it clear that posted content is unpublished and you must ensure that any link to the AM/VOR is prominently displayed, specifically:\nThe landing page (or equivalent) of the preprint must be labeled as not formally published (for example, preprint, unpublished manuscript). This label must appear above the fold (the part of a web page that is visible in a browser window when the page first loads). The landing page (or equivalent) of the preprint must link to the AM/VOR when it is made available. The link must be appropriately labeled (for example, Now published in [Journal Name], Version of record available in [Journal Name]) and appear above the fold. History We’ve been supporting registration of preprints and other posted content since 2016. 
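As a practical companion to the preprint-to-published-version linking described above, here is a minimal Python sketch that looks up a registered preprint in the public REST API and prints any relationships it already carries, so you can confirm whether the AM/VOR link is in place. The DOI is a placeholder, and the "relation" field name is an assumption about the REST API response shape; inspect the raw JSON if your records expose relationships differently.

import json
import urllib.request

DOI = "10.5555/example-preprint"  # placeholder; substitute a registered preprint DOI

with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
    message = json.load(resp)["message"]

# "relation" is an assumed field name; each entry maps a relationship type
# (such as isVersionOf) to the identifiers of the related records.
relations = message.get("relation", {})
if not relations:
    print("no relationships registered yet for", DOI)
for rel_type, targets in relations.items():
    for target in targets:
        print(rel_type, "->", target.get("id"))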
The Preprints Advisory Group provides ongoing guidance and advice for developing the preprints metadata we collect.\nOur preprints record type was originally developed with advisors from bioRxiv and arXiv, PLOS, Elsevier, AIP, IOP, and ACM.\nYou can register posted content (includes preprints) by direct deposit of XML - learn more about markup examples for posted content (includes preprints).\n", "headings": ["Why register posted content","Publishing preprints is about more than simply getting a DOI ","Benefits of our custom support for preprints ","Obligations and limitations ","Fees ","Key links","Associating preprints with later published outputs ","Displaying and labeling on posted content publications ","History"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/reports-and-working-papers/", "title": "Introduction to reports and working papers", "subtitle":"", "rank": 5, "lastmod": "2022-04-11", "lastmod_ts": 1649635200, "section": "Documentation", "tags": [], "description": "Why register reports and working papers Researchers communicate with each other through increasingly diverse channels, and our community sees more and more citations and links to reports and working papers such as white papers as an important part of the scholarly record. When these records are registered with Crossref, research can be traced from origin to practical implementation.\nObligations and limitations Follow the reports and working papers metadata best practices. Technical reports and working papers are typically assigned a single identifier, but identifiers may also be assigned to sub-sections of the report (such as chapters) as needed using the content_item element.", "content": "Why register reports and working papers Researchers communicate with each other through increasingly diverse channels, and our community sees more and more citations and links to reports and working papers such as white papers as an important part of the scholarly record. When these records are registered with Crossref, research can be traced from origin to practical implementation.\nObligations and limitations Follow the reports and working papers metadata best practices. Technical reports and working papers are typically assigned a single identifier, but identifiers may also be assigned to sub-sections of the report (such as chapters) as needed using the content_item element. Fees In addition to annual membership dues, all records attract a one-time registration fee. Fees for reports and working papers are different depending on whether the content registered is current or backfile. Read about the fees.\nHistory We\u0026rsquo;ve been supporting reports and working papers since 2005.\nKey links Reports and working papers metadata best practices How to register content using direct deposit of XML Reports and working paper markup guide ", "headings": ["Why register reports and working papers","Obligations and limitations","Fees","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/standards/", "title": "Introduction to standards", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "Why register standards Standards are often cited in research and registering these with Crossref means that these connections remain part of the scholarly record.\nObligations and limitations Follow the standards metadata best practices. 
You can only register standards by direct deposit of XML using schema version 4.3.6 and above. Our helper tools do not currently support this record type. Fees In addition to annual membership dues, all records attract a one-time registration fee.", "content": "Why register standards Standards are often cited in research and registering these with Crossref means that these connections remain part of the scholarly record.\nObligations and limitations Follow the standards metadata best practices. You can only register standards by direct deposit of XML using schema version 4.3.6 and above. Our helper tools do not currently support this record type. Fees In addition to annual membership dues, all records attract a one-time registration fee. Fees for standards are different depending on whether the content registered is current or backfile. Read about the fees.\nHistory Crossref began accepting metadata deposits for standards in 2005. Our input schema was modified significantly for standards with help from the Standards Technical Working Group. Significant changes to the deposit and indexing of designators were made with schema version 4.3.6; as a result, standards may only be deposited using schema versions 4.3.6 and above.\nKey links Standards metadata best practices How to register content using direct deposit of XML Standards markup guide ", "headings": ["Why register standards","Obligations and limitations","Fees","History","Key links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/research-nexus/", "title": "The research nexus", "subtitle":"", "rank": 4, "lastmod": "2021-12-11", "lastmod_ts": 1639180800, "section": "Documentation", "tags": [], "description": "The \u0026lsquo;research nexus\u0026rsquo; is the vision to which we aspire:\nA rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nThe research nexus goes beyond the basic idea of just having persistent identifiers for content. Objects and entities such as journal articles, book chapters, grants, preprints, data, software, statements, dissertations, protocols, affiliations, contributors, etc.", "content": "The \u0026lsquo;research nexus\u0026rsquo; is the vision to which we aspire:\nA rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nThe research nexus goes beyond the basic idea of just having persistent identifiers for content. Objects and entities such as journal articles, book chapters, grants, preprints, data, software, statements, dissertations, protocols, affiliations, contributors, etc. should all be identified and that is still an important part of the picture. 
But what is most important is how they relate to each other and the context in which they make up the whole research ecosystem.\nThe foundation of the research nexus is metadata; the richer and more comprehensive the metadata in Crossref records, the more value there is for our members and for others, including for future generations.\nCrossref Research Nexus Vision\nMetadata and relationships between research objects and entities can support the whole scholarly research ecosystem in many ways, including:\nResearch integrity: helping to provide signals about the trustworthiness of the work including provenance information such as who funded it (when and for how much), which organizations and people contributed what, whether something was updated or corrected, and whether it was checked for originality. All of these signals can be expressed through Crossref metadata.\nReproducibility: helping others to reproduce outcomes by adding relationships between literature, data, software, protocols and methods, and more. All of these relationships can be asserted through members\u0026rsquo; ongoing stewardship of their Crossref metadata records.\nReporting and assessment: helping organizations such as universities, funders, governments, to track and demonstrate the outcomes of investment; provide benchmarking information; show compliance with funder mandates; and decide what other research to fund. This kind of information can be included in Crossref metadata.\nDiscoverability: helping people and systems identify work through multiple angles. Registering content with Crossref makes it possible for work to be found and used. Thousands of systems use Crossref metadata, therefore the richer the records are, the more visibility there is likely to be of your work. Including metadata like abstracts and references are very simple ways to increase the visibility of your records.\nThe importance of relationships A big part of the research nexus is establishing connections between and among different research objects which establishes provenance over time. Adding relationships to your metadata records can convey much richer and more nuanced connections beyond traditional references.\nThese relationships may consist of versions, corrections, translations, data, formats, supplements, and components. There are no extra fees for including relationships in your metadata.\nCurrently relationships can only be established by direct deposit of XML. Read our relationships metadata best practice page and our relationships markup guide.\nWhat types of resources and records can be registered with Crossref? We are working to make our input schema more flexible so that almost any type of object can be registered and distributed openly through Crossref. At the moment, members tend to register the following:\nBooks, chapters, and reference works: includes book title and/or chapter-level records. Books can be registered as a monograph, series, or set. Conference proceedings: information about a single conference and records for each conference paper/proceeding. Datasets: includes database records or collections. Dissertations: includes single dissertations and theses, but not collections. Grants: includes both direct funding and other types of support such as the use of equipment and facilities. Journals and articles: at the journal title and article level, and includes supplemental materials as components. 
Peer reviews: any number of reviews, reports, or comments attached to any other work that has been registered with Crossref. Pending publications: a temporary placeholder record with minimal metadata, often used for embargoed work where a DOI needs to be shared before the full content is made available online. Preprints and posted content: includes preprints, eprints, working papers, reports, and other types of content that has been posted but not formally published. Reports and working papers: this includes content that is published and likely has an ISSN. Standards: includes publications from standards organizations. You can also establish relationships between different research objects (such as preprints, translations, and datasets) in your metadata. Learn more about all the metadata that can be included in these records with our schema library and markup guides.\n", "headings": ["The importance of relationships","What types of resources and records can be registered with Crossref?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/abstracts/", "title": "Abstracts", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples of abstracts for members registering content by direct deposit of XML. Our web deposit form support abstracts.\nAbstracts imported from JATS-formatted XML may be included in records deposited with us. A namespace prefix (jats:) must be used for the abstract and all child elements, and the namespace must be included in the schema declaration. MathML may be included in abstracts but must use a MathML-specific namespace prefix.", "content": "This guide gives markup examples of abstracts for members registering content by direct deposit of XML. Our web deposit form support abstracts.\nAbstracts imported from JATS-formatted XML may be included in records deposited with us. A namespace prefix (jats:) must be used for the abstract and all child elements, and the namespace must be included in the schema declaration. MathML may be included in abstracts but must use a MathML-specific namespace prefix. Multiple abstracts may be included.\nAbstracts may be registered for journal articles, books and book chapters, conference papers, posted content, dissertations, reports, and standards.\nAbstracts schema declaration \u0026lt;doi_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-data-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:mml=\u0026#34;http://www.w3.org/1998/Math/MathML\u0026#34;\u0026gt; Example of a JATS-formatted abstract \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;jats:abstract\u0026gt;\u0026lt;jats:p\u0026gt;Acute and chronic lung inflammation is associated with numerous important disease pathologies including asthma, chronic obstructive pulmonary disease and silicosis. Lung fibroblasts are a novel and important target of anti-inflammatory therapy, as they orchestrate, respond to, and amplify inflammatory cascades and are the key cell in the pathogenesis of lung fibrosis. 
Peroxisome proliferator-activated receptor gamma (PPAR**\u0026lt;mml:math\u0026gt;\u0026lt;mml:mi\u0026gt;**γ**\u0026lt;/mml:mi\u0026gt;\u0026lt;/mml:math\u0026gt;**) ligands are small molecules that induce anti-inflammatory responses in a variety of tissues. Here, we report for the first time that PPAR**\u0026lt;mml:math\u0026gt;\u0026lt;mml:mi\u0026gt;**γ**\u0026lt;/mml:mi\u0026gt;\u0026lt;/mml:math\u0026gt;** ligands have potent anti-inflammatory effects on human lung fibroblasts. 2-cyano-3, 12-dioxoolean-1, 9-dien-28-oic acid (CDDO) and 15-deoxy-**\u0026lt;mml:math\u0026gt;\u0026lt;mml:msup\u0026gt;\u0026lt;mml:mi\u0026gt;**Δ**\u0026lt;/mml:mi\u0026gt;\u0026lt;mml:mrow\u0026gt;\u0026lt;mml:mn\u0026gt;**12**\u0026lt;/mml:mn\u0026gt;\u0026lt;mml:mo\u0026gt;**,**\u0026lt;/mml:mo\u0026gt;\u0026lt;mml:mn\u0026gt;**14**\u0026lt;/mml:mn\u0026gt;\u0026lt;/mml:mrow\u0026gt;\u0026lt;/mml:msup\u0026gt;\u0026lt;/mml:math\u0026gt;**-prostaglandin J\u0026lt;jats:sub\u0026gt;2\u0026lt;/jats:sub\u0026gt; (15d-PGJ\u0026lt;jats:sub\u0026gt;2\u0026lt;/jats:sub\u0026gt;) inhibit production of the inflammatory mediators interleukin-6 (IL-6), monocyte chemoattractant protein-1 (MCP-1), COX-2, and prostaglandin (PG)E\u0026lt;jats:sub\u0026gt;2\u0026lt;/jats:sub\u0026gt; in primary human lung fibroblasts stimulated with either IL-1**\u0026lt;mml:math\u0026gt;\u0026lt;mml:mi\u0026gt;**β**\u0026lt;/mml:mi\u0026gt;\u0026lt;/mml:math\u0026gt;** or silica. The anti-inflammatory properties of these molecules are not blocked by the PPAR**\u0026lt;mml:math\u0026gt;\u0026lt;mml:mi\u0026gt;**γ**\u0026lt;/mml:mi\u0026gt;\u0026lt;/mml:math\u0026gt;** antagonist GW9662 and thus are largely PPAR**\u0026lt;mml:math\u0026gt;\u0026lt;mml:mi\u0026gt;**γ**\u0026lt;/mml:mi\u0026gt;\u0026lt;/mml:math\u0026gt;** independent. However, they are dependent on the presence of an electrophilic carbon. CDDO and 15d-PGJ\u0026lt;jats:sub\u0026gt;2\u0026lt;/jats:sub\u0026gt;, but not rosiglitazone, inhibited NF-**\u0026lt;mml:math\u0026gt;\u0026lt;mml:mi\u0026gt;**κ**\u0026lt;/mml:mi\u0026gt;\u0026lt;/mml:math\u0026gt;**B activity. These results demonstrate that CDDO and 15d-PGJ\u0026lt;jats:sub\u0026gt;2\u0026lt;/jats:sub\u0026gt; are potent attenuators of proinflammatory responses in lung fibroblasts and suggest that these molecules should be explored as the basis for novel, targeted anti-inflammatory therapies in the lung and other organs.\u0026lt;/jats:p\u0026gt;\u0026lt;/jats:abstract\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2000\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; ", "headings": ["Abstracts schema declaration ","Example of a JATS-formatted abstract "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/books-and-chapters/", "title": "Books and chapters markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering books and chapters by direct deposit of XML. 
You can also register the books and chapters record type using one of our helper tools: web deposit form.\nOn this page, learn more about:\nBook structures Example of a book deposit containing a single book with chapters Example of a book series deposit Example of a book set deposit Book structures \u0026lt;book\u0026gt; is the container for all information about a single book volume and (optionally) the book chapters.", "content": "This guide gives markup examples for members registering books and chapters by direct deposit of XML. You can also register the books and chapters record type using one of our helper tools: web deposit form.\nOn this page, learn more about:\nBook structures Example of a book deposit containing a single book with chapters Example of a book series deposit Example of a book set deposit Book structures \u0026lt;book\u0026gt; is the container for all information about a single book volume and (optionally) the book chapters. Books may be deposited as a single book, a series, or a set, and the metadata requirements differ slightly for each type.\nBook: a book is a single book (monograph) that is not part of a series or a set. The title-level metadata for the book is captured in \u0026lt;book_metadata\u0026gt;. Book series: books that are part of an ongoing series and have an ISSN assigned should be deposited as a book series. Book metadata is captured in \u0026lt;book_series_metadata\u0026gt;, with series-specific details such as ISSN and series title captured in \u0026lt;series_metadata\u0026gt;. A series-level and volume-level title must be supplied for each book submitted as part of a series. A series-level ISBN and/or DOI may optionally be assigned. Examples of books in series: Series: a sequence of books with certain characteristics in common that are formally identified together as a group. They may be released in successive parts once a year, or less often. For example, Loeb Classical Library or Oxford World’s Classics Monographs in series: if volumes can stand alone as separate books (if not, they are a book set). For example, Advances in Experimental Medicine and Biology Book set: book volumes that cannot stand alone as separate books must be deposited as a book set. A book set has a set-level title but does not require an ISSN. An ISBN and/or DOI may optionally be assigned at the set level. Example of a book set: Le Deuxième Sexe by Simone de Beauvoir, in two volumes: Les faits et les mythes and L\u0026rsquo;expérience vécue. Book metadata Book titles: Books with subtitles should use the \u0026lt;title\u0026gt; and \u0026lt;subtitle\u0026gt; tags to capture the appropriate segment of the title. Book title DOIs: A DOI is required for each book that you submit. It is not possible to submit DOI information for individual chapters without assigning a DOI to the entire work. ISBNs: ISBNs should be supplied when available. Both a print and electronic ISBN may be supplied for a book. If a book does not have an ISBN, the \u0026lt;noisbn\u0026gt; element is required. Contributors: If a book as a whole has a single set of authors, the author(s) should be included in the \u0026lt;contributors\u0026gt; section for the book itself. Editors may also be deposited. If each chapter has distinct authors and/or editors, the authors should also be included at the chapter (\u0026lt;content_item\u0026gt;) level. Editions: \u0026lt;edition_number\u0026gt;, when given, should include only a number and not additional text such as \u0026ldquo;edition\u0026rdquo;. 
For example, you should submit \u0026ldquo;3\u0026rdquo;, not \u0026ldquo;third edition\u0026rdquo;. Citations: \u0026lt;citation_list\u0026gt; should only be used in \u0026lt;book_metadata\u0026gt; instead of \u0026lt;content_item\u0026gt; when the reference list is a separate section of the book and chapters are not included in the deposit. Book language: The language of the book should be specified in the book_metadata language attribute using the ISO639 language code. If a book contains items in multiple languages this attribute should be set for the predominant language of the book. Individual items may have their language specified in content_item. If all content items are the same language, it is only necessary to specify the language at the book level. Book chapters: Book chapters are captured in the container element \u0026lt;content_item\u0026gt;. Book chapter DOIs are optional but strongly recommended if the book content will be commonly cited at the chapter level. Metadata supplied in the top-level book section (\u0026lt;book_metadata\u0026gt;, \u0026lt;book_set_metadata\u0026gt;, or \u0026lt;book_series_metadata\u0026gt;) will be applied to the chapters as well unless distinct information is supplied for a chapter. This includes contributors, citation lists, and publication dates.\nExample of a book deposit containing a single book with chapters Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;2004-10-19-10-04-31-1016001\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20082117100522\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;book book_type=\u0026#34;edited_book\u0026#34;\u0026gt; \u0026lt;book_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Use of Recycled Materials\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date\u0026gt; \u0026lt;year\u0026gt;2005\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;isbn media_type=\u0026#34;electronic\u0026#34;\u0026gt;2912143691\u0026lt;/isbn\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;RILEM Publications SARL\u0026lt;/publisher_name\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/book\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/hello\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/book_metadata\u0026gt; \u0026lt;content_item component_type=\u0026#34;chapter\u0026#34; publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;E.\u0026lt;/given_name\u0026gt; 
\u0026lt;surname\u0026gt;Vázquez\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Miscellaneous\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;53\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;55\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number\u0026gt;rep030-006\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/m4\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; https://www.rilem.net/boutique/fiche.php?cat=book\u0026amp;reference=rep030-006 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;content_item component_type=\u0026#34;chapter\u0026#34; publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;L.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;De Bock\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Recycled asphalt pavement\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;45\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;51\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number\u0026gt;rep030-005\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/m5\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; https://www.rilem.net/boutique/fiche.php?cat=book\u0026amp;reference=rep030-005 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;/book\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a book series deposit Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20082003110604\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;test data\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;test data\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;book book_type=\u0026#34;edited_book\u0026#34;\u0026gt; \u0026lt;book_series_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;series_metadata\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Electrochemistry\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;issn\u0026gt;0305-9979\u0026lt;/issn\u0026gt; \u0026lt;/series_metadata\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;editor\u0026#34;\u0026gt; 
\u0026lt;given_name\u0026gt;D\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Pletcher\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Electrochemistry\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;volume\u0026gt;8\u0026lt;/volume\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1983\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;isbn\u0026gt;978-0-85186-067-1\u0026lt;/isbn\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Royal Society of Chemistry\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;Cambridge\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/testdoi13\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://ebook.rsc.org/?DOI=10.1039/9781847557179\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/book_series_metadata\u0026gt; \u0026lt;content_item component_type=\u0026#34;other\u0026#34; publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;editor\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Pletcher\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Front cover\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;X001\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;X002\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/testdoi14\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://ebook.rsc.org/?DOI=10.1039/9781847557179-FX001 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;/book\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a book set deposit Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;20080305081200\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;200815071200\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Sample Data\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;book book_type=\u0026#34;edited_book\u0026#34;\u0026gt; \u0026lt;book_set_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;set_metadata\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Sample Set Title\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;isbn media_type=\u0026#34;print\u0026#34;\u0026gt;0 571 08989 5\u0026lt;/isbn\u0026gt; \u0026lt;/set_metadata\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Sample Volume 
Title\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;volume\u0026gt;1\u0026lt;/volume\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2007\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;isbn media_type=\u0026#34;print\u0026#34;\u0026gt;0064410145\u0026lt;/isbn\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Sample Publisher\u0026lt;/publisher_name\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;/book_set_metadata\u0026gt; \u0026lt;content_item component_type=\u0026#34;chapter\u0026#34; level_sequence_number=\u0026#34;1\u0026#34; publication_type=\u0026#34;full_text\u0026#34; language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Patricia\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Feeney\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Sample Chapter Title\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2007\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;200\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number item_number_type=\u0026#34;sequence-number\u0026#34;\u0026gt;S0091679X0861064X\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/testset1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;/book\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Book structures ","Book metadata","Example of a book deposit containing a single book with chapters ","Example of a book series deposit ","Example of a book set deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/affiliations/", "title": "Affiliations and ROR", "subtitle":"", "rank": 4, "lastmod": "2022-08-19", "lastmod_ts": 1660867200, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering affiliations by direct deposit of XML. As of schema version 5.3.0 we’ve introduced a new tag that supports both affiliated institution names and select identifiers, including ROR. 
This change is made across contributor metadata in all record types.\nAffiliation metadata consists of a repeatable element that contains the following:\nElement Description Limits institution container for institution metadata repeatable institution_name The full name of an institution repeatable, either institution_name or institution_id required institution_id and attribute(s): @type (values are ror, wikidata, isni) Identifier for an institution or organization ID institution_acronym The acronym of an institution optional institution_place The primary city location of an institution 1 allowed, xsd:string institution_department The department within an institution 1 allowed, xsd:string Requirements: For each affiliation, you must at minimum include an institution identifier (institution_id) or an institution name (institution_name).", "content": "This guide gives markup examples for members registering affiliations by direct deposit of XML. As of schema version 5.3.0 we’ve introduced a new tag that supports both affiliated institution names and select identifiers, including ROR. This change is made across contributor metadata in all record types.\nAffiliation metadata consists of a repeatable element that contains the following:\nElement Description Limits institution container for institution metadata repeatable institution_name The full name of an institution repeatable, either institution_name or institution_id required institution_id and attribute(s): @type (values are ror, wikidata, isni) Identifier for an institution or organization ID institution_acronym The acronym of an institution optional institution_place The primary city location of an institution 1 allowed, xsd:string institution_department The department within an institution 1 allowed, xsd:string Requirements: For each affiliation, you must at minimum include an institution identifier (institution_id) or an institution name (institution_name). A ROR ID is recommended as we plan to integrate ROR data into our APIs in the future. You should include an identifier wherever possible, to improve discovery, disambiguate, and make affiliations machine actionable. You may include optional metadata including an acronym (institution_acronym), a place (institution_place), and a department (institution_department). Most of this metadata is made redundant by identifiers, so include only if an identifier is not available or if the identifier is not sufficiently granular (as with departments). Institution identifiers We currently support 3 institution identifiers: ROR, Wikidata, and ISNI. 
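For illustration, here is a minimal sketch of an affiliation tagged with a Wikidata identifier instead of a ROR ID; the institution name and the Wikidata entity URL below are placeholders rather than a real organization record, and an ISNI identifier would be tagged the same way with type="isni":
<affiliations>
  <institution>
    <institution_name>Example Research Institute</institution_name>
    <!-- placeholder Wikidata entity URL; substitute the real entity ID for the organization -->
    <institution_id type="wikidata">https://www.wikidata.org/entity/Q00000001</institution_id>
  </institution>
</affiliations>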
We do some basic validation for each identifier provided in your XML:\nROR: must begin with https://ror.org/ (full regex used for validation is https://ror\\\\.org/0[^ilo]{6}\\\\d{2}) Wikidata: must begin with https://www.wikidata.org/entity/ (full regex used for validation is https://www\\\\.wikidata\\\\.org/entity/([qQ]|[pP]|[lL])\\\\d+) ISNI: should begin with https://isni.org/isni but https://www.isni.org/isni is also allowed (full regex used for validation is https://www\\\\.isni\\\\.org/isni/\\\\d{15}(x|[0-9]) Affiliation examples There are multiple ways to mark up an affiliation depending on what metadata is available - a ROR ID may be provided on its own as it\u0026rsquo;s all we need to identify an organization:\n\u0026lt;affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_id type=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/05gq02987\u0026lt;/institution_id\u0026gt; \u0026lt;/institution\u0026gt; This example includes department information to supplement the ROR ID:\n\u0026lt;affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_id type=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/01bj3aw27\u0026lt;/institution_id\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;/affiliations\u0026gt; This affiliation does not have an identifier, so additional metadata is useful:\n\u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Tinker Fan Club\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;TinFC\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Boston, MA\u0026lt;/institution_place\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; As mentioned above a ROR identifier is preferred, but ISNI and Wikidata identifiers are also supported and will be passed on to our metadata users via our REST and XML APIs.\nCrossref and JATS Crossref affiliation metadata easily maps to JATS and the JATS4R affiliation recommendations. 
For example, a basic affiliation with name only is tagged in JATS as:\n\u0026lt;contrib-group\u0026gt; \u0026lt;contrib contrib-type=”author”\u0026gt; \u0026lt;name\u0026gt; \u0026lt;surname initials=”M”\u0026gt;Mitchell\u0026lt;/surname\u0026gt; \u0026lt;given-names initials=”AP”\u0026gt;Aaron P.\u0026lt;/given-names\u0026gt; \u0026lt;/name\u0026gt; \u0026lt;aff\u0026gt;Carnegie Mellon University\u0026lt;/aff\u0026gt; \u0026lt;/contrib\u0026gt; \u0026lt;/contrib-group\u0026gt; and should be tagged for Crossref use as\n\u0026lt;affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Carnegie Mellon University\u0026lt;/institution_name\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;/affiliations\u0026gt; This example contains a JATS-tagged affiliation with an institution ID:\n\u0026lt;aff id=\u0026#34;aff1\u0026#34;\u0026gt; \u0026lt;label\u0026gt;a\u0026lt;/label\u0026gt; \u0026lt;institution-wrap\u0026gt; \u0026lt;institution-id institution-id-type=”ror”\u0026gt;https://ror.org/03vek6s52 \u0026lt;/institution-id\u0026gt; \u0026lt;institution\u0026gt;Harvard University\u0026lt;/institution\u0026gt;\u0026lt;/institution-wrap\u0026gt; \u0026lt;institution-wrap\u0026gt; \u0026lt;institution-id institution-id-type=”ror”\u0026gt;https://ror.org/000cs1t14 \u0026lt;/institution-id\u0026gt; \u0026lt;institution\u0026gt;Harvard NeuroDiscovery Center\u0026lt;/institution\u0026gt; \u0026lt;/institution-wrap\u0026gt; \u0026lt;/aff\u0026gt; and should be tagged for Crossref use as:\n\u0026lt;affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Harvard University\u0026lt;/institution_name\u0026gt; \u0026lt;institution_id type=”ror”\u0026gt;https://ror.org/03vek6s52\u0026lt;/institution_id\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Harvard NeuroDiscovery Center\u0026lt;/institution_name\u0026gt; \u0026lt;institution_id type=”ror”\u0026gt;https://ror.org/000cs1t14\u0026lt;/institution_id\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;/affiliations\u0026gt; Samples of full XML files containing our new affiliation metadata are available on our Example XML metadata page.\n", "headings": ["Requirements:","Institution identifiers","Affiliation examples","Crossref and JATS"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/components/", "title": "Components", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering books and chapters by direct deposit of XML. Component records are often registered for figures, tables, and supplemental materials associated with a journal article.\nConstructing component deposits Components may be deposited along with their parent DOI or they can be deposited by themselves in a separate XML file as a stand-alone component. Components have their own metadata which is distinct from that of the parent DOI(s).", "content": "This guide gives markup examples for members registering books and chapters by direct deposit of XML. Component records are often registered for figures, tables, and supplemental materials associated with a journal article.\nConstructing component deposits Components may be deposited along with their parent DOI or they can be deposited by themselves in a separate XML file as a stand-alone component. 
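To illustrate the first option (registering components together with their parent record), here is a hedged sketch based on the stand-alone example below: the title, description, DOI, and resource URL are placeholders, and the surrounding parent record (for example, a journal article) is omitted. The component_list sits inside the parent record, typically after the parent's own doi_data:
<!-- inside the parent record (e.g. a journal article), typically after the parent's own doi_data -->
<component_list>
  <component parent_relation="isPartOf">
    <titles>
      <title>Figure 1</title>
    </titles>
    <description>Placeholder supplementary figure registered together with its parent record</description>
    <format mime_type="image/png"/>
    <doi_data>
      <doi>10.50505/example.fig1</doi> <!-- placeholder DOI on a sample prefix -->
      <resource>https://www.example.org/article/figure1</resource> <!-- placeholder landing page -->
    </doi_data>
  </component>
</component_list>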
Components have their own metadata which is distinct from that of the parent DOI(s).\nComponents may belong to more than one parent item. For example, two journal articles may include the same component DOI.\nExample of a stand-alone component deposit Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schema/deposit/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;2015052016\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref sample deposit\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;pfeeney@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;sa_component parent_doi=\u0026#34;10.5555/mrtest2\u0026#34;\u0026gt; \u0026lt;component_list\u0026gt; \u0026lt;component parent_relation=\u0026#34;isPartOf\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;iso-6892-1.xsd\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;description\u0026gt;ISO 6892 XML Schema, Reference Implementation\u0026lt;/description\u0026gt; \u0026lt;format mime_type=\u0026#34;text/xml\u0026#34;/\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/demo_1.1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://tsturi.cen.eu/root/cwa_16200/iso-6892-1.xsd\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/component\u0026gt; \u0026lt;/component_list\u0026gt; \u0026lt;/sa_component\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Constructing component deposits ","Example of a stand-alone component deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/archive-locations/", "title": "Archive locations", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Digital preservation is a combination of policies, strategies, and actions that ensure persistent access to digital content over time. It includes archiving arrangements. The Digital Preservation Coalition’s Digital Preservation Handbook gives a good introduction to practicalities and best practices in archiving arrangements.\nUnder our member obligations, you are asked to make best efforts to have your content archived by an archiving organization, and you are encouraged to include information about your designated archive in your metadata.", "content": "Digital preservation is a combination of policies, strategies, and actions that ensure persistent access to digital content over time. It includes archiving arrangements. The Digital Preservation Coalition’s Digital Preservation Handbook gives a good introduction to practicalities and best practices in archiving arrangements.\nUnder our member obligations, you are asked to make best efforts to have your content archived by an archiving organization, and you are encouraged to include information about your designated archive in your metadata. 
This helps us work with archives to ensure your DOIs continue to resolve to your content, even if your organization ceases to operate.\nThe archives currently listed in our deposit schema section are:\nCLOCKSS Deep Web Technologies (DWT) Internet Archive Koninklijke Bibliotheek (KB) LOCKSS Portico Another archiving service is PKP Preservation Network (PKP PN).\nThere\u0026rsquo;s a useful list of archive providers on the Keepers Registry.\nPlease contact us if you have archiving arrangements with an organization that is not listed.\nTo include archiving metadata, insert the relevant archive information into your metadata above the doi_data section, for example:\n\u0026lt;archive_locations\u0026gt; \u0026lt;archive name=\u0026#34;CLOCKSS\u0026#34;/\u0026gt; \u0026lt;archive name=\u0026#34;Internet Archive\u0026#34;/\u0026gt; \u0026lt;archive name=\u0026#34;Portico\u0026#34;/\u0026gt; \u0026lt;archive name=\u0026#34;KB\u0026#34;/\u0026gt; \u0026lt;/archive_locations\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.32013/12345678\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/xml-samples/\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/conference-proceedings/", "title": "Conference proceedings markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering conference proceedings by direct deposit of XML. You can also register the conference proceedings record type using one of our helper tools: web deposit form.\nThe conference proceedings record type captures metadata about a single conference, such as date, acronym, and location. DOIs should be assigned to all papers associated with the conference, and a DOI may be assigned to the conference itself.", "content": "This guide gives markup examples for members registering conference proceedings by direct deposit of XML. You can also register the conference proceedings record type using one of our helper tools: web deposit form.\nThe conference proceedings record type captures metadata about a single conference, such as date, acronym, and location. DOIs should be assigned to all papers associated with the conference, and a DOI may be assigned to the conference itself. Ongoing conferences published with an ISSN may be deposited as a series.\n\u0026lt;conference\u0026gt; is the container for all information about a single conference as well as the individual conference papers you are depositing for the conference. If you need to register articles for more than one conference, you must use multiple instances of \u0026lt;conference\u0026gt;.\nConference deposits require metadata about the event (captured in \u0026lt;event_metadata\u0026gt;) such as conference name (required) and theme, acronym, sponsor, location, and date (all optional), and proceedings-specific metadata such as a proceedings title, publisher, and date (all required) and subject (optional).\nA conference may be deposited as a single conference or as a conference series. A conference series requires an ISSN and series-level title.\nConference papers Conference paper metadata is captured in \u0026lt;conference_paper\u0026gt;. Contributor(s), title, and DOI are required. 
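As an illustrative sketch of just those required pieces (the author name, title, DOI, and resource URL below are placeholders rather than values from a real deposit; the complete examples further down show the full conference and proceedings context), a minimal conference paper entry might look like:
<conference_paper publication_type="full_text">
  <contributors>
    <person_name sequence="first" contributor_role="author">
      <given_name>Ana</given_name>
      <surname>Example</surname> <!-- placeholder author -->
    </person_name>
  </contributors>
  <titles>
    <title>Placeholder conference paper title</title>
  </titles>
  <doi_data>
    <doi>10.5555/example.paper</doi> <!-- placeholder DOI on a sample prefix -->
    <resource>https://www.example.org/proceedings/paper</resource> <!-- placeholder landing page -->
  </doi_data>
</conference_paper>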
Abstracts, page numbers, publication date, citations, funding, license, and relationship metadata are optional but encouraged.\nExample of a single conference Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34; version=\u0026#34;4.3.7\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;1234\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20010910040315\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Association of Computing Machinery\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;conference\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;chair\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Peter\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Lee\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;chair\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Fritz\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Henglein\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;chair\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Neil D.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Jones\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;event_metadata\u0026gt; \u0026lt;conference_name\u0026gt;24th ACM SIGPLAN-SIGACT symposium\u0026lt;/conference_name\u0026gt; \u0026lt;conference_theme\u0026gt;Algorithms \u0026amp; Computation Theory\u0026lt;/conference_theme\u0026gt; \u0026lt;conference_acronym\u0026gt;POPL \u0026#39;97\u0026lt;/conference_acronym\u0026gt; \u0026lt;conference_sponsor\u0026gt;L\u0026#39;École des Mines de Paris\u0026lt;/conference_sponsor\u0026gt; \u0026lt;conference_number\u0026gt;24\u0026lt;/conference_number\u0026gt; \u0026lt;conference_location\u0026gt;Paris, France\u0026lt;/conference_location\u0026gt; \u0026lt;conference_date start_month=\u0026#34;01\u0026#34; start_year=\u0026#34;1997\u0026#34; start_day=\u0026#34;15\u0026#34; end_year=\u0026#34;1997\u0026#34; end_month=\u0026#34;01\u0026#34; end_day=\u0026#34;17\u0026#34;/\u0026gt; \u0026lt;/event_metadata\u0026gt; \u0026lt;proceedings_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;proceedings_title\u0026gt; Proceedings of the 24th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL \u0026#39;97 \u0026lt;/proceedings_title\u0026gt; \u0026lt;proceedings_subject\u0026gt;Principles of programming languages\u0026lt;/proceedings_subject\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;ACM Press\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;New York\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;publication_date\u0026gt; \u0026lt;year\u0026gt;1997\u0026lt;/year\u0026gt; 
\u0026lt;/publication_date\u0026gt; \u0026lt;isbn\u0026gt;0-89791-853-3\u0026lt;/isbn\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1145/263699\u0026lt;/doi\u0026gt; \u0026lt;timestamp\u0026gt;20010910040315\u0026lt;/timestamp\u0026gt; \u0026lt;resource\u0026gt;http://0-portal-acm-org.libus.csd.mu.edu/citation.cfm?doid=263699\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/proceedings_metadata\u0026gt; \u0026lt;conference_paper publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Marc\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Shapiro\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Susan\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Horwitz\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt; Fast and accurate flow-insensitive points-to analysis \u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date\u0026gt; \u0026lt;year\u0026gt;1997\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;14\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1145/263699.263703\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://0-portal-acm-org.libus.csd.mu.edu/citation.cfm?doid=263699.263703 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/conference_paper\u0026gt; \u0026lt;conference_paper publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Erik\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Ruf\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Partitioning dataflow analyses using types\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date\u0026gt; \u0026lt;year\u0026gt;1997\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;15\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;26\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1145/263699.263705\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://0-portal-acm-org.libus.csd.mu.edu/citation.cfm?doid=263699.263705 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/conference_paper\u0026gt; \u0026lt;conference_paper publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Pascal\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Fradet\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Daniel\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Le 
M\u0026amp;étayer\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Shape types\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date\u0026gt; \u0026lt;year\u0026gt;1997\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;27\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;39\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1145/263699.263706\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://0-portal-acm-org.libus.csd.mu.edu/citation.cfm?doid=263699.263706 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/conference_paper\u0026gt; \u0026lt;/conference\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a conference that is part of a series Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34; version=\u0026#34;4.3.7\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;111111\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;200503161011\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;conference\u0026gt; \u0026lt;event_metadata\u0026gt; \u0026lt;conference_name\u0026gt;Crossref Annual Meeting\u0026lt;/conference_name\u0026gt; \u0026lt;conference_acronym\u0026gt;cam2005\u0026lt;/conference_acronym\u0026gt; \u0026lt;conference_date\u0026gt;Nov. 
13, 2005\u0026lt;/conference_date\u0026gt; \u0026lt;/event_metadata\u0026gt; \u0026lt;proceedings_series_metadata\u0026gt; \u0026lt;series_metadata\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;CrossRef Annual Meeting Dummy Proceedings\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;issn\u0026gt;5555-5555\u0026lt;/issn\u0026gt; \u0026lt;/series_metadata\u0026gt; \u0026lt;proceedings_title\u0026gt;Annual Meeting of PILA\u0026lt;/proceedings_title\u0026gt; \u0026lt;volume\u0026gt;3\u0026lt;/volume\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Crossref\u0026lt;/publisher_name\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;11\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;23\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2004\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;isbn\u0026gt;5555555555\u0026lt;/isbn\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/cram05\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/cram05\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/proceedings_series_metadata\u0026gt; \u0026lt;!-- ============== --\u0026gt; \u0026lt;conference_paper\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Jim\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Jones\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Crossref Sample Title\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;12\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;31\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2003\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;3\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;4\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/cdp0001\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/pubs/cdp0001\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/conference_paper\u0026gt; \u0026lt;/conference\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Conference papers ","Example of a single conference ","Example of a conference that is part of a series "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/article-ids/", "title": "Article numbers or IDs", "subtitle":"", "rank": 4, "lastmod": "2023-02-23", "lastmod_ts": 1677110400, "section": "Documentation", "tags": [], "description": "Adding other identifiers Journal articles and other scholarly works often have an ID such as an article number, eLocator, or e-location ID instead of a page number. 
In these cases, do not use the \u0026lt;first_page\u0026gt; tag to capture the ID - instead, use the \u0026lt;item_number\u0026gt; tag with the item_number_type attribute value set to article_number.\nExample article number or ID \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;5\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;10\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2017\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number item_number_type=\u0026#34;article_number\u0026#34;\u0026gt;3D9324F1-16B1-11D7- 8645000102C\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;crossmark\u0026gt; Internal and other identifiers You can include identifiers that are not explicitly defined in our deposit schema section within the optional \u0026lt;publisher_item\u0026gt; section.", "content": "Adding other identifiers Journal articles and other scholarly works often have an ID such as an article number, eLocator, or e-location ID instead of a page number. In these cases, do not use the \u0026lt;first_page\u0026gt; tag to capture the ID - instead, use the \u0026lt;item_number\u0026gt; tag with the item_number_type attribute value set to article_number.\nExample article number or ID \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;5\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;10\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2017\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number item_number_type=\u0026#34;article_number\u0026#34;\u0026gt;3D9324F1-16B1-11D7- 8645000102C\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;crossmark\u0026gt; Internal and other identifiers You can include identifiers that are not explicitly defined in our deposit schema section within the optional \u0026lt;publisher_item\u0026gt; section. \u0026lt;publisher_item\u0026gt; is also used to capture article or e-location IDs. This option should only be used for identifiers that identify the item being registered. Use relationships to capture identifiers for related items.\nExamples of identifier types include:\nPII SICI DOI DAI Z39.23 ISO-std-ref std-designation report-number other Example of an identifier \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;**pii**\u0026#34;\u0026gt;s00022098195001808\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; ", "headings": ["Adding other identifiers ","Example article number or ID ","Internal and other identifiers ","Example of an identifier "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/contributors/", "title": "Contributors", "subtitle":"", "rank": 4, "lastmod": "2023-10-28", "lastmod_ts": 1698451200, "section": "Documentation", "tags": [], "description": "This guide gives markup examples of contributor metadata for members registering content by direct deposit of XML. A contributor is a person or organization that is considered the author of a work. A contributor may be a person or a group author (organization in our XML). Contributor metadata also includes affiliations, which have their own guide.\nORCID iDs An author\u0026rsquo;s ORCID iD should be included whenever possible. 
Providing an ORCID iD in a metadata record allows the author\u0026rsquo;s ORCID record to be automatically updated via our auto-update process.", "content": "This guide gives markup examples of contributor metadata for members registering content by direct deposit of XML. A contributor is a person or organization that is considered the author of a work. A contributor may be a person or a group author (organization in our XML). Contributor metadata also includes affiliations, which have their own guide.\nORCID iDs An author\u0026rsquo;s ORCID iD should be included whenever possible. Providing an ORCID iD in a metadata record allows the author\u0026rsquo;s ORCID record to be automatically updated via our auto-update process. OJS users who have upgraded to version 3.1.2 or later can request authenticated iDs from both contributing authors and co-authors - learn more about the OJS-ORCID plugin.\nContributor roles We currently support and require a single contributor role per contributor. Supported values are:\nauthor editor chair reviewer review-assistant stats-reviewer reviewer-external reader translator We intend to allow multiple roles per contributor and as expand our list of supported contributor roles in a future update.\nContributor order The \u0026lt;person_name\u0026gt; and \u0026lt;organization\u0026gt; elements both include required contributor role and sequence attributes. An author may be first or additional. Specific sequence numbering is not allowed, but many systems using our metadata assume that the order of authors as present in the metadata is the appropriate order for metadata display. * If a contributor has just one name, put it under the \u0026lt;surname\u0026gt; field\nNote that the data supplied in the \u0026lt;given_name\u0026gt; and \u0026lt;surname\u0026gt; fields is used for display and query matching and must be as accurate as possible.\n\u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Minerva\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Nipperson\u0026lt;/surname\u0026gt; \u0026lt;ORCID authenticated=\u0026#34;true\u0026#34;\u0026gt;https://orcid.org/0000-0002-4011-3590\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; Contributor example \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Minerva\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Nipperson\u0026lt;/surname\u0026gt; \u0026lt;suffix\u0026gt;III\u0026lt;/suffix\u0026gt; \u0026lt;affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_id type=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/01bj3aw27\u0026lt;/institution_id\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;/affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Tinker Fan Club\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;TinFC\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Boston, MA\u0026lt;/institution_place\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;ORCID authenticated=\u0026#34;true\u0026#34;\u0026gt;https://orcid.org/0000-0002-4011-3590\u0026lt;/ORCID\u0026gt; 
\u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Christopher \u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Fielding\u0026lt;/surname\u0026gt; \u0026lt;affiliations\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_id type=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/01bj3aw27\u0026lt;/institution_id\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;/affiliations\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Katharine \u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Mech\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; ", "headings": ["ORCID iDs ","Contributor roles ","Contributor order ","Contributor example "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/datasets/", "title": "Datasets markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering datasets by direct deposit of XML. It is not currently possible to register the datasets record type using one of our helper tools.\nDataset records capture information about one or more database records or collections. Dataset deposits do not contain the entire database record or collection, only descriptive metadata. The metadata can include:\nContributors: the author(s) of a database record or collection Title: the title of a database record or collection Date (within \u0026lt;database_date\u0026gt;): the creation date, publication date (if different from the creation date), and the date of last update of the record Record number or other identifier (within \u0026lt;publisher_item\u0026gt;): the record number of the dataset item.", "content": "This guide gives markup examples for members registering datasets by direct deposit of XML. It is not currently possible to register the datasets record type using one of our helper tools.\nDataset records capture information about one or more database records or collections. Dataset deposits do not contain the entire database record or collection, only descriptive metadata. The metadata can include:\nContributors: the author(s) of a database record or collection Title: the title of a database record or collection Date (within \u0026lt;database_date\u0026gt;): the creation date, publication date (if different from the creation date), and the date of last update of the record Record number or other identifier (within \u0026lt;publisher_item\u0026gt;): the record number of the dataset item. In this context, \u0026lt;publisher_item\u0026gt; can be used for the record number of each item in the database Description (within \u0026lt;description\u0026gt;): a brief summary description of the contents of the database Format: the format type of the dataset item if it includes files rather than just text. Note the format element here should not be used to describe the format of items deposited as part of the component_list Citations (within \u0026lt;citation_list\u0026gt;): a list of items (such as journal articles) cited by the dataset item. 
For example, dataset entry from a taxonomy might cite the article in which a species was first identified. The dataset_type attribute should be set to either record or collection to indicate the type of deposit. The default value of this attribute is record.\nConstructing dataset deposits \u0026lt;database\u0026gt; is the container for all information about a set of datasets. The top-level database may be a functional database or an abstraction acting as a collection (much like a journal is a collection of articles). Individual dataset entries are captured within the \u0026lt;dataset\u0026gt; element.\nDatasets that aren\u0026rsquo;t datasets The database record type is often used to capture metadata for items that do not fit into our currently defined record types. This may include online collections, videos, archives, and other items that aren\u0026rsquo;t cited or presented as articles, books, reports, or other defined types of content. Learn more about our supported record types.\nExample of a database deposit containing several datasets Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;2006-03-24-21-57-31-10023\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20060324215731\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;database\u0026gt; \u0026lt;database_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;NURSA Datasets\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Nuclear Receptor Signaling Atlas\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;NURSA\u0026lt;/institution_acronym\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1621/NURSA_dataset_home\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://www.nursa.org/template.cfm?threadId=10222\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/database_metadata\u0026gt; \u0026lt;dataset dataset_type=\u0026#34;collection\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Mangelsdorf\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt; Tissue-specific expression patterns of nuclear receptors \u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1621/datasets.02001\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://www.nursa.org/template.cfm?threadId=10222\u0026amp;dataType=Q-PCR\u0026amp;dataset=Tissue-specific%20expression%20patterns%20of%20nuclear%20receptors 
\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/dataset\u0026gt; \u0026lt;dataset dataset_type=\u0026#34;collection\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;R\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Evans\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Circadian expression patterns of nuclear receptors\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1621/datasets.02002\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://www.nursa.org/template.cfm?threadId=10222\u0026amp;dataType=Q-PCR\u0026amp;dataset=Circadian%20expression%20patterns%20of%20nuclear%20receptors \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/dataset\u0026gt; \u0026lt;/database\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; How to access data \u0026amp; software citations Crossref and DataCite make the data \u0026amp; software citations deposited by Crossref members and DataCite data repositories openly available for use for anyone within the research ecosystem (funders, research organisations, technology and service providers, research data frameworks such as Scholix, etc.).\nData \u0026amp; software citations from references can be accessed via our Event Data API. Citations included directly into the metadata by relation type can be accessed via our APIs. We’re working to include these relation type citations in the Event Data API as well, so that all data citations will be available via one source.\nScholix Participation The goal of the Scholix (SCHOlarly LInk eXchange) initiative is to establish a high-level interoperability framework for exchanging information about the links between scholarly literature and data. Crossref members can participate by sharing article-data links by including them in their deposited metadata as references and/or relation type as described above. You don’t need to sign up or let us know you’re going to start providing this information, just start to send it to us in your reference lists or in the relationship metadata.\nIf the reference metadata you are registering with us uses either Crossref or DataCite DOIs, the linkage between the publications/data is handled by us - nothing more is needed.\nIf the data (or other research objects) uses DOIs from another source, or a different type of persistent identifier, then you need to create a relationship type record instead. This method also allows for the linkage of other research objects.\nScholix API Endpoint The Event Data service implements a Scholix endpoint in the API. A subset of relevant Events (from the ‘crossref’ and ‘datacite’ sources) is available at this endpoint. The filter parameters are the same as specified in the Query API. 
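For illustration only (the exact route and parameters are an assumption and should be confirmed against the Event Data user guide), a request for Scholix-formatted Events from the Crossref source might look like https://api.eventdata.crossref.org/v1/events/scholix?source=crossref&mailto=you@example.org, with the same filters available as on the standard events route.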
The response format uses the Scholix schema.\n", "headings": ["Constructing dataset deposits ","Datasets that aren\u0026rsquo;t datasets ","Example of a database deposit containing several datasets ","How to access data \u0026amp; software citations","Scholix Participation","Scholix API Endpoint"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/dissertations/", "title": "Dissertations markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering dissertations by direct deposit of XML. You can also register the dissertations record type using one of our helper tools: web deposit form.\nDissertation records may be deposited for a single dissertation or thesis. DOIs are not assigned to collections of dissertations. The dissertation type should be used for content items which have not been published in books or journals. If a dissertation is published as a book or within a serial, it should be deposited with the appropriate record type.", "content": "This guide gives markup examples for members registering dissertations by direct deposit of XML. You can also register the dissertations record type using one of our helper tools: web deposit form.\nDissertation records may be deposited for a single dissertation or thesis. DOIs are not assigned to collections of dissertations. The dissertation type should be used for content items which have not been published in books or journals. If a dissertation is published as a book or within a serial, it should be deposited with the appropriate record type.\nConstructing dissertation deposits A dissertation deposit requires a title, a single author (deposited as \u0026lt;person_name\u0026gt;), institution, and approval date. Degree, ISBN, and record number information may also be included.\nIf a Dissertation Abstracts International (DAI) number has been assigned, it should be deposited in the identifier element with the id_type attribute set to dai. If an institution has its own numbering system, it should be deposited in \u0026lt;item_number\u0026gt;, and the item_number_type should be set to institution.\nSee also our dissertation example XML file.\n", "headings": ["Constructing dissertation deposits "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/face-markup/", "title": "Face markup", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Our schema supports minimal face markup in order to avoid ambiguity in certain disciplines, such as genetics, where the same text may be a gene (when italicized) or a protein (when not italicized).\nFace markup that appears in the title, subtitle, original_language_title, and unstructured_citation elements should be retained when depositing metadata. Face markup in other elements (such as small caps in author names) must be dropped. 
Face markup support includes bold (b), italic (i), underline (u), over-line (ovl), superscript (sup), subscript (sub), small caps (scp), and typewriter text (tt).", "content": "Our schema supports minimal face markup in order to avoid ambiguity in certain disciplines, such as genetics, where the same text may be a gene (when italicized) or a protein (when not italicized).\nFace markup that appears in the title, subtitle, original_language_title, and unstructured_citation elements should be retained when depositing metadata. Face markup in other elements (such as small caps in author names) must be dropped. Face markup support includes bold (b), italic (i), underline (u), over-line (ovl), superscript (sup), subscript (sub), small caps (scp), and typewriter text (tt).\nExamples where inclusion of face markup is especially important include:\nItalics in titles for terms such as species names or genes Superscript and subscript in titles as part of chemical names (for example, H₂O) Superscript and subscript in simple inline mathematics (for example, x² + y² = z²) The schema supports nested face markup (for example: This text is bold and italic), which would be tagged as:\nThis text is \u0026lt;b\u0026gt;\u0026lt;i\u0026gt;bold and italic\u0026lt;/i\u0026gt;\u0026lt;/b\u0026gt; Correspondingly, superscript and subscript may be nested for correct representation of xʸᶻ. This expression should be tagged as:\nx\u0026lt;sup\u0026gt;y\u0026lt;sup\u0026gt;z\u0026lt;/sup\u0026gt;\u0026lt;/sup\u0026gt; We also support MathML markup in title elements.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/grants/", "title": "Grants markup guide", "subtitle":"", "rank": 4, "lastmod": "2023-09-15", "lastmod_ts": 1694736000, "section": "Documentation", "tags": [], "description": "All metadata records and identifiers registered with Crossref are submitted as XML formatted using our metadata input schema. Unlike other objects registered with Crossref, grants have their own grant-specific input schema. Version 0.1.1 of our Grants schema is available in our GitLab schema repository, as is a complete XML example.\nPlease note: as of version 0.1.1 a version attribute is required in the \u0026lt;doi_batch\u0026gt; schema declaration, for example:\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/grant_id/0.1.1\u0026#34; xmlns:xsi=\u0026#34;http://www.", "content": "All metadata records and identifiers registered with Crossref are submitted as XML formatted using our metadata input schema. Unlike other objects registered with Crossref, grants have their own grant-specific input schema. Version 0.1.1 of our Grants schema is available in our GitLab schema repository, as is a complete XML example.\nPlease note: as of version 0.1.1 a version attribute is required in the \u0026lt;doi_batch\u0026gt; schema declaration, for example:\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/grant_id/0.1.1\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/grant_id/0.1.1 http://0-www-crossref-org.libus.csd.mu.edu/schemas/grant_id0.1.1.xsd\u0026#34; version=\u0026#34;0.1.1\u0026#34;\u0026gt; Members currently using version 0.0.1 do not need to provide a version number.\nGrant and project metadata Grant metadata includes one or more projects.
Multiple projects may be applied to a single grant but the DOI registered is applied at the grant level. Multiple grants may be included in a single XML file.\nThe metadata within each project includes basics like titles, descriptions, and investigator information (including affiliations), funding information such as funder names and identifiers from the Funder Registry, as well as information about funding types and amounts.\nWhen registering a grant:\nyou must include required project information (a project title, a funder name and identifier, and a funding type) as well as your internal grant or award number you should include a project description, language information, investigator details including ORCID IDs, ROR IDs within affiliations, and investigator country code; award amounts/currency, and project start and end dates and/or an award date (note that project or grant start/end dates are used to calculate current vs. backfile content registration fees). you may include multiple titles and descriptions as well as language information; a funding scheme, and planned project start and end dates. Project metadata Project: Project metadata includes titles and descriptions (abstracts). Both can be supplied multiple times to capture information in different languages.\nElement / attribute Description Limits project Container for project information. Multiple projects may be assigned to a single Grant ID. required; multiple allowed project-title Title of a project funded by the grant being registered required; multiple allowed description Used to capture an abstract or description of a project. optional; multiple allowed @xml:lang Use @xml:lang to identify language for each project-title or description. This allows you to provide multiple titles in different languages. optional Investigators: Investigators are not required, but all applicable investigators should be included. Optional start and end dates may be used to capture investigators whose involvement is limited to a specific timeframe.\nElement / attribute Description Limits investigators container for investigator information optional person container for individual investigator details at least 1 required, multiple allowed (unbounded) @role available roles are lead_investigator, co-lead_investigator, investigator required @start-date Date an investigator began work with the project optional @end-date Date an investigator ended work with the project optional givenName given or first name optional familyName family or surname optional alternateName alias or nickname used by the Investigator optional affiliation container for affiliation information optional, multiple allowed institution institution an investigator is affiliated with when associated with the project being defined. Multiple affiliations should be supplied where applicable 1 allowed, use multiple affiliation groups for investigators with multiple affiliations @country ISO 3166-1 alpha 2-letter country code, captures location (country) of affiliation optional ROR A ROR ID may be supplied to disambiguate affiliation information, expressed as a URL optional ORCID ORCID ID of the investigator, expressed as a URL optional Funding details: Funding details include award amount, currency, funder details, and funding type.\nElement / attribute Description Limits award-amount total overall amount awarded to project optional @currency ISO 4217 currency for value provided in award-amount required funding container for funding information. 
Use multiple funding sections as needed to specify different funding types required, multiple allowed @amount amount of funding provided by funder optional @currency ISO 4217 currency for value provided in @amount optional @funding-percentage percentage of overall funding optional @funding-type type of funding provided, values are limited required @null-amount supply for projects where an award amount is missing or can’t be disclosed - allowed values are: unknown, undisclosed, not-applicable other optional funder-name name of the funder required funder-id funder identifier from our Funder Registry required funding-scheme scheme for grant or award as provided by the funder optional Award dates: Dates can be applied at the project level (via award-date). An award-start-date may also be applied to the grant / award as a whole.\nElement / attribute Description Limits award-dates container for date information optional @start-date actual start date of award optional @end-date actual end date of the award optional @planned-start-date planned start date of award optional @planned-end-date planned end date of award optional Funding types: Types of funding are limited to the following values:\naward: a prize, award, or other type of general funding contract: agreement involving payment crowdfunding: funding raised via multiple sources, typically small amounts raised online endowment: gift of money that will provide an income equipment: use of or gift of equipment facilities: use of location, equipment, or other resources fellowship: grant given for research or study grant: a monetary award loan: money or other resource given in anticipation of repayment other: award of undefined type prize: an award given for achievement salary-award: an award given as salary, includes intramural research funding secondment: detachment of a person or resource for temporary assignment elsewhere seed-funding: an investor invests capital in exchange for equity training-grant: grant given for training Grant metadata We collect grant-specific metadata that is separate from the project information. This includes the funder-specific award identifier (grant number), the (optional) start date of the grant, related items, and the DOI and URL being registered.\nElement / attribute Description Limits award-number funder-supplied award ID /grant number required award-start-date start date of grant funding optional relation (as rel:program) relationship metadata connecting grant to other items (other grants, funded research outputs) optional DOI DOI being registered required resource URL of grant landing page required ", "headings": ["Grant and project metadata","Project metadata","Grant metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/full-text-urls/", "title": "Resource and full-text URLs", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The resolution URL is a link to the web page you want users to see when they click on your DOI. This landing page needs to contain specific information including how to access your full-text content. 
For online works, this is often a link to content in HTML or PDF format, and for physical works, a catalog record including location details.\nAs well as the resolution URL, there are other URLs that you may include in the metadata for your content:", "content": "The resolution URL is a link to the web page you want users to see when they click on your DOI. This landing page needs to contain specific information including how to access your full-text content. For online works, this is often a link to content in HTML or PDF format, and for physical works, a catalog record including location details.\nAs well as the resolution URL, there are other URLs that you may include in the metadata for your content:\nFull-text URLs for Similarity Check - these URLs allow Turnitin to index your content and include it in the iThenticate database. You’ll include these URLs if you want to participate in the Similarity Check service, or if you want to make sure your content is included when other users check submitted manuscripts for similarity to content that’s already been published. Learn more about Similarity Check. Full-text URL for text and data mining (TDM) - these URLs are provided specifically for researchers carrying out text and data mining. Learn more about text and data mining. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/funding-information/", "title": "Funding information", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Add funder information, including the funder’s unique identifier from the Funder Registry, and help build connections between funders and research outputs.\nLinking research funding and published outcomes Funding data is used by funders to track the publications that result from their grants, including use of facilities, equipment, salary awards, and so on.\nPublishers can contribute by depositing the funding acknowledgements from their publications as part of their standard metadata. The deposit should include funder names, funder IDs, and associated grant numbers.", "content": "Add funder information, including the funder’s unique identifier from the Funder Registry, and help build connections between funders and research outputs.\nLinking research funding and published outcomes Funding data is used by funders to track the publications that result from their grants, including use of facilities, equipment, salary awards, and so on.\nPublishers can contribute by depositing the funding acknowledgements from their publications as part of their standard metadata. The deposit should include funder names, funder IDs, and associated grant numbers.\nFunding data can be searched using our interfaces for people or our APIs for machines. This data clarifies the scholarly record, and makes life easier for researchers who may need to comply with requirements to make their published results publicly available.\nHow to collect and register funding data Ask authors to submit the names of their funder(s) and grant numbers when they submit their manuscript, or extract funding information from the text of accepted manuscripts Match funder names to their corresponding Funder ID in the Funder Registry Deposit with funder name(s), Crossref funder ID(s) and Crossref grant ID(s) for each DOI. 
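As a minimal sketch of that deposit step (the award number below is a placeholder, the funder identifier shown is the Funder Registry entry for the U.S. National Science Foundation used purely for illustration, and the exact nesting should be checked against the funding data section of our schema documentation), the funding acknowledgement is carried in a fundref program section:
<fr:program xmlns:fr="http://www.crossref.org/fundref.xsd" name="fundref">
  <fr:assertion name="fundgroup">
    <fr:assertion name="funder_name">National Science Foundation
      <fr:assertion name="funder_identifier">https://doi.org/10.13039/100000001</fr:assertion>
    </fr:assertion>
    <fr:assertion name="award_number">ABC-1234567</fr:assertion>
  </fr:assertion>
</fr:program>
A work that acknowledges several funders repeats the fundgroup assertion, one group per funder, each with its own funder name, funder identifier, and award number(s).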
You can register funding data as a stand-alone deposit (useful for backfiles) or as part of your standard metadata deposit (for current content) Make use of our metadata retrieval tools to check the metadata we hold for your publications (and to retrieve metadata for your own analysis) Check your progress using Participation Reports to see the percentage of your deposits that have funding data (and other key metadata elements) registered. ", "headings": ["Linking research funding and published outcomes ","How to collect and register funding data "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/journals-and-articles/", "title": "Journals and articles markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering journals and articles by direct deposit of XML. You can also register the journals and articles record type using one of our helper tools: web deposit form or third party Crossref XML plugin for OJS.\nDOIs may be assigned to journal titles, volumes, issues, and (of course) journal articles. A title-level DOI is encouraged for all journals.\nAssign DOIs to supplemental materials associated with journal articles using our component record type.", "content": "This guide gives markup examples for members registering journals and articles by direct deposit of XML. You can also register the journals and articles record type using one of our helper tools: web deposit form or third party Crossref XML plugin for OJS.\nDOIs may be assigned to journal titles, volumes, issues, and (of course) journal articles. A title-level DOI is encouraged for all journals.\nAssign DOIs to supplemental materials associated with journal articles using our component record type.\nCreating journal deposits \u0026lt;journal\u0026gt; is the container for all information about a single journal and the articles you are depositing for the journal. Within a single \u0026lt;journal\u0026gt; instance you may register articles for a single issue. If you need to register articles for more than one issue, you must use multiple instances of \u0026lt;journal\u0026gt;. These may be included within the same deposit file.\nIf you have articles that have not been assigned to an issue (or you do not use issue numbering) you may register them within a single journal instance. In this case, do not include \u0026lt;journal_issue\u0026gt; metadata.\nIf you publish in volumes only you must include \u0026lt;journal_issue\u0026gt; and the child element \u0026lt;journal_volume\u0026gt; but omit the \u0026lt;issue\u0026gt; number.\nExamples A journal may be created with an ISSN or without. A DOI is not required but is strongly recommended and should remain consistent for all articles registered for the journal. 
Example title-only deposit files are available here: with ISSN | without ISSN.\nExample of a journal deposit containing several articles and issues Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.4.0\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.0 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; / contains information related to submission file \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; / required, used to track submissions \u0026lt;timestamp\u0026gt;19990628123304\u0026lt;/timestamp\u0026gt; / required, must be incremented with each metadata record update \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Bob Surname\u0026lt;/depositor_name\u0026gt; / person or entity submitting deposit \u0026lt;email_address\u0026gt;someone@crossref.org\u0026lt;/email_address\u0026gt; / we\u0026#39;ll send a submission log to this address \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; / entity responsible for content being registered \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; / captures journal-level metadata \u0026lt;full_title\u0026gt;Applied Physics Letters\u0026lt;/full_title\u0026gt; / required, full title of the journal \u0026lt;abbrev_title\u0026gt;Appl. Phys. Lett.\u0026lt;/abbrev_title\u0026gt; / abbreviated or alternate title \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;0003-6951\u0026lt;/issn\u0026gt; / multiple ISSN may be supplied \u0026lt;coden\u0026gt;applab\u0026lt;/coden\u0026gt; / optional \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; / captures volume and/or issue level metadata \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; / publication year is required, month and day are encouraged \u0026lt;year\u0026gt;1999\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;74\u0026lt;/volume\u0026gt; / volume number \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;16\u0026lt;/issue\u0026gt; / issue number \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt; Sub-5-fs visible pulse generation by pulse-front-matched noncollinear optical parametric amplification \u0026lt;/title\u0026gt; / article title \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; / author names \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ann P.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Shirakawa\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;organization sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt;Sample Organization\u0026lt;/organization\u0026gt; / an organization that has authored the article \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1999\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; 
\u0026lt;first_page\u0026gt;2268\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;pii\u0026#34;\u0026gt;S000369519903216\u0026lt;/identifier\u0026gt; / optional, internal identifier \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9876/S000369519903216\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-ojps-crossref-org.libus.csd.mu.edu/link/?apl/74/2268/ab\u0026lt;/resource\u0026gt; / the URL of your DOI landing page \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; / a second journal article in the same issue \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Ultrafast (GaIn)(NAs)/GaAs vertical-cavity surface-emitting laser\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;M\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;van Exter\u0026lt;/surname\u0026gt; \u0026lt;suffix\u0026gt;III\u0026lt;/suffix\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1999\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;2274\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;pii\u0026#34;\u0026gt;S0003695199038164\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9876/S0003695199034166\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-ojps-aip-org.libus.csd.mu.edu/link/?apl/74/2271/ab\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;citation_list\u0026gt; / a citation list for the content being registered \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-1\u0026#34;\u0026gt; \u0026lt;issn\u0026gt;0027-8424\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt;Proc. Natl. Acad. Sci. U.S.A.\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;West\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;98\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;20\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;11024\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;2001\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-2\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Space Sci. Rev.\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;Heber\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;97\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;309\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;2001\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-3\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1029/2002GL014944\u0026lt;/doi\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-4\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Dev. Dyn.\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;Tufan\u0026lt;/author\u0026gt; \u0026lt;cYear\u0026gt;2002\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-5\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;J. 
Plankton Res.\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;Bocher\u0026lt;/author\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-6\u0026#34;\u0026gt; \u0026lt;issn\u0026gt;0169-6009\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt;Bone. Miner.\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt; Proyecto Multicéntrico de Investigación de Osteoporosis \u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;17\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;133\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1992\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-7\u0026#34;\u0026gt; \u0026lt;volume_title\u0026gt;Neuropharmacological Basis of Reward\u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Carr\u0026lt;/author\u0026gt; \u0026lt;first_page\u0026gt;265\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1989\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-8\u0026#34;\u0026gt; \u0026lt;volume_title\u0026gt;The Metabolic and Molecular Bases of Inherited Disease \u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Mahley\u0026lt;/author\u0026gt; \u0026lt;edition_number\u0026gt;7\u0026lt;/edition_number\u0026gt; \u0026lt;first_page\u0026gt;1953\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1995\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-9\u0026#34;\u0026gt; \u0026lt;series_title\u0026gt;Genome Analysis\u0026lt;/series_title\u0026gt; \u0026lt;volume_title\u0026gt;Genetic and Physical Mapping\u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Rinchik\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;1\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;121\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1990\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-10\u0026#34;\u0026gt; \u0026lt;volume_title\u0026gt;Molecular Cloning: A Laboratory Manual\u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Sambrook\u0026lt;/author\u0026gt; \u0026lt;cYear\u0026gt;1989\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-11\u0026#34;\u0026gt; \u0026lt;volume_title\u0026gt;Immunocytochemistry: Theory and Practice\u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Larsson\u0026lt;/author\u0026gt; \u0026lt;first_page\u0026gt;41\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1988\u0026lt;/cYear\u0026gt; \u0026lt;component_number\u0026gt;2\u0026lt;/component_number\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-12\u0026#34;\u0026gt; \u0026lt;volume_title\u0026gt; Proceedings of the XXVth International Conference on Animal Genetics \u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Roenen\u0026lt;/author\u0026gt; \u0026lt;first_page\u0026gt;105\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1996\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-13\u0026#34;\u0026gt; \u0026lt;volume_title\u0026gt; Proceedings of the Corn and Sorghum Industry Research Conference \u0026lt;/volume_title\u0026gt; \u0026lt;author\u0026gt;Beavis\u0026lt;/author\u0026gt; \u0026lt;first_page\u0026gt;250\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;1994\u0026lt;/cYear\u0026gt; 
\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-14\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;L. Reynolds, Three dimensional reflection and transmission equations for optical diffusion in blood, MS Thesis. Seattle, Washington: University of Washington (1970).\u0026lt;/unstructured_citation\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;key-10.9876/S0003695199034166-15\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;T. J. Dresse, U.S. Patent 308, 389 [\u0026lt;i\u0026gt;CA 82\u0026lt;/i\u0026gt;, 73022 (1975)].\u0026lt;/unstructured_citation\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;journal\u0026gt; / start of a new journal submission \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;Applied Physics Letters\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;Appl. Phys. Lett.\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;0003-6951\u0026lt;/issn\u0026gt; \u0026lt;coden\u0026gt;applab\u0026lt;/coden\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1999\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;74\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Working with metadata\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D L\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Peng\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;K\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Sumiyama\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;S\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Sumiyama\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;T\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Hihara\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;T J\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Konno\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;1999\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;76\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; 
\u0026lt;doi\u0026gt;10.9876/S0003695199019014\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://18000-ojps-crossref-org.libus.csd.mu.edu/link/?apl/74/1/76/ab\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;component_list\u0026gt; / component (supplemental material) registration records \u0026lt;component parent_relation=\u0026#34;isPartOf\u0026#34;\u0026gt; \u0026lt;description\u0026gt;Figure 1: This is the caption of the first figure...\u0026lt;/description\u0026gt; \u0026lt;format mime_type=\u0026#34;image/jpeg\u0026#34;\u0026gt;Web resolution image\u0026lt;/format\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9876/S0003695199019014/f1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://18000-ojps-crossref-org.libus.csd.mu.edu/link/?apl/74/1/76/f1\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/component\u0026gt; \u0026lt;component parent_relation=\u0026#34;isReferencedBy\u0026#34;\u0026gt; \u0026lt;description\u0026gt;Video 1: This is a description of the video...\u0026lt;/description\u0026gt; \u0026lt;format mime_type=\u0026#34;video/mpeg\u0026#34;/\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9876/S0003695199019014/video1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://18000-ojps-crossref-org.libus.csd.mu.edu/link/?apl/74/1/76/video1\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/component\u0026gt; \u0026lt;/component_list\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Creating journal deposits ","Examples","Example of a journal deposit containing several articles and issues "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/peer-reviews/", "title": "Peer reviews markup guide", "subtitle":"", "rank": 4, "lastmod": "2023-10-28", "lastmod_ts": 1698451200, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering peer reviews by direct deposit of XML. It is not currently possible to register the peer reviews record type using one of our helper tools.\nGetting started with registering peer reviews Registration of peer reviews is supported as of schema version 4.4.1. Peer reviews include referee reports, decision letters, and author responses. You may also register post-publication reviews using our peer review record type.", "content": "This guide gives markup examples for members registering peer reviews by direct deposit of XML. It is not currently possible to register the peer reviews record type using one of our helper tools.\nGetting started with registering peer reviews Registration of peer reviews is supported as of schema version 4.4.1. Peer reviews include referee reports, decision letters, and author responses. You may also register post-publication reviews using our peer review record type.\nPeer review metadata includes a number of review-specific elements. 
Many are optional to accommodate differences in review practices, but please include all elements relevant to your reviews when submitting your metadata records.\nOur schema includes support for the following fields:\ncontributor, to capture reviewer name and role, choose from: reviewer review-assistant stats-reviewer reviewer-external reader translator anonymous title review_date institution competing_interest_statement running_number license data relations stage type recommendation revision-round language Note that all reviews must include relationship metadata linking the review with the item being reviewed. Learn more about obligations and limitations for peer review registration.\nElement Description Limits contributor, includes person_name or anonymous Captures reviewer name and role. If anonymous, must capture as \u0026lt;anonymous/\u0026gt;.\nPeer review roles are: reviewer, review-assistant, stats-reviewer, reviewer-external, reader, translator, author, editor optional title Title of review. If you don’t have a review-specific title convention, we recommend that you include Review (or member’s own term for review) in your peer review registration, as well as a revision and review number.\nFor example, a review pattern of Review: title of article (Revision number/Review number) will be:\nReview: Analysis of the effects of bad metadata on discoverability (R2/RC3) required review_date Date of review, including month, day, year year is required institution Organization (member or other) submitting the peer review, strongly advised if submitter differs from publisher of item being reviewed optional, may include up to 5 competing_interest_statement Competing interest statement provided by review author during review process optional running_number Internal number/identifier used to identify specific review optional license data Text or data mining license info optional relations Relate review to item being reviewed through relationships - must supply the DOI of item being reviewed as an inter-work relation with review type isReviewOf required Some metadata is captured as attributes with specific enumerated values:\nAttributes Description Limits stage Options are pre-publication and post-publication optional type Types of report include: referee-report, editor-report, author-comment, community-comment, aggregate, recommendation optional recommendation Values are: major-revision, minor-revision, reject, reject-with-resubmit, accept optional revision-round Revision round number, first submission is defined as revision round 0 optional language Language of review optional The types refer to the following categories of peer reviews:\nreferee-report: also commonly known as a reviewer report - comments provided by someone who has been verified as an expert in the field of the work, usually invited by an editor. Note that we treat the terms reviewer and referee as interchangeable. editor-report: comments by an editor representing the journal or platform where the work is submitted. It might contain feedback on other peer reviews or a decision on whether to accept the work for publication. author-comment: a response by one or more authors to peer review reports on their work. community-comment: comments made on a work from the community at large. These are usually part of a public call for reviews, rather than personally invited. aggregate: a summary of a review process, for example collating the responses of several referees and/or editors.
recommendation: Reporting an editorial decision on a peer review with one of the values: major-revision, minor-revision, reject, reject-with-resubmit, accept. Example of connecting a review to the reviewed item through relations \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Referee report of Treatment of plaque psoriasis with an ointment formulation of the Janus kinase inhibitor, tofacitinib: a Phase 2b randomized clinical trial\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;isReviewOf\u0026#34; identifier-type=\u0026#34;doi\u0026#34; \u0026gt;10.1186/s12895-016-0051-4\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; Example of a complete review \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 http://0-www-crossref-org.libus.csd.mu.edu/schema/crossref4.4.2.xsd\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; version=\u0026#34;4.4.2\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;20170807\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;2017080715731\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;peer_review stage=\u0026#34;pre-publication\u0026#34; revision-round=\u0026#34;1\u0026#34; recommendation=\u0026#34;accept\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;reviewer\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Wilson\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Liao\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Review: Treatment of plaque psoriasis with an ointment formulation of the Januskinase inhibitor, tofacitinib: a Phase 2b randomized clinical trial. 
V1\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;review_date\u0026gt; \u0026lt;month\u0026gt;08\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;19\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2016\u0026lt;/year\u0026gt; \u0026lt;/review_date\u0026gt; \u0026lt;competing_interest_statement\u0026gt; There were no competing interests\u0026lt;/competing_interest_statement\u0026gt; \u0026lt;running_number\u0026gt;RC1 \u0026lt;/running_number\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Referee report of Treatment of plaque psoriasis with an ointment formulation of the Janus kinase inhibitor, tofacitinib: a Phase 2b randomized clinical trial\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;isReviewOf\u0026#34; identifier-type=\u0026#34;doi\u0026#34; \u0026gt;10.1186/s12895-016-0051-4 \u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/abc123\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://www.example.org/openpeerreview/art%3A10.1186%2Fs12895-016-0051-4/12895_2016_51_ReviewerReport_V2_R1.pdf\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/peer_review\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Getting started with registering peer reviews ","Example of connecting a review to the reviewed item through relations ","Example of a complete review "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/issn-isbn/", "title": "ISSNs and ISBNs", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "An International Standard Serial Number (ISSN) or International Standard Book Number (ISBN) is a number used to uniquely identify a serial or book publication. To obtain an ISSN, you need to register with the ISSN International Centre; and for an ISBN, with your national ISBN agency.\nISSNs/ISBNs are useful in distinguishing between serials or books with the same title. If a publication with the same content is published in more than one format, a different identifier is assigned to each media type.", "content": "An International Standard Serial Number (ISSN) or International Standard Book Number (ISBN) is a number used to uniquely identify a serial or book publication. To obtain an ISSN, you need to register with the ISSN International Centre; and for an ISBN, with your national ISBN agency.\nISSNs/ISBNs are useful in distinguishing between serials or books with the same title. If a publication with the same content is published in more than one format, a different identifier is assigned to each media type. For example, a journal may have a print ISSN and an electronic ISSN, and print and ebooks have different ISBNs.\nInclude the title and ISSN/ISBN when you first deposit metadata for a content item in our system (if applicable) Include both print and electronic ISSNs/ISBNs (if applicable) If the journal does not have an ISSN at the time of registering content for it, include a title-level DOI for the journal. Once the ISSN is known, deposits should include both the ISSN and the journal-level DOI. Ideally, you would also update the metadata for all the previously registered content to include the ISSN. 
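For example, a journal registered with both a print and an electronic ISSN carries two issn elements in its journal_metadata. This sketch reuses the journal title from the journal deposit example elsewhere in this documentation, and the electronic ISSN value is illustrative:
<journal_metadata language="en">
  <full_title>Applied Physics Letters</full_title>
  <abbrev_title>Appl. Phys. Lett.</abbrev_title>
  <issn media_type="print">0003-6951</issn>
  <issn media_type="electronic">1077-3118</issn>
</journal_metadata>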
If you have any queries, please contact us.\nWe do not verify your title and ISSN combination with an external agency, but we carry out a check digit validation on every ISSN deposited. Once a title or ISSN is deposited, a new publication with the same title or ISSN can\u0026rsquo;t be created. If you try to make another deposit using a title and ISSN combination that does not match the combination in our system, the deposit will not work. Learn more about updating title records, including ISSNs/ISBNs.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/license-information/", "title": "License information", "subtitle":"", "rank": 4, "lastmod": "2024-02-16", "lastmod_ts": 1708041600, "section": "Documentation", "tags": [], "description": "Copyright is a type of intellectual property, which allows the copyright owner to protect against others copying or reproducing their work. Copyright arises automatically when a work that qualifies for protection is created. Scholarly communications relies on researchers sharing, adapting, and building on the work of others, so a license (an official permission or permit) is needed in order for copyrighted content to be used in these ways.\nIncluding license information (or access indicators) in your deposit is very helpful in letting readers know how they can access and use your content, for example, in text and data mining.", "content": "Copyright is a type of intellectual property, which allows the copyright owner to protect against others copying or reproducing their work. Copyright arises automatically when a work that qualifies for protection is created. Scholarly communications relies on researchers sharing, adapting, and building on the work of others, so a license (an official permission or permit) is needed in order for copyrighted content to be used in these ways.\nIncluding license information (or access indicators) in your deposit is very helpful in letting readers know how they can access and use your content, for example, in text and data mining. You can include access indicators in metadata deposits.\nExamples of licenses BMJ - Text and Data Mining (TDM) Policy and License Copyright Clearance Center’s About Copyright Creative Commons - Share your work DOAJ - Licensing guide Elsevier - Copyright IEEE - License Agreements JISC Collections - Guide to the Model license PKP - Contributor License Agreement An additional element (\u0026lt;ai:program\u0026gt;) has been added from schema version 4.3.2 to support the access indicators schema (AccessIndicators.xsd).\nLicense information metadata collected includes:\nfree-to-read status (free_to_read) license URL element (license_ref) start_date attribute, optional, date format YYYY-MM-DD applies_to attribute, optional, allowed values are: vor (version of record) am (accepted manuscript) tdm (text mining) stm-asf (Article Sharing Framework) Note that free-to-read is an access indicator, separate from the license. It’s used to show that a work is available at no charge for a limited time, but would normally be behind a paywall.\nAccess indicators may be included in a metadata deposit, submitted as a resource-only deposit, or as a supplemental metadata upload, and may be included with Crossmark metadata where applicable. 
The ai namespace must be included in the schema declaration, for example:\n\u0026lt;doi_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-data-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; xmlns:ai=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34; version=\u0026#34;4.4.2\u0026#34;\u0026gt; Best practice for license information This guidance for members on how to register better license metadata with us is to help academic institutions identify content written by their researchers, and how this content may be used, particularly in an automated, machine-readable way.\nInstitutions need to know which article version may be exposed on an open repository, and from what date. It is no longer sufficient simply to describe in words how they may calculate the embargo end-date, for example, by referring them to a general set of terms and conditions that apply to all of your content across its whole lifecycle – they need to know whether this version of this article can be exposed on their repository and, if so, from what specific date, and what repository readers can then do with the content they find there.\nOur schema contains all the fields you need to specify this unambiguously. By doing so, you can also be more confident that institutions will have the information they need to respect your terms and conditions.\nIn this section, learn more about:\nHow we collect license information Example: Green OA with Creative Commons license Example: Green OA with member-defined post-embargo license Example: Gold OA Use cases How to populate your metadata with license information How we collect license information A single Crossref DOI can be associated with metadata relating to multiple versions of a work: the author\u0026rsquo;s accepted manuscript (AAM), version of record (VoR), or a version intended for text and data mining (TDM). Each of these versions can have their own license conditions attached to them. To reflect this, works registered with us can have multiple license elements. Each license element can contain a URL to a license, the article version to which the license applies, and the license start date. Together, these can describe nuanced license terms across different versions of the work. An analysis done by Jisc of our metadata found that while 48% of journal articles published in 2017 had license information, the licenses most often referred to the text and data mining version of the work, and licenses were still being used inconsistently for the version of record (VoR) or accepted manuscript (AM). A major concern is that many members link to their general terms and conditions rather than to licenses that apply at specific times to specific versions of a work. 
For example, a member may set its policies out in a general terms and conditions page, and link to it in the license metadata:\n\u0026lt;license_ref applies_to=\u0026#34;vor\u0026#34; start_date=\u0026#34;2019-01-01\u0026#34;\u0026gt; http://www.publisherwebsite.com/general_terms_and_conditions \u0026lt;/license_ref\u0026gt; On the terms and conditions page, the member could spell out, for example, the license that applies to the VoR, the restrictions that apply to the AAM during its embargo period, and details of how the AAM may be used after its embargo period. A repository manager would then have to go through the terms and conditions, and manually calculate the embargo end date, in order to determine whether the work could be deposited to a repository. This is a prohibitively onerous process for institutions, and risks content being used outside the terms of member policies because of human error. It would be helpful if members could instead set out specific licenses for each stage in each article’s lifecycle, for each of its versions. If the licensing terms for a version will change (for example, because it may be exposed on a repository after an embargo period), then a separate license should be used, with the start_date element indicating when the new license comes into effect. Using start dates for this license information is best practice in general, as it can validate immediate open access, which is at the heart of many institutional and funder policies. This is set out in more detail in the examples below.\nExample: Green OA with Creative Commons license In this example, a work is published on 1 January 2019. Under the member’s policy, the VoR is under access controls. The AAM is under embargo for a six-month period and then becomes open access under a CC BY NC ND license. Green OA with Creative Commons license\nBy using a Creative Commons license with a start date, the embargo end date can be unambiguously deduced from the metadata.\nExample: Green OA with member-defined post-embargo license Linking to a Creative Commons license is optimal whenever possible, as this is an unambiguously open license and so will be readily recognizable as identifying the post-embargo period. It is also a standard license which makes it more easily machine-readable. However, if you need to define your own open license, you can instead link to that in the metadata along with the appropriate start date. Green OA with member-defined post-embargo license\nRepository managers will still be able to unambiguously distinguish works that can be made available after an embargo period, albeit involving a brief manual check, provided the license identifies itself explicitly as referring specifically to the post-embargo period. It would not be suitable to provide a single URL containing license terms for both the pre-embargo and post-embargo period, for example:\n\u0026lt;license_ref applies_to=\u0026#34;am\u0026#34; start_date=\u0026#34;2019-01-01\u0026#34;\u0026gt; http://www.publisherwebsite.com/am_ general_terms \u0026lt;/license_ref\u0026gt; This would not allow institutions to unambiguously determine the embargo end date and license, and so should be avoided.\nExample: Gold OA In the case of gold OA, the licenses are simple: both the AAM and the VoR have an open license (in this example, CC BY) that starts no later than the date of publication. The start date could optionally be omitted entirely, since the license terms will apply for the article’s lifetime. 
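A minimal sketch of this gold OA case (illustrative dates and a CC BY license URL; adapt to your own publication date and chosen open license):\n\u0026lt;license_ref applies_to=\u0026#34;vor\u0026#34; start_date=\u0026#34;2019-01-01\u0026#34;\u0026gt;https://creativecommons.org/licenses/by/4.0/\u0026lt;/license_ref\u0026gt; \u0026lt;license_ref applies_to=\u0026#34;am\u0026#34; start_date=\u0026#34;2019-01-01\u0026#34;\u0026gt;https://creativecommons.org/licenses/by/4.0/\u0026lt;/license_ref\u0026gt; 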
Gold OA license\nUse cases Having clear, unambiguous license metadata helps institutions use the content within your terms and conditions. For example, an institution could query our APIs to find works published by researchers at their organisation (provided you have also populated the affiliations of all the (co-)authors), and check programmatically for the presence and with-effect dates of any open license(s). This would show whether (and if so when) the work can be exposed on their repository.\nHow to add license information to your Crossref metadata There are multiple ways that members can add license information to the metadata they deposit/have deposited with us:\nAdd license information to your regular deposits Register license information as part of a resource-only deposit with only license information to populate existing metadata records - learn more about resource-only deposits Use a .csv file with license information to populate existing metadata records and view an example .csv file for license metadata How to register license information as part of a metadata deposit \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2013\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;13\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;ai:program name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;ai:free_to_read start_date=\u0026#34;2011-02-11\u0026#34;/\u0026gt; \u0026lt;ai:license_ref applies_to=\u0026#34;vor\u0026#34; start_date=\u0026#34;2011-02-11\u0026#34;\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/license\u0026lt;/ai:license_ref\u0026gt; \u0026lt;/ai:program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/openAI_test2\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/test\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; How to register license information as part of a resource-only deposit \u0026lt;body\u0026gt; \u0026lt;!-- license updates with dates / free to read info included--\u0026gt; \u0026lt;lic_ref_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/pubdate1\u0026lt;/doi\u0026gt; \u0026lt;ai:program name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;ai:free_to_read/\u0026gt; \u0026lt;ai:license_ref applies_to=\u0026#34;vor\u0026#34; start_date=\u0026#34;2011-01-11\u0026#34;\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/vor-license\u0026lt;/ai:license_ref\u0026gt; \u0026lt;ai:license_ref applies_to=\u0026#34;am\u0026#34; start_date=\u0026#34;2012-01-11\u0026#34;\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/am-license\u0026lt;/ai:license_ref\u0026gt; \u0026lt;ai:license_ref applies_to=\u0026#34;tdm\u0026#34; start_date=\u0026#34;2012-01-11\u0026#34;\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/tdm-license\u0026lt;/ai:license_ref\u0026gt; \u0026lt;/ai:program\u0026gt; \u0026lt;/lic_ref_data\u0026gt; \u0026lt;!-- license updates with just license URL included--\u0026gt; \u0026lt;lic_ref_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/pubdate1\u0026lt;/doi\u0026gt; \u0026lt;ai:program name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;ai:free_to_read/\u0026gt; \u0026lt;ai:license_ref\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/vor-license\u0026lt;/ai:license_ref\u0026gt; \u0026lt;/ai:program\u0026gt; \u0026lt;/lic_ref_data\u0026gt; \u0026lt;/body\u0026gt; ", "headings": ["Examples of licenses ","Best practice for license information ","How we collect 
license information ","Example: Green OA with Creative Commons license ","Example: Green OA with member-defined post-embargo license ","Example: Gold OA ","Use cases ","How to add license information to your Crossref metadata ","How to register license information as part of a metadata deposit ","How to register license information as part of a resource-only deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/pending-publications/", "title": "Pending publications markup guide", "subtitle":"", "rank": 4, "lastmod": "2023-10-28", "lastmod_ts": 1698451200, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering pending publications by direct deposit of XML. It is not currently possible to register the pending publications record type using one of our helper tools.\nHow to make changes to records How to update records - for Crossmark users Crossmark service users can add the withdrawal to the Crossmark metadata as a scholarly update assertion (using the update type withdrawal). Update the landing page by updating the metadata record you provide us, using the same DOI in the assertion.", "content": "This guide gives markup examples for members registering pending publications by direct deposit of XML. It is not currently possible to register the pending publications record type using one of our helper tools.\nHow to make changes to records How to update records - for Crossmark users Crossmark service users can add the withdrawal to the Crossmark metadata as a scholarly update assertion (using the update type withdrawal). Update the landing page by updating the metadata record you provide us, using the same DOI in the assertion. See example of a full deposit for an XML example. If you choose, you can publish a separate update, and then link to the new DOI in the Crossmark metadata.\nThe green banner saying Manuscript has been accepted will change to a red banner saying Accepted manuscript has been withdrawn. See the following withdrawn pending publication example landing page with its associated metadata deposit.\nHow to update records - for non Crossmark users Our system will not be able to identify your pending publication as withdrawn, which means the green Manuscript has been accepted banner will remain on the landing page. Therefore it is critical that you write a clear statement about the withdrawal in the Intent to Publish statement. The Intent to Publish statement is supplied in the XML element - see the default Intent to publish statement message below.\nCustomizations You can personalize the display of the Crossref-hosted landing page with the following information:\nmember/society/journal logo custom wording for the intent to publish statement display of all provided optional extra metadata such as article title, funder identifiers, ORCID iDs, license information Crossmark to handle the rare occasions when a member rescinds acceptance. If you’d like to display a custom logo on your pending publication landing page, please email us the logo, and include your member name, prefix, and a note indicating that the logo is to be used for pending publication. We accept both JPEG and PNG files (dimensions should be 112px by 112px).\nIntent to publish statement This is the default intent to publish statement shown on the landing page:\nThis paper has been accepted for publication so its publisher has pre-registered a Crossref DOI. 
This persistent identifier and link [DOI INSERTED HERE] can already be shared by authors and readers, as it will redirect to the published article when available.\nWe encourage you to provide your own custom statement in the metadata which will replace the default statement.\nThe intent to publish statement can be used to convey any information you’d like to share about the forthcoming publication event (such as process, timeline). In the event that you delay publication or withdraw the publication, this statement may be very informative for your community - both the readers on your platform as well as the systems that consume Crossref metadata.\nMetadata reference examples for pending publication Here are three examples of pending publication records. The full record contains the full range of metadata accepted for pending publication. The basic record has the bare minimum metadata required. The withdrawn record is for a pending publication that the member has decided not to publish.\nDOI Description Links Full This record contains the full range of metadata accepted for a pending publication. Deposit XML, XML API Basic This basic record contains only the required metadata for a pending publication. We apply the default Intent to Publish statement in the absence of a custom message. Here, some authors have ORCID iDs with affiliation data; others do not. Deposit XML, XML API Withdrawn In this record of a withdrawn pending publication, the member has registered a pending publication and then updated the record to reflect the withdrawal. This example is an in situ scholarly update with no separate update published. Deposit XML, XML API Example of a full deposit \u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:mml=\u0026#34;http://www.w3.org/1998/Math/MathML\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.4.2\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;org.crossref.early.001\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;000001\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;pending_publication\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Josiah\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Carberry\u0026lt;/surname\u0026gt; \u0026lt;ORCID\u0026gt;https://orcid.org/0000-0002-1825-0097\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Megan\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Strongjackplum\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;University of Los Angeles\u0026lt;/affiliation\u0026gt; 
\u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Sonder\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Meander\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;editor\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Matt\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Techthespian\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;New York Institute of Technology\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication\u0026gt; \u0026lt;full_title\u0026gt;Journal of Psychoceramics\u0026lt;/full_title\u0026gt; \u0026lt;doi\u0026gt;10.5555/1234567890\u0026lt;/doi\u0026gt; \u0026lt;/publication\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Processing Fragmented Postmodernity: Cake for the Disenchanted\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;acceptance_date\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;27\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2018\u0026lt;/year\u0026gt; \u0026lt;/acceptance_date\u0026gt; \u0026lt;intent_statement\u0026gt; This article has been peer reviewed and accepted for publication by Crossref University Press. It is slated to publish on Dec 11.\u0026lt;/intent_statement\u0026gt; \u0026lt;fr:program xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; National Science Foundation \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34; \u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100000001\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CNS-1228930\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CNS-2345567\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; Eva Crane Trust \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;10.13039/100012660\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;ECTA20160303\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;ECTA30498\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;Financial Authority\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;Federal Wilderness Commission\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;Foundation of Minor Health\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;MED00001234\u0026lt;/fr:assertion\u0026gt; 
\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34; name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;license_ref applies_to=\u0026#34;vor\u0026#34; start_date=\u0026#34;2018-12-11\u0026#34; \u0026gt;http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/\u0026lt;/license_ref\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;doi\u0026gt;10.5555/pending-publication-multi-author-funder-test\u0026lt;/doi\u0026gt; \u0026lt;/pending_publication\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a basic deposit \u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:mml=\u0026#34;http://www.w3.org/1998/Math/MathML\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.4.2\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;org.crossref.early.001\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;000002\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;pending_publication\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Josiah\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Carberry\u0026lt;/surname\u0026gt; \u0026lt;ORCID\u0026gt;https://orcid.org/0000-0002-1825-0097\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Fran\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Whitmorsup\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;University of Los Angeles\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Sandra\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Questcheck\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication\u0026gt; \u0026lt;full_title\u0026gt;Journal of Psychoceramics\u0026lt;/full_title\u0026gt; \u0026lt;doi\u0026gt;10.5555/1234567890\u0026lt;/doi\u0026gt; \u0026lt;/publication\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Theorizing Interfering Appropriation: Headphones for All Ears\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;acceptance_date\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;28\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2018\u0026lt;/year\u0026gt; \u0026lt;/acceptance_date\u0026gt; \u0026lt;doi\u0026gt;10.5555/pending-publication-multi-author-crossmark-test\u0026lt;/doi\u0026gt; 
\u0026lt;/pending_publication\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a withdrawal \u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:mml=\u0026#34;http://www.w3.org/1998/Math/MathML\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.4.2\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;org.crossref.early.001\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;000001\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;pending_publication\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Josiah\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Carberry\u0026lt;/surname\u0026gt; \u0026lt;ORCID\u0026gt;https://orcid.org/0000-0002-1825-0097\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication\u0026gt; \u0026lt;full_title\u0026gt;Journal of Psychoceramics\u0026lt;/full_title\u0026gt; \u0026lt;doi\u0026gt;10.5555/1234567890\u0026lt;/doi\u0026gt; \u0026lt;/publication\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Categorizing Belligerent Violence: Rockets and/in the Other \u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;acceptance_date\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;28\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2018\u0026lt;/year\u0026gt; \u0026lt;/acceptance_date\u0026gt; \u0026lt;intent_statement\u0026gt;The article has been withdrawn according to the Crossref Policy on Article in Press Withdrawal.\u0026lt;/intent_statement\u0026gt; \u0026lt;crossmark\u0026gt; \u0026lt;crossmark_version\u0026gt;1\u0026lt;/crossmark_version\u0026gt; \u0026lt;crossmark_policy\u0026gt;10.5555/crossmark_policy\u0026lt;/crossmark_policy\u0026gt; \u0026lt;crossmark_domains\u0026gt; \u0026lt;crossmark_domain\u0026gt; \u0026lt;domain\u0026gt;psychoceramics.labs.crossref.org\u0026lt;/domain\u0026gt; \u0026lt;/crossmark_domain\u0026gt; \u0026lt;/crossmark_domains\u0026gt; \u0026lt;crossmark_domain_exclusive\u0026gt;true\u0026lt;/crossmark_domain_exclusive\u0026gt; \u0026lt;updates\u0026gt; \u0026lt;update type=\u0026#34;withdrawal\u0026#34; date=\u0026#34;2018-07-28\u0026#34; \u0026gt;10.5555/crossmark-withdrawal-test-for-pendingpub\u0026lt;/update\u0026gt; \u0026lt;/updates\u0026gt; \u0026lt;/crossmark\u0026gt; \u0026lt;doi\u0026gt;10.5555/crossmark-withdrawal-test-for-pendingpub\u0026lt;/doi\u0026gt; \u0026lt;/pending_publication\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["How to make changes to records ","How to update records - for Crossmark users ","How to update records - for non Crossmark users ","Customizations ","Intent to 
publish statement ","Metadata reference examples for pending publication ","Example of a full deposit ","Example of a basic deposit ","Example of a withdrawal "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/mathml/", "title": "MathML", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "MathML may be included in the title, subtitle, original_language_title, and abstract elements. The MathML namespace (mml) must be defined in the schema declaration, for example:\n\u0026lt;doi_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-data-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:mml=\u0026#34;http://www.w3.org/1998/Math/MathML\u0026#34;​\u0026gt; Note that all MathML markup must include an mml namespace prefix:\n\u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Selectron production at an \u0026lt;mml:math\u0026gt;\u0026lt;mml:msup\u0026gt;\u0026lt;mml:mi\u0026gt;e\u0026lt;/mml:mi\u0026gt;\u0026lt;mml:mo\u0026gt;\u0026amp;#x02212;\u0026lt;/mml:mo\u0026gt;\u0026lt;/mml:msup\u0026gt;\u0026lt;mml:msup\u0026gt;\u0026lt;mml:mi\u0026gt;e\u0026lt;/mml:mi\u0026gt;\u0026lt;mml:mo\u0026gt;\u0026amp;#x02212;\u0026lt;/mml:mo\u0026gt;\u0026lt;/mml:msup\u0026gt;\u0026lt;/mml:math\u0026gt; linear collider with transversely polarized beams\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; ", "content": "MathML may be included in the title, subtitle, original_language_title, and abstract elements. 
The MathML namespace (mml) must be defined in the schema declaration, for example:\n\u0026lt;doi_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-data-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:mml=\u0026#34;http://www.w3.org/1998/Math/MathML\u0026#34;​\u0026gt; Note that all MathML markup must include an mml namespace prefix:\n\u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Selectron production at an \u0026lt;mml:math\u0026gt;\u0026lt;mml:msup\u0026gt;\u0026lt;mml:mi\u0026gt;e\u0026lt;/mml:mi\u0026gt;\u0026lt;mml:mo\u0026gt;\u0026amp;#x02212;\u0026lt;/mml:mo\u0026gt;\u0026lt;/mml:msup\u0026gt;\u0026lt;mml:msup\u0026gt;\u0026lt;mml:mi\u0026gt;e\u0026lt;/mml:mi\u0026gt;\u0026lt;mml:mo\u0026gt;\u0026amp;#x02212;\u0026lt;/mml:mo\u0026gt;\u0026lt;/mml:msup\u0026gt;\u0026lt;/mml:math\u0026gt; linear collider with transversely polarized beams\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/multi-language/", "title": "Translated and multi-language materials", "subtitle":"", "rank": 4, "lastmod": "2023-01-18", "lastmod_ts": 1674000000, "section": "Documentation", "tags": [], "description": "Much of the content in Crossref is English language, but we encourage members to register content in the appropriate language for the content being registered. We support UTF-8 encoded character sets and in many cases you will be able to supply multiple versions of titles, abstracts, and other metadata.\nMulti-language content We currently provide limited support for multi-language content. If you consider your content to be multi-language and not a translation (meaning it will be cited as a single item) register one DOI for the item, and include titles and abstracts in multiple languages in your metadata record as allowed (Note that support for this currently varies by record type).", "content": "Much of the content in Crossref is English language, but we encourage members to register content in the appropriate language for the content being registered. We support UTF-8 encoded character sets and in many cases you will be able to supply multiple versions of titles, abstracts, and other metadata.\nMulti-language content We currently provide limited support for multi-language content. If you consider your content to be multi-language and not a translation (meaning it will be cited as a single item) register one DOI for the item, and include titles and abstracts in multiple languages in your metadata record as allowed (Note that support for this currently varies by record type). Order is important in input metadata - if the English title is provided as the first title in your metadata, then the English title will be displayed in citations generated from our metadata.\nTranslated content If your content is translated register separate DOIs for each translation, and connect translations with relationship metadata using the relationship hasTranslation. 
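As a rough sketch with a placeholder DOI (this assumes the hasTranslation relationship is deposited through the same relations program markup used in the preprint examples later in this documentation; check the relations schema for the element that fits your case):\n\u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;intra_work_relation relationship-type=\u0026#34;hasTranslation\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5555/translation_sample_doi\u0026lt;/intra_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; 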
Registering separate DOIs and connecting them in this way is essential if translated items have differing metadata such as article IDs or page numbers.\nIf the translations are registered and connected via relationships, it is not necessary to include titles and other metadata in multiple languages. Note that for items with separate DOIs, we do not aggregate cited-by matches or search results.\n\u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;When your best metadata isn\u0026#39;t good enough: working with an imperfect specification\u0026lt;/title\u0026gt; \u0026lt;original_language_title language=\u0026#34;fr\u0026#34;\u0026gt;Quand vos meilleures métadonnées ne suffisent pas: travailler avec une spécification imparfaite\u0026lt;/original_language_title\u0026gt; \u0026lt;/titles\u0026gt; ", "headings": ["Multi-language content","Translated content"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/posted-content-includes-preprints/", "title": "Posted content (includes preprints) markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This guide gives markup examples for members registering posted content (includes preprints) by direct deposit of XML. It is not currently possible to register the posted content (includes preprints) record type using one of our helper tools.\nDepositing and updating posted content Posted content is a Crossref record type available starting with schema version 4.4.0. The schema updates include a number of posted content-specific elements. The top level element is called posted_content and has an attribute called type.", "content": "This guide gives markup examples for members registering posted content (includes preprints) by direct deposit of XML. It is not currently possible to register the posted content (includes preprints) record type using one of our helper tools.\nDepositing and updating posted content Posted content is a Crossref record type available starting with schema version 4.4.0. The schema updates include a number of posted content-specific elements. The top level element is called posted_content and has an attribute called type. This attribute is given a value from an enumerated list that defines the nature of the posted content. The current set of enumerations is:\npreprint working_paper letter dissertation report other The default value is preprint. Please contact us if you want to deposit metadata for a posted record type which is not on this list.\nMetadata elements The posted record type contains the following elements (* = required):\nElement Description group_title The hosting platform may organize its posted content into categories or subject areas. 
This field is used to name the container for the posted item contributors Container for author information titles* The titles (title and subtitle) of the posted content posted_date* The date when the posted content became available online on the hosting platform acceptance_date The date the content item was submitted to and accepted by the hosting platform institution Container for information about an organization that sponsored or hosted an item but is not the publisher program: funding Source of funding applicable to research related to the posted content program: access indicators License terms program: relations Relationships (other than bibliographic citation) to other works, such as found in acknowledgments or list of supplemental material doi_data* Container for persistent identifier and URL citation_list Posted content bibliography listing of citations to other works Updating metadata with relationship to AAM/VoR Once a posted content item has been published, the posted content publisher must update their publication metadata with the AAM/VoR DOI using the isPreprintOf relation type.\n\u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;!--DOI of the AM / VOR--\u0026gt; \u0026lt;intra_work_relation relationship-type=\u0026#34;isPreprintOf\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5555/preprint_sample_doi_vor\u0026lt;/intra_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; The relationship metadata may be updated with either a full metadata deposit (existing metadata plus the ‘relationship’ metadata) or as a resource-only deposit. Metadata deposited using the resource schema will be appended to the existing metadata.\nPosted content and conflicts When a posted content item is submitted for a published item that has a registered metadata record, a conflict is created. The conflict is resolved when the posted content item is updated with the relationship metadata for the published item\u0026rsquo;s DOI. Learn more about the conflict report.\nPosted content and Cited-by matches When you query for Cited-by matches, you can choose to include posted content matches in your results. By default, posted content is not included. 
To retrieve matches including posted content:\nHTTPS queries Add the include_postedcontent=true parameter to your query, for example:\nhttps://doi.crossref.org/servlet/getForwardLinks?usr=XXXX\u0026amp;pwd=XXXX\u0026amp;doi=10.5555/12345678\u0026amp;include_postedcontent=true XML queries Add the include_postedcontent=\u0026quot;true\u0026quot; attribute to your fl_query element, for example:\n\u0026lt;?xml version = \u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;head\u0026gt;\u0026lt;doi_batch_id\u0026gt;eXtyles Request AMP.dodge0724.doc__76\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;fl_query include_postedcontent=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt;\u0026lt;/fl_query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; Example of a posted content deposit Review the sample below or download an XML file.\n\u0026lt;posted_content\u0026gt; \u0026lt;!--group_title: Prepublication content items may be organized into groupings within a given publisher. This element provides for naming the group. It is expected that publishers will have a small number of groups each of which reflect a topic or subject area. (required)--\u0026gt; \u0026lt;group_title\u0026gt;Metadata Quality\u0026lt;/group_title\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Dorothy\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Depositor\u0026lt;/surname\u0026gt; \u0026lt;ORCID\u0026gt;http\\://orcid.org/0000-0002-4011-3590\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Mind your \u0026amp;lt; and \u0026amp;gt;: why XML needs to be valid\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;!--posted_date: The date when the posted content became available online on the hosting platform (required)--\u0026gt; \u0026lt;posted_date\u0026gt; \u0026lt;month\u0026gt;01\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;15\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;1971\u0026lt;/year\u0026gt; \u0026lt;/posted_date\u0026gt; \u0026lt;!--acceptance_date: date the content item was submitted to and accepted by the hosting platform. (optional) --\u0026gt; \u0026lt;acceptance_date\u0026gt; \u0026lt;month\u0026gt;01\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;01\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;1971\u0026lt;/year\u0026gt; \u0026lt;/acceptance_date\u0026gt; \u0026lt;!--optional funding and license data--\u0026gt; \u0026lt;program name=\u0026#34;fundref\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; U.S. 
Department of Energy \u0026lt;assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;100000015\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;award_number\u0026#34;\u0026gt;DE-FG03-03SF22691\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; U.S. Department of Energy \u0026lt;assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;100000015\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;award_number\u0026#34;\u0026gt;DE-AC52-06NA27279\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34;\u0026gt; \u0026lt;free_to_read start_date=\u0026#34;2016-01-01\u0026#34;/\u0026gt; \u0026lt;license_ref\u0026gt;http://some.co.org/license_page.html\u0026lt;/license_ref\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;!--DOI and URL (required)--\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/preprint_sample_doi_1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/index.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;!--citation list for the item (optional)--\u0026gt; \u0026lt;citation_list\u0026gt; \u0026lt;citation key=\u0026#34;pp1\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt; \u0026lt;/posted_content\u0026gt; Example of a posted content deposit containing a relationship to a VOR Review the sample below or download an XML file.\n\u0026lt;!-- relationship established with VOR DOI (required when VOR is identified)--\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;intra_work_relation relationship-type=\u0026#34;isPreprintOf\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5555/preprint_sample_doi_vor\u0026lt;/intra_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;!--DOI and URL (required)--\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/preprint_sample_doi_1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/index.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; Example of a journal article deposit that includes a relationship to posted content Review the sample below or download an XML file.\n\u0026lt;publication_date\u0026gt; \u0026lt;year\u0026gt;2016\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;!--relationship established with posted content (preprint) DOI--\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;intra_work_relation relationship-type=\u0026#34;hasPreprint\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.50505/preprint_sample_doi_1\u0026lt;/intra_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/preprint_sample_doi_vor\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/index.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; Example of email notification 
of posted content match Member Example Publishing has deposited DOI 10.5555/preprint_sample_doi_vor (http://0-doi-org.libus.csd.mu.edu/10.5555/preprint_sample_doi_vor) claiming it is the VoR for your posted content DOI 10.50505/preprint_sample_doi_1. Please display a link to the Version of Record from your posted content online. Linking posted content to the published record is critical to enabling the full history of scholarly results, and ensuring that the citation record is clear and up-to-date. If you have questions please contact support@crossref.org and one of our colleagues (in the EST timezone) will get back to you. Many thanks, Crossref\n", "headings": ["Depositing and updating posted content ","Metadata elements ","Updating metadata with relationship to AAM/VoR ","Posted content and conflicts ","Posted content and Cited-by matches ","HTTPS queries ","XML queries ","Example of a posted content deposit ","Example of a posted content deposit containing a relationship to a VOR ","Example of a journal article deposit that includes a relationship to posted content ","Example of email notification of posted content match "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/references/", "title": "References", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "Registering references means submitting them as part of your metadata deposit. It is optional but strongly encouraged, especially if you use our Cited-by service.\nNote that registering references is not the same as reference linking - learn more about the differences.\nThe benefits of registering references as part of your metadata include:\nmaking your content more discoverable enabling evaluation of research, and helping with citation counts. Whenever you register content with us, make sure you include your references in the submission.", "content": "Registering references means submitting them as part of your metadata deposit. It is optional but strongly encouraged, especially if you use our Cited-by service.\nNote that registering references is not the same as reference linking - learn more about the differences.\nThe benefits of registering references as part of your metadata include:\nmaking your content more discoverable enabling evaluation of research, and helping with citation counts. Whenever you register content with us, make sure you include your references in the submission.\nIncluding references (or adding them to an existing deposit) can be done by:\nCrossref XML plugin for OJS: You must first enable References as a submission metadata field and then enable the Crossref reference linking plugin, to include references in your initial deposit, or add them later. Web deposit form: the web deposit form can’t currently be used to add references when you first register your content, but you can use Simple Text Query to match references and add them to an existing record. Metadata Manager: If you\u0026rsquo;re still using the deprecated Metadata Manager, there’s a field where you can add references and Metadata Manager will even match your references to their DOIs. If you want to add references to an existing deposit, simply find the existing journal record, add your references, and resubmit. Learn more about updating article metadata using Metadata Manager. Direct deposit of XML: you can include references in your original deposit, or add them later. 
Learn more at how to deposit references for users of direct deposit of XML. Detailed information for users of direct deposit of XML In this section, learn more about:\nCurrent elements for citation tagging Metadata deposit example Resource-only deposit example When depositing references as part of your content registration XML, mark up individual citations according to our deposit schema section. For example:\n\u0026lt;citation key=\u0026#34;ref1\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Current Opinion in Oncology\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;Chauncey\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;13\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;21\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;2001\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; Marking up each element allows us to be very precise when identifying potential matches with registered DOIs.\nIf you know the DOIs of individual citations, include them. We\u0026rsquo;ll use the metadata deposited for the DOI when generating Cited-by matches:\n\u0026lt;citation key=\u0026#34;ref2\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/small_md_0001\u0026lt;/doi\u0026gt; \u0026lt;/citation\u0026gt; References may also be included as an unstructured citation. This option is not as precise as including an already-matched DOI or marking up a citation into individual elements.\n\u0026lt;citation key=\u0026#34;ref=3\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;Clow GD, McKay CP, Simmons Jr. GM, and Wharton RA, Jr. 1988. Climatological observations and predicted sublimation rates at Lake Hoare, Antarctica. Journal of Climate 1:715-728.\u0026lt;/unstructured_citation\u0026gt; \u0026lt;/citation\u0026gt; As well as for depositing conventional references, data and software citations can also be deposited by inserting them into an item’s references metadata. To do so, follow the general process for depositing references as described above. Members can deposit the full data or software citation as an unstructured reference, or they can employ any number of reference tags currently accepted by us. It’s always best to include the DOI (either DataCite or Crossref) for the dataset if possible.\nCurrent elements for citation tagging \u0026lt;issn\u0026gt;: ISSN of a series (print or electronic) \u0026lt;journal_title\u0026gt; \u0026lt;author\u0026gt;: first author of an article or other publication \u0026lt;volume\u0026gt;: volume number (journal or book set) \u0026lt;issue\u0026gt;: journal issue \u0026lt;first_page\u0026gt; \u0026lt;cYear\u0026gt;: year of publication \u0026lt;article_title\u0026gt;: journal article, conference paper, or book chapter title \u0026lt;isbn\u0026gt; \u0026lt;series_title\u0026gt;: title of a book or conference series \u0026lt;volume_title\u0026gt;: book or conference proceeding title \u0026lt;edition_number\u0026gt; \u0026lt;article_title\u0026gt; \u0026lt;std_designator\u0026gt; \u0026lt;standards_body_name\u0026gt; \u0026lt;standards_body_acronym\u0026gt; \u0026lt;component_number\u0026gt;: the chapter, section, part number for a content item in a book \u0026lt;unstructured_citation\u0026gt;: citations for which no structured data is available. 
Our ability to process unstructured citations is limited (learn more about querying with formatted citations) \u0026lt;doi\u0026gt;: include the DOI wherever possible Learn more about the \u0026lt;citation\u0026gt; element in the schema documentation.\nAll citation elements are optional, but please submit as much information as you can to help us match your citations to DOIs.\nJournal citations should include either \u0026lt;issn\u0026gt; or \u0026lt;journal_title\u0026gt; or both. \u0026lt;journal_title\u0026gt; only is preferred over \u0026lt;issn\u0026gt; only. In addition the first author (\u0026lt;author\u0026gt;) and \u0026lt;first_page number\u0026gt; should be submitted. \u0026lt;first_page\u0026gt; number is preferred, but for those citations that are \u0026ldquo;in press\u0026rdquo;, the author should be submitted.\nWhen submitting a book or conference citation, you should include an \u0026lt;isbn\u0026gt;, \u0026lt;series_title\u0026gt;, \u0026lt;volume_title\u0026gt;, or any combination of these three elements as may be available.\nA citation for a standard must include a standard designator (\u0026lt;std_designator\u0026gt;) as well as the name and acronym of a standards body (\u0026lt;standards_body_name\u0026gt;, \u0026lt;standards_body_acronym\u0026gt;). These elements are required for identifying a citation of a standard.\nMetadata deposit example References may be included with a metadata deposit. The references are included within the \u0026lt;citation_list\u0026gt; element. Review the example below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schema/deposit/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;1dbb27d1030c6c9d9d-7ff0\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;200504260247\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;your name\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;your@email.com\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;WEB-FORM\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata\u0026gt; \u0026lt;full_title\u0026gt;Test Publication\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;TP\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;12345678\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;12\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;1\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2005\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;12\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;1\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;!--This is the article\u0026#39;s metadata--\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Article 12292005 9:32\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; 
\u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Bob\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Surname\u0026lt;/surname\u0026gt; \u0026lt;ORCID\u0026gt;http\\://orcid.org/0000-0002-4011-3590\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;12\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;1\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2004\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;100\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;200\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/test_20051229930\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;!--the list of references cited in the above article--\u0026gt; \u0026lt;citation_list\u0026gt; \u0026lt;citation key=\u0026#34;ref1\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Current Opinion in Oncology\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;Chauncey\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;13\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;21\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;2001\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;ref2\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/small_md_0001\u0026lt;/doi\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;ref=3\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;Clow GD, McKay CP, Simmons Jr. GM, and Wharton RA, Jr. 1988. Climatological observations and predicted sublimation rates at Lake Hoare, Antarctica. Journal of Climate 1:715-728.\u0026lt;/unstructured_citation\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Resource-only deposit example References may be added to an existing metadata record using a resource-only deposit. 
Review the example below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.3.6\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.6\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.3.6 http://0-www-crossref-org.libus.csd.mu.edu/schema/deposit/doi_resources4.3.6.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;your name\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;doi_citations\u0026gt; \u0026lt;!--The DOI of the article that contains the citations --\u0026gt; \u0026lt;doi\u0026gt;10.5555/small_md_0001\u0026lt;/doi\u0026gt; \u0026lt;!--The list of references cited in the above article --\u0026gt; \u0026lt;citation_list\u0026gt; \u0026lt;citation key=\u0026#34;ref1\u0026#34;\u0026gt; \u0026lt;journal_title\u0026gt;Current Opinion in Oncology\u0026lt;/journal_title\u0026gt; \u0026lt;author\u0026gt;Chauncey\u0026lt;/author\u0026gt; \u0026lt;volume\u0026gt;13\u0026lt;/volume\u0026gt; \u0026lt;first_page\u0026gt;21\u0026lt;/first_page\u0026gt; \u0026lt;cYear\u0026gt;2001\u0026lt;/cYear\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;ref2\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/small_md_0001\u0026lt;/doi\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;ref=3\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;Clow GD, McKay CP, Simmons Jr. GM, and Wharton RA, Jr. 1988. Climatological observations and predicted sublimation rates at Lake Hoare, Antarctica. Journal of Climate 1:715-728.\u0026lt;/unstructured_citation\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt; \u0026lt;/doi_citations\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Detailed information for users of direct deposit of XML ","Current elements for citation tagging ","Metadata deposit example ","Resource-only deposit example "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/reports-and-working-papers/", "title": "Reports and working papers markup guide", "subtitle":"", "rank": 5, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This page gives markup examples for members registering reports and working papers by direct deposit of XML. You can also register the reports and working papers record type using one of our helper tools: web deposit form.\n\u0026lt;report-paper\u0026gt; is the container for all information about a single report or working paper. If you need to register articles for more than one report, you must use multiple instances of \u0026lt;report-paper\u0026gt;. These may be included within the same deposit file.", "content": "This page gives markup examples for members registering reports and working papers by direct deposit of XML. You can also register the reports and working papers record type using one of our helper tools: web deposit form.\n\u0026lt;report-paper\u0026gt; is the container for all information about a single report or working paper. 
If you need to register articles for more than one report, you must use multiple instances of \u0026lt;report-paper\u0026gt;. These may be included within the same deposit file.\nTechnical reports and working papers are typically assigned a single identifier, but identifiers may also be assigned to sub-sections of the report (such as chapters) as needed using the \u0026lt;content_item\u0026gt; element. Report registration files may include a publisher name (within \u0026lt;publisher\u0026gt;) and/or institution name (within \u0026lt;institution\u0026gt; depending on the organization issuing the report.\nReports/working papers may also be deposited as a series.\nExample of a single report Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20050606110604\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;report-paper\u0026gt; \u0026lt;report-paper_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D.S.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;McShane\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt; Title Sludge Handling System Conceptual Design Document \u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;edition_number\u0026gt;0\u0026lt;/edition_number\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;03\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;03\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Office of Scientific and Technical Information\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;Washington, DC\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;United States Department of Energy\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;USDOE(EM)\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Washington, DC\u0026lt;/institution_place\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Fluor Daniel Northwest\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;FDNW\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Aliso Viejo, CA\u0026lt;/institution_place\u0026gt; \u0026lt;/institution\u0026gt; 
\u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;report-number\u0026#34;\u0026gt;abc123\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;contract_number\u0026gt;AC06-96RL13200\u0026lt;/contract_number\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9999/osti-806888\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt; http://198.232.211.23/pdwdocs/fsd0001/osti/2001/I0004856.pdf \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/report-paper_metadata\u0026gt; \u0026lt;/report-paper\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a report with ‘chapters’ Review the sample below or download an XML file.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.3.7\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20050606110604\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Sample Master\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;report-paper\u0026gt; \u0026lt;report-paper_metadata language = \u0026#34;en\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence = \u0026#34;first\u0026#34; contributor_role = \u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D.S.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;McShane\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Title Sludge Handling System Conceptual Design Document\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;edition_number\u0026gt;0\u0026lt;/edition_number\u0026gt; \u0026lt;publication_date media_type = \u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;03\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;03\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Office of Scientific and Technical Information\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;Washington, DC\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;United States Department of Energy\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;USDOE(EM)\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Washington, DC\u0026lt;/institution_place\u0026gt; \u0026lt;institution_department\u0026gt;Office of Environmental Management\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;Fluor Daniel Northwest\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;FDNW\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Aliso Viejo, 
CA\u0026lt;/institution_place\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;report-number\u0026#34;\u0026gt;abc123\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;contract_number\u0026gt;AC06-96RL13200\u0026lt;/contract_number\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9999/abcd-806888\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/pdwdocs/fsd0001/osti/2001/I0004856.pdf\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/report-paper_metadata\u0026gt; \u0026lt;content_item component_type=\u0026#34;part\u0026#34;\u0026gt; \u0026lt;titles\u0026gt;\u0026lt;title\u0026gt;Introduction\u0026lt;/title\u0026gt;\u0026lt;/titles\u0026gt; \u0026lt;pages\u0026gt;\u0026lt;first_page\u0026gt;3\u0026lt;/first_page\u0026gt;\u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9999/abcd-806888.p1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/pdwdocs/fsd0001/osti/2001/p1\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;content_item component_type=\u0026#34;part\u0026#34;\u0026gt; \u0026lt;titles\u0026gt;\u0026lt;title\u0026gt;A chapter title\u0026lt;/title\u0026gt;\u0026lt;/titles\u0026gt; \u0026lt;pages\u0026gt;\u0026lt;first_page\u0026gt;17\u0026lt;/first_page\u0026gt;\u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9999/abcd-806888.p2\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/pdwdocs/fsd0001/osti/2001/p2\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;content_item component_type=\u0026#34;part\u0026#34;\u0026gt; \u0026lt;titles\u0026gt;\u0026lt;title\u0026gt;Appendix\u0026lt;/title\u0026gt;\u0026lt;/titles\u0026gt; \u0026lt;pages\u0026gt;\u0026lt;first_page\u0026gt;32\u0026lt;/first_page\u0026gt;\u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.9999/abcd-806888.p3\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/pdwdocs/fsd0001/osti/2001/p3\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/content_item\u0026gt; \u0026lt;/report-paper\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a report series Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20050606110604\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;CrossRef\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;report-paper\u0026gt; \u0026lt;report-paper_series_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;series_metadata\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;CrossRef Report 
Series\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;issn\u0026gt;5555-5555\u0026lt;/issn\u0026gt; \u0026lt;/series_metadata\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Bob\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Surname\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Depositing report series with CrossRef\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;edition_number\u0026gt;0\u0026lt;/edition_number\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;03\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;03\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2009\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Publishers International Linking Association\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;Lynnfield, MA\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;institution\u0026gt; \u0026lt;institution_name\u0026gt;CrossRef\u0026lt;/institution_name\u0026gt; \u0026lt;institution_acronym\u0026gt;CR\u0026lt;/institution_acronym\u0026gt; \u0026lt;institution_place\u0026gt;Lynnfield, MA\u0026lt;/institution_place\u0026gt; \u0026lt;institution_department\u0026gt;Metadata Quality\u0026lt;/institution_department\u0026gt; \u0026lt;/institution\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number item_number_type=\u0026#34;Report Number\u0026#34;\u0026gt;IMA-RPT\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;contract_number\u0026gt;AC06-96RL13200\u0026lt;/contract_number\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/sampledoi\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/report/\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/report-paper_series_metadata\u0026gt; \u0026lt;/report-paper\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Example of a single report ","Example of a report with ‘chapters’ ","Example of a report series "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/relationships/", "title": "Relationships", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "We maintain an expansive set of relationship types to support the various content items that a research object, like a journal article, might link to. For data and software, we ask you to provide the following information:\nidentifier of the dataset/software identifier type: DOI, Accession, PURL, ARK, URI, Other (additional identifier types are also accepted beyond those used for data or software, including ARXIV, ECLI, Handle, ISSN, ISBN, PMID, PMCID, and UUID) relationship type: isSupplementedBy or references (use the former if it was generated as part of the research results) description of dataset or software We and DataCite both use this kind of linking.", "content": "We maintain an expansive set of relationship types to support the various content items that a research object, like a journal article, might link to. 
For data and software, we ask you to provide the following information:\nidentifier of the dataset/software\nidentifier type: DOI, Accession, PURL, ARK, URI, Other (additional identifier types are also accepted beyond those used for data or software, including ARXIV, ECLI, Handle, ISSN, ISBN, PMID, PMCID, and UUID)\nrelationship type: isSupplementedBy or references (use the former if it was generated as part of the research results)\ndescription of dataset or software\nWe and DataCite both use this kind of linking. Data repositories which register their content with DataCite follow the same process and apply the same metadata tags. This means that we achieve direct data interoperability with links in the reverse direction (data and software repositories to journal articles).\nYou can see illustrations and examples of this schema in our data and software citation guide.\nDeclaring relationship types The possible relationship types between content items can be as varied as the items themselves. We use a controlled vocabulary to define these relationships, in order to construct an orderly mapped network of content.\nThis is achieved by (i) an implicit approach, where the relation type is a function of a specific service and is declared in the structure of the deposited XML, and (ii) an explicit approach, where the relation type is selected as a value within the deposited metadata.\nReference linking and Cited-by: implicitly creates cites and isCitedBy relationships between a content item and the items in its bibliography\nCrossmark: explicit creation of update relations between an item and other items that materially affect it (for example, a retraction)\nFunding data: implicit creation of isFundedBy and hasAward relationships between an item and the funding source that supported the underlying research\nLinked clinical trials: implicit creation of a belongsTo relationship between an item and a registered clinical trial\nComponents: implicit creation of an isChildOf relationship between an item and its elemental parts that are assigned their own DOI (limited parent relation typing)\nGeneral typed relations: explicitly typed relation between an item with a Crossref DOI and an item with one of several possible identifiers.
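To illustrate the fields listed above, here is a minimal, hypothetical sketch of how a dataset link could be expressed inside an article deposit, using the relations program shown in the examples further down this page (the description and accession number are placeholders, not real values):\n\u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;!--placeholder description and accession number--\u0026gt; \u0026lt;description\u0026gt;Dataset generated as part of the reported results\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;isSupplementedBy\u0026#34; identifier-type=\u0026#34;accession\u0026#34;\u0026gt;GSE00000\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt;\nA references relation (rather than isSupplementedBy) would be the choice when the dataset or software is cited but was not generated as part of the research results.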
Relationship types for associated research objects: intra-work (within a work). Each entry below gives the description, followed by its reciprocal relationship types:\nExpression: isExpressionOf, hasExpression\nFormat: isFormatOf, hasFormat\nIdentical: isIdenticalTo\nManifestation: isManifestationOf, hasManifestation\nManuscript: isManuscriptOf, hasManuscript\nPreprint: isPreprintOf, hasPreprint\nReplacement: isReplacedBy, Replaces\nTranslation: isTranslationOf, hasTranslation\nVariant: isVariantFormOf, isOriginalFormOf\nVersion: isVersionOf, hasVersion\nRelationship types for associated research objects: inter-work (between works). Each entry below gives the description, followed by its reciprocal relationship types:\nBasis: isBasedOn, isBasisFor\nComment: isCommentOn, hasComment\nContinuation: isContinuedBy, Continues\nDerivation: isDerivedFrom, hasDerivation\nDocumentation: isDocumentedBy, Documents\nFunding: finances, isFinancedBy\nPart: isPartOf, hasPart\nPeer review: isReviewOf, hasReview\nReferences: references, isReferencedBy\nRelated material, such as a protocol: isRelatedMaterial, hasRelatedMaterial\nReply: isReplyTo, hasReply\nRequirement: requires, isRequiredBy\nSoftware compilation: isCompiledBy, compiles\nSupplement, such as a dataset generated as part of research results: isSupplementTo, isSupplementedBy\nGeneral typed relations This service allows for the creation of a typed relationship between an item with a Crossref DOI and another content item. The other item may be represented by another Crossref DOI, a DOI from some other Registration Agency, or an item not identified with a DOI. When DOIs are used, the deposit process will fail if the DOI does not exist. Non-DOI identifiers are not verified.\n\u0026lt;xsd:attributeGroup name=\u0026#34;relations_type.atts\u0026#34;\u0026gt; \u0026lt;xsd:attribute name=\u0026#34;identifier-type\u0026#34; use=\u0026#34;required\u0026#34;\u0026gt; \u0026lt;xsd:simpleType\u0026gt; \u0026lt;xsd:restriction base=\u0026#34;xsd:string\u0026#34;\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;doi\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;issn\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;isbn\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;uri\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;pmid\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;pmcid\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;purl\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;arxiv\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;ark\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;handle\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;uuid\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;ecli\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;accession\u0026#34;/\u0026gt; \u0026lt;xsd:enumeration value=\u0026#34;other\u0026#34;/\u0026gt; \u0026lt;/xsd:restriction\u0026gt; \u0026lt;/xsd:simpleType\u0026gt; \u0026lt;/xsd:attribute\u0026gt;\nWhen DOIs are used, a bidirectional relation is automatically created by us when a relation is created in the deposit of one item in a pair. The DOI with metadata creating the relation is said to be the claimant; the other item does not need to have its metadata directly contain the relationship.\nExample: translated article A single journal article is published in two languages with each being assigned its own DOI. In this example, both are published in the same journal. The original language instance has metadata that contains no indication of the translation instance. 
The alternative language instance includes in its metadata a relation to the original language instance. Here is a screenshot of the relevant section in the code. Please refer to the code snippet below to see it in context.\nShow image × \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Um artigo na língua original, que passa a ser o inglês\u0026lt;/title\u0026gt; \u0026lt;original_language_title language=\u0026#34;en\u0026#34;\u0026gt;An article in its original language which happens to be English\u0026lt;/original_language_title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Daniel\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Stepputtis\u0026lt;/surname\u0026gt; \u0026lt;ORCID authenticated=\u0026#34;true\u0026#34;\u0026gt;http://orcid.org/0000-0003-4824-1631\u0026lt;/ORCID\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;02\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;28\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2013\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Portuguese translation of an article\u0026lt;/description\u0026gt; \u0026lt;intra_work_relation relationship-type=\u0026#34;isTranslationOf\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5555/original_language\u0026lt;/intra_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/translation\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; Example: book review This example has a book review published as an article in the journal The Holocene. 
The article\u0026rsquo;s title, taken from the publisher\u0026rsquo;s site is \u0026ldquo;Book Review: Understanding the Earth system: compartments, processes and interactions\u0026rdquo; where this book has the DOI https://0-doi-org.libus.csd.mu.edu/10.1007/978-3-642-56843-5.\nA: The current metadata for the review article gives no indication of the actual book being reviewed: \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;The Holocene\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;The Holocene\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;0959-6836\u0026lt;/issn\u0026gt; \u0026lt;issn media_type=\u0026#34;electronic\u0026#34;\u0026gt;1477-0911\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;27\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2016\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;12\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;4\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Book Review: Understanding the Earth system: compartments, processes and interactions\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ian\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Fairchild\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;Keele University\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;27\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2016\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;505\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;505\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;doi\u0026#34;\u0026gt;10.1191/0959683602hl565xx\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;ai:program xmlns:ai=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34; name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;ai:license_ref applies_to=\u0026#34;tdm\u0026#34;\u0026gt;http://0-journals-sagepub-com.libus.csd.mu.edu/page/policies/text-and-data-mining-license\u0026lt;/ai:license_ref\u0026gt; \u0026lt;/ai:program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1191/0959683602hl565xx\u0026lt;/doi\u0026gt; B: Modifications to the review\u0026rsquo;s metadata 
show how it would include a relationship to the book Here is a screenshot of the relevant section in the code. Please refer to the code snippet below to see it in context.\nShow image × \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;The Holocene\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;The Holocene\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;0959-6836\u0026lt;/issn\u0026gt; \u0026lt;issn media_type=\u0026#34;electronic\u0026#34;\u0026gt;1477-0911\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;27\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2016\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;12\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;4\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Book Review: Understanding the Earth system: compartments, processes and interactions\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ian\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Fairchild\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;Keele University\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;07\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;27\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2016\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2002\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;505\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;505\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;identifier id_type=\u0026#34;doi\u0026#34;\u0026gt;10.1191/0959683602hl565xx\u0026lt;/identifier\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;isReviewOf\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt; 10.1007/978-3-642-56843-5 \u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1191/0959683602hl565xx\u0026lt;/doi\u0026gt; C: Meanwhile, the book\u0026rsquo;s deposited metadata shows no indication of the relation to the review article: \u0026lt;book book_type=\u0026#34;other\u0026#34;\u0026gt; \u0026lt;book_metadata language=\u0026#34;en\u0026#34;\u0026gt; 
\u0026lt;contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;editor\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Eckart\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Ehlers\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;editor\u0026#34; sequence=\u0026#34;additional\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Thomas\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Krafft\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Understanding the Earth System\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2001\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;isbn media_type=\u0026#34;print\u0026#34;\u0026gt;978-3-540-67515-0\u0026lt;/isbn\u0026gt; \u0026lt;isbn media_type=\u0026#34;electronic\u0026#34;\u0026gt;978-3-642-56843-5\u0026lt;/isbn\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;Springer Berlin Heidelberg\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;Berlin, Heidelberg\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;ai:program xmlns:ai=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34; name=\u0026#34;AccessIndicators\u0026#34;\u0026gt; \u0026lt;ai:license_ref applies_to=\u0026#34;tdm\u0026#34;\u0026gt;http://0-www-springer-com.libus.csd.mu.edu/tdm\u0026lt;/ai:license_ref\u0026gt; \u0026lt;/ai:program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1007/978-3-642-56843-5\u0026lt;/doi\u0026gt; D: Book DOI\u0026rsquo;s metadata showing the relationship Here is a screenshot of the relevant section in the code. 
Please refer to the code snippet below to see it in context.\nShow image × \u0026lt;query status=\u0026#34;resolved\u0026#34;\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.7554/eLife.42135\u0026lt;/doi\u0026gt; \u0026lt;crm-item name=\u0026#34;publisher-name\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;eLife Sciences Publications, Ltd\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;prefix-name\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;eLife Sciences Publications, Ltd.\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;member-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;4374\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;citation-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;104997326\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;journal-id\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;189365\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;deposit-timestamp\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;20190402090010\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;owner-prefix\u0026#34; type=\u0026#34;string\u0026#34;\u0026gt;10.7554\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;last-update\u0026#34; type=\u0026#34;date\u0026#34;\u0026gt;2019-04-02T09:00:31Z\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;created\u0026#34; type=\u0026#34;date\u0026#34;\u0026gt;2019-02-25T13:00:23Z\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;citedby-count\u0026#34; type=\u0026#34;number\u0026#34;\u0026gt;3\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;relation\u0026#34; type=\u0026#34;doi\u0026#34; claim=\u0026#34;isPreprintOf\u0026#34;\u0026gt;10.1101/425587\u0026lt;/crm-item\u0026gt; \u0026lt;crm-item name=\u0026#34;relation\u0026#34; type=\u0026#34;doi\u0026#34; claim=\u0026#34;isReviewOf\u0026#34;\u0026gt;10.3410/f.735157928.793558703\u0026lt;/crm-item\u0026gt; \u0026lt;doi_record\u0026gt; \u0026lt;crossref xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/xschema/1.1\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/xschema/1.1 http://0-doi-crossref-org.libus.csd.mu.edu/schemas/unixref1.1.xsd\u0026#34;\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;eLife\u0026lt;/full_title\u0026gt; Example: linked dataset An article with a Crossref DOI identifies that data represented by a DataCite DOI was used in the research and was mentioned in the article\u0026rsquo;s acknowledgment section.\nThe article\u0026rsquo;s Crossref deposited XML:\n\u0026lt;doi_record\u0026gt; \u0026lt;crossref xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/xschema/1.1\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/xschema/1.1 http://0-doi-crossref-org.libus.csd.mu.edu/schemas/unixref1.1.xsd\u0026#34;\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;Journal of Psychoceramics\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;Journal of Psychoceramics\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;electronic\u0026#34;\u0026gt;0264-3561\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; 
\u0026lt;day\u0026gt;06\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;06\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;5\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;11\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt; Dog: A Methodology for the Development of Simulated Annealing \u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Josiah\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Carberry\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;06\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;06\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;3\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;crossmark\u0026gt; \u0026lt;crossmark_policy\u0026gt;10.5555/crossmark_policy\u0026lt;/crossmark_policy\u0026gt; \u0026lt;crossmark_domains\u0026gt; \u0026lt;crossmark_domain\u0026gt; \u0026lt;domain\u0026gt;psychoceramics.labs.crossref.org\u0026lt;/domain\u0026gt; \u0026lt;/crossmark_domain\u0026gt; \u0026lt;/crossmark_domains\u0026gt; \u0026lt;crossmark_domain_exclusive\u0026gt;false\u0026lt;/crossmark_domain_exclusive\u0026gt; \u0026lt;updates\u0026gt; \u0026lt;update type=\u0026#34;correction\u0026#34; date=\u0026#34;2012-05-12\u0026#34;\u0026gt;10.5555/12345681\u0026lt;/update\u0026gt; \u0026lt;/updates\u0026gt; \u0026lt;custom_metadata/\u0026gt; \u0026lt;/crossmark\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Acknowledgement mention of dataset use.\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;isBasedOn\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5284/1000389\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345681\u0026lt;/doi\u0026gt; \u0026lt;timestamp\u0026gt;201601211508\u0026lt;/timestamp\u0026gt; \u0026lt;resource\u0026gt; http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/10.5555-12345681.html \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/crossref\u0026gt; \u0026lt;/doi_record\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/body\u0026gt; You can also see the article\u0026rsquo;s deposited metadata 
in JSON.\n", "headings": ["Declaring relationship types ","Relationship types for associated research objects: intra-work (within a work) ","Relationship types for associated research objects: inter-work (between works) ","General typed relations ","Example: translated article ","Example: book review ","A: The current metadata for the review article gives no indication of the actual book being reviewed:","B: Modifications to the review\u0026rsquo;s metadata show how it would include a relationship to the book","C: Meanwhile, the book\u0026rsquo;s deposited metadata shows no indication of the relation to the review article:","D: Book DOI\u0026rsquo;s metadata showing the relationship","Example: linked dataset "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-record-types/standards/", "title": "Standards markup guide", "subtitle":"", "rank": 4, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Documentation", "tags": [], "description": "This page gives markup examples for members registering standards by direct deposit of XML. It is not currently possible to register the standards record type using one of our helper tools.\n\u0026lt;standard\u0026gt; is the top-level element for deposit of metadata about standards developed by Standards Development Organizations (SDOs) or consortia. Standards are assigned a DOI at the title level and may also have DOIs assigned to lower level content-items. Standards deposits contain several pieces of standard-specific metadata, including standard designators and standards body information.", "content": "This page gives markup examples for members registering standards by direct deposit of XML. It is not currently possible to register the standards record type using one of our helper tools.\n\u0026lt;standard\u0026gt; is the top-level element for deposit of metadata about standards developed by Standards Development Organizations (SDOs) or consortia. Standards are assigned a DOI at the title level and may also have DOIs assigned to lower level content-items. Standards deposits contain several pieces of standard-specific metadata, including standard designators and standards body information. Standards may only be deposited with schema version 4.3.6 and above.\nDesignator types All standards have a designator that is used as a primary identifier. crossref4.3.6.xsd allows for a number of designator types to be applied to a DOI. A primary designator must be included in each metadata deposit. One of the following designator types must be supplied:\nAs-published: captured in \u0026lt;std_designator\u0026gt; (child of \u0026lt;std_as_published\u0026gt;), designator for the standard being deposited. This is an item-level designator. Typically includes the year of initial publication plus additional information such as amendment number and/or revision/reaffirmation year, for example: ASTM D6/D6M-95(2011)e1 is a designator for an ASTM standard. Any undated, family, and set designators related to the designator supplied in \u0026lt;std_designator\u0026gt; may be recorded in the corresponding attributes, for example: \u0026lt;std_designator undated=\u0026#34;ASTM D6/D6M-95\u0026#34;\u0026gt; ASTM D6/D6M-95(2011)\u0026lt;/std_designator\u0026gt; Optional: an alternative as-published designator may be recorded in \u0026lt;std_alt_as_published\u0026gt;.This is intended to accommodate minor changes to a standard that do not merit assigning a new DOI. 
A variant form designator (see below) may also be supplied to accommodate differing forms of a designator Undated: captured in \u0026lt;std_undated_designator\u0026gt;. An undated designator removes the year component that specifies a particular revision. Undated designators refer to a single document series. For example: ASTM C90, IEC 60601-2-11, ISO/IEC 19757-2 Family: captured in \u0026lt;std_family_designator\u0026gt;, a collection of standards which are conceptually grouped together where that grouping is not necessarily reflected in the designator in an obvious way.For example, the ISO 9000 family includes ISO 9001, ISO 9004, ISO 19011 Set: captured in \u0026lt;std_set_designator\u0026gt;. A set, also referred to as truncated form, is composed of several parts (standards that are divided into separate documents). For example, ISO 19757 is a standard in 11 parts, with the individual documents known as ISO 19757-2 where the \u0026ldquo;-\u0026rdquo; denotes a part. Optional designators Some optional designators may be supplied in deposits in addition to a required dated, undated, family, or set designator. They are supplied to accommodate query matching but are not considered title-level designators. These are:\nVariant form: Alternative versions of a designator may be supplied in \u0026lt;std_variant_form\u0026gt;. Variant form captures stylized forms that don’t accurately reflect the true standard designator but are needed due to business practices (for example, IEEE formal designators have \u0026ldquo;std\u0026rdquo; while the display of them does not). Variant forms may be applied to \u0026lt;std_alt_as_published\u0026gt;, \u0026lt;std_as_published\u0026gt;, \u0026lt;std_set_designator\u0026gt; and \u0026lt;std_undated_designator\u0026gt; Alternative script: captured in \u0026lt;std_alt_script\u0026gt;, accommodates designators that are published using multiple character sets Supersedes: captured in \u0026lt;std_supersedes\u0026gt;. Designator for standard being replaced by the standard being deposited. Adopted from: captured in \u0026lt;std_adopted_from\u0026gt;. Designator for standard from which the current deposit is adopted. Revision of: captured in \u0026lt;std_revision_of\u0026gt;. Designator for the previous revision of the standard being deposited. Standards body information The \u0026lt;standards_body\u0026gt; wrapper element has two children:\n\u0026lt;standards_body_name\u0026gt;: the full name of the standards body \u0026lt;standards_body_acronym\u0026gt;: acronym of the standards body Both child elements are required and should reflect the standards body name and acronym used when the standard was published. Changes in standards body names and acronyms over time will be accounted for within Crossref’s query mechanism.\nHistory Crossref began accepting metadata deposits for standards in 2005. The schema was modified significantly for standards with the inception of the Standards Technical Working Group. 
Significant changes to the deposit and indexing of designators were made with schema version 4.3.6, as a result standards may only be deposited schema versions 4.3.6 and above.\nExample of a standard deposit Review the sample below or download an XML file.\n\u0026lt;doi_batch xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;4.3.7\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.7 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.7.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;20050606110604\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;CrossRef\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;pfeeney@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;CrossRef\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;standard\u0026gt; \u0026lt;standard_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;organization sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt;CrossRef Standards TWG\u0026lt;/organization\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Fun with standards\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;designators\u0026gt; \u0026lt;std_as_published\u0026gt; \u0026lt;std_designator\u0026gt;CR 1234\u0026lt;/std_designator\u0026gt; \u0026lt;/std_as_published\u0026gt; \u0026lt;/designators\u0026gt; \u0026lt;approval_date\u0026gt; \u0026lt;month\u0026gt;04\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;17\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;1995\u0026lt;/year\u0026gt; \u0026lt;/approval_date\u0026gt; \u0026lt;publisher\u0026gt; \u0026lt;publisher_name\u0026gt;CrossRef\u0026lt;/publisher_name\u0026gt; \u0026lt;publisher_place\u0026gt;Bethesda, MD\u0026lt;/publisher_place\u0026gt; \u0026lt;/publisher\u0026gt; \u0026lt;standards_body\u0026gt; \u0026lt;standards_body_name\u0026gt;CrossRef\u0026lt;/standards_body_name\u0026gt; \u0026lt;standards_body_acronym\u0026gt;CRABC\u0026lt;/standards_body_acronym\u0026gt; \u0026lt;/standards_body\u0026gt; \u0026lt;publisher_item\u0026gt; \u0026lt;item_number item_number_type=\u0026#34;designation\u0026#34;\u0026gt;12083\u0026lt;/item_number\u0026gt; \u0026lt;/publisher_item\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/standard1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/abc\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/standard_metadata\u0026gt; \u0026lt;/standard\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Designator types ","Optional designators ","Standards body information ","History ","Example of a standard deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/titles/", "title": "Titles", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "This information relates to the title of a work, such as a journal article, book or book chapter, or conference paper. 
For advice on registering the title of a series, such as a journal, book series, or conference proceedings, learn more about journal title management.\nThe title of your work is used for citation matching, so follow these best practices to make sure your metadata can be used correctly by reference management tools:", "content": "This information relates to the title of a work, such as a journal article, book or book chapter, or conference paper. For advice on registering the title of a series, such as a journal, book series, or conference proceedings, learn more about journal title management.\nThe title of your work is used for citation matching, so follow these best practices to make sure your metadata can be used correctly by reference management tools:\nReview how the title is treated or changed throughout the various stages of your production workflow Title must be in title or sentence case (not ALL CAPS) Title field must not include other metadata such as author, price, volume numbers Use separate title elements for different language titles - do not cram multiple titles in multiple languages into one element Subtitles should be recorded in a separate subtitle element Use UTF-8 encoding May include face markup, LaTeX, or MathML where appropriate If you need to update or correct a title, learn more about updating title records.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/verify-your-registration/", "title": "Verify your registration", "subtitle":"", "rank": 4, "lastmod": "2024-10-07", "lastmod_ts": 1728259200, "section": "Documentation", "tags": [], "description": "The quickest way to test whether your DOI and its associated metadata have been registered successfully (and your DOI is now active) is to enter your DOI link (DOI displayed as a link, such as https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy) into a browser window, and check if it resolves correctly. DOI 10.13003/5jchdy has been registered so it is resolving to that DOI\u0026rsquo;s landing page (or, resolution URL). DOIs that have not been registered will resolve to a DOI NOT FOUND error message on doi.", "content": "The quickest way to test whether your DOI and its associated metadata have been registered successfully (and your DOI is now active) is to enter your DOI link (DOI displayed as a link, such as https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy) into a browser window, and check if it resolves correctly. DOI 10.13003/5jchdy has been registered so it is resolving to that DOI\u0026rsquo;s landing page (or, resolution URL). DOIs that have not been registered will resolve to a DOI NOT FOUND error message on doi.org, such as https://0-doi-org.libus.csd.mu.edu/10.13003/unregisteredDOI.\nIf your DOI doesn\u0026rsquo;t resolve successfully, read on for more information about the process your submission goes through, why there might be a delay, and which messages you’ll receive depending on your submission method.\nVerify your registration - web deposit form Verify your registration - grant registration form Verify your registration - if you’re still using the deprecated Metadata Manager Verify your registration - direct deposit of XML using our admin tool Verify your registration - XML deposit using HTTPS POST Verify your registration - Crossref XML plugin for OJS Verify your registration - web deposit form If you register your content using the web deposit form, your submission is sent to a submission queue. 
You’ll see a “success” message in the web deposit form confirming that your submission has been successfully sent to our submission queue, but this doesn’t mean that your registration is complete.\nAs your submission is processed in the queue, we send you two messages:\nXML record email, subject line: Crossref WebDeposit - XML. This email includes the XML created by the web deposit form. Do keep this information, as it may be useful in the future. Receiving this email is a confirmation that your file has been received for processing, and entered into our submission queue.\nsubmission log email, subject line: Crossref Submission ID. This email is sent once your XML has made it through the queue, includes your submission ID, tells you if your deposit has been successful, and provides the reason for any failure.\nIf your submission log email tells you that your submission was successful, your DOI is now live and active (or your update to metadata for an existing DOI has worked).\nIf your submission failed, please address the errors flagged in the confirmation, and resubmit. Learn more about error messages.\nIf you don’t receive your submission log email immediately, it’s probably because your submission is still in the queue. It can stay in the queue between several minutes and several hours depending on how large your submission file is, and how busy our submission queue is at that time. Learn more about how to view the submission queue.\nIf you don’t receive your submission log email and you can’t see your submission in the queue, it may be that your access to register content has been suspended due to unpaid invoices. If this is the case, please contact us.\nVerify your registration - grant deposit form The grant registration form registers your record in real time, with no queueing or delay. If your submission has been successful, you’ll see a “success” message, which means your DOI is now live and active or your update to an existing DOI has worked.\nYour “success” message will also contain a submission ID. If you need to, you can log into our admin tool using your account credentials and use this submission ID to view your deposit.\nIf your submission hasn’t been successful, you’ll see an error message explaining the problem.\nVerify your registration - if you\u0026rsquo;re still using the deprecated Metadata Manager The Metadata Manager tool is in beta and contains many bugs. It’s being deprecated at the end of 2021. We recommend using the web deposit tool as an alternative, or the OJS plugin if your content is hosted on the OJS platform from PKP.\nIf you’re still using Metadata Manager, here’s how to verify your registration.\nUnlike other content registration methods, Metadata Manager registers content in real-time - with no queueing of content. If your submission has been successful, you’ll see a “success” message, which means that your DOI is now live and active (or your update to metadata for an existing DOI has worked).\nYour \u0026ldquo;success\u0026rdquo; message will also contain a submission ID. 
If you need to, you can log in to our admin tool using your account credentials and use this submission ID to view your deposit.\nIf your submission hasn’t been successful, you’ll see a warning symbol - click on this to see the error message explaining the problem.\nLearn more about submitting a deposit, and reviewing deposit results in Metadata Manager.\nVerify your registration - direct deposit of XML using our admin tool Submissions using our admin tool are sent to a submission queue. Once your submission has been accepted into the queue we display a SUCCESS - Your batch submission was successfully received message. This means that your deposit has been submitted to our processing queue, but it has not yet been processed.\nRegistration of your content only occurs after your submission has worked its way through the queue, when you will receive an email with the subject line Crossref Submission ID, which includes your submission ID, tells you if your deposit has been successful, and provides the reason for any failure.\nIf your deposit has been successful, then your new DOI is live and active (or your update to metadata for an existing DOI has worked).\nIf your submission failed, please address the errors flagged in the email, and resubmit. Not sure what the error messages mean and what you need to do? Learn more about error messages.\nIf you don’t receive your submission log email immediately, it’s probably because your submission is still in the queue. It can stay in the queue between several minutes and several hours depending on how large your submission file is, and how busy our submission queue is at that time. Learn more about how to view the submission queue.\nIf you don’t receive your submission log email and you can’t see your submission in the queue, it may be that your access to register content has been suspended due to unpaid invoices. If this is the case, please contact us.\nVerify your registration - XML deposit using HTTPS POST Most items registered with us are submitted via HTTPS POST. When files are POSTed to our system, you’ll receive a 200 status message to confirm that we’ve received it. Your files are then added to a submission queue to await processing, and once your submission has been processed, you’ll receive a submission log (either by email or through the notification callback service if you have that enabled).\nIf your submission log shows a success, then your DOI is live and active (or your update to metadata for an existing DOI has worked).\nIf your submission log shows a failure, please address the errors flagged in the email, and resubmit. Not sure what the error messages mean and what you need to do? Learn more about error messages.\nThere may be a delay between your submission being received by the queue and completing processing. It can stay in the queue between several minutes and several hours depending on how large your submission file is, and how busy our submission queue is at that time. 
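If you deposit by HTTPS POST from a script, the exchange looks roughly like this sketch (Python with the requests package; the servlet URL and the operation, login_id, and login_passwd parameter names are the commonly documented deposit parameters rather than something defined on this page, so treat them as assumptions, and replace the placeholder file name and credentials with your own):
import requests

with open("deposit.xml", "rb") as xml_file:  # your Crossref deposit XML (placeholder name)
    response = requests.post(
        "https://0-doi-crossref-org.libus.csd.mu.edu/servlet/deposit",  # assumed HTTPS POST deposit endpoint
        params={
            "operation": "doMDUpload",       # metadata deposit
            "login_id": "role",              # placeholder account credentials
            "login_passwd": "password",
        },
        files={"fname": xml_file},
        timeout=60,
    )
print(response.status_code)  # 200 only confirms receipt; wait for the submission log for the real outcome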
Learn more about how to view the submission queue.\nVerify your registration - Crossref XML plugin for OJS If you are using the Crossref XML plugin for OJS to create an XML file that you upload through our admin tool, please follow Verify your registration - direct deposit of XML using our admin tool.\nIf you are using the Crossref XML plugin for OJS to send your submission to us directly, check the status of your deposit by clicking the Articles tab at the top of the plugin settings page.\n", "headings": ["Verify your registration - web deposit form ","Verify your registration - grant deposit form ","Verify your registration - if you\u0026rsquo;re still using the deprecated Metadata Manager ","Verify your registration - direct deposit of XML using our admin tool ","Verify your registration - XML deposit using HTTPS POST ","Verify your registration - Crossref XML plugin for OJS "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/verify-your-registration/submission-queue-and-log/", "title": "Submission queue and log", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "If you register content with us using the web deposit form, XML upload via our admin tool, or XML deposit using HTTPS POST, your submission will be placed in our submission queue.\nWhen your deposit has been processed, we’ll email you a submission log containing the final status of your submission. You should review these submission logs to make sure your content was registered or updated successfully.\nIf you register content with us by sending the files to us directly using the Crossref XML plugin for OJS, or if you’re still using the deprecated Metadata Manager, your submission is processed immediately (it isn’t placed in our submission queue).", "content": "If you register content with us using the web deposit form, XML upload via our admin tool, or XML deposit using HTTPS POST, your submission will be placed in our submission queue.\nWhen your deposit has been processed, we’ll email you a submission log containing the final status of your submission. You should review these submission logs to make sure your content was registered or updated successfully.\nIf you register content with us by sending the files to us directly using the Crossref XML plugin for OJS, or if you’re still using the deprecated Metadata Manager, your submission is processed immediately (it isn’t placed in our submission queue). We don’t send you a submission log to show the final status of your submission; instead, you’ll see a message within Metadata Manager or OJS itself. But a submission log is still generated, and you can log in to our admin tool using your account credentials to view the submission log for your deposit.\nThe submission queue If you’ve registered some content with us using the web deposit form, XML upload via our admin tool, or XML deposit using HTTPS POST, and you don’t receive your submission log email immediately, it is likely that your deposit is waiting in the submission queue.\nTo see the submission queue, log in to the admin tool using your account credentials, and click Show My Submission Queue on the opening page (or click Submissions, then Show System Queue).\nAt the top of the page, you will see all the submissions that are being actively processed at the moment. 
They are listed individually by submission ID number, along with file name, file type, percent completed, and timestamps.\nThe submissions that are still waiting to be processed are displayed at the bottom of the page. They are grouped by the role used to submit the files. Click + under Details (on the left, next to your depositor ID) to expand a list of your deposits waiting to be processed. You will also see the submission ID, filename, and position in the queue.\nShow image × It typically takes only a few minutes for a submission to be picked up for processing and then for the processing to be completed. Processing may take longer depending on overall system traffic, and submission size and complexity. If there is a problem with the submission queue, we usually post an update - please check our status page for updates. If you\u0026rsquo;re concerned about your submission processing time, or are planning a large update and would like to coordinate with us about timing, please contact us.\nSubmission logs Submission logs are delivered through these channels:\nEmail The admin tool - you can view submission logs for past deposits or see the deposit history for a DOI using the admin tool. Polling - see using HTTPS to retrieve logs Notification callback service Submission log emails We email you an XML-formatted log for records that are submitted through the web deposit form or Simple Text Query, uploaded via our admin tool, or sent to us through HTTPS POST.\nThe log is sent to the email address you provided when using the web deposit form or Simple Text Query, or included in the \u0026lt;email_address\u0026gt; field in your deposit XML.\nThe email will have the subject line: Crossref Submission ID and it’s sent once your submission has made it through the queue. It includes your submission ID, tells you if your deposit has been successful, and provides the reason for any failure.\nView submission logs for past deposits If you didn\u0026rsquo;t receive a submission log email, you can use the admin tool to search for submission logs for past deposits:\nLog in to the admin tool using your account credentials Click the Submissions tab, then the Administration sub-tab Click Search at the bottom of the screen, and you\u0026rsquo;ll see a list of all past deposits for your account, from newest to oldest. Click on the Submission ID number to the left of any deposit to access the Submission details, including the submission log for that deposit, or click on the file icon to view the file that was submitted. After step 3 above, you can also narrow your search by entering parameters into any of the following fields on the Submissions administration sub-tab page:\nSelect a date range using the Last Day, Last Three Days, or Last Week buttons, or enter a custom date range to search for older deposits If your account submits metadata deposits for multiple prefixes, you can use the Registrant field to narrow your search to just the deposits for a single prefix. Click Find next to Registrant In the pop-up window, enter the member name associated with the prefix and click Submit Select the appropriate member name/prefix and the pop-up window will close. You\u0026rsquo;ll see a code for that prefix entered in the Registrant field Select a deposit type from the Type drop-down menu to limit your search to just one type of deposit. Metadata will limit results to full metadata deposits. This is the most common type. 
DOI resources will limit results to resource-only deposits, including references, Similarity Check full-text URLs, funding metadata, and license metadata Conflict Management will limit results to text files that were deposited to resolve conflicts Check the Has Error box to only search for deposits with errors. Check the Has Conflict box to only search for deposits with conflicts. View the history of a DOI Find the deposit history of an individual DOI using the admin tool, including all deposit files and submission logs.\nTo view a DOI history report:\nLog in to the admin tool using your account credentials Click the Report tab Type or paste a DOI into the box Click Show to view its report. The report lists every successful deposit or update of the DOI being searched. View the submission details (including log and submitted XML) by clicking on the submission number:\nShow image × Use HTTPS to retrieve logs In addition to the submission report you receive by email, you can also retrieve the results of submission processing or the contents of a submission at any time using HTTPS. You need to include your account credentials in the URL.\nIf you are using organization-wide shared role credentials, please use this version of the query, and swop \u0026ldquo;role\u0026rdquo; for your role, and \u0026ldquo;password\u0026rdquo; for your password.\nhttps://doi.crossref.org/servlet/submissionDownload?usr=_role_\u0026amp;pwd=_password_\u0026amp;doi_batch_id=_doi batch id_\u0026amp;file_name=filename\u0026amp;type=_submission type_ If you are using personal, unique user credentials, please use this version of the query, and swop \u0026ldquo;name@someplace.com\u0026rdquo; for your email address, \u0026ldquo;role\u0026rdquo; for your role, and \u0026ldquo;password\u0026rdquo; for your personal password.\nhttps://doi.crossref.org/servlet/submissionDownload?usr=name@someplace.com/role\u0026amp;pwd=_password_\u0026amp;doi_batch_id=_doi batch id_\u0026amp;file_name=filename\u0026amp;type=_submission type_ In both versions of the query, you can choose to track a submission by either its doi_batch_id or by its file_name. We recommend choosing file_name.\nThe main difference between using doi_batch_id and file_name is that doi_batch_id is inserted into the database after the submission has been parsed. Using file_name is preferable because submissions in the queue or in process can be tracked before deposit. Non-parse-able submissions can also be tracked using this method.\nTo use this feature effectively, make sure each tracking ID (doi_batch_id or file_name) is unique as only the first match is returned.\nFinally, you need to add in the type of data you want back. Use result to retrieve submission results (deposit log) or use contents to retrieve the XML file.\n", "headings": ["The submission queue ","Submission logs ","Submission log emails ","View submission logs for past deposits ","View the history of a DOI ","Use HTTPS to retrieve logs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/verify-your-registration/interpret-submission-logs/", "title": "Interpret submission logs", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "Submission logs contain information about the DOIs and metadata you have submitted to our system. 
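The submissionDownload query described above can also be issued from a script; a minimal sketch, assuming the Python requests package and organization-wide role credentials (placeholders shown):
import requests

params = {
    "usr": "role",               # or "name@someplace.com/role" if you use personal credentials
    "pwd": "password",
    "file_name": "deposit.xml",  # tracking by file_name works even while the submission is still queued
    "type": "result",            # "result" returns the submission log; "contents" returns the submitted XML
}
response = requests.get("https://0-doi-crossref-org.libus.csd.mu.edu/servlet/submissionDownload", params=params, timeout=60)
print(response.text)             # the XML-formatted submission log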
They let you know if your content is registered successfully, and if not, what issues need to be addressed.\nYour logs are by default emailed to the address provided in your registration XML or entered in our web form. You may also use the admin tool to search for past deposits or retrieve them by polling.", "content": "Submission logs contain information about the DOIs and metadata you have submitted to our system. They let you know if your content is registered successfully, and if not, what issues need to be addressed.\nYour logs are by default emailed to the address provided in your registration XML or entered in our web form. You may also use the admin tool to search for past deposits or retrieve them by polling.\nExamples of a log:\nfor a successful deposit with deposit errors with XML validation error with warnings containing references Example of a log for a successful deposit Note that the \u0026lt;failure_count\u0026gt; = 0 and that the \u0026lt;record_count\u0026gt; = \u0026lt;success_count\u0026gt;.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch_diagnostic\u0026gt; \u0026lt;submission_id\u0026gt;9349810\u0026lt;/submission_id\u0026gt; \u0026lt;batch_id\u0026gt;FINAL_001\u0026lt;/batch_id\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/second_conflict_003\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Successfully added\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;1\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;1\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;0\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;0\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; \u0026lt;/doi_batch_diagnostic\u0026gt; Example of a log with deposit errors This is an example of a deposit containing errors. In the example, note that the \u0026lt;success_count\u0026gt; and \u0026lt;record_count\u0026gt; do not match. A status of \u0026ldquo;Failure\u0026rdquo; indicates the record was rejected and the DOI was not registered or updated. The \u0026lt;record_diagnostic\u0026gt; for each registration failure contains an error message. Each error within a deposit should be corrected and the deposit resubmitted. 
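If you process logs programmatically, the same checks can be automated; here is a minimal sketch with the Python standard library, using input trimmed from the successful-deposit log shown above:
import xml.etree.ElementTree as ET

# Sample input trimmed from the successful-deposit log above
log_xml = """<doi_batch_diagnostic>
  <submission_id>9349810</submission_id>
  <batch_id>FINAL_001</batch_id>
  <record_diagnostic status="Success">
    <doi>10.5555/second_conflict_003</doi>
    <msg>Successfully added</msg>
  </record_diagnostic>
  <batch_data>
    <record_count>1</record_count>
    <success_count>1</success_count>
    <warning_count>0</warning_count>
    <failure_count>0</failure_count>
  </batch_data>
</doi_batch_diagnostic>"""

root = ET.fromstring(log_xml)
counts = {child.tag: int(child.text) for child in root.find("batch_data")}
print(counts)  # {'record_count': 1, 'success_count': 1, 'warning_count': 0, 'failure_count': 0}

# List any rejected records and the reason given in the log
for record in root.findall("record_diagnostic"):
    if record.get("status") == "Failure":
        print(record.findtext("doi"), "-", record.findtext("msg"))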
Learn more about error and warning messages.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch_diagnostic status=\u0026#34;completed\u0026#34; sp=\u0026#34;cr5.crossref.org\u0026#34;\u0026gt; \u0026lt;submission_id\u0026gt;394260418\u0026lt;/submission_id\u0026gt; \u0026lt;batch_id\u0026gt;314668373.xml\u0026lt;/batch_id\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Failure\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/11111\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Record not processed because submitted version: 20070904093839 is less or equal to\tpreviously submitted version (DOI match)\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Failure\u0026#34; msg_id=\u0026#34;4\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/44444\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Record not processed because submitted version: 20070904093839 is less or equal to\tpreviously submitted version (DOI match)\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/55555\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Successfully added\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;3\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;1\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;0\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;2\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; \u0026lt;/doi_batch_diagnostic\u0026gt; Example of a log with XML validation error This is an example of a submission log for a deposit with an error that prevented all DOIs from being processed. This happens when there are XML formatting issues, or if the uploaded item is not XML. Note that \u0026lt;record_count\u0026gt; and \u0026lt;failure_count\u0026gt; both equal 1. This will be true no matter how many DOIs were actually included in the submission. Learn more about error and warning messages.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch_diagnostic status=\u0026#34;completed\u0026#34; sp=\u0026#34;ds3.crossref.org\u0026#34;\u0026gt; \u0026lt;submission_id\u0026gt;394260418\u0026lt;/submission_id\u0026gt; \u0026lt;batch_id\u0026gt;314668373.xml\u0026lt;/batch_id\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Failure\u0026#34; msg_id=\u0026#34;29\u0026#34;\u0026gt; \u0026lt;doi /\u0026gt; \u0026lt;msg\u0026gt;Deposited XML is not well-formed or does not validate: Error on line 1: Content is not allowed in prolog.\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;1\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;0\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;0\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;1\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; \u0026lt;/doi_batch_diagnostic\u0026gt; Example of a log with warnings This is an example of a submission log with warnings. 
Warnings almost always indicate that DOIs have been successfully deposited and were flagged as a conflict with a previously deposited DOI.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch_diagnostic status=\u0026#34;completed\u0026#34; sp=\u0026#34;ds4.crossref.org\u0026#34;\u0026gt; \u0026lt;submission_id\u0026gt;394260418\u0026lt;/submission_id\u0026gt; \u0026lt;batch_id\u0026gt;314668373.xml\u0026lt;/batch_id\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/11112\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Successfully added\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Warning\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/11113\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Added with conflict\u0026lt;/msg\u0026gt; \u0026lt;conflict_id\u0026gt;5166446\u0026lt;/conflict_id\u0026gt; \u0026lt;dois_in_conflict\u0026gt; \u0026lt;doi\u0026gt;10.5555/22223\u0026lt;/doi\u0026gt; \u0026lt;/dois_in_conflict\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/11114\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Successfully added\u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Warning\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/11115\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Added with conflict\u0026lt;/msg\u0026gt; \u0026lt;conflict_id\u0026gt;5166447\u0026lt;/conflict_id\u0026gt; \u0026lt;dois_in_conflict\u0026gt; \u0026lt;doi\u0026gt;10.5555/22225\u0026lt;/doi\u0026gt; \u0026lt;/dois_in_conflict\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;4\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;2\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;2\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;0\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; \u0026lt;/doi_batch_diagnostic\u0026gt; Example of a log containing references This is an example of a submission log from a deposit containing references. 
Each reference in the deposit will be included in the log, identified by the citation key included in the deposit.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch_diagnostic status=\u0026#34;completed\u0026#34; sp=\u0026#34;ds5.crossref.org\u0026#34;\u0026gt; \u0026lt;submission_id\u0026gt;03480197\u0026lt;/submission_id\u0026gt; \u0026lt;batch_id\u0026gt;XYZ00000000\u0026lt;/batch_id\u0026gt; \u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/example\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Successfully updated\u0026lt;/msg\u0026gt; \u0026lt;citations_diagnostic\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0030\u0026#34; status=\u0026#34;error\u0026#34;\u0026gt;Either ISSN or Journal title or Proceedings title must be supplied.\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0005\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0010\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0015\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1590/S0006-87051960000100077\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0045\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0050\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1007/BF01916741\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0075\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0080\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1093/jxb/4.3.403\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0085\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0090\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0095\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0100\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0105\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1038/181424b0\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0110\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1038/1831600a0\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0115\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1007/BF01912405\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0120\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1038/185699a0\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0125\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0150\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0155\u0026#34; 
status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0160\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1038/1781359a0\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0165\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1093/jxb/13.1.75\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0170\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0175\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.2134/agronj1960.00021962005200080014x\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0180\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.2134/agronj1960.00021962005200080015x\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0185\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0190\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0195\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1007/BF00622243\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0200\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.4141/cjps58-055\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0205\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0210\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0215\u0026#34; status=\u0026#34;resolved_reference\u0026#34;\u0026gt;10.1038/178601a0\u0026lt;/citation\u0026gt; \u0026lt;citation key=\u0026#34;10.5555/example_bb0220\u0026#34; status=\u0026#34;stored_query\u0026#34;\u0026gt;\u0026lt;/citation\u0026gt; \u0026lt;/citations_diagnostic\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;1\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;1\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;0\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;0\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; \u0026lt;/doi_batch_diagnostic\u0026gt; ", "headings": ["Example of a log for a successful deposit ","Example of a log with deposit errors ","Example of a log with XML validation error ","Example of a log with warnings ","Example of a log containing references "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/verify-your-registration/troubleshooting-submissions/", "title": "Troubleshooting submissions", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "If you register your content with us using the web deposit form, XML file upload using our admin tool, or XML deposit using HTTPS POST, you’ll receive a submission log email from us. 
This email lets you know if your submission was successful, and if not, will provide error messages to explain more.\nIf you register your content with us using the third party Crossref XML plugin for OJS, or if you’re still using the deprecated Metadata Manager, you won’t automatically receive a submission log, and instead the success o r error message will show in your interface.", "content": "If you register your content with us using the web deposit form, XML file upload using our admin tool, or XML deposit using HTTPS POST, you’ll receive a submission log email from us. This email lets you know if your submission was successful, and if not, will provide error messages to explain more.\nIf you register your content with us using the third party Crossref XML plugin for OJS, or if you’re still using the deprecated Metadata Manager, you won’t automatically receive a submission log, and instead the success o r error message will show in your interface. However, you can still log in to the admin tool using your Crossref account credentials to retrieve submission logs if you wish.\nHere are some examples of submission logs - read on to find out more about a specific error message and how to fix the problem.\nDOIs that return a Warning status in your submission logs have been deposited, but may need extra attention. A Failure status means that either a particular DOI has not been deposited, or the entire deposit file was unable to be processed due to some error.\nWarnings Errors in title records Errors in titles Errors in title-level DOIs Errors in title ownership General title record discrepancy error Invalid ISSN or ISBN error Errors in DOI suffixes Errors in the XML Errors in timestamps Other types of errors Warnings Warning Meaning Solution Added with conflict The DOI was deposited; however, there was already another DOI in our system with identical metadata. This often occurs when an article is published ahead of print and deposited with no page numbers Review DOIs with warnings and resolve all conflicts Errors in title records In order to prevent duplicate records, and to ensure that DOIs are being assigned by the appropriate member, our deposit system includes certain checks on titles, ISSNs, and ISBNs.\nWhen you submit the very first deposit for a work associated with a title, a title record is created in our system. This record includes the title, ISSNs or ISBNs, (optional) title-level DOI, record type, and the details of the associated member.\nIn all subsequent deposits, the title, ISSNs or ISBNs, and (if included) title-level DOI must match this existing record exactly.\nThere are several errors that you may see in your submission logs that may indicate that there isn’t a match.\nErrors in titles ISSN \u0026quot;[ISSN]\u0026quot; has already been assigned, issn ([ISSN]) is assigned to another title ([title])\nProblem: The title in your submission is slightly different from the existing title record we hold.\nYou may think that the title you are depositing is the same as the title record we hold, but there are some common reasons why it may not be, such as:\nPunctuation, easily-confused characters, and punctuation marks that look similar: An em-dash looks like this —, its UTF-8 (hex) representation is e2 80 94 and in Unicode it’s U+2014. An en-dash looks like this –, its UTF-8 (hex) representation is e2 80 93 and in Unicode it’s U+2013. The default dash-like character on some keyboards is a hyphen, but on others is an en-dash, so they can easily be confused. 
A hyphen looks like this ‐, its UTF-8 (hex) representation is e2 80 90, and in Unicode it’s U+2010. Use of Cyrillic Х (Cyrillic Ha) instead of Latin X. While the Cyrillic Х (U+0425 in Unicode) looks almost exactly like the Latin X (U+0058 in Unicode), they are not the same letter. Language: if your journal is in more than one language, you need to choose one version of the title as the master entry. Typos, such as The Journal of Thigns, in the spelling of your journal title Variant spellings of your journal title, such as The Journal of Things and Journal of Things Solution 1: Change the title in your current submission to match the previously registered title and resubmit.\nSolution 2: Change the title in the existing title record - you’ll need to contact us for help with this.\nErrors in title-level DOIs Deposit contains title error: The journal has a different DOI assigned; If you want to change the journal's DOI please contact Crossref support: title=Journal of Metadata; current-doi=10.14393/JoM; deposited-doi=10.14393/JoM.1.1\nProblem: The journal level DOI that you have in your submission is slightly different from the journal level DOI in our title record.\nSolution 1: Change the journal level DOI in your submission to match the previously registered journal level DOI and resubmit.\nSolution 2 : Change the journal level DOI in the existing title record - you’ll need to contact us for help with this.\nErrors in title ownership ISSN \u0026quot;{ISSN}\u0026quot; has already been assigned to a different publisher {publisher name}({publisher prefix})\nISBN \u0026quot;{ISBN}\u0026quot; has already been assigned to a different publisher {publisher name}({publisher prefix})\nISSN \u0026quot;[ISSN]\u0026quot; has already been assigned, title/issn: [journal title]/[issn] is owned by publisher: [prefix]\nProblem: These errors indicate that the title you are depositing is owned by another member or prefix in our system.\nSolution: If you are the correct member for the title being deposited, please follow the procedures indicated in establishing and transferring ownership to have the title record transferred to your member/prefix.\nGeneral title record discrepancy error ISSN \u0026quot;{ISSN}\u0026quot; has already been assigned to a different title/publisher/record type\nProblem: This error indicates the ISSN(s), title, or publisher in your deposit do not match the data we have on record for that ISSN.\nSolution: To verify the data in the title record with a given ISSN, please search for that ISSN on the Crossref title list. Resubmit with the correct information. If you need to change your title, get in touch with us. If there are any discrepancies between the title, ISSN(s), publisher, or record type that need to be updated, please contact Support and include the submission ID for your deposit.\nInvalid ISSN or ISBN error ISSN \u0026quot;{ISSN}\u0026quot; is invalid\nISBN \u0026quot;{ISBN}\u0026quot; is invalid\nProblem: This error indicates that the ISSN or ISBN in your deposit is incorrect. All valid ISSNs and ISBNs use a check digit which is algorithmically validated by our deposit system.\nSolution: If your deposit fails with this error, please verify the ISSN or ISBN, and resubmit the deposit with the correct number.\nErrors in DOI suffixes If you see \u0026lt;msg\u0026gt;DOI: 10.5555/2411?3417 , contains invalid characters\u0026lt;/msg\u0026gt; in your submission log, your deposit has failed because within your DOI suffix 2411?3417, there is a non-approved character, indicated by the ?. 
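If you construct suffixes in code, a pre-deposit check along these lines can catch the two most common culprits described just below; the allowed-character set in the sketch is only an illustrative assumption, so check it against the approved characters guidance:
import re

CONFUSABLES = {"\u2013": "en dash", "\u2014": "em dash", "\u0425": "Cyrillic Ha"}
ALLOWED = re.compile(r"[a-zA-Z0-9._;()/-]")   # illustrative approximation of the approved suffix characters

def check_suffix(suffix: str) -> None:
    for position, char in enumerate(suffix):
        if char in CONFUSABLES:
            print(f"position {position}: {CONFUSABLES[char]} (U+{ord(char):04X}) instead of an approved character")
        elif not ALLOWED.fullmatch(char):
            print(f"position {position}: unexpected character {char!r}")

check_suffix("2411–3417")   # flags the en dash; 2411-3417 with a plain hyphen would pass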
You can learn more about approved characters for DOI suffixes, but here are the two most common problems:\nProblem 1: Using an em-dash or en-dash instead of a hyphen. An em-dash looks like this —, its UTF-8 (hex) representation is e2 80 94 and in Unicode it’s U+2014. An en-dash looks like this –, its UTF-8 (hex) representation is e2 80 93 and in Unicode it’s U+2013. The default dash-like character on some keyboards is a hyphen, but on others is an en-dash, so they can easily be confused.\nSolution 1: reconfigure your DOI so it doesn\u0026rsquo;t include an em- or en-dash, or replace the em- or en-dash with a hyphen. A hyphen looks like this ‐, its UTF-8 (hex) representation is e2 80 90, and in Unicode it’s U+2010.\nProblem 2: Use of Cyrillic Х (Cyrillic Ha) instead of Latin X. While the Cyrillic Х (U+0425 in Unicode) looks almost exactly like the Latin X (U+0058 in Unicode), they are not the same letter.\nSolution 2: change the Х to the Latin X or another approved character, and then resubmit your deposit.\nErrors in the XML Deposited XML is not well-formed or does not validate: Error on line 538\nProblem: This error means that the XML is poorly formatted against our schema, or as an XML file in general. For example, it may contain self-closing tags, or invalid values.\nSolution: Check your XML file for mistakes. Be sure to edit it in an XML editor and not a word processing program. Check you have saved the file correctly (as an .xml file), and deposit it again. If it still fails, contact Support for help. We also have a collection of XML examples you may use as a template.\nErrors in timestamps Record not processed because submitted version: 201907242206 is less or equal to previously submitted version 201907242206\nProblem: The timestamp in this deposit file is numerically smaller than the timestamp in a previous deposit for this same DOI.\nSolution: Every deposit has a \u0026lt;timestamp\u0026gt; value, and that value needs to be incremented each time the DOI is updated. This is done automatically for you in the Crossref XML plugin for OJS, the web deposit form, or if you’re still using the deprecated Metadata Manager. But if you’re updating an existing DOI by sending us the whole XML file again, you need to make sure that you update the timestamp as well as the field you’re trying to update.\nTo fix this, simply increment the timestamp value to be larger than the current timestamp value, and resubmit your XML file. Timestamps can be found by reviewing past deposits, in the depositor report, or by retrieving the DOI\u0026rsquo;s metadata record.\nOther types of errors When processing a metadata deposit, we do a number of checks to prevent the introduction of bad data to our system.\nError Meaning Solution User with ID: {0} can\u0026rsquo;t submit into handle, please contact the Crossref admin The handle system username and password assigned to this prefix is incorrect in the Crossref system This is usually a clerical error. Please contact Support and include the submission ID in your email User not allowed to add records for prefix: {0} The role that was used to submit this deposit does not have permissions to deposit DOIs beginning with this prefix Confirm that you are using the correct prefix and Crossref credentials. 
If you’re still having trouble, please contact Support and include the submission ID in your email All prefixes in a submission must match (DOI[{0}]) All DOIs included in a single deposit submission must have the same prefix, regardless of ownership Revise submission, and split the single file into multiple deposits, each with a single prefix. Then resubmit the new deposit files title \u0026ldquo;{title}\u0026rdquo; was previously deleted by a Crossref admin The title record being deposited or updated was deleted from our system, usually at the publisher\u0026rsquo;s request Review your title and compare to previous deposits for that type of content. If you’re still having trouble, please contact Support and include the submission ID in your email user not allowed to add or update records for the title \u0026ldquo;{title}\u0026rdquo; The Crossref account that was used to submit this deposit does not have permissions to deposit for this title Review title to confirm that you are using the appropriate account and prefix. If you’re still having trouble, please contact Support and include the submission ID in your email [error] :286:24:Invalid content starting with element {element name}\u0026rsquo;. The content must match \u0026lsquo;((https://0-data-crossref-org.libus.csd.mu.edu/reports/help/schema_doc/doi_resources4.4.2/index.html: item_number) {0-3}, (https://0-data-crossref-org.libus.csd.mu.edu/reports/help/schema_doc/doi_resources4.4.2/index.html: identifier) {0-10}) This is an example of a parsing error being reported in the log file. Since this output comes directly from the Xerces parser the actual message will vary depending on the error Review file at line / column indicated (in this example: line 286 column 24), edit, and resubmit. If you’re still having trouble, please contact Support and include the submission ID in your email org.jdom.input.JDOMParseException: Error on line 312 of document file:///export/home/resin/journals/crossref/inprocess/395032106: The content of elements must consist of well-formed character data or markup Unacceptable markup in file Review the file as indicated, correct, and resubmit [fatal error] :1:1: Content is not allowed in prolog Characters precede the XML declaration. This is almost always a Byte Order Mark (BOM) which most often occurs when word processing programs are used to edit XML files Open file in a text or XML editor and remove characters (usually ). If the encoding is shown as UTF-8 With BOM, change this to UTF-8 or UTF-8 Without BOM. Then resubmit the deposit. java.io.UTFDataFormatException: invalid byte 1 of 1-byte \u0026gt; UTF-8 sequence (0x92) There is a badly encoded character. Locate and correct the bad character encoding Learn more about using special characters in your XML java.sql.SQLException: ORA-00001: unique constraint (ATYPON.NDX1_CIT_RELS) violated Two files containing the same DOIs have been submitted simultaneously. The system attempts to process both deposits, but only one deposit will be successful. The unsuccessful deposit will generate this error Review DOI metadata to be sure it was updated correctly java.lang.NullPointerException Most often this means a citation-only deposit or multiple resolution resource-only deposit has been uploaded as a metadata deposit (or vice-versa) Resubmit deposit as DOI Resources or doDOICitUpload. 
If this does not apply to your deposit, please contact Support and include the submission ID in your email Submission version NULL is invalid Schema declaration is incorrect Resubmit with correct schema declaration Invalid namespace/version Wrong operation type, or submitted file is not XML Submit with correct operation type cvc-pattern-valid: Value \u0026lsquo;https://orcid.org/0000-0001-XXX-XXX' is not facet-valid with respect to pattern \u0026lsquo;https?://orcid.org/[0-9]{4}-[0-9]{4}-[0-9]{4}-[0-9]{3}[X0-9]{1}\u0026rsquo; for type \u0026lsquo;orcid_t\u0026rsquo; The ORCID ID you have included doesn\u0026rsquo;t match the expected pattern. The most common reason for this is a trailing space Remove the space (or update the ORCID ID to the correct format) and re-submit cvc-enumeration-valid: Value \u0026lsquo;VoR\u0026rsquo; is not facet-valid with respect to enumeration \u0026lsquo;[vor, am, tdm]\u0026rsquo;. It must be a value from the enumeration. Some elements have a pre-defined list of values in our schema, and the submission must match these values exactly - they are even case sensitive. Here, the submission included the attribute of \u0026lsquo;version of record\u0026rsquo; as VoR, but the schema defines that value as vor. Correct to vor and resubmit. ", "headings": ["Warnings ","Errors in title records ","Errors in titles ","Errors in title-level DOIs ","Errors in title ownership ","General title record discrepancy error ","Invalid ISSN or ISBN error ","Errors in DOI suffixes ","Errors in the XML ","Errors in timestamps ","Other types of errors "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/verify-your-registration/notification-callback-service/", "title": "Notification callback service", "subtitle":"", "rank": 4, "lastmod": "2023-11-29", "lastmod_ts": 1701216000, "section": "Documentation", "tags": [], "description": "Notification callback is a service you can use to notify you when a submission log, either in the test or production admin tool, is available for a metadata, batch query, or Cited-by query submission. Notification is provided in the form of a HTTP(S) URL where the log can be retrieved. If the notification callback service is enabled, you will no longer receive submission log emails.\nHow the notification callback service works The callback will be an HTTP(S) request to a URL (notify-url) provided by the member with all data relayed via HTTPS headers.", "content": "Notification callback is a service you can use to notify you when a submission log, either in the test or production admin tool, is available for a metadata, batch query, or Cited-by query submission. Notification is provided in the form of a HTTP(S) URL where the log can be retrieved. If the notification callback service is enabled, you will no longer receive submission log emails.\nHow the notification callback service works The callback will be an HTTP(S) request to a URL (notify-url) provided by the member with all data relayed via HTTPS headers. The notification specifies the availability of the result, some context of the request, and an HTTP(S) URL from which to get the result. ​The submission log may then be retrieved using the HTTP(S) URL.\nThe headers use the simple name and value structure; that is, the value has no additional structure that divides it into parts. 
To ensure that all Unicode values can be accommodated all header values will be UTF-8 encoded.\nWhen the notify-url is used the following HTTPS headers are provided:\nCROSSREF-NOTIFY-ENDPOINT: the notify-endpoint (required) is just the name used to identify the specific notification (more on this below) CROSSREF-EXTERNAL-ID: the id given by the member with regards to the request. For metadata deposits, for example, it is the value of the doi_batch_id element (Optional) CROSSREF-INTERNAL-ID: the id given by us with regards to the request (Optional) CROSSREF-RETRIEVE-URL: the URL for the member to use to retrieve the request\u0026rsquo;s result. Since the HTTPS header value is UTF-8 encoded, the URL will contain no URI encodings. For example, an Á will not be encoded as %C3%81 CROSSREF-SERVICE-DATE: the date and time stamp of the service request. Learn more about format specification in RFC 2616. CROSSREF-RETRIEVE-URL-EXPIRATION-DATE: the timestamp after which service result is no longer available at the given retrieve-url. Setting up an endpoint You\u0026rsquo;ll need to set up and register an endpoint to receive callbacks.\nCreate an endpoint using cURL: curl -s -D - \u0026#34;https://0-doi-crossref-org.libus.csd.mu.edu/notification-callback/exec/setNotifyEndpoint\\ ?usr=ROLE\\ \u0026amp;pwd=PASSWORD\\ \u0026amp;endpoint=com.foo.1\\ \u0026amp;url=http://foo.com/crossref/callback Test your endpoint: curl -s -D - \u0026#34;https://0-doi-crossref-org.libus.csd.mu.edu/notification-callback/exec/createNotificationCallback\\ ?usr=USERNAME\\ \u0026amp;pwd=PASSWORD\\ \u0026amp;notifyEndpoint=com.foo.1\\ \u0026amp;notifyPayloadContentType=text/plain\\ \u0026amp;notifyPayloadContent=this+is+test+1\\ \u0026amp;externalTrackingId=test-1\u0026#34; After a few minutes your end-point will receive a callback with your test payload message.\nThis is an example of the test payload message that will be delivered to your end-point: https://0-doi-crossref-org.libus.csd.mu.edu/retrieve/f41557cf-f2f2-4f33-9d4c-3848fcc42187.\n{ \u0026#34;id\u0026#34; : 918323, \u0026#34;status\u0026#34; : \u0026#34;N\u0026#34;, \u0026#34;completed\u0026#34; : null, \u0026#34;serviced\u0026#34; : \u0026#34;2022-05-03 15:13:54.0\u0026#34;, \u0026#34;notifyEndpoint\u0026#34; : \u0026#34;org.jonmstark.submission\u0026#34;, \u0026#34;notifyPayloadId\u0026#34; : \u0026#34;f41557cf-f2f2-4f33-9d4c-3848fcc42187\u0026#34;, \u0026#34;notifyPayloadExpiration\u0026#34; : \u0026#34;2022-05-10 15:13:54.0\u0026#34;, \u0026#34;internalTrackingId\u0026#34; : \u0026#34;jms-test-1\u0026#34;, \u0026#34;externalTrackingId\u0026#34; : \u0026#34;jms-test-1\u0026#34;, \u0026#34;recordCreated\u0026#34; : \u0026#34;2022-05-03 15:13:54.0\u0026#34;, \u0026#34;recordUpdated\u0026#34; : null } Contact us to activate the service - we’ll need: your endpoint info (notify-endpoint and notify-url) \u0026ndash; the notify-endpoint is just a name to identify the specific notification. The notify-endpoint should be something you can recognize so when you receive responses that include the endpoint name, it is easy to know which of the callback feeds it is coming from. 
The notify-url has to be the actual URL of your callback receiver, as that is where the notification callback transmits to via http/https the services you’re activating the service for (metadata submissions, batch querying, Cited-by alerts) the username and/or DOI prefix you’ll be using if configuring the notification callback service for Cited-by alerts, you\u0026rsquo;ll need to provide us with the email address that was used to set your fl_query alerts Make sure you inform us of any changes to your endpoint: if a message fails to send we will retry for up to a week after which you will no longer be able to receive it.\nExample of a notification For the submission 1368966558 the notification would be as follows (new lines have been added between header name and header value to improve readability):\nCROSSREF-NOTIFY-ENDPOINT: F8DD070C-89A9-4B82-B77E-1CADCD989DAE CROSSREF-EXTERNAL-ID: apsxref:7ca42f54-093f-11e4-b65b-005056b31eb6 CROSSREF-INTERNAL-ID: 1368966558 CROSSREF-SERVICE-DATE: Fri, 11 Jul 2014 21:08:24 GMT CROSSREF-RETRIEVE-URL-EXPIRATION-DATE: Fri, 18 Jul 2014 21:08:24 GMT CROSSREF-RETRIEVE-URL: https://0-doi-crossref-org.libus.csd.mu.edu/notification/retrieve/67BCBED2-7AE2-4FD7-B90E-514E19B1DE49 Querying for past callbacks The notification callback service can be queried for past callbacks. The query is implemented as an HTTPS service (access control and limits to end-points and time frames TBD).\nThe query takes 3 criteria: the notify-endpoints, an inclusive from timestamp, and an exclusive until timestamp. All timestamps use the ISO 8601 format YYYY-MM-DD’T’hh:mm:ss’Z, for example, 2014-07-23T14:43:01Z.\nThe query results in a JSON array of callbacks. For example, querying for the single endpoint \u0026ldquo;1CFA094C-4876-497E-976B-6A6404652FC2\u0026rdquo; returns:\n[ { \u0026#34;notify-endpoint\u0026#34;: \u0026#34;1CFA094C-4876-497E-976B-6A6404652FC2\u0026#34;, \u0026#34;external-id\u0026#34;: \u0026#34;apsxref:7ca42f54-093f-11e4-b65b-005056b31eb6\u0026#34;, \u0026#34;service-date\u0026#34;: \u0026#34;2014-07-14T21:08:24Z\u0026#34;, \u0026#34;retrieve-url\u0026#34;: \u0026#34;https://0-doi-crossref-org.libus.csd.mu.edu/.../67...49\u0026#34;, \u0026#34;retrieve-url-expiration-date\u0026#34;: \u0026#34;2014-07-11T21:08:24Z\u0026#34;, \u0026#34;audit\u0026#34;: [ { \u0026#34;notify-url\u0026#34; : \u0026#34;http://abc.org/crossref/callbacks\u0026#34;, \u0026#34;date\u0026#34; : \u0026#34;2014-07-14T21:09:00Z\u0026#34;, \u0026#34;explanation\u0026#34;: \u0026#34;http status 200\u0026#34; } ] }, { \u0026#34;notify-endpoint\u0026#34;: \u0026#34;1CFA094C-4876-497E-976B-6A6404652FC2\u0026#34;, \u0026#34;external-id\u0026#34;: ... }, ... ] A flat structure is used to aid processing the result as a stream. There is no order defined.\nThe audit item is a record of attempted callbacks. It details the notify-endpoint\u0026rsquo;s notify-url used at the time of the callback, the timestamp of the callback, and the HTTPS status of the callback. If more than one attempt has been tried then the audit array will contain multiple elements; there is no order defined.\nThe query service is currently available at:\nhttps://doi.crossref.org/notification-callback/exec/findNotificationCallbackAttempts?usr=USER\u0026amp;pwd=PASSWORD\u0026amp;notifyEndpoints=ENDPOINT\u0026amp;from=2014-01-01\u0026amp;until=2014-12-31 The usr and pwd are your Crossref username and password. 
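The same query can be issued from a script; a minimal sketch, assuming the Python requests package, with the placeholder credentials and the example date range from the URL above:
import requests

params = {
    "usr": "USER",
    "pwd": "PASSWORD",
    "notifyEndpoints": "ENDPOINT",
    "from": "2014-01-01",
    "until": "2014-12-31",
}
response = requests.get(
    "https://0-doi-crossref-org.libus.csd.mu.edu/notification-callback/exec/findNotificationCallbackAttempts",
    params=params,
    timeout=60,
)
for callback in response.json():  # a JSON array of callbacks, as in the example above
    print(callback["notify-endpoint"], callback["retrieve-url"])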
The ENDPOINT value is a notify-endpoint or a space separated set of notify-endpoints.\nA note on trusting Let\u0026rsquo;s Encrypt We use Let\u0026rsquo;s Encrypt, a global Certificate Authority, to enable secure HTTPS connections. Please ensure your local certificate library is updated to include the letsencrypt root certificate before requesting a notification callback for your account/prefix.\nGlossary of notification callback service terms notify-url: the URL that the member provides and is used to notify them of the availability of a service request\u0026rsquo;s result. How the URL is provided to us will depend on the service. notify-endpoint: an opaque token used to select a notify-url. The token will be anonymous and difficult to guess. The notify-endpoint is provided by the member. The notify-endpoint is associated with one notify-url (many notify-endpoints can be associated with the same notify-url). retrieve-url: the URL that we provides that is used by the member to get the service request result. notify-payload: the data that specifies what service request this notification is for. This payload will use HTTPS headers so as to be HTTPS method-neutral (such as POST, PUT). retrieve-payload: the service result. Each service will define its own result content-type (that is very much like what would be sent in email today). notification-authentication: This is the method of authentication we will use with the notify-url. Credentials are provided by the member. retrieval-authentication: This is the method of authentication the member will use with the retrieve-url. The account credentials are provided by us. ", "headings": ["How the notification callback service works ","Setting up an endpoint ","Example of a notification ","Querying for past callbacks ","A note on trusting Let\u0026rsquo;s Encrypt ","Glossary of notification callback service terms "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/creating-and-managing-dois/", "title": "Creating and managing DOIs", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Creating DOIs A DOI is registered for each new content item by its owner as it\u0026rsquo;s published. This single DOI would then remain associated with the content item forever. DOIs become active once they and their associated metadata are registered with us. Find out more about:\nConstructing your DOIs Ways to register your content Managing and updating the metadata for your existing DOIs Once you have registered your DOIs, you can update the metadata associated with them at any time, free of charge.", "content": "Creating DOIs A DOI is registered for each new content item by its owner as it\u0026rsquo;s published. This single DOI would then remain associated with the content item forever. DOIs become active once they and their associated metadata are registered with us. Find out more about:\nConstructing your DOIs Ways to register your content Managing and updating the metadata for your existing DOIs Once you have registered your DOIs, you can update the metadata associated with them at any time, free of charge. Here are some examples of metadata maintenance tasks.\nChanging or deleting DOIs Because DOIs are designed to be persistent, a DOI string can’t be changed once registered, and DOIs can’t be fully deleted. You can always update the metadata associated with a DOI, but the DOI string itself can’t change. 
Find out more.\nTransferring titles or prefixes between members Find out what to do if a title with existing DOIs is acquired by a member with a different DOI prefix.\nMultiple resolution Multiple resolution is used where many members host the same content.\n", "headings": ["Creating DOIs","Managing and updating the metadata for your existing DOIs","Changing or deleting DOIs","Transferring titles or prefixes between members","Multiple resolution"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/creating-and-managing-dois/changing-or-deleting-dois/", "title": "Changing or deleting DOIs", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Because DOIs are designed to be persistent, a DOI string can’t be changed once registered, and DOIs can’t be fully deleted. You can always update the metadata associated with a DOI, but the DOI string itself can’t change, and once it’s been registered, it will be included in your next content registration invoice. It’s important that you only register a DOI that you definitely want to use.\nHowever, since mistakes happen, here are some work-around solutions to common problems.", "content": "Because DOIs are designed to be persistent, a DOI string can’t be changed once registered, and DOIs can’t be fully deleted. You can always update the metadata associated with a DOI, but the DOI string itself can’t change, and once it’s been registered, it will be included in your next content registration invoice. It’s important that you only register a DOI that you definitely want to use.\nHowever, since mistakes happen, here are some work-around solutions to common problems.\nYour deposited DOI does not match the DOI you published If you\u0026rsquo;ve registered the wrong DOI for an item, it’s not possible to change the suffix of a DOI after it has been deposited. Instead, you\u0026rsquo;ll need to do the following:\nDeposit the correct DOI Alias the incorrect DOI to the correct DOI (or ask support to do this for you) The incorrect DOI will then be redirected to the newly-registered DOI.\nIf you need to update a title-level book or journal DOI please contact us as we will need to make adjustments to allow the update submission to pass.\nYou deposited a DOI by mistake It’s not possible to fully delete a DOI, but in some circumstances we can direct a DOI to a deleted DOI page. The metadata for the DOI is updated to disconnect the DOI from any identifying metadata. This process should only be applied to DOIs that have not been distributed or otherwise used for linking. DOIs for items that have been retracted or otherwise withdrawn should be directed to a retraction or withdrawal notice on the publisher\u0026rsquo;s website.\nIf you think your DOI needs to be deleted, please contact us with details.\n", "headings": ["Your deposited DOI does not match the DOI you published ","You deposited a DOI by mistake "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/creating-and-managing-dois/transferring-responsibility-for-dois/", "title": "Transferring responsibility for titles and DOIs", "subtitle":"", "rank": 4, "lastmod": "2021-10-21", "lastmod_ts": 1634774400, "section": "Documentation", "tags": [], "description": "We enforce a concept of ownership for the titles you register through us.\nWe allow members to freely register records for titles that do not exist in our system. 
When the first submission for that title is processed, a title record is added to our database. This title record ties the title to the prefix belonging to the first registrant. The member who owns that prefix is then the only member allowed to create new DOIs for that title (or update the metadata on existing DOIs for that title).", "content": "We enforce a concept of ownership for the titles you register through us.\nWe allow members to freely register records for titles that do not exist in our system. When the first submission for that title is processed, a title record is added to our database. This title record ties the title to the prefix belonging to the first registrant. The member who owns that prefix is then the only member allowed to create new DOIs for that title (or update the metadata on existing DOIs for that title).\nIf a title is acquired by a member with a different prefix, we have two options. The most common is that we update the title record to associate the title with the acquiring member\u0026rsquo;s prefix going forward. But if the acquiring member has acquired all of the disposing member\u0026rsquo;s titles, we can also transfer the disposing member\u0026rsquo;s entire prefix over to the acquiring member.\nOn this page, find out more about:\nTitle transfers - updating a title record to a new prefix belonging to a different member Prefix transfers - moving an entire prefix and all titles to a different member Requesting a title transfer (and what to do next) Requesting a prefix transfer Title dispute resolution Title transfers In a standard title transfer, Member A acquires a single title from Member B. We transfer title ownership and all relevant reports over to Member A\u0026rsquo;s prefix.\nMember A must then register new content for that title on their own prefix. But they also inherit control of all the existing DOIs for this title, even though these DOIs are on Member B\u0026rsquo;s prefix. Those existing DOIs will not change.\nMember A can now update any metadata associated with the existing DOIs - for example, the resolution URL. They will also now show as the publisher in the metadata for these DOIs. Member A should continue to display and use the existing DOIs and they SHOULD NOT register a new DOI for content that already has a DOI. Once a DOI has been registered for an item, that DOI needs to remain the persistent identifier for that item - forever. Registering new DOIs for content that already has DOIs contravenes clause 2 h 3 of the Crossref membership terms, and causes confusion and inaccuracies for the organizations and individuals using Crossref metadata.\nHere\u0026rsquo;s an example of how this works. Let\u0026rsquo;s say that DOI 10.1234/abcd is for an article in a title that\u0026rsquo;s acquired by a new member. The new member\u0026rsquo;s prefix is 10.5678, and so ownership for that whole title is assigned to prefix 10.5678.\nThis means that the existing DOI for that article will continue to be 10.1234/abcd. The difference is that the member responsible for prefix 10.5678 is also able to update the metadata record for 10.1234/abcd. For example, they may need to update the resolution URL to point at their website.\nBackfile and current DOIs for that journal may, therefore, have different prefixes — and that’s OK!\nLearn more about what can often change, but always stays the same.\nTransferring a title without taking responsibility for existing DOIs Typically, when a title is acquired by a member, all existing content is also acquired. 
We move the title itself, AND ownership of all existing records for that title to the acquiring member.\nHowever, we can also assign ownership to individual records within a title. This is sometimes necessary when content ownership or hosting responsibility is assigned to different chunks of content for the same title.\nFor example, current issues of Journal A may be published by a member with prefix 10.1234. Issues of Journal A published prior to 2010 are hosted and maintained by a member with prefix 10.5678. Journal A is owned by prefix 10.1234, but the member with prefix 10.5678 retains control of the back issue DOIs owned by prefix 10.5678.\nPrefix transfers In a prefix transfer, Member C acquires Member D and all their titles. We move the entire prefix belonging to Member D (and all relevant reports) over to Member C. Member C can then continue to assign DOIs on Member D’s old prefix (the original prefix). If Member C uses a service provider to deposit metadata on their behalf, we will simply enable the service provider\u0026rsquo;s account credentials to work with the newly acquired prefix.\nRequesting a title transfer There are several steps to a title transfer.\n1. Disposing and acquiring publisher confirm that all existing DOIs have definitely been registered with Crossref and agree on a financial arrangement for registration of DOIs.\nPrior to the transfer, it\u0026rsquo;s important to make sure that any DOIs that have been displayed by the disposing publisher on their prefix have definitely been successfully registered with Crossref. If there are DOIs that are displayed on their prefix that haven\u0026rsquo;t been registered, it will be very complicated to get them registered after the title transfer. However, this will have to be done, as once a DOI has been displayed, it has to be registered with Crossref to preserve the scholarly record.\nIn advance of sending us the title transfer request, make sure to get confirmation from the disposing publisher that they have definitely successfully registered all DOIs for this title that have already been displayed on their site.\n2. Disposing or acquiring publisher contacts our support team to request a title transfer.\nWe need to receive a title transfer notification to confirm that the current owners are happy with the transfer. There are several different ways to do this:\nOption A (preferred): If a title transfer has been posted to the Enhanced Transfer Alerting Service (ETAS), let us know and we’ll proceed with the transfer without further confirmation. Option B: If you don\u0026rsquo;t participate in ETAS, please send us confirmation that the disposing publisher is aware of and agrees with the ownership transfer. The confirmation may be a forwarded email from the disposing publisher to the acquiring publisher acknowledging the transfer. The forwarded email must contain the original sender details. Option C: Alternatively, if there is an announcement on the website of the disposing publisher, that works too. Whichever option you use, please be specific about what is being transferred - include ISSNs, ISBNs, and when you need the transfer to occur (if applicable). Do be specific about which prefix the title is being transferred to, as some publishers have more than one prefix. 
Let us know if this is a transfer of the entire title and all associated DOIs, or just a transfer for future content.\n(NB: We used to allow disposing publishers to transfer titles themselves through the Metadata Manager tool, but this service has been deprecated).\n3. We update the title record in our system and confirm when this is complete.\nWe will update the title record to associate that title with the acquiring publisher\u0026rsquo;s prefix going forward. This means that the acquiring publisher will be able to register new DOIs on their own prefix in the future.\nAfter the transfer is complete, it\u0026rsquo;s extremely important that the acquiring publisher doesn\u0026rsquo;t register new DOIs for content that already has an existing DOI registered by the disposing publisher. Once a DOI has been registered for an item, that DOI needs to remain the persistent identifier for that item - forever. Registering new DOIs for content that already has DOIs contravenes clause 2 h 3 of the Crossref membership terms. The acquiring publisher should continue to display and use the existing DOIs, despite the fact that they aren\u0026rsquo;t on their prefix. However, the acquiring publisher will now be able to update the metadata associated with these existing DOIs, even though they aren\u0026rsquo;t on their prefix.\nWe will provide the acquiring publisher with a link to all the DOIs that have been previously registered for this title.\n4. Acquiring publisher updates the metadata on existing DOIs as required\nAt this point, the acquiring publisher will be able to update existing metadata records on the disposing publisher prefix, and create new records on their own prefix.\nAs the acquiring publisher, you should review the full metadata records provided by the disposing publisher, and remove or update any member-specific metadata such as text and data mining license and full-text URLs, Similarity Check full-text URLs, or Crossmark data. If the metadata supplied by the previous member is complete and accurate, you’ll only need to update the resolution URLs (the URLs associated with each DOI to point to your content).\nLearn more about our top tips for a pain-free title transfer.\nRequesting a prefix transfer DOI prefixes may be moved from one member to another with the consent of the current prefix owner. This may happen as part of a merger or acquisition. Prefixes may also be moved from one DOI registration agency to another. Please contact us to start a prefix transfer.\nPrefix permissions If a prefix moves between members, note that the permissions associated with all DOIs currently owned by that prefix will transfer as well. This includes permissions related to Cited-by matches. You may transfer ownership of individual DOIs to a different prefix as needed.\nTitle dispute resolution Title ownership may come into dispute when two members claim ownership of a single publication. This may occur when content is registered by members through an agreement with a society, and the society takes up an agreement with a new publisher. Or perhaps there is just a disagreement over who has the current rights to register the content - see term 2c of our membership terms:\nRights to Content. The Member will not deposit or register Metadata for any Content for which the Member does not have legal rights to do so.\nAs described above, the ‘owning’ member in our system is the member who is currently registering content for that publication. They have the ability to continue registering content for that title. 
The ‘disputing’ member is the member who wishes to register content for that journal going forward, but is unable to. Here\u0026rsquo;s how this situation needs to be handled:\nThe disputing member will notify us of the title dispute - an email to Support is sufficient We’ll contact the owning member informing them of the title dispute. If the owning member agrees that their ownership is incorrect or if they do not respond within 10 working days, we will re-assign title and record ownership to the disputing member, who then becomes the new owning member. If the owning member challenges the claim, the two parties must resolve the issue together within 90 days. We will move title ownership under instruction from the owning member, or under direction from legal authority. If the dispute is not resolved within 90 days, the disputing member can request that we remove the ability for any party to register further content for the publication under dispute until this is resolved. This remains the case until we receive notice of a legal conclusion. ", "headings": ["Title transfers ","Transferring a title without taking responsibility for existing DOIs ","Prefix transfers ","Requesting a title transfer ","Requesting a prefix transfer ","Prefix permissions ","Title dispute resolution "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/creating-and-managing-dois/prefix-transfers-other-ra/", "title": "Transferring prefixes between different Registration Agencies", "subtitle":"", "rank": 4, "lastmod": "2022-04-14", "lastmod_ts": 1649894400, "section": "Documentation", "tags": [], "description": "Alongside Crossref, there are other agencies of the DOI Foundation. Many focus on specific regions of the world (mEDRA, JaLC, CNKI, KISTI, et al) or on the needs of institutional repositories rather than publishers (eg DataCite).\nIt\u0026rsquo;s important to carefully research which agency you want to join so you start with the right agency for you and continue to work with them for the long term. If you do start to work with one agency and need to move to another agency later, this is possible and prefixes can be transferred between Registration Agencies.", "content": "Alongside Crossref, there are other agencies of the DOI Foundation. Many focus on specific regions of the world (mEDRA, JaLC, CNKI, KISTI, et al) or on the needs of institutional repositories rather than publishers (eg DataCite).\nIt\u0026rsquo;s important to carefully research which agency you want to join so you start with the right agency for you and continue to work with them for the long term. If you do start to work with one agency and need to move to another agency later, this is possible and prefixes can be transferred between Registration Agencies. But there\u0026rsquo;s extra work for you to make this happen, so it\u0026rsquo;s much better to start with the right agency. Do contact us for advice.\nIf you do wish to move between agencies and transfer your prefix:\nContact the Registration Agency that you are moving to - the one that will be receiving the prefix. So to transfer a prefix from DataCite to Crossref for example, contact us. To transfer a prefix from Crossref to DataCite, contact DataCite. The two agencies will work together to confirm if the prefix can be transferred. This is usually possible, but if the prefix is shared or belongs to a generalist repository then it won\u0026rsquo;t be able to be transferred and we\u0026rsquo;ll need to take a different approach. 
If the prefix can be transferred, then the two Registration Agencies will liaise to make this happen. Once the prefix has been transferred, you will need to re-register all your existing DOIs with the new Registration Agency. This ensures that all metadata relating to your titles is with the same registration agency, you can manage all your DOIs in one central place, all your citation tracking will be centralized, and you can be sure that the thousands of organizations and individuals using Crossref metadata via our API have full information about your titles, helping to improve your discoverability. Re-registration is not optional - it\u0026rsquo;s an obligation of the transfer. At Crossref, we will work with you to ensure that you are not charged for any content that you are re-registering with us after transferring a prefix from another agency. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/creating-and-managing-dois/multiple-resolution/", "title": "Multiple resolution", "subtitle":"", "rank": 1, "lastmod": "2024-04-03", "lastmod_ts": 1712102400, "section": "Documentation", "tags": [], "description": "Ideally, a DOI is registered for each new content item by its owner prior to or at the time it is published. This single DOI would then remain associated with the content item forever. However, because content can travel from place to place online, and it can live in multiple locations, a content item may exist at more than one URL. With multiple resolution, you can assign multiple URLs to a single metadata record.", "content": "Ideally, a DOI is registered for each new content item by its owner prior to or at the time it is published. This single DOI would then remain associated with the content item forever. However, because content can travel from place to place online, and it can live in multiple locations, a content item may exist at more than one URL. With multiple resolution, you can assign multiple URLs to a single metadata record. Members often use multiple resolution for co-hosted content or content in transition from one platform to another. Instead of resolving directly to a single page, a multiple resolution-enabled link will instead land on an interim page. The interim page presents a list of link choices to the end-user.\nA single member may register multiple URLs for their content, but multiple resolution usually involves coordination between several members. One member needs to deposit the DOIs and metadata as the primary depositor. The primary depositor is typically the DOI prefix owner of the content being registered, and will commit to maintaining the metadata record. If second (or third) parties are involved, they will only be able to add and update secondary URLs for existing records.\nMultiple resolution interim pages can be set up for an entire DOI prefix, or for individual titles.\nThere are no fees associated with multiple resolution. To get started, please let us know who and what content is involved in your multiple resolution project, and send us your additional URLs. 
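If you are scripting multiple resolution work rather than using the admin tool, secondary URLs travel in a resource-only deposit file, and this page later notes that such files can be sent over HTTPS with operation=doDOICitUpload. Below is a minimal Python sketch of that upload; the servlet URL, the login_id/login_passwd parameter names, the fname file field, and the file name are illustrative assumptions rather than confirmed details of the deposit API.

import requests  # assumed third-party HTTP client

# Assumed deposit endpoint and credential parameter names - verify against
# your own deposit documentation before relying on them.
DEPOSIT_URL = "https://doi.crossref.org/servlet/deposit"
CREDENTIALS = {"login_id": "ROLE", "login_passwd": "PASSWORD"}  # placeholders

def upload_resource_deposit(path: str) -> str:
    """POST a resource-only deposit file (for example, secondary URLs).

    operation=doDOICitUpload is the operation this documentation names for
    resource-only deposits sent as a programmed HTTPS transaction.
    """
    with open(path, "rb") as xml_file:
        response = requests.post(
            DEPOSIT_URL,
            data={"operation": "doDOICitUpload", **CREDENTIALS},
            files={"fname": (path, xml_file, "application/xml")},  # field name assumed
            timeout=60,
        )
    response.raise_for_status()
    return response.text  # submission receipt; processing happens asynchronously

if __name__ == "__main__":
    print(upload_resource_deposit("secondary-urls.xml"))  # hypothetical file name

The XML that goes into such a file follows the secondary URL deposit examples shown later on this page.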
Learn more about how to set up multiple resolution.\nIf the content you are working with does not already have DOIs and is not published by you, please contact us.\nOn this page, learn more about:\nHow to set up multiple resolution Unlocking DOIs for multiple resolution Registering secondary URLs How to update multiple resolution URLs Reversing multiple resolution DOI resolution by country code The role of the DOI proxy in multiple resolution What if I want to do multiple resolution but sometimes want to send people directly to my site? How does multiple resolution affect my resolution statistics? How to set up multiple resolution Multiple resolution typically involves two (or more) members involved in a co-hosting agreement. For the purposes of multiple resolution, the primary depositor is the member responsible for the prefix of the multiple resolution content being registered. The secondary depositor has been authorized by the content owner to also host content and assign additional URLs (called secondary URLs) to DOIs. We’ll always defer to the primary depositor’s instructions regarding changes to a metadata record including all assigned URLs.\nFollow these steps to coordinate and implement multiple resolution:\nEstablish permissions - contact us to let us know what organizations and content will be involved in your multiple resolution project and we’ll adjust permissions as needed. You can skip this step if you are implementing multiple resolution without a secondary depositor or intend to supply the secondary URLs yourself (as you are by default enabled to register multiple resolution URLs for your own content). The primary depositor must notify us of the intention to implement multiple resolution for their metadata records, as well as all titles and/or prefixes involved. The secondary depositor may coordinate multiple resolution activity with permission from the primary depositor - this can be an email stating, for example: XYZ Publishing has permission to coordinate multiple resolution activity on our behalf for titles (\u0026hellip;). Primary depositors can create metadata records and deposit primary and secondary URLs. Secondary depositors may only register secondary URLs for existing records. The secondary depositor will be assigned a new system account to be used for multiple resolution deposits only. Unlock your DOIs - you must enable each metadata record for multiple resolution by sending us an ‘unlock’ flag for each DOI. This can be included in your files or sent separately as a resource-only deposit, like this example file. Register your secondary URLs. Secondary URLs are usually added to an existing metadata record using a resource-only deposit. The secondary URL registration file contains the DOI(s) being updated, the secondary URL, and a label. The label value is case-sensitive and must be a minimum of 6 characters (no spaces). Create an interim page template We provide a standard interim page when a reader clicks on a DOI that is in a multiple resolution relationship. This means that users can be confident that they’ll see consistent behaviour across all Crossref DOIs that are in multiple resolution. 
Depositors don’t need to do anything to create this interim page; it will be generated automatically for a DOI when the multiple resolution relationship is created.\nClicking on these DOIs will take you to examples of the interim pages:\nhttps://doi.org/10.1007/978-94-6209-116-0\nhttps://doi.org/10.1049/cp.2018.1305\nhttps://doi.org/10.18574/nyu/9781479845309.003.0004\nNote that the logos are pulled from a service called Clearbit, which curates company logos and other information. These are not hosted or curated by Crossref. If your logo isn’t appearing and you would like it to, or you\u0026rsquo;d like to update your logo, you can contact support@clearbit.com so that they can assess your request.\nUnlock DOIs for multiple resolution The primary depositor must enable (or unlock) each multiple resolution DOI before secondary URLs can be deposited. You can do this using either a metadata deposit or a resource-only deposit (details below). It is expected that once a content owner gives permission for multiple resolution to be attached to DOIs of a given title, or to all their content, the content owner will routinely enable multiple resolution when creating or updating their DOIs.\nUnlocking a DOI does not change the linking behavior of a DOI - an unlocked DOI will continue to resolve to the URL supplied during registration until a secondary URL has been registered.\nUnlock DOIs using the main deposit schema This mode should be used for all new DOIs created after the content owner has recognized that secondary deposits will be taking place. It allows the primary content owner to enable the DOI multiple resolution permission at the same time as the DOI is initially created.\nThe XML used by the content owner to create (or update) the DOI must include a collection element with the multi-resolution attribute set to unlock.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.3.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.3.0 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.3.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;19990628123304\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;name\u0026gt;xyz\u0026lt;/name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref Test\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;Sample Journal\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;SJ\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;55555555\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2008\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;10\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;10\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article 
publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Sample Article\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Firstname\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Surname\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2008\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.50505/mrtest\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-www-crossref-org.libus.csd.mu.edu/hello/\u0026lt;/resource\u0026gt; \u0026lt;collection property=\u0026#34;list-based\u0026#34; multi-resolution=\u0026#34;unlock\u0026#34; /\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Unlock DOIs using the DOI resources schema This approach can be used for all existing records or can be used for new records if the content owner does not wish to include this metadata in their main metadata deposit. Resource-only deposits should be uploaded as \u0026lsquo;DOI Resources\u0026rsquo; when using the admin tool or operation=doDOICitUpload when doing a programmed HTTPS transaction.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.3.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.3.0\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;name\u0026gt;xyz\u0026lt;/name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.50505/mrtest\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;list-based\u0026#34; multi-resolution=\u0026#34;unlock\u0026#34; /\u0026gt; \u0026lt;/doi_resources\u0026gt; \u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.50505/mrtest2\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;list-based\u0026#34; multi-resolution=\u0026#34;unlock\u0026#34; /\u0026gt; \u0026lt;/doi_resources\u0026gt; \u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.50505/mrtest3\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;list-based\u0026#34; multi-resolution=\u0026#34;unlock\u0026#34; /\u0026gt; \u0026lt;/doi_resources\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of a multiple-resolution unlock resource-only deposit Register secondary URLs When more than one URL is registered for a DOI, the DOI becomes a multiple resolution DOI. The primary URL is registered through a primary metadata deposit, but secondary URLs are typically submitted as a resource-only deposit by a secondary depositor. The secondary URL deposit consists of the DOI being updated, the secondary URL(s), and a label. 
No item-level metadata is required:\n\u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.50505/mrtest\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;list-based\u0026#34;\u0026gt; \u0026lt;item label=\u0026#34;SECONDARY_X\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/test1\u0026lt;/resource\u0026gt; \u0026lt;/item\u0026gt; \u0026lt;/collection\u0026gt; \u0026lt;/doi_resources\u0026gt; The label value is case-sensitive and must be a minimum of 6 characters (no spaces).\nExample of a secondary URL resource-only deposit Example of secondary URLs as part of a primary metadata deposit Upload secondary URLs A secondary URL resource-only deposit must be uploaded with type doDOICitUpload for HTTPS POST (or DOI Resources when using the admin tool). The secondary depositor must have permission to add URLs to the primary depositor\u0026rsquo;s DOIs.\nHow to update multiple resolution URLs If you are the primary depositor, the primary URL may be updated in the standard way. If you need to update a secondary URL you’ll need to re-send the secondary XML file to us with the updated URLs included. When updating, please note that the item label value and the depositor role must be consistent with those used in the previous update - this is how we know what URL to update.\nReverse multiple resolution Multiple resolution can be reversed if the service is no longer needed for a DOI. When multiple resolution is reversed, the content owner should also lock the multiple resolution DOIs, preventing any future multiple resolution deposits.\nTo remove secondary URLs and lock DOIs, submit a resource-only deposit with a closed collection element, for example:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.3.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.3.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.3.0 http://0-www-crossref-org.libus.csd.mu.edu/schemas/doi_resources4.3.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;name\u0026gt;xyz\u0026lt;/name\u0026gt; \u0026lt;email_address\u0026gt;xyz@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.50505/mrtest1\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;list-based\u0026#34; multi-resolution=\u0026#34;lock\u0026#34; /\u0026gt; \u0026lt;/doi_resources\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; DOI resolution by country code Crossref\u0026rsquo;s implementation of multiple resolution supports a form of appropriate copy based on the country of origin of the user requesting the resolution service. This service allows a content owner to deposit multiple URLs for a single DOI, each of which is intended to serve users from a particular country. 
The DOI resolver will determine the resolution request\u0026rsquo;s country of origin and select the appropriate URL target based on country codes (see list).\nThe country code and URL information are supplied within \u0026lt;collection\u0026gt; (learn more about the collection element), and can be deposited as part of a primary metadata deposit or as a resource-only deposit. If a country code is not supplied, the DOI will resolve to the URL supplied in the top-level resource element.\nMetadata deposit example for multiple resolution \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/ilovedois\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/hello\u0026lt;/resource\u0026gt; default URL \u0026lt;collection property=\u0026#34;country-based\u0026#34;\u0026gt; \u0026lt;item country=\u0026#34;US\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/howdy\u0026lt;/resource\u0026gt; USA URL \u0026lt;/item\u0026gt; \u0026lt;item country=\u0026#34;SE\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/hej\u0026lt;/resource\u0026gt; Sweden URL \u0026lt;/item\u0026gt; \u0026lt;item country=\u0026#34;KE\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/hujambo\u0026lt;/resource\u0026gt; Kenya URL \u0026lt;/item\u0026gt; \u0026lt;/collection\u0026gt; \u0026lt;/doi_data\u0026gt; Resource-only deposit example for multiple resolution \u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.5555/ilovedois\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;country-based\u0026#34;\u0026gt; \u0026lt;item country=\u0026#34;US\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/howdy\u0026lt;/resource\u0026gt; USA URL \u0026lt;/item\u0026gt; \u0026lt;item country=\u0026#34;SE\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/hej\u0026lt;/resource\u0026gt; Sweden URL \u0026lt;/item\u0026gt; \u0026lt;item country=\u0026#34;KE\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/hujambo\u0026lt;/resource\u0026gt; Kenya URL \u0026lt;/item\u0026gt; \u0026lt;/collection\u0026gt; \u0026lt;/doi_resources\u0026gt; Role of the DOI proxy in multiple resolution The DOI proxy is maintained by CNRI on behalf of the IDF. Multiple resolution required the introduction of an additional Handle property for DOIs, called 10320/loc, which is itself a Handle.\nExample Handle record In the sample handle record the default URL is set to represent the content\u0026rsquo;s primary location. This is typically the platform of the content owner, or its primary publisher. The presence of property 10320/loc, containing an XML snippet, indicates to the proxy that multiple resolution is enabled for this DOI. The XML is interpreted as follows:\n\u0026lt;locations\u0026gt; element, chooseby: specifies the order of rules to be applied by the proxy when selecting from the \u0026lt;location\u0026gt; elements. 
locatt: used if the DOI request specifies a specific location item country: used if any location item specifies a specific country which must match the country of the requester weight: a weighted random selection from those \u0026lt;location\u0026gt; elements having weight values \u0026lt;location\u0026gt; element identifies a specific location id: a unique ID given to each location element cr_type: a Crossref property that specifies the type of multiple resolution to support cr_src: a Crossref property that identifies which user deposited the location value label: used by us to identify the co-host href: the URL of the location weight: the weighted value to use when applying the weighted-random selection process The presence or absence of a rule in the chooseby property will enable or disable that type of selection process by the proxy.\nWhat if I want to do multiple resolution but sometimes want to send people directly to my site? DOI resolution requests may be structured to bypass our interim page using features built into the proxy\u0026rsquo;s multiple resolution capabilities.\nYou can bypass the interim page by appending a label parameter to your DOI link. To force the DOI to resolve to the primary (original) host location, add the locatt=mode:legacy parameter to the end of the URL, for example:\nhttps://doi.org/10.50505/200806091300?locatt=mode:legacy To force the DOI to resolve to a secondary URL, add locatt=label:HOST-XYZ to the end of the URL, where HOST-XYZ is the label supplied in the secondary URL deposit, for example:\nhttps://doi.org/10.50505/200806091300?locatt=label:HOST-XYZ Learn more about the role of the DOI proxy in multiple resolution.\nHow does multiple resolution affect my resolution statistics? A click on a multiple resolution DOI is still a single click, it’s just that the clicks will be coming from an interim page instead of the DOI resolver, and your resolution reports will reflect that.\n", "headings": ["How to set up multiple resolution ","Create an interim page template ","Unlock DOIs for multiple resolution ","Unlock DOIs using the main deposit schema ","Unlock DOIs using the DOI resources schema ","Register secondary URLs ","Upload secondary URLs ","How to update multiple resolution URLs ","Reverse multiple resolution ","DOI resolution by country code ","Metadata deposit example for multiple resolution ","Resource-only deposit example for multiple resolution ","Role of the DOI proxy in multiple resolution ","Example Handle record ","What if I want to do multiple resolution but sometimes want to send people directly to my site? ","How does multiple resolution affect my resolution statistics? 
"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/", "title": "Maintaining your metadata", "subtitle":"", "rank": 4, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "When you become a member of Crossref, you’re joining a community of organizations who have committed to link their content to each other persistently, and to share their metadata with each other and with the scholarly community.\nYou’re committing to:\nStewarding your DOIs and their associated metadata for the long term; Making sure that the DOI always resolves to a live landing page; Keeping the scholarly community aware of any changes to your content (such as withdrawals or retractions); Adding to, updating, and perhaps even deleting some metadata to keep your metadata useful to the whole community and to make your content even more discoverable.", "content": "When you become a member of Crossref, you’re joining a community of organizations who have committed to link their content to each other persistently, and to share their metadata with each other and with the scholarly community.\nYou’re committing to:\nStewarding your DOIs and their associated metadata for the long term; Making sure that the DOI always resolves to a live landing page; Keeping the scholarly community aware of any changes to your content (such as withdrawals or retractions); Adding to, updating, and perhaps even deleting some metadata to keep your metadata useful to the whole community and to make your content even more discoverable. This means that the work doesn\u0026rsquo;t stop after you first register your records - you should ensure that you continue to maintain this metadata record for the long term. There\u0026rsquo;s no charge to update the metadata after a record has first been registered, and you should aim to keep your records clean, complete and up-to-date.\nKeep your records clean Identify and correct any errors.\nUse our reports to help you identify problems. If you have omitted or provided any incorrect metadata, just update your metadata to make corrections, remove incorrect metadata, or provide missing data. Pay particular attention to the titles of journals, books, and conference proceedings - our support team may need to help you with corrections here. Although you can remove metadata elements from a record, it\u0026rsquo;s not possible to fully delete the records or DOIs, as they are designed to be persistent. Read more about changing or deleting DOIs, and contact us with the details of your situation so we can help. Keep your records complete Add information for additional fields, and don’t forget to do this for your backfiles too.\nGo beyond basic bibliographic metadata, and deposit as much rich metadata as possible. Richer metadata includes information such as: References ORCID iDs Funding information, including funder registry IDs and funding award numbers Crossmark metadata Text and data mining URLs License information Similarity Check URLs Abstracts Check your participation report to see what metadata is missing from your records. 
Keep your records up-to-date Metadata may change over time, and ownership of records may change, so make sure your metadata is updated with these changes.\nUpdate your resolution URLs if the location of your landing pages or full-text content changes - for example: if your website domain changes; if a journal moves from one hosting platform to another; or if a journal ceases to publish and the content must be accessed through an archive. Keep the community up-to-date with updates, retractions, or withdrawals by registering updates. Change the ownership of deposited DOIs when the ownership of the published research object changes. ", "headings": ["Keep your records clean","Keep your records complete","Keep your records up-to-date"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reference-linking/how-do-i-create-reference-links/", "title": "How do I create reference links?", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Step 1: Find DOIs for as many of your references as possible. There are two ways to find DOIs for your references:\nSimple Text Query - paste your reference lists into this web form, and it will return matches. This is a manual interface, and is suitable for low-volume querying. XML API - submit XML formatted according to the query schema section to our system as individual requests or as a batch upload.", "content": "Step 1: Find DOIs for as many of your references as possible. There are two ways to find DOIs for your references:\nSimple Text Query - paste your reference lists into this web form, and it will return matches. This is a manual interface, and is suitable for low-volume querying. XML API - submit XML formatted according to the query schema section to our system as individual requests or as a batch upload. This method requires API skills, and allows you significant control over your query execution and results. Step 2: Display the DOIs in your references: once you have retrieved the relevant DOIs, you must display them as URLs in your references (following our DOI display guidelines).\nShow image\r×\r", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reference-linking/data-and-software-citation-deposit-guide/", "title": "Data and software citation deposit guide", "subtitle":"", "rank": 4, "lastmod": "2022-06-07", "lastmod_ts": 1654560000, "section": "Documentation", "tags": [], "description": "As well as providing persistent links to scholarly content, we also provide community infrastructure by linking publications to associated content, making research easy to find, cite, link, and assess. Data citations are a core part of this service, linking publications to their supporting data, making both the research itself and the research process more transparent and reproducible.\nData citations are references to data, just as bibliographic citations make reference to other scholarly sources.", "content": "As well as providing persistent links to scholarly content, we also provide community infrastructure by linking publications to associated content, making research easy to find, cite, link, and assess. 
Data citations are a core part of this service, linking publications to their supporting data, making both the research itself and the research process more transparent and reproducible.\nData citations are references to data, just as bibliographic citations make reference to other scholarly sources.\nMembers deposit data citations by including them in their metadata as references and/or relationship types. Once deposited, data citations across journals (and publishers) are then aggregated and made freely available for the community to retrieve and reuse in a single, shared location.\nThere are two ways for members to deposit data citation links:\nBibliographic references: The main mechanism for depositing data and software citations is to insert them into an article’s reference metadata. Data citations are included in the deposit of bibliographic references for each publication. Follow the general process for depositing references and apply tags as applicable. Relationship type: data links are asserted in the relationship section of the metadata deposit, where they connect the publication to a variety of associated online resources (such as data and software, supporting information, protocols, videos, published peer reviews, preprint, conference papers) in a structured way, making discovery more powerful and accurate. Here, publishers can identify data which are direct outputs of the research results if this is known. This level of specificity is optional, but can support scientific validation and research funding management. The two methods are independent, and can be used individually or together.\nMethod Benefits Limitations Bibliographic references \u0026lt;ul\u0026gt;\u0026lt;li\u0026gt;Data and software citation is automatically deposited when included with publisher’s reference deposit\u0026lt;/li\u0026gt;\u0026lt;/ul\u0026gt; \u0026lt;ul\u0026gt;\u0026lt;li\u0026gt;Limited to datasets with DataCite DOIs. Others cannot be identified and validated from references deposit\u0026lt;/li\u0026gt;\u0026lt;li\u0026gt;Noise: not all DataCite DOIs linked are datasets/software (they could be other record types such as articles, slides, preprints)\u0026lt;/li\u0026gt;\u0026lt;/ul\u0026gt; Relation type \u0026lt;ul\u0026gt;\u0026lt;li\u0026gt;Precise identification of data, differentiated from other content\u0026lt;/li\u0026gt;\u0026lt;li\u0026gt;Dataset differentiation between those generated as part of research results from those cited by the research\u0026lt;/li\u0026gt;\u0026lt;/ul\u0026gt; \u0026lt;ul\u0026gt;\u0026lt;li\u0026gt;None\u0026lt;/li\u0026gt;\u0026lt;/ul\u0026gt; Sending this metadata to Crossref makes it easier for the research community to see links between different research outputs and work with these outputs. It also makes it easier to see these citations, so that researchers can get credit for their data and the sharing of that data.\nWe collect these citations, and make them freely available via our APIs in multiple interfaces (REST, OAI-­PMH, OpenURL) and formats (XML, JSON). Data is made openly available to a wide range of organizations and individuals across the extended research ecosystem including funders, research organisations, technology and service providers, indexers, and many others.\nBibliographic references` ``\u0026lt; As part of content registration, members add data and software citations into the bibliographic references, following the general process for depositing references.\nFull data or software citations can be deposited as unstructured references. 
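Where only a formatted citation string is available, the data citation can be carried as free text. Here is a small illustrative Python helper that builds such an entry, reusing the Dryad dataset cited in the structured example below; the citation key is invented, and the citation and unstructured_citation element names reflect our reading of the deposit schema rather than anything specific to this page.

from xml.sax.saxutils import escape  # standard-library XML text escaping

def unstructured_citation(key: str, text: str) -> str:
    """Return a citation entry carrying a free-text data citation. It would
    sit alongside other citation elements inside the article's citation_list
    in a normal metadata deposit."""
    return (
        f'<citation key="{escape(key)}">'
        f'<unstructured_citation>{escape(text)}</unstructured_citation>'
        f'</citation>'
    )

if __name__ == "__main__":
    # The key is arbitrary; the citation text reuses the Dryad dataset cited below.
    print(unstructured_citation(
        "ref3",
        "Morinha F, et al. (2017) Data from: Extreme genetic structure in a "
        "social bird species despite high dispersal capacity. Dryad Digital "
        "Repository. https://doi.org/10.5061/dryad.684v0",
    ))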
See FORCE11’s community best practice: Joint Declaration of Data Citation Principles, Software Citation Principles, and advice on placement of citations.\nYou can employ any number of reference tags currently accepted by Crossref, but as good practice we\u0026rsquo;d recommend tagging the identifier for the code or dataset as shown below:\n\u0026lt;citation key=\u0026#34;ref2\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5061/dryad.684v0\u0026lt;/doi\u0026gt; \u0026lt;cYear\u0026gt;2017\u0026lt;/cYear\u0026gt; \u0026lt;author\u0026gt;Morinha F, Dávila JA, Estela B, Cabral JA, Frías Ó, González JL, Travassos P, Carvalho D, Milá B, Blanco G\u0026lt;/author\u0026gt; \u0026lt;/citation\u0026gt; Existing reference tags were originally established to match article and book references and do not readily apply to data or software. We are exploring JATS4R recommendations to expand the current collection and better support these citations. Please contact us if you would like to make additional suggestions.\nRelationship type` ``\u0026lt; Establishing data and software citations via relation type enables precise tagging of the dataset and its specific relationship to the research results published.\nTo tag the data and software citation in the metadata deposit, we ask for the description of the dataset and software (optional), dataset and software identifier and identifier type (DOI, PMID, PMCID, PURL, ARK, Handle, UUID, ECLI, and URI), and relationship type. In general, use the relation type references for data and software resources.\nTo specify that the data or software resource was generated as part of the research results, use isSupplementedBy. Being this specific is optional, but can support scientific validation and research funding management. See the list of controlled options for accepted identifier types.\nExamples of asserting a relationship to data and software in the metadata deposit` ``\u0026lt; Dataset Snippet of deposit XML containing link Dataset or software generated as part of research article: Data from: Extreme genetic structure in a social bird species despite high dispersal capacity. 
Database: Dryad Digital Repository``DOI: https://0-doi-org.libus.csd.mu.edu/10.5061/dryad.684v0 \u0026lt;program xmlns=\u0026quot;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026quot;\u0026gt; `\u0026lt;related_item\u0026gt;` \u0026lt;description\u0026gt;Data from: Extreme genetic structure in a social bird species despite high dispersal capacity\u0026lt;/description\u0026gt; `\u0026lt;inter_work_relation relationship-type=\u0026quot;isSupplementedBy\u0026quot; identifier-type=\u0026quot;doi\u0026quot;\u0026gt;10.5061/dryad.684v0\u0026lt;/inter_work_relation\u0026gt;` \u0026lt;/related_item\u0026gt; `` \u0026lt;/program\u0026gt; Associated dataset or software:NKX2-5 mutations causative for congenital heart disease retain functionality and are directed to hundreds of targetsDatabase: Gene Expression Omnibus (GEO) **Accession number:** GSE44902 URL: https://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/geo/query/acc.cgi?acc=GSE44902 \u0026lt;program xmlns=\u0026quot;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026quot;\u0026gt; `\u0026lt;related_item\u0026gt;` \u0026lt;description\u0026gt;NKX2-5 mutations causative for congenital heart disease retain and are directed to hundreds of targets\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026quot;references\u0026quot; identifier-type=\u0026quot;Accession\u0026quot;\u0026gt;GSE44902\u0026lt;/inter_work_relation\u0026gt; `` \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; Example of data citation as relationship (full metadata deposit)` `` \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.4.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.0 http://0-www-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;20170807\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;2017080715731\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;Molecular Ecology\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;Mol Ecol\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn\u0026gt;09621083\u0026lt;/issn\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2017\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;26\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;10\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Extreme genetic structure in a social bird species despite high dispersal capacity\u0026lt;/title\u0026gt; 
\u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Francisco\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Morinha\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;Laboratory of Applied Ecology; Centre for Research and Technology of Agro-Environment and Biological Sciences (CITAB); University of Trás-os-Montes and Alto Douro (UTAD); Quinta de Prados 5000-801 Vila Real Portugal\u0026lt;/affiliation\u0026gt; \u0026lt;affiliation\u0026gt;Morinha Lab - Laboratory of Biodiversity and Molecular Genetics; Rua Dr. José Figueiredo, lote L-2, Lj B5 5000-562 Vila Real Portugal\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;additional\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;José A.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Dávila\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;Instituto de Investigación en Recursos Cinegéticos; IREC (CSIC, UCLM, JCCM); Ciudad Real Spain\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;additional\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Estela\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Bastos\u0026lt;/surname\u0026gt; \u0026lt;affiliation\u0026gt;Laboratory of Applied Ecology; Centre for Research and Technology of Agro-Environment and Biological Sciences (CITAB); University of Trás-os-Montes and Alto Douro (UTAD); Quinta de Prados 5000-801 Vila Real Portugal\u0026lt;/affiliation\u0026gt; \u0026lt;affiliation\u0026gt;Department of Genetics and Biotechnology; School of Life and Environmental Sciences; University of Trás-os-Montes and Alto Douro (UTAD); Quinta de Prados 5000-801 Vila Real Portugal\u0026lt;/affiliation\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;05\u0026lt;/month\u0026gt; \u0026lt;year\u0026gt;2017\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;03\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;13\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2017\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;2812\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;2825\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Data from: Extreme genetic structure in a social bird species despite high dispersal capacity\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;references\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5061/dryad.684v0\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;archive_locations\u0026gt; \u0026lt;archive name=\u0026#34;Portico\u0026#34;/\u0026gt; \u0026lt;/archive_locations\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1111/mec.14069\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-doi-wiley-com.libus.csd.mu.edu/10.1111/mec.14069\u0026lt;/resource\u0026gt; 
\u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Example of data citation as relation (resource-only deposit)` `` \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.4.2\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.4.2\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.4.2 https://0-data-crossref-org.libus.csd.mu.edu/schemas/doi_resources4.4.2.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;123456\u0026lt;/doi_batch_id\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;doi_relations\u0026gt; \u0026lt;doi\u0026gt;10.1111/xxxx.xxxx\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Data from: Extreme genetic structure in a social bird species despite high dispersal capacity\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;references\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5061/dryad.684v0\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;/doi_relations\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Bibliographic references` ``\u0026lt;","Relationship type` ``\u0026lt;","Examples of asserting a relationship to data and software in the metadata deposit` ``\u0026lt;","Example of data citation as relationship (full metadata deposit)` ``","Example of data citation as relation (resource-only deposit)` ``"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/cited-by/cited-by-participation/", "title": "How to participate in Cited-by", "subtitle":"", "rank": 4, "lastmod": "2023-04-28", "lastmod_ts": 1682640000, "section": "Documentation", "tags": [], "description": "Members can participate in Cited-by by completing the following steps:\nDeposit references for one or more prefixes as part of your content registration process. Use your Participation Report to see your progress with depositing references. This step is not mandatory, but highly recommended to ensure that your citation counts are complete. We will match the metadata in the references to DOIs to establish Cited-by links in the database. As new content is registered, we automatically update the citations and, for those members with Cited-by alerts enabled, we notify you of the new links.", "content": "Members can participate in Cited-by by completing the following steps:\nDeposit references for one or more prefixes as part of your content registration process. Use your Participation Report to see your progress with depositing references. This step is not mandatory, but highly recommended to ensure that your citation counts are complete. We will match the metadata in the references to DOIs to establish Cited-by links in the database. 
As new content is registered, we automatically update the citations and, for those members with Cited-by alerts enabled, we notify you of the new links. Retrieve citations through a URL query or the admin tool. Display the links on your website. We recommend displaying citations you retrieve on DOI landing pages. If you are a member through a Sponsor, you may have access to Cited-by through your sponsor – please contact them for more details. OJS users can use the Cited-by plugin.\nCitation matching Members sometimes submit references without including a DOI tag for the cited work. When this happens, we look for a match based on the metadata provided. If we find one, the reference metadata is updated with the DOI and we add the \u0026quot;doi-asserted-by\u0026quot;: \u0026quot;crossref\u0026quot; tag. If we don’t find a match immediately, we will try again at a later date.\nThere are some references for which we won’t find matches, for example where a DOI has been registered with an agency other than Crossref (such as DataCite) or if the reference refers to an object without a DOI, including conferences, manuals, blog posts, and some journals’ articles.\nTo perform matching, we first check if a DOI tag is included in the reference metadata. If so, we assume it is correct and link the corresponding work. If there isn’t a DOI tag, we perform a search using the metadata supplied and select candidate results by thresholding. The best match is found through a further validation process. Learn more about how we match references. The same process is used for the results shown on our Simple Text Query tool.\nAll citations to a work are returned in the corresponding Cited-by query.\n", "headings": ["Citation matching "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/cited-by/retrieve-citations/", "title": "Retrieve citations using Cited-by", "subtitle":"", "rank": 4, "lastmod": "2023-04-28", "lastmod_ts": 1682640000, "section": "Documentation", "tags": [], "description": "There are a number of methods through which to retrieve citations:\nInput Query for Output Admin tool DOI List of citing DOIs HTTPS doi.crossref.org DOI, prefix XML XML DOI XML OAI-PMH oai.crossref.org prefix, title XML There are two additional options to receive a \u0026ldquo;push\u0026rdquo; notification any time one of your deposited works is cited:\nInput Output Email notification Email sent to an address provided by the member Notification callback Sent to an HTTP(S) URL endpoint hosted by the member Note that we don’t provide a plugin to directly display Cited-by results on a publisher website, although a community-developed plugin is available for OJS.
The data from our APIs is delivered in XML or JSON format and needs to be parsed for display on a webpage.\nOn this page, learn more about:\nRetrieving citation matches using: HTTPS POST XML queries the admin tool OJS Plugin OAI-PMH Citation notifications Troubleshooting Cited-by queries Retrieve citation matches using HTTPS POST Using a URL, you can retrieve all citations for a single DOI or prefix within a date range. You will need to provide your Crossref account credentials in the query.\nIf you use personal, individual user credentials, queries have the following format:\nhttps://doi.crossref.org/servlet/getForwardLinks?usr=email@address.com/role\u0026amp;pwd=password\u0026amp;doi=doi\u0026amp;startDate=YYYY-MM-DD\u0026amp;endDate=YYYY-MM-DD where:\nemail@address.com is your user credential email address role is the role corresponding to the prefix or title being retrieved password is your user credential password doi can be a full DOI or a prefix If you use shared, organization-wide role credentials, queries have the following format:\nhttps://doi.crossref.org/servlet/getForwardLinks?usr=username\u0026amp;pwd=password\u0026amp;doi=doi\u0026amp;startDate=YYYY-MM-DD\u0026amp;endDate=YYYY-MM-DD where:\nusername is the shared role and password is the shared password for the prefix or title being retrieved; doi can be a full DOI or a prefix. On both versions of the query, date range is optional. Dates in the query refer to when the citation match was made (usually shortly after the DOI of the citing article was registered), not the publication date of the articles being queried for: all citations found in the given period will be returned, regardless of when the cited articles were originally deposited. Queries can also be made for a single day, in which case use the following format:\nhttps://doi.crossref.org/servlet/getForwardLinks?usr=role\u0026amp;pwd=password\u0026amp;doi=prefix\u0026amp;date=YYYY-MM-DD By default, citations from posted content (including preprints) are not included. To retrieve them as well, include \u0026amp;include_postedcontent=true in the query URL:\nhttps://doi.crossref.org/servlet/getForwardLinks?usr=role\u0026amp;pwd=password\u0026amp;doi=prefix\u0026amp;date=YYYY-MM-DD\u0026amp;include_postedcontent=true Output is XML formatted according to Crossref’s query schema.\nIf the query times out, we recommend using a smaller query, for example by using a narrower date range or splitting prefixes into individual DOIs. 
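As an illustration of the getForwardLinks query described above, here is a minimal sketch (not an official Crossref client) that fetches and parses the citing DOIs for a single cited DOI. It assumes the Python requests library is available; the credentials and DOI shown are placeholders to replace with your own role (or personal) credentials and your own content.

import requests
import xml.etree.ElementTree as ET

BASE = "https://doi.crossref.org/servlet/getForwardLinks"

def citing_dois(user, password, doi, start=None, end=None, include_posted=False):
    # Build the query exactly as described above; the date range is optional
    # and refers to when the citation match was made, not publication dates.
    params = {"usr": user, "pwd": password, "doi": doi}
    if start and end:
        params["startDate"], params["endDate"] = start, end
    if include_posted:
        params["include_postedcontent"] = "true"  # posted content is excluded by default
    resp = requests.get(BASE, params=params, timeout=120)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # Match on local element names so the sketch does not depend on the exact
    # qrschema namespace URI; each <doi type="..."> element holds a citing DOI.
    return [(el.text, el.get("type")) for el in root.iter()
            if el.tag.split("}")[-1] == "doi" and el.get("type")]

for doi, kind in citing_dois("role", "password", "10.5555/12345678"):
    print(kind, doi)

The same approach works for a prefix query: pass the prefix as the doi value, since the query accepts either a full DOI or a prefix.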
This is unlikely to affect most users, however if you frequently experience timeouts due to large query results get in touch.\nHere is some example output:\n\u0026lt;crossref_result xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;2.0\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0 http://0-www-crossref-org.libus.csd.mu.edu/qrschema/crossref_query_output2.0.xsd\u0026#34;\u0026gt; \u0026lt;query_result\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;none\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;none\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;/forward_link\u0026gt; \u0026lt;forward_link doi=\u0026#34;10.1021/jacs.9b09811\u0026#34;\u0026gt; \u0026lt;journal_cite fl_count=\u0026#34;0\u0026#34;\u0026gt; \u0026lt;issn type=\u0026#34;print\u0026#34;\u0026gt;2161-1653\u0026lt;/issn\u0026gt; \u0026lt;issn type=\u0026#34;electronic\u0026#34;\u0026gt;2161-1653\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt;ACS Macro Letters\u0026lt;/journal_title\u0026gt; \u0026lt;journal_abbreviation\u0026gt;ACS Macro Lett.\u0026lt;/journal_abbreviation\u0026gt; \u0026lt;article_title\u0026gt;Critical Role of Ion Exchange Conditions on the Properties of Network Ionic Polymers\u0026lt;/article_title\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;contributor first-author=\u0026#34;true\u0026#34; sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Naisong\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Shan\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor first-author=\u0026#34;false\u0026#34; sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Chengtian\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Shen\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor first-author=\u0026#34;false\u0026#34; sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Christopher M.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Evans\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;volume\u0026gt;9\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;12\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;1718\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2020\u0026lt;/year\u0026gt; \u0026lt;publication_type\u0026gt;full_text\u0026lt;/publication_type\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1021/acsmacrolett.0c00678\u0026lt;/doi\u0026gt; \u0026lt;/journal_cite\u0026gt; \u0026lt;/forward_link\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_result\u0026gt; \u0026lt;/crossref_result\u0026gt; Note that the fl_count property gives the number of times the citing article has itself been cited.\nRetrieve citation matches using an XML query Citations can be retrieved through an XML query. The query contains only the DOI of the cited article stored in the fl_query element. 
Each XML file must contain only a single DOI.\nIf you submit a batch query submission with more than one DOI per query, the remaining DOIs in that query will return the message \u0026ldquo;exceeded limit of forward link queries per submission.\u0026rdquo; So, any DOIs after the first will not have alerts enabled.\nSetting the alert attribute to “true” instructs the system to remember this query and to send new Cited-by link results to the specified email address when they occur. Note that an email address cannot be unset from receiving notifications, so only use this option for email addresses that will continue to receive notifications on a long-term basis.\nHere is an example XML query:\nhttps://doi.crossref.org/servlet/query?usr=ROLE\u0026amp;pwd=PASSWORD\u0026amp;qdata=\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;query_batch version=\u0026#34;2.0\u0026#34; xmlns = \u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qschema/2.0 http://0-www-crossref-org.libus.csd.mu.edu/qschema/crossref_query_input2.0.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;EMAIL\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;fl_001\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;fl_query alert=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1021/acs.joc.7b01326\u0026lt;/doi\u0026gt; \u0026lt;/fl_query\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_batch\u0026gt; By default, citations from posted content (including preprints) are not included. To retrieve them as well, use \u0026lt;fl_query include_postedcontent=\u0026quot;true\u0026quot;\u0026gt; in the body of the query.\nHere is an example of the output XML:\n\u0026lt;crossref_result xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; version=\u0026#34;2.0\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/qrschema/2.0 http://0-www-crossref-org.libus.csd.mu.edu/qrschema/crossref_query_output2.0.xsd\u0026#34;\u0026gt; \u0026lt;query_result\u0026gt; \u0026lt;head\u0026gt; \u0026lt;email_address\u0026gt;{email}\u0026lt;/email_address\u0026gt; \u0026lt;doi_batch_id\u0026gt;fl_001\u0026lt;/doi_batch_id\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;forward_link doi=\u0026#34;10.5555/ums71316\u0026#34;\u0026gt; \u0026lt;journal_cite fl_count=\u0026#34;0\u0026#34;\u0026gt; \u0026lt;issn type=\u0026#34;print\u0026#34;\u0026gt;1070-3632\u0026lt;/issn\u0026gt; \u0026lt;issn type=\u0026#34;electronic\u0026#34;\u0026gt;1608-3350\u0026lt;/issn\u0026gt; \u0026lt;journal_title\u0026gt;Russian Journal of General Chemistry\u0026lt;/journal_title\u0026gt; \u0026lt;journal_abbreviation\u0026gt;Russ J Gen Chem\u0026lt;/journal_abbreviation\u0026gt; \u0026lt;article_title\u0026gt;Simultaneous Formation of Cage and Spirane Pentaalkoxyphosphoranes in Reaction of 5,5-Dimethyl-2-(2-oxo-1,2-diphenylethoxy)-1,3,2-dioxaphosphorinane with Hexafluoroacetone\u0026lt;/article_title\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;contributor first-author=\u0026#34;true\u0026#34; sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;V. 
F.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Mironov\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor first-author=\u0026#34;false\u0026#34; sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;M. N.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Dimukhametov\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor first-author=\u0026#34;false\u0026#34; sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Ya. S.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Blinova\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;contributor first-author=\u0026#34;false\u0026#34; sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;F. Kh.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Karataeva\u0026lt;/surname\u0026gt; \u0026lt;/contributor\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;volume\u0026gt;90\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;11\u0026lt;/issue\u0026gt; \u0026lt;first_page\u0026gt;2080\u0026lt;/first_page\u0026gt; \u0026lt;year\u0026gt;2020\u0026lt;/year\u0026gt; \u0026lt;publication_type\u0026gt;full_text\u0026lt;/publication_type\u0026gt; \u0026lt;doi type=\u0026#34;journal_article\u0026#34;\u0026gt;10.1134/S1070363220110109\u0026lt;/doi\u0026gt; \u0026lt;/journal_cite\u0026gt; \u0026lt;/forward_link\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/query_result\u0026gt; \u0026lt;/crossref_result\u0026gt; Retrieve citation matches using the admin tool You can find citations to single DOIs using our admin tool. Log in using your Crossref account credentials, click on the Queries tab, then Cited By Links. This returns a list of DOIs:\nShow image\r×\rRetrieve citation matches using the OJS Cited-by plugin For members who manage their journal using OJS v3.1.2.4 or later, you can install a Cited-by plugin from the plugin gallery. It pulls data from the Cited-by API and can display it directly on article webpages. This plugin has been generously contributed by the community and is not maintained by Crossref.\nIf you are not using OJS but use another third party software to manage your journal there is a good chance that there is also a plugin available. We don\u0026rsquo;t maintain a comprehensive list of Cited-by plugins, but you can contact your software provider for details.\nRetrieve citation matches using OAI-PMH Note that the OAI-PMH API returns matches for the following article types: Journals, Books, Book Series, and Components. Other types are not included. To get complete results, we recommend using the HTTPS POST or an XML query (see the two sections above) for retrieving Cited-by matches rather than OAI-PMH.\nThis format retrieves Cited-by matches established within a date range for a prefix or title. Queries have the following format:\nhttps://oai.crossref.org/OAIHandler?verb=ListRecords\u0026amp;usr=role\u0026amp;pwd=password\u0026amp;set=record type:prefix:pubID\u0026amp;from=YYYY-MM-DD\u0026amp;until=YYYY-MM-DD\u0026amp;metadataPrefix=cr_citedby\u0026amp;include_postedcontent=false where:\nrole and password are the role credentials for the prefix or title being retrieved; record type is a single letter. 
Use J for journal; B for books, conference proceedings, datasets, reports, standards, or dissertations; and S for series; prefix is the owning prefix of the title being retrieved; pubID is the publication identification number of the title. This is optional: to query for all titles related to a prefix, simply omit the pubID; metadataPrefix=cr_citedby indicates that the results should include Cited-by matches rather than item metadata. The from and until parameters are optional and define a date range using YYYY-MM-DD format (ISO 8601). Items returned were cited at least once in the period. All citations for these items are returned, not only those that occurred between the two dates. Note that the date range does not refer to the publication date of the cited works, but the dates they were cited. By default, citations from posted content (including preprints) are not included. To retrieve them as well, add \u0026amp;include_postedcontent=true to the query URL.\nOutput is XML formatted according to our query schema and contains a list of the DOIs that cited the specified article or prefix.\nSome OAI-PMH requests are too big to be retrieved in a single transaction. If a given response contains a resumption token, the user must make an additional request to retrieve the rest of the data. Learn more about resumption tokens, and OAI-PMH requests.\nAn example OAI-PMH query response is as follows:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;OAI-PMH xmlns=\u0026#34;http://www.openarchives.org/OAI/2.0/\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd\u0026#34;\u0026gt; \u0026lt;responseDate\u0026gt;2020-12-21T10:38:26\u0026lt;/responseDate\u0026gt; \u0026lt;request verb=\u0026#34;ListRecords\u0026#34; from=\u0026#34;2020-01-01\u0026#34; until=\u0026#34;2020-01-02\u0026#34; set=\u0026#34;J:10.1021\u0026#34; metadataPrefix=\u0026#34;CR_CITEDBY\u0026#34; resumptionToken=\u0026#34;78da6164be33c5fb\u0026#34; \u0026gt;http://0-oai-crossref-org.libus.csd.mu.edu/oai\u0026lt;/request\u0026gt; \u0026lt;!-- recipient 1234 abc --\u0026gt; \u0026lt;ListRecords\u0026gt; \u0026lt;record\u0026gt; \u0026lt;header\u0026gt; \u0026lt;identifier\u0026gt;info:doi/10.1016/1044-0305(94)80016-2\u0026lt;/identifier\u0026gt; \u0026lt;datestamp\u0026gt;2020-12-18\u0026lt;/datestamp\u0026gt; \u0026lt;setSpec\u0026gt;J:10.1021\u0026lt;/setSpec\u0026gt; \u0026lt;/header\u0026gt; \u0026lt;metadata\u0026gt; \u0026lt;citations\u0026gt; \u0026lt;citation xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/crossref_citations_1.0.0\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1023/1051-030580416\u0026lt;/doi\u0026gt; \u0026lt;citations-cited-by\u0026gt; \u0026lt;doi type=\u0026#34;journal-article\u0026#34;\u0026gt;10.1021/acs.jproteome.0c00464\u0026lt;/doi\u0026gt; \u0026lt;doi type=\u0026#34;conference-paper\u0026#34;\u0026gt;10.1007/978-1-0716-0943-9_16\u0026lt;/doi\u0026gt; \u0026lt;doi type=\u0026#34;journal-article\u0026#34;\u0026gt;10.1038/s41598-020-78800-6\u0026lt;/doi\u0026gt; \u0026lt;doi type=\u0026#34;posted-content\u0026#34;\u0026gt;10.1101/2020.12.01.407270\u0026lt;/doi\u0026gt; \u0026lt;doi type=\u0026#34;journal-article\u0026#34;\u0026gt;10.1007/s11120-020-00803-1\u0026lt;/doi\u0026gt; \u0026lt;citations-cited-by\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citations\u0026gt; \u0026lt;/metadata\u0026gt; 
\u0026lt;/record\u0026gt; \u0026lt;resumptionToken \u0026gt;78da6164be33c5fb\u0026lt;/resumptionToken\u0026gt; \u0026lt;/ListRecords\u0026gt; \u0026lt;/OAI-PMH\u0026gt; OAI-PMH queries return the DOI of each citation. You can use our REST API or XML API to retrieve the full bibliographic data for each citation.\nCitation notifications You can receive citation notifications by email or an endpoint notification. In both cases the text of the message is the same: it contains the same output as an XML query, containing details of the citing and cited works.\nTo select an email address for Cited-by notifications, see the XML query section.\nTroubleshooting Cited-by queries Sometimes citations don’t show up in Cited-by when you would expect them. There could be several reasons for this:\nThe references haven’t been included in the metadata. We don’t use article PDFs or crawl websites to retrieve references, we rely on them being deposited as metadata by our members. Check the metadata of the citing work using our APIs to see whether references have been included. The DOI of the cited work wasn’t included in the reference and there was either an error in the metadata or insufficient information for us to make a reliable match. In this case, check the metadata for any errors and contact the owner of the citing work to redeposit the references. If the citing article was registered very recently it can take time to update the cited article’s metadata. If this happens, wait for a few days before trying again. Note that citations are only retrieved from works with a Crossref DOI and will differ from citation counts provided by other services. Not all scholarly publications are registered with us and not all publishers opt to deposit references, so we can\u0026rsquo;t claim that citation counts are comprehensive.\nIf you have difficulty accessing citation matches for your own content, try checking first with the admin tool and see if you can replicate the results there using one of the API options above.\n", "headings": ["Retrieve citation matches using HTTPS POST ","Retrieve citation matches using an XML query ","Retrieve citation matches using the admin tool ","Retrieve citation matches using the OJS Cited-by plugin ","Retrieve citation matches using OAI-PMH ","Citation notifications ","Troubleshooting Cited-by queries "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/funder-registry/accessing-the-funder-registry/", "title": "Accessing the Open Funder Registry", "subtitle":"", "rank": 4, "lastmod": "2021-04-09", "lastmod_ts": 1617926400, "section": "Documentation", "tags": [], "description": "The list is available to download as an RDF file from the Open Funder Registry GitLab repository, and is freely available under a CC0 license. The RDF file provides the funder preferred name, alternate name(s), country, type (government, private), subtype, and any relationship to another entity. Private funding subtypes include: academic, corporate, foundation, international, other non-profit (private), professional associations and societies. Government funding subtypes include: federal (national government), government non-federal (state/provincial government).", "content": "The list is available to download as an RDF file from the Open Funder Registry GitLab repository, and is freely available under a CC0 license. The RDF file provides the funder preferred name, alternate name(s), country, type (government, private), subtype, and any relationship to another entity. 
Private funding subtypes include: academic, corporate, foundation, international, other non-profit (private), professional associations and societies. Government funding subtypes include: federal (national government), government non-federal (state/provincial government). You can therefore easily build funding metadata into your own tools such as manuscript tracking systems, or analytics services.\nYou may also download a .csv file of the funder names and identifiers in the Open Funder Registry, and download a list of funders in JSON format.\nOpen Funder Registry updates The current and previous versions of the Registry are available from the Open Funder Registry GitLab repository. The registry is usually updated monthly.\nYou can request to have a missing funder added to the Open Funder Registry by using our contact form. Please include the name of the funder, its website address, country, and if possible any DOIs of research articles that already acknowledge this funder.\n", "headings": ["Open Funder Registry updates "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/funder-registry/funding-data-overview/", "title": "Funding data overview", "subtitle":"", "rank": 4, "lastmod": "2025-02-10", "lastmod_ts": 1739145600, "section": "Documentation", "tags": [], "description": "The funding data service lets members register funding source information for content items deposited with Crossref.\nThings to understand before you deposit `` Funders can be represented three ways: 1) the ROR id, 2) the funder name, or 3) the funder name nested with the funder identifier. Since the Open Funder Registry is transitioning into ROR, using the ROR id to identify funders is the preferred method.\nIf you are not using a ROR id, funding metadata must include the name of the funding organization and the funder identifier (where the funding organization is listed in the Registry), and should include an award/grant number or grant identifier.", "content": "The funding data service lets members register funding source information for content items deposited with Crossref.\nThings to understand before you deposit `` Funders can be represented three ways: 1) the ROR id, 2) the funder name, or 3) the funder name nested with the funder identifier. Since the Open Funder Registry is transitioning into ROR, using the ROR id to identify funders is the preferred method.\nIf you are not using a ROR id, funding metadata must include the name of the funding organization and the funder identifier (where the funding organization is listed in the Registry), and should include an award/grant number or grant identifier. Funder names should only be deposited without the accompanying ID if the funder is not found in the Registry. While members can deposit the funder name without the identifier, those records will not be considered valid until such a time as the funder is added to the database and they are redeposited (updated) with an ID. What that means is that they will not be found using the filters on funding information that we support via our REST API, or show up in our Open Funder Registry search.\nCorrect nesting of funder names and identifiers is essential as it significantly impacts how funders, funder identifiers, and award numbers are related to each other. 
If you use the ROR id to identify funders, this nesting is neither necessary nor valid.\nHere are some examples in order of most to least preferred:\n**Correct**: In this example, funder \"National Science Foundation\" is associated with the ROR id https://ror.org/021nxhr62. No name should be added. \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/021nxhr62\u0026lt;/fr:assertion\u0026gt; **Correct**: In this example, funder \"National Science Foundation\" is associated with the funder identifier https://0-doi-org.libus.csd.mu.edu/10.13039/100000001 \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100000001\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; **Correct**: In this example, funder \"National Science Foundation\" is only identified by name. \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation\u0026lt;/fr:assertion\u0026gt; **Incorrect**: Here, the funder name and funder identifier are not nested - these assertions will be indexed as separate funders. \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100000001\u0026lt;/fr:assertion\u0026gt; **Incorrect**: Here, the funder name and ROR id will be indexed as separate funders. \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/021nxhr62\u0026lt;/fr:assertion\u0026gt; **Incorrect**: Here, the funder name and ROR id are nested - this is invalid. \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/021nxhr62\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; The purpose of funder groups is to establish relationships between funders and award numbers. A funder group assertion should only be used to associate funder names and identifiers with award numbers when multiple funders are present. Funding data deposit with one group of funders (no \u0026ldquo;fundgroup\u0026rdquo; needed). Funding data deposit with two fundgroups.\nIncorrect: Groups used to associate funder names with funder identifiers; these need to be nested as described above. Deposits using a funder_identifier that is not taken from the Open Funder Registry will be rejected. Deposits with only funder_name (no funder_identifier) will not appear in funder search results in Open Funder Registry search or the REST API. Funding data schema section `` The \u0026lt;fr:program\u0026gt; element in the deposit schema section (see documentation) supports the import of the fundref.xsd schema (see documentation).
The fundref namespace (xmlns:fr=https://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd) must be included in the schema declaration, for example:\n\u0026lt;doi_batch xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2 https://0-data-crossref-org.libus.csd.mu.edu/schemas/crossref4.4.2.xsd\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.2\u0026#34; xmlns:jats=\u0026#34;http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/JATS1\u0026#34; xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; xmlns:ai=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34; version=\u0026#34;4.4.2\u0026#34;\u0026gt; The fundref.xsd consists of a series of nested \u0026lt;fr:assertion\u0026gt; tags with enumerated name attributes. The name attributes are:\nfundgroup: used to group a funder and its associated award number(s) for items with multiple funders. ror: identifier of the funding agency as it appears in the Research Organization Registry (ROR). To be used instead of nested funder_name and funder_identifier. funder_name: name of the funding agency as it appears in the funding Registry. Funder names that do not match those in the registry will be accepted to cover instances where the funding organization is not listed. funder_identifier: funding agency identifier in the form of a DOI, must be nested within the funder_name assertion. The funder_identifier must be taken from the funding Registry and cannot be created by the member. Deposits without funder_identifier or ror do not qualify as funding records. award_number: grant number or other fund identifier Either ror or funder_name and funder_identifier must be present in a deposit where the funding body is listed in the Open Funder Registry. Multiple funder_name, funder_identifier, and award_number assertions may be included. Funder and award number hierarchy `` A relationship between a single funder and an award_number is established by including assertions with a \u0026lt;fr:program\u0026gt;.\nIn this example, funder National Institute on Drug Abuse with ROR id https://ror.org/00fq5cm18 is associated with award number JQY0937263:\n\u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/00fq5cm18\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;JQY0937263\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; In this example, funder National Institute on Drug Abuse with funder identifier https://0-doi-org.libus.csd.mu.edu/10.13039/100000026 is associated with award number JQY0937263:\n\u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Institute on Drug Abuse \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100000026\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;JQY0937263\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; If multiple funder and award combinations exist, each combination should be deposited within a fundgroup to ensure that the award number is associated with the appropriate funder(s). 
In this example, two funding groups exist:\nFunder National Science Foundation with ROR id https://ror.org/021nxhr62 is associated with award numbers CBET-106 and CBET-7259, and Funder Basic Energy Sciences, Office of Science, U.S. Department of Energy with funder identifier https://0-doi-org.libus.csd.mu.edu/10.13039/100006151 is associated with award number 1245-ABDS. \u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/021nxhr62\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CBET-106\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CBET-7259\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;Basic Energy Sciences, Office of Science, U.S. Department of Energy \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100006151\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;1245-ABDS\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; Items with multiple funder names but no award numbers may be deposited without a fundgroup.\nAt a minimum, a funding data deposit must contain either a ror or a funder_name and funder_identifier assertion, and using the ROR id is preferred. Deposits with just an award_number assertion are not allowed. A ror or nested funder_name/funder_identifier and award_number should be included in deposits whenever possible. If a ROR id is used, it should not include a funder_name or funder_identifier.\nIf the funder name cannot be matched in ROR or the Open Funder Registry, you may submit funder_name only, and the funding body will be reviewed and considered for addition to the official Registry. Until it is added to the Registry, the deposit will not be considered a valid funding record and will not appear in funding search or the REST API.\nAs demonstrated in Example 3 below, items with several award numbers associated with a single funding organization should be grouped together by enclosing the funder_name, funder_identifier, and award_number(s) within a fundgroup assertion.\nSome rules will be enforced by the deposit logic, including:\nNesting of the \u0026lt;fr:assertion\u0026gt; elements: the schema allows infinite nesting of the assertion element to accommodate nesting of an element within itself. Deposit code will only allow 3 levels of nesting (with attribute values of fundgroup, funder_name, and funder_identifier) Values of different \u0026lt;fr:assertion\u0026gt; elements: funder_name, funder_identifier, and award_number may have deposit rules imposed Only valid funder identifiers will be accepted: the funder_identifier value will be compared against the Open Funder Registry file. If the funder_identifier is not found, the deposit will be rejected. Deleting or updating funding metadata `` If funding metadata is incorrect or out-of-date, it may be updated by redepositing the metadata. Be sure to redeposit all available metadata for an item, not just the elements being updated.
A DOI may be updated without resubmitting funding metadata, as previously deposited funding metadata will remain associated with the DOI.\nFunding metadata may be deleted by redepositing an item with an empty \u0026lt;fr:program name=\u0026quot;fundref\u0026quot;\u0026gt; element:\n\u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;year\u0026gt;2011\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;15\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;fr:program name=\u0026#34;fundref\u0026#34; /\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/cm_test_1.1\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;https://0-www-crossref-org.libus.csd.mu.edu/crossmark/index.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; Funder metadata examples `` Example 1: Funder information with ROR id `` The \u0026lt;fr:program\u0026gt; element captures funding data. It should be placed before the \u0026lt;doi_data\u0026gt; element. This deposit contains minimal funding data - one ror must be present; it is recommended over using funder_name and funder_identifier.\n\u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/021nxhr62\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; Example 2: One funder, two grant numbers `` This example contains one funder_name and one funder_identifier. Note that the funder_identifier is nested within the funder_name assertion, establishing https://0-doi-org.libus.csd.mu.edu/10.13039.100000001 as the funder identifier for funder name National Science Foundation. Two award numbers are present.\n\u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100000001\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CBET-106\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CBET-7259\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; Example 3: Multiple funders and grant numbers `` This example contains one ror (for the National Science Foundation) and one funder_name/identifier (for Basic Energy Sciences, Office of Science, U.S. Department of Energy) with two award_numbers for each funder. Each funding organization is within its own fundgroup.\n\u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;ror\u0026#34;\u0026gt;https://ror.org/021nxhr62\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CBET-106\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CBET-7259\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;Basic Energy Sciences, Office of Science, U.S. 
Department of Energy \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.13039/100006151\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;1245-ABDS\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;98562-POIUB\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; ", "headings": ["Things to understand before you deposit ``","Funding data schema section ``","Funder and award number hierarchy ``","Deleting or updating funding metadata ``","Funder metadata examples ``","Example 1: Funder information with ROR id ``","Example 2: One funder, two grant numbers ``","Example 3: Multiple funders and grant numbers ``"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/funder-registry/funding-data-deposits/", "title": "Funding data deposits", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Funding metadata can be deposited with Crossref in two ways:\nIn a stand-alone deposit where just the funding metadata is provided. As part of the full set of metadata for an article. When funding data is successfully deposited, an inserted identifier will appear as a message (\u0026lt;msg\u0026gt;) within the submission log:\n\u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1016/j.apcatb.2018.04.081\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Inserted identifier: 501100001809 for name: \u0026#34;the National Natural Science Foundation of China\u0026#34; \u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; Learn more about the detailed rules for depositing funding metadata.", "content": "Funding metadata can be deposited with Crossref in two ways:\nIn a stand-alone deposit where just the funding metadata is provided. As part of the full set of metadata for an article. When funding data is successfully deposited, an inserted identifier will appear as a message (\u0026lt;msg\u0026gt;) within the submission log:\n\u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.1016/j.apcatb.2018.04.081\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Inserted identifier: 501100001809 for name: \u0026#34;the National Natural Science Foundation of China\u0026#34; \u0026lt;/msg\u0026gt; \u0026lt;/record_diagnostic\u0026gt; Learn more about the detailed rules for depositing funding metadata.\nStand-alone deposit Stand-alone deposits are intended as a convenience for depositing funding data to existing DOIs without having to repeat the existing metadata. The deposit XML file contains just the DOI of the article and the specific Funding data. Please note the following:\nIf the DOI currently has any funding data it will be replaced by the stand-alone deposited data. If the DOI currently has any Crossmark data, the stand-alone deposited funding data will be inserted within the existing (previously deposited) Crossmark data. When uploading stand-alone XML deposits to Crossref via HTTPS POST, the operation must be doDOICitUpload. When uploading using the admin tool, the file type must be DOI References/Resources. 
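For uploads by HTTPS POST rather than the admin tool, the request can be scripted. The sketch below is illustrative only and assumes the Python requests library; the deposit endpoint and form-field names (servlet/deposit, login_id, login_passwd, fname) are assumptions to verify against the current upload documentation, while doDOICitUpload is the operation value noted above. Credentials and the file name are placeholders.

import requests

DEPOSIT_URL = "https://doi.crossref.org/servlet/deposit"  # assumed HTTPS POST upload endpoint

def upload_standalone_funding(xml_path, user, password):
    # The operation must be doDOICitUpload for stand-alone (resource) deposits, as noted above.
    with open(xml_path, "rb") as fh:
        resp = requests.post(
            DEPOSIT_URL,
            data={
                "operation": "doDOICitUpload",
                "login_id": user,          # assumed field name for the account or role
                "login_passwd": password,  # assumed field name for the password
            },
            files={"fname": fh},           # assumed field name for the XML file itself
            timeout=120,
        )
    resp.raise_for_status()
    # Deposits are processed asynchronously; the submission log reports the result.
    return resp.text

print(upload_standalone_funding("funding_deposit.xml", "role", "password"))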
Review this sample or download an XML file.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.4.2\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.4.2\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.4.2 doi_resources4.4.2.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;arg_123_954\u0026lt;/doi_batch_id\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;pfeeney@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;fundref_data\u0026gt; \u0026lt;doi\u0026gt;10.32013/12345678\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; National Science Foundation \u0026lt;assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000001\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CHE-1152342\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;/fundref_data\u0026gt; \u0026lt;fundref_data\u0026gt; \u0026lt;doi\u0026gt;10.32013/879fk3\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;National Science Foundation \u0026lt;assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000001\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;award_number\u0026#34;\u0026gt;CHE-1152342\u0026lt;/assertion\u0026gt; \u0026lt;/assertion\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;/fundref_data\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Full metadata deposit Funding data may be deposited as part of a normal \u0026lsquo;full\u0026rsquo; metadata XML deposit for a DOI.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.4.0\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schema/4.4.0 http://0-www-crossref-org.libus.csd.mu.edu/schema/deposit/crossref4.4.0.xsd\u0026#34; xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;doi_batch_id\u0026gt;7ce0adc7155c63a5e2b-3ebc\u0026lt;/doi_batch_id\u0026gt; \u0026lt;timestamp\u0026gt;201610241300\u0026lt;/timestamp\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;depositor_name\u0026gt;Crossref Support\u0026lt;/depositor_name\u0026gt; \u0026lt;email_address\u0026gt;support@crossref.org\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;registrant\u0026gt;Crossref\u0026lt;/registrant\u0026gt; \u0026lt;/head\u0026gt; 
\u0026lt;body\u0026gt; \u0026lt;journal\u0026gt; \u0026lt;journal_metadata language=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;full_title\u0026gt;Applied Physics Letters\u0026lt;/full_title\u0026gt; \u0026lt;abbrev_title\u0026gt;Appl. Phys. Lett.\u0026lt;/abbrev_title\u0026gt; \u0026lt;issn media_type=\u0026#34;print\u0026#34;\u0026gt;00036951\u0026lt;/issn\u0026gt; \u0026lt;coden\u0026gt;APPLAB\u0026lt;/coden\u0026gt; \u0026lt;/journal_metadata\u0026gt; \u0026lt;journal_issue\u0026gt; \u0026lt;publication_date media_type=\u0026#34;print\u0026#34;\u0026gt; \u0026lt;month\u0026gt;09\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;10\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;101\u0026lt;/volume\u0026gt; \u0026lt;/journal_volume\u0026gt; \u0026lt;issue\u0026gt;11\u0026lt;/issue\u0026gt; \u0026lt;/journal_issue\u0026gt; \u0026lt;journal_article publication_type=\u0026#34;full_text\u0026#34;\u0026gt; \u0026lt;titles\u0026gt; \u0026lt;title\u0026gt;Total energy loss to fast ablator-ions and target capacitance of direct-drive implosions on OMEGA\u0026lt;/title\u0026gt; \u0026lt;/titles\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;N.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Sinenian\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;A. B.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Zylstra\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;M. J.-E.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Manuel\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;H. G.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Rinderknecht\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;J. A.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Frenje\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;F. H.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Séguin\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;C. K.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Li\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;R. 
D.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Petrasso\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;V.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Goncharov\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;J.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Delettrez\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;I. V.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Igumenshchev\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D. H.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Froula\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;C.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Stoeckl\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;T. C.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Sangster\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D. D.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Meyerhofer\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;J. A.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Cobble\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;D. 
G.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Hicks\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;publication_date media_type=\u0026#34;online\u0026#34;\u0026gt; \u0026lt;month\u0026gt;09\u0026lt;/month\u0026gt; \u0026lt;day\u0026gt;10\u0026lt;/day\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;/publication_date\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;114102\u0026lt;/first_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;crossmark\u0026gt; \u0026lt;crossmark_version\u0026gt;1\u0026lt;/crossmark_version\u0026gt; \u0026lt;crossmark_policy\u0026gt;10.1063/aip-crossmark-policy-page\u0026lt;/crossmark_policy\u0026gt; \u0026lt;crossmark_domains\u0026gt; \u0026lt;crossmark_domain\u0026gt; \u0026lt;domain\u0026gt;aip.org\u0026lt;/domain\u0026gt; \u0026lt;/crossmark_domain\u0026gt; \u0026lt;/crossmark_domains\u0026gt; \u0026lt;crossmark_domain_exclusive\u0026gt;true\u0026lt;/crossmark_domain_exclusive\u0026gt; \u0026lt;custom_metadata\u0026gt; \u0026lt;assertion name=\u0026#34;received\u0026#34; label=\u0026#34;Received\u0026#34; group_name=\u0026#34;publication_history\u0026#34; group_label=\u0026#34;Publication History\u0026#34; order=\u0026#34;0\u0026#34;\u0026gt;2012-07-31\u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;accepted\u0026#34; label=\u0026#34;Accepted\u0026#34; group_name=\u0026#34;publication_history\u0026#34; group_label=\u0026#34;Publication History\u0026#34; order=\u0026#34;1\u0026#34;\u0026gt;2012-08-28\u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;published\u0026#34; label=\u0026#34;Published\u0026#34; group_name=\u0026#34;publication_history\u0026#34; group_label=\u0026#34;Publication History\u0026#34; order=\u0026#34;2\u0026#34;\u0026gt;2012-09-10\u0026lt;/assertion\u0026gt; \u0026lt;fr:program name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; U.S. Department of Energy \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000015\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;DE-FG03-03SF22691\u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;DE-AC52-06NA27279\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; \u0026lt;/custom_metadata\u0026gt; \u0026lt;/crossmark\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.1063/1.4752012\u0026lt;/doi\u0026gt; \u0026lt;timestamp\u0026gt;20130806074500\u0026lt;/timestamp\u0026gt; \u0026lt;resource\u0026gt; http://0-scitation-aip-org.libus.csd.mu.edu/content/aip/journal/apl/101/11/10.1063/1.4752012 \u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/journal_article\u0026gt; \u0026lt;/journal\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; ", "headings": ["Stand-alone deposit ","Full metadata deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/funder-registry/updating-funding-deposits-with-new-registry-info/", "title": "Updating funding deposits with new registry info", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The Funder Registry is growing rapidly. 
New funders are continually being evaluated for inclusion, and the Registry is updated almost every month. Once added, a funder name is given a DOI, making it an official Registry entry. Check regularly and add funder IDs to existing records where no funder ID was available at the time of first deposit.\nUpdating deposited data to reflect registry changes `` If an appropriate identifier did not exist in the registry (or a match could not be made) at the time of deposit, metadata will need to be updated to include new identifiers and better registry metadata.", "content": "The Funder Registry is growing rapidly. New funders are continually being evaluated for inclusion, and the Registry is updated almost every month. Once added, a funder name is given a DOI, making it an official Registry entry. Check regularly and add funder IDs to existing records where no funder ID was available at the time of first deposit.\nUpdating deposited data to reflect registry changes `` If an appropriate identifier did not exist in the registry (or a match could not be made) at the time of deposit, metadata will need to be updated to include new identifiers and better registry metadata.\ngetFunders API `` We have a simple getFunders service to help you identify Funder Registry changes that affect your existing deposits. The getFunders service displays the funder information for a DOI (including Registry changes).\nFor example, DOI https://0-doi-org.libus.csd.mu.edu/10.1037/0735-7036.121.3.306 was registered on 18 May 2017 with the following funding data. This deposit identifies five sources of funding for the article, only one of which was identified with a registry DOI.\n\u0026lt;xref:publisher_item\u0026gt; \u0026lt;xref:identifier id_type=\u0026#34;other\u0026#34;\u0026gt;2007-11961-009\u0026lt;/xref:identifier\u0026gt; \u0026lt;xref:identifier id_type=\u0026#34;pmid\u0026#34;\u0026gt;17696657\u0026lt;/xref:identifier\u0026gt; \u0026lt;/xref:publisher_item\u0026gt; \u0026lt;fr:program xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; American Psychological Association \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100006324\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; American Association of University Women \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100005280\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;Soroptimist International\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt;SEASPACE\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; Walt Disney Company \u0026lt;fr:assertion 
name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100004795\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; \u0026lt;xref:doi_data\u0026gt; \u0026lt;xref:doi\u0026gt;10.1037/0735-7036.121.3.306\u0026lt;/xref:doi\u0026gt; \u0026lt;xref:resource\u0026gt; http://0-doi-apa-org.libus.csd.mu.edu/getdoi.cfm?doi=10.1037/0735-7036.121.3.306 \u0026lt;/xref:resource\u0026gt; The API call https://0-doi-crossref-org.libus.csd.mu.edu/getFunders?q=10.1037/0735-7036.121.3.306 shows these same five funders with additional Registry information (in JSON format)\n{ \u0026#34;fundedItemDOI\u0026#34;: \u0026#34;10.1037/0735-7036.121.3.306\u0026#34;, \u0026#34;funders\u0026#34;: [ { \u0026#34;asDeposited\u0026#34;: \u0026#34;American Psychological Association\u0026#34;, \u0026#34;depositedAsIdentifier\u0026#34;: \u0026#34;false\u0026#34;, \u0026#34;suggested\u0026#34;: [ { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100006324\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;American Psychological Association\u0026#34; }, { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100006324\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;American Psychological Association\u0026#34; }, { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100006324\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;American Psychological Association\u0026#34; } ] }, { \u0026#34;asDeposited\u0026#34;: \u0026#34;American Association of University Women\u0026#34;, \u0026#34;depositedAsIdentifier\u0026#34;: \u0026#34;false\u0026#34;, \u0026#34;suggested\u0026#34;: [ { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100005280\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;American Association of University Women\u0026#34; }, { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100005280\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;American Association of University Women\u0026#34; }, { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100005280\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;American Association of University Women\u0026#34; } ] }, { \u0026#34;asDeposited\u0026#34;: \u0026#34;Soroptimist International\u0026#34;, \u0026#34;depositedAsIdentifier\u0026#34;: \u0026#34;false\u0026#34;, \u0026#34;suggested\u0026#34;: [] }, { \u0026#34;asDeposited\u0026#34;: \u0026#34;SEASPACE\u0026#34;, \u0026#34;depositedAsIdentifier\u0026#34;: \u0026#34;false\u0026#34;, \u0026#34;suggested\u0026#34;: [] }, { \u0026#34;asDeposited\u0026#34;: \u0026#34;Walt Disney Company\u0026#34;, \u0026#34;depositedAsIdentifier\u0026#34;: 
\u0026#34;false\u0026#34;, \u0026#34;suggested\u0026#34;: [ { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100004795\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;Walt Disney Company\u0026#34; }, { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100004795\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;Walt Disney Company\u0026#34; }, { \u0026#34;isRegistryName\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;identifier\u0026#34;: \u0026#34;10.13039/100004795\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;http://sws.geonames.org/6252001/\u0026#34;, \u0026#34;prefLabel\u0026#34;: \u0026#34;Walt Disney Company\u0026#34; } ] } ] } This tells you that:\nSEASPACE was deposited without an identifier, and nothing in the registry has changed Walt Disney Company was deposited without an identifier, but there is a registry entry for The Walt Disney Company which has a registry DOI Soroptimist International was deposited without an identifier, and nothing in the registry has changed American Psychological Association was deposited without an identifier, but it does have a registry DOI American Association of University Women was deposited with a registry DOI. Depositing funding and license metadata using a .csv file` `` We support the deposit of funding and text and data mining license metadata in .csv format through our web deposit form - learn how to do a supplemental metadata upload using a .csv file.\n", "headings": ["Updating deposited data to reflect registry changes ``","getFunders API ``","Depositing funding and license metadata using a .csv file` ``"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/funder-registry/funder-data-via-the-api/", "title": "Funder data via the API", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Funding data can be accessed via the Crossref REST API. The API is openly available, and there is no requirement to register or be a Crossref member in order to use it. Funder IDs are DOIs that share a common prefix, so for API queries, only the DOI suffix should be used. Learn more about the structure of a DOI.\nUseful queries `` All funders listed in the Open Funder Registry - https://api.", "content": "Funding data can be accessed via the Crossref REST API. The API is openly available, and there is no requirement to register or be a Crossref member in order to use it. Funder IDs are DOIs that share a common prefix, so for API queries, only the DOI suffix should be used. Learn more about the structure of a DOI.\nUseful queries `` All funders listed in the Open Funder Registry - https://0-api-crossref-org.libus.csd.mu.edu/funders Find the funder ID for a specific funding body - https://0-api-crossref-org.libus.csd.mu.edu/funders?query={name}. For example: https://0-api-crossref-org.libus.csd.mu.edu/funders?query=wellcome List of DOIs associated with a specific funder - https://0-api-crossref-org.libus.csd.mu.edu/funders/{funder ID}/works. For example: https://0-api-crossref-org.libus.csd.mu.edu/funders/100004440/works Metadata for DOIs that cite a specific award/grant number - https://0-api-crossref-org.libus.csd.mu.edu/works?filter=award.number:{grant number}. 
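The getFunders response above can be processed with a few lines of code to flag deposits that could now be updated, and the funders name-search query from the REST API (funders?query={name}, listed in the useful queries above) can suggest candidates for funders that still have no match. A minimal sketch, assuming the JSON shape shown in the sample above and using the plain doi.crossref.org and api.crossref.org hosts (the URLs on this page are shown through a library proxy); illustrative only, not an official client.

    # Minimal sketch: flag funders deposited without a Registry ID that now have one,
    # and run a name search for funders with no suggestion at all.
    import requests

    GETFUNDERS = "https://doi.crossref.org/getFunders"
    REST_FUNDERS = "https://api.crossref.org/funders"

    def check_deposited_funders(doi):
        """Report Registry suggestions for the funders deposited against one DOI."""
        data = requests.get(GETFUNDERS, params={"q": doi}, timeout=30).json()
        for funder in data.get("funders", []):
            name = funder["asDeposited"]
            suggested = {s["identifier"] for s in funder.get("suggested", [])}
            if funder.get("depositedAsIdentifier") == "false" and suggested:
                print(f"{name}: deposited without an ID, registry now suggests {sorted(suggested)}")
            elif not suggested:
                # Nothing suggested yet - try a Registry name search for a possible new match.
                hits = requests.get(REST_FUNDERS, params={"query": name, "rows": 3}, timeout=30).json()
                candidates = [(h["name"], h["id"]) for h in hits["message"]["items"]]
                print(f"{name}: no suggestion from getFunders; name-search candidates: {candidates}")

    check_deposited_funders("10.1037/0735-7036.121.3.306")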
For example: https://0-api-crossref-org.libus.csd.mu.edu/works?filter=award.number:CBET-0756451 Learn more in our REST API documentation, including detailed instructions on constructing further API queries.\n", "headings": ["Useful queries ``"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/participate/", "title": "How to participate in Similarity Check", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "When you apply for the Similarity Check service, you must ensure you have full-text URLs for Similarity Check present in the metadata of at least 90% of your registered articles (across all your journal prefixes). These URLs will be used by Turnitin to index your content into the iThenticate database, making you eligible for reduced-rate access to iThenticate through the Similarity Check service.\nThe URLs must point directly to your full-text PDF, HTML, or plain text content, and you must continue to include these links in all future deposits.", "content": "When you apply for the Similarity Check service, you must ensure you have full-text URLs for Similarity Check present in the metadata of at least 90% of your registered articles (across all your journal prefixes). These URLs will be used by Turnitin to index your content into the iThenticate database, making you eligible for reduced-rate access to iThenticate through the Similarity Check service.\nThe URLs must point directly to your full-text PDF, HTML, or plain text content, and you must continue to include these links in all future deposits. If you aren’t registering any journal articles and instead are registering other record types (such as conference papers), please contact us.\nThe metadata you deposit with Crossref is available to be searched and retrieved by everyone, and this includes Similarity Check full-text URLs. If your content is paywalled, please make sure that your Similarity Check URLs prompt an authentication step before allowing a user to access full-text content. You’ll also need to ensure that your hosting provider has safelisted the Turnitin IP range to ensure that the content is available for them to index.\nWhere should Similarity Check URLs point? These URLs will be used to index your content, so they need to resolve directly to the content itself - the full-text PDF, HTML or plain text content. PDFs in a frame can\u0026rsquo;t be indexed, and neither can content that\u0026rsquo;s wrapped in javascript. The URL must point directly to the location of the full-text content, and not to the article landing page (even if the content is available via a link on that page). Most members supply the PDF download link.\nLearn more about how to include these full-text URLs in your new deposits or add them to content that you’ve previously registered.\nSafelisting the Turnitin IP address Once you\u0026rsquo;ve added your Similarity Check URLs to your metadata, the Turnitin indexing crawler will index your content. If your content is openly available, the crawler will be able to access and index your content without further work on your side. But if your content is protected by authentication, you may need to safelist Turnitin\u0026rsquo;s IP address and UserAgent so they can do this.\nIf your content is protected by authentication, please ask your hosting provider to safelist the following IP address and UserAgent:\nIP address range: 199.47.87.132 to 199.47.87.135 AND 199.47.82.0 to 199.47.82.15. 
UserAgent: TurnitinBot/ContentIngest (http://www.turnitin.com/robot/crawlerinfo.html)\n", "headings": ["Where should Similarity Check URLs point? ","Safelisting the Turnitin IP address "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/participate/eligibility/", "title": "Checking your eligibility and applying", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "Update 2024: We are no longer able to offer the Similarity Check service to members based in Russia. Find out more.\nSimilarity Check is only available to Crossref members who have full-text URLs for Similarity Check present in the metadata of at least 90% of their registered articles (across all journal prefixes). These URLs must point directly to the full-text PDF, HTML, or plain text content and if your content is behind authentication, you need to safelist the Turnitin IP address.", "content": "Update 2024: We are no longer able to offer the Similarity Check service to members based in Russia. Find out more.\nSimilarity Check is only available to Crossref members who have full-text URLs for Similarity Check present in the metadata of at least 90% of their registered articles (across all journal prefixes). These URLs must point directly to the full-text PDF, HTML, or plain text content and if your content is behind authentication, you need to safelist the Turnitin IP address. Learn more about how to participate.\nYou can check the percentage of Similarity Check URLs already included in your metadata using the widget below - just start typing your account name in the box, select the correct one from the list, and your result will be automatically calculated. Don’t worry, you don’t need to know your Member ID.\nIf you do meet the threshold, the widget will link you to a form where you can apply for Similarity Check and click to accept the service terms. If you don’t meet the threshold, the widget will provide a .csv file to download which shows all your DOIs that don’t have Similarity Check full-text URLs. It will also provide instructions for how to add the missing full-text URLs.\nMember Name\rMember ID\rGood news - you’re eligible to apply for our Similarity Check service. You can now apply here and accept the service terms.\nOnce we receive your application, we’ll work with the team at Turnitin to confirm that they can access your content. If they can\u0026rsquo;t access your content, we won\u0026rsquo;t be able to continue with your application until this problem is solved, but we\u0026rsquo;ll work with you to fix any issues. Don’t forget, if your full-text content is protected by authentication, then you\u0026rsquo;ll need to ask your hosting provider to safelist Turnitin’s IP range to ensure your content is accessible for indexing purposes. Do make sure that this is done before you apply.\nWe’ll also send you a pro-rated invoice for your first year subscription to the service.\nWe’re sorry, but you are not eligible to register for our Similarity Check service just yet. To be eligible you need to register full-text urls for Similarity Check pointing to the full text article for more than 90% of your content. Find out how to add these full-text urls for Similarity Check to your existing content. To see exactly which content items are missing the full text article, simply click the “Generate CSV” button above. Please note, this CSV can only show the first 10k content items. 
If you have more than 10k content items missing Similarity Check URLs, please contact us and we'll be able to provide you with the full list. We’re sorry, but you are not eligible to register for our Similarity Check service just yet. It looks as though you haven’t registered any content with us yet. To be eligible for Similarity Check you need to be registering content with us and including URLs for Similarity Check that point to the full text article for at least 90% of your content. Find out more about how to register content and learn more about full-text URLs for Similarity Check. Please note - if you aren\u0026rsquo;t registering ANY journal articles at all and are only registering non-journal content (e.g. conference papers, books) then the tool may not give accurate results. Please contact our support team for more help.\nUnfortunately, you aren’t currently eligible for the Similarity Check service as you don’t have Similarity Check URLs in the metadata of over 90% of your content. As you have so many DOIs registered, we can’t provide a CSV file showing which DOIs are missing the Similarity Check URLs. Please contact our support team and they’ll be able to provide you with the CSV file. ", "headings": ["Running for [[membername]] (ID:[[memberid]])","Running for [[membername]] (ID:[[memberid]])","Results for [[membername]] (ID:[[memberid]])","[[simCheckTotal]]/[[journalTotal]] = [[percentage]]%"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/participate/urls-for-new-deposits/", "title": "Adding full-text URLs to new deposits", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "If you’re planning to participate in Similarity Check in the future, it’s best to include Similarity Check URLs in your deposits from the start. You can deposit Similarity Check URLs using our helper tools, or by including them in your XML deposits.\nHelper tools Our helper tools all contain a specific field where you can add your full-text URL specifically for Similarity Check:\nCrossref XML plugin for OJS: OJS automatically includes the Similarity Check URL as part of your deposit.", "content": "If you’re planning to participate in Similarity Check in the future, it’s best to include Similarity Check URLs in your deposits from the start. You can deposit Similarity Check URLs using our helper tools, or by including them in your XML deposits.\nHelper tools Our helper tools all contain a specific field where you can add your full-text URL specifically for Similarity Check:\nCrossref XML plugin for OJS: OJS automatically includes the Similarity Check URL as part of your deposit. Web deposit form: select Add Similarity Check URL. Still using the deprecated Metadata Manager? Add under Additional information. Direct XML deposit The full-text URL can be included as part of your standard metadata. For Similarity Check, the full-text URL needs to be deposited within the crawler-based collection property, with item crawler iParadigms. 
Here\u0026rsquo;s an example:\n\u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/sampledoi\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://www.yoururl.org/article1_.html\u0026lt;/resource\u0026gt; \u0026lt;collection property=\u0026#34;crawler-based\u0026#34;\u0026gt; \u0026lt;item crawler=\u0026#34;iParadigms\u0026#34;\u0026gt; \u0026lt;resource\u0026gt;http://www.yoururl.org/article1_.html\u0026lt;/resource\u0026gt; \u0026lt;/item\u0026gt; \u0026lt;/collection\u0026gt; \u0026lt;/doi_data\u0026gt; Use our widget to check the percentage of Similarity Check URLs included in your metadata.\n", "headings": ["Helper tools ","Direct XML deposit "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/participate/urls-for-existing-deposits/", "title": "Adding full-text URLs to existing deposits", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "If you\u0026rsquo;ve previously registered content without including your full-text URLs for Similarity Check, don\u0026rsquo;t worry - you can still add them later. Here are the options for adding full-text URLs for Similarity Check for existing deposits:\nUse the web deposit form’s supplemental metadata upload using a .csv file option If you have a large number of DOIs to update, it\u0026rsquo;s easiest to upload a .csv file of the DOIs and their Similarity Check full-text URLs using the web deposit form\u0026rsquo;s supplemental metadata upload using a .", "content": "If you\u0026rsquo;ve previously registered content without including your full-text URLs for Similarity Check, don\u0026rsquo;t worry - you can still add them later. Here are the options for adding full-text URLs for Similarity Check for existing deposits:\nUse the web deposit form’s supplemental metadata upload using a .csv file option If you have a large number of DOIs to update, it\u0026rsquo;s easiest to upload a .csv file of the DOIs and their Similarity Check full-text URLs using the web deposit form\u0026rsquo;s supplemental metadata upload using a .csv file option.\nUpload a resource-only deposit If you deposit Crossref metadata by sending us the XML directly, you may wish to update your existing XML using a resource-only deposit, as in this example XML file. Learn how to upload resource-only deposits to add metadata to an existing record.\nA full redeposit (update) Run a standard metadata deposit by adding the Similarity Check URLs, as in this example. Don\u0026rsquo;t forget to update your timestamp!\nUse our widget to check the percentage of Similarity Check URLs included in your metadata.\n", "headings": ["Use the web deposit form’s supplemental metadata upload using a .csv file option ","Upload a resource-only deposit ","A full redeposit (update) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-setup/", "title": "Setting up your iThenticate v1 account (admins only)", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators using iThenticate v1. You need to follow the steps in this section before you start to set up your users and share the account with your colleagues.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. 
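The doi_data block from the direct XML deposit example above (with the crawler-based collection and its iParadigms item) can also be generated programmatically when preparing many records. A minimal sketch using the placeholder DOI and URLs from that example; the element names and attribute values are exactly those shown there, everything else is illustrative.

    # Minimal sketch: build the doi_data block from the Similarity Check example above.
    import xml.etree.ElementTree as ET

    def doi_data_with_similarity_check(doi, landing_url, fulltext_url):
        """Return a <doi_data> element with a crawler-based resource for iParadigms."""
        doi_data = ET.Element("doi_data")
        ET.SubElement(doi_data, "doi").text = doi
        ET.SubElement(doi_data, "resource").text = landing_url
        collection = ET.SubElement(doi_data, "collection", {"property": "crawler-based"})
        item = ET.SubElement(collection, "item", {"crawler": "iParadigms"})
        ET.SubElement(item, "resource").text = fulltext_url
        return doi_data

    example = doi_data_with_similarity_check(
        "10.5555/sampledoi",
        "http://www.yoururl.org/article1_.html",
        "http://www.yoururl.org/article1_.html",
    )
    print(ET.tostring(example, encoding="unicode"))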
Integrating iThenticate v2 with your MTS - go to setting up your MTS integration.", "content": "This section is for Similarity Check account administrators using iThenticate v1. You need to follow the steps in this section before you start to set up your users and share the account with your colleagues.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here.\nYour personal administrator account in iThenticate v1 Once Turnitin has enabled iThenticate v1 for your organization, the main editorial contact provided on your application form will become the iThenticate account administrator. As an administrator, you create and manage the users on your account, and you decide how your organization uses the iThenticate tool.\nTo start with, you need to login to iThenticate and set your password.\nLog in to your administrator account (v1) Start from the link in the invitation email from noreply@ithenticate.com with the subject line “Account Created” and click Login Enter your username and single-use password Click to agree to the terms of the end-user license agreement. These terms govern your personal use of the service. They’re separate from the central Similarity Check service agreement that your organization has agreed to. You will be prompted to choose a new password Click ​Change Password​ to save. How do you know if you’re an account administrator? Once you\u0026rsquo;ve logged in, you will only be able to see the Manage Users tab if you\u0026rsquo;re an account administrator.\nShow image × So if you can\u0026rsquo;t see Manage Users or Users, you’re not an account administrator, and you can skip ahead to the user instructions for iThenticate v1.\nUpdating your personal email address or password Changing your email address or updating your password is the same for admins and other users. There\u0026rsquo;s more information in the user instructions for iThenticate v1.\n", "headings": ["Your personal administrator account in iThenticate v1","Log in to your administrator account (v1) ","How do you know if you’re an account administrator?","Updating your personal email address or password"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-setup/administrator-checklist/", "title": "Administrator checklist", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators using iThenticate v1.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here.\nNot sure whether you\u0026rsquo;re an account administrator?", "content": "This section is for Similarity Check account administrators using iThenticate v1.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. 
Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here.\nNot sure whether you\u0026rsquo;re an account administrator? Check here.\nAdministrator Checklist for iThenticate v1 As an administrator, you create and manage the users on your account, and you decide how your organization uses the iThenticate tool. You’ll find the system easier to use if you set it up correctly to start with, so do read through the checklist below carefully and make sure you\u0026rsquo;ve set up your account how you want it before inviting any users to your account.\nHow do you want to manage users? How will you use the folders in iThenticate? How will you use the exclusions functionality? Which iThenticate repositories will you want to check your manuscripts against? How will you budget for your document checking fees? 1. How do you want to manage users? (v1) You can set up users individually, or put them into groups and manage their settings at group level.\nHow many users do you need to set up? Do you need them to have different permissions or access to different folders? Do you want users to be able to see each other’s folders? Do you want to set up groups to manage the users? Do you want to be the only account administrator, or do you want to add other administrators? Learn more about how to manage users.\n2. How will you use the folders in iThenticate? (v1) If you set up different folders in iThenticate to manage the manuscripts you’re checking, you’ll be able to:\nAssign different users or groups to each folder Set up different exclusions on each folder Choose which repositories to include in each folder Report separately on different folders. You may choose to set up different folders for different titles or years of publication, for example.\nLearn more about how to manage folders.\n3. How will you use the exclusions functionality? (v1) Exclusions allow you to set iThenticate to ignore particular phrases, document sections, common words, and URLs, so that they are not flagged in your account’s Similarity Reports.\nWe recommend starting without any exclusions to avoid excluding anything important. Once your users are experienced enough to identify words and phrases that appear frequently but are not potentially problematic matches (and can therefore be ignored) in a Similarity Report, you can start carefully making use of this feature.\nAt account level, administrators can set phrase exclusions and URL filters. At folder level, administrators can exclude quotes, bibliography, phrases, small matches, small sources, abstracts, and methods and materials. Users can also edit filters and exclusions for existing folders. Users can set exclusions when setting up a new folder and adjust some settings at Similarity Report level. Set clear guidelines for your users so they understand the settings you have already applied, and can make skilful use of the options they can choose for themselves at report level.\n4. Which iThenticate repositories will you want to check your manuscripts against? (v1) iThenticate has a number of content repositories, grouped by the type of content they contain, including: Crossref, Crossref posted content, Internet, Publications, Your Indexed Documents.\nYou can choose which of iThenticate’s repositories you’re checking your manuscripts against. 
We recommend including them all to start with.\nThe person (whether an administrator or a user) who sets up a folder selects the repositories to check against for that folder. When the folder is shared, other users cannot adjust the repositories selected. Learn more about choosing which repositories to search against.\n5. How will you budget for your document checking fees? (v1) There’s a charge for each document checked, and you’ll receive an invoice in January each year for the documents you’ve checked in the previous year. If you\u0026rsquo;re a member of Crossref through a Sponsor, your Sponsor will receive this invoice.\nAs well as setting a Similarity Check document fees budget for your account each year, it’s useful to monitor document checking and see if you’re on track. You can monitor your usage in the reports section of the iThenticate platform. Ask yourself:\nHow many documents do you plan to check? How often do you want to monitor usage? Set yourself a reminder to check your usage reports periodically. How do you want to segment your report? You can report separately by groups of users, so think about what types of groups would make sense for your circumstances. Learn more about how usage reports can help you monitor the number of documents checked on your account.\nIt’s a good idea to come back to these questions periodically, consider how your use of the tool is evolving, and make changes accordingly.\n", "headings": ["Administrator Checklist for iThenticate v1 ","1. How do you want to manage users? (v1) ","2. How will you use the folders in iThenticate? (v1) ","3. How will you use the exclusions functionality? (v1) ","4. Which iThenticate repositories will you want to check your manuscripts against? (v1) ","5. How will you budget for your document checking fees? (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-setup/settings/", "title": "Admin settings", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "This section shows Similarity Check account administrators using iThenticate v1 how to update their account admin settings. You need to follow the steps in this section before you start to set up your users and share the account with your colleagues.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2.", "content": "This section shows Similarity Check account administrators using iThenticate v1 how to update their account admin settings. You need to follow the steps in this section before you start to set up your users and share the account with your colleagues.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here.\nNot sure whether you\u0026rsquo;re an account administrator? Check here.\nThe Settings tab controls general, document, and report display options. 
These options include the number of documents shown for each page, default report view, and controlling email notifications.\nGeneral settings (v1) Use General settings to set your home folder - this is the folder that will open by default when you log in to iThenticate. Choose your home folder from the drop-down menu.\nFrom the Number of documents to show drop-down, choose how many uploaded documents are listed in your folders before a new page is created.\nChoose what is displayed after you upload a document to iThenticate: Display the upload folder (to see the processing of the document you have just uploaded), or Upload another document (returns you to the upload form).\nYou can also choose the time zone and language for your account - the language you choose will set the language of your user interface.\nClick Update Settings to save your changes.\nDocuments settings (v1) Use Documents settings to choose the default way iThenticate sorts your uploaded documents: by processed date, title, Similarity Score, or author. Choose your preferred option from the drop-down menu.\nYou can set the threshold at which the Similarity Score color changes, based on the percentage of similarity. All Similarity Scores above the percentage you set will appear in the folder in blue, all those beneath the percentage will appear in gray. This visual distinction helps you easily identify matches above a given threshold. Learn more about how to interpret the Similarity Score.\nClick Update Settings to save your changes.\nReports settings (v1) Use Reports settings to adjust your email notifications, choose whether to color-code your reports, and view available document repositories for your account.\nEmail notifications tell you when a Similarity Report has exceeded particular thresholds, including Similarity Reports in shared folders. Email notifications are sent to the email address you used to sign up to iThenticate.\nReport email frequency: choose whether to receive notifications and how often you would like to receive them: every hour, once a day, every other day, or once a week. Similarity Report threshold: this refers to a paper’s overall Similarity Score. If the Similarity Score of a paper in your account exceeds the threshold set, you will receive an email notification. The default setting is \u0026lsquo;don\u0026rsquo;t notify me\u0026rsquo;. Content tracking report threshold: this refers to the All Sources section of the Similarity Report. If a single source for a paper in your account exceeds the similarity threshold set, you will receive an email notification. The default setting is \u0026lsquo;don\u0026rsquo;t notify me\u0026rsquo;. Color code report: color-coding the Similarity Report can make viewing matches easier. Choose Yes or No to enable or disable this feature.\nAvailable document repositories: this section shows the available repositories for your account. 
Modify them in the folder settings.\n", "headings": ["General settings (v1) ","Documents settings (v1) ","Reports settings (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-setup/account-info/", "title": "Account information", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators using iThenticate v1.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here.\nNot sure whether you\u0026rsquo;re an account administrator?", "content": "This section is for Similarity Check account administrators using iThenticate v1.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here.\nNot sure whether you\u0026rsquo;re an account administrator? Check here.\nManage your admin account Manage your admin account using the Account Information tab. From here, you can make changes to your details in My Profile, set up URL filters and phrase exclusions across the whole account, and set up API access to connect your iThenticate account to your manuscript submission system.\nShow image × Your admin account profile (v1) The Account Information section shows important information about your iThenticate account, including your account name, account ID, and user ID. Please ignore the iThenticate account expiry date - we’re working with iThenticate to have this removed. The iThenticate account expiry date is set to 1 June 2022 by default.\nShow image × From Account Info, then My Profile, you can:\nUpdate your profile: this form shows your current details. To make changes, enter your password in the Current Password field at the top of the form. Change the name attributed to your account: enter the first and last name in the relevant fields. These fields are required, you cannot leave them blank. Change your email address: enter your email into the email field. This email address is used to send you important account information, so please make sure it is valid. This field is required, you cannot leave it blank. Add a photo to your account: click Choose File, and select the image file you want to upload. Change your password: enter your current password in the Current Password field, enter your new password in the Change Password field, and enter it again in the Confirm Password field. Click Update Profile to save your changes. URL filters (v1) This tab only appears if you are an account administrator.\nUse URL filters to apply URL exclusion filters across your account. Any URLs that you add here will be ignored when the system checks your manuscript against the iThenticate database, and it will apply across your whole account. If you want to let individual users decide which URLs to exclude instead, they can do this themselves at folder level.\nURL filters at the account level works in the same way as at the folder level. 
Learn more about exclusion settings when setting up a new folder, editing filters and exclusions in existing folders, filters and exclusions within the Similarity Report, and URL filters and phrase exclusions for account administrators.\nAdd a URL to be filtered, and click Add URL. Don’t forget to include / at the end of your URL. Click the X icon to the right of the URL to remove it.\nShow image × Phrase exclusions (v1) This tab only appears if you are an account administrator.\nUse Phrase Exclusions to apply phrase exclusion filters across your account. Any phrases that you add here will be ignored when the system checks your manuscript against the iThenticate database, and it will apply across your whole account. If you want to let individual users decide which phrases to exclude instead, they can do this themselves at folder level.\nPhrase exclusions at the account level works in the same way as at the folder level. Learn more about exclusion settings when setting up a new folder, editing filters and exclusions in existing folders, filters and exclusions within the Similarity Report, and URL filters and phrase exclusions for account administrators.\nClick Add a new phrase, enter the phrase you would like to exclude in the Phrase text field, and click Create. You can add another phrase, go Back to List, or go Back to Account.\nShow image × From the main Phrase Exclusions page, you can view, edit, or remove a phrase.\nAPI access (v1) This tab only appears if you are an account administrator.\nIf you want to connect your iThenticate account to your manuscript submission system, you can do this using the API. Once connected, you’ll be able to submit manuscripts for checking from within your manuscript submission system and see limited results. However, you\u0026rsquo;ll need to visit the iThenticate website to explore the results further.\nYou’ll need to contact iThenticate to set up access to the iThenticate API. Once your account has API access enabled, you’ll see the API Access IP addresses option under Account Info.\nShow image × Use the IP addresses field to specify the IP address ranges that are allowed access to your account. Talk to your manuscript submission system contact for details of what to include here.\nUse the special address 0.0.0.0 to allow access from any IP address. Enter addresses individually, or in Classless Inter-Domain Routing (CIDR) format, such as 192.68.2.0/24. Add multiple addresses by separating them with a space.\nLearn more about the technical reference specification for the iThenticate API.\n", "headings": ["Manage your admin account","Your admin account profile (v1) ","URL filters (v1) ","Phrase exclusions (v1) ","API access (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-setup/manage-users/", "title": "Manage users", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "This section shows Similarity Check account administrators using iThenticate v1 how to set up their users.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. This tab only appears if you are an account administrator. 
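As a small illustration of the formats accepted by the API Access IP addresses field described above (individual addresses or CIDR blocks separated by spaces, with the special value 0.0.0.0 meaning any address may connect), here is a sketch of how such a list can be interpreted. It is not part of iThenticate, just a way to sanity-check entries before saving them.

    # Minimal sketch: interpret a space-separated IP allow-list as described above.
    import ipaddress

    def is_allowed(client_ip, allow_list):
        """Return True if client_ip falls within any entry of the allow-list string."""
        entries = allow_list.split()
        if "0.0.0.0" in entries:
            return True  # special value from the documentation: any IP address is allowed
        ip = ipaddress.ip_address(client_ip)
        return any(ip in ipaddress.ip_network(entry, strict=False) for entry in entries)

    print(is_allowed("192.68.2.17", "192.68.2.0/24"))           # True
    print(is_allowed("203.0.113.9", "192.68.2.0/24 10.0.0.5"))  # False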
If you can’t see this tab, please start from your Similarity Check user account.", "content": "This section shows Similarity Check account administrators using iThenticate v1 how to set up their users.\nIf you are using iThenticate v2 rather than iThenticate v1, there are separate instructions for you.\nUsing iThenticate v2 directly in the browser - go to setting up iThenticate v2. Integrating iThenticate v2 with your MTS - go to setting up your MTS integration. This tab only appears if you are an account administrator. If you can’t see this tab, please start from your Similarity Check user account.\nShow image × On this page, learn more about:\nProfiles - add, remove and update users on your account. Groups - create groups and add users Reports - access statistics for your account and help plan your budget Sharing preferences - manage what sharing options are available to your users Email - customise your welcome email Profiles (v1) Within user profiles, you can:\nAdd a user: from Add User, click Add New User. On the User Information form, enter the new user\u0026rsquo;s details. Use the Reporting Group drop-down to assign them to a reporting group. Choose a time zone and language (this will be the language they see in the user interface and welcome email). You may upload an image to be attributed to the user - click Choose File to choose an image file from your device. Under the User Permissions section, choose whether this user may: submit documents or only be a reader of shared documents, select a reporting group to which to assign documents when uploading, share their folders with other users, update their profile information, and whether you would like to make this user an account administrator. Click Create to add the user to the iThenticate account. Add a list of users: from Add User, choose Upload User List. To see an example of a correctly-formatted user list, click examples. Click Browse, choose your file, and click Upload. Click View profile to adjust the settings for each user. Edit a user\u0026rsquo;s information: click Edit to the right of the user\u0026rsquo;s email to make changes to a user’s details and permissions Resend an activation email: when a new user is added, they are sent an activation email. To resend their activation email, click Send Activation. Deactivate a user: from the User Information page, click Deactivate User. A deactivated user may no longer log in to iThenticate, but all files associated with them are retained, and still viewable by administrators. Click Activate User to reactivate a user and restore their access to the account and all of their submitted documents and folders. Delete a user: from the User Information page, click Delete User to permanently delete this user from the account. Once a user has been deleted, all the documents they submitted are no longer accessible by the account administrator or shared users. If you accidentally delete a user, click undo in the banner beneath the top menu. If you navigate away from the page, Delete User cannot be undone. Search for a user: enter the user’s name into the search field and click Search. Groups (v1) Use Groups to create reporting groups and add users to groups. By grouping users, you can track usage statistics of a group.\nTo create a new group, enter a name for the new group in the Add New Group field and click Add Report Group. Show image × Add users to the group by going to the user’s profile, and use the Reporting Group drop-down menu to add them to a group. 
Delete a reporting group from your account by clicking the X icon to the right of the group name, and clicking OK to confirm. Change the name of a group by clicking the group’s name, editing the Update Group Name field, and clicking Update Group Name to save the new group name. Reports (v1) Under the Reports tab, you can access statistics for your account, reporting groups, and individual account users.\nView usage statistics by user/group, month, or date range. Click a group name to see more detailed usage statistics for the users in that group. Click a user’s name within a group to see their individual usage report, including document submissions, page count per month, and total submissions made. Click your organization name to see your organization’s usage report, including statistics of all submissions by all account users. This will help you budget for the per-document invoice you’ll receive each January for the documents you’ve checked in the previous year. Learn more about fees for Similarity Check. Click change by a report’s date range to change the date range. Enter dates in YYYY-MM-DD format or click the calendar icon to choose a date, then click Update Date Range. Please note that the report will display a maximum of 150,000 lines/submissions. If your volume of submissions checked is higher than this for the time period you\u0026rsquo;ve entered, you\u0026rsquo;ll need to adjust the date range to smaller increments. Reporting on estimated usage for budgeting (v1) Each January you\u0026rsquo;ll be invoiced two separate fees for Similarity Check. There\u0026rsquo;s the annual service fee (which is included in your annual membership invoice) and your annual per document checking fees for all the documents you\u0026rsquo;ve checked in the previous year.\nWe know it’s difficult to keep an eye on how many documents you’re checking, particularly if you have more than one person at your organization using the service. However, do monitor your usage against the budget you set for Similarity Check. As the account administrator, you can keep up-to-date with how many documents have been checked in the Reports section under Manage Users. This can help you to estimate what you\u0026rsquo;ll be invoiced at the end of the year.\nOnce in the Reports section under Manage Users, click Set Date Range, choose your date range, and then click Update Date Range.\nWhat you see next will depend on how you’ve set up your iThenticate account.\nAll accounts will see an orange link in the name of your account, with the number of submissions and documents checked in the selected date range next to it. You can drill down into more information by clicking on the orange link - this will show documents checked by month across your account, split up by individual users.\nIf you’ve created groups, you’ll see a list of your groups with the number of submissions and documents checked in the date range for each group. You can drill down into more information by clicking on each group.\nThe difference between submissions and documents and why this report is just an estimate (v1) There are two key columns on this table - Submissions and Document count.\nThe Submissions column shows the number of files you’ve submitted in iThenticate in your chosen date range, and the Document count shows how many documents these submissions are counted as. Some submissions include files that are so large that they\u0026rsquo;re considered two or more documents. 
Your per document fees invoice will be based on the Document count column.\nWhile this report provides an estimate for the per document fees invoice you\u0026rsquo;ll receive in January, it won\u0026rsquo;t be an exact match. For example, we don\u0026rsquo;t charge you for the first 100 documents you check each year, and we try to avoid charging you if you accidentally submit the same document within a 24 hour period. You can find out more about these differences in our billing section.\nSharing preferences (v1) From the Sharing tab, choose the type of sharing you would like to have for your account:\nView only folders shared by other users (default) View ALL users\u0026rsquo; folders View folders of selected users To change the sharing type, select your preferred sharing type and click Update Sharing.\nIf you select the View folders of selected users option, you must also choose the users’ folders to be shared - to select a user, click the check-box next to their name, and click Update Sharing.\nSet which non-administrator users may share folders by adjusting their permissions (learn more about user profiles.\nCustomize welcome email (v1) A welcome email is sent to new users you add to your account. To customize this welcome message, start from the Email tab.\nThe customized message is prefixed to the automated email, but does not replace it. The text of the automated email cannot be changed, as it contains important information about your account.\nEdit the Custom Email Subject and Custom Message fields as you wish, and click Set Custom Message. The Example \u0026ldquo;Welcome\u0026rdquo; Email Message will update to show you a preview of the welcome email.\n", "headings": ["Profiles (v1) ","Groups (v1) ","Reports (v1) ","Reporting on estimated usage for budgeting (v1) ","The difference between submissions and documents and why this report is just an estimate (v1) ","Sharing preferences (v1) ","Customize welcome email (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-account-setup/", "title": "Setting up your iThenticate v2 account for use directly in the browser (admins only)", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators only. It explains how administrators need to set up the iThenticate v2 account for their organizations if they are planning to use iThenticate in the browser. You need to follow the steps in this section before you start to set up your users and share the account with your colleagues.\nIf you are using iThenticate v1 rather than iThenticate v2, take a look at the section for v1 account administrators.", "content": "This section is for Similarity Check account administrators only. It explains how administrators need to set up the iThenticate v2 account for their organizations if they are planning to use iThenticate in the browser. You need to follow the steps in this section before you start to set up your users and share the account with your colleagues.\nIf you are using iThenticate v1 rather than iThenticate v2, take a look at the section for v1 account administrators. If you intend to use iThenticate v2 through an integration with your Manuscript Submission System (MTS) instead, go to setting up your MTS integration. 
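The estimate described in the Reports section above can also be roughed out from a list of submissions for the year. A minimal sketch under stated assumptions: the data shape is hypothetical, accidental duplicates are approximated as resubmissions of the same title within 24 hours, and the first 100 documents each year are treated as free, as noted above; the real invoice may differ.

    # Minimal sketch (hypothetical data shape): estimate billable documents for one year,
    # skipping likely accidental resubmissions within 24 hours and the 100-document free allowance.
    from datetime import timedelta

    def estimate_billable_documents(submissions, free_allowance=100):
        """`submissions`: list of dicts with 'title', 'submitted_at' (datetime), 'documents' (int)."""
        last_seen = {}  # title -> datetime of the last counted submission
        documents = 0
        for sub in sorted(submissions, key=lambda s: s["submitted_at"]):
            previous = last_seen.get(sub["title"])
            if previous and sub["submitted_at"] - previous < timedelta(hours=24):
                continue  # assume an accidental resubmission, which is not charged
            last_seen[sub["title"]] = sub["submitted_at"]
            documents += sub["documents"]  # large files may count as two or more documents
        return max(documents - free_allowance, 0)

Multiplying the result by the current per-document fee (not stated on this page) gives only a ballpark figure to compare against your budget.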
Your personal administrator account in iThenticate v2 Once Turnitin has enabled iThenticate v2 for your organization, the main editorial contact provided on your application form will become the iThenticate account administrator.\nYou will receive an email from Turnitin with a link to set your credentials. The email will look like this:\nClick on the blue ‘Set up my account’ button at the bottom of the email. This will bring you to a page which looks something like this:\nFill out your username and password, and don’t forget to tick to agree to the terms and conditions. You will then arrive at your new iThenticate v2 account.\nHow do you know if you’re an account administrator? When you are logged in to iThenticate, what tabs can you see?\nIf you\u0026rsquo;re using iThenticate v2, you will only be able to see Users on the menu if you\u0026rsquo;re an account administrator.\nShow image × So if you can\u0026rsquo;t see Manage Users or Users, you’re not an account administrator, and you can just read the user instructions for iThenticate v2 on the Turnitin website.\nUpdating your email address, username or password in the future If you need to change your personal email address, username or password in the future, you can find instructions on the Turnitin website.\nUpdating your email address or username Changing your password Forgot password? If you forgot your password and have never signed into your new v2 account, you\u0026rsquo;ll need to reach out directly to Turnitin\u0026rsquo;s support to have your password resent to you from Turnitin.\nIf you\u0026rsquo;ve already signed into your v2 account, but can\u0026rsquo;t remember your password, you can simply use the Forgot Password link on the login screen of your unique v2 website (https://crossref-xxx.turnitin.com, with xxx being your member ID).\n", "headings": ["Your personal administrator account in iThenticate v2","How do you know if you’re an account administrator?","Updating your email address, username or password in the future","Forgot password?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-account-setup/administrator-checklist/", "title": "Administrator checklist", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "As an administrator, you create and manage the users on your account, and you decide how your organization uses the iThenticate tool. You’ll find the system easier to use if you set it up correctly to start with, so do read through the checklist below carefully and make sure you\u0026rsquo;ve set up your account how you want it before inviting any users to your account.\nThis section is for Similarity Check account administrators using iThenticate v2 through the browser.", "content": "As an administrator, you create and manage the users on your account, and you decide how your organization uses the iThenticate tool. You’ll find the system easier to use if you set it up correctly to start with, so do read through the checklist below carefully and make sure you\u0026rsquo;ve set up your account how you want it before inviting any users to your account.\nThis section is for Similarity Check account administrators using iThenticate v2 through the browser.\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration. 
Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. As an administrator, you decide how your organization uses the iThenticate tool. You’ll find the system easier to use if you set it up correctly to start with, so do read through the checklist below carefully and make sure you\u0026rsquo;ve set up your account how you want it before inviting any users to your account.\nDecide how to manage your users and folders Decide on your exclusions Decide if you want to use the Submitted Works repository (or Private Repository) Decide how you\u0026rsquo;ll budget for your document checking fees Make sure you stay eligible for the Similarity Check service. 1. How do you want to manage your users and folders? How many users do you need to set up? Do you need them to have different permissions or access to different folders? Do you want users to be able to see each other’s folders? Do you want to be the only account administrator, or do you want to add other administrators? If you set up different folders in iThenticate to manage the manuscripts you’re checking, you’ll be able to assign different users to each folder. For example, you may choose to set up different folders for different titles or years of publication. Learn more about how to manage users and folders on the Turnitin website.\n2. Decide on your exclusions You can decide to exclude preprints, certain websites, or even specific sections of text. We recommend starting without any exclusions to avoid excluding anything important. Once your users are experienced enough to identify words and phrases that appear frequently but are not potentially problematic matches (and can therefore be ignored) in a Similarity Report, you can start carefully making use of this feature.\nFind out more in the Exclusions section.\n3. Decide if you want to use the Submitted Works repository (or Private Repository) The Submitted Works repository (or Private Repository) allows users to find similarity not just across Turnitin’s extensive Content Database but also across all previous manuscripts submitted to your iThenticate account for all the journals you work on. This would allow you to find collusion between authors or potential cases of duplicate submissions but it also means you could share sensitive data between users, so you need to think very carefully about how you will use this feature. Find out more.\n4. Decide how you\u0026rsquo;ll budget for your document checking fees There’s a charge for each document checked, and you’ll receive an invoice in January each year for the documents you’ve checked in the previous year. If you\u0026rsquo;re a member of Crossref through a Sponsor, your Sponsor will receive this invoice.\nAs well as setting a Similarity Check document fees budget for your account each year, it’s useful to monitor document checking and see if you’re on track. You can monitor your usage in your Statistics section. Ask yourself:\nHow many documents do you plan to check? How often do you want to monitor usage? Set yourself a reminder to check your Statistics periodically. It’s a good idea to come back to these questions periodically, consider how your use of the tool is evolving, and make changes accordingly.\n5. 
Make sure you can stay eligible for the Similarity Check service Your organization gets reduced rate access to the iThenticate tool through the Similarity Check service because you make your own published content available to be indexed into the iThenticate database. You do this by providing full text URLs specifically for this service in the metadata that you register with Crossref. Talk to your colleagues who are responsible for registering your DOIs with Crossref, and make sure that they continue to include full text URLs for Similarity Check in the metadata they register with us.\n", "headings": ["1. How do you want to manage your users and folders? ","2. Decide on your exclusions ","3. Decide if you want to use the Submitted Works repository (or Private Repository) ","4. Decide how you\u0026rsquo;ll budget for your document checking fees ","5. Make sure you can stay eligible for the Similarity Check service "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-account-setup/exclusions/", "title": "Exclusions", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators using iThenticate v2 through the browser, and describes how you can manage exclusions within your account settings..\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator?", "content": "This section is for Similarity Check account administrators using iThenticate v2 through the browser, and describes how you can manage exclusions within your account settings..\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. Exclusions If you want to exclude items from your Similarity Report results, you can do this by clicking on Settings in the left hand menu in iThenticate v2 homepage. There are two tabs where you can change different items - one is labelled Crossref Web, and the other is labelled Web and API. Here are the various items you can exclude.\nPreprint Label and Exclusions iThenticate v2 introduces a new feature which will automatically identify preprint sources within your Similarity Report. This will allow you to easily identify preprints so your editors can make a quick decision as to whether to investigate this source further or exclude it from the report.\nIn order to start using this feature you will need to configure it within the iThenticate settings by logging directly into your iThenticate account. You can find instructions on how to configure this feature in Turnitin\u0026rsquo;s help documentation.\nYou also have the option to automatically exclude all preprint sources from reports. 
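Looping back to the eligibility point above: the full-text URLs that keep you eligible for Similarity Check are deposited as part of the metadata you register with Crossref. The sketch below builds the kind of as-crawled URL block commonly used for this purpose; the element and attribute names follow the usual Crossref deposit convention for Similarity Check URLs but are stated here from memory rather than taken from this page, so confirm them against the current Crossref schema documentation before depositing.

```python
# A minimal sketch (not an official Crossref example) of the as-crawled
# full-text URL block that Similarity Check eligibility relies on. The element
# and attribute names are assumptions based on the common deposit convention;
# check the current Crossref schema before using them in a real deposit.
import xml.etree.ElementTree as ET

def similarity_check_collection(full_text_url: str) -> ET.Element:
    """Build a <collection property="crawler-based"> block pointing the
    iThenticate crawler ("iParadigms") at the full-text URL for one item."""
    collection = ET.Element("collection", {"property": "crawler-based"})
    item = ET.SubElement(collection, "item", {"crawler": "iParadigms"})
    resource = ET.SubElement(item, "resource")
    resource.text = full_text_url
    return collection

# Example: this block would sit inside the <doi_data> element of a deposit.
print(ET.tostring(similarity_check_collection(
    "https://www.example.com/articles/fulltext.pdf"),
    encoding="unicode"))
```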
All excluded preprints will still be available within the Similarity Exclusions panel of your Similarity Report and can be reincluded in the report.\nFurther details of how preprints appear within the Similarity Report can be found in Turnitin\u0026rsquo;s help documentation .\nHere’s more information about things to consider when you find a match to a preprint in your Similarity Report.\nWebsite Exclusions The Website Exclusions setting will allow you to automatically exclude all matches to specific websites. Instructions on how to turn on and configure this feature can be found in Turnitin\u0026rsquo;s help documentation.\nThis feature will only exclude matches in the Internet repository. If a journal website is added to the list of excluded websites then all pages which have been crawled and indexed into Turnitin’s Internet repository will be excluded. However, journal articles from that journal which appear in the Crossref repository will not be excluded.\nThis feature will apply to all submissions made to the iThenticate account; including all web submissions and submissions made through any integration.\nAll excluded matches will still be available within the Similarity Exclusions panel of your Similarity Report and can be reincluded in the report. Further details of how these exclusions will appear can be found in Turnitin\u0026rsquo;s help documentation.\nCustomized Exclusions A new feature in iThenticate v2 is Customized Exclusions. The Customized Exclusions setting allows administrators to create sections of text that can be excluded from the Similarity Report. Administrators can tailor these keywords and phrases to best meet the needs of their organization (for example, ‘Further Acknowledgments’).\nMore details of how to turn on and configure this feature can be found in Turnitin\u0026rsquo;s help documentation.\n", "headings": ["Exclusions","Preprint Label and Exclusions","Website Exclusions","Customized Exclusions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-account-setup/private-repository/", "title": "Private repository", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators using iThenticate v2 through the browser.\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. The Submitted Works repository (or Private Repository) is a new feature in iThenticate v2 which is now available to Crossref members.", "content": "This section is for Similarity Check account administrators using iThenticate v2 through the browser.\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. The Submitted Works repository (or Private Repository) is a new feature in iThenticate v2 which is now available to Crossref members. 
This feature allows users to find similarity not just across Turnitin’s extensive Content Database but also across all previous manuscripts submitted to your iThenticate account for all the journals you work on. This would allow you to find collusion between authors or potential cases of duplicate submissions.\nHow does this work? You have received a manuscript from Author 1 and have decided to index this manuscript into your Submitted Works repository. At some point later you receive a new manuscript from Author 2. When generating your Similarity Report, you have decided to check against your Submitted Works repository. There is a paragraph in the manuscript from Author 2 which matches a paragraph in the manuscript from Author 1. This would be highlighted within your Similarity Report as a match against your Submitted Works repository.\nBy clicking on this match you can see the full text of the submission you’ve matched against:\nAnd details about the submission, such as the name and email address of the user who submitted it, the date it was submitted and the title of the submission:\nThe ability to see the full source text and the details can both be switched off individually.\nAs with all matches, they can be excluded from the Sources Overview panel or you can turn off matches against all Submitted Works from the settings:\nSetting up the Submitted Works repository There are instructions on how you can set up your Submitted Works settings in Turnitin\u0026rsquo;s help documentation.\nSubmitted Works repository FAQs Q. How much does this feature cost to use?\nThis feature comes free with every v2 account.\nQ. How many submissions can I index to my private repository?\nThere is no limit to the number of submissions you can index.\nQ. Can I delete submissions from my private repository?\nYes. An Administrator can find and delete a submission using the Paper Lookup Tool. Go to Turnitin\u0026rsquo;s help documentation for more information.\n", "headings": ["How does this work?","Setting up the Submitted Works repository","Submitted Works repository FAQs","Q. How much does this feature cost to use?\n","Q. How many submissions can I index to my private repository?\n","Q. Can I delete submissions from my private repository?\n"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-account-setup/manage-users/", "title": "Manage users and folders", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators using iThenticate v2 through the browser.\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. Setting up users You can find instructions on how to set up users here.", "content": "This section is for Similarity Check account administrators using iThenticate v2 through the browser.\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Integrating iThenticate v2 with your Manuscript Submission System (MTS) instead? Go to setting up your MTS integration. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? 
Find out here. Setting up users You can find instructions on how to set up users here.\nUser groups Users can be added to a User Group in order to help facilitate collaboration. Instructions on how to set up new User Groups can be found here.\nUser settings Once a user has been set up they can change exclusion settings in the Similarity report which will apply to all submissions they make. Users should be encouraged to review their settings before making any new submissions. Instructions on how to update these settings can be found here.\nMoving users from iThenticate v1 to iThenticate v2 If you have recently upgraded to iThenticate v2, you can move your existing users from iThenticate v1 over to your new iThenticate v2 account. See our upgrade FAQs for more information.\nShared folders Shared Folders can be created when accessing iThenticate through the browser in order to help collaboration between users. Often a Shared Folder will be created for each journal so that all submissions sent to that folder can be accessed by each user from that journal. Folders can be shared either with individual users or with whole User Groups.\nInstructions on how to share a folder can be found in Turnitin\u0026rsquo;s help documentation.\n", "headings": ["Setting up users","User groups","User settings","Moving users from iThenticate v1 to iThenticate v2","Shared folders"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-mts-account-setup/", "title": "Setting up your iThenticate v2 account MTS integration (admins only)", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS). It explains how administrators need to set up the iThenticate v2 account for their organizations in order to integrate with their MTS.\nIf you are using iThenticate v1 rather than iThenticate v2, take a look at the section for v1 account administrators. If you intend to use iThenticate v2 directly in the browser (and not through an integration with your Manuscript Submission System (MTS) please skip to the section on setting up iThenticate v2 for browser users for iThenticate administrators.", "content": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS). It explains how administrators need to set up the iThenticate v2 account for their organizations in order to integrate with their MTS.\nIf you are using iThenticate v1 rather than iThenticate v2, take a look at the section for v1 account administrators. If you intend to use iThenticate v2 directly in the browser (and not through an integration with your Manuscript Submission System (MTS) please skip to the section on setting up iThenticate v2 for browser users for iThenticate administrators. Your personal administrator account in iThenticate v2 Once Turnitin has enabled iThenticate v2 for your organization, the main editorial contact provided on your application form will become the iThenticate account administrator.\nYou will receive an email from Turnitin with a link to set your credentials. The email will look like this:\nClick on the blue ‘Set up my account’ button at the bottom of the email. 
This will bring you to a page which looks something like this:\nFill out your username and password, and don’t forget to tick to agree to the terms and conditions. You will then arrive at your new iThenticate v2 account.\nHow do you know if you’re an account administrator? When you are logged in to iThenticate, what tabs can you see?\nIf you\u0026rsquo;re using iThenticate v2, you will only be able to see Users on the menu if you\u0026rsquo;re an account administrator.\nSo if you can\u0026rsquo;t see Manage Users or Users, you’re not an account administrator, and you can just read the user instructions for iThenticate v2 on the Turnitin website.\nUpdating your email address, username or password in the future If you need to change your personal email address, username or password in the future, you can find instructions on the Turnitin website.\nUpdating your email address or username Changing your password Forgot password? If you forgot your password and have never signed into your new v2 account, you\u0026rsquo;ll need to reach out to Turnitin\u0026rsquo;s support to have your password resent to you from Turnitin.\nIf you\u0026rsquo;ve already signed into your v2 account, but can\u0026rsquo;t remember your password, you can simply use the Forgot Password link on the login screen of your unique v2 website (https://crossref-xxx.turnitin.com, with xxx being your member ID).\n", "headings": ["Your personal administrator account in iThenticate v2","How do you know if you’re an account administrator?","Updating your email address, username or password in the future","Forgot password?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-mts-account-setup/administrator-checklist/", "title": "Administrator checklist", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here.", "content": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. As an administrator, you decide how your organization uses the iThenticate tool.
You’ll find the system easier to use if you set it up correctly to start with, so do read through the checklist below carefully and make sure you\u0026rsquo;ve set up your account as you want it before you start checking your manuscripts.\nSet up your integration with your MTS Decide on your exclusions DO NOT set up users or folders Decide if you want to use the Submitted Works repository (or Private Repository) Decide how you\u0026rsquo;ll budget for your document checking fees Make sure you stay eligible for the Similarity Check service 1. Set up your integration with your MTS To set up your integration, you need to create an API key by logging into iThenticate through the browser. You will then share this API key and the URL of your iThenticate v2 account with your MTS. Find out more.\nIf you are using OJS and want to use the iThenticate plugin, you\u0026rsquo;ll need to ensure that you\u0026rsquo;re on OJS version 3.3, 3.4, or 3.5. For instructions on how to upgrade your OJS instance, please visit PKP\u0026rsquo;s documentation here or here, depending on which version you\u0026rsquo;re currently running.\n2. Decide on your exclusions You can decide to exclude preprints, certain websites, or even specific sections of text. We recommend starting without any exclusions to avoid excluding anything important. Once your users are experienced enough to identify words and phrases that appear frequently but are not potentially problematic matches (and can therefore be ignored) in a Similarity Report, you can start carefully making use of this feature.\nFind out more in the Exclusions section.\n3. DO NOT set up users or folders If you are integrating iThenticate v2 with your MTS, you don\u0026rsquo;t need to set up more users or folders in your iThenticate - everything will be managed from your MTS. Find out more.\n4. Decide if you want to use the Submitted Works repository (or Private Repository) The Submitted Works repository (or Private Repository) allows users to find similarity not just across Turnitin’s extensive Content Database but also across all previous manuscripts submitted to your iThenticate account for all the journals you work on. This would allow you to find collusion between authors or potential cases of duplicate submissions but it also means you could share sensitive data between users, so you need to think very carefully about how you will use this feature. This feature is currently only available to those integrating with ScholarOne Manuscripts. Find out more.\n5. Decide how you\u0026rsquo;ll budget for your document checking fees There’s a charge for each document checked, and you’ll receive an invoice in January each year for the documents you’ve checked in the previous year. If you\u0026rsquo;re a member of Crossref through a Sponsor, your Sponsor will receive this invoice.\nAs well as setting a Similarity Check document fees budget for your account each year, it’s useful to monitor document checking and see if you’re on track. You can monitor your usage in your Statistics section. Ask yourself:\nHow many documents do you plan to check? How often do you want to monitor usage? Set yourself a reminder to check your Statistics periodically. It’s a good idea to come back to these questions periodically, consider how your use of the tool is evolving, and make changes accordingly.\n6. 
Make sure you can stay eligible for the Similarity Check service Your organization gets reduced rate access to the iThenticate tool through the Similarity Check service because you make your own published content available to be indexed into the iThenticate database. You do this by providing full text URLs specifically for this service in the metadata that you register with Crossref. Talk to your colleagues who are responsible for registering your DOIs with Crossref, and make sure that they continue to include full text URLs for Similarity Check in the metadata they register with us.\n", "headings": ["1. Set up your integration with your MTS ","2. Decide on your exclusions ","3. DO NOT set up users or folders ","4. Decide if you want to use the Submitted Works repository (or Private Repository) ","5. Decide how you\u0026rsquo;ll budget for your document checking fees ","6. Make sure you can stay eligible for the Similarity Check service "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-mts-account-setup/api-key/", "title": "Setting up your MTS integration with an API key", "subtitle":"", "rank": 4, "lastmod": "2022-07-18", "lastmod_ts": 1658102400, "section": "Documentation", "tags": [], "description": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here.", "content": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. To set up your integration, you need to create an API key by logging into iThenticate through the browser. You will then share this API key and the URL of your iThenticate v2 account with your MTS.\nStep One: Decide how many API scopes and API keys you need Within iThenticate, you can set up different API Scopes, and within that, different API keys. Most members will just need one API Scope and one API key. However, some members may need more than one.\nIf you need to integrate with more than one Manuscript Tracking System (MTS), you will need a different API Scope for each MTS. If you publish on behalf of societies or work with other organizations who want to keep their activities separate from each other, you will need a different API Scope and API key for each society. If at some point in the future, you need to change your API key for an existing MTS integration, you must generate a new API key under the same scope that you originally used for this integration. Step Two: Create your API Scope and API key(s) Click on “Integrations” in the menu.\nThis will bring you to the Integrations section.
Click on the “Generate API Scope” key.\nYou will then give your API Scope a name.\nFor example, this may be the name of a particular MTS, or of a particular society.\nUnder your new API Scope, you can then set up your first API key.\nOnce you add the key name, you will be able to click on the “Create and view” button. The system will then generate your key.\nStep three: Add your API key into your Manuscript Tracking System (MTS) In order to integrate your new iThenticate v2 account and your Manuscript Tracking system(s), your MTS will require from you:\nAt least one API key Your unique iThenticate URL containing your Crossref membership number using the following format: https://crossref-xxx.turnitin.com. (For example, if your Crossref Membership number is 1234, your URL will be: https://crossref-1234.turnitin.com. If you are not sure what your Crossref Membership number is, please ask us. Follow the instructions below for the relevant MTS:\nOJS Follow the instructions found on PKP\u0026rsquo;s website. You\u0026rsquo;ll need to ensure that you\u0026rsquo;re on OJS version 3.3, 3.4, or 3.5. For instructions on how to upgrade your OJS instance, please visit PKP\u0026rsquo;s documentation here or here, depending on which version you\u0026rsquo;re currently running. Editorial Manager Enter your iThenticate API key(s) and your iThenticate v2 account URL into the iThenticate configuration page in Editorial Manager. There are instructions available from Aries Systems here. eJournal Press Email your API key(s) and your iThenticate v2 account URL to support@ejpress.com and the team at eJournal Press will set up the integration for you. ScholarOne If you are already using iThenticate with ScholarOne and are upgrading from iThenticate v1 to iThenticate v2, please email your API key(s) and your iThenticate v2 account URL to s1help@clarivate.com, and the team at ScholarOne will make the change for you. Please put “Product Management” in the subject line of your email. If you are a new subscriber to Similarity Check and you haven’t used iThenticate before, you don\u0026rsquo;t need to email the team at ScholarOne. Just enter your iThenticate API key(s) and your iThenticate v2 account URL into the iThenticate configuration page in ScholarOne. Scholastica The team at Scholastica will set up the integration for you. Give them your API key(s) and your iThenticate v2 account URL by filling out this form. The team at Scholastica will also set up any exclusions for you, so in the form they\u0026rsquo;ll ask you which sort of content you want to exclude from displaying as a match. ", "headings": ["Step One: Decide how many API scopes and API keys you need","Step Two: Create your API Scope and API key(s)","Step three: Add your API key into your Manuscript Tracking System (MTS)","OJS","Editorial Manager","eJournal Press","ScholarOne","Scholastica"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-mts-account-setup/exclusions/", "title": "Exclusions", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS). This section is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS)\nUsing iThenticate v1 instead? Go to the v1 account administrators section. 
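To make Step Three above concrete, here is a minimal sketch of the two values your MTS asks for: the bespoke account URL derived from your Crossref member ID, and an API key generated under an API Scope. The Bearer-style header is an assumption about how a typical REST integration would present the key; in practice your MTS constructs and sends the requests for you.

```python
# Sketch of the two values an MTS integration needs (see Step Three above):
# the iThenticate v2 account URL derived from your Crossref member ID, and an
# API key generated under an API Scope. The Authorization header is an
# illustrative assumption, not a documented requirement from this page.

def ithenticate_account_url(crossref_member_id: int) -> str:
    """Return the bespoke iThenticate v2 URL, e.g. member 1234 ->
    https://crossref-1234.turnitin.com"""
    return f"https://crossref-{crossref_member_id}.turnitin.com"

api_key = "REPLACE-WITH-THE-KEY-FROM-YOUR-API-SCOPE"  # hypothetical placeholder
base_url = ithenticate_account_url(1234)

# What an integration would typically send with each request (illustrative only).
headers = {"Authorization": f"Bearer {api_key}"}
print(base_url, headers)
```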
Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2?", "content": "This section of our documentation is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. Exclusions If you are integrating iThenticate v2 with your MTS, there are some exclusions that you need to set directly in the iThenticate account in the browser before you configure your settings in your MTS integration.\nWithin iThenticate, you need to click on Settings in the left hand menu in iThenticate v2 homepage and go to the Web and API tab. Here are the various items you can exclude.\nPreprint Label and Exclusions iThenticate v2 can automatically identify preprint sources within your Similarity Report. This will allow you to easily identify preprints so your editors can make a quick decision as to whether to investigate this source further or exclude it from the report.\nIn order to start using this feature you will need to configure it within the iThenticate settings by logging directly into your iThenticate account. You can find instructions on how to configure this feature in Turnitin\u0026rsquo;s help documentation.\nYou also have the option to automatically exclude all preprint sources from reports. All excluded preprints will still be available within the Similarity Exclusions panel of your Similarity Report and can be reincluded in the report.\nOnce this is done, you also need to edit your integration configuration within your Manuscript Submission System to exclude preprints.\nFurther details of how preprints appear within the Similarity Report can be found in Turnitin\u0026rsquo;s help documentation.\nHere’s more information about things to consider when you find a match to a preprint in your Similarity Report.\nWebsite Exclusions The Website Exclusions setting allows you to automatically exclude all matches to specific websites. Instructions on how to turn on and configure this feature can be found in Turnitin\u0026rsquo;s help documentation.\nThis feature will only exclude matches in the Internet repository. If a journal website is added to the list of excluded websites then all pages which have been crawled and indexed into Turnitin’s Internet repository will be excluded. However, journal articles from that journal which appear in the Crossref repository will not be excluded.\nThis feature will apply to all submissions made to the iThenticate account, including all web submissions and submissions made through any integration.\nAll excluded matches will still be available within the Similarity Exclusions panel of your Similarity Report and can be reincluded in the report. Further details of how these exclusions will appear can be found in Turnitin\u0026rsquo;s help documentation.\nCustomized Exclusions A feature in iThenticate v2 is Customized Exclusions.
The Customized Exclusions setting allows administrators to create sections of text that can be excluded from the Similarity Report. Administrators can tailor these keywords and phrases to best meet the needs of their organization (for example, ‘Further Acknowledgments’).\nStart by configuring this feature directly in your iThenticate account in the browser . More details can be found in Turnitin\u0026rsquo;s help documentation. Once this is done, you then need to edit the integration configuration in your MTS to exclude customized sections.\n", "headings": ["Exclusions","Preprint Label and Exclusions","Website Exclusions","Customized Exclusions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-mts-account-setup/private-repository/", "title": "Private repository", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. Private Repository - ScholarOne only The Submitted Works repository (or Private Repository) is a new feature in iThenticate v2.", "content": "This section is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS).\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. Private Repository - ScholarOne only The Submitted Works repository (or Private Repository) is a new feature in iThenticate v2. The only MTS that currently integrates with this feature is ScholarOne. This feature allows users to find similarity not just across Turnitin’s extensive Content Database but also across all previous manuscripts submitted to your iThenticate account for all the journals you work on. This would allow you to find collusion between authors or potential cases of duplicate submissions.\nHow does this work? You have received a manuscript from Author 1 and have decided to index this manuscript into your Submitted Works repository. At some point later you receive a new manuscript from Author 2. When generating your Similarity Report, you have decided to check against your Submitted Works repository. There is a paragraph in the manuscript from Author 2 which matches a paragraph in the manuscript from Author 1. 
This would be highlighted within your Similarity Report as a match against your Submitted Works repository.\nBy clicking on this match you can see the full text of the submission you’ve matched against:\nAnd details about the submission, such as the name and email address of the user who submitted it, the date it was submitted and the title of the submission:\nThe ability to see the full source text and the details can both be switched off individually.\nSetting up the Submitted Works repository If you are using a third party integration then you should have options inside your MTS when setting up your configuration with iThenticate to decide whether submissions will be indexed to the Submitted Works repository and whether generated Similarity Reports will match against the Submitted Works.\nImportant: This feature means that sensitive data could be shared between different journals using your iThenticate account The Submitted Works repository is shared across your entire iThenticate account. This means regardless of whether a submission was made natively from the iThenticate website or through an integration, all Similarity Reports which match against the Submitted Works repository will potentially match against any submissions which were indexed within it. This means that an editor working on one journal may be able to view submissions for another journal. If you are worried about giving your users access to sensitive data, we recommend switching this functionality off.\nSubmitted Works repository FAQs Q. How much does this feature cost to use?\nThis feature comes free with every v2 account.\nQ. How many submissions can I index to my private repository?\nThere is no limit to the number of submissions you can index.\nQ. Can I delete submissions from my private repository?\nYes. An Administrator can find and delete a submission using the Paper Lookup Tool. Go to Turnitin\u0026rsquo;s help documentation for more information.\n", "headings": ["Private Repository - ScholarOne only","How does this work?","Setting up the Submitted Works repository","Important: This feature means that sensitive data could be shared between different journals using your iThenticate account","Submitted Works repository FAQs","Q. How much does this feature cost to use?\n","Q. How many submissions can I index to my private repository?\n","Q. Can I delete submissions from my private repository?\n"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticatev2-mts-account-setup/manage-users/", "title": "Manage users and folders", "subtitle":"", "rank": 4, "lastmod": "2022-07-18", "lastmod_ts": 1658102400, "section": "Documentation", "tags": [], "description": "This section is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS)\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. 
No need to create users with an integration with an MTS If you are using iThenticate through an integration with an MTS, then you do not need to set up any other users on your new iThenticate account.", "content": "This section is for Similarity Check account administrators who are integrating iThenticate v2 with their Manuscript Submission System (MTS)\nUsing iThenticate v1 instead? Go to the v1 account administrators section. Using iThenticate v2 through a browser instead? Go to setting up iThenticate v2 for browser users. Not sure if you\u0026rsquo;re using iThenticate v1 or iThenticate v2? More here. Not sure whether you\u0026rsquo;re an account administrator? Find out here. No need to create users with an integration with an MTS If you are using iThenticate through an integration with an MTS, then you do not need to set up any other users on your new iThenticate account. This is because all the submissions from your MTS will be made by the API key you’ve set, rather than individual users. The only person who will need credentials for the iThenticate account is the administrator.\nNo need to create folders with an integration with an MTS If you previously used iThenticate v1, you might be used to creating folders in iThenticate to integrate with your MTS. However, you no longer need to create folders. Everything will be handled through the integration panel in your MTS.\n", "headings": ["No need to create users with an integration with an MTS","No need to create folders with an integration with an MTS"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/", "title": "Using your iThenticate account", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 using your iThenticate account\nv1 using your iThenticate account, keep reading:\nWelcome to your Similarity Check user account!\nWhen your organization signs up for Similarity Check, a central contact at your organization will become your Similarity Check account administrator.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 using your iThenticate account\nv1 using your iThenticate account, keep reading:\nWelcome to your Similarity Check user account!\nWhen your organization signs up for Similarity Check, a central contact at your organization will become your Similarity Check account administrator. They will set up all the users on your account.\nWhen your administrator adds you as a user, you’ll receive an email from noreply@ithenticate.com with the subject line “Account Created” containing a username and a single-use password. 
You may only log in once with the single-use password, and you must change it the first time you log in.\nLog in to your user account (v1) Start from the link in the invitation email from noreply@ithenticate.com with the subject line “Account Created” and click Login Enter your username and single-use password Click to agree to the terms of the end-user license agreement. These terms govern your personal use of the service. They’re separate from the central Similarity Check service agreement that your organization has agreed to. You will be prompted to choose a new password Click ​Change Password​ to save. Your user account profile (v1) Manage your user account using the Account Info tab. Learn more about your Similarity Check user account.\nReset your password (v1) Start from iThenticate Click Login, then click Forgot Password. Enter your email address and click Submit You\u0026rsquo;ll receive a password reset link by email. Click the link in the email, choose a new password, and click Reset Password. Change your email address or password (v1) Start from iThenticate Enter your username and password Go to Profile. To change your email address, remove your current address from the ​Email​ field, enter your new email in the same field, and click ​Update Profile​ to save. To change your password, enter a new password in the Change Password field, repeat it in the Confirm Password field, and click ​Update Profile​ to save. Find your way around for users (v1) In the main navigation bar at the top of the screen, you will see several tabs:\nFolders: this is the main area of iThenticate, where you upload, manage, and view documents for checking Settings: configure the iThenticate interface Account Info: manage your account, including your user profile and account usage Show image × ", "headings": ["Log in to your user account (v1) ","Your user account profile (v1) ","Reset your password (v1) ","Change your email address or password (v1) ","Find your way around for users (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/folders/", "title": "Folders", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Folders\nv1 Folders, keep reading:\nThe Folders page contains the main functionality of iThenticate, the service which powers Crossref Similarity Check.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Folders\nv1 Folders, keep reading:\nThe Folders page contains the main functionality of iThenticate, the service which powers Crossref Similarity Check. 
It is where folders are created, browsed and shared with other users, where documents are submitted within a folder to be checked against the iThenticate database for similarity, and where documents can be deleted or moved from one folder to another.\nStart from iThenticate, and log in.\nOn this page, learn more about how to:\nCreate a new folder Exclusions Repositories to search against Save your new folder Create a new folder group Organize folders Move folders Delete folders Share folders Edit folder settings Automatic exclusion of bibliography sections Automatic exclusion of quotations Create a new folder (v1) Look for the New Folder section on the right of the screen, and click New Folder\nShow image × On the Create A New Folder page, use Folder Groups to specify the group where you’d like to save your new folder, then enter a name in Folder Name.\nShow image × Exclusions (v1) You can choose to exclude certain text from the Similarity Check for all documents uploaded into this folder. Learn more about filters and exclusions within the Similarity Report, and URL filters and phrase exclusions for account administrators.\nUse the relevant tick boxes to exclude quotes, bibliography, certain phrases (set these under Account Info), small matches, and small sources.\nShow image × To exclude small matches, you set an exclusion threshold. Any match with fewer words than the threshold will be excluded from the Similarity Check. This affects the Match Overview view in Document Viewer. Modify this option from within Document Viewer.\nShow image × To exclude small sources, you set a word count or a percentage exclusion threshold. Any matches with fewer words, or lower than a certain percentage matched will be excluded from the Similarity Check. This affects the All Sources view in Document Viewer. Modify this option from within Document Viewer.\nShow image × Think carefully about using percentage thresholds if you are working with large documents, where a set percentage of 1% may exclude very large matches/sources. For example, 1% of a 100-page document is one full page.\nThe exclude sections option allows you to exclude longer abstracts or methods and materials sections from being picked up by the Similarity Check.\nShow image × Please be aware that section exclusion may not work properly if documents contain:\nWatermarks Unevenly spaced line numbering Sub-headings that are indistinguishable from the Methods and Materials heading Abstract or Methods and Materials section appearing within a table Section headings and body text using the same font, font size, and font treatment Repositories to search against (v1) Choose which collections to include in the Similarity Check. 
Here are the currently available repositories:\nCrossref - research articles, books, and conference proceedings provided by publishers of scholarly content all over the world Crossref posted content - preprints, eprints, working papers, reports, dissertations, and many other types of content that has not been formally published but has been registered with Crossref Internet - a database of archived and live publicly-available web pages, including billions of pages of existing content, and with tens of thousands of new pages added each day Publications - third-party periodical, journal, and publication content including many major professional journals, periodicals, and business publications Show image × To buy the option to create a customizable database source with your own content to submit to and search against, please contact sales@ithenticate.com.\nSave your new folder (v1) Once you are satisfied with the changes you’ve made, click Create at the bottom of the form to create your new folder.\nCreate a new folder group (v1) Start from the New folder section to the right of the page, and click New Folder Group.\nShow image × On the Create A New Folder Group screen, name your new folder group, and click Create.\nNow you have an empty folder group. To add a folder to this folder group, click Create a folder. To delete an empty folder group, click Remove this empty group.\nOrganize folders (v1) Folders in the folder group are shown in alphabetical order. To see a folder group’s content, go to the My Folders section on the left, and click My Folders.\nShow image × You can choose to organize the folders within a folder group by title, or by date processed:\nTo sort the folders by title, click the Title header in the title column. A down arrow shows that the folders have been arranged in alphabetical order. Click the down arrow again to put the folders in reverse alphabetical order. To sort the folders by date created, click the Date Created header in the date created column. A down arrow shows that the folders have been arranged by date created, with the most recent first (reverse chronological order). Click the down arrow again to put the folders in chronological order. Show image × Move folders (v1) To move folders to another folder group, go to the folder group containing the folders you wish to move. Click the tick box beside the folder you want to move. From the drop-down menu, use Move selected to\u0026hellip; to choose the destination folder group, and click Move.\nShow image × The drop-down menu will not show unless you have created other folders to make it possible to move a document.\nDelete folders (v1) Start from the My Folders side menu, and hover over the folder you wish to delete. Click the trash can icon to move the folder to the Trash folder group.\nShow image × To delete multiple folders, go to the folder group, and check the tick boxes for each folder you wish to delete. Click Trash in the menu bar above to move the folders to the Trash folder group.\nOnce a folder has been moved to the trash, you can review it before you delete it permanently. From the My Folders menu on the left, click the Trash folder group. In the trash, you can see all the folders you have moved here. To remove a folder from the trash, check its tick box, and use Move selected to\u0026hellip; to move the folder to another location.\nTo permanently delete a folder, check its tick box, and click Delete in the menu bar above. 
Once you have permanently deleted a folder from Trash, you will not be able to get it back.\nShare folders (v1) Depending on how your account administrator has set up sharing permissions, you may be able to (a) view only folders shared by other users, (b) view all users\u0026rsquo; folders, or (c) view folders of selected users. If you cannot automatically view others’ folders, use the sharing feature to share folders with other users within the same account.\nStart from the folder you want to share, and click the Sharing tab.\nShow image × You will see a list of users with whom you can share the folder. Check the box next to the users’ names, and click ​Update Sharing​. Sharing a folder with another user allows them to view the Similarity Report only. It does not allow them to submit a document to the folder.\nOnce a folder has been shared, there are two ways to unshare the folder:\nby the user who shared it: uncheck the box next to the user’s name, and click ​Update Sharing by the user with whom it is being shared: in the user’s directory, hover the cursor over the folder name, and an X icon will appear to the right of the folder name. Click the X icon to remove the shared folder. Account administrators can enable or disable sharing access based on the organization’s internal guidelines. If the sharing feature is disabled, users will not be able to view previously shared documents.\nEdit folder settings (v1) To customize a folder’s settings, use the Settings tab within the folder. Folder settings includes three tabs: Folder Options, Report Filters, and Phrase Exclusions.\nShow image × Use Folder Options to view and modify the options you chose when you created the folder. Use Report Filters to manage the list of URLs that are filtered out from comparison checking for that folder. Use Add URL to add a URL to be filtered, and click Add URL. The URL you add may be as specific or general as you wish, for example: http://example.com/ (don’t forget to include the trailing “/”) - to exclude an entire site http://example.com/docs/ - to exclude a specific directory http://example.com/docs/paper.pdf - to exclude a specific document To remove a URL, click the X icon to the right of the URL. Show image × Use Phrase Exclusions to add and remove phrases to exclude from comparison checking for every submission in this folder. To add a new phrase, click Add a new phrase, enter the phrase you wish to exclude in the Phrase text box, and click Create. If you don’t want to create a phrase to exclude, click Back to list to return to the Phrase Exclusions tab, or Back to folder to return to the folder view. 
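The Report Filters examples above read like increasingly specific URL prefixes: a trailing slash excludes a whole site or directory, while a full document URL excludes a single page. The toy sketch below illustrates that reading; it is an assumption about the matching behaviour, not Turnitin's actual rule.

```python
# Toy illustration of the URL filter examples above: a filter ending in "/"
# excludes everything under a site or directory, while a full document URL
# excludes just that one page. This is an assumption about the matching
# behaviour, not Turnitin's implementation.

def is_excluded(source_url: str, filters: list[str]) -> bool:
    for f in filters:
        if f.endswith("/"):                # site or directory filter
            if source_url.startswith(f):
                return True
        elif source_url == f:              # single-document filter
            return True
    return False

filters = ["http://example.com/docs/", "http://other.org/paper.pdf"]
print(is_excluded("http://example.com/docs/paper.pdf", filters))  # True
print(is_excluded("http://example.com/about.html", filters))      # False
```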
Show image × Automatic exclusion of bibliography sections (v1) iThenticate detects the following keywords and ignores any matches after the keyword:\nreference, references reference list reference cited, references cited reference and note, reference and notes references and note, references and notes reference \u0026amp; note, references \u0026amp; note reference \u0026amp; notes, references \u0026amp; notes references and further reading resource, resources resources directory bibliography bibliographic information works cited, work cited citations literature literature cited When it reaches any of the following words in the paper, it resumes the Similarity Check:\nappendix appendices glossary table tables acknowledgement, acknowledgements exhibits figure figures chart charts Automatic exclusion of quotations (v1) Supported marks: iThenticate recognizes these quotation marks and will ignore any matches that use them:\n\u0026ldquo;\u0026hellip;\u0026rdquo; «\u0026hellip;» »\u0026hellip;« „…“ 《\u0026hellip;》 〈\u0026hellip;〉 『\u0026hellip;』 Unsupported marks: iThenticate does not recognize these quotation marks and will flag any matches that use them:\n\u0026lsquo;\u0026hellip;\u0026rsquo; This applies even when (single) \u0026lsquo;quotes\u0026rsquo; appear within (double) \u0026ldquo;quotes\u0026rdquo;. For example:\n\u0026ldquo;This text would be excluded \u0026lsquo;but this text would not be excluded\u0026rsquo; \u0026ldquo;then this text would also be excluded.\u0026rdquo;\niThenticate will also exclude formatted block quotations (indented blocks of text) in .doc or .docx files.\n", "headings": ["Create a new folder (v1) ","Exclusions (v1) ","Repositories to search against (v1) ","Save your new folder (v1) ","Create a new folder group (v1) ","Organize folders (v1) ","Move folders (v1) ","Delete folders (v1) ","Share folders (v1) ","Edit folder settings (v1) ","Automatic exclusion of bibliography sections (v1) ","Automatic exclusion of quotations (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/documents-overview/", "title": "Documents overview", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Documents overview\nv1 Documents overview, keep reading:\nWithin a folder, the Documents tab shows all the submitted documents for that folder.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Documents overview\nv1 Documents overview, keep reading:\nWithin a folder, the Documents tab shows all the submitted documents for that folder.\nShow image × Each document submitted generates a Similarity Report after the document has been through the Similarity Check. 
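Referring back to the automatic bibliography exclusion described above: matching stops once a reference-style heading is reached and resumes at headings such as appendix or glossary. The sketch below is a toy illustration of that keyword rule using a few of the listed terms; it is not Turnitin's implementation.

```python
# Toy illustration of the keyword rule described above: ignore matches after a
# bibliography-style heading and resume at headings such as "appendix" or
# "glossary". Only a handful of the listed keywords are included here.
STOP_WORDS = {"references", "bibliography", "works cited", "literature cited"}
RESUME_WORDS = {"appendix", "appendices", "glossary", "tables", "figures"}

def checkable_lines(lines):
    """Yield only the lines that would still be checked for similarity."""
    in_bibliography = False
    for line in lines:
        heading = line.strip().lower()
        if heading in STOP_WORDS:
            in_bibliography = True
        elif heading in RESUME_WORDS:
            in_bibliography = False
        if not in_bibliography:
            yield line

doc = ["Introduction", "Some text.", "References", "Smith 2020.", "Appendix", "Extra data."]
print(list(checkable_lines(doc)))  # the reference list itself is skipped
```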
If more documents are present than can be displayed at once, the pages feature will appear beneath the documents - click the page number to display, or click Next to move to the next page of documents.\nYou can submit documents in three ways:\nupload a file - to submit a single file zip file upload - to submit a zip file containing multiple documents, up to a maximum of 100MB or 1,000 files. Larger files may take longer to upload cut \u0026amp; paste - to submit text directly into the submission box. Use this to copy and paste a submission from a file format that is not supported. This method supports plain text only (no images or non-text information) iThenticate currently accepts the following file types for document upload:\nMicrosoft Word® (.doc and .docx) Word XML plain text (.txt) Adobe PostScript® Portable Document Format (.pdf) HTML Corel WordPerfect® (.wpd) Rich Text Format (.rtf) Each file may not exceed 400 pages, and each file size may not exceed 100 MB. Reduce the size of larger files by removing non-text content. You can’t upload or submit to iThenticate files that are password-protected, encrypted, hidden, system files, or read-only.\n.pdf documents must contain text - if they contain only images of text, they will be rejected during the upload attempt. To check, copy and paste a section of the .pdf into a plain-text editor such as Microsoft Notepad® or Apple TextEdit®. If no text is copied over, the selection does not contain text.\nTo convert scanned images of a document, or an image saved as a .pdf, use Optical Character Recognition (OCR) software to convert the image to text. The conversion software can introduce errors, so manually check and correct the converted document.\nSome document formats can contain multiple data types, such as text, images, embedded information from another file, and formatting. Non-text information that is not saved directly within the document will not be included in a file upload, for example, references to a Microsoft Excel® spreadsheet included within a Microsoft Office Word® document.\nUse a word-processing program to save your file as one of the accepted types listed above, such as .rtf or .txt. Neither file type supports images or non-text data within the file. Plain text format does not support any formatting, and rich text format allows only limited formatting.\nWhen converting a file to a new format, save it with a different name from the original, to avoid accidentally overwriting the original file. This is especially important when converting to plain text or rich text formats, to prevent permanent loss of the original formatting or image content of the file.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/documents-submit/", "title": "Submit a document", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. 
If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Submit a document\nv1 Submit a document, keep reading:\nYou can submit a document by simple upload, zip file upload, or cut and paste.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Submit a document\nv1 Submit a document, keep reading:\nYou can submit a document by simple upload, zip file upload, or cut and paste. Once uploaded, you can edit the document information.\nUploading your file (v1) Upload a File allows you to submit a single document from a variety of document types. From the Submit a document menu, click Upload a File, and the Upload a file form opens.\nShow image × Under Destination Folder, choose the folder to which you wish to upload the file. Its Similarity Report will be added to the same folder. Complete Author First Name, Author Last Name, and Document Title fields. If Document Title is left blank, the document’s filename will be used. Click Choose File, and locate the file to upload. Use Add another file to add more files, up to a total of ten. Click Upload to proceed with uploading the selected document(s), or click Cancel to cancel the upload. Zip file upload (v1) iThenticate allows you to submit multiple documents from a variety of document types in a compressed zip file. The zip file may be up to approximately 100MB in size and contain up to 1,000 individual files. If the zip file exceeds either limit, it will be rejected. Check that your zip file contains only accepted file types, and no duplicate copies of the same file.\nShow image × Click Zip File Upload from the Submit a document menu. Choose your Destination Folder from the drop-down. The Similarity Report for the file will also be found here. The information you enter in the Author First Name and Author Last Name fields will be applied to all the documents in the zip file. You can manually change these once the document is uploaded to the folder. Click Choose file, locate the zip file on your device, and click Upload. The title of each document in the zip file will be the default title of each submission.\nCut and paste (v1) Use the cut and paste submission option to submit information from non-supported file types, or to submit only specific parts or areas of a document.\nOnly text can be submitted using this method - any graphics, graphs, images, and formatting are lost when pasting into the text submission box.\nShow image × Click Cut \u0026amp; Paste from the Submit a document menu. Choose your Destination Folder from the drop-down. The Similarity Report for the file will also be found here. Complete the Author First Name, Author Last Name, and Document Title fields. If no title is given, the default title “Pasted Document” will be used. Copy your desired text for checking, paste it into the Paste your document in the area below text box, and click Upload. To view recent uploads, go to the Submit a document menu, click Recent Uploads, and you will see recent uploads listed in reverse chronological order (most recent first).
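Along the same lines, a minimal sketch of a pre-flight check for a zip upload against the limits described above (approximately 100MB, up to 1,000 files, accepted file types only). The helper name check_zip_candidate and the accepted_extensions parameter are hypothetical; this is illustrative rather than anything iThenticate provides:

    import os
    import zipfile

    MAX_ZIP_BYTES = 100 * 1024 * 1024  # the zip file may be up to approximately 100 MB
    MAX_ZIP_ENTRIES = 1000             # and contain up to 1,000 individual files

    def check_zip_candidate(path, accepted_extensions):
        """Return a list of problems found before attempting a zip upload."""
        problems = []
        if os.path.getsize(path) > MAX_ZIP_BYTES:
            problems.append("archive is larger than 100 MB")
        with zipfile.ZipFile(path) as archive:
            names = [n for n in archive.namelist() if not n.endswith("/")]
            if len(names) > MAX_ZIP_ENTRIES:
                problems.append("archive contains more than 1,000 files")
            unsupported = [n for n in names
                           if os.path.splitext(n)[1].lower() not in accepted_extensions]
            if unsupported:
                problems.append("unsupported file types: " + ", ".join(unsupported))
        return problems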
Click the Date \u0026amp; Time header to see the uploads in chronological order (oldest first).\nEdit document information (v1) To edit a document’s information (title and author name), click the edit icon to the right of a document in a folder. You will see the Document Properties page. Edit the fields, and click Update to save your changes.\n", "headings": ["Uploading your file (v1) ","Zip file upload (v1) ","Cut and paste (v1) ","Edit document information (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/similarity-report-create/", "title": "Creating and finding your Similarity Report", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Creating and finding your Similarity Report\nv1 Creating and finding your Similarity Report, keep reading:\nFor each document you submit for checking, the Similarity Report provides an overall similarity breakdown.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Creating and finding your Similarity Report\nv1 Creating and finding your Similarity Report, keep reading:\nFor each document you submit for checking, the Similarity Report provides an overall similarity breakdown. This is displayed in the form of percentage of similarity between the document and existing published content in the iThenticate database. iThenticate’s repositories include the published content provided by Crossref members, plus billions of web pages (both current and archived content), work that has previously been submitted to Turnitin, and a collection of works including thousands of periodicals, journals, publications.\nMatches are highlighted, and the best matches are listed in the report sidebar. Other matches are called underlying sources, and these are listed in the content tracking mode. Learn more about the different viewing modes (Similarity Report mode, Content tracking mode, Summary report mode, Largest matches mode).\nIf two sources have exactly the same amount of matching text, the best match depends on which content repository contains the source of the match. For example, for two identical internet source matches, the most recently crawled internet source would be the best match. If an identical match is found to an internet source and a publication source, the publication source would be the best match.\nAccessing the Similarity Report (v1) To access the Similarity Report through iThenticate, start from the folder that contains the submission, and go to the Documents tab. In the Report column, you will see a button - click this Similarity Score to open the document in the Document Viewer.\nShow image × The Document Viewer (v1) The Document Viewer screen opens in the last used viewing mode. There are three sections:\nAlong the top of the screen, the document information bar shows details about the submitted document. 
This includes the document title, the date the report was processed, the word count and the number of matching sources found in the selected databases. The left panel is the document text. This shows the full text of the submitted document, highlighting areas of overlap with existing published content. The colors correspond to the matching sources, listed in the sources panel on the right. Show image × The layout will depend on your chosen report mode:\nMatch Overview (show highest matches together) shows the best matches between the submitted document and content from the selected search repositories. Matches are color-coded and listed from highest to lowest percentage of matching word area. Only the top or best matches are shown - you can see all other matches in the Match Breakdown and All Sources modes. All Sources shows matches between the submission and a specifically selected source from the content repositories. This is the full list of all matches found, not just the top matches per area of similarity, including those not seen in the Match Overview because they are the same or similar to other areas which are better matches. Match Breakdown shows all matches, including those that are hidden by a top source and therefore don’t appear in Match Overview. To see the underlying sources, hover over a match, and click the arrow icon. Select a source to highlight the matching text in the submitted document. Click the back arrow next to Match Breakdown to return to Match Overview mode. Side-By-Side Comparison is an in-depth view that shows a document’s match compared side-by-side with the original source from the content repositories. From the All Sources view, choose a source from the sources panel, and a source box highlights on the submitted document similar content within a snippet of the text from the repository source. In Match Overview, select the colored number at the start of the highlighted text to open this source box. To see the entire repository source, click Full Source View, which opens the full-text of the repository source in the sources panel and all the matching instances. The sidebar shows the source’s full text with each match to the document highlighted in red. Click the X icon in the top right corner of the full source text panel to close it. Use the view mode icons to switch between the Match Overview (default, left icon) and All Sources Similarity Report viewing modes. Click the right icon to change the Similarity Report view mode to All Sources.\nShow image × Viewing live web pages for a source (v1) You may access web-based sources by clicking on the source title/URL. If there are multiple matches to this source, use the arrow icons to quickly navigate through them.\nIf a source is restricted or paywalled (for example, subscription-based academic resources), you won’t be able to view the full-text of the source, but you’ll still see the source box snippet for context. Some internet sources may no longer be live.\nFrom Match Overview, click the colored number at the start of a piece of highlighted text on the submitted document. A source box will appear on the document text showing the similar content highlighted within a snippet of the text from the repository source. The source website will be in blue above the source snippet - click the link to access it.\nShow image × From Match Breakdown or All Sources, select the source for which you want to view the website, and a diagonal icon will appear to the right of the source. 
Click this to access it.\nShow image × ", "headings": ["Accessing the Similarity Report (v1) ","The Document Viewer (v1) ","Viewing live web pages for a source (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/similarity-report-use/", "title": "Working with your Similarity Report", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Working with your Similarity Report\nv1 Working with your Similarity Report, keep reading:\nOnce you have accessed your Similarity Report, you can:", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Working with your Similarity Report\nv1 Working with your Similarity Report, keep reading:\nOnce you have accessed your Similarity Report, you can:\nDownload Similarity Report PDF Apply filters and exclusions in individual Similarity Reports Exclude a match Excluded sources list Access the text-only report Download Similarity Report PDF (v1) To download a Similarity Report as a print-friendly .pdf document, click the print icon at the bottom left of the Document Viewer.\nShow image × The .pdf created is based on the current view of the Similarity Report, so a version created while in Match Overview will create a .pdf with color-coded highlights.\nFilters and exclusions in individual Similarity Reports (v1) You can use filters and exclusions to remove certain elements from being checked for similarity, and help you focus on more significant matches. The functions for excluding material are approximate - they are not perfectly accurate. Take care when choosing what to exclude, as you may miss important matches. At folder level, all users can set filters and exclusions, and administrators can also set URL filters and phrase exclusions. These settings will apply to any documents within the folder. But you can also set filters and exclusions on an individual document, so they only apply to the Similarity Report for that specific document.\nStart from the Document Viewer, and click the filters icon at the bottom of the sidebar to see the Filters \u0026amp; Settings menu.\nShow image × The filters and exclusions options are:\nExclude quoted or bibliographic material: Click the check-box next to Exclude Quotes or Exclude Bibliography, then click Apply Changes at the bottom of the Filter \u0026amp; Settings sidebar. Exclude small sources: Click the check-box for excluding by words or %, and enter a numerical value for sources to be excluded from this Similarity Report. To turn off excluding small sources, select Don’t exclude by size. Click Apply Changes at the bottom of the Filter \u0026amp; Settings sidebar. This setting will affect the All Sources view of the side panel. Exclude small matches: Under Exclude matches that are less than, choose words, and enter the numerical value for match instances to be excluded from this Similarity Report. 
To turn off excluding small matches, select Don’t exclude. Click Apply Changes at the bottom of the Filter \u0026amp; Settings sidebar. This setting will affect the Match Overview view of the side panel. Exclude sections: Under Exclude Sections, choose the sections you would like to exclude: abstract, and methods and materials (including variations). iThenticate will exclude sections of the submitted document with headers containing the excluded words: \u0026lsquo;abstract\u0026rsquo;, \u0026lsquo;method and materials\u0026rsquo;, \u0026lsquo;methods\u0026rsquo;, \u0026lsquo;method\u0026rsquo;, \u0026lsquo;materials\u0026rsquo;, and \u0026lsquo;materials and methods\u0026rsquo;. Exclude a match (v1) If you decide that a match does not need to be flagged, you can exclude the source from the Similarity Report through Match Breakdown or All Sources. The Similarity Score will be recalculated, and may change the current percentage of the Similarity Report.\nTo access Match Breakdown from Match Overview, hover over the match for which you would like to view the underlying sources, and click the arrow icon.\nShow image × In Match Breakdown, click Exclude Sources, and select the sources you would like to remove by selecting the check-box next to each, then click the Exclude button.\nShow image × To exclude an entire source match from All Sources, select Exclude Sources, select the sources you would like to remove by selecting the check-box next to each, then click the Exclude button.\nExcluded sources list (v1) The excluded sources list shows all sources excluded from the Similarity Report. To see the excluded sources list, click the excluded sources icon at the bottom of the sidebar.\nShow image × Click the check-box next to any source you would like to re-include in the Similarity Report, and click the Restore button to include the source in the Similarity Report. To restore all of the sources that were excluded from the report, click the Restore All button. The Similarity Score will be recalculated.\nShow image × The text-only report (v1) Start in the Document Viewer, and click the Text-Only Report button at the bottom right to see the Similarity Report without document formatting. The report will stay in text-only view mode (even if you close and reopen it) until you click Document Viewer to return to that mode.\nShow image × Along the top of the screen, the document information bar shows important details about the submitted document (including the date the report was processed, word count, the folder the document was submitted from, the number of matching documents found in the selected databases and the similarity index), and a menu bar with various options. Use the information bar drop-down to switch between uploaded documents in the same folder.\nShow image × The menu bar beneath the information bar has a mode selection drop-down menu, options to exclude quotes, bibliography, small sources, and small matches, as well as options to print and download.\nChoose a viewing mode from the mode drop-down menu:\nSimilarity Report (default) - this mode has a similar layout to the Document Viewer. You will see the document\u0026rsquo;s text on the left of the screen, with similarities highlighted. On the right are the sources, color-coded and listed from highest to lowest percentage of matching words. Only the top or best matches are shown - choose Content Tracking mode to see all underlying matches. Content tracking mode lists all the matches between the submitted document and the databases.
Regular updates mean that there may be many matches from the same source, some of which may be partially or completely hidden due to the content appearing in a higher matched source. The sources that are the same will specify from where they were taken and when. Summary report mode offers a simple, printable list of the matches found followed by the paper with the matching areas highlighted. It shows the sources first, with the document text below. Largest matches mode shows the percentage of words that are a part of a matching text string (with some limited flexibility). In some cases, strings from the same source may overlap, in which case the longer string in the largest match view will be displayed. You have options to filter and exclude:\nExclude quoted or bibliographic material - click Exclude Quotes or Exclude Bibliography from the menu bar. Exclude phrases - enabling this setting for a folder means that any submission made to that folder will exclude the phrases specified in the folder settings. If you would like to include these phrases in the report, click Do not Exclude Phrases in the menu bar. Exclude a match - use this to exclude a source from the Similarity Report in either the Similarity Report or largest matches viewing modes. To exclude a match, view the report in Similarity Report or largest matches mode. Each source listed has an X icon to its right - click this to exclude the source. Any underlying source, if present, will replace the excluded source. Once a source has been excluded it can be re-included in the Similarity Report through the content tracking mode, which lists all sources with content matching that of the submission. In this view mode, excluded sources have a + icon to the right of their name - click this to re-include the source in the Similarity Report. Exclude small sources and matches - click Exclude small sources or Exclude small matches in the menu bar. Exclude small sources - To exclude a small source, enter a value into the word count or percentage field to set an exclusion threshold. Any source below the word count or match percentage threshold will be excluded from the report. Click Update to save the exclusion setting. Exclude small matches - To exclude a small match, enter a value into the word count field to set an exclusion threshold. Any match below that threshold will be excluded from the report. Click Update to save the exclusion setting. Making these changes may change the percentage of matching text found within the submission. Deselect an option to include it again.\nLearn more about exclusion settings when setting up a new folder, editing filters and exclusions in existing folders, filters and exclusions within the Similarity Report, and URL filters and phrase exclusions for account administrators.\n", "headings": ["Download Similarity Report PDF (v1) ","Filters and exclusions in individual Similarity Reports (v1) ","Exclude a match (v1) ","Excluded sources list (v1) ","The text-only report (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/doc-to-doc-comparison/", "title": "Doc-to-doc comparison", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1.
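To make the size-based exclusions described above concrete, here is a minimal sketch with hypothetical data (a list of source/matching-word-count pairs, which is not iThenticate's internal representation): matches below the chosen word-count threshold are dropped and the score is recalculated from what remains.

    # Hypothetical matches for a 5,000-word submission: (source, matching word count).
    matches = [("Journal article A", 350), ("Web page B", 60), ("Conference paper C", 12)]
    total_words = 5000

    def exclude_small_matches(matches, min_words):
        """Keep only matches at or above the word-count threshold."""
        return [m for m in matches if m[1] >= min_words]

    kept = exclude_small_matches(matches, min_words=50)
    recalculated_score = round(100 * sum(words for _, words in kept) / total_words)
    print(kept)                 # [('Journal article A', 350), ('Web page B', 60)]
    print(recalculated_score)   # 8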
If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nUse doc-to-doc comparison to compare a primary uploaded document with up to five comparison uploaded documents. Any documents that you upload to doc-to-doc comparison will not be indexed and will not be searchable against any future submissions.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nUse doc-to-doc comparison to compare a primary uploaded document with up to five comparison uploaded documents. Any documents that you upload to doc-to-doc comparison will not be indexed and will not be searchable against any future submissions.\nUploading a primary document to doc-to-doc comparison will cost you a single document submission, but the comparison documents uploaded will not cost you any submissions.\nv2 Doc-to-doc comparison\nv1 Doc-to-doc comparison, keep reading:\nHow to use doc-to-doc comparison (v1) Start from Folders, go to the Submit a document menu, and click Doc-to-Doc Comparison.\nShow image × The doc-to-doc comparison screen allows you to choose one primary document and up to five comparison documents. Choose the destination folder for the documents you will upload. The Similarity Report for the comparison will be added to the same folder.\nFor your primary document, provide the author’s first name, last name, and document title. If you do not provide these details, the filename will be used for the title, and the author details will stay blank.\nIf you have administrator permissions, you can assign the Similarity Report for the comparison to a reporting group by selecting one from the Reporting Group drop-down. Learn more about reporting groups.\nClick Choose File, and select the file you want to upload as your primary document. See the file requirements for both the primary and comparison documents on the right of the screen.\nYou can choose up to five comparison documents to check against your primary document. These do not need to be given titles and author details. Each of the filenames must be unique. Click Choose Files, and select the files you would like to upload as comparison documents. To remove a file from the comparison before you upload it, click the X icon next to the file. To upload your files for comparison, click Upload.\nOnce your document has been uploaded and compared against the comparison documents, it will appear in your chosen destination folder.\nThis upload will have ‘Doc-to-Doc Comparison’ beneath the document title to show that this is a comparison upload and has not been indexed.\nShow image × The upload will be given a Similarity Score against the selected comparison documents, which is also displayed in the report column. Click the similarity percentage to open the doc-to-doc comparison in the Document Viewer.\nThe Document Viewer is separated into three sections:\nAlong the top of the screen, the paper information bar shows details about the primary document, including document title, author, date the report was processed, word count, number of comparison documents provided, and how many of those documents matched with the primary document. On the left panel is the paper text - this is the text of your primary document. Matching text is highlighted in red. 
Your comparison documents will appear in the sources panel to the right, showing instances of matching text within the submitted documents. By default, the doc-to-doc comparison will open the Document Viewer in the All Sources view. This view lists all the comparison documents you uploaded. Each comparison document has a percentage showing the amount of content within them that is similar to the primary document. If a comparison document has no matching text with the primary document, it has 0% next to it.\nDoc-to-doc comparison can also be viewed in Match Overview mode. In this view, the comparison documents are listed with highest match percentage first, and all the sources are shown together, color-coded, on the paper text.\n", "headings": ["How to use doc-to-doc comparison (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/account-info/", "title": "Your Similarity Check user account", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Similarity Check user account\nv1 Similarity Check user account, keep reading:\nManage your user account using the Account Information tab.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 Similarity Check user account\nv1 Similarity Check user account, keep reading:\nManage your user account using the Account Information tab.\nShow image × Your user account profile (v1) The Account Information section shows important information about your iThenticate account, including your account name, account ID, and user ID. Please ignore the iThenticate account expiry date - we’re working with iThenticate to have this removed. The iThenticate account expiry date is set to 1 June 2022 by default.\nShow image × From Account Info, then My Profile, you can:\nUpdate your profile: this form shows your current details. To make changes, enter your password in the Current Password field at the top of the form. Change the name attributed to your account: enter the first and last name in the relevant fields. These fields are required, you cannot leave them blank. Change your email address: enter your email into the email field. This email address is used to send you important account information, so please make sure it is valid. This field is required, you cannot leave it blank. Add a photo to your account: click Choose File, and select the image file you want to upload. Change your password: enter your current password in the Current Password field, enter your new password in the Change Password field, and enter it again in the Confirm Password field. Click Update Profile to save your changes. 
", "headings": ["Your user account profile (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/ithenticate-account-use/settings/", "title": "Settings", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 User Settings\nv1 User Settings, keep reading:\nThe Settings tab controls general, document, and report display options. These options include the number of documents shown for each page, default report view, and controlling email notifications.", "content": "iThenticate v1 or iThenticate v2?\nTo work out which version you\u0026rsquo;re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com then you are using v1. If you use a bespoke URL, https://crossref-[your member ID].turnitin.com/ then you are using v2.\nv2 User Settings\nv1 User Settings, keep reading:\nThe Settings tab controls general, document, and report display options. These options include the number of documents shown for each page, default report view, and controlling email notifications.\nShow image × General settings (v1) Show image × Use General settings to set your home folder - this is the folder will open by default when you log in to iThenticate. Choose your home folder from the drop-down menu.\nFrom the Number of documents to show drop-down, choose how many uploaded documents are listed in your folders before a new page is created.\nChoose what is displayed after you upload a document to iThenticate: Display the upload folder (to see the processing of the document you have just uploaded), or Upload another document (returns you to the upload form).\nYou can also choose the time zone and language for your account - the language you choose will set the language of your user interface.\nClick Update Settings to save your changes.\nDocuments settings (v1) Show image × Use Documents settings to choose the default way iThenticate sorts your uploaded documents: by processed date, title, Similarity Score, and author. Choose your preferred option from the drop-down menu.\nYou can set the threshold at which the Similarity Score color changes, based on the percentage of similarity. All Similarity Scores above the percentage you set will appear in the folder in blue, all those beneath the percentage will appear in gray. This visual distinction helps you easily identify matches above a given threshold. Learn more about how to interpret the Similarity Score.\nClick Update Settings to save your changes.\nReports settings (v1) Show image × Use Reports settings to adjust your email notifications, choose whether to color-code your reports, and view available document repositories for your account.\nEmail notifications tell you when a Similarity Report has exceeded particular thresholds, including Similarity Reports in shared folders. Email notifications are sent to the email address you used to sign up to iThenticate.\nReport email frequency: choose whether to receive notifications, chose how often you would like to receive them every hour, once a day, every other day, or once a week Similarity Report threshold: this refers to a paper’s overall Similarity Score. 
If the Similarity Score of a paper in your account exceeds the threshold set, you will receive an email notification. The default setting is \u0026lsquo;don\u0026rsquo;t notify me\u0026rsquo;. Content tracking report threshold: this refers to the All Sources section of the Similarity Report. If a single source for a paper in your account exceeds the similarity threshold set, you will receive an email notification. The default setting is don\u0026rsquo;t notify me. Color code report: color-coding the Similarity Report can make viewing matches easier. Choose Yes or No to enable or disable this feature.\nAvailable document repositories: this section shows the available repositories for your account. Modify them in the folder settings.\n", "headings": ["General settings (v1) ","Documents settings (v1) ","Reports settings (v1) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/similarity-report-understand/", "title": "Understanding your Similarity Report", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "How is the Similarity Score calculated? The below information will help you understand how to interpret your Similarity Report, whether you\u0026rsquo;re using iThenticate v1 or v2.\nTo calculate the Similarity Score, iThenticate scans your submitted document’s text, and checks it against each of the repositories you’ve chosen. The system takes the number of matching words found within the document and divides it by the document’s total word count to produce the Similarity Score percentage for the report.", "content": "How is the Similarity Score calculated? The below information will help you understand how to interpret your Similarity Report, whether you\u0026rsquo;re using iThenticate v1 or v2.\nTo calculate the Similarity Score, iThenticate scans your submitted document’s text, and checks it against each of the repositories you’ve chosen. The system takes the number of matching words found within the document and divides it by the document’s total word count to produce the Similarity Score percentage for the report.\nIf you apply exclusion options to the document, the system removes all matches for the exclusion option logic and recalculates the Similarity Score percentage.\nLearn more about exclusion settings when setting up a new folder (v1 only), editing filters and exclusions in existing folders (v1 only), filters and exclusions within the Similarity Report (v1 or v2), and URL filters (v1 or v2) for account administrators.\nHow to interpret the Similarity Report iThenticate does not check for plagiarism; it checks for similarity. Where a section of the submission’s content is similar or identical to one or more sources, it will be flagged for review. This doesn’t automatically mean plagiarism, however - just similarity.\nIt’s perfectly natural for a submission to match against some sources in the database. A high degree of overlap may indicate a well-researched document with many references to existing work, and as long as these sources are quoted and referenced correctly, this is perfectly acceptable. A high degree of overlap may also be present where an author has already shared their work on a preprint repository. 
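A minimal sketch of the arithmetic described under "How is the Similarity Score calculated?" above, with hypothetical numbers rather than iThenticate's internal code: matching words divided by total words gives the percentage, and applying an exclusion removes matches before the score is recalculated.

    def similarity_score(matching_words, total_words):
        """Similarity Score as a percentage of matching words in the document."""
        return round(100 * matching_words / total_words)

    # A 5,000-word manuscript with 1,200 matching words scores 24%.
    print(similarity_score(1200, 5000))        # 24

    # Excluding, say, 400 words of bibliography matches triggers a recalculation.
    print(similarity_score(1200 - 400, 5000))  # 16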
If the author(s) are the same, this is not a problem.\nIt’s important that you don’t set a Similarity Score over which you automatically reject manuscripts - where there’s a high degree of overlap, your editors and reviewers should decide if the match is acceptable or not, as part of their general review process.\nSimilarity Reports and preprints It is entirely possible (and acceptable) for an author to submit an article to a journal even though they’ve previously made the article available as a preprint. In this case, we expect a high degree of similarity between the preprint and author’s submitted manuscript.\nTherefore, if you find a high degree of similarity between a manuscript you’re checking in iThenticate and a preprint by the same author(s), this is likely to be because the manuscript is a match with its own preprint. However, if the manuscript and preprint do not have the same author(s), this may indicate a problem, and you should investigate further.\nSome preprints can be found in iThenticate’s Crossref Posted Content repository, so take this into account if you are checking against this repository. But even if you have excluded the Crossref Posted Content repository in your settings (v1or v2), it is still possible for preprints to appear as matches to a submission, because iThenticate also crawls preprint repositories on the web.\nWe recommend including preprints in your results to ensure you are checking that preprints haven’t been plagiarised by a different author, but if you see a pre-print match for the same author, this isn\u0026rsquo;t plagiarism.\n", "headings": ["How is the Similarity Score calculated? ","How to interpret the Similarity Report ","Similarity Reports and preprints "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/upgrading/", "title": "Upgrading from iThenticate v1 to iThenticate v2", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "Background The Similarity Check service gives Crossref members reduced rate access to the iThenticate tool from Turnitin, and there\u0026rsquo;s now a new version of iThenticate available for some subscribers. iThenticate v2 has lots of new and improved features, including:\nThe ability to identify content on preprint servers more easily A “red flag” feature that signals the detection of hidden text such as text/quotation marks in white font, or suspicious character replacement A faster, more user-friendly and responsive Similarity Report interface For those members who integrate their iThenticate account with a Manuscript Tracking Service (MTS) there are even more benefits - you can now submit your manuscripts and view your Similarity Report from within the MTS, and you can also manage your exclusions from within your MTS too.", "content": "Background The Similarity Check service gives Crossref members reduced rate access to the iThenticate tool from Turnitin, and there\u0026rsquo;s now a new version of iThenticate available for some subscribers. 
iThenticate v2 has lots of new and improved features, including:\nThe ability to identify content on preprint servers more easily A “red flag” feature that signals the detection of hidden text such as text/quotation marks in white font, or suspicious character replacement A faster, more user-friendly and responsive Similarity Report interface For those members who integrate their iThenticate account with a Manuscript Tracking Service (MTS) there are even more benefits - you can now submit your manuscripts and view your Similarity Report from within the MTS, and you can also manage your exclusions from within your MTS too. There are also some important changes in how you manage users.\nYou can find out more about the benefits of iThenticate v2 on our blog.\nWho can upgrade from iThenticate v1 to iThenticate v2? Most existing subscribers to Similarity Check are currently using iThenticate v1, and over the next year or so we\u0026rsquo;ll be inviting these members to upgrade to iThenticate v2. Not everyone can upgrade immediately. This is because many members integrate their iThenticate account with their Manuscript Tracking Service (MTS), and not all the MTSs are able to integrate with iThenticate v2 just yet.\nAn upgrade is currently available to members of Crossref:\nUsing iThenticate through the browser Integrating iThenticate with eJournal Press Integrating iThenticate with ScholarOne Manuscripts Integrating iThenticate with Scholastica Peer Review Integrating iThenticate with Editorial Manager We\u0026rsquo;ll be letting members know as the other Manuscript Tracking Systems are able to integrate with iThenticate v2. Brand new subscribers who fall into the categories above will be set up directly on iThenticate v2.\nNot sure if you\u0026rsquo;re currently using iThenticate v1 or iThenticate v2? Here\u0026rsquo;s how to check.\nThe upgrade process When you are able to upgrade from iThenticate v1 to iThenticate v2, we will contact the Similarity Check editorial contact on your account and ask you to complete a form. We will then check that you are still eligible for Similarity Check. This means that we will check that you are still providing Similarity Check URLs for at least 90% of the content you have registered with Crossref, and that the team at Turnitin can continue to index this content into the iThenticate database.\nOnce this is done, the team at Turnitin will send an email to the Similarity Check editorial contact on your account, so they can set up administrator credentials on your new iThenticate v2 account. The email will look like this:\nClick on the blue ‘Set up my account’ button at the bottom of the email. This will bring you to a page which looks something like this:\nFill out your username and password, and don’t forget to tick to agree to the terms and conditions. You will then arrive at your new iThenticate v2 account.\nOnce you have administrator access to your new iThenticate v2 account, you can follow the instructions for getting set up. There are a different set of instructions depending on whether:\nYou will be using iThenticate directly in the browser - more here. You will be integrating iThenticate with your Manuscript Tracking System (MTS) - more here. 
If you have any follow up questions after your upgrade, do read our upgrade FAQs.\n", "headings": ["Background","Who can upgrade from iThenticate v1 to iThenticate v2?","The upgrade process"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/upgrading/v1-or-v2/", "title": "v1 or v2?", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "Not sure if you are currently using iThenticate v1 or iThenticate v2?\niThenticate v1 iThenticate v1 looks like this:\nShow image × If you access iThenticate through the browser, you will use the address ithenticate.com.\niThenticate v2 iThenticate v2 looks like this:\nIf you access iThenticate through the browser, you will use a bespoke URL: https://crossref-[your member ID].turnitin.com", "content": "Not sure if you are currently using iThenticate v1 or iThenticate v2?\niThenticate v1 iThenticate v1 looks like this:\nShow image × If you access iThenticate through the browser, you will use the address ithenticate.com.\niThenticate v2 iThenticate v2 looks like this:\nIf you access iThenticate through the browser, you will use a bespoke URL: https://crossref-[your member ID].turnitin.com\n", "headings": ["iThenticate v1","iThenticate v2"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/upgrading/faqs/", "title": "Upgrade FAQs", "subtitle":"", "rank": 4, "lastmod": "2022-07-15", "lastmod_ts": 1657843200, "section": "Documentation", "tags": [], "description": "Differences between iThenticate v1 and iThenticate v2 Upgrading from a iThenticate v1 to iThenticate v2 Problems after upgrading from v1 to v2 Differences between iThenticate v1 and iThenticate v2 Q. What are the big differences between iThenticate v1 and v2 for all users? As well as the faster, more user-friendly and responsive interface there are a few new features in iThenticate v2 which everyone can benefit from.\nThe Submitted Works repository (or Private Repository) The Submitted Works Repository (or Private Repository) is a new feature in iThenticate v2 which is now available to Crossref members.", "content": " Differences between iThenticate v1 and iThenticate v2 Upgrading from a iThenticate v1 to iThenticate v2 Problems after upgrading from v1 to v2 Differences between iThenticate v1 and iThenticate v2 Q. What are the big differences between iThenticate v1 and v2 for all users? As well as the faster, more user-friendly and responsive interface there are a few new features in iThenticate v2 which everyone can benefit from.\nThe Submitted Works repository (or Private Repository) The Submitted Works Repository (or Private Repository) is a new feature in iThenticate v2 which is now available to Crossref members. This feature allows users to find similarity not just across Turnitin’s extensive Content Database but also across all previous manuscripts submitted to your iThenticate account for all the journals you work on. This would allow you to find collusion between authors or potential cases of duplicate submissions.\nUsing iThenticate v2 in the browser? Find out more about the Submitted Works Repository.\nUsing iThenticate v2 through an integration with your MTS? This feature is currently only available to ScholarOne Manuscripts users. Find out more about the Submitted Works Repository. Better identification of preprints Using iThenticate v2 in the browser? 
Find out more about Preprint labels and exclusions.\nUsing iThenticate v2 through an integration with your MTS? Find out more about Preprint labels and exclusions. Identification of hidden text or suspicious character replacement The new “red flag” feature signals the detection of hidden text such as text/quotation marks in white font, or suspicious character replacement. Find out more about the flags\nNo need to manually exclude bibliography or citations In iThenticate v2, there is no need to choose to exclude the bibliography or citations in papers written in English, as Turnitin will automatically exclude these from your Similarity Report. For more information, please see Turnitin\u0026rsquo;s documentation here.\nPlease note, if you using an MTS integration and you find that the bibliography or citations are not being excluded, please reach out to your MTS, as they will need to adjust the settings on their end to ensure these exclusions are in place.\nQ. Are there other differences between v1 and v2 that those who integrate with an MTS will also notice? Yes, there are some other big improvements to iThenticate for members who integrate iThenticate with their MTS.\nView your Similarity Reports within your MTS You can now both submit your manuscripts AND view your Similarity Report from within your MTS.\nNo folders if you integrate with an MTS If you are using iThenticate through an integration then there is no need to set up any folders within your iThenticate account. Unlike in iThenticate v1, integrations will not send submissions to a folder which is accessible by logging into iThenticate through your browser. Submissions created by your MTS will only be accessible through your MTS.\nNo individual users in iThenticate if you integrate with an MTS If you are using iThenticate through an integration with an MTS, then you do not need to set up any new users on your new iThenticate account. This is because all the submissions from your MTS will be made by the API key you’ve set, rather than individual users.\nQ. Is AI detection available with iThenticate v2 through Similarity Check? We\u0026rsquo;re not currently offering the iThenticate AI add-on to Crossref members as part of the Similarity Check service. This is based on feedback from our members who tested its functionality.\nWe do hope to offer the AI feature in the future though, and we’ll revisit the decision as the service develops. Keep an eye out for announcements on our community forum.\nUpgrading from iThenticate v1 to iThenticate v2 Q. Can I move my existing users over from iThenticate v1 to iThenticate v2 when I upgrade? If you are integrating iThenticate v2 with your MTS, you don\u0026rsquo;t need to move or add your existing users. More here.\nIf you are using iThenticate v2 in the browser, you can import your users from v1 in bulk by following the steps below:\nExport users from v1: Log into your iThenticate v1 account and go to Manage Users and press the download xls link at the top of the Profile page. This will give you a complete list of users from your account.\nImport users to v2: You can find instructions on how to import users to your v2 account here. Using the file you downloaded from v1 you should be able to copy and paste the required data into the template provided.\nQ. When I move to iThenticate v2, will I immediately lose access to iThenticate v1? 
Once you have access to iThenticate v2, you may need some time to transition all your journals over from iThenticate v1.\nUsing iThenticate directly in the browser You will be able to continue to use your iThenticate v1 account for several months so you have time to move everyone over. After a few months you will lose your ability to submit new manuscripts through your iThenticate v1 account, but you will still be able to access your old reports. We will email your Similarity Check editorial contact with several months' notice before we remove your ability to submit new manuscripts.\nUsing iThenticate through an integration with an MTS As soon as you update your integration to work with your iThenticate v2 account, you will lose your ability to submit new manuscripts to v1 through that MTS. However, you will still be able to access your past reports.\nIf you use more than one MTS, you can use iThenticate v2 with one MTS, but continue to use your iThenticate v1 account with other MTSs that aren\u0026rsquo;t yet ready to integrate with iThenticate v2. We won\u0026rsquo;t remove your ability to check new documents in iThenticate v1 until all the MTSs that you work with are able to work with v2, and you have integrated all of them with v2.\nQ. Can I still view my previous submissions from iThenticate v1 in iThenticate v2? The submissions you made through iThenticate v1 will not be accessible directly through iThenticate v2. However, you will still be able to access your historical reports.\nIf you are using an integration with an MTS If you were using iThenticate v1 through an integration which is now using iThenticate v2, then we have asked most manuscript tracking systems to continue to support access to your v1 reports. This means that when you attempt to access a historical report it will open in the old document viewer. Each MTS is only intending to support access to v1 historical reports for a limited period of time. Please check with your MTS for details as to when they will no longer support this functionality.\nOnce access to historical reports through your MTS is no longer possible, you will still be able to access your historical reports by logging into your old iThenticate v1 account directly in the browser. You will need to go to ithenticate.com and log in using the original administrator credentials for iThenticate v1. Once logged in, you can run reports to find historical reports. There are instructions for how to do this on Turnitin\u0026rsquo;s website.\nAt some point in the future iThenticate v1 will be sunset and you will no longer be able to access your old account. Before this happens a solution will be in place to ensure you can still access your historical reports.\nIf you use iThenticate through the browser Once you have moved to iThenticate v2 you will still be able to access your historical reports through your old iThenticate v1 account. You will need to log into this by going to ithenticate.com and logging in using the administrator credentials previously used. Once logged in, you can run reports to find historical reports. There are instructions for how to do this on Turnitin\u0026rsquo;s website.\nAt some point in the future iThenticate v1 will be sunset and you will no longer be able to access your old account. Before this happens a solution will be in place to ensure you can still access your historical reports.\nQ. Is there an extra charge for using iThenticate v2?
There is no charge to upgrade to iThenticate v2, and the costs to check your manuscripts are the same as in iThenticate v1. If you are using iThenticate v2 for some of your journals, but continue to use iThenticate v1 for some of your other journals, your volume discounts will apply across both instances.\nHowever, it\u0026rsquo;s important to note that we can\u0026rsquo;t tell if you check the same document in both iThenticate v1 and iThenticate v2. If you do this, you will be charged twice.\nQ. Can I upgrade from v1 to v2 if I use the OJS platform from PKP? The OJS platform cannot currently integrate with iThenticate v2, so if you are already using iThenticate v1 and you are an OJS user, you will have to wait to upgrade to v2. But don\u0026rsquo;t worry - we understand that this integration will be ready really soon.\nQ. My organization uses more than one Manuscript Tracking System. One of my MTSs is able to integrate with iThenticate v2 already, but the other one can\u0026rsquo;t yet integrate with v2. Should I upgrade or not? If we have contacted you about upgrading to iThenticate v2, this will give you access to a v2 account, and you will be able to link this with the MTS that\u0026rsquo;s ready. You can continue using iThenticate v1 with the other MTS while you wait for them to be able to integrate with v2.\nQ. If I use iThenticate v2 with one of my MTSs, and iThenticate v1 with another (as the other MTS can\u0026rsquo;t yet integrate with v2), what happens to my volume discounts? Your volume discounts will apply against the total number of manuscripts that you\u0026rsquo;ve checked across both instances.\nPlease note though - we can\u0026rsquo;t tell if you\u0026rsquo;ve checked the same document in both v1 and v2. If you do this, you will be charged twice, so it\u0026rsquo;s important to keep them separate.\nProblems after upgrading from v1 to v2 Q. I can\u0026rsquo;t login to my new v2 account If you are the main Similarity Check contact on your member account, you should have received an email from Turnitin to set up your administrator password. Have you received the email and set up your new credentials yet? More information here.\nIf you have already received the email and set up your credentials and are still having problems, please check that you are going to the right place to access your iThenticate v2 account. Your old iThenticate v1 account and your new iThenticate v2 account are completely different, and you will get to them using a different URL. Some members accidentally try to login to their new v2 account on the URL for their old v1 account by mistake. To make sure that you are definitely going to the right place for v2, follow these instructions.\nQ. I can\u0026rsquo;t share links to allow others to go directly to Similarity Check reports within my v2 instance This is deliberate - being able to share links to the system was a security risk and this functionality has been deliberately removed by Turnitin. You can however download and email Similarity Reports to others.\n", "headings": ["Differences between iThenticate v1 and iThenticate v2","Q. What are the big differences between iThenticate v1 and v2 for all users?","The Submitted Works repository (or Private Repository)","Better identification of preprints","Identification of hidden text or suspicious character replacement","No need to manually exclude bibliography or citations","Q. 
Are there other differences between v1 and v2 that those who integrate with an MTS will also notice?","View your Similarity Reports within your MTS","No folders if you integrate with an MTS","No individual users in iThenticate if you integrate with an MTS","Q. Is AI detection available with iThenticate v2 through Similarity Check?","Upgrading from iThenticate v1 to iThenticate v2","Q. Can I move my existing users over from iThenticate v1 to iThenticate v2 when I upgrade?","Q. When I move to iThenticate v2, will I immediately lose access to iThenticate v1?","Using iThenticate directly in the browser","Using iThenticate through an integration with an MTS","Q. Can I still view my previous submissions from iThenticate v1 in iThenticate v2?","If you are using an integration with an MTS","If you use iThenticate through the browser","Q. Is there an extra charge for using iThenticate v2?","Q. Can I upgrade from v1 to v2 if I use the OJS platform from PKP?","Q. My organization uses more than one Manuscript Tracking System. One of my MTSs is able to integrate with iThenticate v2 already, but the other one can\u0026rsquo;t yet integrate with v2. Should I upgrade or not?","Q. If I use iThenticate v2 with one of my MTSs, and iThenticate v1 with another (as the other MTS can\u0026rsquo;t yet integrate with v2), what happens to my volume discounts?","Problems after upgrading from v1 to v2","Q. I can\u0026rsquo;t login to my new v2 account","Q. I can\u0026rsquo;t share links to allow others to go directly to Similarity Check reports within my v2 instance"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/get-help/", "title": "Get help with Similarity Check", "subtitle":"", "rank": 4, "lastmod": "2020-05-19", "lastmod_ts": 1589846400, "section": "Documentation", "tags": [], "description": "Do start by reading the documentation on this website.\nHelp with initial set up for iThenticate administrators Setting up your iThenticate v1 account Setting up your iThenticate v2 account to use directly in the browser Setting up your iThenticate v2 account to use with your Manuscript Tracking System Help using the iThenticate tool Using your iThenticate account Understanding your Similarity Report Help with upgrading from iThenticate v1 to iThenticate v2 Upgrading from iThenticate v1 to iThenticate v2 If you still don\u0026rsquo;t understand how to use the iThenticate tool, the team at Crossref can help - just contact us.", "content": "Do start by reading the documentation on this website.\nHelp with initial set up for iThenticate administrators Setting up your iThenticate v1 account Setting up your iThenticate v2 account to use directly in the browser Setting up your iThenticate v2 account to use with your Manuscript Tracking System Help using the iThenticate tool Using your iThenticate account Understanding your Similarity Report Help with upgrading from iThenticate v1 to iThenticate v2 Upgrading from iThenticate v1 to iThenticate v2 If you still don\u0026rsquo;t understand how to use the iThenticate tool, the team at Crossref can help - just contact us.\nIf you encounter a technical issue (such as an error message or a bug), please contact the Turnitin team directly at tiisupport@turnitin.com.\nEmail: tiisupport@turnitin.com Phone: +1 866 816 5046, extension 241 To find out about outages or planned maintenance that affect the iThenticate system, check the Turnitin status page or Turnitin\u0026rsquo;s X feed. 
These pages display information on service outages, maintenance alerts, or incidents related to the performance of iThenticate. You can also subscribe to the notifications feed to automatically receive updates. For known bugs in the iThenticate software, please visit Turnitin\u0026rsquo;s known issues page.\n", "headings": ["Help with initial set up for iThenticate administrators","Help using the iThenticate tool","Help with upgrading from iThenticate v1 to iThenticate v2"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/best-practices/versioning/", "title": "Version control, corrections, and retractions", "subtitle":"", "rank": 4, "lastmod": "2023-04-15", "lastmod_ts": 1681516800, "section": "Documentation", "tags": [], "description": "Version control is the management of changes to a document, file, or dataset. Versions may include: draft, preprint, pending publication, author accepted manuscript (AAM), version of record (VOR), updated or corrected, and even retracted.\nDraft Preprint - early draft or manuscript shared by researcher in a preprint repository or dedicated channel (outside of a specific journal) Pending publication (PP) - a manuscript which has been accepted but has not yet been published online Advanced online publication or ahead of print (AOP) - early release of publication which publisher makes available to readers on their platform (prior to typesetting or before final published form) Author accepted manuscript (AAM) - accepted version which has been peer reviewed but not typeset or copyedited Version of record (VoR) - typeset, copyedited, and published version Updated - adding supplementary data or making corrections to the file, or its retraction.", "content": "Version control is the management of changes to a document, file, or dataset. Versions may include: draft, preprint, pending publication, author accepted manuscript (AAM), version of record (VOR), updated or corrected, and even retracted.\nDraft Preprint - early draft or manuscript shared by researcher in a preprint repository or dedicated channel (outside of a specific journal) Pending publication (PP) - a manuscript which has been accepted but has not yet been published online Advanced online publication or ahead of print (AOP) - early release of publication which publisher makes available to readers on their platform (prior to typesetting or before final published form) Author accepted manuscript (AAM) - accepted version which has been peer reviewed but not typeset or copyedited Version of record (VoR) - typeset, copyedited, and published version Updated - adding supplementary data or making corrections to the file, or its retraction. Version control is important for:\ntraceability (following the development of the document), identifiability (connecting documents to decisions, contributions, contributors, and time), clarity (distinguishing between multiple versions of documents, and identifying the latest version), reduced duplication (removing out-of-date versions), and reduced errors (clearly indicating to readers which is the current version). Reading list COPE Guidelines for retracting articles - part of the COPE guidelines Journal Article Versions (JAV): Recommendations of the NISO/ALPSP JAV Technical Working Group (opens .pdf file) Errata, Retractions, and Other Linked Citations in PubMed Publication stages and DOIs How do I decide if I should assign a DOI to a work, and at what stage? 
This table sets out seven publication stages of a research object (a publication such as a journal article, book, or dataset). A work may not go through all of these seven stages, so you only need to consider the stages relevant to your publication.\nPublication stage Eligible for a DOI? Which DOI? 1 Draft No DOI for draft item n/a 2 Preprint Yes DOI A 3 Pending publication (PP) Yes DOI B 4 Advanced online publication/ahead of print (AOP) Yes DOI B 5 Author accepted manuscript (AAM) Yes DOI B 6 Version of record (VoR) Yes DOI B 7 Updated Yes DOI C A DOI should not be assigned to a draft (unpublished) work. A preprint should have its own DOI (DOI A). Accepted versions (including PP, AOP, AAM, and VoR) should have a separate DOI (DOI B). Establish a relationship between DOI B and DOI A to show the connection between them, such as DOI B \u0026ldquo;hasPreprint\u0026rdquo; DOI A. In the case of a significant change to the published version, a notice should be published explaining the correction/update/retraction. The updated version should have a new DOI (DOI C). Updates should only be deposited for changes that are likely to affect the interpretation or crediting of the work (editorially significant changes), and instead of simply asserting a relationship, these should be recorded as updates. The metadata for the update is part of the Crossmark section of the metadata, and should include a link to the item being updated, and the type of update:\n\u0026lt;updates\u0026gt; \u0026lt;update type=\u0026#34;retraction\u0026#34; label=\u0026#34;Retraction\u0026#34; date=\u0026#34;2009-09-14\u0026#34;\u0026gt;10.5555/12345678\u0026lt;/update\u0026gt; \u0026lt;/updates\u0026gt; Note that you don\u0026rsquo;t need to use all aspects of Crossmark to register updates.\n", "headings": ["Reading list ","Publication stages and DOIs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/participating-in-crossmark/", "title": "Participating in Crossmark", "subtitle":"", "rank": 4, "lastmod": "2024-08-09", "lastmod_ts": 1723161600, "section": "Documentation", "tags": [], "description": "Implementing Crossmark includes several stages, some of which require technical knowledge to modify websites or PDFs.\nFull implementation means that you\u0026rsquo;ll need to include Crossmark-specific metadata when registering content and add the Crossmark button to your website and PDFs. If you are not able to finish the process, that\u0026rsquo;s ok, make a start and continue when you have the expertise to do so.\nOn this page, learn more about:\nStep one: Designate an update policy page and assign it a DOI Step two: Include the policy page in all your registered content Step three: Register published updates Step four: Implement Crossmark on your HTML pages Step five: Apply Crossmark to your PDF content Further options: adding more Crossmark information Step one: Designate an update policy page and assign it a DOI You will need to explain to your readers how your content is updated after publication and indicate when this has happened.", "content": "Implementing Crossmark includes several stages, some of which require technical knowledge to modify websites or PDFs.\nFull implementation means that you\u0026rsquo;ll need to include Crossmark-specific metadata when registering content and add the Crossmark button to your website and PDFs. 
If you are not able to finish the process, that\u0026rsquo;s ok, make a start and continue when you have the expertise to do so.\nOn this page, learn more about:\nStep one: Designate an update policy page and assign it a DOI Step two: Include the policy page in all your registered content Step three: Register published updates Step four: Implement Crossmark on your HTML pages Step five: Apply Crossmark to your PDF content Further options: adding more Crossmark information Step one: Designate an update policy page and assign it a DOI You will need to explain to your readers how your content is updated after publication and indicate when this has happened. The first step is to have a single page on your website explaining these processes. This page should be registered and have a DOI to enable persistent linking. It must include your policies on corrections, retractions, withdrawals and other updates. It can include links to other relevant policies such as author submission guidelines and peer review guidelines, and may contain definitions and explanations of additional custom metadata fields you have used. You may already have a suitable page on your website, but don\u0026rsquo;t forget to assign it a DOI and register the metadata with us.\nLearn more about creating an update policy page.\nStep two: Add the policy page DOI to all of your content It’s important to apply Crossmark to all of your current content, not just content that has updates. When an item is published, you don’t know if it will be updated in the future. Therefore, a researcher may download a PDF article today without a Crossmark button, and if the article is subsequently updated they have no way of knowing if their locally-saved version is still current. If you\u0026rsquo;re using the Crossmark service, we expect you to display the Crossmark button on all your content, whether it has an update or not.\nAt a minimum, you will need to include the update policy page in each metadata record that you register. Here\u0026rsquo;s how to do that via several registration methods:\nXML deposit If you register content with us in XML format using either the admin tool or HTTPS POST, you can include Crossmark metadata in your initial deposit. You can also add Crossmark metadata to existing DOIs using a resource-only-XML deposit.\nWe provide a sample XML file with the fields you need to include.\nUsing the web deposit form Using the web deposit form you can register Crossmark metadata for journal articles.\nFill in the \u0026lsquo;Policy page DOI\u0026rsquo; field (and other fields if they are relevant). Note that Crossmark metadata for types other than journal articles (such as books or preprints) is not supported by the web deposit form.\nMetadata manager (deprecated) Registering the update policy page is also possible using the Metadata Manager by adding the update policy page at journal level. Go to the journal-level record for your publication, add your Crossmark policy page DOI, and click Save.\nThe metadata will automatically be added to all content registered to this journal.\nStep three: Add metadata that reflects any updates to specific items If a registered item is updated, you need to register a different metadata record for the update. This is only necessary for editorially significant changes\u0026mdash;those that are likely to affect the interpretation or crediting of the work, and where a separate update notice is usually published. 
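In the deposit itself, the update is recorded in the updates element of the Crossmark metadata, as in the retraction example given in the version control guidance. As a rough sketch only (the DOI, label, and date below are placeholders, and the fragment must be embedded in the full Crossmark metadata of the update notice you register), the fragment could be generated in Python with just the standard library:

import xml.etree.ElementTree as ET

# Rough sketch only: build the Crossmark updates fragment for a hypothetical
# correction. The DOI, label, and date are placeholders; embed the fragment in
# the full Crossmark metadata of the update notice you register.
updates = ET.Element("updates")
update = ET.SubElement(updates, "update", {
    "type": "correction",         # one of the defined update types listed below
    "label": "Correction",
    "date": "2024-01-15",
})
update.text = "10.5555/12345678"  # DOI of the item being corrected
print(ET.tostring(updates, encoding="unicode"))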
Minor changes, by contrast, can be made directly to the content without notifying Crossref, including cases such as minor spelling corrections or formatting changes that don\u0026rsquo;t affect the metadata.\nThere are 12 defined types of update accepted in our schema:\naddendum clarification correction corrigendum erratum expression_of_concern new_edition new_version partial_retraction removal retraction withdrawal If an update does not fall into one of these categories, it should instead be placed in the more information section of the pop-up box in the web deposit form by being deposited as an assertion.\nWhen deposited content corrects or updates earlier content, the DOI(s) of the corrected content must be supplied in the Crossmark metadata. See the Crossref unixref documentation section on updates for examples of how this is recorded in the Crossmark metadata. When a correction is made in situ (that is, it replaces the earlier version completely), then the DOI of the corrected content will be the same as the DOI for the original Crossref deposit. In situ updates are not considered best practice as they obscure the scholarly record.\nStep four: Apply the Crossmark button to your HTML pages There are two options for applying Crossmark to your website.\nAdd the Crossmark logo with a link. Install a JavaScript widget that creates a Crossmark popup. These are explained in the following sections.\nAdd a logo with a link This is the simplest way to implement Crossmark on websites. Simply add a version of the Crossmark logo to the landing page for each of your registered items (usually the page where the abstract is shown) and link the logo to the Crossmark page of the relevant DOI.\nThere are several variations of the Crossmark logo, for example, you can use:\nCROSSMARK_Color_square.eps CROSSMARK_Color_horizontal.eps The link needs to be specific to each landing page. Here is an example:\nhttps://crossmark.crossref.org/dialog?doi=10.5555/abcdef\u0026amp;domain=html\u0026amp;date_stamp=2008-08-14\ndoi is the DOI of the content item\ndomain tells the Crossmark system what kind of static content the link is coming from, and will change for different static formats (such as html, pdf, epub)\ndate_stamp tells the Crossmark system the date on which the last Major Version of the PDF was generated. In most cases, this will be the date the article was published. However, when a member makes significant corrections to a PDF in-situ (no notice issued, and no new version of the work with a new DOI) then the date_stamp should reflect when the PDF was regenerated with the corrections. The system will then use the date_stamp in order to tell whether the reader needs to be alerted to updates or not. The date_stamp argument should be recorded in the form YYYY-MM-DD (learn more about ISO 8601). The final result is the Crossmark button displayed on your landing page. Clicking the logo will open a new tab or window displaying the Crossmark information.\nA Crossmark popup A different solution, which is more technical to implement, enables a popup containing Crossmark information. It has the advantage that readers do not leave your website when clicking the button.\nWe supply a templated HTML/JavaScript code widget which will embed the Crossmark button and functionality into your web pages. The latest version of the widget (v2.0) is below. Ensure you are using the latest version and that it points to our production server. 
Do not alter the script or host the button locally.\n\u0026lt;!-- Start Crossmark Snippet v2.0 --\u0026gt; \u0026lt;script src=\u0026#34;https://0-crossmark--cdn-crossref-org.libus.csd.mu.edu/widget/v2.0/widget.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;a data-target=\u0026#34;crossmark\u0026#34;\u0026gt;\u0026lt;img src=\u0026#34;https://0-crossmark--cdn-crossref-org.libus.csd.mu.edu/widget/v2.0/logos/CROSSMARK_BW_horizontal.svg\u0026#34; width=\u0026#34;150\u0026#34; /\u0026gt;\u0026lt;/a\u0026gt; \u0026lt;!-- End Crossmark Snippet --\u0026gt; Select one of the variations of the Crossmark button available. You can change the Crossmark button that is used simply by changing the src attribute of the img element to point to one of the following, for example:\nCROSSMARK_Color_square.eps CROSSMARK_Color_horizontal.eps Alternatively, check the source on this page to see the correct link for each style of button.\nThe button can be resized according to your design needs by changing the image width in the image tag but do follow the Crossmark button guidelines.\nThe Crossmark popup needs to have a DOI to reference in order to pull in the relevant information. This needs to be embedded in the head of the HTML metadata for all content to which Crossmark buttons are being applied as follows:\n\u0026lt;meta name=\u0026quot;dc.identifier\u0026quot; content=\u0026quot;doi:10.5555/12345678\u0026quot;/\u0026gt;\nStep five: Apply the Crossmark button to your PDF content To implement Crossmark in PDF files, the solution is very similar to that for the first website solution above:\nSelect a suitable Crossmark logo. Add the logo to your PDF files with a link to the correct Crossmark link, for example https://0-crossmark-crossref-org.libus.csd.mu.edu/dialog?doi=10.5555/abcdef\u0026amp;domain=pdf\u0026amp;date_stamp=2008-08-14. Optional updates to PDF metadata For additional transparency and to enable easier machine-reading of Crossmark metadata, you can modify the metadata of your PDFs. 
This is best done during production before the final PDF has been created and any security has been added to the document.\nA minimal XMP file for the above PDF would look like this:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;?xpacket begin=\u0026#34;?\u0026#34; id=\u0026#34;W5M0MpCehiHzreSzNTczkc9d\u0026#34;?\u0026gt; \u0026lt;x:xmpmeta xmlns:x=\u0026#34;adobe:ns:meta/\u0026#34; x:xmptk=\u0026#34;Adobe XMP Core 4.0-c316 44.253921, Sun Oct 01 2006 17:14:39\u0026#34;\u0026gt; \u0026lt;rdf:RDF xmlns:rdf = \u0026#34;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026#34; xmlns:pdfx = \u0026#34;http://ns.adobe.com/pdfx/1.3/\u0026#34; xmlns:pdfaid = \u0026#34;http://www.aiim.org/pdfa/ns/id/\u0026#34; xmlns:xap = \u0026#34;http://ns.adobe.com/xap/1.0/\u0026#34; xmlns:xapRights = \u0026#34;http://ns.adobe.com/xap/1.0/rights/\u0026#34; xmlns:dc = \u0026#34;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026#34; xmlns:dcterms = \u0026#34;http://0-purl-org.libus.csd.mu.edu/dc/terms/\u0026#34; xmlns:prism = \u0026#34;http://prismstandard.org/namespaces/basic/2.0/\u0026#34; xmlns:crossmark = \u0026#34;http://crossref.org/crossmark/2.0/\u0026#34;\u0026gt; \u0026lt;rdf:Description rdf:about=\u0026#34;\u0026#34;\u0026gt; \u0026lt;dc:identifier\u0026gt;doi:10.5555/12345678\u0026lt;/dc:identifier\u0026gt; \u0026lt;prism:doi\u0026gt;10.5555/12345678\u0026lt;/prism:doi\u0026gt; \u0026lt;prism:url\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.5555/12345678\u0026lt;/prism:url\u0026gt; \u0026lt;crossmark:MajorVersionDate\u0026gt;2015-08-14\u0026lt;/crossmark:MajorVersionDate\u0026gt; \u0026lt;crossmark:DOI\u0026gt;10.5555/12345678\u0026lt;/crossmark:DOI\u0026gt; \u0026lt;pdfx:doi\u0026gt;10.5555/12345678\u0026lt;/pdfx:doi\u0026gt; \u0026lt;pdfx:CrossmarkMajorVersionDate\u0026gt;2015-08-14\u0026lt;/pdfx:CrossmarkMajorVersionDate\u0026gt; \u0026lt;/rdf:Description\u0026gt; \u0026lt;/rdf:RDF\u0026gt; \u0026lt;/x:xmpmeta\u0026gt; \u0026lt;?xpacket end=\u0026#34;w\u0026#34;?\u0026gt; It may appear redundant to apply Crossmark elements both in their own Crossmark namespace as well in the pdfx namespace, but the latter is necessary to ensure the Crossmark elements appear in the PDF dictionary, a specific requirement for some search engines. Any metadata found in the pdfx namespace will be copied over to the document info dictionary. Simply make sure that Crossmark metadata is in the pdfx namespace in the XMP provided to the tool.\nFurther options: adding more information to the Crossmark button The Crossmark box has a section for you to show any additional non-bibliographic information about the content. You decide what to include here, and you are not required to add anything. In this section, Crossmark participants often include publication history dates, details of the peer review process used, and links to supporting information.\nUse Metadata Manager to add custom metadata, or use the assertion element in your XML.\nSeveral metadata elements will automatically display in the Crossmark box if you are registering them:\nAuthor names and their ORCID iDs (learn more about contributors) Funding information (learn more about funding information) License URLs (learn more about license information) If you are already registering this additional metadata at the time you implement Crossmark, there is nothing more you need to do. 
If you start to register these metadata elements after you have set up Crossmark, they will automatically be put into the Crossmark box.\nPlease note that @order is an optional attribute. If @order is absent, it will return results in the order in which you list them in your deposit, but this is not guaranteed. If you want to be sure of the order, then you can use @order. Learn more about the Crossmark deposit elements (including what is optional) in the schema.\n", "headings": ["Step one: Designate an update policy page and assign it a DOI ","Step two: Add the policy page DOI to all of your content ","XML deposit","Using the web deposit form","Metadata manager (deprecated)","Step three: Add metadata that reflects any updates to specific items ","Step four: Apply the Crossmark button to your HTML pages ","Add a logo with a link","A Crossmark popup","Step five: Apply the Crossmark button to your PDF content ","Optional updates to PDF metadata","Further options: adding more information to the Crossmark button "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/crossmark-policy-page/", "title": "Update policy page", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "To participate in Crossmark, you must create an update policy page on your website that has been assigned a DOI and registered with us. You may choose to have one policy page for all of your titles, or a separate policy page for each title.\nWe recommend that the following appear on the update policy page. These are guidelines only, there may be variations due to common practice in your field or region:", "content": "To participate in Crossmark, you must create an update policy page on your website that has been assigned a DOI and registered with us. You may choose to have one policy page for all of your titles, or a separate policy page for each title.\nWe recommend that the following appear on the update policy page. These are guidelines only, there may be variations due to common practice in your field or region:\nA link to, or description of, editorial policies. How is content reviewed prior to publication? Are there guidelines for reviewers or editors? Link to any ethical guidelines or standards authors should adhere to. How can readers report potential issues with published content? Under what circumstances works might be updated or retracted after publication? What happens to retracted or corrected works? Are updated publications replaced, does the original remain available? You may add the information directly to the registered page or link to additional pages that contain the details, for example you may already have a separate page about how retractions are handled.\nIf you have fully implemented Crossmark, you may also include (or adapt) the following text:\nCrossmark, from Crossref, provides a standard way for readers to locate the current version of a piece of content. By clicking the Crossmark button, readers can determine whether changes have been made after publication. Depositing Crossmark policy page(s) Crossmark policy pages should be deposited as datasets with a \u0026ldquo;database\u0026rdquo; called \u0026ldquo;PublisherName Crossmark Policy Statement\u0026rdquo;. 
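As a rough sketch only, the body of such a deposit could be generated as in the Python snippet below; the element names follow the database section of the Crossref deposit schema, the DOI, URL, and publisher name are placeholders, and the full deposit still needs the usual head section, so check the current schema documentation before sending.

def policy_page_fragment(doi: str, url: str, publisher: str) -> str:
    # Rough sketch only: element names follow the database section of the
    # Crossref deposit schema; check the current schema for required fields.
    return f"""<database>
  <database_metadata language="en">
    <titles><title>{publisher} Crossmark Policy Statement</title></titles>
  </database_metadata>
  <dataset>
    <titles><title>Crossmark policy page</title></titles>
    <doi_data><doi>{doi}</doi><resource>{url}</resource></doi_data>
  </dataset>
</database>"""

print(policy_page_fragment("10.5555/crossmark-policy", "https://www.example.com/crossmark-policy", "PublisherName"))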
If you have multiple policy pages (for example, different policy pages for different journals) you should include them in the database deposit as multiple datasets.\nSee an example of a member’s Crossmark policy page (section 10 Permanency of content).\n", "headings": ["Depositing Crossmark policy page(s) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/crossmark-button-guidelines/", "title": "Crossmark button guidelines", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "It’s important that all members use the Crossmark button consistently, to make sure it is familiar to readers and can be easily recognised.\nThe Crossmark button is available in color and monochrome versions, and can be resized to suit your website or PDFs. It must not otherwise be altered or adapted. We recommend using the color button on HTML pages, and either the color or monochrome button on PDFs for maximum user recognition.", "content": "It’s important that all members use the Crossmark button consistently, to make sure it is familiar to readers and can be easily recognised.\nThe Crossmark button is available in color and monochrome versions, and can be resized to suit your website or PDFs. It must not otherwise be altered or adapted. We recommend using the color button on HTML pages, and either the color or monochrome button on PDFs for maximum user recognition.\nThe Crossmark button should be used in two contexts: on HTML abstract pages and PDF files.\nCROSSMARK_BW_horizontal.eps CROSSMARK_BW_square.eps CROSSMARK_BW_square_no_text.eps CROSSMARK_Color_horizontal.eps CROSSMARK_Color_square.eps CROSSMARK_Color_square_no_text.eps Do Place the Crossmark button close to the title of the article, preferably next to or above the title. Use our button (View Source of that page to see the embed code) and do not modify or adapt it. For web and PDF there are three versions of the button. We recommend you use one of the two buttons with the Check for updates text. Please be careful not to make them so small they become illegible. If you need a smaller button, use the square one with no text.\nDon’t Modify the colors or text of the button. Create your own version of the Crossmark button. Download JavaScript or image assets and serve them from your own site. Doing this will prevent us from providing software updates and could cause the dialog to stop working properly. Adjust the ratios of the buttons. ", "headings": ["Do ","Don’t "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/linked-clinical-trials/", "title": "Linked clinical trials", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Crossmark can be used to display the relationships between different publications that report on a common clinical trial. This section describes the steps a member needs to take to participate in this initiative. Learn more about the background to the project.\nClinical trial numbers should be extracted from the paper by the publisher or supplied by the author on submission.\nThere are three elements to the metadata that members need to deposit to participate in linked clinical trials:", "content": "Crossmark can be used to display the relationships between different publications that report on a common clinical trial. This section describes the steps a member needs to take to participate in this initiative. 
Learn more about the background to the project.\nClinical trial numbers should be extracted from the paper by the publisher or supplied by the author on submission.\nThere are three elements to the metadata that members need to deposit to participate in linked clinical trials:\nThe registry in which the clinical trial has been registered (required) Clinical trials should be registered with one of the WHO-approved national trial registries or with ClinicalTrials.gov. Crossref maintains a list of these approved registries, and has assigned a DOI to each one. This ID should be deposited. The registry ID is used in combination with the trial number to identify trials correctly. The registered clinical trial number (required) The trial number, including its prefix, for example, ISRCTN00757750 The relationship of the publication to the clinical trial (optional) This field is optional but encouraged. The three allowed elements are \u0026ldquo;pre-results\u0026rdquo;, \u0026ldquo;results\u0026rdquo; and \u0026ldquo;post-results\u0026rdquo;, indicating which stage of the trial the publication is reporting on. These fields should be included within the custom metadata section of the Crossmark deposit. When clinical trial metadata is deposited, the Clinical Trials section of the Crossmark box will automatically appear and be populated.\nExample deposit for linked clinical trials \u0026lt;clinicaltrial_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;ct:program\u0026gt; \u0026lt;ct:clinical-trial-number registry=\u0026#34;10.18810/isrctn\u0026#34; type=\u0026#34;results\u0026#34;\u0026gt;ISRCTN1234\u0026lt;/ct:clinical-trial-number\u0026gt; \u0026lt;ct:clinical-trial-number registry=\u0026#34;10.18810/isrctn\u0026#34; type=\u0026#34;results\u0026#34;\u0026gt;ISRCTN9999\u0026lt;/ct:clinical-trial-number\u0026gt; \u0026lt;/ct:program\u0026gt; \u0026lt;/clinicaltrial_data\u0026gt; ", "headings": ["Example deposit for linked clinical trials "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/crossmark-and-transferring-responsibility-for-dois/", "title": "Crossmark and transferring responsibility for DOIs", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "If content moves from a member which participates in Crossmark to one which does not, the Crossmark button would need to be removed from that content. Although the button can be removed, the Crossmark metadata will remain in the system to enable simple reactivation if the new hosting member chooses to participate in Crossmark.\nIt is likely that Crossmark-associated content will continue to exist, for example, on readers’ local drives. Clicking on these Crossmark buttons will show a message stating that the content is no longer being tracked in Crossmark, and the current status of the content is unknown.", "content": "If content moves from a member which participates in Crossmark to one which does not, the Crossmark button would need to be removed from that content. Although the button can be removed, the Crossmark metadata will remain in the system to enable simple reactivation if the new hosting member chooses to participate in Crossmark.\nIt is likely that Crossmark-associated content will continue to exist, for example, on readers’ local drives. 
Clicking on these Crossmark buttons will show a message stating that the content is no longer being tracked in Crossmark, and the current status of the content is unknown.\nLearn more about transferring responsibility for DOIs.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/crossmark-terms/", "title": "Crossmark terms", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "CROSSMARK® SERVICE. Crossmark is an optional service for Crossref members in good standing. Anything assigned a Crossref DOI that the publisher is taking responsibility for and stewarding can be registered in the Crossmark system.\nOBLIGATIONS OF CROSSMARK PARTICIPANTS. Crossref members participating in the Crossmark Service (\u0026ldquo;Participating Publishers\u0026rdquo;) will be obligated to:\nInclude the Crossmark button as a clickable link in digital formats (HTML, PDF, and, at the Participating Publishers option, ePub) of the abstract and full text of all current content deposited at Crossref by the Participating Publisher.", "content": " CROSSMARK® SERVICE. Crossmark is an optional service for Crossref members in good standing. Anything assigned a Crossref DOI that the publisher is taking responsibility for and stewarding can be registered in the Crossmark system.\nOBLIGATIONS OF CROSSMARK PARTICIPANTS. Crossref members participating in the Crossmark Service (\u0026ldquo;Participating Publishers\u0026rdquo;) will be obligated to:\nInclude the Crossmark button as a clickable link in digital formats (HTML, PDF, and, at the Participating Publishers option, ePub) of the abstract and full text of all current content deposited at Crossref by the Participating Publisher. For purposes of these terms and conditions, \u0026ldquo;current content\u0026rdquo; is defined as content published after the date that the Participating Publisher begins to participate in the CrossMark system. The Participating Publisher may include the Crossmark button in content, published prior to that date. In such event, the button must be included in the HTML version of the abstract and full text (and the ePub file if relevant). Inclusion in the PDF versions of previously published research outputs is encouraged but not required. Maintain the content and register as promptly as reasonably possible any major updates in the Crossmark system, which updates shall include at a minimum corrections, retractions and withdrawals and other updates that have an impact upon the crediting or interpretation of the work. Comply with the guidelines for the use of the Crossmark button issued from time to time by Crossref. Learn more about the Crossmark button guidelines METADATA. For each content item registered in the Crossmark system, Participating Publishers must deposit the minimum Crossmark metadata and keep it up-to-date. Participating Publishers may deposit additional optional metadata and must keep it up-to-date. Crossref will make all Crossmark metadata deposited by Participating Publishers openly available from time-to-time in standard formats for harvesting at no charge, and the Participating Publisher hereby gives Crossref permission to release the data for such purposes.\nOBLIGATIONS AFTER PARTICIPATION IN CROSSMARK CEASES. If a Participating Publisher stops participating in the Crossmark service for any reason use of the Crossmark logo must stop and the logo must be removed from all content to the extent practicable. 
All metadata deposited through participation in the Crossmark system will remain in the databases maintained by Crossref and may be used by Crossref as provided in Section 3 above. In the event that either the cessation of participation in Crossmark by a Participating Publisher or the transfer by the Participating Publisher of ownership of a content item to a non-participating publisher results in the failure to maintain the Crossmark metadata, Crossref may add language to the Crossmark status message indicating that the data provided may no longer be up-to-date.\nPROMOTION OF CROSSMARK SERVICE. The Participating Publisher agrees to use reasonable commercial efforts to promote awareness of the Crossmark Service within the scholarly community (i.e. among scholars, researchers and librarians), and Crossref will, upon request, provide the Participating Publisher with digital and print marketing materials for use in such promotional activities.\nLICENSE TO USE THE CROSSMARK BUTTON.\nIn consideration of its agreement to abide by these Terms and Conditions, the Participating Publisher is hereby granted a limited, non-exclusive, non-transferable license to include the Crossmark button with its content, on the terms and conditions and for the purposes set forth herein. The Participating Publisher may give permission to a third party to display the Crossmark button in connection with the content provided that the Participating Publisher conditions such permission on the third-party’s agreement that it will not remove or alter the button. The Participating Publisher must display the Crossmark button on all formats of the abstract and full text of at least one publisher-maintained copy of current Crossmark registered content. BILLING. There is no annual service fee for Crossmark.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/metadata-plus/metadata-plus-keys/", "title": "Metadata Plus keys", "subtitle":"", "rank": 4, "lastmod": "2023-08-29", "lastmod_ts": 1693267200, "section": "Documentation", "tags": [], "description": "To access all of the features of Metadata Plus, you’ll need to create an API Key. Once you’ve subscribed to Plus, your technical contact will receive an email to set a password to access key manager. This is where you’ll create and manage API keys for Metadata Plus access. If you subscribed to Plus before August 2023, you\u0026rsquo;ll be using a token.\nCreate a new API Key Login to key manager Under API Keys, click “Add New” Give the key a name (description) Copy the API Key (Note: The key will only be displayed once, so you must copy and paste it somewhere safe) Delete an API Key Note: a deleted key cannot be recovered.", "content": "To access all of the features of Metadata Plus, you’ll need to create an API Key. Once you’ve subscribed to Plus, your technical contact will receive an email to set a password to access key manager. This is where you’ll create and manage API keys for Metadata Plus access. If you subscribed to Plus before August 2023, you\u0026rsquo;ll be using a token.\nCreate a new API Key Login to key manager Under API Keys, click “Add New” Give the key a name (description) Copy the API Key (Note: The key will only be displayed once, so you must copy and paste it somewhere safe) Delete an API Key Note: a deleted key cannot be recovered.\nUnder API Keys, find the correct key and click the three dots to its right Choose “Delete” from the drop down menu Type DELETE in the modal and click ok. 
The key now cannot be used to access Plus services. The key cannot be recovered. It may take up to 60 seconds for this to propagate through the system. Edit a key’s description Under API Keys, find the correct key and click the three dots to its right Choose “Edit” from the drop down menu Edit the key’s description and click ok. Use an API Key API Keys can be used to access Metadata Plus services. When making requests to the REST API (including for snapshots) or OAI-PMH, put your API Key (or other token) in the Crossref-Plus-API-Token HTTPS header of all your requests. The example below shows how this should be formatted, with XXX replaced by your key:\nCrossref-Plus-API-Token: Bearer XXX\nFor full information on how to use the REST API, see the documentation at api.crossref.org.\n", "headings": ["Create a new API Key","Delete an API Key","Edit a key’s description","Use an API Key"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/metadata-plus/metadata-plus-snapshots/", "title": "Metadata Plus snapshots", "subtitle":"", "rank": 4, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Metadata Plus snapshots provide access to our 160,104,382 metadata records in a single file, providing an easy way to retrieve an up-to-date copy of our records. The files are made available via a /snapshots route in the REST API which offers a compressed .tar file (tar.gz) containing the full extract of the metadata corpus in either JSON or XML formats.\nHow to access snapshots New snapshots are created each month, available by the 5th day, providing all records up to and including the previous month.", "content": "Metadata Plus snapshots provide access to our 160,104,382 metadata records in a single file, providing an easy way to retrieve an up-to-date copy of our records. The files are made available via a /snapshots route in the REST API which offers a compressed .tar file (tar.gz) containing the full extract of the metadata corpus in either JSON or XML formats.\nHow to access snapshots New snapshots are created each month, available by the 5th day, providing all records up to and including the previous month.\nIf you’re looking for the most up-to-date snapshot (all records up to and including the previous month), you can use the following URLs which will always alias to the current month:\nJSON output: https://0-api-crossref-org.libus.csd.mu.edu/snapshots/monthly/latest/all.json.tar.gz XML output: https://0-api-crossref-org.libus.csd.mu.edu/snapshots/monthly/latest/all.xml.tar.gz If you want to test to see if a particular snapshot is available, you can do a HTTPS HEAD request using the following URL patterns:\nJSON output: https://0-api-crossref-org.libus.csd.mu.edu/snapshots/monthly/{YYYY/MM}/all.json.tar.gz XML output: https://0-api-crossref-org.libus.csd.mu.edu/snapshots/monthly/{YYYY/MM}/all.xml.tar.gz Please note that XML snapshots are available in UNIXSD format only.\nAs snapshots are available to Metadata Plus users only, you will need to identify yourself in the request by using a \u0026ldquo;Crossref-Plus-API-Token\u0026rdquo; HTTPS header with your access token. 
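For example, here is a minimal sketch in Python, assuming the third-party requests library and a Metadata Plus token stored in a CROSSREF_PLUS_API_TOKEN environment variable, that sends a HEAD request to confirm the latest monthly JSON snapshot is available before starting the full download:

import os
import requests

# Minimal sketch: confirm the latest monthly JSON snapshot exists and that your
# token is accepted, without downloading the full file. Assumes the third-party
# requests library and a Plus token in the CROSSREF_PLUS_API_TOKEN variable.
token = os.environ["CROSSREF_PLUS_API_TOKEN"]
headers = {"Crossref-Plus-API-Token": f"Bearer {token}"}
url = "https://api.crossref.org/snapshots/monthly/latest/all.json.tar.gz"

response = requests.head(url, headers=headers, allow_redirects=True)
print(response.status_code)  # 200 = available, 401 = token not recognised, 404 = not yet created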
In its raw form, the header should be formatted as follows, with XXX replaced by your token:\nCrossref-Plus-API-Token: Bearer XXX\nThe files will be very large (\u0026gt;42GB) so they may take a while to download depending on the speed of your internet connection.\nPlease contact us if you’re unable to access snapshots.\nKeeping your data current For applications where you want to keep a copy of our metadata records current, use OAI-PMH Plus (as described above) or the REST API to query for new records at your preferred interval.\nSnapshots FAQs Are snapshots for ‘all time’ available? Snapshots are available for the current and previous quarters. With each new snapshot, we may remove files older than the current and previous quarters. For example, on 1 April the files from the previous October, November, and December may be removed.\nI’m seeing a 404 error when I request the URL If you’re looking for the current month, this may be because the archive hasn’t yet been created for that month. Snapshots are usually available by the 5th of each month.\nIf you’re looking for a month that’s more than 6 months old, it may be that the snapshot has been deleted. If the archive you’re looking for isn’t particularly new or old and you’re still seeing a 404 error, please contact us.\nI’m seeing a 401 error when I request the URL Snapshots are only available to Metadata Plus users. This 401 message means that the system doesn’t recognise you as a Metadata Plus user. If you’re already a Metadata Plus user, make sure you’re using your correct token in the header of your query. If you’re still having problems, please contact us.\nI need a full snapshot mid-month Snapshot archives are provided at the start of each month. The archive contains all the registered content received by Crossref up until that time. (Really? Yeah, all of it.) If you need a snapshot mid-month, you should download and ingest the latest archive and then harvest and ingest the registered content that has changed since then.\nTo get the registered content that has changed since an archive was created, use OAI-PMH Plus or the REST API. For example, if the archive was created on January 31, 2018 then the OAI-PMH Plus harvest’s initial URL is\nhttps://oai.crossref.org/oai?verb=ListRecords\u0026amp;set=J\u0026amp;from=2018-01-31\u0026amp;metadataPrefix=cr_unixsd This will harvest journal data. If you are interested in book data then use the \u0026ldquo;B\u0026rdquo; set.\nhttps://oai.crossref.org/oai?verb=ListRecords\u0026amp;set=B\u0026amp;from=2018-01-31\u0026amp;metadataPrefix=cr_unixsd If you are interested in series data then use the \u0026ldquo;S\u0026rdquo; set.\nhttps://oai.crossref.org/oai?verb=ListRecords\u0026amp;set=S\u0026amp;from=2018-01-31\u0026amp;metadataPrefix=cr_unixsd It is important to use the created date and not the completed date. It takes time to build the archive, so changes will occur during the build. Using the created date ensures those changes are harvested too.\n", "headings": ["How to access snapshots ","Keeping your data current ","Snapshots FAQs ","Are snapshots for ‘all time’ available? 
","I’m seeing a 404 error when I request the URL ","I’m seeing a 401 error when I request the URL ","I need a full snapshot mid-month "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/event-data/transparency/", "title": "Transparency of Event Data", "subtitle":"", "rank": 4, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "Event Data can be considered as a stream of assertions about research objects. When you interpret an assertion, you should know who made the assertion and which data they were working from. For interpretation, you may want to accept some events at face value but independently verify others. For this reason we make every effort to be open and transparent with Event Data. We do this in several ways.\nOpen source code All Event Data code is open source and available from Crossref’s Gitlab repository - learn more in our Knowledge Database.", "content": "Event Data can be considered as a stream of assertions about research objects. When you interpret an assertion, you should know who made the assertion and which data they were working from. For interpretation, you may want to accept some events at face value but independently verify others. For this reason we make every effort to be open and transparent with Event Data. We do this in several ways.\nOpen source code All Event Data code is open source and available from Crossref’s Gitlab repository - learn more in our Knowledge Database. Crossref is a community-focused membership organisation and we welcome contributions to the code, as well as enabling others to make use of the code to gather events they might be interested in themselves.\nEvent Data uses lists of attributes called artifacts. Examples include a list of news websites, and landing domains of publishers. They are used as inputs for the function of Event Data and are, of course, completely open. They are versioned so you can see which artefact was used when a given event was recorded. See the current list of artifacts.\nLogs, logs, logs We take an evidence-first approach to providing event data. If an event is created using external data, we create an evidence record. This maps the journey from finding a mention online to associating it with a DOI. A link to the evidence record is included in the metadata for each event.\nAgents also create evidence logs to record their activity. This is typically a list of the pages they have visited and any potential events they found, even if they were eventually not added to Event Data. Learn more about how to access the evidence logs.\nThe logs typically run to over a gigabyte of data each day and provide a comprehensive record of Event Data provenance.\n", "headings": ["Open source code ","Logs, logs, logs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/event-data/use/", "title": "How can I use Event Data?", "subtitle":"", "rank": 4, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "The main way to access events created by Event Data is via the API, which returns data in JSON format. For example, the following finds the first 500 events:\nhttps://api.eventdata.crossref.org/v1/events?rows=500 It is not required, but we recommend that you include your email address. 
We will not share this information, but can use it to contact you if a problem arises, for example:\nhttps://api.eventdata.crossref.org/v1/events?mailto=anon@test.com\u0026amp;rows=500 Full documentation for the API is available. Briefly, the results can be filtered by various parameters based on time, source, and object.", "content": "The main way to access events created by Event Data is via the API, which returns data in JSON format. For example, the following finds the first 500 events:\nhttps://api.eventdata.crossref.org/v1/events?rows=500 It is not required, but we recommend that you include your email address. We will not share this information, but can use it to contact you if a problem arises, for example:\nhttps://api.eventdata.crossref.org/v1/events?mailto=anon@test.com\u0026amp;rows=500 Full documentation for the API is available. Briefly, the results can be filtered by various parameters based on time, source, and object.\nThe results can also include facets of the data: summary counts of a certain characteristic. For example, to see the 10 news websites that have produced the most events, use the query:\nhttps://api.eventdata.crossref.org/v1/events?rows=0\u0026amp;source=newsfeed\u0026amp;facet=subj-id.domain:10 If you want to make regular and extensive use of the API, we highly recommend reading the full documentation.\nDevelopers are welcome to build applications based on Event Data, however Crossref doesn’t offer a dashboard or plugin to provide Event Data results. Some Jupyter notebooks are available that demonstrate possible uses of Event Data, including accessing all events about a single DOI.\nWhat is the current status of Event Data? The Crossref status page shows stability of the Event Data service and query response times. You can also track the latest development tasks.\nA list of the current agents is available on our Gitlab pages.\nWho uses Event Data? There are a large number of uses for Event Data and it is intended as a free and transparent data source for third parties. Some examples of how the data can be used are the following:\nBuilding links between scholarly outputs, such as Scholix Tracking the impact of research outputs, such as HIRMEOS from OPERAS; Atypon; PaperBuzz Input for scholarly information services, such as H1insights ", "headings": ["What is the current status of Event Data? ","Who uses Event Data? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/event-data/contribute/", "title": "Contributing to Event Data", "subtitle":"", "rank": 4, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "There are a number of ways that you can contribute to Event Data. On this page, we show you how to make sure an event is picked up by Event Data, and other ways to get more involved.\nHow can I create an event? The easiest way to create an event is to take an action that our agents are tracking. For example, an event will be created if you post a tweet, publish a dataset that is cited by or cites a research article, or add a citation to a Wikipedia article.", "content": "There are a number of ways that you can contribute to Event Data. On this page, we show you how to make sure an event is picked up by Event Data, and other ways to get more involved.\nHow can I create an event? The easiest way to create an event is to take an action that our agents are tracking. 
For example, an event will be created if you post a tweet, publish a dataset that is cited by or cites a research article, or add a citation to a Wikipedia article.\nMake sure that when you cite a Crossref article you are as specific as possible: including the DOI link, such as https://0-doi-org.libus.csd.mu.edu/10.1016/j.jmb.2007.03.025 or, where this isn\u0026rsquo;t possible, a link to the article. An event will not be created if you only use details such as the title and authors without including a link. We attempt to track publisher website URLs but they can change and we may not be successful in creating a link based on the website if there is no DOI.\nFor data citations to be picked up by Event Data, these citations must be included in reference lists deposited with Crossref, and must be in a structured format. A reference that the publisher formats using the “unstructured reference” type will not lead to an event.\nIf you run a website or service that generates a large number of events, please get in touch via our community forum. Similarly, if you run a source that we already cover, we would welcome a discussion about the most efficient way to transfer data.\nThere is something missing from Crossref Event Data, what can I do? If you see events online that you think should have been included in Event Data, first check our status page to see whether there are any current issues. If not, please start a discussion on our community forum.\nCan I contribute to the code base? Yes, you can! Crossref runs Event Data for the benefit of the scholarly community and we welcome collaborations to develop and improve our code. All of the code is open source and a summary of the elements is available in our Knowledge Database.\n", "headings": ["How can I create an event? ","There is something missing from Crossref Event Data, what can I do? ","Can I contribute to the code base? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/updating-your-metadata/", "title": "Updating your metadata", "subtitle":"", "rank": 4, "lastmod": "2022-05-20", "lastmod_ts": 1653004800, "section": "Documentation", "tags": [], "description": "Because DOIs are designed to be persistent, a DOI string can’t be changed once registered, and DOIs cannot be deleted.\nHowever, you can update metadata associated with your registered DOIs, and we encourage you to do this as often as required. No fees are charged for updating existing metadata records.\nTo add, change, or remove metadata from your existing records, you generally just need to resubmit your complete metadata record with the changes included.", "content": "Because DOIs are designed to be persistent, a DOI string can’t be changed once registered, and DOIs cannot be deleted.\nHowever, you can update metadata associated with your registered DOIs, and we encourage you to do this as often as required. 
No fees are charged for updating existing metadata records.\nTo add, change, or remove metadata from your existing records, you generally just need to resubmit your complete metadata record with the changes included.\nThere are a few exceptions where you can\u0026rsquo;t just resubmit your records using your chosen content registration method, or where there is an easier option, and you need to do something different:\nExceptions to standard metadata updates Crossmark: if you want to update or replace (not just delete) metadata for Crossmark and your deposit method is web deposit form or direct deposit of XML, you can\u0026rsquo;t just resubmit the new information. Follow a two-step process to (1) actively delete the relevant metadata by redepositing the record but with the Crossmark field blank (markup example for users of direct deposit of XML), then (2) add the new Crossmark metadata in another redeposit. Funding, license, and Similarity Check full-text URLs: you can add this metadata to multiple DOIs at once by uploading a csv file to the web deposit form using the supplemental metadata upload. References can be added in a few different ways - learn more about adding references to your metadata record Relationships: if you want to update or replace (not just delete) metadata for relationships and your deposit method is direct deposit of XML, you can\u0026rsquo;t just resubmit the new information. Follow a two-step process to (1) actively delete the relevant metadata by sending us a full redeposit but with the relationship field blank, then (2) add the new relationship metadata in another full redeposit. Resolution URLs: you may update resolution URLs in bulk using a tab-separated .txt file. See full details below. Titles: to update the title for a book, journal, or other content associated with an ISSN or ISBN, you will need to contact us as we may need to make some changes on our side. Some non-bibliographic metadata may be updated, added to, or removed from a metadata record independently using a resource-only deposit. This might make things easier for those of you who send us XML files directly. Updating bibliographic metadata by resubmitting your complete record Most metadata changes can be done by resubmitting your complete metadata record, but there are some exceptions - please check the exceptions list above.\nOJS: find the record you wish to update, leave the relevant field blank to delete it, or add in your new metadata to update it, and deposit it again using the Crossref import/export plugin. You must be running at least OJS 3.1.2 and have the Crossref import/export plugin enabled Web deposit form: re-enter all the metadata including the changes - leave the relevant field blank to delete it, or add in your new metadata to update it - and resubmit Still using the deprecated Metadata Manager?: search for the journal record you wish to update, leave the relevant field blank to delete it, or add in your new metadata to update it, and deposit it again. Learn more about metadata corrections, updates, and additions in Metadata Manager Depositing XML files with Crossref: make changes to your XML file and resubmit it to Crossref. When making an update, you must supply all the bibliographic metadata for the record being updated, not just the fields that need to be changed. During the update process we overwrite the existing metadata with the new information you submit, and insert null values for any fields not supplied in the update. 
This means, for example, that if you’ve supplied an online publication date in your initial deposit, you’ll need to include that date in subsequent deposits if you wish to retain it. Note that the value included in the \u0026lt;timestamp\u0026gt; element must be incremented each time a DOI is updated. Updating your resolution URLs If your content moves and you need to update all of your URLs, you can update them in bulk to make sure that all your DOIs resolve to your content persistently.\nReasons for resource resolution URL updates Platform migration: if you know that your URLs are going to need to be updated in the future as you’re planning a platform migration, use our handy guide and checklist to help you manage this effectively. Title transfers: if you’ve acquired a title from another Crossref member, you’ll need to update your resolution URLs. Do make sure to also check the inherited metadata for other fields, as this may also need to be updated. Learn more about updating inherited DOIs and metadata after a title transfer. No longer hosting the content: if the worst happens and you are no longer able to host your content, it’s invaluable to have an archive provider as a backup. We encourage all Crossref members to use best efforts to engage an archive provider, and to include information about archiving arrangements in their metadata. How to update your resolution URLs If you only have a few URLs to update, you can just resubmit your record.\nIf you have a long list of URLs that need updating (for example, you’ve just finished a platform migration, or you’ve acquired a new title), you can do a bulk resource URL update. Create a tab-separated list (formatted as a text (.txt) file) of DOIs and their new URLs, and apply the following header:\nH: email=youremail@email.com;fromPrefix=10.xxxx\nwhere youremail@email.com is your email address and 10.xxxx is the owner prefix (this should be the prefix associated with the username you\u0026rsquo;ll be processing this request with) for the DOIs you\u0026rsquo;re updating.\nOnly DOIs of the same owning prefix may be updated together using this header. For example, if you have DOIs against two owning prefixes, you\u0026rsquo;ll need to separate your submissions and use the appropriate 10.xxxx prefix for each set of your DOIs.\nThis is what your tab-separated list should look like:\nH: email=youremail@email.com;fromPrefix=10.5555\n10.5555/doi1 http://www.yourdomain.com/newurl1\n10.5555/doi2 http://www.yourdomain.com/newurl2\n10.5555/doi3 http://www.yourdomain.com/newurl3\nYou can upload the file to the admin tool or use the upload tool. To use the admin tool, login and navigate to Submissions\u0026gt;Upload. Upload your file, choose \u0026ldquo;Bulk URL Update\u0026rdquo; as the Type, and click Upload.\nIf you have more than 4,000 URLs to update, you should break them into smaller files. The file upload size limit for this operation is 400 KB.\nWe can provide a list of your existing DOIs and URLs if needed.\nUpdating title records When a new journal, book, conference proceeding, standard, report, or dissertation is registered with Crossref, we create a title record in our database from the metadata provided in the submission. Titles associated with an ISSN or ISBN must be consistent from registration to registration (inconsistencies in title-level metadata submitted will lead to deposit errors). 
This means that if you need to change the title of a journal or a book, we will need to modify the title record in our system before you can update your metadata for that title.\nAlthough members can\u0026rsquo;t edit titles once they have been deposited, our support team can do the following when necessary - please send us the details, and we’ll update the records:\nAdd or adjust ISSNs - correct ISSN errors or add additional ISSNs. Add or adjust alternative spellings of titles - alternative title spellings and abbreviations are recorded for each title and used in query matching. They can be included in the \u0026lt;abbrev\u0026gt; elements, or we can add them for you. Correct title spelling - we\u0026rsquo;ll need to correct these for you - please open a support ticket. Merge titles - if two title entries have been created in error, we\u0026rsquo;ll merge the title entries into one upon request. Delete titles - titles submitted by mistake can be deleted, once the DOIs assigned to the title have been migrated to another title. The adjustments above affect only the title record - you\u0026rsquo;ll also need to update your metadata for any existing content records related to this title to update the title within that metadata record.\nJournal title and ISSN changes The ISSN International Centre recommends applying for a new ISSN if the publication’s medium changes (for example, a print magazine becomes an online magazine), or if the publication’s title changes. A change of ISSN is not required for other changes such as a change of publisher, publication location, frequency, or editorial policy.\nIn the case of a significant title change, such as “The Journal of Things” to “International Journal of Important Stuff”, DOIs registered for the original title should stay associated with that title, and all new DOIs should be registered to the new title (and new ISSN and journal-title-level DOI).\nFor a minor title change, such as “The Journal of Things” to “Things Journal”, keep the existing ISSN and update the title record. Ask us to update the journal title record in our system, then update all DOIs previously registered for this title yourself. Once all records associated with that title have been updated to include the new title, the journal title in your submissions will match the one in our records, and no longer trigger deposit errors (such as “This error indicates the ISSN(s), title, or publisher in your deposit do not match the data we have on record for that ISSN”).\n", "headings": ["Exceptions to standard metadata updates ","Updating bibliographic metadata by resubmitting your complete record ","Updating your resolution URLs ","Reasons for resource resolution URL updates ","How to update your resolution URLs ","Updating title records ","Journal title and ISSN changes "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/add-references/", "title": "Adding references to your metadata record", "subtitle":"", "rank": 4, "lastmod": "2024-09-06", "lastmod_ts": 1725580800, "section": "Documentation", "tags": [], "description": "We encourage you to include references (citation lists, bibliographies, data and software citations) with all content you register. A key benefit is that they will appear in Cited-by query results. You can include references when you first register content, or you can add them to existing DOIs later. 
Learn more about the benefits of registering your references.\nIncluding references (or adding them to an existing deposit) can be done by:", "content": "We encourage you to include references (citation lists, bibliographies, data and software citations) with all content you register. A key benefit is that they will appear in Cited-by query results. You can include references when you first register content, or you can add them to existing DOIs later. Learn more about the benefits of registering your references.\nIncluding references (or adding them to an existing deposit) can be done by:\nCrossref XML plugin for OJS: to include references in your initial deposit or add them later, you must first enable References as a submission metadata field and then enable the Crossref reference linking plugin. Web deposit form: the web deposit form can’t currently be used to add references when you first register your content, but you can use Simple Text Query to match references and add them to an existing record. Metadata Manager: If you\u0026rsquo;re still using the deprecated Metadata Manager, there’s a field where you can add references and Metadata Manager will even match your references to their DOIs. If you want to add references to an existing deposit, simply find the existing journal record, add your references, and resubmit. Learn more about updating article metadata using Metadata Manager. Direct deposit of XML: you can include references in your original deposit, or add them later. Learn more at how to deposit references for users of direct deposit of XML. Using Simple Text Query to match and deposit references WATCH A VIDEO TUTORIAL - ADDING REFERENCES - STQ FORM\nMatching and depositing references using Simple Text Query is a suitable method for helper tool users. If your content registration method is direct deposit of XML, see how to deposit references for users of direct deposit of XML.\nSimple Text Query allows you to both find the DOIs for your references and add them to the metadata for a DOI that you have already registered with Crossref. Please note that this method will overwrite any references previously deposited for the content item - if you’ve previously added references to an item, and want to add more references using Simple Text Query, you need to include both the existing and the new references in your deposit.\nStart at Simple Text Query. Enter your references into the form. Don\u0026rsquo;t check List all possible DOIs per reference. Click Submit. Select the Deposit button and complete the fields: your email address (used to send you a submission log after your reference deposit has been processed), the parent DOI (the DOI of the content item for which you are adding references - it must already be registered with Crossref), and your Crossref account credentials. Click Deposit again. If your details have been entered correctly, you will see a success message, showing that your deposit has been submitted to the system queue for processing. When the reference deposit has been submitted, you will receive an email containing the XML deposit generated by the form. 
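For a rough idea of the shape of that generated deposit, the references end up in a citation_list attached to the parent DOI, along these lines (a hypothetical sketch with placeholder values, based on the citation_list structure used in resource-only deposits, not the form\u0026rsquo;s literal output):\n\u0026lt;doi_citations\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;citation_list\u0026gt; \u0026lt;citation key=\u0026#34;ref1\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5555/87654321\u0026lt;/doi\u0026gt; \u0026lt;unstructured_citation\u0026gt;Author, A. (2020). Example article title. Journal of Examples, 1(1), 1-3.\u0026lt;/unstructured_citation\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt; \u0026lt;/doi_citations\u0026gt;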
After that submission has been processed (usually within minutes of your submission), you will receive a submission log by email with the results of your submission.\nNon-members can also use Simple Text Query to match references with DOIs.\n", "headings": ["Using Simple Text Query to match and deposit references "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/resource-only-deposit/", "title": "Resource-only deposit", "subtitle":"", "rank": 4, "lastmod": "2022-08-01", "lastmod_ts": 1659312000, "section": "Documentation", "tags": [], "description": "A resource-only deposit is a way of adding or updating certain elements in an existing metadata deposit without having to do a full metadata redeposit. Resource-only deposits use the resource-only section of the schema (with the exception of stand-alone components which use the main deposit section of the schema).\nWhether you use a helper tool or submit your own XML to Crossref, you may find a resource-only deposit useful for adding the following:", "content": "A resource-only deposit is a way of adding or updating certain elements in an existing metadata deposit without having to do a full metadata redeposit. Resource-only deposits use the resource-only section of the schema (with the exception of stand-alone components which use the main deposit section of the schema).\nWhether you use a helper tool or submit your own XML to Crossref, you may find a resource-only deposit useful for adding the following:\nReferences Funder information Crossmark License information Relationships between different research objects A resolution URL must be included in all metadata records and cannot be updated using a resource-only deposit. However, the following additional URLs may be added using a resource-only deposit: Similarity Check full-text URLs multiple resolution secondary URLs text and data mining URLs Uploading resource-only deposits Resource-only deposits may only be submitted for existing Crossref DOIs. Deposits using the resource-only section of the schema must be uploaded with type doDOICitUpload for HTTPS POST, or DOI Resources if you are using the admin tool.\n", "headings": ["Uploading resource-only deposits "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/registering-updates/", "title": "Registering updates", "subtitle":"", "rank": 4, "lastmod": "2024-05-27", "lastmod_ts": 1716768000, "section": "Documentation", "tags": [], "description": "Typically, when an editorially significant update is made to a document, the publisher will not modify the original document, but will instead issue a separate document (such as a correction or retraction notice) which explains the change. This separate document will have a different DOI from the document that it corrects and will therefore have different metadata. This process is complementary to versioning.\nIn this example, article A (with the DOI 10.", "content": "Typically, when an editorially significant update is made to a document, the publisher will not modify the original document, but will instead issue a separate document (such as a correction or retraction notice) which explains the change. This separate document will have a different DOI from the document that it corrects and will therefore have different metadata. 
This process is complementary to versioning.\nIn this example, article A (with the DOI 10.5555/12345678) is eventually retracted by a retraction notice RN (with the DOI 10.5555/24242424x). Each document has Crossmark metadata, but the fact that RN updates article A is only recorded in the RN\u0026rsquo;s Crossmark deposit. The Crossmark internal API has to tie the two documents together and indicate in the metadata of the original document (A) that it has been updated_by the second document (RN).\nThe Crossmark part of the metadata schema is used to register updates, but this doesn\u0026rsquo;t mean that you need to have implemented other parts of Crossmark to deposit updates. In the examples below, in the \u0026lt;crossmark\u0026gt; section you can use only the \u0026lt;update\u0026gt; field in the deposit XML if you don\u0026rsquo;t usually deposit other Crossmark metadata.
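For instance, the retraction notice RN (10.5555/24242424x) could be deposited with a minimal \u0026lt;crossmark\u0026gt; section carrying nothing but the update information, along these lines (a rough sketch with a placeholder date, not one of the example deposit files linked below - the DOI inside \u0026lt;update\u0026gt; is the article being retracted):\n\u0026lt;crossmark\u0026gt; \u0026lt;updates\u0026gt; \u0026lt;update type=\u0026#34;retraction\u0026#34; date=\u0026#34;2024-01-15\u0026#34;\u0026gt;10.5555/12345678\u0026lt;/update\u0026gt; \u0026lt;/updates\u0026gt; \u0026lt;/crossmark\u0026gt;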
\nExample 1: simple retraction This is a simple example of article A being retracted by a retraction notice RN where both A and RN have Crossmark metadata deposited.\nFirst, the PDF is produced and the XML deposited to Crossref.\nArticle A Deposit XML Article A Landing Page Article A XMP Article A PDF When the retraction is issued, it is issued as a separate \u0026ldquo;retraction notice\u0026rdquo; with its own DOI, PDF, and Crossref metadata.\nRetraction Notice of A Deposit XML Retraction Notice of A Landing Page Retraction Notice of A XMP Retraction Notice of A PDF Example 2: simple correction This is a simple example of article B being corrected by a correction notice CN where both B and CN have Crossmark metadata deposited. The only real difference between this and the previous example is that we are creating a different kind of update.\nArticle B Deposit XML Article B Landing Page Article B XMP Article B PDF Correction notice of article B Deposit XML Correction notice of article B Landing Page Correction notice of article B XMP Correction notice of article B PDF Example 3: in-situ correction When a member does not issue a separate update/correction/retraction notice and instead just makes the change to the document (without changing its DOI either), this is called an in-situ update. In-situ updates or corrections are not recommended because they tend to obscure the scholarly record. How do you tell what the differences are between what you downloaded and the update? How do you differentiate them when citing them (remember, we are only talking about \u0026ldquo;significant updates\u0026rdquo; here)? However, some members need to support in-situ updates, and this is how they can be supported.\nArticle D Deposit XML before correction issued Article D Deposit XML after correction issued Article D Landing Page Article D XMP generated before correction issued Article D XMP generated after correction issued Article D PDF generated before correction issued Article D PDF generated after correction issued Example 4: correction of article that has no Crossmark metadata deposited If you deposit Crossmark metadata for a retraction or an update notice which, in turn, points at an article that does not have Crossmark metadata assigned to it, we will generate a \u0026ldquo;stub\u0026rdquo; Crossmark for the item being updated. The stub metadata will simply copy essential Crossmark metadata. This metadata can be queried via our API, but won’t activate anything on your site unless you add the Crossmark widget to the corresponding page of the item being updated.\nArticle E Deposit XML (has no Crossmark metadata) Article E Landing Page (again, no Crossmark button) Article E XMP (none exists because it doesn’t have Crossmark metadata) Article E PDF (has no Crossmark button or metadata) Still, note that if you query Crossmark metadata for Article E, you will get a Crossmark stub which has been automatically generated by Crossref.\nThe procedure for updating the content follows the same pattern as a simple correction or retraction:\nCorrection of Article E Deposit XML Correction of Article E Landing Page Correction of Article E XMP Correction of Article E PDF Example 5: correction notice that corrected multiple documents Sometimes members issue correction or clarification notices which provide corrections for multiple documents. This too can be supported by Crossmark. In the following example, one correction/clarification document provides updates to two documents (F and G).\nArticle F Deposit XML Article F Landing Page Article F XMP Article F PDF Article G Deposit XML Article G Landing Page Article G XMP Article G PDF Correction Notice of F and G Deposit XML Correction Notice of F and G Landing Page Correction Notice of F and G XMP Correction Notice of F and G PDF ", "headings": ["Example 1: simple retraction ","Example 2: simple correction ","Example 3: in-situ correction ","Example 4: correction of article that has no Crossmark metadata deposited ","Example 5: correction notice that corrected multiple documents "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/metadata-removal-markup-guide/", "title": "Metadata removal markup guide", "subtitle":"", "rank": 4, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "For most metadata elements, you can just update the record to delete elements. However, if you are sending us XML, there are some non-bibliographic metadata elements where you have to go through a two-stage process - firstly send us a submission to delete this element, and then send us a further submission to add in the replacement data.\nMetadata that needs to be explicitly deleted includes:\nCrossmark data Funding, clinical trial, or license data from Crossmark Funding data License data Relationship data Text and data mining, Similarity Check, and multiple resolution URLs References You can also delete non-bibliographic metadata by supplying an empty parent element (see examples below), and include it in a metadata update or submit it as a resource-only deposit.", "content": "For most metadata elements, you can just update the record to delete elements. However, if you are sending us XML, there are some non-bibliographic metadata elements where you have to go through a two-stage process - firstly send us a submission to delete this element, and then send us a further submission to add in the replacement data.\nMetadata that needs to be explicitly deleted includes:\nCrossmark data Funding, clinical trial, or license data from Crossmark Funding data License data Relationship data Text and data mining, Similarity Check, and multiple resolution URLs References You can also delete non-bibliographic metadata by supplying an empty parent element (see examples below), and include it in a metadata update or submit it as a resource-only deposit. 
Note that metadata submitted as part of a Crossmark update needs to be removed within Crossmark metadata (see examples below).\nRemove all Crossmark data Remove all Crossmark data from a record by supplying an empty Crossmark element in a metadata deposit:\n\u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;3\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;crossmark/\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/10.5555-12345678.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; or as part of a resource-only deposit:\n\u0026lt;crossmark_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;crossmark/\u0026gt; \u0026lt;/crossmark_data\u0026gt; Remove funding, clinical trial, or license data from Crossmark Funding, license, and clinical trial data may all be supplied as part of a Crossmark update. If you need to remove funding, license, or clinical trial metadata from your Crossmark metadata, you must submit the appropriate empty element within a Crossmark update. Note that the other Crossmark metadata must be present as well in order to be retained.\nIn this example, funding, license, and clinical trial data are all removed from a Crossmark update by supplying the three empty program elements:\n\u0026lt;crossmark\u0026gt; \u0026lt;crossmark_version\u0026gt;1\u0026lt;/crossmark_version\u0026gt; \u0026lt;crossmark_policy\u0026gt;10.5555/crossmark_policy\u0026lt;/crossmark_policy\u0026gt; \u0026lt;crossmark_domains\u0026gt; \u0026lt;crossmark_domain\u0026gt; \u0026lt;domain\u0026gt;psychoceramics.labs.crossref.org\u0026lt;/domain\u0026gt; \u0026lt;/crossmark_domain\u0026gt; \u0026lt;/crossmark_domains\u0026gt; \u0026lt;crossmark_domain_exclusive\u0026gt;true\u0026lt;/crossmark_domain_exclusive\u0026gt; \u0026lt;custom_metadata\u0026gt; \u0026lt;assertion name=\u0026#34;orcid\u0026#34; label=\u0026#34;ORCID\u0026#34; group_name=\u0026#34;identifiers\u0026#34; group_label=\u0026#34;Identifiers\u0026#34; order=\u0026#34;0\u0026#34; href=\u0026#34;http://orcid.org/0000-0002-1825-0097\u0026#34;\u0026gt;http://orcid.org/0000-0002-1825-0097\u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;received\u0026#34; label=\u0026#34;Received\u0026#34; group_name=\u0026#34;publication_history\u0026#34; group_label=\u0026#34;Publication History\u0026#34; order=\u0026#34;0\u0026#34;\u0026gt;2012-07-24\u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;accepted\u0026#34; label=\u0026#34;Accepted\u0026#34; group_name=\u0026#34;publication_history\u0026#34; group_label=\u0026#34;Publication History\u0026#34; order=\u0026#34;1\u0026#34;\u0026gt;2012-08-29\u0026lt;/assertion\u0026gt; \u0026lt;assertion name=\u0026#34;published\u0026#34; label=\u0026#34;Published\u0026#34; group_name=\u0026#34;publication_history\u0026#34; group_label=\u0026#34;Publication History\u0026#34; order=\u0026#34;2\u0026#34;\u0026gt;2012-09-10\u0026lt;/assertion\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;/\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34;/\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/clinicaltrials.xsd\u0026#34;/\u0026gt; \u0026lt;/custom_metadata\u0026gt; \u0026lt;/crossmark\u0026gt; Remove funding data Remove funding data from a non-Crossmark record by supplying an empty fundref program element in a metadata
deposit:\n\u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;3\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;/\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/10.5555-12345678.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; or as part of a resource-only deposit:\n\u0026lt;fundref_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34;/\u0026gt; \u0026lt;/fundref_data\u0026gt; Remove license data Remove license data from a non-Crossmark record by supplying an empty AccessIndicators program element in a metadata deposit:\n\u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;3\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34;/\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/10.5555-12345678.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; or as part of a resource-only deposit:\n\u0026lt;lic_ref_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34;/\u0026gt; \u0026lt;/lic_ref_data\u0026gt; Remove relationship data Remove relationship data by supplying an empty relationship program element in a metadata deposit:\n\u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;3\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;/\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/10.5555-12345678.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; or as part of a resource-only deposit:\n\u0026lt;doi_relations\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;https://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;/\u0026gt; \u0026lt;/doi_relations\u0026gt; Remove text and data mining, Similarity Check, and multiple resolution URLs Text and data mining and multiple resolution secondary URLs may be removed from a record by submitting an update containing an empty collection tag that includes the appropriate property:\nText and data mining uses property text-mining Similarity Check URLs use the property crawler-based Multiple resolution secondary URLs use the property list-based The country-code resolution feature uses the property country-based For example, to remove text and data mining URLs from a record submit the following as part of a metadata deposit:\n\u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/515151\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/abstract/515151/\u0026lt;/resource\u0026gt; \u0026lt;collection property=\u0026#34;text-mining\u0026#34;/\u0026gt; \u0026lt;/doi_data\u0026gt; 
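Similarly (a sketch along the same lines, not one of the original examples), Similarity Check full-text URLs would be removed from the same record by using the crawler-based property:\n\u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/515151\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-annalsofpsychoceramics-labs-crossref-org.libus.csd.mu.edu/abstract/515151/\u0026lt;/resource\u0026gt; \u0026lt;collection property=\u0026#34;crawler-based\u0026#34;/\u0026gt; \u0026lt;/doi_data\u0026gt;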
They may be included in a metadata deposit (above) or as part of a resource-only deposit:\n\u0026lt;doi_resources\u0026gt; \u0026lt;doi\u0026gt;10.5555/515151\u0026lt;/doi\u0026gt; \u0026lt;collection property=\u0026#34;text-mining\u0026#34;/\u0026gt; \u0026lt;/doi_resources\u0026gt; Remove references Remove a reference list from a record by supplying an empty citation_list element in a metadata deposit:\n\u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;3\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;resource\u0026gt;http://0-psychoceramics-labs-crossref-org.libus.csd.mu.edu/10.5555-12345678.html\u0026lt;/resource\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;citation_list/\u0026gt; or as part of a resource-only deposit:\n\u0026lt;doi_citations\u0026gt; \u0026lt;doi\u0026gt;10.5555/12345678\u0026lt;/doi\u0026gt; \u0026lt;citation_list/\u0026gt; \u0026lt;/doi_citations\u0026gt; ", "headings": ["Remove all Crossmark data ","Remove funding, clinical trial, or license data from Crossmark ","Remove funding data ","Remove license data ","Remove relationship data ","Remove text and data mining, Similarity Check, and multiple resolution URLs ","Remove references "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/maintaining-your-metadata/updating-after-title-transfer/", "title": "Updating metadata for inherited DOIs after a title transfer", "subtitle":"", "rank": 4, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "If you’ve acquired titles from another publisher, you may have also acquired existing DOIs and metadata that were previously registered. Although these DOIs won\u0026rsquo;t be on your prefix, these DOIs will now be your responsibility and you\u0026rsquo;ll be able to update the metadata associated with them.\nDon\u0026rsquo;t try to register new DOIs on your prefix for content that already has a DOI. Instead, you should just update the metadata for these DOIs if you want to change something.", "content": "If you’ve acquired titles from another publisher, you may have also acquired existing DOIs and metadata that were previously registered. Although these DOIs won\u0026rsquo;t be on your prefix, these DOIs will now be your responsibility and you\u0026rsquo;ll be able to update the metadata associated with them.\nDon\u0026rsquo;t try to register new DOIs on your prefix for content that already has a DOI. Instead, you should just update the metadata for these DOIs if you want to change something.\nConfirming your acquired DOIs If you aren\u0026rsquo;t sure which DOIs have already been registered for a particular title, look at the depositor report for the title. Please note: the depositor report updates at 0100 UTC each day, so a new publisher may need to wait until the next day to see its newly acquired titles.\nGo to the depositor report and look for your organization name. You might need to wait a while for the page to load properly - it\u0026rsquo;s a bit slow. Click on your name and you\u0026rsquo;ll see the list of titles associated with your organization. 
Click on the recently acquired title and you\u0026rsquo;ll see all the DOIs that are currently registered for it.\nUpdating the existing metadata for acquired titles When you acquire a title from another publisher and we perform a title transfer for you, the publisher name will update in the metadata automatically.\nHowever, there are likely to be some elements that you need to update yourself. For example, you may need to change the resolution URLs. And you may also need to change the full-text URLs for text and data mining or Similarity Check URLs if the previous publisher has submitted them. Or you may need to change a license URL that the previous publisher has submitted.\nTo add, change, or remove information from your metadata records, you generally need to resubmit your complete metadata record with the changes included.\nHowever, there are a few exceptions to this, and changes to resolution URLs is an important one - you may need our help here. Learn more about updating your resolution URLs.\nFinding the existing XML for acquired DOIs If you wish to see what is in the metadata that you have inherited, you can retrieve the metadata as an XML file using either the deposit harvester, or one of these REST API queries. If you plan to use a REST API query, we suggest installing a JSON formatter in your browser.\nTo retrieve all items by ISSN, use this API query: http://0-api-crossref-org.libus.csd.mu.edu/works?filter=issn:2090-8091\u0026amp;rows=1000 and replace 2090-8091 with the ISSN for your title To retrieve all items by title, use this API query: http://0-api-crossref-org.libus.csd.mu.edu/works?query.container-title=Connections and replace Connections with your title You can adjust the API query to retrieve just one element per DOI, such as full-text links (including for Similarity Check) - replace 2090-8091 with the ISSN for your title: http://0-api-crossref-org.libus.csd.mu.edu/works?filter=issn:2090-8091\u0026amp;rows=1000\u0026amp;select=DOI,link To transform the JSON to XML for individual records, append your API call with .xml, like this: (unsupported, so please do not rely on this feature) https://0-api-crossref-org.libus.csd.mu.edu/works/10.12794/journals.ntjur.v1i1.68 (JSON) https://0-api-crossref-org.libus.csd.mu.edu/works/10.12794/journals.ntjur.v1i1.68.xml (XML) If you register your content with us by sending us XML files, you can just edit this XML to remove or replace metadata, and then redeposit the XML.\nNote: for most metadata elements, you can just update the XML record and resubmit to delete elements. However, there are some non-bibliographic metadata elements where you have to go through a two-step process - firstly send us a submission to delete this element, and then send us a further submission to add in the replacement data. 
Learn more about updating your metadata.\n", "headings": ["Confirming your acquired DOIs ","Updating the existing metadata for acquired titles ","Finding the existing XML for acquired DOIs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/members-area/update-contacts/", "title": "Keep your account details up-to-date", "subtitle":"", "rank": 4, "lastmod": "2022-07-28", "lastmod_ts": 1658966400, "section": "Hello, members", "tags": [], "description": "To make sure that we are contacting the right people at your organization, and to make sure that we include the correct information on your invoices, we need to know if anything changes for you.\nPlease let us know by completing our contact form if any of the following changes:\nYour organization name Your mailing or billing address (If you are using our payment portal, you can update the billing address associated with your credit card there, but if you need a different billing address to appear on your invoice, you will need to let us know.", "content": "To make sure that we are contacting the right people at your organization, and to make sure that we include the correct information on your invoices, we need to know if anything changes for you.\nPlease let us know by completing our contact form if any of the following changes:\nYour organization name Your mailing or billing address (If you are using our payment portal, you can update the billing address associated with your credit card there, but if you need a different billing address to appear on your invoice, you will need to let us know.) A significant change to your organizations\u0026rsquo;s annual publishing revenue (this may mean that we need to change your annual member fee tier.) A change to one or more of the key contacts on your account. Five key contacts for each account When you join Crossref, you provide contact details for (ideally) at least three different people at your organization to undertake five roles. These are key to making your relationship with Crossref and the rest of the community a success, so do think carefully about who will take on each of these roles. These will be the people we contact to confirm any changes to your account.\nThe Primary contact - this person will be our key contact at your organization. They receive product and service updates, and we contact them about things like changes to terms or service agreements. They also receive our monthly resolution reports showing failed resolutions on your DOIs. (Those of you who have been members for a while will know this contact as the Business contact). The Voting contact - this person will vote in our board elections. The Voting contact is often the same person as the Primary contact. We do need a specific person\u0026rsquo;s name for the voting contact, but you can provide a generic email address. The Voting contact can\u0026rsquo;t be the voting contact on another member account. The Technical contact - this person will receive technical updates, DOI error reports, and conflict reports to help you solve problems with your content quickly. We encourage you to use a shared, generic email address for this contact. The Metadata quality contact - this person will be responsible for fixing any metadata errors that are spotted by the scholarly community. The Metadata Quality contact is often the same person as the Technical contact, and we encourage you to use a shared, generic email address for this contact. 
The Billing contact - this person will receive invoices from us by email and pay the annual membership and ongoing content registration fees. They will also receive reminder emails about unpaid invoices. We encourage you to use a shared, generic email address for this contact. If your billing platform has an email address to send invoices to, please use this. But do note that we are unable to manually upload invoices into a payment platform. If you work with Crossref through a sponsor, we only need a Primary contact and a Voting contact from you.\nBack to members area\n", "headings": ["Five key contacts for each account "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/browsable-title-list/", "title": "Browsable title list", "subtitle":"", "rank": 4, "lastmod": "2024-07-22", "lastmod_ts": 1721606400, "section": "Documentation", "tags": [], "description": "The browsable title list provides an alphabetical list of journals, books, and conference proceedings for which Crossref has metadata, and is updated weekly. Browsing and searching may be limited by genre (all, journals, books, or conference proceedings) or search type (title, ISSN/ISBN, subject, or publisher). To search for a specific title, enclose the title in quotes, or search by ISSN.\nAccess the browsable title list\nSearch results will include the following (when available):", "content": "The browsable title list provides an alphabetical list of journals, books, and conference proceedings for which Crossref has metadata, and is updated weekly. Browsing and searching may be limited by genre (all, journals, books, or conference proceedings) or search type (title, ISSN/ISBN, subject, or publisher). To search for a specific title, enclose the title in quotes, or search by ISSN.\nAccess the browsable title list\nSearch results will include the following (when available):\nTitle (Journal/Book/Conf Proc): Title name. Journal titles are gray, book titles are green, and conference proceedings titles are purple. Publisher: Publisher of the title as registered with us. Print ISSN/ISBN: ISSN or ISBN (indicated by color) of the print version of the title. Electronic ISSN/ISBN: ISSN or ISBN (indicated by color) of the electronic version of the title. DOI: DOI assigned at the title level. To review the results:\nClick the icon to view the year(s), volume(s), and issue(s) deposited with Crossref for a title Click the icon to view alternative title information, abbreviated titles (if any), other ISSNs or ISBNs, subjects covered, and any coverage notes for this content item. This information is obtained from a third party and may not match data deposited with Crossref To request a missed conflict report for a title, click the icon at the far right of the row You can also download a comma-separated journal coverage list (warning: large file).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/conflict-report/", "title": "Conflict report", "subtitle":"", "rank": 4, "lastmod": "2024-11-13", "lastmod_ts": 1731456000, "section": "Documentation", "tags": [], "description": "The conflict report shows where two (or more) DOIs have been submitted with the same metadata, indicating that you may have duplicate DOIs. You’ll start receiving conflict reports if you have at least one conflict. 
As you know, a DOI is a unique identifier — so there should only ever be one DOI for each content item.\nFix conflicts as soon as you can as they could lead to problems in the future.", "content": " The conflict report shows where two (or more) DOIs have been submitted with the same metadata, indicating that you may have duplicate DOIs. You’ll start receiving conflict reports if you have at least one conflict. As you know, a DOI is a unique identifier — so there should only ever be one DOI for each content item.\nFix conflicts as soon as you can as they could lead to problems in the future. Having two separate DOIs for the same content means researchers won’t know which one to cite, and this risks splitting your citation count. You may also forget you have two DOIs, and update only one of them if your URLs change. This means anyone using the DOI you haven’t updated will come to a dead link. The good news is that it’s very quick to eliminate this bad metadata and solve the problem.\nAccess the conflict report\nConflicts most often occur for two reasons:\nThe metadata registered for content isn\u0026rsquo;t sufficient to distinguish between two items. For example, items like Book Reviews, Letters, and Errata often share a single page and have no author Two or more records have the same metadata (but different identifiers), suggesting that duplicate records have been created. Conflicts are flagged in your submission log when a conflict is created. We also record all current conflicts in the conflict report on our website - if you do not see your member name on the conflict report page, you have no outstanding conflicts. If you have active conflicts, we’ll remind you via email each month. However, if your conflict level has increased by 500+ then we’ll let you know right away, as this indicates a bigger problem. If your organization has more than one prefix, you’ll receive a separate email for each prefix.\nWhat should I do with my conflict report? On the conflict reports page, you can locate your organization to see the conflicts. Please be patient, as the page can take a long time to load. You can view conflict details as an XML file or by title (see View conflicts by title below) as a simple .txt report.\nClick your organization’s name to see which titles have the problem.\nShow image × Click each title to show a report that displays the DOIs in conflict.\nAlternatively, you can see the conflict reports for your whole prefix by clicking on the .xml link.\nOther information includes:\nconflict ID is the unique ID number for the conflict. cause ID is the deposit submission of the DOI causing the conflict. other ID is the deposit submission of the affected DOI. 
Example conflict report XML \u0026lt;conflict_report prefix=\u0026#34;10.3201\u0026#34; date=\u0026#34;Sep 20,2016\u0026#34;\u0026gt; \u0026lt;conflict id=\u0026#34;4532830\u0026#34; created=\u0026#34;2013-09-09 15:26:11.0\u0026#34; causeID=\u0026#34;1361422379\u0026#34; otherIDs=\u0026#34;1360220986,\u0026#34;\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.3201/eid.1811.121112\u0026lt;/doi\u0026gt; \u0026lt;metadata\u0026gt; \u0026lt;journal_title\u0026gt;Emerging Infectious Diseases\u0026lt;/journal_title\u0026gt; \u0026lt;volume\u0026gt;18\u0026lt;/volume\u0026gt; \u0026lt;issue\u0026gt;11\u0026lt;/issue\u0026gt; \u0026lt;first_page/\u0026gt; \u0026lt;year\u0026gt;2012\u0026lt;/year\u0026gt; \u0026lt;article_title/\u0026gt; \u0026lt;/metadata\u0026gt; \u0026lt;other_conflicts\u0026gt; \u0026lt;conflict id=\u0026#34;4532830\u0026#34; status=\u0026#34;N\u0026#34;/\u0026gt; \u0026lt;/other_conflicts\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;doi_data\u0026gt; \u0026lt;doi\u0026gt;10.3201/eid1811.121112\u0026lt;/doi\u0026gt; \u0026lt;metadata/\u0026gt; \u0026lt;other_conflicts\u0026gt; \u0026lt;conflict id=\u0026#34;4532830\u0026#34; status=\u0026#34;N\u0026#34;/\u0026gt; \u0026lt;/other_conflicts\u0026gt; \u0026lt;/doi_data\u0026gt; \u0026lt;/conflict\u0026gt; View conflicts by title You can also examine the conflicts for a particular publication by clicking on the title in the expanded view. This will display a text file where:\nConfID is the unique ID number for the conflict. CauseID is the deposit submission of the DOI causing the conflict. OtherID is the deposit submission of the affected DOI. JT is the publication’s title. MD is metadata for the DOIs. Metadata for DOIs in conflict will be the same. DOI is the DOI involved in the conflict. Parenthetical value following the DOI (such as Journal and 4508537-N in this example) lists all the conflicts in which the DOI is involved and the resolution status of that conflict. ALERT, if it appears, indicates that the DOIs have more than one conflict, which can occur if they were deposited repeatedly with the same metadata. This field lists the other conflict IDs and their status: null – Not resolved A – Made an alias P – Made a prime U – Resolved by a metadata update R – Manually erased or resolved Show image × Resolving conflicts There are three scenarios to cause two (or more) DOIs to be submitted with the same metadata.\nScenario 1: You assigned two DOIs to distinct content items, but accidentally submitted the same metadata for both of them. In this case, one of the DOIs has incorrect metadata. If you update and resubmit the deposit to correct that DOI\u0026rsquo;s metadata, the conflict will be resolved.\nScenario 2: You assigned two DOIs to the same content item. In this case, you can resolve the conflict by assigning one of the DOIs as primary and the other as its alias. The alias DOI will automatically redirect to the primary DOI, so you\u0026rsquo;ll only need to maintain the primary. Learn more about creating aliases between DOIs.\nScenario 3: The two DOIs refer to different content items, but their metadata is so similar that a conflict was flagged. This happens when items have very little metadata included. The best thing to do is to register more metadata to remove the conflict. 
If you can’t do this, you can accept the conflict - learn more about accepting conflicts as-is.\nIf you have any further questions about your conflict report, please contact us.\nUpdate your metadata If a conflict exists because the metadata you\u0026rsquo;ve deposited is sparse, you should re-register your content with additional metadata. The conflict status will be resolved when one (or both) items are re-registered with distinctive metadata. To be sure that the system updates the correct record, include the relevant DOI in your submission.\nWhen making an update, you must supply all the metadata for the content item, not just the fields that need to be changed. During the update process, the system completely overwrites the existing metadata with the information you submit, including inserting null values for any fields not supplied in the update.\nIf the new metadata resolves the conflict, the system returns a message such as this one (which resulted from a redeposit of the metadata for DOI 10.50505/200702271050-conflict):\n\u0026lt;record_diagnostic status=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.50505/200702271050-conflict\u0026lt;/doi\u0026gt; \u0026lt;msg\u0026gt;Successfully updated\u0026lt;/msg\u0026gt; **\u0026lt;resolved_conflict_ids\u0026gt;352052\u0026lt;/resolved_conflict_ids\u0026gt;** \u0026lt;/record_diagnostic\u0026gt; Create aliases between DOIs If you have registered multiple records for the same content, you can alias the duplicate items to the (primary) record you intend to maintain. Records are aliased at the identifier (DOI) level. When DOIs are aliased, one DOI is flagged as the \u0026lsquo;primary\u0026rsquo; DOI - the DOI you intend to maintain in the future. The remaining DOIs are aliased to the primary DOI at the DOI resolver level. This means that when someone clicks on an aliased DOI link, the user is automatically redirected to the URL registered for the primary DOI.\nFor example, if the metadata for 10.1103/PhysRev.69.674 and 10.1103/PhysRev.69.674.2 are the same, you might make 10.1103/PhysRev.69.674 the primary DOI. 
In this case, metadata queries that match both DOIs will resolve to 10.1103/PhysRev.69.674, and DOI queries for either 10.1103/PhysRev.69.674 or 10.1103/PhysRev.69.674.2 will both return results.\nYou can assign primary status to DOIs in conflict one-by-one using the admin tool, or you can assign primary or alias status to multiple DOIs by uploading a .txt file.\nConflicts involving DOIs owned by other members must be resolved by Crossref - please contact us for help with this.\nAssigning primary status from within the admin tool Log in to the admin tool using your Crossref account credentials Click Metadata Admin tab Click Conflict tab (if necessary) In the appropriate box, enter the submission ID, the conflict ID, or the DOI Click Submit to see the DOIs associated with the conflict Select the DOI that you want to make primary Click Make selected DOI primary in all conflicts Show image × If you make a mistake, you can undo it by returning to this page and clicking Unresolve all conflicts Assigning primary or alias status to multiple DOIs by uploading a .txt file The status you assign applies to all conflicts that involve the DOIs.\nTo assign DOIs primary status, create a .txt file with the header H:email={email address};op=primary, for example: H:email=youremail@address.com;op=primary 10.1016/0032-1028(80)90001-2 10.1016/0032-1028(80)90002-4 10.1016/0032-1028(80)90003-6 10.1016/0032-1028(80)90004-8 10.1016/0032-1028(80)90005-X 10.1016/0032-1028(80)90006-1 10.1016/0368-3281(63)90014-7 Use op=alias when the primary DOIs are not known. If there are more than two DOIs involved in the conflict, the operation will be rejected because the system cannot determine which DOI to make primary.\nLog in to the admin tool using your Crossref account credentials Click Submissions tab Click Upload tab (if necessary) Locate and select the metadata file Select Conflict Management Click Upload All the DOIs listed in the file will be assigned the status you specified in the op element. The system will send you a message like this one for an individual DOI:\n\u0026lt;record_diagnostic doi=\u0026#34;10.1088/0368-3281/5/6/313\u0026#34;\u0026gt; \u0026lt;conflict status=\u0026#34;Success\u0026#34; ids=\u0026#34;48983,49365,49783,50243,51067\u0026#34;\u0026gt; \u0026lt;msg\u0026gt;Marked as alias\u0026lt;/msg\u0026gt; \u0026lt;doi_list\u0026gt; \u0026lt;doi\u0026gt;10.1016/0368-3281(63)90014-7\u0026lt;/doi\u0026gt; \u0026lt;/doi_list\u0026gt; \u0026lt;/conflict\u0026gt; \u0026lt;/record_diagnostic\u0026gt; or this one for multiple DOIs: \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch_diagnostic\u0026gt; \u0026lt;submission_id\u0026gt;1181263946\u0026lt;/submission_id\u0026gt; \u0026lt;record_diagnostic doi=\u0026#34;10.5555/prime\u0026#34;\u0026gt; \u0026lt;conflict status=\u0026#34;Success\u0026#34; ids=\u0026#34;23135669,2311211\u0026#34;\u0026gt; \u0026lt;msg\u0026gt;Marked as alias\u0026lt;/msg\u0026gt; \u0026lt;doi_list\u0026gt; \u0026lt;doi\u0026gt;10.5555/a1\u0026lt;/doi\u0026gt; \u0026lt;doi\u0026gt;10.5555/a2\u0026lt;/doi\u0026gt; \u0026lt;/doi_list\u0026gt; \u0026lt;/conflict\u0026gt; \u0026lt;/record_diagnostic\u0026gt; \u0026lt;/doi_batch_diagnostic\u0026gt; Accept conflicts as-is If you\u0026rsquo;ve determined that the content flagged with conflicts are not duplicate items, you can remove the conflict status by setting the status to \u0026lsquo;resolved\u0026rsquo;. 
This has no impact on the metadata records or DOIs but will remove the conflicts from our conflict report.\nIn some cases, you may want to leave conflicting or ambiguous records in our metadata database. You can do this within our admin tool, or by uploading a .txt file to our admin tool.\nAccepting a conflict as-is using the admin tool To accept conflicts:\nLog in to the admin tool using your Crossref account credentials Click Metadata Admin tab Click Conflict tab (if necessary) In the appropriate box, enter the submission ID, the conflict ID, or the DOI Select Show Consolidated Conflicts (if available) Click Submit to see the DOIs associated with the conflict. Click Mark All Conflicts as Resolved Accepting a conflict as-is by uploading conflict management submissions to the admin tool If you have a large number of DOIs to resolve, you can submit a text file to the admin tool.\nCreate a .txt file with the following header (include your email address): H:email=youremail@address.com;op=resolve 10.1016/0032-1028(80)90001-2 10.1016/0032-1028(80)90002-4 10.1016/0032-1028(80)90003-6 10.1016/0032-1028(80)90004-8 10.1016/0032-1028(80)90005-X 10.1016/0032-1028(80)90006-1 10.1016/0368-3281(63)90014-7 Log in to the admin tool using your Crossref account credentials Click Submissions tab Click Upload tab (if necessary) Locate and select the metadata file Select Live Select Conflict Management Click Upload Your conflict resolution file will be added to our submission queue and processed. A log will be sent to the email address you provided in the file header. Be sure to review the log to make sure your conflicts were resolved correctly.\nForcing prime/alias You can force a DOI to be an alias of another DOI even if the DOIs are not in conflict. Please contact us to discuss if this would be a suitable solution for your situation.\nExtreme care MUST be taken when using this feature. Normally two DOIs are put into a prime/alias pair when their metadata is the same and a conflict is created. In this case, a metadata query will find both DOIs but because of the forced aliasing will return the prime DOI. If an aliased DOI has very different metadata from a primary DOI, the match may be a false positive.\nTo force an alias between two DOIs, create a text file as described below and upload to the admin tool.\nCreate the .txt file with tab-separated pairs of DOIs as follows: H:email=youremail@address.com;op=force_alias;delim=tab 10.xxxx/primary1 10.xxxx/alias1 10.xxxx/primary2 10.xxxx/alias2 10.xxxx/primary3 10.xxxx/alias3 Log in to the admin tool using your Crossref account credentials Click Submissions Click Upload, if necessary Next to FileName, select Choose File Locate and select the force alias file Select Type Conflict Management Click Upload Removing a forced prime and/or alias (\u0026ldquo;un-aliasing\u0026rdquo;) Only DOIs that have been aliased need to be un-aliased. Supplying op=unalias allows you to unalias previously forced aliases.\nH:email=youremail@address.com;op=unalias;delim=tab 10.xxxx/alias1 10.xxxx/alias2 10.xxxx/alias3 ", "headings": ["The conflict report shows where two (or more) DOIs have been submitted with the same metadata, indicating that you may have duplicate DOIs. You’ll start receiving conflict reports if you have at least one conflict.","What should I do with my conflict report? 
","Example conflict report XML ","View conflicts by title ","Resolving conflicts ","Update your metadata ","Create aliases between DOIs ","Assigning primary status from within the admin tool ","Assigning primary or alias status to multiple DOIs by uploading a .txt file ","Accept conflicts as-is ","Accepting a conflict as-is using the admin tool ","Accepting a conflict as-is by uploading conflict management submissions to the admin tool ","Forcing prime/alias ","Removing a forced prime and/or alias (\u0026ldquo;un-aliasing\u0026rdquo;) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/depositor-report/", "title": "Depositor report", "subtitle":"", "rank": 4, "lastmod": "2024-07-19", "lastmod_ts": 1721347200, "section": "Documentation", "tags": [], "description": "The depositor report is used for checking basic info about your DOI registrations.\nDepositor reports list all DOIs by member and title for journals, books, and conference proceedings. We currently have depositor reports for journals, books, and conference proceedings (but not for other record types). The index page is updated weekly. Title-level reports are updated as your metadata is updated with us.\nDepositor reports by record type Access journals depositor report Access books depositor report Access conference proceedings depositor report Each title-level report lists all DOIs registered for the title as well as (for each DOI) the owning prefix, the deposit timestamp, the date the record was last updated, and the number of Cited-by matches.", "content": "The depositor report is used for checking basic info about your DOI registrations.\nDepositor reports list all DOIs by member and title for journals, books, and conference proceedings. We currently have depositor reports for journals, books, and conference proceedings (but not for other record types). The index page is updated weekly. Title-level reports are updated as your metadata is updated with us.\nDepositor reports by record type Access journals depositor report Access books depositor report Access conference proceedings depositor report Each title-level report lists all DOIs registered for the title as well as (for each DOI) the owning prefix, the deposit timestamp, the date the record was last updated, and the number of Cited-by matches. To view each title-level report, select the member name then the appropriate title.\nField/missing metadata report: You can also see what basic bibliographic metadata fields are populated for your journal articles - click on the green triangle to the right of each member name to view a field / missing metadata report. DOI crawler: We crawl a broad sample of journal DOIs to make sure the DOIs are resolving to the appropriate page. For each journal crawled, a sample of DOIs that equals 5% of the total DOIs for the journal up to a maximum of 50 DOIs is selected. You can access the crawler details for a given journal by selecting the linked date in the ‘last crawl date’ column. Click on a member name in the report, and you will see a list of that member’s titles below the name. Click on any publication title to open a text file which list all DOIs for that title.\nThe initial view shows:\nName: name of the member. 
Members with more than one prefix will appear multiple times Journal/Book/Conf Proc count: number of journal, book, or conference proceeding titles associated with the member Total DOIs: total number of DOIs deposited for the selected title Field report: shows missing metadata fields for each member, select the icon to view The expanded view shows:\nName of each journal, book, or conference proceeding with DOI names deposited by the member DOIs: Total number of DOIs registered for each journal, book, or conference proceeding deposited by the member Last crawl date: date of last crawler report (if available) Show image × Depositor report title view Select a journal, book, or conference proceeding title to retrieve a list of DOIs for the title (DOI), the owner prefix of the DOI (OWNER), the timestamp value for the DOI (DEPOSIT-TIMESTAMP) the date the DOI was last updated (LAST-UPDATED), and the number of Cited-by matches for the DOI:\nShow image × Title-level depositor report data may also be retrieved using format=doilist - learn more about retrieving DOIs by title.\n", "headings": ["Depositor report title view "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/doi-crawler-report/", "title": "DOI crawler report", "subtitle":"", "rank": 4, "lastmod": "2024-07-19", "lastmod_ts": 1721347200, "section": "Documentation", "tags": [], "description": "We test a broad sample of DOIs to ensure resolution. For each journal crawled, a sample of DOIs that equals 5% of the total DOIs for the journal up to a maximum of 50 DOIs is selected. The selected DOIs span prefixes and issues.\nThe results are recorded in crawler reports, which you can access from the depositor report expanded view (access the depositor reports by type at the links below).", "content": "We test a broad sample of DOIs to ensure resolution. For each journal crawled, a sample of DOIs that equals 5% of the total DOIs for the journal up to a maximum of 50 DOIs is selected. The selected DOIs span prefixes and issues.\nThe results are recorded in crawler reports, which you can access from the depositor report expanded view (access the depositor reports by type at the links below).\nDepositor reports by record type Access journals depositor report Access books depositor report Access conference proceedings depositor report If a title has been crawled, the last crawl date is shown in the appropriate column. Crawled DOIs that generate errors will appear as a bold link:\nShow image × Click Last Crawl Date to view a crawler status report for a title:\nShow image × The crawler status report lists the following:\nTotal DOIs: Total number of DOI names for the title in system on last crawl date Checked: number of DOIs crawled Confirmed: crawler found both DOI and article title on landing page Semi-confirmed: crawler found either the DOI or the article title on the landing page Not Confirmed: crawler did not find DOI nor article title on landing page Bad: page contains known phrases indicating article is not available (for example, article not found, no longer available) Login Page: crawler is prompted to log in, no article title or DOI Exception: indicates error in crawler code httpCode: resolution attempt results in error (such as 400, 403, 404, 500) httpFailure: http server connection failed Select each number to view details. 
Select re-crawl and enter an email address to crawl again.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/doi-error-report/", "title": "DOI error report", "subtitle":"", "rank": 4, "lastmod": "2024-07-19", "lastmod_ts": 1721347200, "section": "Documentation", "tags": [], "description": "The DOI error report is sent immediately when a user informs us that they’ve seen a DOI somewhere which doesn’t resolve to a website. The DOI error report is used for making sure your DOI links go where they’re supposed to. When a user clicks on a DOI that has not been registered, they are sent to a form that collects the DOI, the user’s email address, and any comments the user wants to share.", "content": " The DOI error report is sent immediately when a user informs us that they’ve seen a DOI somewhere which doesn’t resolve to a website. The DOI error report is used for making sure your DOI links go where they’re supposed to. When a user clicks on a DOI that has not been registered, they are sent to a form that collects the DOI, the user’s email address, and any comments the user wants to share. We compile the DOI error report daily using those reports and comments, and send it via email to the technical contact at the member responsible for the DOI prefix as a .csv attachment.\nShow image × If you would like the DOI error report to be sent to a different person, please contact us.\nThe DOI error report .csv file contains (where provided by the user):\nDOI - the DOI being reported URL - the referring URL REPORTED-DATE - date the DOI was initially reported USER-EMAIL - email of the user reporting the error COMMENTS We find that approximately 2/3 of reported errors are ‘real’ problems. Common reasons why you might get this report include:\nyou’ve published/distributed a DOI but haven’t registered it the DOI you published doesn’t match the registered DOI a link was formatted incorrectly (a . at the end of a DOI, for example) a user has made a mistake (confusing 1 for l or 0 for O, or cut-and-paste errors) What should I do with my DOI error report? Review the .csv file attached to your emailed report, and make sure that no legitimate DOIs are listed. Any legitimate DOIs found in this report should be registered immediately. When a DOI reported via the form is registered, we’ll send out an alert to the reporting user (if they’ve shared their email address with us).\nI keep getting DOI error reports for DOIs that I have not published, what do I do about this? It’s possible that someone is trying to link to your content with the wrong DOI. If you do a web search for the reported DOI you may find the source of your problem - we often find incorrect linking from user-provided content like Wikipedia, or from DOIs inadvertently distributed by members to PubMed. If it’s still a mystery, please contact us.\n", "headings": ["The DOI error report is sent immediately when a user informs us that they’ve seen a DOI somewhere which doesn’t resolve to a website.","What should I do with my DOI error report? ","I keep getting DOI error reports for DOIs that I have not published, what do I do about this? 
"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/field-or-missing-metadata-report/", "title": "Field or missing metadata report", "subtitle":"", "rank": 4, "lastmod": "2024-07-19", "lastmod_ts": 1721347200, "section": "Documentation", "tags": [], "description": "The field or missing metadata report gives details on metadata completeness and can be accessed by selecting the icon next to each member name in the depositor report (access the depositor reports by type at the links below). The fields checked are volume, issue, page, author, article title, and Similarity Check URL.\nDepositor reports by record type Access journals depositor report Access books depositor report Access conference proceedings depositor report To see your field or missing metadata report, use this URL but replace 10.", "content": "The field or missing metadata report gives details on metadata completeness and can be accessed by selecting the icon next to each member name in the depositor report (access the depositor reports by type at the links below). The fields checked are volume, issue, page, author, article title, and Similarity Check URL.\nDepositor reports by record type Access journals depositor report Access books depositor report Access conference proceedings depositor report To see your field or missing metadata report, use this URL but replace 10.5555 with your prefix:\nhttps://apps.crossref.org/myCrossref?report=missingmetadata\u0026amp;datatype=j\u0026amp;prefix=10.5555 Show image × Select a title to retrieve a list of DOIs for the title, and flagged fields for each DOI. For example, the DOIs in this report lack page and author information:\nShow image × Although the deposit section of the schema specifies that some bibliographic metadata is optional for content registration purposes, we strongly encourage members to register comprehensive metadata for each item registered.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/missed-conflict-report/", "title": "Missed conflict report", "subtitle":"", "rank": 4, "lastmod": "2024-07-22", "lastmod_ts": 1721606400, "section": "Documentation", "tags": [], "description": "Learn more about conflicts and the conflict report. Conflicts are usually flagged upon deposit, but sometimes this doesn\u0026rsquo;t happen, creating a missed conflict.\nAccess the conflict report\nA missed conflict may occur for several reasons:\nTwo DOIs are deposited for the same item, but the metadata is slightly different (DOI A deposited with an online publication date of 2011, DOI B deposited with a print publication date of 1972) DOIs were deposited with a unique item number.", "content": "Learn more about conflicts and the conflict report. Conflicts are usually flagged upon deposit, but sometimes this doesn\u0026rsquo;t happen, creating a missed conflict.\nAccess the conflict report\nA missed conflict may occur for several reasons:\nTwo DOIs are deposited for the same item, but the metadata is slightly different (DOI A deposited with an online publication date of 2011, DOI B deposited with a print publication date of 1972) DOIs were deposited with a unique item number. Before 2008, DOIs containing unique item numbers (supplied in the \u0026lt;publisher_item\u0026gt; element) were not checked for conflicts. The missed conflict report compares article titles across data for a specified journal or journals. 
To retrieve a missed conflict report for a title:\nStart from the browsable title list and search or browse for the title Click the icon at the far right of the title The missed conflict interface will pop up in a second window. Enter your email address in the appropriate field. Multiple title IDs can be included in a single request if needed A report will be emailed to the email address you provided. This report lists all DOIs with identical article titles that have not been flagged as conflicts.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/participation-reports/", "title": "Participation Reports", "subtitle":"", "rank": 4, "lastmod": "2024-10-15", "lastmod_ts": 1728950400, "section": "Documentation", "tags": [], "description": "Participation Reports are a visualization of the metadata that’s available via our free REST API. There’s a separate Participation Report for each member, and each report shows what percentage of that member’s metadata records include 11 key metadata elements. These key elements add context and richness, and help to open up content to easier discovery and wider and more varied use. As a member, you can use Participation Reports to see for yourself where the gaps in your organization\u0026rsquo;s metadata are, and perhaps compare your performance to others.", "content": "Participation Reports are a visualization of the metadata that’s available via our free REST API. There’s a separate Participation Report for each member, and each report shows what percentage of that member’s metadata records include 11 key metadata elements. These key elements add context and richness, and help to open up content to easier discovery and wider and more varied use. As a member, you can use Participation Reports to see for yourself where the gaps in your organization\u0026rsquo;s metadata are, and perhaps compare your performance to others. Participation Reports are free and open to everyone.\nAccess Participation Reports\nHow a Participation Report works There’s a separate Participation Report for each member. Visit Participation Reports and start typing the name of a member under Find a member. A list of member names will appear for you to select from. Behind the scenes, our REST API will pull together a report and output it in a clear, visual way. Please note - it should usually take a maximum of 24 hours for you to see changes to your Participation Reports if you\u0026rsquo;ve added new records or updated the metadata in your existing records.\nYou can use the dropdowns near the top of the page to see reports for different publication time periods and work types. Current content includes any records with a publication date in the current calendar year or up to two years previously. For example, in 2024, current content is anything with a publication date in 2024, 2023, or 2022. Anything published in 2021 or earlier is considered back file.\nThe work types currently covered by Participation Reports are:\nJournal articles Conference papers Books Book chapters Posted content (including preprints) Reports Datasets Standards Dissertations The 11 key metadata elements for which Participation Reports calculate each member’s coverage are:\nReferences Abstracts ORCID iDs Affiliations ROR IDs Funder Registry IDs Funding award numbers Crossmark enabled Text mining URLs License URLs Similarity Check URLs References Percentage of records that include reference lists in their metadata.\nWhy is this important? 
Your references are a big part of the story of your content, highlighting its provenance and where it sits in the scholarly map. References give researchers and other users of Crossref metadata a vital data point through which to find your content, which in turn increases the chances of your content being read and used.\nWhere can I learn more? Cited-by service How can I improve my percentage? Whenever you register records with us, make sure you include your references in the submission. Find out more here.\nYou can also add references to your existing records.\nAbstracts Percentage of records that include the abstract in the metadata, giving further insights into the content of the work.\nWhy is this important? The abstract gives more information to the user about your content, making your items more discoverable.\nWhere can I learn more? Abstracts How can I improve my percentage? Make sure you include abstracts when you register your content - it’s available for everything other than dissertations and reports. For existing records, you can add abstracts by running a full metadata redeposit (update).\nORCID iDs Percentage of records containing ORCID iDs. These persistent identifiers enable users to precisely identify a researcher’s work - even when that researcher shares a name with someone else, or if they change their name.\nWhy is this important? Researcher names are inherently ambiguous. People share names. People change names. People record names differently in different circumstances.\nGovernments, funding agencies, and institutions are increasingly seeking to account for their research investments. They need to know precisely what research outputs are being produced by the researchers that they fund or employ. ORCID iDs allow this reporting to be done automatically and accurately.\nFor some funders, ORCID iDs are critical for their research investment auditing, and they are starting to mandate that researchers use ORCID iDs.\nResearchers who do not have ORCID iDs included in their Crossref metadata risk not being counted in these audits and reports.\nWhere can I learn more? ORCID Open letter: list of funders supporting ORCID Open letter: list of publishers supporting ORCID ORCID adoption through national consortia in Italy, New Zealand and Norway Ten reasons to get - and use - an ORCID iD! How can I improve my percentage? Make sure you ask your authors for their ORCID iD through your submission system and include them when you register your content. There’s a specific element in the XML for ORCID iDs if you register via XML. If you use the web deposit form or if you’re still using the deprecated Metadata Manager, there’s a specific field to complete.\nTo add ORCID iDs to existing records, you need to update your metadata.\nAffiliations The percentage of registered records that include affiliation metadata for at least one contributor.\nWhy is this important? Affiliation metadata ensures that contributor institutions can be identified and research outputs can be traced by institution.\nWhere can I learn more? Affiliations and ROR How can I improve my percentage? Make sure you collect affiliation details from authors via your submission system and include them in your future Crossref deposits.\nFor existing records, you can add affiliation metadata by running a full metadata redeposit (update).\nROR IDs The percentage of registered records that include at least one ROR ID, e.g. in the contributor metadata.\nWhy is this important? 
Affiliation metadata ensures that contributor institutions can be identified and research outputs can be traced by institution.\nA ROR ID is a single, unambiguous, standardized organization identifier that will always stay the same. This means that contributor affiliations can be clearly disambiguated and greatly improves the usability of your metadata.\nWhere can I learn more? Affiliations and ROR How can I improve my percentage? If the submission system you use does not yet support ROR, or if you don’t use a submission system, you’ll still be able to provide ROR IDs in your Crossref metadata. ROR IDs can be added to JATS XML, and many Crossref helper tools support the deposit of ROR IDs. There’s also an OpenRefine reconciler that can map your internal identifiers to ROR identifiers.\nIf you find that an organization you are looking for is not yet in ROR, please submit a curation request.\nFor existing records, you can add affiliation metadata by running a full metadata redeposit (update).\nFunder Registry IDs The percentage of registered records that contain the name and Funder Registry ID of at least one of the organizations that funded the research.\nWhy is this important? Funding acknowledgements give vital context for users and consumers of your content. Extracting these acknowledgements from your content and adding them to your metadata allows funding organizations to better track the published results of their grants, and allows publishers to analyze the sources of funding for their authors and ensure compliance with funder mandates. And, by using the unique funder IDs from our central Funder Registry, you can help ensure the information is consistent across publishers.\nWhere can I learn more? Funder Registry How can I improve my percentage? Make sure you collect funder names from authors via your submission system, or extract them from acknowledgement sections. Match the names with the corresponding Funder IDs from our Funder Registry and make sure you include them in your future Crossref deposits.\nIf your funder isn’t yet in the Funder Registry, please let us know.\nTo add funder information to records you’ve already registered, you can do a full metadata redeposit (update), or use our supplemental metadata upload method.\nFunding award numbers The percentage of registered records that contain at least one funding award number - a number assigned by the funding organization to identify the specific piece of funding (the award or grant).\nWhy is this important? Funding organizations are able to better track the published results of their grants Research institutions are able to track the published outputs of their employees Publishers are able to analyze the sources of funding for their authors and ensure compliance with funder mandates Everyone benefits from greater transparency on who funded the research, and what the results of the funding were. Where can I learn more? Funder Registry How can I improve my percentage? Make sure you collect grant IDs from authors via your submission system, or extract them from acknowledgement sections. Make sure you include them in your future Crossref deposits and add them to your existing records using our supplemental metadata upload method.\nCrossmark enabled Percentage of records using the Crossmark service, which gives readers quick and easy access to the current status of an item of content - whether it’s been updated, corrected, or retracted.\nWhy is this important? 
Crossmark gives quick and easy access to the current status of an item of content. With one click, you can see if the content has been updated, corrected, or retracted and can access extra metadata provided by the publisher. It allows you to reassure readers that you’re keeping content up-to-date, and showcases any additional metadata you want readers to view while reading the content.\nWhere can I learn more? Crossmark How can I improve my percentage? Learn more about participating in Crossmark.\nText mining URLs The percentage of registered records containing full-text URLs in the metadata to help researchers easily locate your content for text and data mining.\nWhy is this important? Researchers are increasingly interested in carrying out text and data mining of scholarly content - the automatic analysis and extraction of information from large numbers of documents. If you can make it easier for researchers to mine your content, you will massively increase your discoverability.\nThere are technical and logistical barriers to text and data mining for scholarly researchers and publishers alike. It is impractical for researchers to negotiate many different websites to locate the full-text that they need. And it doesn’t make sense for each publisher to have a different set of instructions about how to best find the full-text in the required format. All parties benefit from the support of standard APIs and data representations in order to enable text and data mining across both open access and subscription-based publishers.\nOur API can be used by researchers to locate the full text of content across publisher sites. Members register these URLs - often including multiple links for different formats such as PDF or XML - and researchers can request them programmatically.\nThe member remains responsible for actually delivering the full-text of the content requested. This means that open access publishers can simply deliver the requested content, while subscription publishers use their existing access control systems to manage access to full-text content.\nWhere can I learn more? Text and Data Mining information from LIBER Crossref support for text and data mining How can I improve my percentage? Make sure you include full-text URLs in your future Crossref deposits and add them to your existing records using a resource-only deposit.\nLicense URLs The percentage of registered records that contain URLs that point to a license that explains the terms and conditions under which readers can access content.\nWhy is this important? Adding the full-text URL into your metadata is of limited value if the researchers can’t determine what they are permitted to do with the full text. This is where the license URLs come in. Members include a link to their use and reuse conditions: whether their own proprietary license, or an open license such as Creative Commons.\nWhere can I learn more? License information How can I improve my percentage? Make sure you include license URLs in your future Crossref deposits, and add them to your existing records using a resource-only deposit, or by using a supplemental metadata upload.\nSimilarity Check URLs The percentage of registered records that include full-text links for the Similarity Check service.\nWhy is this important? 
The Similarity Check service helps you to prevent scholarly and professional plagiarism by providing editorial teams with access to Turnitin’s powerful text comparison tool.\nSimilarity Check members contribute their own published content to iThenticate’s database of full-text literature via Similarity Check URLs, and this is an obligation of using the service. If members aren’t registering these, they can’t take part in the Similarity Check service.\nWhere can I learn more? Similarity Check How can I improve my percentage? For future records, make sure you include these URLs as part of your standard metadata deposit. They need to be deposited within the crawler-based collection property, with item crawler iParadigms.\nYou can add these URLs into your already-deposited DOIs using a resource-only deposit, or by using the Supplemental-Metadata Upload option available with our web deposit form.\n", "headings": ["How a Participation Report works ","References ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Abstracts ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","ORCID iDs ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Affiliations ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","ROR IDs ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Funder Registry IDs ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Funding award numbers ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Crossmark enabled ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Text mining URLs ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","License URLs ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? ","Similarity Check URLs ","Why is this important? ","Where can I learn more? ","How can I improve my percentage? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/resolution-report/", "title": "Resolution report", "subtitle":"", "rank": 4, "lastmod": "2024-07-19", "lastmod_ts": 1721347200, "section": "Documentation", "tags": [], "description": "The monthly resolution report shows the number of successful and failed DOI resolutions for the previous month. What is a resolution? When a researcher clicks on a DOI link for an article and the link resolves to the article, that counts as one DOI resolution. For example, clicking on https://0-doi-org.libus.csd.mu.edu/10.1038/nature02426 counts as one resolution to Nature. No information is captured about the user, and these numbers are not a precise measure of traffic to a member\u0026rsquo;s website, but they provide a measure of the effectiveness of a member\u0026rsquo;s participation in Crossref.", "content": " The monthly resolution report shows the number of successful and failed DOI resolutions for the previous month. What is a resolution? When a researcher clicks on a DOI link for an article and the link resolves to the article, that counts as one DOI resolution. For example, clicking on https://0-doi-org.libus.csd.mu.edu/10.1038/nature02426 counts as one resolution to Nature. 
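In practice a resolution is just an HTTP redirect from the public DOI proxy (https://doi.org) to the registered landing page. As an illustrative aside, and not part of the report itself, a rough check that one of your DOIs resolves could look like the sketch below; the helper name is made up, and some landing pages block automated requests, so a failure here is only a hint to investigate.

    import urllib.request, urllib.error

    def doi_resolves(doi: str) -> bool:
        # Ask the public DOI proxy for the DOI and follow redirects; any
        # response with a status below 400 is treated as a resolution.
        try:
            with urllib.request.urlopen(f"https://doi.org/{doi}", timeout=30) as resp:
                return resp.getcode() < 400
        except urllib.error.URLError:
            return False

    print(doi_resolves("10.1038/nature02426"))  # the example DOI mentioned above
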
No information is captured about the user, and these numbers are not a precise measure of traffic to a member\u0026rsquo;s website, but they provide a measure of the effectiveness of a member\u0026rsquo;s participation in Crossref.\nIf a researcher clicks on a DOI link for an article and it doesn\u0026rsquo;t resolve, that counts as a failed resolution.\nWhat\u0026rsquo;s in your Resolution report? Resolution reports are sent out to members via email at the beginning of each month, and include statistics about all DOI resolutions from the preceding month. By default, resolution reports are sent to the Primary contact for your organization (previously known as the business contact), but we can add or change the recipient(s) as needed. We’ll send you a separate report for each DOI prefix you’re responsible for.\nThe report includes:\nResolution failure rate: the percentage of DOI resolution attempts that failed. The prefix failure rate and the overall failure rate (for all Crossref members) are also included so you can see how you compare to others. Resolutions by month: total number of resolutions per month for the past 12 months, by prefix (count) and overall (all members). Resolution stats: resolution counts for the report prefix. Top ten DOIs: list of the ten DOIs with the highest number of successful resolutions for the month, and the number of times each DOI was successfully resolved. Failed DOIs: a list of DOI resolution attempts that failed (i.e. resolved to a Handle error page). This list is presented as a .csv file attached to the report email and contains both the failed DOI and number of failures. Resolution counts by publication title: the total number of DOI resolutions per title What should I do with my resolution report? The resolution report gives you an overview of DOI resolution traffic, and can help identify problems with your DOI links or your DOI registration process. The failed-DOI .csv file attached to your resolution report email contains a list of all DOIs with failed resolution attempts - if a user clicks on a DOI with your DOI prefix and the DOI is not registered, it will be included on this report.\nThere’s a certain amount of noise with these reports (resolutions from crawlers or automated processes) but do check any DOIs with a high number of failures, and look out for significant changes in your resolution failure rate.\nTroubleshooting: possible reasons for DOI failures and how to fix them The DOI has never been registered with Crossref - if your DOI has never been successfully registered with us, then you will see failed resolutions. Make sure you (or your suppliers) are definitely successfully registering your DOIs. Learn more about how to verify your registration. DOI publication and registration are out of sync - if you publish your DOIs on your website before you register them with us, this will lead to DOI resolution failures. DOIs should be registered and distributed simultaneously. If this is not possible, the gap should be hours or a day, not days or weeks. The DOIs are displayed incorrectly on other websites - sometimes others may copy your DOIs and display them on their own website, but they make a mistake with the display. They may add a period to the end, or cut off the final digit. If others then try to resolve that incorrect DOI, you will see resolution failures. You can find this out by googling any DOIs with problems from your Resolution Report and seeing where you find them online. 
You can then decide if you wish to contact the website owners to ask them to update the DOI to the correct one. Linking issues - the DOI resolution link may be incorrect. You can update your DOI resolution URLs at any time by resubmitting your record. User error - users sometimes make mistakes when typing or copying DOIs. If you often see failures that you think are caused by user error, review how your DOIs are displayed. Some common user errors are: Confusing O and 0, or l and 1 - this is more common for DOIs on print publications, because users have to type them, rather than clicking on a link. If your DOI failures often contain DOIs with O being confused with 0, or l with 1, consider changing your DOI suffix. Long strings of letters and numbers can cause problems as well DOIs ending with \u0026lsquo;.\u0026rsquo; - if a viable DOI has a \u0026lsquo;.\u0026rsquo; appended, it will fail to resolve. This is often caused by the DOI being linked from references that end with a \u0026lsquo;.\u0026rsquo; DOIs with special characters instead of ‘-’ - this commonly happens when a user pastes a DOI from a PDF DOIs with special characters such as \u0026lt;, \u0026gt;, #, * and + (Crossref no longer accepts deposits with special characters). Problems with URL-encoded DOIs - the handle resolver supports URL-encoded DOIs. The resolution logs sometimes misrepresent the encoded characters. As a result, some badly-encoded DOIs will appear in your resolution log as correctly-encoded DOIs. This typically happens when an already-encoded DOI is mistakenly encoded again. For example, DOI 10.5555/example would be correctly encoded as 10.5555%2Fexample (the / is encoded as %2F). If the DOI is encoded again, the % in the DOI becomes %25, making the DOI 10.5555%252Fexample. This DOI will not resolve but will appear in the failed DOI report as 10.5555%2Fexample (a valid DOI). Resolution Report FAQs +- Where do the resolution report statistics come from?\rResolution statistics are based on the number of DOI resolutions made through the DOI proxy server on a month-by-month basis. These statistics give an indication of the traffic generated by users clicking DOIs. CNRI (the organization that manages the DOI resolver) sends us resolution logs at the end of every month.\n+- When can I expect to receive my monthly resolution report?\rEach month we deliver resolution reports to the primary contact on your account (or anyone else affiliated with your account who has been added as a recipient). In the past, you could expect these reports somewhere between the 5th and 10th day of each month. We\u0026rsquo;d receive logs from the Corporation for National Research Initiatives (CNRI) and then use those logs to generate your reports. As more and more content is registered with us and resolutions logged with CNRI, that means both CNRI and Crossref have more and more data to process.\nThe volume of resolutions for DOIs registered with us continues to grow, which means you can and should expect that reports will reach you later than they may have in the past. Once we begin the report, it takes our system three to five days to distribute those reports to the contacts on each account.\nThus, we think it\u0026rsquo;s realistic that distribution of resolution reports to all of our members will be completed in the second week of each month.\n+- What do you mean by \u0026#39;unique DOI?\u0026#39;\rThe unique DOI number is the number of distinct DOIs that have been resolved. 
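(An aside on the URL-encoding problem from the troubleshooting list above: the single and double encoding of the example DOI can be reproduced with Python's standard urllib, which may help when you are working out which form is actually in your failed-DOI list. The variable names are illustrative only.)

    from urllib.parse import quote

    doi = "10.5555/example"
    encoded_once = quote(doi, safe="")            # '10.5555%2Fexample'  - valid, resolves
    encoded_twice = quote(encoded_once, safe="")  # '10.5555%252Fexample' - double encoded, fails
    print(encoded_once, encoded_twice)
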
If your report lists 20 resolutions and 1 unique DOI, this means 1 DOI was resolved 20 times.\n+- My resolution report tells me I have a high failure rate - what should I do?\rThe ideal failure rate is 0%, but 2-3% is the norm. If you are new to Crossref, or have only deposited metadata for a small number of content items, you may have a high failure percentage (for example, 1 failure and 9 successes = 10% failure rate).\nIf your overall resolution failure rate is higher than around 2-4%, look closely at the .csv file of failed DOIs for your account, and make sure the DOIs listed have definitely been registered. Our troubleshooting section will help with other possible reasons for a high failure rate.\n+- This DOI is registered - why is it on my failed DOI list?\rWe send you all failed resolutions from the preceding month, even if the content has subsequently been registered. If an unregistered DOI is clicked on December 5 but not registered until December 6, we count that as a resolution failure. If you find a lot of registered DOIs in your failed DOI list, you should make sure you aren’t distributing your DOIs before they have been registered.\n+- Will a high failure rate affect my membership?\rIt won’t affect your membership status (unless you’re truly negligent and regularly distribute DOIs without registering them) but a Crossref membership is of limited value if you don’t register your DOIs and provide quality metadata.\nLearn more about the history and current state of our popular monthly resolution reports.\n", "headings": ["The monthly resolution report shows the number of successful and failed DOI resolutions for the previous month.","What is a resolution?","What\u0026rsquo;s in your Resolution report?","What should I do with my resolution report? ","Troubleshooting: possible reasons for DOI failures and how to fix them ","Resolution Report FAQs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/schematron-report/", "title": "Schematron report", "subtitle":"", "rank": 4, "lastmod": "2024-07-19", "lastmod_ts": 1721347200, "section": "Documentation", "tags": [], "description": "A Schematron report tells you if there’s a metadata quality issue with your records. Schematron is a pattern-based XML validation language. We try to stop the deposit of metadata with obvious issues, but we can’t catch everything because publication practices are so varied. For example, most family names in our database that end with jr are the result of a publisher including a suffix (Jr) in a family name, but there are of course surnames ending with ‘jr’.", "content": " A Schematron report tells you if there’s a metadata quality issue with your records. Schematron is a pattern-based XML validation language. We try to stop the deposit of metadata with obvious issues, but we can’t catch everything because publication practices are so varied. For example, most family names in our database that end with jr are the result of a publisher including a suffix (Jr) in a family name, but there are of course surnames ending with ‘jr’.\nWe do a weekly post-registration metadata quality check on all journal, book, and conference proceedings submissions, and record the results in the schematron report. If we spot a problem we’ll alert your technical contact via email. Any identified errors may affect overall metadata quality and negatively affect queries for your content. 
Errors are aggregated and sent out weekly via email in the schematron report.\nWhat should I do with my schematron report? The report contains links (organized by title) to .xml files containing error details. The XML files can be downloaded and processed programmatically, or viewed in a web browser:\nShow image × Download Crossref\u0026rsquo;s Schematron rules for batch processing\n", "headings": ["A Schematron report tells you if there’s a metadata quality issue with your records.","What should I do with my schematron report? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/members-area/all-about-invoicing-and-payment/", "title": "All about invoicing and payment", "subtitle":"", "rank": 4, "lastmod": "2023-01-03", "lastmod_ts": 1672704000, "section": "Hello, members", "tags": [], "description": "The payment portal gives members a way to view, download and pay their invoices. Crossref utilizes a third-party application to operate our online payment portal and, as such, access to it is blocked in some regions. We apologize for any inconvenience this may cause.\nAs a not-for-profit membership organization, we are sustained by fees. We have a duty to remain sustainable and manage our finances in a responsible way.", "content": "The payment portal gives members a way to view, download and pay their invoices. Crossref utilizes a third-party application to operate our online payment portal and, as such, access to it is blocked in some regions. We apologize for any inconvenience this may cause.\nAs a not-for-profit membership organization, we are sustained by fees. We have a duty to remain sustainable and manage our finances in a responsible way. Financial sustainability means we can keep the organization afloat and keep our dedicated service to scholarly communications running.\nAs a member (or a Sponsor who represents members), you\u0026rsquo;ll receive your annual membership fee invoice each January. If you participate in Similarity Check, this invoice will contain your Similarity Check annual service charge, and you\u0026rsquo;ll also receive a separate Similarity Check document-checking invoice for the documents you\u0026rsquo;ve checked in the previous year. We invoice for content registration on a quarterly basis. If you are a member through a sponsor, all Crossref billing goes to your Sponsor. If you are a service provider or use a paid-for metadata retrieval service, you\u0026rsquo;ll receive your annual invoice in January too. We send out invoices by email to the billing contact on your account - please do update us immediately if you need to change your billing contact. Invoices have a due date of net 45 days, and you can always view both paid and unpaid invoices in our payment portal. You\u0026rsquo;ll need login credentials for the payment portal - these are different from the account credentials that you use to register your content with us. The billing contact for each member account is automatically sent credentials for the portal, and you can request payment portal login credentials for others at your organization too.\nWhen you receive invoices from us you will see a pay now link in the body of the email. This link takes you to our payment portal where you can pay using a credit/debit card or by ACH. You\u0026rsquo;ll also be able to see any outstanding invoices in one central place. 
If you aren\u0026rsquo;t able to pay using the payment portal, you have the option to pay by wire or check from a US bank.\nAn important part of our accounting process is the automated invoice reminder schedule. We send out automated reminders to the billing contact 7 days before the due date, and then 15 days past the invoice due date. If you still have unpaid invoices after this, we\u0026rsquo;ll send a further email to all the contacts we hold on your account (Billing, Primary, Voting, Technical and Metadata Quality) to notify you that your service is at risk of suspension. If your invoices remain unpaid after this, we suspend your account and remove your access to register content.\nIf an account becomes suspended for non-payment, then your membership of Crossref becomes at risk of being ‘terminated’. If your membership is terminated, you need to contact our membership specialist to discuss whether you can rejoin Crossref. You would need to pay any outstanding invoices before you can re-apply.\nWe understand there are many factors that can make prompt payment a challenge for some people: international transfer delays or fees, funding for your publishing operations may end, change of contacts, problems receiving our emails, etc. We really don\u0026rsquo;t want to see you go, so our billing team works closely with members to make sure they can pay their invoices promptly. We send numerous notifications/reminders before suspension or termination takes place, and we can always be reached at billing@crossref.org for any invoice inquiries you may have - please include your account name, prefix, and invoice number.\nTips for smoother payments Here are some things you can do to help speed up, or simplify payments:\nPay with a credit card, using our online payment portal. This is fast, convenient, and lower in fees. Always reference an invoice number on the payment to ensure that it’s applied to your account efficiently. Be sure to make billing@crossref.org a ‘safe’ email address, so that you receive our invoices and reminders. Always keep us up-to-date with any contact changes at your organization, to ensure that we have accurate information for invoicing and other communication. We recommend giving us a generic email address for the billing contact on your account (such as accounts@publisher.com) rather than the email address for one person. This means that if one person leaves your billing team, invoices can still get through to your organization. Billing FAQs Here are the answers to some frequently asked questions about:\nInitial subscription order for your first year of membership General billing processes The payment portal Content registration invoices Similarity Check invoices Unpaid invoices and suspensions The Global Equitable Membership Program Initial subscription order for your first year of membership +- The subscription order is for less than I expected - is this a mistake?\rDon’t worry, this isn’t a mistake.\nWhen you first apply to join Crossref, you’ll receive a pro-rated Subscription Order for the remainder of that calendar year. So depending on when you join, you’ll only pay for the remaining months of that year.\nThe calculation will also reflect whether you apply in the first or second half of the month. For example, if you join before the middle of July (15th of the month), your membership order will be for six months. 
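To make the pro-rating concrete, here is a small sketch of that calculation. It assumes the same mid-month rule (joining on or before the 15th counts the joining month) applies in every month, which is an extrapolation from the July example given just above and below; treat it as illustration only, not the authoritative calculation.

    from datetime import date

    def prorated_months(join_date: date) -> int:
        # Months billed on the first-year subscription order, assuming joining
        # on or before the 15th counts the joining month (an assumption).
        months_left = 12 - join_date.month
        return months_left + (1 if join_date.day <= 15 else 0)

    print(prorated_months(date(2023, 7, 10)))  # 6 - joined before mid-July
    print(prorated_months(date(2023, 7, 20)))  # 5 - joined after mid-July
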
If you join after the middle of July, your membership order will be for five months.\nThen, in the following January, you’ll receive an invoice for the whole of that calendar year, and will continue to receive invoices every subsequent January.\n+- Can you change \u0026#39;subscription order\u0026#39; on the document to \u0026#39;subscription invoice\u0026#39;?\rUnfortunately, no, we cannot change the document type. We have hundreds of organizations that apply for membership with good intentions, but then decide that timing, or other factors, delay them from completing the joining process. For this reason, we issue a Subscription Order instead of a Subscription Invoice, as an order more accurately reflects the status of the joining process in our accounting system.\nGeneral billing processes +- When will I be billed?\rMembers There are two different types of invoice that all members receive from us - your annual membership fee invoice, and your content registration fee invoices. If you participate in Similarity Check, there\u0026rsquo;s a third invoice you\u0026rsquo;ll receive - your Similarity Check document checking fees invoice.\nIf you are a member of Crossref through a Sponsor, your Sponsor will pay these invoices on your behalf. They may charge you for their services, so you need to discuss their invoicing schedule with them.\nYour annual membership fee invoice This allows you to remain a member of our organization and take advantage of our services and the reciprocal relationship with other members. Members receive this invoice in January each year to cover their membership for that year - so in January 2021 you\u0026rsquo;ll receive a membership invoice for 2021. If you participate in Similarity Check, your annual service fee for Similarity Check will also be included in this invoice.\nContent registration invoices There\u0026rsquo;s a charge for each item you register with Crossref, and we invoice for this in arrears - this means that we send you the invoice after you\u0026rsquo;ve registered the content, so we know exactly how much to charge.\nThese invoices are usually sent out on a quarterly basis and cover the deposit fees for the content you registered with us during the previous quarter:\nIn April, you’ll receive an invoice for the content you registered in the first quarter of the year (January - March) In July, you’ll receive an invoice for the content you registered in the second quarter of the year (April - June) In October, you’ll receive an invoice for the content you registered in the third quarter of the year (July - September) In January you’ll receive an invoice for the content you registered in the fourth quarter of the previous year (October - December) However, you may not receive an invoice every single quarter. If your content registration charges are below USD 100 for a quarter, those charges will roll forward to the next quarter. 
This is to avoid members having to pay lots of smaller invoices which may incur international charges.\nThese charges don\u0026rsquo;t roll on past a full year though - so even if your total content registration fees haven\u0026rsquo;t hit USD 100 by the end of the year, you\u0026rsquo;ll receive a content registration invoice in January to cover all your content registration fees for the previous year.\nTo put it another way - you’ll be invoiced when your total charges exceed USD 100, or in the last quarter of the year, whichever occurs first.\nSimilarity Check document checking invoices If you participate in the Similarity Check service, you\u0026rsquo;ll receive an extra invoice each January to cover the fees for all the documents you\u0026rsquo;ve checked in the previous year. Your first 100 documents are free though, so if you check fewer than 100 documents, you won\u0026rsquo;t receive an invoice.\nMetadata subscribers You\u0026rsquo;ll receive your annual subscription invoice each January.\nService providers You\u0026rsquo;ll receive your annual invoice each January.\n+- What are the payment terms - how long do I have to pay?\rPayment is due 45 days after the date of the invoice.\n+- What are your current fees?\rOur current fees are always available on our fees page.\n+- How do I pay - what are the payment methods?\rWe usually send out invoices by email to your named billing contact. The email will include full payment details including account numbers, but here are the basic payment methods. Please note we can only accept payment in US dollars.\n1 Credit or debit card payment using our payment portal We recommend using our payment portal where possible, as other payment methods incur fees. We send out payment portal credentials to the billing contact on each new member account. If you don\u0026rsquo;t already have credentials for our payment portal, please contact us. Please note: your username and password for the payment portal is different from the Crossref account credentials you use to register your content with us.\nThe portal accepts most major credit cards, plus debit cards with a VISA or Mastercard symbol. The credit/debit card needs to be able to make international payments in US$.\nMembers based in the US can also make ACH payments through the payment portal.\nYou can find the answer to frequently asked questions about the payment portal here.\n2 Other payment methods If you aren\u0026rsquo;t able to pay using our payment portal, we offer two other payment methods.\nBank transfers\nWe accept wire transfers from most banks. If your bank is outside the US you\u0026rsquo;ll need to add US$ 35 for a wire transfer fee. We can also accept Automated Clearing House (ACH) payments from US banks. There are no extra fees for ACH payments from US banks. Checks from banks We prefer checks drawn on US banks. If you are sending payment from a US$ bank account outside the US, please add US$ 50 to your payment to cover processing fees. Please mail checks, with a copy of the invoice or with the invoice number referenced on the check, to:\nPublishers International Linking Association, Inc. dba Crossref 50 Salem St. Building A Suite 304 Lynnfield, MA 0194.\nIf you have not been receiving invoices, please contact our membership team to update the billing email address for your account. We recommend you give us a generic departmental email address such as accounts@company.org to avoid emails bouncing back from the accounts of colleagues who have left your organization. 
Thank you!\n+- Can I get copies of my invoices?\rYou can find copies of your invoices in our payment portal. You can download them and print them off if required.\nFor more information about our payment portal, take a look at our payment portal FAQs.\n+- Can you make a change to my invoice after I’ve received it?\rWhat we can change If the invoice hasn’t yet been paid, we can make the following changes:\nWe can update your organization name or address if this has changed. We can update the detail if there’s an error on the invoice. For example, if you’ve been charged for current content when you should have been charged for backfile content (due to an error in registering the publication date), we can amend the invoice once you’ve updated your metadata. What we can’t change\nWe can’t change dates and due dates, so please pay the invoices as soon as you receive them. We can’t add wire fees into the invoice as they aren\u0026rsquo;t a standard charge for everyone - only for those who use wire transfer as a payment method. Wire fees are USD 35, so you’ll need to add this to your total if you’re paying by wire transfer. +- Will any tax be added to my invoice?\rNo tax will be added to your invoice - there\u0026rsquo;s no tax on membership fees or any of the services we offer.\nThe payment portal The payment portal is a third party tool that we use to give members a way to view, download and pay their invoices.\n+- How do I get access to the payment portal?\rThe billing contact on each member account is automatically set up with access to our payment portal. After a new organization joins, their billing contact is sent an email with a link where they can set their password. Once this password is set, the billing contact will be able to login to the portal using their email address as their username.\n(Please note - this email address and password for the payment portal is separate and different from the credentials you\u0026rsquo;ll use to access our other systems to register your content).\nYou can request that others at your organization have access to the payment portal too by contacting our billing team. This request will need to come from one of the key contacts that we hold for the account. The people who also need access to the payment portal will then each be sent an email with a link where they can set their own password. Once they\u0026rsquo;ve created their own password, they will be able to access the payment portal for their organization using their own email address and the password. This means that different contacts at your organization will have their own separate set of credentials for the payment portal.\n+- Where do I find my invoices in the payment portal?\rOn the menu on the left hand side of the portal you\u0026rsquo;ll see the following:\nOpen invoices - these are your unpaid invoices Paid invoices - these are the invoices that you\u0026rsquo;ve paid in the past +- Can I find the invoices that I\u0026#39;ve already paid in the portal?\rYes you can. In the menu on the left hand side of the portal you will have an option that says \u0026ldquo;Paid invoices\u0026rdquo;. Click here to see the invoices that you have paid in the past.\n+- How do I pay for invoices in the portal?\rYou can use the portal to pay using your credit card or most major debit cards. 
If you are located in the US, you can also set up an ACH in the portal.\n+- How do I reset my password in the payment portal?\rIf you\u0026rsquo;ve forgotten your password or you need to reset it in future, you can do this by clicking on the \u0026ldquo;forgot password\u0026rdquo; link on the portal homepage. This will send an email to you with a link to reset your password.\n+- I\u0026#39;ve received a link to reset my password, but it only lasts for 4 hours and it\u0026#39;s expired\rDon\u0026rsquo;t worry - you can just request another link by clicking on the \u0026ldquo;forgot password\u0026rdquo; link on the portal homepage. This will send another email to you with a link to reset your password.\n+- I thought I used account ID for my username, not my email address?\rIn our old payment portal, everyone at a member organization shared one set of credentials, and the username was the account ID.\nHowever, in our new payment portal, each person at each member organization will access the portal using different credentials based on their personal email address. This will keep things secure, and if you forget your password, you can request that a reset link is sent to your inbox using the portal - you don\u0026rsquo;t have to ask our billing team to send one to you. This should make things much faster for you.\n+- Can I update our organization billing address in the portal?\rNo. In the portal, you can only change details related to your payment card.\nIf you want to update the billing address that appears on your invoice, you\u0026rsquo;ll need to contact our membership team.\n+- I\u0026#39;m having problems accessing the portal. I\u0026#39;m based in Russia/China/Iraq.\rOur payment portal is provided by a third party, and unfortunately they have blocked access to organizations in Russia, China and Iraq. We recommend that members in these regions use other methods to pay us, or reach out to our billing team to discuss other options.\nContent registration invoices +- Can you explain my content registration invoice to me?\rThere’s a charge for each item you register with Crossref and we invoice for this in arrears - this means you receive the invoice after you’ve registered the content so we know exactly how much to charge.\nThese invoices are usually sent out on a quarterly basis, and cover the deposit fees for the content you’ve registered with us during the previous quarter. However, we do sometimes roll smaller charges on to the next quarter, so you may not receive an invoice every single quarter - and the next quarter you might find charges in your invoice from previous quarters.\nThe information on your Content Registration invoice will look something like this:\nItem Description Unit Quantity Unit price Amount 40801 CY Journal 08/2020: 10.5555 : CY Journal article (users: aelt) EA 30 $1.00 $30.00 40802 BY Journal 08/2020: 10.5555: BY Journal article (users: aelt) EA 10 $0.15 $1.50 40816 CY Book Titles 08/2020: 10.5555: CY Book title (users: aelt) EA 12 $1.00 $12.00 40801 CY Journal 09/2020: 10.5555 : CY Journal article (users: aelt) EA 27 $1.00 $27.00 40802 BY Journal 09/2020: 10.5555: BY Journal article (users: aelt, fort) EA 30 $0.15 $4.50 Sub total $75.00 You’ll see there are different lines on the invoice, and a total at the end.\nYour content registration fees are split out onto separate lines on your invoice by:\nPrefix Month the content was registered Content type Whether the content is current (CY) or backfile (BY). 
This is because there are different charges for different record types, and different charges depending on whether the publication date of the content is current or backfile. Learn more about content registration fees.\nYou can also see on the invoice which role or roles were used to register the content you’re being charged for.\nHere\u0026rsquo;s a bit more information about each section of the invoice.\nShow image\r×\rThis part of the invoice shows the type of content that this charge relates to, and whether the content is current or backfile. In the example above, the charge is for current year (CY) journal articles.\nShow image\r×\rThis part of the invoice shows the month that this content was registered. In the example above, this content was registered in August 2020.\nShow image\r×\rThis part of the invoice shows which prefix the content was registered with. In the example above, the content was registered under the prefix 10.5555.\nShow image\r×\rThis part of the invoice shows which role was used to register the content. In the example above, the role was aelt.\nShow image\r×\rSometimes more than one role has been used to register content. In the example above, both aelt and fort have been used.\nShow image\r×\rAll prices are in USD, and we can only accept payment in USD.\n+- Why haven’t I received a content registration invoice this quarter?\rWe send invoices for the metadata you register with us on a quarterly basis. However, if the amount comes to less than USD 100, we roll it on to the next quarter. If you haven’t reached USD 100 in fees by the last quarter of the year, we send out an invoice anyway.\nThis is to avoid members having to pay lots of ‘small’ invoices, which may incur international charges.\n+- What do the *CY* and *BY* stand for on my content registration invoice?\rCY stands for current content (Current Year), and BY stands for backfile content (Back Year). You’re charged a different amount depending on the record type you’re registering, and also whether the content is current (CY) or backfile (BY).\nCurrent content is anything registered with us with a publication date in the current year, or up to two years previously. For example, in 2024, current content is anything with a publication date in 2024, 2023 or 2022.\nBackfile content is anything registered with us with a publication date older than this. So in 2022, backfile content is anything published in 2019 or earlier. In 2023 this will become anything published in 2020 or earlier.\n+- Why was I charged the CY fee for BY articles?\rContent Registration fees differ according to whether the content you register is current (published during this year or the previous two years) or backfile (older than that).\nA record is determined to be either backfile or current based on the publication date in your metadata. If you have different dates for print and online (for example, if you\u0026rsquo;re registering archival content), then we look at the print date.\nIf you use our web deposit form, the system looks at the information you’ve entered into the publication date field. If you deposit XML directly with us, the system looks at the date in the \u0026lt;publication_date\u0026gt; element. 
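As an illustration of that date check, here is a minimal sketch. The XML fragment is a simplified example of a publication_date element only, and the current/backfile cut-off follows the 'current year or up to two years previously' rule described above; treat it as a sketch rather than our billing logic.

    import xml.etree.ElementTree as ET
    from datetime import date

    # Simplified fragment for illustration - a real deposit carries far more metadata.
    fragment = """
    <publication_date media_type="print">
      <month>06</month>
      <year>2019</year>
    </publication_date>
    """

    year = int(ET.fromstring(fragment).findtext("year"))
    cutoff = date.today().year - 2  # current (CY) = this year or the two previous years
    print("billed as", "current (CY)" if year >= cutoff else "backfile (BY)")
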
And we look at each individual item separately—so even if you’ve put a publication date at the journal level, you still need to put it at the journal article level too.\nIf you’ve been charged \u0026lsquo;current\u0026rsquo; fees for content that is actually backfile, it’s probably because the wrong date was put in the publication date field. We have had instances where members have accidentally put the date they registered the content into that field, rather than the date of publication.\nYou can fix this by updating your metadata with the correct publication dates. Please let us know as soon as you’ve done this so we can provide you with an amended invoice.\n+- It\u0026#39;s January, and I\u0026#39;ve just received a content registration invoice. I haven\u0026#39;t received one before - why have I received one now?\rThere are two sets of fees associated with Crossref membership - the annual membership fee (which covers your membership) and the content registration fees (which are a one-off fee for each item you register with us). The membership invoice is sent out at the beginning of each year to cover the forthcoming year, and the content registration invoices are sent out quarterly in arrears.\nHowever, you won\u0026rsquo;t necessarily receive a content registration invoice every single quarter. If the amount of content you register in a quarter comes to less than USD 100, we roll it on to the next quarter. This is to avoid members having to pay lots of ‘small’ invoices, which may incur international charges. But even if you haven’t reached USD 100 in fees by the last quarter of the year, we send out an invoice anyway. This means that if you\u0026rsquo;ve only registered a small number of DOIs in your first year, you won\u0026rsquo;t receive an invoice until right at the end of the year, in your Q4 invoice which is sent out the following January. So you might go a full year before you receive a content registration invoice from us.\nThere are also a very small number of members who might not receive a content registration invoice from us for a few years. If you\u0026rsquo;ve only registered a couple of DOIs, we won\u0026rsquo;t send you an invoice even in Q4. So if you\u0026rsquo;ve had a few years of registering just a tiny handful of DOIs, you might not receive a content registration invoice for a few years.\nSimilarity Check invoices +- When will I receive my Similarity Check invoices?\rThere are two sets of fees for the Similarity Check service - the annual service fee, and document checking fees.\nAnnual service fee This will be included in the annual membership invoice you receive each January.\nDocument checking fees You will be sent this invoice in January each year.\n+- Why doesn\u0026#39;t my Similarity Check document checking invoice exactly match the number of submissions/documents I\u0026#39;ve checked in iThenticate?\rUsers of our Similarity Check service receive an invoice each January for the documents they\u0026rsquo;ve checked in the previous year. The Similarity Check administrator for each organization can monitor their spend throughout the year by checking the reports section of the iThenticate platform (under the Manage Users tab).\nHowever, sometimes you may see a difference between the number of documents that you\u0026rsquo;re invoiced for and the number that the report in iThenticate tells you that you\u0026rsquo;ve checked. 
There are a few possible reasons for this.\nDocuments above a certain size are considered more than one document\nFor billing purposes, a single document is considered anything of 25,000 words or fewer.\nSo if you check a document of 25,001-50,000 words, it will be considered 2 document checks. If you check a document of 50,001-75,000 words, it will be considered 3 document checks. And so on.\nCheck the \u0026lsquo;documents\u0026rsquo; column in the iThenticate report and not the \u0026lsquo;submissions\u0026rsquo; column as you are invoiced for the number of documents checked and not files submitted.\nNo charge for accidental duplicates\nIf you accidentally check the same document several times, we treat this as a duplicate and don\u0026rsquo;t charge you for it. This includes any documents with exactly the same filename and exactly the same Similarity Score that are submitted within the same 24 hour period. The \u0026lsquo;Manage Users\u0026rsquo; report in iThenticate isn\u0026rsquo;t able to detect this but these are detected and removed from your invoice before we send it.\nThis means that you may see slightly fewer document checks on your invoice than you see in your iThenticate report.\nYour first 100 documents are free of charge\nYour first 100 documents are free of charge, so you\u0026rsquo;ll see 100 fewer document checks on your invoice than you see in your iThenticate report.\nUnpaid invoices and suspensions +- My service has been suspended due to unpaid invoices - can you extend the payment deadline?\rUnfortunately not. A suspension is not a termination of your membership, it just temporarily suspends your ability to register content with us. As soon as payment for past due balances is received, your service will be restored and you will be able to register content again.\nThe Global Equitable Membership Program (GEM) +- What is the Global Equitable Membership Program?\rThe Global Equitable Membership (GEM) program offers relief from membership and content registration fees for members in the least economically-advantaged countries in the world. Eligibility for the program is based on a member’s country. We have curated the list of eligible countries based on the International Development Association list (provided by the World Bank) and excluded anywhere we are bound by international sanctions. Find out more.\n+- When did the GEM Program begin?\rThe GEM Program began on 1st January 2023. Existing members who were eligible for the GEM program were not sent a membership fee invoice for 2023, and will not be charged for content registered after 1st January 2023. They will still be responsible for any fees incurred before 1st January 2023 however, and they will also continue to be invoiced for any optional paid-for services that they subscribe to, such as Similarity Check. Find out more.\nSri Lanka entered the GEM Program in March 2023 after the country was newly added to the IDA list.\n+- Does the GEM Program cover all Crossref fees?\rNot all. The annual membership fee and content registration fees are waived for GEM-eligible members from 1st January 2023. 
But participation in other paid services, such as Similarity Check and Metadata Plus, will be charged at the usual fees.\n+- Do GEM members get sent any invoices?\rWe don\u0026rsquo;t send out annual membership invoices to members who are eligible for the GEM Program, but we do send out zero value content registration invoices so GEM-eligible members have a record of how many DOI records they\u0026rsquo;ve registered with us.\nIf GEM-eligible members subscribe to any additional paid-for services such as Similarity Check, they will be sent an invoice for these services.\nBack to members area\n", "headings": ["Tips for smoother payments ","Billing FAQs ","Initial subscription order for your first year of membership ","General billing processes ","Members","Metadata subscribers","Service providers","1 Credit or debit card payment using our payment portal","2 Other payment methods","The payment portal ","Content registration invoices ","Similarity Check invoices ","Unpaid invoices and suspensions ","The Global Equitable Membership Program (GEM) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/members-area/canceling/", "title": "Canceling your membership", "subtitle":"", "rank": 4, "lastmod": "2022-11-23", "lastmod_ts": 1669161600, "section": "Hello, members", "tags": [], "description": "We expect organizations to remain members of Crossref for the long run - by committing to our membership terms, you’ve committed to the long-term stewardship of your metadata and content.\nHowever, there are sometimes reasons why members need to cancel. Perhaps your organization has ceased publishing, or been acquired, or perhaps you are no longer able to afford the fees.\nIf you can no longer afford the fees, do take a look at our Global Equitable Membership (GEM) Program page before canceling, to see if you qualify for assistance, or if working with a sponsor will help.", "content": "We expect organizations to remain members of Crossref for the long run - by committing to our membership terms, you’ve committed to the long-term stewardship of your metadata and content.\nHowever, there are sometimes reasons why members need to cancel. Perhaps your organization has ceased publishing, or been acquired, or perhaps you are no longer able to afford the fees.\nIf you can no longer afford the fees, do take a look at our Global Equitable Membership (GEM) Program page before canceling, to see if you qualify for assistance, or if working with a sponsor will help.\nImportant: you need to tell us that you want to cancel To cancel, contact us by filling out this form and providing as much information about your account as possible (account name, prefix, etc) and the reason for the cancellation. This is extremely important - you aren’t able to pause your membership, so if you just stop using the service and don’t tell us that you want to cancel, we will still continue to send you annual membership fee invoices. And if you then want to start using the service again in future years, you will need to pay these outstanding membership invoices. But if you actually contact us to cancel your membership, we can stop these annual membership invoices from being created.\nWhat happens to your DOIs after you cancel your membership? We have responsibilities after you cancel your membership. We will ensure that the DOIs that you have already registered with us will continue to exist and to resolve to your registered landing page, and that your metadata will continue to be openly shared through our APIs. 
That’s what makes your DOI a persistent identifier! However, after you cancel, you won’t be able to update this metadata, or register any more DOIs.\nYou also have responsibilities after you cancel your membership. You must ensure that you don’t display any new, unregistered DOIs. You also need to make sure that your existing DOIs continue to resolve to a live landing page - and we can help with that.\nHow to ensure that your DOIs continue to resolve to a live landing page There are different options here depending on your situation.\n1. You’re not going to be publishing any new content but are continuing to host your existing content If you continue hosting your content in the same location, this is fine. The DOIs that you’ve already registered will continue to resolve to the resource resolution URL that you’ve registered with us. If this changes in future, please get in contact. We will be able to help update your resource resolution URLs, even if you are no longer a member.\n2. You’re not going to be publishing any new content and will no longer be hosting your existing content We recommend that all members work with an archive provider. If you can no longer host your content, we will be able to work with the archive to get your resource resolution URLs to resolve to the archive version, ensuring that the DOIs continue to resolve to a version of the content.\n3. One or more of your journals has been acquired by another publisher If your content has been acquired by a different publisher, we will be able to work with you and the acquiring publisher to transfer ownership of your title(s) to the new owner in our system.\nThis will transfer ownership of all your existing DOIs to the new publisher. The new publisher will then be able to update the metadata (and resource resolution URLs) for these existing DOIs, even though these DOIs are based on your DOI prefix. This means that the DOIs that you originally registered for this content will remain the persistent identifier for the content for the long term. And if the new publisher needs to register new DOIs for new content on their own prefix, they can do this. You can read more about our title transfer policy here. Our colleague Isaac also gives more background on this subject in this blog post.\n4. Your organization has been acquired by another publisher If your organization has been acquired by another publisher, we will be able to work with you and the acquiring publisher to transfer ownership of your prefix(es) and existing DOIs to them. They can then choose whether they want to continue to use your prefix to register new DOIs, or transfer your titles to their prefix and use their own prefix for future DOIs. Either way, your existing DOIs will continue to work and should continue to be used. Find out more about prefix transfers.\nBack to members area\n", "headings": ["Important: you need to tell us that you want to cancel","What happens to your DOIs after you cancel your membership?","How to ensure that your DOIs continue to resolve to a live landing page","1. You’re not going to be publishing any new content but are continuing to host your existing content","2. You’re not going to be publishing any new content and will no longer be hosting your existing content","3. One or more of your journals has been acquired by another publisher","4. 
\tYour organization has been acquired by another publisher"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/member-setup/", "title": "Setting up as a member", "subtitle":"", "rank": 1, "lastmod": "2025-02-20", "lastmod_ts": 1740009600, "section": "Documentation", "tags": [], "description": "You need to be a member of Crossref in order to get your Crossref prefix and register your content with us. Membership of Crossref is about more than just registering DOIs - find out more on our membership page. You can apply to join there too.\nAfter you’ve applied for membership and paid your pro-rated membership fee for the remainder of the current year, we set you up with your own Crossref DOI prefix.", "content": "You need to be a member of Crossref in order to get your Crossref prefix and register your content with us. Membership of Crossref is about more than just registering DOIs - find out more on our membership page. You can apply to join there too.\nAfter you’ve applied for membership and paid your pro-rated membership fee for the remainder of the current year, we set you up with your own Crossref DOI prefix. We also help you set up the Crossref account credentials that you’ll use to access our systems and register your content.\nThere are three key steps to getting started, and you can even start step one before you’ve received your new prefix and credentials.\nPrepare to register your content\nRegister and verify\nDisplay your DOIs\nStep 1: Prepare to register your content\na) Choose your content registration method\nIn order to get working DOIs for your content and share your metadata with the scholarly ecosystem, you need to register your content with Crossref.\nYour metadata is stored with us as XML. Some members send us XML files directly, but if you’re not familiar with writing XML files, you can use a helper tool instead. There are three helper tools available - these are online forms with different fields for you to complete, and this information is converted to XML and deposited with Crossref for you. A big decision to make as a new member is which of our content registration methods to use.\nFind the best option for you.\nb) Decide how you’ll construct your DOI suffixes\nA DOI has several sections, including a prefix and a suffix. A DOI will always follow this structure:\nhttps://doi.org/[your prefix]/[a suffix of your choice]\nWe provide you with your prefix, but you decide what’s in the suffix for each of your DOIs when you register them with us. Your DOIs will look something like this: https://doi.org/10.5555/njevzkkwu4i7g\nIf you use the Crossref XML plugin for OJS, it can provide suffixes for you by default, but otherwise you’ll need to decide on your own suffix pattern. It’s important to keep this opaque.\nAs a DOI is a persistent identifier, the DOI string can\u0026rsquo;t be changed after it\u0026rsquo;s been registered. It\u0026rsquo;s therefore important that your DOI string is opaque and doesn\u0026rsquo;t include any human-readable information. This means that the suffix should just be a random collection of characters. It should not include any information about the work that could be changed in the future, to avoid a difference between the information in the DOI string, and the information in the metadata.\nFor example, 10.5555/njevzkkwu4i7g is opaque (and correct), but 10.5555/ogs.2016.59.1.1 is not opaque (and not correct); it encodes information about the publication name and date which may change in the future and become confusing or misleading. 
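For illustration, here is a minimal sketch of one way to generate an opaque suffix along these lines. It is not an official Crossref tool; the length and character set are arbitrary choices, and you would still need to keep track of which suffix you have assigned to which item so that you never reuse one.

```python
import secrets
import string

# Lowercase letters and digits give a suffix with no human-readable meaning.
ALPHABET = string.ascii_lowercase + string.digits

def opaque_suffix(length: int = 13) -> str:
    # A random, human-meaningless string: nothing about the work is encoded.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

prefix = "10.5555"  # the example prefix used in this documentation
print(f"https://doi.org/{prefix}/{opaque_suffix()}")
```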
So don’t include information such as publication name initials, date, ISSN, issue, or page numbers in your suffix string.\nLearn more about constructing your DOIs.\nc) Ensure your landing pages meet the guidelines You’ll need a live, unique landing page on your website for each item you register and this landing page will need to contain specific information\nLearn more about landing pages.\nStep 2: Register and verify a) Set the password on your Crossref account credentials You’ll need a set of Crossref account credentials to access our content registration tools. We\u0026rsquo;ll send you an email so you can set your password.\nb) Register your content You should assign Crossref DOIs to anything that’s likely to be cited in the scholarly literature - journals and journal articles, books and book chapters, conference proceedings and papers, reports, working papers, standards, dissertations, datasets, and preprints.\nBecause DOIs are designed to be persistent, a DOI string can’t be changed once registered, and DOIs can’t be fully deleted. You can always update the metadata associated with a DOI, but the DOI string itself can’t change, and once it’s been registered, it will be included in your next content registration invoice. It’s important that you only register a DOI that you definitely want to use.\nWorking with Crossref is about more than just DOIs. When you register content with us, you do register the DOI and the resolution URL, but you also register a comprehensive set of metadata - rich information about the content. This metadata is then distributed widely and used by many different services throughout the scholarly community, helping with discoverability of your content.\nIf you are registering DOI records for journal articles, you will include metadata about the journal title that this article was published in. When you register your first article for a journal, be really careful about the journal title you enter - this will create a journal title record and any future submissions will have to match this. Your journal title doesn’t have to match the title in the ISSN portal, but if you do want it to match, make sure to check what this is before you register your first item.\nContent registration instructions for helper tools:\nCrossref XML plugin for OJS Web deposit form Content registration instructions for direct deposit of XML:\nUpload XML files using our admin tool XML deposit using HTTPS POST Upload JATS XML using the web deposit form c) Verify your registration When you register your content, you’ll receive a message telling you whether your submission has been successful, or whether there are any problems. If there are problems, your DOI may not be live so do check this message carefully.\nLearn more about how to verify your registration.\nStep 3: Display your DOIs Don’t forget to display your DOI on the landing page of each item you register - this is an obligation of membership. You’ll need to display your DOI as a link like this:\nhttps://doi.org/10.xxxx/xxxxx\nLearn more about Crossref DOI display guidelines.\nHow to get help and support Our support team is available to help if you have any problems, and you may find help from others in our community on our Crossref Forum. We also run regular \u0026ldquo;Ask Me Anything\u0026rdquo; webinars for new members - learn more about our webinars and register to attend.\nWhat happens next? Once you’ve started registering your content with Crossref and displaying your DOIs on your landing pages, it doesn’t stop there. 
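As an informal aid (not a replacement for checking the submission message you receive when you register content), you can sanity-check that a registered DOI displayed as a link actually resolves to your landing page by following it. A minimal sketch, assuming the Python requests library; the DOI below is a hypothetical example.

```python
import requests

# Quick, unofficial check that a registered DOI resolves via doi.org.
doi = "10.5555/njevzkkwu4i7g"  # hypothetical example DOI
response = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=30)
print(response.status_code, response.url)  # expect 200 and your landing page URL
```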
After you first join, we send you a series of onboarding emails to help you through the next stages. If you want to get started straight away, take a look at how to get started constructing your DOIs.\n", "headings": ["Step 1: Prepare to register your content ","a) Choose your content registration method ","b) Decide how you’ll construct your DOI suffixes ","c) Ensure your landing pages meet the guidelines ","Step 2: Register and verify ","a) Set the password on your Crossref account credentials ","b) Register your content ","c) Verify your registration ","Step 3: Display your DOIs ","How to get help and support ","What happens next? "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/org-chart/", "title": "Organisation chart", "subtitle":"", "rank": 1, "lastmod": "2025-02-09", "lastmod_ts": 1739059200, "section": "Our people", "tags": [], "description": "Here is our organisational chart. Also take a look at our people directory and click through to read about each of our roles and areas of responsibility. If you\u0026rsquo;re using Firefox and can\u0026rsquo;t view the interactive chart below, here is static image, correct as of January 2025.\nInteractive org chart ", "content": "Here is our organisational chart. Also take a look at our people directory and click through to read about each of our roles and areas of responsibility. If you\u0026rsquo;re using Firefox and can\u0026rsquo;t view the interactive chart below, here is static image, correct as of January 2025.\nInteractive org chart ", "headings": ["Interactive org chart"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2025/", "title": "2025", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/", "title": "Archives", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/", "title": "Authors", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/", "title": "Blog", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Blog", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/", "title": "Categories", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/martyn-rittman/", "title": "Martyn Rittman", 
"subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/research-integrity/", "title": "Research Integrity", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/rest-api/", "title": "REST API", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/retraction-watch/", "title": "Retraction Watch", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/retraction-watch-retractions-now-in-the-crossref-api/", "title": "Retraction Watch retractions now in the Crossref API", "subtitle":"", "rank": 1, "lastmod": "2025-01-29", "lastmod_ts": 1738108800, "section": "Blog", "tags": [], "description": "Retractions and corrections from Retraction Watch are now available in Crossref’s REST API. Back in September 2023, we announced the acquisition of the Retraction Watch database with an ongoing shared service. Since then, they have sent us regular updates, which are publicly available as a csv file. Our aim has always been to better integrate these retractions with our existing metadata, and today we’ve met that goal.\nThis is the first time we have supplemented our metadata with a third-party data source.", "content": "Retractions and corrections from Retraction Watch are now available in Crossref’s REST API. Back in September 2023, we announced the acquisition of the Retraction Watch database with an ongoing shared service. Since then, they have sent us regular updates, which are publicly available as a csv file. Our aim has always been to better integrate these retractions with our existing metadata, and today we’ve met that goal.\nThis is the first time we have supplemented our metadata with a third-party data source. Until now, our APIs have included metadata provided by Crossref members along with outputs from our internal enrichment workflows, such as matches found for bibliographic reference matching and funders. Third party metadata has been gathered in Event Data, but this has been stored and delivered separately.\nKnowing when work has been retracted is critical for assessing the integrity of research, and this enhancement of the data will be a great benefit to the community.\nWhere does the data come from? Retraction Watch carefully curates retractions, pulling them from several non-Crossref sources, including PubMed and publisher websites. Each entry is manually checked and annotated before being added to the database. 
The high level of curation and broad coverage, together with our shared goal of making changes to metadata more visible, is what made a partnership between Crossref and Retraction Watch attractive.\n\u0026ldquo;Our goal with the Retraction Watch Database has always been for it to be as useful to as many people as possible, and available from as many sources as possible,” says Ivan Oransky, co-founder of Retraction Watch and executive director of The Center For Scientific Integrity, its parent nonprofit organization. “Integration with Crossref’s REST API is a huge step in that direction.”\nWhere can I see the retractions? If you use a service that collects Crossref metadata, you will start to see the Retraction Watch retractions as they are picked up. To access the data directly, you can find retractions from both Crossref members and Retraction Watch in our REST API, for example with the following request for all retractions:\nhttps://api.crossref.org/v1/works?filter=update-type:retraction\nOr for an individual record:\nhttps://api.crossref.org/v1/works/10.1177/17588359231172420\nIn the results here you will see an update-to field:\n\u0026#34;update-to\u0026#34;: [ { \u0026#34;updated\u0026#34;: { \u0026#34;date-parts\u0026#34;: [ [2023,4,22] ], \u0026#34;date-time\u0026#34;: \u0026#34;2023-04-22T00:00:00Z\u0026#34;, \u0026#34;timestamp\u0026#34;: 1682121600000 }, \u0026#34;DOI\u0026#34;: \u0026#34;10.1177/1758835920922055\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;retraction\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;publisher\u0026#34;, \u0026#34;label\u0026#34;: \u0026#34;Retraction\u0026#34; }, { \u0026#34;updated\u0026#34;: { \u0026#34;date-parts\u0026#34;: [ [2023,4,22] ], \u0026#34;date-time\u0026#34;: \u0026#34;2023-04-22T00:00:00Z\u0026#34;, \u0026#34;timestamp\u0026#34;: 1682121600000 }, \u0026#34;DOI\u0026#34;: \u0026#34;10.1177/17588359231172420\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;retraction\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;retraction-watch\u0026#34;, \u0026#34;label\u0026#34;: \u0026#34;Retraction\u0026#34;, \u0026#34;record-id\u0026#34;: 44124 } ] The source field states where the retraction came from. Currently, it can have two values: publisher or retraction-watch. Note that the same retraction may be included multiple times from different sources.\nRetraction Watch retractions will remain available on GitLab in csv format and be updated on working days. The record-id refers to the entry in the csv file with further details, such as the reason for retraction.\nThere is full documentation available for the Crossref REST API, and if you are new to REST APIs, see our learning hub to get started; it includes a tutorial about accessing retractions.\nWhat can I do with the retractions? Like the rest of our metadata, the retractions are freely available. If you use or operate a tool that ingests retractions, the new entries will start to be picked up immediately. The Retraction Watch database includes a larger number of retractions than the Crossref database, so you should see an increase in the total.\nWe have heard from organisations that would like to build new research integrity tools based on this data. 
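To make the API usage above concrete, here is a minimal sketch that pulls a few retraction notices and prints where each one came from. It assumes the Python requests library; the rows and mailto parameters are standard REST API query parameters, and the email address is a placeholder you would replace with your own.

```python
import requests

# Fetch a handful of works carrying retraction updates from the Crossref REST API,
# using the filter shown above.
url = "https://api.crossref.org/v1/works"
params = {"filter": "update-type:retraction", "rows": 5, "mailto": "you@example.org"}
items = requests.get(url, params=params, timeout=30).json()["message"]["items"]

for work in items:
    for update in work.get("update-to", []):
        # "source" is either "publisher" or "retraction-watch";
        # "record-id" (Retraction Watch only) points at the entry in the csv file.
        print(work["DOI"], update.get("source"), update.get("record-id"))
```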
We look forward to seeing the benefits brought by wider availability of the Retraction Watch retractions, and how they can provide better context to research outputs.\nWhile Crossref metadata is freely available to reuse without a license, if you make use of the Retraction Watch retraction metadata in a published work, we kindly request that you provide a citation to the source.\nIf you have questions or comments, please head over to the section of our forum dedicated to integrity of the scholarly record.\n", "headings": ["Where does the data come from?","Where can I see the retractions?","What can I do with the retractions?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/community/", "title": "Community", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/ed-pentz/", "title": "Ed Pentz", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/posi/", "title": "POSI", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/posi-2.0-feedback/", "title": "POSI 2.0 feedback", "subtitle":"", "rank": 1, "lastmod": "2025-01-28", "lastmod_ts": 1738022400, "section": "Blog", "tags": [], "description": "As a provider of foundational open scholarly infrastructure, Crossref is an adopter of the Principles of Open Scholarly Infrastructure (POSI). In December 2024 we posted our updated POSI self-assessment. POSI provides an invaluable framework for transparency, accountability, sustainability and community alignment. There are 21 other POSI adopters.\nTogether, we are now undertaking a public consultation on proposed revisions for a version 2.0 release of the principles, which would update the current version 1.", "content": "As a provider of foundational open scholarly infrastructure, Crossref is an adopter of the Principles of Open Scholarly Infrastructure (POSI). In December 2024 we posted our updated POSI self-assessment. POSI provides an invaluable framework for transparency, accountability, sustainability and community alignment. There are 21 other POSI adopters.\nTogether, we are now undertaking a public consultation on proposed revisions for a version 2.0 release of the principles, which would update the current version 1.1 of the principles, released in November 2023.\nThis is a crucial step in ensuring that POSI evolves to meet the needs of the community. 
Whether you are part of an organization that has adopted POSI, is considering adoption, interacts with POSI-aligned groups, or you have a personal interest in open scholarly infrastructure, your perspective is invaluable.\nSome additional context about POSI POSI is not an organization; POSI adopters are an informal group of those that have conducted self-assessments.\nThe POSI principles are not rules or a checklist; organizations or groups can adopt or interpret them to fit many different circumstances.\nOur goal is for POSI self-assessments to be made publicly available and for interested communities to assess and monitor updates and progress.\nHow to Participate If your organization has adopted POSI, is considering adoption, interacts with POSI-aligned groups, or you have a personal interest in open scholarly infrastructure, your perspective is invaluable.\nReview the Proposed POSI 2.0 Revisions.\nShare your thoughts via our short survey.\nDeadline: March 5, 2025\nTogether, we can shape the future of open scholarly infrastructure. Join the conversation and make your voice heard!\n", "headings": ["Some additional context about POSI","How to Participate"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/sustainability/", "title": "Sustainability", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2025-01-16-data-scientist/", "title": "Data Scientist", "subtitle":"", "rank": 1, "lastmod": "2025-01-16", "lastmod_ts": 1736985600, "section": "Jobs", "tags": [], "description": "Applications for this position will be closed on February 5, 2025. Are you interested in using data to understand the scholarly landscape better and help the scholarly community? Would you like to help Crossref make better-informed decisions? Join us as a Data Scientist.\nLocation: Remote and global (to partially overlap with working hours in European timezones) Type: Full-time Remuneration: 120k USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer.", "content": " Applications for this position will be closed on February 5, 2025. Are you interested in using data to understand the scholarly landscape better and help the scholarly community? Would you like to help Crossref make better-informed decisions? Join us as a Data Scientist.\nLocation: Remote and global (to partially overlap with working hours in European timezones) Type: Full-time Remuneration: 120k USD or local equivalent. Note this is a general guide (as there is no universal currency) and local currency analysis will take place before the final offer. Reports to: Director of Data Science, Dominika Tkaczyk Timeline: Advertise in January/February and offer by February/March About the role Crossref operates an open infrastructure that connects thousands of scholarly publishers, millions of research articles, and research objects to serve an increasingly diverse set of communities within scholarly publishing, research, funding, and beyond. Our system acts as the backbone for preserving and sharing the scholarly record. We offer a wide array of services to ensure that scholarly research metadata is registered, linked, and distributed. 
When members register their content with us, we collect both bibliographic and non-bibliographic metadata. We process it so that connections can be made between publications, people, organizations, and other associated outputs. We preserve the metadata we receive as a critical part of the scholarly record. We also make it openly available across a range of interfaces and formats so that the community can use it and build tools with it.\nOver the last few years, we have witnessed substantial growth in the scholarly community, which has been reflected in the increase in the volume and variety of the data we deal with. On the one hand, this growth opens new possibilities for using the data to understand the scholarly landscape better, better serve the community, and make more informed decisions in a data-driven way. On the other hand, we are facing challenges related to the scale and complexity of the data. To fulfil our data-related ambitions and better address the challenges, Crossref has introduced a new Data Science team. The Data Science team will use scientific research and data science to deliver, assess, improve, and enrich scholarly metadata.\nThe Data Science team will provide in-house data expertise to the Programs and Technology teams relating to system improvements, community impact, metadata enrichment, and other key initiatives. We work in matrix program groups across three areas of focus: Co-create and reflect community trends; Contribute to the research nexus; and Open \u0026amp; sustainable operations. The Data Scientists will be embedded in program steering groups.\nWe are looking for two Data Scientists to join our Data Science team. The roles will have different focuses:\nData analysis \u0026amp; insights: The first role will be responsible for processing and analyzing the scholarly and operational data to help the scholarly community and inform Crossref’s strategy and decisions, as well as proposing new ideas for how we can use the data to fulfil our mission. This role will closely collaborate with all other Crossref teams to co-create ideas and transform them into new knowledge and working solutions. Data availability \u0026amp; engineering: The second role will be responsible for detecting and assessing issues and gaps in the scholarly metadata, as well as researching strategies to increase the completeness and accuracy of the metadata and relationships, using internal and external data sources. This role will collaborate with the Technical and Program teams on transforming the research results into production-level services and workflows. 
Key responsibilities\nData Analysis \u0026amp; Insights\nWorking with scholarly metadata and Crossref operational data to answer questions and gather evidence supporting or disproving hypotheses\nDetecting, diagnosing and assessing problems and gaps in the scholarly metadata using automated and semi-automated techniques\nGathering insights from available data to help Crossref make well-informed strategic decisions\nAnalyzing trends and monitoring the results of various decisions and policies\nResearching and proposing new data sources and research opportunities that help to support Crossref’s strategy\nEvaluating and adopting appropriate data analysis tool(s) for the organisation to use for insights and reporting\nPresenting the insights and new knowledge learned through data science activities internally and externally\nCollaborating with all Crossref teams to understand their needs, co-create ideas and research questions, and propose data-driven approaches to address them\nCollaborating with the data science and academic research community in the fields of bibliometrics, scientometrics, digital libraries, and similar\nEngaging with members, users, and partner organisations to understand trends and needs, and contribute to others’ community initiatives and awareness\nImplementing and promoting good practices around research, data management, data governance, and transparency\nData Availability \u0026amp; Engineering\nDetecting, diagnosing and assessing problems and gaps in the scholarly metadata using automated and semi-automated techniques\nResearching automatic and semi-automatic strategies to increase the completeness and accuracy of the metadata and relationships, for example, through data cleaning, metadata matching, metadata extraction from unstructured sources\nUsing evaluation techniques to estimate the quality of automated strategies\nProposing additional metadata sources, assessing the overlap between different databases and researching strategies for metadata merging\nCollaborating with the Metadata team on modelling of the metadata gathered from multiple sources and inferred automatically, considering provenance information\nCollaborating with the Technology and Program teams on transforming the research results into production-level services\nCommunicating the insights and new knowledge learned through data science activities internally and externally\nCollaborating with the data science and academic research community in the fields of bibliometrics, scientometrics, digital libraries, and similar\nEngaging with members, users, and partner organisations to understand trends and needs, and contribute to others’ community initiatives and awareness\nImplementing and promoting good practices around research, data management, data governance, and transparency\nAbout you\nEssential experience and skills:\nMinimum 3 years of hands-on experience in data science, data engineering, applied research, or similar\nProven track record of designing, running, and communicating data science experiments\nExperience with using and developing data science-based tools and services\nExperience with software and data engineering\nStrong analytical and problem-solving skills\nExpertise in Python programming language\nFamiliarity with machine learning concepts and methods\nFamiliarity with relational databases and REST APIs\nWillingness to learn new skills and work with a variety of technologies\nAbility to work independently in a self-directed way while consulting with others and collaborating openly\nAbility to plan and project manage i.e. think ahead, outline goals, and organize steps to achieve the desired outcomes\nGood communication skills with the ability to explain technical concepts to non-technical audiences\nAwareness of the limitations of data e.g. relating to cultural or geographic biases\nNice-to-have skills:\nExperience with scholarly metadata\nExperience with metadata modelling\nKnowledge of the dynamics of research communications and relevant communities\nExperience with integrating data from multiple sources\nFamiliarity with JSON and mixed-content model XML\nExperience with natural language processing techniques\nExperience with statistical inference and sampling\nExperience with large-scale data processing frameworks such as Spark\nExperience with AWS services\nExperience with search engines such as Elasticsearch\nExperience with deploying and maintaining machine learning solutions in production\nExperience with data visualization tools\nAbout Crossref \u0026amp; the team\nWe’re a nonprofit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide— which we take a bit more seriously than ourselves. We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nWe can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart, the latest Annual Meeting recordings, and our financial information here.\nThinking of applying? We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications. 
You can be based anywhere in the world where we can employ staff, either directly or through an employer of record.\nClick here to apply!\nPlease strive to submit your application by February 5, 2025.\nAnticipated salary for this role is approximately 120k USD-equivalent, paid in local currency. Crossref offers competitive compensation, benefits, flexible work arrangements, professional development opportunities, and a supportive work environment. As a nonprofit organization, we prioritize mission over profit.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","Data Analysis \u0026amp; Insights","Data Availability \u0026amp; Engineering","About you","About Crossref \u0026amp; the team","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/adam-buttrick/", "title": "Adam Buttrick", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/data-science/", "title": "Data Science", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/dominika-tkaczyk/", "title": "Dominika Tkaczyk", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/linking/", "title": "Linking", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/metadata/", "title": "Metadata", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": 
"https://0-www-crossref-org.libus.csd.mu.edu/categories/metadata-matching/", "title": "Metadata Matching", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-matching-beyond-correctness/", "title": "Metadata matching: beyond correctness", "subtitle":"", "rank": 1, "lastmod": "2025-01-08", "lastmod_ts": 1736294400, "section": "Blog", "tags": [], "description": "https://0-doi-org.libus.csd.mu.edu/10.13003/axeer1ee\nIn our previous entry, we explained that thorough evaluation is key to understanding a matching strategy\u0026rsquo;s performance. While evaluation is what allows us to assess the correctness of matching, choosing the best matching strategy is, unfortunately, not as simple as selecting the one that yields the best matches. Instead, these decisions usually depend on weighing multiple factors based on your particular circumstances. This is true not only for metadata matching, but for many technical choices that require navigating trade-offs.", "content": " https://0-doi-org.libus.csd.mu.edu/10.13003/axeer1ee\nIn our previous entry, we explained that thorough evaluation is key to understanding a matching strategy\u0026rsquo;s performance. While evaluation is what allows us to assess the correctness of matching, choosing the best matching strategy is, unfortunately, not as simple as selecting the one that yields the best matches. Instead, these decisions usually depend on weighing multiple factors based on your particular circumstances. This is true not only for metadata matching, but for many technical choices that require navigating trade-offs. In this blog post, the last one in the metadata matching series, we outline a subjective set of criteria we would recommend you consider when making decisions about matching.\nOpenness Matching tools come in many different shapes and sizes: web applications, APIs, command-line tools, sometimes even enchanted crystal balls showing matched identifiers emerging from a mysterious mist! No matter what form they take, an important consideration is whether the source code and all the related resources for the matching are openly available.\nMatching strategies that are either closed-source, or rely on closed-source services for their matching logic, make it difficult to fully understand and explain matching processes. This lack of transparency also makes it impossible to adjust or improve the matching logic, since we cannot understand or improve code we cannot see.\nUsers are similarly impeded from identifying flaws or suggesting improvements to processes they are unable to examine. By blocking this community participation, we also lose the proven cycle of real-world testing, refinement, and validation that has strengthened myriad of open source projects. The cumulative impact of both minor and major community-driven refinements over time is incredibly valuable and should not be underestimated.\nUsing open source matching will also help build trust in the matching workflows and results. 
This is one reason why open source is one of the tenets of the Principles of Open Scholarly Infrastructure, adopted by Crossref, DataCite, ROR, and other organizations who build and maintain open scholarly infrastructure.\nWhen evaluating matching strategies, we strongly recommend prioritizing those that are fully open source. This not only ensures their transparency and trustworthiness, but also allows for the kind of continuous improvement that results from this visibility and community engagement.\nExplainability In terms of our ability to understand and improve a matching strategy, using an open source model is only the first step. What typically matters most in the context of building and maintaining matching services is that we are able to understand their underlying code and have a clear model of how matches are derived from their corresponding inputs. Even if the matching code itself and all of the resources used in the matching are open, if they are poorly documented, lack reproducibility or tests, or are otherwise opaque, there is no guarantee that it will be possible to understand or improve the strategy. Striving for a high level of interpretability in our matching plays a determinative role in how well we can understand and modify our strategies in the future.\nBeing able to explain the behaviour of the matching will also help you to respond to and incorporate user feedback. When users encounter errors, you will be able to do things like advise them on how to modify or clean their inputs so that the results are better. Conversely, examining the behaviour of the strategy relative to user inputs and feedback can provide you with ideas for improving the matching.\nTypically, heuristic-based strategies, such as those that use forms of search or string similarity measures, like edit distance, are easier to explain than, say, machine learning models. If a strategy uses machine learning, at least some internal decisions might be made by passing data through a complex network of algebraic equations. Those can be mysterious, non-deterministic, and are famous for being hard to interpret. This doesn\u0026rsquo;t mean they should be avoided entirely - we have built and use many machine-learning based tools ourselves! Instead, it is a good idea to weigh how their inherent lack of explainability could affect your ability to continue work on the strategy and respond to user needs, relative to all the available options.\nComplexity Complexity is another aspect that can greatly affect how easy it is to maintain the strategy. Complexity is related to how many different components the strategy has and how difficult they are to use and maintain. When a strategy has multiple interconnected parts, each component becomes a potential failure point that requires discrete assessment and maintenance.\nConsider, for example, two different approaches to a matching strategy: one that uses a single machine learning model versus another that uses an ensemble of models. A single model requires maintaining one set of training data, a single training pipeline, and one deployment process. If the model\u0026rsquo;s performance unexpectedly deteriorates, whether because of an issue with the training data, a configuration error, or the need for additional input sanitization, the source of the problem is easier to isolate and fix.\nThe ensemble, by contrast, combines multiple, specialized models, each requiring its own training data, tests, updates, and deployments. 
If one model in the ensemble is found to reduce the performance of the strategy, the interdependence between models can cause this degradation to cascade through the entire system and undermine its overall reliability. Correcting for these errors becomes more challenging. If fixing one model\u0026rsquo;s performance requires retraining or adjusting its outputs, this could require recalibrating the entire ensemble to maintain the balance between models, identify regressions, and prevent new errors from emerging.\nIn general, preferring simpler strategies not only reduces operational overhead, but also makes it easier to diagnose issues, test changes, and iterate on user feedback. When problems arise, having fewer moving parts means fewer places to look for the root cause and fewer components that could be affected by any fixes.\nFlexibility The metadata to which we match grows and changes over time. New records are created, existing ones are updated, with schemas changing and evolving alongside. The resources that underlie our matching are also not static. The libraries we depend on may deprecate features between versions, or the taxonomies we used to categorize results might undergo significant revisions. We thus rarely have the luxury of deploying a matching strategy once and using it forever without any changes. A good strategy has to be flexible enough to adapt to such changes, with this adaptation also being both technically feasible and practical to implement.\nMuch of this flexibility is also determined by a matching strategy\u0026rsquo;s ability to incorporate new data. Strategies that use continuously updated databases or indices can immediately match against new metadata as it appears in the system. By contrast, some machine learning-based approaches require training on target matches and can thus be limited in flexibility and face more constraints. While some models can be incrementally updated to recognize new matches, others require retraining from scratch to incorporate these changes - a process that can be both time-consuming and resource-intensive.\nPaying close attention to a strategy\u0026rsquo;s flexibility and favoring this aspect, when possible, can significantly impact its long-term viability. When comparing different matching strategies, flexibility should thus be a primary concern in your decision-making process.\nResources Matching strategies can vary significantly in their resource requirements, including things like CPU and GPU utilization, memory consumption, storage capacity, and network bandwidth. These requirements are directly related to infrastructure costs and energy consumption, so when evaluating a matching strategy, it is necessary to assess its resource demands across all phases of the matching lifecycle. This includes things like initial model training, re-training, index construction, updates and management for all aspects of the strategy, as well as the real-world processing of matching requests. It is a good idea to measure and monitor resource usage carefully in considering which strategies to use, as the best-performing strategy may also be too resource-intensive to run as a service or might grow to this state over time with additional utilization.\nSpeed Matching strategies can operate at a wide range of speeds, from milliseconds to minutes per match. 
Since the overall response time of a strategy can affect both system scalability and user experience, we should always assess the strategy\u0026rsquo;s performance for different usage scenarios and scales of data. While some strategies might perform adequately with small datasets, they can also exhibit exponential slowdowns as data volume and complexity increase or as concurrent requests grow in number. We should therefore consider carefully how requirements for matching speed might evolve with increased usage, data complexity, and total anticipated growth. The fastest matching strategy might not always be the best choice if it comes at the cost of reduced accuracy or requires large amounts of resources, but unacceptable latency can make an otherwise excellent strategy unusable in practice for many use cases.\nPutting it all together The typical life cycle of developing a metadata matching strategy is as follows:\nScoping: we define the matching task, along with its inputs and outputs. Research: we research what existing strategies are available for our task and/or we develop our own. Evaluation: we evaluate all available strategies, internally- or externally-developed, exploring all of the aspects described above. Decision: we choose which strategy (if any) we want to use in our production system. Production setup: we prepare the production models, indexes, and other resources needed for the matching. Maintenance: we monitor and adapt the strategy relative to changing data, user feedback, and new resource requirements. In practice, these phases do not happen all at once, nor in this strict order. Often we need to proceed through multiple iterations of them to arrive at the best strategy. For example, if initial evaluation of a strategy yields poor results, we might return to the research phase to investigate other strategies or refine our understanding of the task. Often, during the maintenance phase, we receive feedback from users that indicates potential areas of improvement and then pursue them with a new round of research and evaluation.\nAs we cycle through these phases, ideally all the aspects described in this entry, along with the results of the evaluation, would be taken into account. Of course, this means that these decisions have to be based on multiple criteria, weighing performance against all other considerations. In making these complex and difficult choices, it is useful to consider two primary questions:\nAre any of the considered matching strategies good enough for our use case? Out of all the considered strategies that are sufficient for our use case, which would be the best? The first question requires us to create clear and quantifiable criteria that allow for eliminating some of the potential strategies. As we have indicated, these could include things like the strategy being open source, minimum performance baselines using measures like precision or recall, and operational thresholds, like the strategy being able to return results quickly, relative to user expectations or the volume of data to be processed. It should be fairly easy to test these requirements and eliminate any strategies that fall short of them. If the strategies are difficult to assess, that is likely a mark against them.\nIf no strategies meet these criteria, we have two options: either to abandon matching entirely or to reassess and relax our criteria to align with the available options. 
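To make this two-step process concrete, here is a minimal sketch of filtering candidate strategies against hard requirements and then ranking the survivors. The candidate data, thresholds, and scoring weights are all hypothetical and purely illustrative; real criteria would reflect your own evaluation results and operational constraints.

```python
# Hypothetical sketch of the two questions above: (1) eliminate strategies
# that fail hard requirements, (2) rank the remaining ones by preference.
candidates = [
    {"name": "search-heuristic", "open_source": True,  "precision": 0.93, "recall": 0.88, "p95_latency_s": 0.4},
    {"name": "ml-ensemble",      "open_source": True,  "precision": 0.95, "recall": 0.90, "p95_latency_s": 3.2},
    {"name": "vendor-api",       "open_source": False, "precision": 0.97, "recall": 0.91, "p95_latency_s": 0.8},
]

# Question 1: is a strategy good enough for our use case? (illustrative thresholds)
def meets_requirements(c):
    return (c["open_source"]
            and c["precision"] >= 0.90
            and c["recall"] >= 0.85
            and c["p95_latency_s"] <= 1.0)

# Question 2: among sufficient strategies, which do we prefer?
# Here correctness is traded off against latency; real weights would also
# account for complexity, flexibility, and resource costs.
def preference_score(c):
    return 0.6 * c["precision"] + 0.4 * c["recall"] - 0.1 * c["p95_latency_s"]

sufficient = [c for c in candidates if meets_requirements(c)]
best = max(sufficient, key=preference_score) if sufficient else None
print(best["name"] if best else "no strategy meets the requirements")
```

In this toy example, the closed-source strategy and the slow ensemble are eliminated outright even though they score slightly higher on correctness, which reflects the point that small metric differences should not outweigh hard requirements.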
While abandoning matching entirely is always an option, adopting a more pragmatic lens and framing the decision in terms of potential value (or harm) to the users might be beneficial. Sometimes we approach matching tasks with expectations that are too high, and a dose of realism helps us to re-center our perspectives. After more consideration, you might decide that your criteria were too stringent or realize that you need to better define and decompose the tasks to fit the available options.\nWhen multiple strategies appear viable, the selection process becomes more nuanced. When evaluating strategies across these various dimensions, we should try to avoid placing undue weight on minor performance differences. Evaluation metrics are useful estimates of performance, but do not always translate to real-world applications and changing data. In cases where a more complex strategy offers only marginal improvements over a simpler alternative, the maintenance and operational benefits of the simpler solution often outweigh small performance gains.\nThis concludes our series on metadata matching, where we described the conceptual, product, and technical aspects of matching and its applications. We hope this overview was instructive and helps you to make better decisions about the use of matching in your own tools and services!\n", "headings": ["Openness","Explainability","Complexity","Flexibility","Resources","Speed","Putting it all together"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/fees/", "title": "Fees", "subtitle":"", "rank": 1, "lastmod": "2025-01-06", "lastmod_ts": 1736121600, "section": "Fees", "tags": [], "description": "We have different fees for different kinds of community participation. These include annual membership fees and content registration fees for members, optional member fees for Similarity Check, and annual subscription fees for Metadata Plus.\nOur Membership and Fees Committee regularly reviews our fees and makes recommendations to our Board. In July 2019 our board voted to approve our fee principles. We haven\u0026rsquo;t increased the regular annual membership or content registration fees since 2004, but please note that fees may be changed soon as part of the Resourcing Crossref for Future Sustainability (RCFS) Program that started in 2024.", "content": "We have different fees for different kinds of community participation. These include annual membership fees and content registration fees for members, optional member fees for Similarity Check, and annual subscription fees for Metadata Plus.\nOur Membership and Fees Committee regularly reviews our fees and makes recommendations to our Board. In July 2019 our board voted to approve our fee principles. We haven\u0026rsquo;t increased the regular annual membership or content registration fees since 2004, but please note that fees may be changed soon as part of the Resourcing Crossref for Future Sustainability (RCFS) Program that started in 2024.\nMember fees If you want to get a Crossref DOI prefix for your organization so you can create unique and persistent links and distribute metadata through us, you’ll need to become a member. Most members pay an annual fee at the beginning of each calendar year plus a one-time fee for each content item registered. 
There are also fees for the optional Similarity Check service.\nAnnual membership fees If you are a member through a sponsor, your sponsor will pay membership fees on your behalf (but they may also charge you for their services - more here).\nIf you are eligible for the GEM programme, you will not pay annual membership fees - more here.\nAll other members pay an annual membership fee to Crossref. There are different membership fee tiers depending on whether you will be registering published content with us, or whether you are a funder and will be registering grants. These fees are tiered, depending on your publishing revenues or expenses, or the value of the grants that you award.\n(IMPORTANT: your membership fee gives you access to our services, but DOES NOT include content registration. There are separate content registration fees payable.)\nAnnual membership fee tiers (for organizations registering published content) Our membership fees for organizations who will be registering published research outputs are tiered depending on the publishing revenue or expenses of your organization. Please select your tier from the table below, and use the higher number of either:\nTotal annual publishing revenue from all the divisions of your organization (the member is considered to be the largest legal entity) for all types of activities (advertising, sales, subscriptions, databases, article charges, membership dues, etc). Or, if no publishing revenue then: Total annual publishing operations expenses including (but not limited to) staff costs, hosting, outsourcing, consulting, typesetting, etc. Total publishing revenue or expenses Annual membership fee \u0026lt;USD 1 million USD 275 USD 1 million - USD 5 million USD 550 USD 5 million - USD 10 million USD 1,650 USD 10 million - USD 25 million USD 3,900 USD 25 million - USD 50 million USD 8,300 USD 50 million - USD 100 million USD 14,000 USD 100 million - USD 200 million USD 22,000 USD 200 million - USD 500 million USD 33,000 \u0026gt;USD 500 million USD 50,000 Annual membership fees (for funders who will be registering grants) Since 2019, funders are able to register DOIs for the research grants they have awarded. Their annual membership fees are lower than for all other members, but their grant registration fees are higher. Funder fees are tiered depending on your annual award value.\nTotal annual award value (USD) Annual membership fee \u0026lt; 500k USD 200 0.5-2 million USD 400 2.1-10 million USD 600 10.1-500 million USD 800 500.1 million - 1 billion USD 1,000 \u0026gt; 1 billion USD 1,200 Content Registration fees Content Registration (metadata deposit) fees are one-time fees for the initial registration of DOI metadata records with us, and they are usually billed quarterly in arrears. If you are a member through a sponsor, your sponsor will pay registration fees on your behalf (but they may also charge you for their services - more here). If you are eligible for the GEM program, you will not pay registration fees - more here.\nThere are different fees for different record types, and some record types have discounts for older content - newer items are charged at the \u0026ldquo;current record\u0026rdquo; price, and older items are charged at the \u0026ldquo;backfile record\u0026rdquo; price. 
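As a quick illustration of the current versus backfile distinction, here is a minimal sketch of how the per-record fee for a journal article could be derived; the function name and the three-year window it encodes follow the note beneath the fee table below, and the amounts are the journal-article rates listed there. It is illustrative only, not an official fee calculator.

```python
def journal_article_registration_fee(publication_year: int, registration_year: int) -> float:
    """Illustrative only: 'current' records are those published in the
    registration year or the two preceding years; older items fall into
    the backfile, as described in the note beneath the fee table below."""
    is_current = registration_year - publication_year <= 2
    return 1.00 if is_current else 0.15  # USD, journal-article rates from the table below

# An article published in 2023 and registered in 2025 is still a current record:
assert journal_article_registration_fee(2023, 2025) == 1.00
# One published in 2022 and registered in 2025 is charged at the backfile rate:
assert journal_article_registration_fee(2022, 2025) == 0.15
```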
Some record types also have volume discounts available.\nAfter the initial registration fee, there are no further fees for updating the metadata associated with existing records.\nContent registration fees by record type Record type Registration fee per current record Registration fee per backfile record Volume discounts? Journal articles, book titles, conference proceedings and conference papers, technical reports and working papers, theses and dissertations USD 1.00 USD 0.15 No Peer Reviews (registered by the title owner) USD 0.25 USD 0.25 Yes - see more Peer Reviews (registered by an organization other than the title owner) USD 1.00 USD 1.00 Yes - see more Grants USD 2.00 USD 0.30 No Preprints USD 0.25 USD 0.15 Yes - see more Book Chapters USD 0.25 USD 0.15 Yes - see more Standards USD 0.15 USD 0.15 No Databases and datasets USD 0.06 USD 0.06 Yes - see more Components USD 0.06 USD 0.06 Yes - see more *Note: \u0026ldquo;Current record\u0026rdquo; prices are for content that was published (or awarded, in the case of grants) in the current calendar year + the previous two calendar years, and \u0026ldquo;Backfile record\u0026rdquo; prices are for content that is older than that. So for records registered with Crossref in 2025, current records are anything registered with publication/award dates of 2025, 2024, and 2023. Backfile is anything registered with publication/award dates of 2022 or before.\nVolume discounts: peer reviews If you register multiple peer reviews for the same article, then volume discounts apply. The prices and discounts are different depending on whether you are the publisher of the article being reviewed. Volume discounts don’t apply across reviews for different articles.\nRegistered by the title publisher\nTotal number of registered DOIs per article Registration fee per record (current and backfile) First peer review against single article USD 0.25 Second and all further peer reviews against same article USD 0.00 Registered by an organization that is not the publisher of the title being reviewed\nTotal number of registered DOIs per article Registration fee per record (current and backfile) First peer review against single article USD 1.00 Second peer review against same article USD 0.25 Third and all further peer reviews against same article USD 0.00 Volume discounts: preprints and other posted content Total number of registered DOIs per quarter Registration fee per record (current) Registration fee per record (backfile) 0-1000 USD 0.25 USD 0.15 1,001 - 10,000 USD 0.25 USD 0.12 10,001 - 100,000 USD 0.25 USD 0.06 100,001 and up USD 0.25 USD 0.02 Volume discounts: book chapters and reference entries for a single title If you\u0026rsquo;re depositing a lot of chapters or reference works for the same title in the same quarter year, the following discounts apply:\nTotal number of registered DOIs per title per quarter Registration fee per record (current) Registration fee per record (backfile) 0-250 USD 0.25 USD 0.15 251-1,000 USD 0.15 USD 0.15 1,001-10,000 USD 0.12 USD 0.12 10,001-100,000 USD 0.06 USD 0.06 100,001 and up USD 0.02 USD 0.02 The higher tiers are for encyclopaedias, in case you\u0026rsquo;re wondering how a single title could possibly have 100,000 chapters!\nVolume discounts: datasets and components If you\u0026rsquo;re registering a lot of components or datasets for a single title in the same quarter year, the following discounts apply. 
There is no difference in price between current and backfile records for datasets and components.\nTotal number of registered DOIs per title per quarter Registration fee per record (current and backfile) 1-10,000 USD 0.06 10,001-100,000 USD 0.03 100,001-1,000,000 USD 0.02 1,000,001-10,000,000 USD 0.01 10,000,001 and up USD 0.005 Similarity Check fees Members are able to participate in Similarity Check. As a participant, you pay an annual service fee to use Similarity Check plus a per-document charge each time you check a document. Even members who are part of the GEM Program will pay these fees.\nSimilarity Check annual service fee Members of Crossref can participate in Similarity Check. The service fee is 20% of your annual membership fee and is included in your annual membership fee invoice.\nTotal annual revenue/expenses Crossref annual membership fee Similarity Check annual subscription fee \u0026lt;USD 1 million USD 275 USD 55 USD 1 million - USD 5 million USD 550 USD 110 USD 5 million - USD 10 million USD 1,650 USD 330 USD 10 million - USD 25 million USD 3,900 USD 780 USD 25 million - USD 50 million USD 8,300 USD 1,660 USD 50 million - USD 100 million USD 14,000 USD 2,800 USD 100 million - USD 200 million USD 22,000 USD 4,400 USD 200 million - USD 500 million USD 33,000 USD 6,600 \u0026gt;USD 500 million USD 50,000 USD 10,000 Similarity Check per-document fees Each document run through Similarity Check is charged at a per-document-checking fee, and there are volume discounts. There is a separate invoice for Similarity Check document checking fees. This invoice is sent annually in January for the previous year\u0026rsquo;s usage.\nNumber of documents checked per year Price per document up to 25,000 words 1 - 100 USD 0.00 101 - 2,000 USD 0.75 2,001 - 25,000 USD 0.65 25,001 - 50,000 USD 0.55 50,001 - 100,000 USD 0.45 100,001 - 200,000 USD 0.30 \u0026gt;200,001 USD 0.25 Metadata Plus subscriber fees If you want to get and use our metadata, you don’t need to be a member. We have a free public REST API that you can use without contacting us. Metadata Plus is a dedicated pool of servers and is a good option if you want a more predictable service level than we can support with the free API, and it comes with monthly snapshots of XML or JSON data dumps for you to do more flexible and customised analyses. There is an annual subscription fee which is payable each January or will be pro-rated for the remainder of the calendar year.\nPlease select your tier from the table below, using the following criteria:\nThe fee tier is selected based on whichever is the higher between\nyour total annual revenue (including earned and fundraised, e.g. grants); or your annual operating expenses (including staff and non-staff, e.g. occupancy, equipment, licenses etc.). The subscriber is always considered to be the largest legal entity, unless:\nyou are a university, then the subscriber is considered to be the university department(s), school(s), or faculty(ies) that is using the service e.g. Department of Earth Sciences, School of Law, or Faculty of Medicine. you are a government body, then the subscriber is considered to be the agency(ies), department(s), or ministry(ies) that is using the service, e.g. Agence de l\u0026rsquo;Innovation Industrielle, Department of Energy, or Ministry of Consumer Affairs. 
Total annual revenue/funding or operating expenses Annual fee \u0026lt; USD 500,000 USD 550 USD 500,001-USD 999,999 USD 2,200 USD 1 - 5 million USD 3,300 USD 5,000,001-USD 9,999,999 USD 11,000 USD 10 million-USD 25 million USD 16,500 \u0026gt; USD 25 million USD 44,000 Sponsor fees If you’re an organization that works on behalf of groups of smaller organizations that want to register content with Crossref, you’ll be set up as a Sponsor. Sponsors work directly with us in order to provide administrative, technical and\u0026mdash;if applicable\u0026mdash;language support to the communities they work with.\nThe annual membership fee you pay as a Sponsor is based on the total annual publications revenue, income or funding of all the organizations you support, whichever is higher.\nTotal annual revenue or income or funding Annual fee \u0026lt;USD 1 million USD 275 USD 1 million - USD 5 million USD 550 USD 5 million - USD 10 million USD 1,650 USD 10 million - USD 25 million USD 3,900 USD 25 million - USD 50 million USD 8,300 USD 50 million - USD 100 million USD 14,000 USD 100 million - USD 200 million USD 22,000 USD 200 million - USD 500 million USD 33,000 \u0026gt;USD 500 million USD 50,000 Each quarter you will also be sent an invoice for any content items that have been registered by the organizations that you sponsor. The charges are listed by each organization’s prefix on the invoice. There are different fees for different record types and the fees are also different depending on the publication date of the content. You will be asked to recategorize your annual fee at the end of every year, as the number of sponsored members you represent grows.\nHow to pay us and other FAQs There\u0026rsquo;s more information about how to pay us, when you\u0026rsquo;ll be billed and other billing FAQs here.\nDo contact our member support team if you have any further questions.\n", "headings": ["Member fees","Annual membership fees","Annual membership fee tiers (for organizations registering published content)","Annual membership fees (for funders who will be registering grants)","Content Registration fees","Content registration fees by record type","Volume discounts: peer reviews","Volume discounts: preprints and other posted content","Volume discounts: book chapters and reference entries for a single title","Volume discounts: datasets and components","Similarity Check fees","Similarity Check annual service fee","Similarity Check per-document fees","Metadata Plus subscriber fees","Sponsor fees","How to pay us and other FAQs"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/gem/", "title": "Global Equitable Membership (GEM) program", "subtitle":"", "rank": 3, "lastmod": "2024-12-30", "lastmod_ts": 1735516800, "section": "Global Equitable Membership (GEM) program", "tags": [], "description": "In order to meet our mission of a truly global and connected research ecosystem, it is important to ensure that participation in Crossref and all our services and metadata is accessible to everyone involved in documenting scholarly progress.\nCrossref membership is open to all organizations that produce scholarly and professional materials and content. 
But cost and technical capabilities can be barriers to joining, and where that\u0026rsquo;s the case, we aim to reduce these in a number of ways.", "content": "In order to meet our mission of a truly global and connected research ecosystem, it is important to ensure that participation in Crossref and all our services and metadata is accessible to everyone involved in documenting scholarly progress.\nCrossref membership is open to all organizations that produce scholarly and professional materials and content. But cost and technical capabilities can be barriers to joining, and where that\u0026rsquo;s the case, we aim to reduce these in a number of ways. These include: partnering with organizations such as the Public Knowledge Project to support plugins for OJS users; and developing our Sponsor program, where members are supported by an organization that aggregates our fees and provides local language technical support.\nMembership equitability and accessibility: introducing the GEM program For many years we have also waived content registration fees via specific Sponsor agreements for members in some countries and accounted for that as \u0026ldquo;donated deposits\u0026rdquo; under a \u0026ldquo;fee assistance\u0026rdquo; program. Starting in January 2023, this was expanded to be consistent across the world and to encompass the annual membership fee.\nThe Global Equitable Membership (GEM) program offers relief from membership and content registration fees for members in the least economically-advantaged countries in the world. Members in GEM-eligible countries do not pay Crossref membership or content registration fees. As we move toward realizing the vision of a connected Research Nexus, building a network for the global community must include input from the global community.\nList of eligible countries (as of 2025) The countries currently eligible under the Global Equitable Membership (GEM) program are:\nAfghanistan Gambia Maldives Solomon Islands Bangladesh Ghana Mali Somalia Benin Guinea Marshall Islands South Sudan Bhutan Guinea-Bissau Mauritania Sri Lanka Burkina Faso Guyana Micronesia (Federated States of) Sudan Burundi Haiti Mozambique Tajikistan Cambodia Honduras Myanmar Tanzania Central African Republic Kiribati Nepal Togo Chad Kosovo Nicaragua Tonga Comoros Kyrgyz Republic Niger Tuvalu Democratic Republic of the Congo Lao PDR Rwanda Uganda Côte d\u0026#39;Ivoire Lesotho Samoa Vanuatu Djibouti Liberia São Tomé and Principe Yemen Eritrea Madagascar Senegal Zambia Ethiopia Malawi Sierra Leone Eligibility Eligibility for the program is based on a member’s country. We have curated the list of eligible countries based on the International Development Association list and excluded anywhere we are bound by international sanctions.\nThe IDA is part of the World Bank. Its criteria are more nuanced than ‘low income’ or ‘lower-middle income’ as it takes into account GNI per capita as well as creditworthiness, which is especially important in countries where the gap between rich and poor is very large. We are not including the ‘blended’ countries (that is, countries that are on their way economically and therefore need less financial support).\nReviewing the eligibility criteria We will review the eligibility criteria annually and note any changes here. 
The IDA reviews the list annually, and while it is not common to have lots of movement, we will notify members whose country moves on or off the IDA list of any upcoming fees (or the removal of them) so they can plan and budget.\nWe ask for both mailing and billing addresses on our membership application form and both of these need to be in an eligible country (not necessarily the same one) in order to qualify.\nGEM program specifics Here are the details of what is waived and what is not, along with answers to some frequently asked questions.\nQ: Which fees are waived and which are not? The annual membership fee is waived (irrespective of the member’s organizational size or revenue; it’s the country that determines the eligibility). All content registration fees are also waived for all record types. Participation in other paid services, such as Similarity Check and Metadata Plus, will be charged at the usual fees. Check out our standard fees to learn how much eligible members are saving. You can find out more about invoicing for GEM-eligible members on our Billing FAQs page.\nQ: Can GEM program members also work with a Sponsor? The GEM program is open to independent members as well as members joining via a Sponsor. However, note that sponsors may only work with members who would normally be categorized in the lowest membership fee tier. Please also note that Sponsors may charge for their services (which include local language, technical, and other support), so it is important to discuss the terms with the Sponsor. Crossref will be actively seeking organizations to become Sponsors in GEM countries to build out support in these countries.\nJoin as a member Find out more about your benefits and obligations as a member and apply today. If your mailing and billing addresses are in a GEM country, you will automatically be exempt from membership and content registration fees.\nJoin today Contact our member experience team with any questions.\n", "headings": ["Membership equitability and accessibility: introducing the GEM program","List of eligible countries (as of 2025)","Eligibility","Reviewing the eligibility criteria","GEM program specifics","Q: Which fees are waived and which are not?","Q: Can GEM program members also work with a Sponsor?","Join as a member","Join today"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2024/", "title": "2024", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-progress-update-and-a-renewed-commitment-to-community/", "title": "A progress update and a renewed commitment to community", "subtitle":"", "rank": 1, "lastmod": "2024-12-12", "lastmod_ts": 1733961600, "section": "Blog", "tags": [], "description": "Looking back over 2024, we wanted to reflect on where we are in meeting our goals, and report on the progress and plans that affect you - our community of 21,000 organisational members as well as the vast number of research initiatives and scientific bodies that rely on Crossref metadata.\nIn this post, we will give an update on our roadmap, including what is completed, underway, and up next, and a bit about what\u0026rsquo;s paused and why.", "content": "Looking back over 2024, we wanted to reflect on where we are in meeting our goals, and 
report on the progress and plans that affect you - our community of 21,000 organisational members as well as the vast number of research initiatives and scientific bodies that rely on Crossref metadata.\nIn this post, we will give an update on our roadmap, including what is completed, underway, and up next, and a bit about what\u0026rsquo;s paused and why. We\u0026rsquo;ll describe how we have been making resourcing and prioritisation decisions, including a revised management structure, and introduce new cross-functional program groups to collectively take the work forward more effectively.\nIt’s important to acknowledge that Crossref has evolved significantly from just five years ago - our member count has more than doubled from 10,000 to 21,000 organisations since 2019 and they include all kinds of organisations such as funders, universities, government bodies, NGOs, and of course scholar- and library-led publishers. The smaller organisations now collectively contribute the majority of Crossref funding. We’ve gone from 100 million records to 160 million in five years, and our metadata is retrieved more than 2 billion times monthly, quadrupling what it was five years ago.\nIt’s within this context that we’ve spent quite a lot of time thinking about scalability, how we collect and process feedback and contributions from many organisations, how to automate our operations, and refining the plans for the next few years.\nOur strategic agenda remains the same A few times a year we update the strategy page where there is a quadrant of projects showing what’s completed, in progress, up next, and in planning/ideas - for each strategic theme. We also link from there to our live public roadmap which shows more specifics about individual projects, including projected timelines, and is updated more frequently.\nIf you’ve been watching the strategy page, checking in on the public roadmap or this blog, or joining webinars and annual meetings, you’ll know that we’ve had some longstanding plans to—among other things—reduce technical debt, rebuild our metadata management system, move to the cloud, modernise our schema, support multiple languages, and partner with multiple data sources to build the Research Nexus.\nYou’ve heard us talk about these initiatives a lot, but you\u0026rsquo;ve not seen particularly swift action.\nMoving the work forward more effectively Earlier this year, it became clear that our almost three-year project to build a new relationships API had not worked out. The project, dubbed ‘manifold’, was to initially deliver data citations, and eventually replace our central metadata system, but what was prototyped didn’t scale, even with a subset of our metadata. We weren’t confident enough about the project’s timeline or costs to justifiably continue investing further time and resources.\nMeanwhile, we’d barely scratched the surface of our aim to pay down technical and operational debt, and we’d also been neglecting to keep the live system up to date with the numerous metadata changes that have been queued up, waiting to be implemented.\nWe knew the manifold project was ambitious – our system has grown in complexity over the years. We were trying to rebuild the car while driving it (our system needed to continue to operate and be maintained by our team) while trying to design a new approach to manage the many relationships between 160+ million database records. 
In the years we worked on this project, we learned a lot that will inform future plans for a large system redesign.\nIn March this year, we decided to pause the manifold project. We apologised to our community partners for not delivering the promised data\u0026lt;-\u0026gt;literature matches they hoped to use. They were frustrated but thankfully understanding.\nWe then resolved to focus on backend infrastructural changes, conduct cross-training so that all of our staff would become familiar with current in-use systems instead of greenfield tech (for now), and start to make a dent in the backlog of bugs and long-promised schema updates in our mainstream services.\nWe’re happy to report some movement on these things and some milestones that have been achieved in these areas in recent months.\nFostering a happy and dedicated team Any kind of work can only happen when our staff are in a good place, feeling supported and comfortable to question things, and well-equipped with information, purpose, and clear priorities. In June, when the whole staff met up in person, we had some really good conversations about culture, communication, and about sharing responsibilities. Some people ran birds-of-a-feather sessions to explore the issues that had been keeping them up at night, such as authentication/security, and rebuilding the Crossref System (CS), and the team also co-created a set of prioritisation drivers that are now in use within our roadmap and planning processes.\nTaking on feedback from the all-staff meeting and then the July board meeting, we thought strategically about the organisational structure Crossref would need over the next few years to reflect the growth in scope and size, and fulfil its longer term goals. We have long had an ambitious agenda but realised we didn’t yet have the capacity to do it all. So we came to the conclusion that we needed an updated team and management structure to take us through the next phase of our development.\nThe structural changes were concluded at the end of November. They included:\nMoving Technology under Operations, since Technology\u0026mdash;though a vital enabler\u0026mdash;still works in service to our mission and in support of our community, just like other operational things like board governance and finance. Reframing product development as Programs and Services, and reducing our workstreams from five product portfolios to three programs. We formed cross-team steering groups around clearly articulated program areas (more on those below). Broadening the leadership to include an Executive team and an extended Director team, and forming a Senior Management Team (SMT). These changes ensure that the collective responsibility for Crossref now rests on a wider group of experts who can back each other up and share the risk and the knowledge, rather than on just a few individuals. We started recruiting for directors for two new leadership positions. We’ll welcome a new Director of Programs and Services and a new Director of Technology in the new year. Evolving the strategic initiatives team into a data science team, integrating research \u0026amp; development functions throughout all teams and with the SMT taking collective responsibility for strategic initiatives. 
Unfortunately, with the shift in approach for product development and by sharing responsibility for strategic initiatives and research among the wider team, we made the difficult decision that four positions would no longer work within the new structure.\nA new approach: joined-up initiatives and cross-functional programs Research has always been an important role for Crossref, but as this function had been annexed from our regular work, it became hard to coordinate strategic initiatives across the wider organisation. In recent years we inadvertently created more technical debt for ourselves, i.e., built multiple prototype tools without plans for adoption or moving them into production. Strategic initiatives, by their nature, need thorough research and high-level alignment, so we made such initiatives—things like Resourcing Crossref for Future Sustainability (RCFS) and improving the Integrity of the Scholarly record (ISR)—the responsibility of the whole senior management team.\nSome useful research had been conducted, but we were never in a position to act on any of it. Particularly promising work has been in the field of metadata matching, and with the growth in the community reliance on our metadata, and attention on data quality rightly increasing, we decided to create a new data science team to be dedicated to this work, led by Dominika Tkaczyk.\nWe had also struggled with a traditional product management approach since all our tools and activities are interconnected, and we found we were trying to do too many things at once but not all of them very effectively. We also acknowledged that product management comes from the commercial e.g. retail world and therefore is designed to help companies sell/upsell, which is not our goal. So we looked to other approaches more suitable to mission-based nonprofits.\nIntroducing three programs We have introduced cross-functional program management in order to work towards the following:\nbetter cross-team alignment shared responsibility improve communication and learning make more progress on the things members need. Supporting the strategic theme of co-creation, a new program, facilitated by Program Lead Lena Stoll, now manages and oversees all activities around co-creation and community trends. A cross-team steering group just began meeting regularly and will be responsible for interfaces such as reports/dashboards, record registration interfaces, connections and collaborations such as Open Funder Registry, ROR, ORCID auto-update, as well as OJS and other partner integrations. This program also includes the Crossref website and any front-end things to support other programs. And it includes ISR (the integrity of the scholarly record) and our tools in this area such as Crossmark and retraction/correction tooling, and Similarity Check for text comparisons.\nSupporting the strategic theme of complete and global metadata and relationships, a new program, facilitated by Program Lead Martyn Rittman, now manages and oversees all activities relating to contributing to the Research Nexus. Working particularly closely with the metadata team, led by Patricia Feeney, this program addresses how metadata is modelled, used, enriched, and extended. 
Work includes our APIs, incorporating external data sources like Retraction Watch and Event Data, building out metadata matching services with the new data science team, supporting the community of metadata users with API sprints and more modern options for retrieving metadata based on usage and need.\nSupporting the strategic theme of open and sustainable operations and keeping to the POSI framework, a new program, facilitated by Program Lead Sara Bowman, now manages and oversees all activities relating to making our operations more open, transparent, and sustainable. This program focuses on supporting and strengthening the core functions our members rely on and enabling future growth. It includes metadata deposit and processing, most apps for e.g. managing titles, authentication, and architectural and infrastructural projects like moving from the data centre to the AWS cloud service. This program also includes modernising our operations in general, which is not just technology but also finance and human resources, so projects like membership process automation, fee modelling and financial analyses, and business system integrations.\nThe Programs will start to be reflected across our website and in our communications from next year.\nWhat are Crossref\u0026rsquo;s new prioritisation drivers? These are the drivers that our ~40 staff co-created in June that are guiding decisions about the priorities on our roadmap. New ideas will be evaluated in the following areas:\nEncourage participation from new or under-represented communities Respond to and lead trends in scholarly communications Benefit the greatest number of members and users Reflect on how the community works with each other and allow members to self-serve Expand to support and connect relevant resource types and metadata fields Make it easier to create and update metadata Enhance metadata for completeness and accuracy Make it easier to retrieve and use metadata Automate repetitive/manual tasks Address technical and operational debt Maintain critical systems and operations and ensure their security Control or reduce costs - to Crossref, our community, or the environment We’re happy to report that the changes made this year have resulted in a productive last few months of the year. 
As reported in our annual meeting, here is the progress update.\nWhat’s paused A relationships API endpoint and, therefore, a specific data citation feed Manifold, the three-year effort to modernise our tech stack Most of the strategic initiatives prototypes that can’t yet be scaled, such as Labs API and Labs reports What’s recently completed We succeeded in moving the entire Crossref corpus to an open-source database, PostgreSQL Fixed numerous REST API data quality issues and lots of troublesome bugs Schema development - support for ROR as a Funder identifier is live and currently in testing We automated some very manual membership and billing processes, saving hundreds of staff hours a year Released a new form for journal article record registration, building on the grant registration form Upgraded Participation Reports to include Affiliations and ROR IDs Launched a new API Learning Hub Since the rest of the community stops for no Crossref product roadmap issue, we also progressed a number of community and governance initiatives:\nThe Grant Linking System (GLS) reached 5 years with over 40 funders joining Crossref and registering over 130,000 grants and awards, including use of facilities and projects Our research for Resourcing Crossref for Future Sustainability (RCFS) with the Membership \u0026amp; Fees Committee is going well, and we’ll have new fee proposals for review in 2025 The integrity of the Scholarly Record (ISR) conversations have deepened, and we’ve formed strong relationships with editorial experts and research integrity sleuths, who are getting up to speed on our metadata, and we’re working with some sleuthing consultants to change our processes to handle deceptive member behaviour such as paper mills, cloned journals, and citation manipulation. The new data science team plays a role here, along with membership and governance. What’s currently in focus In our efforts to do less but do it more effectively, we have two current priorities:\nGet out of the physical data centre and into the cloud. Develop Schema 5.4. These two projects are underway, involving lots of communication and learning. Since we haven’t released any schema updates in many years, all our staff are learning for the first time how a metadata schema model is interpreted in a systemic way, learning about the structure of research objects, and honing the process as they go. We’ve high hopes we’ll be in a position to release continuous metadata schema versions and catch up on the backlog over the coming years.\nWhat’s next Continuous metadata development, with contributor roles up next Retraction Watch data integrated into the REST API so users have a single source of retraction/correction data Upgraded preprint matching and notifications Modelling more equitable fees through the RCFS projects Piloting a non-voting membership category Once we’re fully in the cloud and in the groove of metadata updates, and with the support of newly-hired technology and program directors joining in the new year, we’ll turn our attention to rebuilding the central metadata system that we call the Crossref System, or “CS”, and report more on this next year.\nSo that was our summary of 2024 and an indication of what’s coming in 2025 and beyond; sorry it’s so long, and thanks for reading this far! 
Next year we’ll get back to more regular updates as the strategic agenda and the programs progress.\n", "headings": ["Our strategic agenda remains the same","Moving the work forward more effectively","Fostering a happy and dedicated team","A new approach: joined-up initiatives and cross-functional programs","Introducing three programs","What are Crossref\u0026rsquo;s new prioritisation drivers?","What’s paused","What’s recently completed","What’s currently in focus","What’s next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/ginny-hendricks/", "title": "Ginny Hendricks", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/lucy-ofiesh/", "title": "Lucy Ofiesh", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/product/", "title": "Product", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/programs/", "title": "Programs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/strategy/", "title": "Strategy", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-summary-of-our-annual-meeting/", "title": "A summary of our Annual Meeting", "subtitle":"", "rank": 1, "lastmod": "2024-12-09", "lastmod_ts": 1733702400, "section": "Blog", "tags": [], "description": "The Crossref2024 annual meeting gathered our community for a packed agenda of updates, demos, and lively discussions on advancing our shared goals. The day was filled with insights and energy, from practical demos of Crossref’s latest API features to community reflections on the Research Nexus initiative and the Board elections.\nOur Board elections are always the focal point of the Annual Meeting. 
We want to start reflecting on the day by congratulating our newly elected board members: Katharina Rieck from Austrian Science Fund (FWF), Lisa Schiff from California Digital Library, Aaron Wood from American Psychological Association, and Amanda Ward from Taylor and Francis, who will officially join (and re-join) in January 2025.", "content": "The Crossref2024 annual meeting gathered our community for a packed agenda of updates, demos, and lively discussions on advancing our shared goals. The day was filled with insights and energy, from practical demos of Crossref’s latest API features to community reflections on the Research Nexus initiative and the Board elections.\nOur Board elections are always the focal point of the Annual Meeting. We want to start reflecting on the day by congratulating our newly elected board members: Katharina Rieck from Austrian Science Fund (FWF), Lisa Schiff from California Digital Library, Aaron Wood from American Psychological Association, and Amanda Ward from Taylor and Francis, who will officially join (and re-join) in January 2025. Their diverse expertise and perspectives will undoubtedly bring fresh insights to Crossref’s ongoing mission. The meeting started with a recap of our mission and priorities. Ed Pentz reiterated the Research Nexus vision of increasing transparency of the connections that make up the scholarly record and underpin the research ecosystem.\nCrossref is dedicated to openness, community ownership, and a stable, accessible infrastructure that researchers, publishers, funders, and institutions can rely on for the long term. This is demonstrated by Crossref’s commitment to the Principles of Open Scholarly Infrastructure (POSI), which constitute commitments to building a resilient and transparent infrastructure for research—sustainability, community governance, and openness. Ed emphasized how Crossref is aligning with these principles and collaborates with other adopters to reflect and continuously align these with the needs of the scholarly community, with a public consultation on proposed revisions to POSI forthcoming next year.\nGinny Hendricks highlighted key membership and metadata trends. She noted that as of 2024, half of Crossref members are based in Asia. This year, as always in recent years, we saw many new organizations from Indonesia, Turkey, India, and Brazil join us. Removing those fast-growing countries for the chart’s clarity, we can see that some of the next most active countries are Pakistan, Mexico, Spain, Bangladesh, and Ecuador, among others.\nThere are now ~163 million open metadata records with Crossref DOIs, and Ginny pointed out increases in the registration of preprints, peer-review reports, and grants. In terms of metadata elements, it\u0026rsquo;s good to see that more publishers recognize the importance of including abstracts and ROR IDs in their metadata records. Also, in line with the community’s concerns about integrity, our members have been enriching their records with direct assertions of retractions.\nThen, Ginny went on to report on the progress towards our strategic goals:\nContribute to an environment where the community identifies and co-creates solutions for broad benefit A sustainable source of complete, open, and global scholarly metadata and relationships Manage Crossref openly and sustainably, modernizing and making transparent all operations so that we are accountable to the communities that govern us. 
Foster a strong team because reliable infrastructure needs committed people who contribute to and realize the vision and thrive in doing it. Demos Lena Stoll and Patrick Vale’s session gave members a practical preview of our latest tools.\nPatrick started by reflecting on the challenge of making our identifiers useful for people using screen readers (and other assistive technologies). He thanked all who responded to our past consultation on the topic and presented the Crossref DOI Accessibility Enhancer – the browser plug-in initially available for Firefox (and soon also for Chrome). He shared the Gitlab repo for anyone interested in trying it and invited feedback as we’re hoping to iterate on this.\nPatrick then went on to talk about our openness to community contributions to Crossref tools, with an example of the recent contribution from CWTS Leiden to our Participation Reports. Thanks to their work, our members can now see the proportion of works they’ve registered that include affiliation information and ROR IDs, alongside the previously available key metadata such as references, abstracts, ORCID iDs, funding information, or Crossmark.\nFinally, Lena demonstrated the latest extension of our record management tool that’s just been made available to make manual registration of metadata records for journal articles easier. The new form is flexible and driven by our metadata schema. Importantly for our members, it simplifies the workflow with input validations and automated ISSN matching, and it enables members to register author affiliations with an integrated ROR look-up. We hope this will support our smaller members, who are relying on our helper tools to register their content.\nThroughout the session, members were encouraged to use these tools and explore new resources available through Crossref. We believe that by taking advantage of these resources, you can enhance your research and publishing experience, and contribute to the growth and development of the scholarly community.\nThe discussion about open scholarly infrastructure The panel on open scholarly infrastructure brought together experts with a wide range of experience in the field. Moderated by Lucy Ofiesh, Crossref’s Chief Operating Officer, the discussion featured six invited speakers who shared their insights on the opportunities and challenges facing the scholarly ecosystem: Ed Pentz, Crossref; Sarah Lippincott, Dryad; Amélie Church, Sorbonne University; Joanna Ball, DOAJ; Ann Li, Airiti; and Richard Bruce Lamptey, Kwame Nkrumah University of Science and Technology.\nThe panel talked about what openness in scholarly infrastructure means, why it’s important, its sustainability, and how to tackle challenges and gaps across the ecosystem. They highlighted frameworks like the Principles of Open Scholarly Infrastructure (POSI), the Barcelona Declaration, and the FOREST Framework as key tools for guiding work on governance, sustainability, and equity. The discussion highlighted the need for more collaboration, inclusivity, and practical ways to ensure open infrastructure remains sustainable in the long run.\nThey also stressed how openness supports research integrity. How transparent systems allow researchers to question methods, verify findings, and preserve data. Amelie Church expanded on this point, underscoring the important role of open infrastructure in addressing challenges to integrity. 
She explained that such transparency enables the scholarly community to scrutinize research processes, ensuring the quality of outputs and their impact on society. Without openness, researchers face barriers to maintaining trust in their work, making open infrastructure necessary for research integrity and public confidence in science.\n“By focusing on accessibility, transparency, and community engagement, open infrastructure can reshape academic and research ecosystems in transformative ways.” ~Richard Bruce Lamptey\nRegarding sustainability, Sarah Lippincott stressed the importance of aligning funding models with community needs while addressing governance challenges. She pointed out that while initial funding can launch infrastructure, long-term sustainability requires consistent community investment and robust governance frameworks. This balance, she explained, is essential to ensure equity and transparency.\nCollaboration was another important topic. Joanna Ball and Sarah Lippincott shared examples of how pooling expertise and resources—such as in the global support for ROR—can strengthen systems and make them more sustainable. These initiatives show the power of collective efforts in addressing technical and resource barriers. However, inclusivity remains an ongoing challenge.\nThe panel discussed the ways in which language barriers, resource limitations, and reliance on proprietary systems continue to exclude researchers from underrepresented regions. Ann Li highlighted how addressing these disparities is critical to ensuring the global accessibility of open infrastructure. By fostering inclusive practices, the scholarly community can mitigate biases and build tools that reflect a broader range of research contributions.\n”My hope is that open infrastructure can have the resources that it needs to thrive, not just merely survive, and also that open infrastructure communities and organizations look to the value of frameworks that we\u0026rsquo;ve talked about today to help align themselves and improve their policies and practices, because there\u0026rsquo;s always room for growth, even in the best, most well-intentioned communities.” ~Sarah Lippincott, Dryad\nThe panel wrapped up the discussion by expressing optimism for the future of open scholarly infrastructure and emphasized the importance of continued investment, collaboration across organizations, and transparency in operations. The discussion reinforced the idea that open infrastructure provides a strong foundation for research that is equitable, sustainable, and accessible to all.\nUpdates from our Community We enjoyed talks from our community about increasing their participation in the Research Nexus by adopting, using and enhancing metadata in different ways. Robbykha Rosalien hosted talks from the EuropePMC, Dutch Research Council, eLife, and CSIRO featured in Session I, and Amanda French hosted CLOCKSS, Sciety, and Redalyc in Session II.\nMichael Parkin talked about preprints in Europe PMC. Europe PMC is a database for life science literature and a platform for content-based innovation. They started indexing preprints via Crossref REST API in 2018. 
Michael presented their work on discoverability of preprints in their database, including reflections on early challenges, as well as the latest efforts in surfacing available community reviews.\nHans de Jonge talked about the Dutch Research Council\u0026rsquo;s (NWO) dedication to open science, with policies ensuring that publications and data funded by NWO are openly available. They embrace open science principles for their own metadata and are a signatory of the Barcelona Declaration on Open Research Information. Hans focused on NWO\u0026rsquo;s recent introduction of Grant IDs through Crossref’s Grant Linking System (GLS). He shared their approach, the motivations behind introducing Grant IDs, and some challenges they faced.\nFrederick Atherden explained how eLife, a nonprofit led by scientists, uses Crossref’s Grant Linking System to include grant DOIs in their publication metadata. eLife allows authors to add grant DOIs during submission, and they developed a tool to match grant numbers with DOIs during the proofing process to improve accuracy. Their goal is to follow best practices for metadata, making content easier to find and link to.\nBrietta Pike covered how CSIRO is working to improve metadata quality for its journals, making research more discoverable and trustworthy. CSIRO faced challenges like inconsistent XML tagging, outdated systems, and data loss. To address these, they formed a project team, created a clear XML stylesheet, and updated their workflows. Recent progress includes better funding data, clearer license information, and more complete affiliation tagging. These efforts aim to support a more transparent and accessible research environment.\nAlicia Wise of CLOCKSS talked about recent collaborations seeking to safeguard our cultural and scholarly heritage over the long term. CLOCKSS, a community-run archive, is dedicated to preserving scholarly content so that it remains accessible and unchanged for future generations. True preservation requires securely storing content in trusted archives that are actively maintained. A group of librarians and publishers developed a guide to help publishers preserve content; they also established an archival standard for EPUB formats to ensure ebooks can be stored effectively, and launched a pilot project to track preserved books, helping libraries and scholars identify safely stored titles.\nMark Williams from Sciety talked about how Sciety uses Crossref metadata to create detailed preprint histories. By partnering with organizations and communities worldwide, the Sciety platform gathers public reviews, highlights, and recommendations on preprints, helping researchers evaluate the quality and relevance of new studies. Through linking related preprints and journal articles, Sciety builds a connected view of each research work. Although challenges like inconsistent terminology and identifier gaps persist, these efforts enhance the visibility and credibility of preprints.\nArianna Becerril-García of AmeliCA/Redalyc shared insights on diamond open-access journals in Latin America. Redalyc is an open-access infrastructure that supports journals by providing free services like visibility and production tools. Redalyc has a role in sustaining Latin America’s unique approach to open-access publishing, where most journals are backed by academic institutions and public funds, allowing free access for both readers and authors. 
Arianna stressed the need to treat these journals as digital public goods and urged the communities they serve to help ensure their long-term sustainability. Despite limited resources and global under-recognition, these journals serve an international research audience, including authors from Europe, Africa, and Asia. Redalyc and other open infrastructures play a key role by offering tools that reduce production costs and improve discoverability, all without financial barriers. She noted how this approach aligns with UNESCO’s open science framework, which promotes inclusivity and addresses long-standing inequalities in scholarly publishing.\nAfternoon of more resources and updates from Crossref After a mid-day break (in Europe), Luis Montilla kicked off the second session with a practical tutorial on Crossref’s REST API. Following his introduction to the Crossref API last year, this time he offered a step-by-step guide to help attendees maximize the API’s capabilities for metadata retrieval, with advice on:\nManaging large data requests with pagination and iteration. Incorporating safety mechanisms - to avoid hitting rate limits, Luis recommended adding pauses between requests and shared example scripts to streamline this (a minimal sketch along these lines appears a little further down). For those interested in learning more, look at the new Crossref API Learning Hub, a resource offering guides, scripts, and training materials to simplify complex queries. Please share questions about things you\u0026rsquo;re not sure about in our community forum, to help guide development of future demos.\nPatricia Feeney followed with updates on metadata schema changes. She introduced our recent shift to integrate the Funder Registry with ROR, which allows members to use a single identifier system, simplifying data management by reducing redundancy. Patricia explained that, for now, the current identifiers remain valid, so members won’t need to make immediate changes. She also outlined planned support for version metadata and typed citations, future plans to expand support for contributor role vocabularies, and invited community participation in a planned multilingual metadata working group.\nNext, Kora Korzec offered an update on the progress in our research on Resourcing Crossref for Future Sustainability and opened up a discussion about the best ways of assessing our members’ size and ability to pay. In light of our ambition to streamline discounts, we also invited suggestions for discounts to support accessibility and fuller participation in the Research Nexus.\nAs part of the discussion, we learned who was in attendance during the session.\nWe’ve heard a lot of support for our current GEM program. While it was clear from our poll that publishing revenue is not the most relevant measure of size or capacity for all those present, establishing a good alternative proved challenging. The idea of considering the size of an organization to be that of its largest entity was discussed, and important points were raised about budgets in different types of distributed organizations (e.g., on the position of libraries within large universities).\nThe official Annual Meeting part commenced after the discussion, with a report on the State of Crossref from Lucy Ofiesh and our Board election. 
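Returning briefly to Luis’s REST API tutorial: below is a minimal Python sketch of the pagination-and-pauses pattern he described. It is not one of the scripts shared in the session; it simply assumes the public api.crossref.org/works route with cursor-based deep paging and the polite-pool mailto parameter, and the filter, page size, and pause length are illustrative choices only.

```python
"""Minimal sketch of cursor-based paging against the Crossref REST API,
with pauses between requests to stay well within rate limits.
Not one of the scripts shared in the session; the filter, rows=500,
and the 1-second pause are illustrative choices, not recommendations."""
import time
import requests

BASE = "https://api.crossref.org/works"
PARAMS = {
    "filter": "from-index-date:2024-01-01",  # example filter; adjust to your query
    "rows": 500,                             # records per page
    "cursor": "*",                           # start deep paging
    "mailto": "you@example.org",             # identifies you for the polite pool
}

def iterate_works(params):
    """Yield work records page by page, following the next-cursor token."""
    params = dict(params)
    while True:
        resp = requests.get(BASE, params=params, timeout=60)
        resp.raise_for_status()
        message = resp.json()["message"]
        items = message.get("items", [])
        if not items:
            break
        yield from items
        next_cursor = message.get("next-cursor")
        if not next_cursor:
            break
        params["cursor"] = next_cursor  # token for the next page
        time.sleep(1)                   # pause between requests

if __name__ == "__main__":
    for i, work in enumerate(iterate_works(PARAMS)):
        print(work.get("DOI"))
        if i >= 1499:  # stop after a few pages in this demo
            break
```

Following the next-cursor token and sleeping between calls is the simplest way to harvest large result sets without hammering the API.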
Lucy highlighted some of the key accomplishments of the year so far, including:\nResearch for Resourcing Crossref for Future Sustainability (RCFS) Integrity of the Scholarly Record (ISR) Grant Linking System (GLS) reached 5 years Automated some very manual membership processes Released new form for journal article record registration Upgraded Participation Reports to include Affiliations and ROR IDs Launched a new API Learning Hub Paused further development of a Relationships API Migrated to a new open-source database Schema development - ROR as Funder identifiers REST API bug fixes and metadata consistency fixes. Then she reflected on the membership growth––Crossref is now made up of 21,000 organizations from 160 countries. We reviewed our 2024 year-end financial forecast. As we’re bouncing back from COVID-19, our travel expenses have grown this year, and so have the fees for cloud services hosting. These are all as planned and happen in the context of healthy growth, including that from adoption and increased usage of paid services. We’re in a healthy financial position as membership revenue and usage fees, like content registration and Similarity Check document checking fees, continue to grow from the previous year.\nThank you to everyone who joined us for Crossref2024. This year\u0026rsquo;s meeting showcased our collective dedication to advancing open, accessible research infrastructure and underscored the power of collaboration in building a stronger scholarly community. As we reflect on the rich discussions and insights shared during the event, it’s clear our community is committed to advancing open and sustainable scholarly infrastructure.\nLooking ahead, we’ll continue collaborating with members and partners to tackle challenges, expand accessibility, and foster collaboration. A key focus will be enhancing tools and metadata standards to serve the community better. Through innovative solutions and strategic initiatives like the Research Nexus, our collective efforts will make research more connected and accessible for all.\nFor anyone who couldn’t attend live, recordings are now available on our website. 
We’re excited to see how the ideas exchanged during this meeting spark progress across the scholarly ecosystem in the coming months.\n", "headings": ["Demos","The discussion about open scholarly infrastructure","Updates from our Community","Afternoon of more resources and updates from Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/annual-meeting/", "title": "Annual Meeting", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/governance/", "title": "Governance", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/kornelia-korzec/", "title": "Kornelia Korzec", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/meetings/", "title": "Meetings", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/rosa-morais-clark/", "title": "Rosa Morais Clark", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2024-posi-audit/", "title": "2024 POSI audit", "subtitle":"", "rank": 1, "lastmod": "2024-12-07", "lastmod_ts": 1733529600, "section": "Blog", "tags": [], "description": "Background The Principles of Open Scholarly Infrastructure (POSI) provides a set of guidelines for operating open infrastructure in service to the scholarly community. It sets out 16 points to ensure that the infrastructure on which the scholarly and research communities rely is openly governed, sustainable, and replicable. Each POSI adopter regularly reviews progress, conducts periodic audits, and self-reports how they’re working towards each of the principles.\nIn 2020, Crossref’s board voted to adopt the Principles of Open Scholarly Infrastructure, and we completed our first self-audit.", "content": "Background The Principles of Open Scholarly Infrastructure (POSI) provides a set of guidelines for operating open infrastructure in service to the scholarly community. It sets out 16 points to ensure that the infrastructure on which the scholarly and research communities rely is openly governed, sustainable, and replicable. 
Each POSI adopter regularly reviews progress, conducts periodic audits, and self-reports how they’re working towards each of the principles.\nIn 2020, Crossref’s board voted to adopt the Principles of Open Scholarly Infrastructure, and we completed our first self-audit. We published our next review in 2022.\nThe POSI adopters have continued to review the principles, reflecting on the effects of adopting them and providing a revision to the principles in late 2023. We use the revised principles for this latest review.\nKey We use a traffic light system to indicate where we believe we stand against each of the 16 principles. This time we also use up/down arrows to show any significant movement, and an \u0026lsquo;i\u0026rsquo; where there is something of note in the narrative.\nRed indicates we are not fulfilling the principle. Yellow indicates we are making progress towards meeting the principle. Green indicates we are fulfilling the principle. An up or down arrow means this is a new change, where we\u0026rsquo;ve moved \u0026lsquo;up\u0026rsquo; the traffic lights in comparison to the previous audit; we would use the same if \u0026lsquo;down\u0026rsquo; ever happens too. An \u0026lsquo;i\u0026rsquo; means that something has changed of note in comparison to the previous audit.\nGOVERNANCE Coverage across the scholarly enterprise Stakeholder governed Non-discriminatory participation or membership Transparent governance Cannot lobby Living will Formal incentives to fulfil mission \u0026amp; wind-down\nWhat’s changed with governance Stakeholder governed We’ve been yellow and we’re still yellow, but it has been improving. In the past, we’ve reported that we are working towards this but we’re not there yet because we didn’t have representation on the board from certain types of members, specifically research funders and research institutions. In the incoming 2025 board class, we have both. Six out of our 16 board seats are held by universities, university presses, or libraries. We also look forward to adding a new research funder, the Austrian Science Fund (FWF), to the board in January.\nNone of this, though, is hardcoded into the structure of the board. We extend an open call for board interest; any active member can apply for consideration. The Nominating Committee prepares a slate with a diverse range of candidates and organizations, and it is then up to the membership to elect board members.\nWith only 16 board seats and \u0026gt;21,000 members in 160 countries, being fully stakeholder-governed is challenging. Further, there are important contributors to the community that we all rely on who are not eligible for board seats because they are not members, as defined in our by-laws, such as sponsors, service providers, and metadata users.\nWe don’t consider this principle fulfilled, and that’s a good thing to keep note of; we must keep aspiring to have a broader, more comprehensive representation of our evolving community. The board continues to discuss stakeholder representation.\nSUSTAINABILITY Time-limited funds are used only for time-limited activities Goal to generate surplus Goal to create financial reserves Mission-consistent revenue generation Revenue based on services, not data\nWhat’s changed with sustainability Goal to create financial reserves This was yellow and is now green. In 2023, we met our goal of maintaining a contingency fund of 12 months of operating costs. We also topped up this fund in 2024 to keep pace with our growing operating expenses. 
The revisions for POSI 1.1 actually removed the specificity of a 12-month timeline, allowing each adopting organisation to set its own goal; in Crossref’s case, 12 months remains appropriate.\nINSURANCE Open source Open data (within constraints of privacy laws) Available data (within constraints of privacy laws) Patent non-assertion\nWhat’s changed with insurance Open source This was yellow and still is, but we’re making improvements. In September of this year we migrated our database off of a closed-source solution and onto PostgreSQL. This has improved the performance of the system and is an important step towards paying down technical debt and moving the system fully into the cloud.\nPatent non-assertion This was yellow and is now green. We confirm that we do not hold any patents, and we have a published policy on it that is available for inspection and reuse by anyone in the community.\nIn summary These are the main changes of note for our 2024 POSI update. The summary is that we\u0026rsquo;ve maintained all our greens, and of the four principles that were yellow last time, two have moved to green (financial reserves; patent non-assertion) and two have remained yellow but seen some progress of note (stakeholder governed; open source).\nPlease let us have any comments or questions; by commenting here it will add a public record of the discussion on our community forum. Here is an image to share, if needed.\nWe continue to learn from the POSI adopters group\u0026mdash;now numbering 23 organisations\u0026mdash;and the group will soon share a draft of POSI v2 for community comment. We look forward to the ongoing discussions with this group, and others, to keep improving and holding ourselves to account.\n", "headings": ["Background","Key","GOVERNANCE","What’s changed with governance","Stakeholder governed","SUSTAINABILITY","What’s changed with sustainability","Goal to create financial reserves","INSURANCE","What’s changed with insurance","Open source","Patent non-assertion","In summary"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/2024-12-05-community-engagement-manager/", "title": "Community Engagement Manager (Funders)", "subtitle":"", "rank": 1, "lastmod": "2024-12-05", "lastmod_ts": 1733356800, "section": "Jobs", "tags": [], "description": "Applications for this position will be closed on January 8, 2025. Do you want to help improve research communications in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our new Community Engagement Manager for the funding community.\nLocation: Remote and global (to at least partially overlap with working hours in European timezones) Type: Full-time Remuneration: 70-78k USD or local equivalent, depending on experience.", "content": " Applications for this position will be closed on January 8, 2025. Do you want to help improve research communications in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge as our new Community Engagement Manager for the funding community.\nLocation: Remote and global (to at least partially overlap with working hours in European timezones) Type: Full-time Remuneration: 70-78k USD or local equivalent, depending on experience. Note this is a general guide (as there is no universal currency) and local benchmarking will take place before the final offer. 
Reports to: Director of Community, Kora Korzec Timeline: Advertise in December/offer by February About the role The Crossref members document the progress of knowledge through our sustainable, global, and open system. We run an infrastructure so they can curate, share, and preserve metadata, which is information that underpins and describes all research activities such as funding, authorship, dissemination, and attention—and the relationships between these activities.\nIncreasingly, the organisations that drive open science through policy, funding, and related support are turning their attention to the underlying data infrastructure and, as part of that, are joining Crossref to provide their part of the picture and link their awards (including the use of facilities and equipment) with Crossref’s vast corpus of publications and other outputs.\nFor five years, the Crossref Grant Linking System has been steadily gaining traction. We are now at a tipping point, and we need an experienced community manager with strong connections with funders and funding platforms to supercharge the program and take it to the next level by growing membership and facilitating even more integrations with grant platforms.\nKey responsibilities Increase awareness and grow global funder membership\nCreate opportunities to engage funders globally—especially in Africa, Asia, and South America—and listen to and learn their needs, introduce Crossref, and invite them to engage with our part of the research ecosystem. Contribute to the Funder Advisory Group and help make their work mutually beneficial, i.e. facilitate the sharing of experiences between funders and help the group contribute to development such as schema evolution and governance and fee changes. Develop the pipeline of interested and eligible funders to grow membership and, therefore, participation in Crossref through more records and metadata for the Grant Linking System. Support funders in preparing for and meeting membership obligations and help onboard new members so that they can take advantage of all relevant services. Facilitate new integrations\nCreate an action-focused campaign to incentivise publishers and other Crossref members to report funding acknowledgements using Crossref grant links. Create action plans with the 20+ grant management systems to encourage and facilitate integrations with the Crossref GLS. This could mean working with community consultants and developers to commission and oversee such projects. Forge partnerships with related organisations\nIdentify organisations and initiatives with which to collaborate for mutual benefit, particularly outside of Western countries. Strengthen relationships with long-standing partners such as Altum and Europe PMC. Contribute to community initiatives and others’ working groups that support the shared vision for a connected research nexus. Support all Crossref programs and the Research Nexus vision\nListen to the sentiment and feedback of our community, share insights with colleagues and contribute to other programs and services. Represent Crossref, attending and speaking at relevant industry events, online and in-person, on topics even beyond funding and grants, and use the role to bring stakeholders together. Create content, such as writing articles and blogs, slides and diagrams, updating documentation, and creating new resources in support of the above work. 
About you We are looking for a proactive candidate with a unique blend of customer service skills, analytical trouble-shooting skills, and a passion to help others. You’ll have an interest in data and technology and will be a quick learner of new technologies. You’ll be able to build relationships with our community members and serve their very diverse needs - from assisting those with basic queries to really digging into some knotty technical queries. Because of this, you’ll also be able to distill those complex and technically challenging queries into easy-to-follow guidance.\nYou’ll need:\nAs scientific community engagement is an emerging profession, practical experience in this area is more important to us than traditional qualifications.\nWe will prioritise candidates who can demonstrate most of these characteristics:\nExperience working with or within the research funding community, either in grant-making (directly or indirectly) or in leading a community initiative centred around funding and policy-making. Experience in community building and management and/or planning, executing and evaluating participatory initiatives, including group facilitation and relationship management. Collaborative attitude and evidence of co-creation Experience working within technical or metadata-focused initiatives and systems Curiosity to explore complex concepts and to learn new skills and perspectives Excellent communication both written and spoken Track record of project management, working to budget and timelines, and reporting on progress against clear goals. Confidence in public speaking in-person and online, including delivery of webinars/workshops. Tried and tested strategies for ensuring that your programs are equitable, diverse, and inclusive. It would be a bonus if you also have any of the following:\nAbility to communicate in languages other than English Experience working in global or multicultural settings Experience with or strong understanding of open infrastructure and metadata About the team The role is based within the Community team. We collaborate across a variety of projects and programs, and you will be asked to represent other programs and communities where practical. We adopt an approachable tone and style in our communications and enjoy systematic planning combined with flexibility and resourcefulness. We’re looking to re-engage with our community through face-to-face opportunities as well as online, so the work will involve some travel (according to our thinking on travel and sustainability).\nOur team’s primary aim is to engage colleagues from member organisations and other stakeholders to be actively involved in documenting the scholarly progress and making it transparent. This contributes to co-creating a robust research nexus. As part of the wider Community group at Crossref, we seek to encourage wider adoption and development of best practices in research communications with regard to metadata and the persistence and integrity of the scholarly record. Colleagues across the organisation are helpful, easy-going and supportive, so if you’re open-minded and ready to work as part of the team and across different teams, you will fit right in. Watch the recording of our recent event celebrating 5 years of the GLS to learn more about the current conversations in our community.\nAbout Crossref We’re a nonprofit membership organization that exists to make scholarly communications better. 
We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so. “We” means 20,000+ members from 160 countries, 160+ million records, and nearly 2 billion monthly metadata queries from thousands of tools across the research ecosystem. We want to be a sustainable source of complete, open, and global scholarly metadata and relationships.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation long-term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team – because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it. We are a distributed group of 46 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide— which we take a bit more seriously than ourselves. We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nWe can offer the successful candidate a challenging and fun environment to work in. Together we are dedicated to our global mission and we are constantly adapting to ensure we get there. Take a look at our organisation chart, the latest Annual Meeting recordings, and our financial information here.\nThinking of applying? We encourage applications from excellent candidates especially from people with backgrounds historically under-represented in research and scholarly communications. You can be based anywhere in the world where we can employ staff, either directly or through an employer of record. This position is full-time, however we can make accommodations for alternative schedules on request. Our team is fully remote and distributed across time zones and continents. This role will require some regular work in European time zones. Our main working language is English, but there are many opportunities in this job to use other tongues if you’re able. If anything here is unclear, please contact Kora Korzec, the hiring manager, on kora@crossref.org.\nPlease apply via this form which allows us to sort your application materials into neat folders for a faster review. One of the best ways of offering evidence of your suitability within the cover letter is with an example of a relevant project you’re particularly proud of – we would particularly welcome mentions of your work with research funders. If possible, we’d also love to see an example of content you’ve created – a link to a recording of your talk, blog post, infographic, or something else. 
There is space to share documents and links within the application form.\nLastly, if you don’t meet the majority of the criteria we listed here, but are confident you’d be natural in delivering the key responsibilities of the role, we encourage your interest and would still like to hear what strengths you would bring.\nWe aim to start reviewing applications on January 8, 2025. Please strive to send us your documents by then.\nThe role will report to Kora Korzec, Director of Community at Crossref. She will review all applications along with Michelle Cancel, our HR Manager, and Ginny Hendricks, Chief Program Officer.\nWe will invite selected candidates to a brief initial call to discuss the role as soon as possible following an initial review. Following that, shortlisted candidates will be invited to an interview. You will receive all questions in advance, and the interview will include some exercises you’ll have a chance to prepare for. All interviews will be held remotely on Zoom.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law.\nThanks for your interest in joining Crossref. We are excited to hear from you! ", "headings": ["About the role","Key responsibilities","About you","About the team","About Crossref","Thinking of applying?","Equal opportunities commitment","Thanks for your interest in joining Crossref. We are excited to hear from you!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/environment/", "title": "Environment", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/summary-of-the-environmental-impact-of-crossref/", "title": "Summary of the environmental impact of Crossref", "subtitle":"", "rank": 1, "lastmod": "2024-12-05", "lastmod_ts": 1733356800, "section": "Blog", "tags": [], "description": "In June 2022, we wrote a blog post “Rethinking staff travel, meetings, and events” outlining our new approach to staff travel, meetings, and events with the goal of not going back to ‘normal’ after the pandemic. We took into account three key areas:\nThe environment and climate change Inclusion Work/life balance We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!", "content": "In June 2022, we wrote a blog post “Rethinking staff travel, meetings, and events” outlining our new approach to staff travel, meetings, and events with the goal of not going back to ‘normal’ after the pandemic. 
We took into account three key areas:\nThe environment and climate change Inclusion Work/life balance We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!\nTo be honest, the picture is mixed. On the positive side, we are traveling less and differently compared with 2019. Most of our events have been online, with some regional in-person ones, reducing our carbon footprint and increasing inclusivity with more people attending Crossref events. On the negative side, it hasn’t been easy to collect the data and figure out the best tools for calculating emissions, and we certainly haven’t captured all of our carbon emissions. Our approach has been to not let the perfect be the enemy of the good and we’ve focused on our largest source of carbon emissions - air travel.\nSome of the positive things: We have maintained our strategic approach to consider environmental, inclusion, and work/life balance issues when we plan travel and to make the most of in-person events by focusing on those that involve interaction, such as listening and learning from our members and users, deepening relationships, co-creating, and forming new alliances Crossref Annual Meetings and community updates have been online and in different time zones. Crossref board meetings have been reduced from three in-person meetings per year to one face-to-face and two online meetings per year. We had an optional all-staff in-person meeting in June 2023 (and this year too). For the in-person board and staff meetings, we have selected locations that minimize the overall amount of travel and maximize direct flights. We have maintained our country focus for in-person local meetings supported by regional Ambassadors. We met our goal of keeping total travel and meeting expenses below 60% of 2019 costs even though we have more staff and membership growth has continued. The amount of money spent is a rough proxy for our carbon impact. We no longer have an office in Oxford and will not renew the lease on our Lynnfield, MA office, so we will have no physical offices by the end of 2024. This is not a large carbon emission reduction and is more a result of being a “distributed first” organization with staff in 11 different countries. We recorded data on staff travel (flights, trains, cars, hotels) for 2023 to use as a baseline for comparison with future years. In 2023 the carbon emissions from travel and meetings was about 105 tCO2e. We used tools provided by Amazon Web Services (AWS) and Zoom to estimate the impact of these services. In 2023 this was 0.266 tCO2e for AWS and .1 tCO2e for Zoom. Some challenges Compiling data is difficult and time-consuming for a small organization There are many different calculators and metrics to use and it’s difficult to decide which to use and how much detail to go into We haven’t yet estimated the carbon footprint of staff home working We were able to calculate the emissions from AWS but not our data center We didn’t estimate the emissions from our offices. We had a small office in Oxford until November 2023, and we have an office near Boston - we won’t be renewing the lease in 2025 so won’t have any offices. 
Total travel and meetings spending Year | Amount | Percentage of 2019\n2019 actuals | $585,482 | 100%\n2020 actuals | $91,700 | 16%\n2021 actuals | $19,066 | 3%\n2022 actuals | $74,416 | 13%\n2023 actuals | $305,737 | 52%\n2024 budget | $333,500 | 56%\nWe have recorded carbon emissions from travel at about 105 tCO2e, so we will compare 2023 with future years. Now that we have started collecting travel data, it will be easier—staff can do it as they travel throughout the year.\nOur Executive Director, Ed Pentz, looked at his personal and work flights and the carbon emissions in 2019 were 18 tCO2e and in 2023 were 2.7 tCO2e, so this is a big change in the right direction.\nHosting services We use AWS for hosting our REST APIs, Crossref Metadata Search, the website, and Labs projects. Our main metadata registry is still in a data center, which is not included in this calculation. For 2023 Amazon reports Crossref’s carbon emissions were 0.216 tCO2e compared with 0.266 tCO2e in 2022. Crossref is planning to move out of the data center and fully to AWS by the end of 2024, so this will increase our AWS usage and therefore our emissions from related activities will increase. Compared to travel, the footprint from AWS is minimal.\nOnline meetings As a distributed, remote-first organization Crossref is a heavy Zoom user –– it’s essential for staff and for engaging with our community. However, Zoom doesn’t provide tools or estimates of the carbon impact of Zoom meetings. We used a tool provided by Utility Bidder, which makes a lot of estimates and assumptions. In 2023 Crossref had almost 800,000 meeting minutes. This translated into an average of 1.92 kg of CO2 emissions per week, or 100 kg per year.\nSome studies have estimated that turning off video reduces the carbon footprint of meetings. However, this can be a false saving since video is often important for creating a connection and having a productive meeting, and a Zoom meeting with video is still much, much better than traveling, particularly if flying is involved.\nTools we used In order to calculate emissions for flights and train journeys, we chose to use Carbon Calculator. We didn’t calculate emissions from hotel stays but looked at the Hotel Footprinting tools and may add hotels to calculations in the future.\nOffsetting We don’t offset our emissions from travel or other operations and don’t have plans to do this. Offsetting emissions is problematic in a number of different ways so we don’t feel confident in doing it.\nWe did tree-planting as a “thank you” for the time of respondents in our metadata survey. Intended as an alternative to more commercial types of incentives rather than offsetting for our emissions, this resulted in 921 trees planted in the Gewocha Forest, Ethiopia, via Ecologi.\nWrapping up Moving forward, we’ve learned a lot over the last couple of years. Collecting accurate data is challenging and time-consuming, especially for a small organization. For us, this has been a new lens for viewing our activities, and it remains a true learning journey and we have made permanent changes. In 2024 and beyond we are going to continue to follow our travel, meetings, and events policies that we announced in 2022. We will continue to capture our air travel emissions, and in 2025 we will more accurately capture train journeys and hotel stays. We will also continue calculating our Zoom and AWS emissions as best as we can. 
What we\u0026rsquo;ve learnt in the process of capturing and calculating our 2023 emissions helped us set things up to enable more prompt reporting on these impacts in the future.\nWe expect that many of our members and our community at large assess their environmental impact or are embarking on similar projects, to understand and curb emissions. We’re keen to discuss this and learn together to reduce our environmental impact as an organization.\n", "headings": ["Some of the positive things:","Some challenges","Total travel and meetings spending","Hosting services","Online meetings","Tools we used","Offsetting","Wrapping up"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/patent-policy/", "title": "Patent non-assertion policy", "subtitle":"", "rank": 4, "lastmod": "2024-12-04", "lastmod_ts": 1733270400, "section": "Operations & sustainability", "tags": [], "description": "As part of our committment to the Principles of Open Scholarly Infrastructure we have adopted a patent non-assertion policy. The patent non-assertion policy is intended to prevent open infrastructure organizations from inhibiting the community\u0026rsquo;s ability to replicate the infrastructure.\nCrossref does not currently hold any patents.\nTo the extent possible, we make our policies publicly available for inspection or reuse by others.", "content": "As part of our committment to the Principles of Open Scholarly Infrastructure we have adopted a patent non-assertion policy. The patent non-assertion policy is intended to prevent open infrastructure organizations from inhibiting the community\u0026rsquo;s ability to replicate the infrastructure.\nCrossref does not currently hold any patents.\nTo the extent possible, we make our policies publicly available for inspection or reuse by others.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/jen-mellor/", "title": "Jen Mellor", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/keara-mickelson/", "title": "Keara Mickelson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/magaly-taylor/", "title": "Magaly Taylor", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-beyond-discoverability/", "title": "Metadata beyond discoverability", "subtitle":"", "rank": 1, "lastmod": "2024-12-03", "lastmod_ts": 1733184000, "section": "Blog", "tags": [], "description": "Metadata is one of the most important tools needed to communicate with each other about science and scholarship. It tells the story of research that travels throughout systems and subjects and even to future generations. 
We have metadata for organising and describing content, metadata for provenance and ownership information, and metadata is increasingly used as signals of trust.\nFollowing our panel discussion on the same subject at the ALPSP University Press Redux conference in May 2024, in this post we explore the idea that metadata, once considered important mostly for discoverability, is now a vital element used for evidence and the integrity of the scholarly record.", "content": "Metadata is one of the most important tools needed to communicate with each other about science and scholarship. It tells the story of research that travels throughout systems and subjects and even to future generations. We have metadata for organising and describing content, metadata for provenance and ownership information, and metadata is increasingly used as signals of trust.\nFollowing our panel discussion on the same subject at the ALPSP University Press Redux conference in May 2024, in this post we explore the idea that metadata, once considered important mostly for discoverability, is now a vital element used for evidence and the integrity of the scholarly record. We share our experiences and views on the metadata significance and workflows from the perspective of academic and university presses – thus we primarily concentrate on the context of books and journal articles.\nThe communication of knowledge is facilitated by tiny elements of metadata flitting around between thousands of systems telling minuscule parts of the story about a research work. And it isn’t just titles and authors and abstracts – what we think of as metadata has really evolved as more nuance is needed in the assessment and absorption of information. Who paid for this research and how much, how exactly did everyone contribute, what data was produced and is it available for me to reuse it, as well as, increasingly, things like post-publication comments, assertions from “readers like me”, who has reproduced this research or refuted these conclusions.\nDifferent types of published works are described by different types of metadata – journal articles, book chapters, preprints, dissertations. And those metadata elements can be of varying importance for different users. In this article, we will talk about metadata from the perspectives of four personas highlighted by the Metadata 20/20:\nMetadata Creators, who provide descriptive information (metadata) about research and scholarly outputs. Metadata Curators, who classify, normalise and standardise this descriptive information to increase its value as a resource. Metadata Custodians, who store and maintain this descriptive information and make it available for consumers. Metadata Consumers, who knowingly or unknowingly use the descriptive information to find, discover, connect, and cite research and scholarly outputs. Our approach delineates the metadata lifecycle, from authorship, through production, discovery and through continuous curation. Though some of the metadata is generated outside of that linear process, and much happens before the authorship step, we see it as a clear and useful breakdown of how metadata contributes to a new piece of content.\nAuthorship The first stage in the metadata lifecycle, authorship, is just the beginning of a dynamic process with many collaborators. A formative piece of the puzzle, authorship involves the authors or contributors, the editorial team and/or the marketing team and this is when the shape of the project and its metadata takes form. 
During this stage, the book or journal\u0026rsquo;s metadata exists only between the originators and the publisher, allowing the most opportunity for creativity and enhancement. Once the metadata reaches the next checkpoint along the lifecycle and is sent out externally, it\u0026rsquo;s more difficult and riskier to make major changes to the key metadata elements. In scholarly monograph publishing especially, we have the advantage of longer production lead times during which to amend and manipulate metadata during this stage.\nAt this stage, authors may have ideas of titles, subtitles and descriptions and it is up to the editors and other team members at the publisher to think strategically about how this can be optimised. The marketing and sales teams may be thinking about how the abstracts, keywords, and classifications can be best optimised for the web, leading to increased sales. Discoverability and interoperability of metadata for a book or journal, especially the use of persistent identifiers, is beneficial both for the author – in that their book is easily discovered, used, and cited – and for the publisher – increased visibility, sales, and usage.\nCurrent challenges at the authorship stage include changing goalposts for metadata standards and accessibility requirements, which also have knock-on effects in subsequent stages in the metadata lifecycle. One of the key challenges with these is that they require buy-in from multiple players to keep up with and amend, and publishers must think closely about how these changes may affect metadata workflows for books at different stages of publication.\nProduction As a book or journal article comes into production, it’s time to update and release the metadata to retailers, libraries, data aggregators and distributors. The metadata should be updated and checked to make sure that it’s still a good reflection of the product or the content that it describes and complete enough to release, including a final cover image in the case of books. This is still very much a collaborative effort with multiple roles involved. Technical details, such as spine width, page extents, and weight, are added, capturing the final specification. The editorial team may update metadata entered into systems earlier in the process. For example reviewing the prices, updating subject classification codes or amending the chapter order. If any of the content is to be published open access, appropriate licensing and access metadata need to be included, so that users of the content are clear about what they can (and can’t!) do with it. Metadata that’s not yet captured upstream can be added or enhanced. For example, vendors already involved in the production process can verify that persistent identifiers (PIDs) are present and correct in funding metadata.\nMore and more metadata elements are being requested by supply chain partners. For example, new requirements being introduced to provide commodity codes, spine width, carton quantities, gratis copy value and country of manufacture. There may be differences in metadata depending on the methods of production. For example, country of manufacture will be supplied differently when using traditional print methods where the whole print run is carried out at a location, or where a title is manufactured print on demand and the location of printing is determined by the delivery address.\nIn an XML-first workflow, metadata can be captured with the content files to aid with discovery. 
This usually requires multiple systems, both internal and external. These systems need to be able to work together to ensure that only up-to-date metadata is used. Metadata will change throughout the production process, whether it’s the publication of an accepted manuscript through to the final version of record, or pre-order information to the published version, so updates need to feed out regularly.\nThe right metadata needs to go to the right recipient. Some is not useful or cannot be processed by certain recipients. For example, a printer, retailer, librarian or data aggregator each have their own needs and use cases and may receive and process metadata in different formats or require different fields.\nDiscovery Discovery is the series of actions taken by an end user to retrieve and access relevant content they do not know about. Discovery can happen everywhere: Google (a search engine), a library catalog, a publisher platform, etc. However, Discovery is associated with using Discovery systems in the academic sector.\nThe technological landscape of libraries has developed in the last 15 years. Discovery systems are tools libraries subscribe to in order to allow their end users to have one search experience within their library holdings. It is paramount for librarians that library collections are used; hence, it is very important for them that the discovery system of their choice contains all the relevant metadata. Libraries expect their discovery service to include their content coverage as comprehensively as possible. Content items not represented or misrepresented in a discovery system create challenges to libraries in how they might otherwise ensure that these materials are discovered and accessed.\nLibraries\u0026rsquo; adoption and usage of discovery systems are surrounded by the belief that the great benefits of this technology are the one search box and the configuration flexibility, which are the most important benefits. Libraries invest a significant amount of money in discovery services. The increase in usage is the success indicator of this adoption and a positive return on investment.\nThe backbone of discovery systems is formed by three crucial elements: a user interface, a metadata index, and a link resolver or Knowledge Base. These elements, along with a back-end control panel for librarian configuration, are the key components that enable the discovery process.\nThe discovery index, a database storing descriptive data from various content providers, data sets, and content types, is a testament to the collaborative efforts of content providers and discovery systems vendors. Their work under the Discovery Metadata Sharing partnership agreements, which establish the format, scope, frequency, and support of the collaboration, is instrumental in meeting librarians\u0026rsquo; expectations.\nFormat The discovery metadata integration processes have settled down for most cases in these two metadata delivery workflows.\nMetadata for the index of discovery: Discovery systems have traditionally made efforts to work with various metadata formats like MARC, proprietary templates, etc., but the preferred format is XML. This metadata could include all the bibliographic information data, including index terms and full text at the article and chapter level.\nMetadata for link resolvers and Knowledge bases: Knowledge bases are tools that contain information about what is included in a product, packages, and/or databases. KBART is the preferred format in this area. 
It includes a set of basic bibliographic descriptions at the publication level and linking information for direct and OpenURL syntaxes.\nFrequency The delivery channels vary, and the frequency could vary daily to yearly, depending on the publication schedule.\nScope Library collections include various content types, including archival materials, open access, and multimedia alongside the more traditional books and periodicals. Different content types will require different metadata elements to make a comprehensive discovery-friendly description, and the metadata elements will impact the formats in use.\nDiscovery services will receive this data and prioritise uploading. They will select and manipulate the required metadata elements according to their system requirements. These metadata tweaks and selections are not always communicated to the content providers and/or libraries. Ultimately, librarians decide which metadata will be visible on their discovery tool and the linking methods of their choice.\nAs described, Discovery is a complex area where the activities of its main stakeholders are interconnected. The success of the end users\u0026rsquo; discovery journey from search to access depends on the successful integration, implementation, and maintenance of the discovery systems. This necessitates a combined effort from the three discovery stakeholders: content providers, discovery system providers, and libraries. Their collaborative work is not just crucial, but integral to supporting discovery and fulfilment in the most efficient manner possible. Your active involvement in this process is what makes it successful.\nHow do we ensure discoverability? Electronic resources do not exist in isolation but are assessed and used depending on their level of integration in the discovery landscape where libraries and patrons are active. From a content provider\u0026rsquo;s perspective, discoverability is about the number and efficiency of entry points to our products created in third-party discovery products.\nThe level of discovery integration has a direct impact on sales and upsell opportunities. Products that are not discoverable are difficult to work with, and the opposite is true for products that are considered discoverable. Your role in ensuring discoverability directly influences the user experience and sales, making your work crucial and impactful. The term \u0026lsquo;Discoverability\u0026rsquo; is critical in discovery library systems. It refers to the extent to which eResources are searchable in a discovery system, and it directly influences the ease with which users can find the information they need, thereby enhancing their overall experience. In practical terms, the degree of discoverability will be impacted by the quality of the metadata supplied, the transformations the metadata suffers in the integration process to discovery systems, and the configuration\u0026rsquo;s maintenance.\nThe general principles of metadata quality also apply in this area: accuracy, completeness, and timely delivery. Your attention to these principles is crucial to contributing to the effectiveness of the discovery process. Metadata enrichment practices like identifiers and standards are also applicable. 
Treating discovery as a mindset throughout the publishing process will increase discoverability, since discoverability is influenced by product design (whether the content is linkable) and by which metadata outputs are possible. For example, author-generated index terms are more likely to match researchers\u0026rsquo; search terms, and detailed article titles will probably be more discoverable than general ones. Finally, all the integration, descriptive metadata, configurations, and so on leave much room for error. The flow is complex; on occasion, products and content are more complicated to describe than the tools can handle, and there are millions of holdings per library to manage. Constant maintenance and troubleshooting are therefore crucial to maintaining and increasing discoverability.\nMetadata beyond publication In the lead-up to publication, finalising rich, complete metadata can seem like establishing a fixed set of information. Post-publication, however, the metadata workflow should be dynamic, able to evolve to keep pace with new demands and opportunities. Think of metadata as a journey rather than a one-time destination, and look at ways to futureproof your metadata by actively adapting to some of the following types of change.\nChanging Publisher Goals and Product Needs Metadata should align with a publisher\u0026rsquo;s changing priorities. Developing new formats, shifting commissioning focus or building new distribution partnerships may all require metadata updates. For instance, re-releasing content in audiobook form or digitising a backlist title warrants a metadata review to ensure current and prospective readers find accurate, relevant information.\nChanging Technology and Metadata Standards Advances in technology, from artificial intelligence to emerging metadata standards, offer enhanced possibilities for capturing and updating metadata. AI, for example, can help enrich metadata with more precise subject tagging, while new metadata formats may offer greater compatibility across platforms and discovery services. Staying current with these tools can help publishers manage metadata more efficiently and enhance discoverability.\nChanging Societal Values As society evolves, so do expectations for inclusive and socially responsible metadata. Utilising new categorisation codes, such as those for the United Nations Sustainable Development Goals, can align metadata with emerging social goals. Similarly, publishers may need to revisit keywords and category codes to reflect changes in language, balancing the integrity of historic records with the need for current, appropriate terminology.\nChanging Industry Priorities Commitments to accessibility and sustainability have prompted developments in metadata. Increasingly, publishers need to be able to use metadata to build a record of sustainable production methods, such as paper sources, printing methods or ink types. New metadata fields for accessibility specifications will also support more inclusive reader experiences going forward. Metadata will play an increasingly vital role in meeting industry standards for accessibility, EUDR and EAA compliance, and environmental transparency.\nChanging Customer and Librarian Expectations Finally, as the metadata expectations of customers grow and the roles and responsibilities of library and collection management professionals develop, teamwork and making good use of available resources are essential. 
Publishers don’t have to tackle this alone. Working with organisations such as Crossref or Book Industry Communication (BIC), signing up to newsletters and webinars, and forming an in-house discovery group are all great ways to share ideas and best practice, and to ensure your metadata workflow stays adaptable and responsive. Be part of the conversation now rather than struggling to keep up down the line!\nWhat are some challenges and opportunities with metadata? JM: Metadata that establishes permanence is a real opportunity in a digital landscape where content can move or be taken down, links can rot, and website certificates can expire. Persistent identifiers such as ORCID iDs for people and DOIs for content are key examples of metadata that establish enduring routes to, and provenance of, published digital content.\nKM: Metadata creation, maintenance and change have long been seen as manual processes. AI tools offer a real opportunity for metadata creation and review, especially for keywords and classification codes, at a scale and speed that has the potential to transform metadata workflows, particularly for backlist transformation. A challenge we face for monograph metadata more specifically is that much of the scholarly metadata infrastructure is built around the journal article, and it can be difficult to fit longer-form content into these systems of discovery.\nMT: Metadata is crucial. Good metadata (complete, accurate, and timely) is the basis for smooth integrations and easy discovery interactions with eResources. Bad metadata (inaccurate, incomplete, late) will be the main reason for undiscovered content. At this point, the eResources industry is still based on different versions of the same metadata, which is the leading cause of problems. It is probably time to start considering a unique-record approach: a single complete, accurate record that could be used by different systems for different purposes. I know there are many details to define here, but it is not impossible and could solve many known issues.\nHow do you ensure the quality and completeness of your metadata? Do you have ways of auditing it? SP: Validation of data is really important, so choosing or building a system that’s set up to do this is an important foundation. It’s straightforward to check for completeness of fields, and I run daily checks on our book metadata to make sure there’s nothing missing in the files feeding out. Quality can be more challenging to monitor. Feedback from data recipients is key, and accreditation schemes such as the BIC Metadata Excellence Award are a great way to benchmark progress. Good training and clear documentation help to make sure that everyone involved in creating and updating metadata understands exactly what they need to do and the standards they need to meet.\nKM: Earlier this year we completed a year-long data cleansing project as part of our move to a new title management database. This gave us the time to address gaps in backlist metadata as well as to identify any inconsistencies across records for the same book, and to enrich key metadata fields like classification codes, keywords and PIDs. 
For frontlist titles, each person owns a number of fields to ensure they are complete before a book\u0026rsquo;s metadata is distributed – some of these have validation tools which will prevent a book\u0026rsquo;s metadata from being sent out unless it is complete.\nMT: Strict and consistent internal processes are essential to ensure quality and completeness. Following the relevant standards and industry recommendations helps to keep quality high. Random manual checks and system-based checks help to catch anything that slips through. We also carry out projects focused on specific aspects of the metadata; this building-blocks approach ensures the different data layers are as good as possible. As with any project, metadata projects should have specific goals, outcomes, resources, and documentation.\nHow do you know if (and how much) metadata helps achieve your goals? JM: Take any available opportunities to find out what people think of your metadata. Via library conferences, institutional customer feedback, and working with the library team at our home institution, we’ve had some really useful and interesting conversations about MUP’s metadata and where we can improve it to make it as relevant as possible for different stakeholder needs.\nMT: Customers and Discovery partners will inform us if something is incorrect. Usage data is also a good indicator of how healthy our metadata is. Following industry standards is another good reference point for assessing the metadata. Finally, metadata is only good when we know what we want to use it for, so always considering what we are trying to achieve helps us understand how effective the metadata is.\nKM: As the others have noted here – and we represent a range of different types and sizes of publishers – measuring the direct impact of metadata is an ongoing challenge. We think about the different end users who might encounter our metadata further down the supply chain – retail customers searching on Amazon, librarians filtering results on purchasing platforms, researchers finding our books and journals through citations on popular online search engines – and consider what elements of our metadata might help reach those people in the right ways.\nJM: Ideally, you’ll see an uplift in sales or usage for every metadata element that you add, review or expand, although it can be challenging to quantify and prove a direct correlation between richer metadata and higher revenue or discoverability, as there will be other factors involved. For my Operations team, what is certain is that richer, more comprehensive metadata means fewer errors are thrown up by the distribution systems and feeds we use, which means colleagues save time and gain productivity by not having to resolve and rerun failed jobs, chase missing information from other teams, or manually send information to third parties. My job is also made easier because things like the size and weight of every printed product are recorded in our bibliographic database as standard, easy to report on and analyse, which helps with forecasting costs for inventory storage or shipping. 
Metadata can be powerful.\n", "headings": ["Authorship","Production","Discovery","Format","Frequency","Scope","How do we ensure discoverability?","Metadata beyond publication","Changing Publisher Goals and Product Needs","Changing Technology and Metadata Standards","Changing Societal Values","Changing Industry Priorities","Changing Customer and Librarian Expectations","What are some challenges and opportunities with metadata?","How do you ensure the quality and completeness of your metadata? Do you have ways of auditing it?","How do you know if (and how much) metadata helps achieve your goals?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/publishing/", "title": "Publishing", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/research-nexus/", "title": "Research Nexus", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/sylvia-pegg/", "title": "Sylvia Pegg", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/books/", "title": "Books interest group", "subtitle":"", "rank": 1, "lastmod": "2024-11-29", "lastmod_ts": 1732838400, "section": "Working groups", "tags": [], "description": "The Books Interest Group serves as a resource for Crossref and its participants to surface, discuss and make progress on metadata and workflow issues unique to book publishers. The group comprises Crossref members and nonmembers, and is led by a Chair and Crossref staff facilitators.\nWhat we\u0026rsquo;re working on This year, the group is focused on:\nReviewing book and chapter types in our metadata schema, as well as books representation in JSON outputs.", "content": "The Books Interest Group serves as a resource for Crossref and its participants to surface, discuss and make progress on metadata and workflow issues unique to book publishers. The group comprises Crossref members and nonmembers, and is led by a Chair and Crossref staff facilitators.\nWhat we\u0026rsquo;re working on This year, the group is focused on:\nReviewing book and chapter types in our metadata schema, as well as books representation in JSON outputs. Best practice for books. This guide has had some recent revisions and work on it will continue this year. Bringing others from the industry into discussions on topics of particular interest, such as books on multiple platforms. 
Education and outreach: ongoing, with use cases of particular interest and most recently including Publisher Participation Reports Suggestions for topics are welcome.\nParticipants Chair: David Woodworth, OCLC\nFacilitators: Kora Korzec\nDiane Needham, ACS Emily Ayubi, American Psychological Association (APA) Eva Winer, American Psychological Association (APA) Timothy McAdoo, APA Fatima Abulawi, Atypon Dawn Ingram, Atypon Elli Rapti, Atypon Dan Vernooj, Brill Mike Eden, Cambridge University Press Rachael Kendall, Cambridge University Press Saskia Wenzel, De Gruyter Mike Taylor, Digital Science Allison Belan, Duke University Press Patty Chase, Duke University Press Patty Van, Duke University Press Keara Mickelson, Edinburgh University Press Melissa Kreitzer, Elsevier Marc Segers, Geoscienceworld Jim Beardow, IMF Patricia Loo, IMF Bruce Rosenblum, Inera Wendy Queen, Johns Hopkins University Press Lauren Lissaris, JSTOR Jabin White, JSTOR Bill Kasdorf, Kasdorf \u0026amp; Associates, LLC Ardie Bausenbach, Library of Congress Sharla Lair, LYRASIS Todd Carpenter, NISO Christina Drummond, OAeBU Ursula Rabar, OAeBU/OPERAS Ronald Snijder, OAPEN Claire Holloway, OCLC David Woodworth, OCLC Rupert Gatti, Open Book Publishers Shawna Sadler, ORCID Mark Dunn, Oxford University Press Amber Fischer, Oxford University Press Matthew Treskon, Project MUSE Giovanna Brito Castelhano, SciELO Stephanie Dawson, ScienceOpen Nina Tscheke, ScienceOpen Nicola Parkin, Taylor and Francis John Normansell, The University of Manchester Paige MacKay, Ubiquity Press Tom Mowlam, Ubiquity Press Erich Van Rijn, University of California Press Krista Coulson, University of Chicago Press Charles Watkinson, University of Michigan Press Jeremy Morse, University of Michigan Press Peter Potter, Virginia Tech/TOME Pascal Ssemaganda, World Bank Publications Ron Denton Sun Huh How the group works (and the guidelines) The group meets quarterly and comprises a staff facilitator, a chair and number of members and subscribers who commit to regular attendance and participation. Guest attendance is welcome on a per-meeting basis.\nPlease contact our community team with any questions or to join the calls.\n", "headings": ["What we\u0026rsquo;re working on","Participants","How the group works (and the guidelines)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/about-sponsors/", "title": "Working with a sponsor", "subtitle":"", "rank": 1, "lastmod": "2024-11-26", "lastmod_ts": 1732579200, "section": "Become a member", "tags": [], "description": "We have thousands of small members from all over the world. Being small-scale doesn’t limit your ability to connect your content with the global network of online scholarly research. Each Crossref member also gets to cast their vote to create a board that represents all types of organizations.\nWhy join via a Sponsor? If you publish one journal or thousands, you’re welcome to join our growing community.\nWe know that cost and technical capabilities can be barriers to participation.", "content": "We have thousands of small members from all over the world. Being small-scale doesn’t limit your ability to connect your content with the global network of online scholarly research. Each Crossref member also gets to cast their vote to create a board that represents all types of organizations.\nWhy join via a Sponsor? If you publish one journal or thousands, you’re welcome to join our growing community.\nWe know that cost and technical capabilities can be barriers to participation. 
Joining through a Sponsor can help. Members who take this option have the same obligations and benefits as any other member, but they have someone representing them for Crossref services and they don\u0026rsquo;t pay fees to Crossref directly.\nInstead, the sponsor pays one membership fee to Crossref for all the members that they work with, and sponsors also pay the content registration fees for any content registered by their members. Many sponsors then pass on these fees to members and/or charge the members for their services, so it’s important for members to discuss the agreement carefully with a sponsor before starting to work with them.\nSponsors have to fulfill strict criteria to be accepted. Different sponsors offer different services, but most:\nFacilitate content registration with Crossref on behalf of the members they work with Provide administrative, technical, and (if applicable) local language support Handle Crossref billing (which is in US$) on behalf of these members Are able to receive payment for their services in local currency. How to join via a Sponsor Some Sponsors only work with organizations that are connected to them in some way - for example, they may be part of the same university consortium or the organization may be using the Sponsor\u0026rsquo;s publishing platform. You can find a list of Sponsors below. If they\u0026rsquo;re based in your region and have contact details below then they will be happy to discuss acting on your behalf for Crossref membership.\nDo contact them directly if you\u0026rsquo;d like to explore working with them; we cannot broker that arrangement for you. If you agree to work together, your Sponsor will send you a dedicated link to apply for Crossref membership under their sponsorship. Please don\u0026rsquo;t use the standard link to apply for membership that you can find on this website.\nUpon receipt of your application, we usually provide your Sponsor with your prefix and Crossref account credentials within four working days and they will be in touch with you from there.\nFind a Sponsor The following organizations are currently acting as Crossref Sponsors. Jump to your region:\nAsia Pacific Central and South Asia Central/Eastern Europe Latin America/Caribbean North Africa/Middle East Sub-Saharan Africa US/Canada Western Europe Asia Pacific +- APUB (South Korea)\rAPUB was established in 2018 and provides IT services such as online publishing, online paper submission review system and journal homepages for many academic societies.\nContact Homepage:http://www.apub.kr/\nEmail: master@apub.kr\n+- CNPIEC Kexin Co., Ltd (China)\rCNPIEC Kexin is an AI technology company affiliated to China National Publications Import and Export (Group) Co., Ltd. (CNPIEC). CNPIEC Kexin focuses on providing academic services solutions for the scholar community. It is a thriving innovative technology company with the advanced R\u0026amp;D team and well-developed infrastructure, aiming to breakthrough the frontiers of knowledge.\nIn 2023, CNPIEC Kexin launched the journal platform “Auto Operation”, particularly helping journals to build independent brand operation and smooth academic communication channels. DataDimension is the one-stop scientific research service platform providing whole process scientific research tool solution for researchers.\nFrom 2024 onwards, CNPIEC Kexin becomes a Crossref Sponsor, helping China’s academic publishers to facilitate their DOI registration of contents with Crossref service. 
We are committed to playing our part in better serving members with both Crossref services and our trusted products to accelerate their publishing and increase the influence of their research outputs.\nContact Address: No. 16 Gongtidong Road, Chaoyang, Beijing\nWebsite:https://note.kxsz.net/\nEmail: gongyingyi@kxsz.net\n+- DocuHut Co., Ltd. (South Korea)\rDocuHut is a professional publisher in Korea. We provide total solutions from first publication, online journal, manuscript submission \u0026amp; review system, manuscript editing, copyediting, proofreading, printing, JATS XML, to DB indexing.\nContact Homepage: http://www.docuhut.com Email: support@docuhut.com Phone: +82-2-2274-6771\n+- Guhmok Publishing Company (South Korea)\rGuhmok is an academic publishing company located in Korea. We provide one-stop solutions for journals—Manuscript submission system, copyediting, proofreading, printing, JATS XML, journal site, etc.— and are currently working only with Korean organizations.\nContact guhmok@guhmok.com Phone: +82-2-2277-3324\nHomepage: http://www.guhmok.com\n+- Neliti (Indonesia)\rNeliti is a web-based software platform for creating, hosting, managing and indexing institutional repositories, academic journals and conference proceedings.\nNeliti offers DOI registration as an auxiliary service to help institutions assign DOIs and utilize other Crossref services.\nNeliti runs a helpdesk for Crossref-related enquiries which is available for all institutions, including those not directly working with Neliti. They can be contacted via email at hello@neliti.com. For more information, please visit https://www.neliti.com/.\nContact hello@neliti.com\n+- Publishing House for Science and Technology, Vietnam Academy of Science and Technology (Vietnam)\rProviding support services for journal publishing, Publishing House for Science and Technology is a not-for-profit publisher of open access scientific research journals. We have over 10 years of scientific publication experience. We are a group of highly-motivated educationists, researchers and technology enthusiasts who are committed to promoting open access publications, thus enabling speedy propagation of quality research information. We are eager to share our experience and help any journal willing to register their content with Crossref.\nOur services include:\npreparation and validation of article metadata design and allocation of DOIs to articles depositing validated metadata to Crossref as per their XML format training and support. Contact To learn more about our full suite of editorial services, please reach out to us at cip@vjs.ac.vn.\n+- Relawan Jurnal Indonesia (Indonesia)\rRelawan Jurnal Indonesia (RJI)/ Indonesian Journal Volunteers were established after a meeting of the journal managers in Indonesia.\nRepresented parties agreed upon a voluntary spirit and contributions of thoughts, energy and materials related to electronic journal management with other journal managers in other universities, research institutions, and other institutions publishing journals throughout Indonesia without any differentiation. To realize its vision, Relawan Jurnal Indonesia registered its organization with the Ministry of Law and Human Rights of Indonesia numbered AHU-0005712.AH.01.07.YEAR 2017, on Certification of Legitimately Established Organization of the Relawan Jurnal Indonesia/Indonesian Journal Volunteers. 
The vision of Relawan Jurnal Indonesia/Indonesian Journal Volunteers, is to assist journal managers\u0026rsquo; publication processes into at least a national-level of high quality, and well-respected electronic journal management.\nType of organization we can work with:\nPublishers or Universities using Open Journal System for their journals or Open Monograph Press for their Books, or Eprints for their Repositories. However, any similar platforms are welcome. Come from Indonesia or Southeast Asia. Other regions in Asia are welcome if they are not a profit oriented organization. Languages: English, Bahasa (Indonesia). Organization types: Universities, Governments, or other non profit organizations are welcome. Fee category: We welcome publisher or organization with total publishing revenue or expenses less than USD 1 million USD/Year Content types: Journals, books, conference proceedings, conference papers, theses, dissertations. For more information see: http://doi.relawanjurnal.id/\nContact Email: contact@relawanjurnal.id\n+- Sin-Chn Scientific Press Pte. Ltd (Singapore)\rSIN-CHN SCIENTIFIC PRESS is a publishing company in Singapore and it is the first Crossref Sponsoring Organization there. As a major business in addition to publishing scientific journals, books and datasets, the company focuses on introducing advanced standards, technological platforms, and services from the international publishing industry, to provide local services for publishers, researchers, and research institutions in Singapore and Malaysia. Meanwhile, the company will also provide technical support and consultation services, to enhance the international influence of the publishers and academic journals.\nContact: doi@sin-chn.com Address: 73 UPPER PAYA LEBAR ROAD #07-02B, CENTRO BIANCO, Singapore 534818 Website: https://doi.sin-chn.com/ +- Additional Sponsors (Asia Pacific)\rAiriti, Inc. (Taiwan) Aliansi Jurnal Ekonomi dan Bisnis Indonesia (Indonesia) Asosiasi Pengelola Jurnal Indonesia (Indonesia) Conferences.id (Indonesia) EArticle (South Korea) Forum Pengelola Jurnal Manajemen (Indonesia) Inforang (South Korea) Japan Science and Technology Agency (JST) (Japan) Korea Scholar (South Korea) Korean Association of Medical Journal Editors (KAMJE) (South Korea) Korean Studies Information (KSI) (South Korea) Kyobobook Center (South Korea) M2PI (South Korea) Nurimedia Co., Ltd. (South Korea) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) Research and Social Study Institute (ReSSI) (Indonesia) UniveID and Pubmedia Group (Indonesia) Vietnam Citation Gateway (Vietnam) Central and South Asia +- Academic Research Publishing Group (Pakistan)\rAcademic Research Publishing Group (ARPG) is a publisher of peer-reviewed international journals. ARPG was established in 20015. We have over 6 years of scientific publication experience. ARPG promote research the World over in numerous disciplines including science, engineering, management, technology, social sciences, economics, education, language and literature. ARPG is an open-access publisher because all journals of international repute provide free access to the complete text of articles just on single click. ARPG publishes articles after they are peer-reviewed and edited by some World’s leading researchers, authors and scholars. ARPG allows anyone in the World to adapt copy and use the work printed. 
The original work and sources are cited properly. We are eager to share our experience and help any journal willing to get DOI number for their content. Our DOI services include:\nPreparation and validation of the article’s metadata. Design and allocation of DOI numbers to articles. Contact Website: https://www.arpgweb.com/ Email: info@arpgweb.com\n+- Applied and Natural Science Foundation (India)\rApplied and Natural Science Foundation is a not-for-profit publisher of open access scientific research journals. We have over 10 years of scientific publication experience. We are a group of highly motivated educationists, researchers, and technology enthusiasts who are committed to promoting open access publications, thus enabling speedy propagation of quality research information.\nWe are eager to share our experience and help any journal willing to get DOI number for their content. Our DOI services include:\nPreparation and validation of the article’s metadata. Design and allocation of DOI numbers to articles. Deposit validated metadata to Crossref as per their XML format. Training and support. Contact To learn more about our full suite of editorial services, please reach out to us at info@ansfoundation.org\n+- Digital Publishing Central Asia (DPCA) (Kazakhstan)\rЦифровое издательство Центральной Азии (ЦИЦА) — это организация, преследующая цель стандартизировать огромное количество научных публикаций в Казахстане. ЦИЦА специализируется на оказании широкого спектра услуг, связанных с консалтингом и аудитом изданий с целью обеспечения их соответствия требованиям ведущих библиографических баз данных. Полная автоматизация от момента создания научных публикаций автором до редакционных процессов и управления сайтом.\nЦИЦА оказывает техническую и методическую поддержку по работе с DOI и другими сервисами Crossref на русском и казахском языке.\nСвязаться телефону: +7 (727 ) 323-1-300, + 7 (708) 888 08 10\nпочтой: info@dpca.kz\nВеб-сайт: https://www.dpca.kz/\nКонтактное лицо: Самат Несипбаев\nDigital Publishing Central Asia (DPCA) is an organisation that aims to standardise a huge number of scientific publications in Kazakhstan. DPCA specialises in providing a wide range of services related to consulting and auditing of publications to ensure their compliance with the requirements of leading bibliographic databases. Full automation from the moment of creation of scientific publications by the author to editorial processes and website management.\nDPCA provides technical and methodological support for working with DOI and other Crossref services in Russian and Kazakh.\nContact Telephone: +7 (727) 323-1-300, +7 (708) 888 08 10\nEmail: info@dpca.kz\nWebsite: https://www.dpca.kz\nContact person: Samat Nessipbaev\n+- Erudite Data Science \u0026amp; Analytics (India)\rPurpose: Rendering web technologies support \u0026amp; research datasets evaluation services to publishers. We work with both academic and corporate publications. 
Our services include web development, web security, web metadata development, web publishing workflow design, SEO, video \u0026amp; text based media content creation \u0026amp; promotion, academic publishing support services, data creation \u0026amp; evaluation, data refinement \u0026amp; applications.\nRead more about Publishers Evaluation Criteria at https://eruditedata.com/publishers.html.\nContact Address: Erudite Data Science \u0026amp; Analytics, #1/1, Tagore Garden, Yamuna Nagar (HR) India PIN 135001\nEmail: service.desk@eruditedata.com\n+- EScience Press (Pakistan)\rEScience Press is a not-for-profit publisher of scientific research journals and books. Our team is committed to achieving the highest quality science to drive progress in research and innovation.\nFor over 10 years we have been helping students, researchers, scientific societies, and academic institutions to achieve their goals in an ever-changing world. By partnering with leading academic institutions and learned scientific societies, we support researchers to communicate discoveries that make a difference.\nEScience Press provides cloud hosting and professional website solutions for institutions/organizations publishing scientific journals. We offer fine-tuned, blazing fast and managed editorial workflow system, Open Journal System (OJS) hosting with a professionally run server environment that includes an up-to-date, secure version of the software plus training and support. We help our member organizations collaborate internationally on research programs that we coordinate in almost every scientific domain.\nEScience Press is partnering with DOI Pakistan, and works as a Crossref sponsor to offer all of its core services to sponsored members globally.\nContact Website: https://esciencepress.net/ Email: info@esciencepress.net Phone: +17249900670\nDOI Support\nWebsite: https://doi.org.pk Email: info@doi.org.pk Phone: +923006812118\n+- Informatics (India)\rInformatics was promoted three and a half decades ago with a vision to be a leading global player in the electronic information Business. Focusing on both domestic and global markets, we serve global publishing clients by providing editorial and compilation services that includes – Online Journal Management System (OJMS), digitization of backfiles, DOI, Plagiarism Check Service, Layout Designing, online \u0026amp; print publishing.\nContact Phone: 9900236751 Email: publishing@informaticsglobal.ai\n+- Institute of Metallurgy and Ore Benefication (Kazakhstan)\rKazakh:\n«Металлургия және кен байыту институты» АҚ Қазақстан Республикасындағы ғылыми жарияланымдарды стандарттауды мақсат ететін DOI (Digital Object Identifier) жүйесін енгізуді ұсынады. Бұл жүйе арқылы ғылыми жұмыстарды Crossref арқылы әртүрлі тәсілдермен табуға болады.\nБайланыс:\nТелефон: +7 (727) 298-45-02 Электрондық пошта: imio@imio.kz Веб-сайт: http://kims-imio.com/index.php/main Байланыс тұлғасы: Гулжайна Касымова\nEnglish: JSC \u0026ldquo;Institute of Metallurgy and Ore Beneficiation\u0026rdquo; offers to implement the DOI (Digital Object Identifier) system, aimed at standardizing scientific publications in the Republic of Kazakhstan. 
This system allows others to find your scientific works in various ways through Crossref.\nContact:\nTelephone: +7 (727) 298-45-02 Email: imio@imio.kz Website: http://kims-imio.com/index.php/main Contact person: Gulzhaina Kassymova +- KVR Scientific Services (KVRSS Group) (India)\rKVR Scientific Services (KVRSS Group) offers its services to national and international clients with unique support channel availability 24/7/365 days, in multiple languages. We provide ready-to-use applications/solutions, tech infrastructure, and resources that can make publishers\u0026rsquo; work easier. Our services are as follows:\nGeneral services\nJournal management (JMS or OJS); Book publishing house management; Conference/scientific event management; Web development \u0026amp; hosting; Application/software development; Typesetting (designing papers for the perfect layout); Editing/proofreading of the manuscripts; Support for the Open Journal System (OJS); Indexing of published papers in best possible indexing platforms; Digital repository for publishers; Plagiarism check service (Turnitin); SEO support; Domain registration; Cloud storage; Content writing; Journal indexing; Book/book chapter indexing; Google workspace and related services for professional emails with your domain; Microsoft 365 for professional emails with your domain; Zoho and related services for professional emails with your domain. Crossref services\nDOI registration for journals, books, conference proceedings, conference papers, theses/dissertations, datasets, media, and code; Preparation and validation of article metadata; Design and allocation of DOIs to manuscripts; Deposit of validated metadata to Crossref per their metadata schema; Content registration \u0026amp; updating; Reference linking; Crossmark; Similarity Check (powered by Turnitin\u0026rsquo;s iThenticate); Cited-by (DOI-based citations) registration \u0026amp; integration. Contact Website: https://kvrssgroup.com/pc/publishing\nEmail: mou@kvrssgroup.com\nPhone/WhatsApp: +91-6281333383\n+- MRI Publication Pvt. Ltd. (OPC) (India)\rAt MRI we provide end-to-end publishing services to authors and publishers. Our service portfolio consists of online submission on a manuscript review system with user-friendly access for the authors, reviewers, and editors; online publishing; DOI submission; XML-based process; digital preservation; working closely with the indexing bodies; and content promotion.\nContact For more information, please visit our website https://mripub.com or contact Pradeep Tiwari at pradeep@mripub.com.\n+- NSM Limited (Bangladesh)\rPurpose: NSM Limited is dedicated to supporting academic and research institutions in Bangladesh by providing technical and administrative services for scholarly content management. 
Our mission is to enhance the visibility and accessibility of research outputs through Crossref services.\nWho We Work With: We collaborate with a diverse range of organizations, including universities, research institutes, and independent publishers, to facilitate their participation in the global research community.\nServices Provided: Our services include DOI registration, metadata management, training, and support for utilizing Crossref tools and best practices.\nContact Name: Showmik Zaman Chowdhury\nChief Technology Officer\nEmail: showmik@nsmlimited.com / showmik10zaman@gmail.com\nAddress: G.L Ray Road, Kamal Kasna, Rangpur – 5400, Bangladesh\nPhone: +8801740206205\nWebsite: www.nsmlimited.com\n+- QTanalytics India (India)\rQTanalytics was established in 2016 with a leading data analytics, research, and training provider firm based in Delhi, India. We operate globally, serving both national and international clients, and we provide complete journal management services to publishers, offering DOI, Plagiarism Check Service, Layout Designing, online \u0026amp; print publishing to scholars, academic institutions, societies, associations, and corporations.\nWe have an experienced team to serve publishers joining Crossref. We offer content registration, reference linking, Crossmark, and Similarity Check.\nContact Send us a message: https://qtanalytics.in/contact\nURL: https://qtanalytics.in\nEmail:qtanalyticsindia@gmail.com Phone: +91-9458270556\n+- Research and Innovation Center LLC (Uzbekistan)\rOur goal is to support science and innovation projects, improve the quality of scientific research, and support the integration of science and industry.\nContact For more information, please visit our website https://iric.uz or contact us at support@iric.uz / uziric@gmail.com\n+- Sequence Research \u0026amp; Development Private Limited (India)\rInnovation through Research Company is aiming to empower the research community around the world with the help of technology \u0026amp; innovation. Provide ready-to-use solutions, community platforms, the technology infrastructure, and resources that can make the researcher’s life easier. Providing services to:\nPublisher, University, Research Scholar, Open Journal System (OJS), Open Monograph Press (OMP), Open Preprint Systems, Dspace or their Repositories Organization types: Universities, Governments, or other non-profit organizations are welcome. Content types: Journals, books, conference proceedings, conference papers, theses, dissertations. 
Website: https://sequencernd.com Email: sequencernd@gmail.com\n+- Scientific Research Solution Private Limited (SCIRESOL) (India)\rScientific Research Solution Private Limited (SCIRESOL) offers a wide range of journal maintenance and management services to Public Universities, Deemed to be Universities, Societies, and individual journal owners.\nWe provide a dedicated platform to support and maintain scientific journals, helping in journal allied services like DOI, XML, ePUB, PDF, and metadata support to the journal owners.\nContact Website: www.sciresol.co, www.manuscriptcommunicator.com\nEmail: info@sciresol.com\nPhone: -91+9845883696\n+- Technology Research and Innovation Markaz (SMC-Private) Limited (Pakistan)\rWelcome to Technology Research and Innovation Markaz (TRIM) - established in 1999 and registered with the SECP - your one-stop solution for research journal websites; journal management system (OJS) and journal indexing; as well as Crossref services including DOI (content registration); reference linking; Crossmark; Cited-by; and Similarity Check, for universities, think tanks, and research organizations.\nContact Website: www.trim.pk Email: info@trim.pk\nPhone (WhatsApp): +92 333 520 9933\n+- Ubitech Solutions Pvt Ltd (India)\rUbitech Solutions Private Limited was established on 17 August 2006. They provide an end-to-end Journal Management System for publishers, academic institutions, societies, and associations. They also provide services like journal website designing, custom XML creation and typesetting along with CrossRef services: DOI (content registration), reference linking, Crossmark, Cited-by and Similarity Check.\nContact Website: https://ubijournal.com\nEmail: sales@ubitechsolutions.com\nPhone: +91 98262 74403, +91 7773000234\n+- VS Infosolution (India)\rWe at VS Infosolution offer web technologies support \u0026amp; services to publishers. We work with both academic and corporate publications. Our services include web development; web security; web metadata development; web publishing workflow design; SEO; video and text-based media content creation and promotion; academic publishing support services; data creation and evaluation; data refinement and applications. We are eager to share our experience and help any journal/publisher from South East Asia willing to get DOI number for their content. Our DOI services include:\nPreparation and validation of the article’s metadata. Design and allocation of DOI numbers to articles. Deposit validated metadata to Crossref as per their XML format. Provide other services of Crossref like Crossmark, Reference linking and Funder Registry etc. Training and support. Contact Email: info@vsinfosolution.com\nAddress: VS Infosolution, G-20, Phase II, Heramb Paradise, Wayale Nagar, Kalyan (West), Thane (MS) India. Pin: 421301\n+- Wanfang Data (China)\rWanfang Data focuses on science \u0026amp; technology, academic contents over 25 years as the leading information service provider in China. It was founded by Institute of Scientific and Technical Information of China (ISTIC) in 1993, which is subsidiary of Ministry of Science and Technology of P. R. China, became the first state-owned shareholding high technology enterprise in information service area in 2000.\nIn 2007,Wanfang Data \u0026amp; ISTIC established the first DOI registration service in Asia - \u0026ldquo;Chinese DOI\u0026rdquo; jointly. 
By the end of 2017, we’ve registered more than 27 million DOIs for contents published in China, including 25 million journal papers from 7400 journal titles, 2 million dissertations, science data sets, books, conference proceedings, etc.\nIn 2013, Wanfang Data became a Sponsoring Affiliate of Crossref, to help academic publishers in China to register DOIs for their English contents with Crossref service. Our agent service for Crossref combined with Chinese DOI service can help publishers promote the influence for their English contents quickly and widely, both in China and abroad.\nContact cuixl@wanfangdata.com.cn, doi@istic.ac.cn\nPhone: +861058882665\nAddress: Room 216, 15 Fuxing Rd., Beijing China, 100038\nQQ Group: 277120936 (please note the name of your journal or organization when you join the group)\nWebsite: http://www.chinadoi.cn, http://www.doi.org.cn, http://www.wanfangdata.com.cn\n+- Additional Sponsors (Central and South Asia)\rAcademic Journal Incorporated (Uzbekistan) Advance Educational Institute and Research Center (Pakistan) Clever Consult (Kazakhstan) EScience Press (Pakistan) I Edu Group (Uzbekistan) Marwah Infotech (India) MiCrewSoft (Pakistan) Mongolian Digital Knowledge Solutions LLC\t(Mongolia) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) Tadqiqot (Uzbekistan) TechWheels.net (India) Central and Eastern Europe +- ASOS Publishing (Türkiye)\rDergi Platformu for academic journals published anywhere in the world, in Turkey or in electronic media hosting and management services offering editorial process. All processes are executed automatically by the system. It has an easy to use interface. Users and referees are informed about the broadcast status via regular e-mail and sms. Advanced statistics such as individual and regional (city-based) downloads and readings of articles are provided by the system. You can use the system from ready-made designs or with an tailored interface. Articles uploaded to the system can be submitted to the similarity test automatically via intihal.net. Articles published for journals scanned in SOBİAD can be indexed directly through the SOBIAD - Citation Index . There is a messaging module between authors and editors within the system.\nContact Website: http://dergiplatformu.com For more information: info@dergiplatformu.com\n+- Association For Science (Georgia)\rAssociation For Science is a non-profit (non-commercial) legal entity, staffed with highly qualified staff, for whom word and work are inseparable concepts for us. That is why our association has implemented various projects since its establishment, namely: Association for Science created an e-journal platform and cooperates with Georgian and non-Georgian organizations, publishing houses, universities, individual authors etc. See more: https://journals.4science.ge, the portal of monographs https://monographs.4science.ge/index.php/SS and e-books https://4science.ge/en/ebooks.\nBy registering in various indexing databases, we also take care of the future promotion of electronic materials placed in our portals. We offer support on all core elements of the publishing process. 
We have clients from all around the globe across all academic disciplines and support English and Georgia languages.\nContact Website: https://4science.ge\nEmail: info@4science.ge\n+- Digital Publishing Service LLC (Ukraine)\rDigital Publishing Service is an organization that provides digital and publishing services to universities and scientific institutions and is an official partner of Crossref with the status of Sponsor and authorized to represent universities, scientific institutions and publishing houses (editions) in Crossref.\nDigital Publishing Service runs a helpdesk for Crossref-related enquiries, which is available for all institutions, including those not directly working with Digital Publishing Service.\nThey can be contacted via email at doi@csr.com.ua.\nOur services include:\npreparation and validation of article metadata; design and allocation of DOIs to articles; depositing validated metadata to Crossref as per their XML format; training and support. Contact Website: https://csr.com.ua/doi.en\nYou can contact us by email at info@csr.com.ua\n+- ECO-Vector (Russia)\rEco-Vector — group of publishing companies Publishing scientific periodicals and books; publishing and promoting Russian journals, books, publications, and information.\nOnline publishing of scientific journals on our editorial and publishing platform EVESYST.\nOrganization of national subscription to scientific resources by Elsevier, Wiley, and Cambridge CDC (Russian Foundation for Basic Research).\nOur online academy SciCraft conducts scientific events and information as well as offering training, courses, and webinars for scientists.\nContact filippov@eco-vector.com\nhttps://eco-vector.com/service/\n«Эко-Вектор» — российская группа компаний Издание научной периодики и книг: выпуск и продвижение российских журналов, книг, публикаций, данных.\nРазмещение научных журналов и обеспечение работы редакций в сети Интернет на собственной редакционно-издательской платформе EVESYST.\nОрганизация национальной подписки на научные ресурсы Elsevier, Wiley, Cambridge CDC (РФФИ).\nПроведение научных мероприятий SciCraft и информационно-консультационных тренингов, курсов и вебинаров для ученых на базе собственной Онлайн Академии\nКонтакты filippov@eco-vector.com\nhttps://eco-vector.com/service/\n+- Editorum (Russia)\rUniversal publishing platform EDITORUM (www.editorum.io) for publishers/universities and authors/scientists. Full automation from science paper creation to editorial work-flow processes for educational and scientific publishers, university publishers and other publishing organizations and a wide range of content such as: (journals, monographs, textbooks, conference proceedings etc.) and CMS system.\nУниверсальная издательская платформа EDITORUM (www.editorum.ru) — программное онлайн решение для научных издателей, организаций и ученых. Полная автоматизация от момента создания научных публикаций автором до редакционных процессов и управления сайтом. Система позволяет организовать и автоматизировать работу издательства со всеми типами научного контента (журналы, монографии, учебники, материалы конференций и т.п.)\nContact Phone: +7 (499) 350-54-81\nsupport@editorum.ru\nhttps://editorum.ru\nКонтакты Phone: +7 (499) 350-54-81\nsupport@editorum.ru\nhttps://editorum.ru\n+- Han Yazılım Co (Türkiye)\rAs Han Yazılım Co. we are serving our customers with our software solutions since 2001. 
Based on the requirements of daily improvements, we did update our abilities as well and jumped to .NET technology in 2006 additional to traditional .asp and .html software. We have our own server in İzmir, and we keep all our projects in there. After being rewarded by the Ministry of Education as the second best project education software holder in 2006, we fired up our operations and turned into a national service provider rather than a localized one. With our customers located in the local market and even international market – from Istanbul to Paris – we are adopting internet technology more and more for our business. Our team consists of academic professionals, software engineers, and diligent young people. We always believe that a successful project needs imagination, defining and accurate planning and correct strategy, considering and foreseeing all possible circumstances, getting the benefit of technology, and a perspective of modern IT approach.\nWe have been acting as a Sponsor for Crossref since 2012 and currently work with hundreds of publishers. The Crossref services we support include Content Registration, Similarity Check, Crossmark and Cited-by. We offer our customers a custom web app (http://doi.doidestek.com) which they can access and easily submit or edit DOIs and metadata with our simple, user-friendly interface. Demo accounts are available for testing. We also provide our Sponsored Members tutorial videos in order to let them start registering content right away. We are ready to work with any publisher who is willing to get benefits of using CrossRef services.\nContact Any potential Sponsored Member (international) may get in touch with us by sending an email to info@hanyazilim.com or they can reach us via Whatsapp (+90 537 727 8803). For those local customers - within Turkey - our call center number is +90 850 303 1955.\n+- METADATA, LLC (Republic of Moldova)\rMETADATA aims to promote and provide permanent access to various digital research resources, thus putting into practice Open Science practices among different stakeholders. We assign DOIs to research outputs, improving their discoverability, accessibility, citation and reuse.\nWe welcome collaboration with research organisations, universities, government and not-for-profits.\nFeel free to contact us for more details.\nContact Email: info@metadata.md\n+- National Academy of Sciences of Ukraine (Co. LTD Ukrinformnauka) (Ukraine)\rThe subscription agency Ukrinformnauka started its activities in 2009. The purpose of the agency is to inform the scientific community about the publishing activities of the institutions of the National Academy of Sciences of Ukraine, as well as to simplify the search for and ordering of scientific periodicals and monographs.\nIn 2010, Ukrinformnauka was included in the State Register of publishers, producers, and distributors of Publishing Products.\nIn 2014, Ukrinformnauka became a Crossref sponsor and supports the initiative to provide open lists of references under the Initiative for Open Citations program.\nUkrinformnauka has focused its efforts on ensuring the effective use of the opportunities that open up to Crossref members after registering metadata in the Crossref database.\nContact Website: https://u-i-n.com.ua/en/\nEmail: ukrinformnauka@gmail.com\nPhone: +38 (044) 288-03-46\nMobile: +38 (050) 154-77-83\n+- National and University Library in Zagreb (Croatia)\rNational and University Library in Zagreb is the host institution of the Croatian DOI Office. 
The role of the Office is to act as an intermediary between Croatian publishers of scientific journals and the Crossref DOI registration agency, manage DOI-related administrative operations (e.g. DOI membership, payment of DOI services), provide technical support and organise the promotion of the DOI system.\nIt intermediates in the assignment of DOIs and Content Registration for titles, volumes, issues and articles of Croatian current online scientific and professional journals whose publishers are interested in membership in the DOI system, or those that are already members of the system but wish to use the advantages available as part of services provided by the Office. The standards required for DOI assignment for journals are that they regularly publish full-text articles in the Croatian online environment, that their bibliographic data and other related details are available according to Crossref recommendations and that they have an ISSN.\nMembership through the Croatian DOI Office enables publishers to use a unique prefix, or, in the case of publishers who were part of the DOI system before registering with the Office, to keep using the prefix that they were assigned at their first independent DOI registration request. The Office provides its technical support through a newly developed unique national DOI system (DOI-HR). The National and University Library in Zagreb, as an authorised archive, is responsible for the permanent storage of metadata and objects assigned DOIs. Thus, in the case that an object disappears from the internet, its DOI is redirected to its archived copy in order to ensure the permanency of its link to the object. The expenses for all DOI services and membership during one year for publishers applying for DOIs through the Croatian DOI Office are covered by subsidies provided by the Croatian Ministry of Science and Education.\nContact Homepage: http://www.nsk.hr/doi/ E-mail: doi@nsk.hr\n+- National Science Library Development Foundation (Georgia)\rNational Science Library Development Foundation is a a nonprofit organization. Our organization provides publishing infrastructure to universities, research institution and libraries throughout Georgia. Our platform of academic e-journals (openjournals.ge) was designed to meet the needs of editors, writers, readers and publishers in open access online publishing.\nContact Homepage: www.openjournals.ge Email: natali.g@sciencelib.ge +- NEICON (Russia)\rNot-for-profit Partnership National Electronic Information Consortium (NEICON) was established in 2002 as an independent union of Russian scholar and educational organizations which produces and use science information. The membership is free. Currently NEICON joins over 1000 scientific and educational organizations from Russia. Today NEICON works on various directions\nInformation support on over 400 international resources and databases Education Center on legal and practical matters Conferences, seminars, master-classes, etc. Represents ORCID, COUNTER, DOAJ, Crossref Elpub editorial platform Publishing House National Open Access Repository Project (NORA) NECON runs a Russian helpdesk for Crossref queries - this is available even to members not cooperating directly with NEICON. They can be contacted by phone on: +7 (499) 754-99-93, or via email: crossref@neicon.ru. See: https://elpub.ru/crossref\nNEICON cooperates with all types of organizations.\nНекоммерческое партнерство Национальный электронно-информационный консорциум (НЭИКОН) был образован в 2002 г. 
как независимое объединение российских научных и образовательных организаций-потребителей и генераторов научной информации. Членство в Консорциуме для всех типов организаций – бесплатное. На настоящий момент НЭИКОН в своем составе имеет более 1000 научно-исследовательских и образовательных учреждений России. Сегодня НЭИКОН – это:\nИнформационное обеспечение организаций – более 400 ресурсов Учебный центр способствует повышению информационной и правовой грамотности Проведение семинаров, мастер классов, конференций по актуальным вопросам издательского и библиотечного сообществ Партнерство с ORCID, COUNTER, DOAJ, Crossref Система комплексной поддержки научного журнала – Elpub Издательство Проект «Национальный агрегатор открытых депозитариев российских университетов» (НОРА) НЭИКОН сотрудничает с организациями любого типа НЭИКОН оказывает техническую и методическую поддержку по работе с DOI и другими сервисами Crossref на русском языке. Сервис доступен для всех, а не только для членов Консорциума и пользователей Платформы Elpub. Связаться со Службой можно по телефону +7 (499) 754-99-93, или почтой crossref@neicon.ru.\nContact Maxim Mitrofanov/Максим Митрофанов\nPhone: +7(499)754-99-93 https://elpub.ru/crossref\n+- Open Science in Ukraine (Ukraine)\rOpen Science in Ukraine (OSU) is a project for the comprehensive support of scientific journals on the Internet.\nWhat do we usually do:\nInstalling, Configuring, and Running Open Journal Systems (OJS) Registering content with DOIs Advising, providing useful information to authors and editors Feel free to contact! We are always happy to help!\nContact Contact person: Usenko Pavel Phone: +380667791427 Email: mail@openscience.in.ua\n+- OpusJournal (Bosnia \u0026amp; Herzegovina)\rOur team at Opus Journal comprises more than 30 members. The team possesses a formidable depth of experience and an array of skillsets suited ideally to the enrichment of your technological infrastructure of peer review management and article production in order to allow you to engage in data-driven transformative processes of open access scholarly publishing and to fulfill all open access publishing mandates.\nOur aim is to solve problems and deliver solutions and we think systems should be as convenient as possible for your users. That is why we tailored OpusJournal specifically for the scholarly publishing industry offering a genuinely innovative way to publish.\nContact person: Milan Vukic Phone: +38765616339 Email: info@opusjournal.com Website: www.opusjournal.com\n+- Publishing House “Helvetica” (Ukraine)\rPublishing House “Helvetica” is a team of highly qualified professionals with many years’ experience in preparing and publishing books, study guides, monographs, scientifical periodicals, etc. Its shared sense of purpose is to provide high-quality services. The team gives individual attention to every author and client, and all orders are carried out at a high professional level.\nPublishing House “Helvetica” is a reliable partner for more than 70 state and private universities and specialized institutions of higher education. Besides Ukrainian universities, Publishing House “Helvetica” actively cooperates with 30 universities in neighbouring countries including Hungary, Romania, Slovakia, Poland, Lithuania, Latvia and Estonia.\nToday Publishing House “Helvetica” manages more than 100 scientific journals on various subject areas. 
Services include: working with authors of scientific articles; coordination of scientific review by researchers; proofreading with the involvement of philologists; professional page layout; printing (both monochrome and colour); registration of journals in international scientometric databases; administration of journal websites; and assistance for journal founders in cooperating with the authorities on issues affecting scientific periodicals.\nDon’t hesitate to contact us. We are glad to be of service to you!\nContact Email: mailbox@helvetica.com.ua Telephone: +38 097 713 35 50\n+- Russian Agency for Digital Standardization (RADS) (Russia)\rRussian Agency for Digital Standardization (RADS) is an organization that aims to standardize a large number of scientific publications in Russia and the CIS. RADS is a sponsoring member of the Crossref registration agency and a direct member of the DataCite agency. RADS is authorized to assign sponsored organizations a unique DOI prefix, an integral part of the DOI system.\nRADS contributes to the development of the DOI (Digital Object Identifier) standard in Russia and the CIS countries. RADS assigns DOIs to academic digital data to improve its recognition and subsequent citation. The data is also placed in the RADS repository and in a number of world databases supporting the OAI open-access technology.\nDOI registration with RADS complies with the Decree of the Government of the Russian Federation of November 16, 2015, No. 1236. RADS uses its own software, registered in the Russian Federation (number 2019663477 in the State Register of Computer Programs).\nРусское агентство цифровой стандартизации (РАЦС) это организация, преследующая цель стандартизировать огромное количество научных публикаций в России и СНГ. РАЦС является членом-спонсором регистрационного агентства Crossref и членом консорциума и регистрационного агентства DataCite. РАЦС уполномочено закреплять за организациями уникальный префикс, неотъемлемую часть DOI, для того, чтобы те могли регистрировать идентификаторы цифрового объекта (DOI).\nРАЦС способствует появлению в России и СНГ стандарта DOI (Digital Object Identifier) — идентификатора цифрового объекта. Наша организация присваивает DOI академическим цифровым данным для улучшения их распознания и последующего цитирования. Данные размещаются в репозитории РАЦС и ряде мировых баз данных, поддерживающих технологию открытого доступа OAI.\nРегистрация DOI с РАЦС соответствует Постановлению Правительства Российской Федерации от 16 ноября 2015 г. № 1236 «Об установлении запрета на допуск программного обеспечения, происходящего из иностранных государств, для целей осуществления закупок для обеспечения государственных и муниципальных нужд». РАЦС использует собственное, и зарегистрированное в Российской Федерации программное обеспечение (номер в Государственном реестре программ ЭВМ 2019663477).\nРАЦС оказывает техническую и методическую поддержку по работе с DOI и другими сервисами Crossref на русском языке. Сервис доступен для всех.\nContact Website: http://rads-doi.org/pricing/\nEmail: info@rads-doi.org\nPhone: +7 (343) 286-83-22\n+- Shapovalov Scientific Publishing OU (Estonia)\rShapovalov Scientific Publishing OU is an Estonian scholarly publisher available internationally. We publish peer-reviewed indexed journals, books and theses. Our main goal is to spread Open Science through the publication of open access scientific articles. We focus, as publishers, on the following scientific fields: medicine, pharmacy, jurisprudence, engineering, social science, and information technologies.
We also provide Crossref sponsorship for all services, with our assistance to help you reach your goals.\nWe have a multilingual team of native speakers of various languages to provide you with the best support possible. We\u0026rsquo;re based in Estonia, but provide our services in Europe and the Caspian region in the following countries: Estonia, Finland, Latvia, Lithuania, Moldova, Ukraine, Kazakhstan, Uzbekistan, Azerbaijan, Turkey, Turkmenistan, Georgia, Tajikistan, and Armenia.\nOur lectures and webinars, where we share our knowledge of how Crossref can help you in your work, are freely accessible to everyone.\nFeel free to contact us.\nContact Website: https://crossref.ssp.ee\nEmail: crossref@ssp.ee\n+- URAN Publishing Service (Ukraine)\rURAN Publishing Service is a Ukrainian technology company working for the university and research institution sector of the Eastern European region. The main activity of the company is the development of software for scientific information analytics and dissemination. Acting as a Sponsor of PILA for Ukrainian scholarly publishers, URAN Publishing Service supports a regional DOI registry (the Ukrainian registry of research outputs) and acts as an authorized intermediary between Ukrainian publishers and the Crossref DOI registry.\nMore information about URAN Publishing Service can be found at http://www.uran.ua/~eng/ps-ltd.htm (English) or http://www.uran.ua/~ukr/ps-ltd.htm (Ukrainian).\nOur services for publishers include PILA/Crossref membership benefits as a part of membership in the project \u0026ldquo;Scientific Periodicals of Ukraine\u0026rdquo; (http://journals.uran.ua/). This project is a national technology platform of Ukrainian scholarly periodicals (journals and proceedings). The resource is being developed on the basis of a voluntary, mutually beneficial partnership of publishers, academic libraries and information centers of Ukraine.\nURAN Publishing Service provides publishers with full technical assistance and consultation support. Fee categories (tariff packs) depend on the amount of published content.\nunder 250 articles per year: 10 800 UAH\n251-500 articles per year: 18 960 UAH\n501-1000 articles per year: 37 020 UAH\n1001-2000 articles per year: 72 000 UAH\n2001-5000 articles per year: 175 020 UAH\nContact Local support service: support@journals.uran.ua\n+- Yazilim Parki Bilisim Teknolojileri D.O.R.P. Ltd. Sti. (Türkiye)\rYazılım Parkı is a software development company established in 2013, located in Turkey. We provide online scholarly journal publishing and peer-review (manuscript submission) software fully integrated with Crossref services, technical consultation for inclusion in scientific indexes, JATS XML preparation, and online congress abstract submission and review software solutions. Our clients include, but are not limited to, non-profit organizations, academic societies, research institutes and universities. We provide tailored solutions built on modern technologies, user-friendly design and accessibility, backed by superior knowledge and experience.\nContact To learn more about our services, please visit https://yazilimparki.com.tr or contact us at bilgi@yazilimparki.com.tr.\n+- Additional Sponsors (Central/Eastern Europe)\rAkdema Bilisim Yayincilik ve Dan. Tic. Ltd. Sti. (Türkiye) Albanian Canadian Development Alternative (Albania) Association of Lithuanian Serials (Lithuania) BAYT Bilimsel Arastirmalar Basin Yayin Ltd. (Türkiye) European Scientific Platform (Ukraine) Hiras Software, Education and Consultancy Ltd.
(Türkiye) Institute of Knowledge Management (North Macedonia) Institute of Metallurgy and Ore Beneficiation (Kazakhstan) Laboratory of Intellect, Ltd (Belarus) LIBCOM Piotr Karwasinski (Poland) Library and Information Centre, Hungarian Academy of Sciences (Hungary) LLC Integration Education and Science (Russia) Lucian Blaga Central University Library of Cluj (Romania) National and University Library - St. Klement of Ohrid - Skopje (North Macedonia) National and University Library of Bosnia and Herzegovina (Bosnia and Herzegovina) National Library of Armenia (Armenia) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) State Scientific Institution - Ukrainian Institute of Scientific and Technical (Expertise and Info) (Ukraine) Tubitak Ulakbim DergiPark (Türkiye) Turkish Primary Education Association (Türkiye) University of Belgrade Faculty of Law (Serbia) Latin America and Caribbean +- Acesso Academico (Brazil)\rACESSO ACADÊMICO serves associations, foundations, publishers, universities and public administrations, for-profit or not-for-profit, that use or want to use a software platform to manage their publications or journals, such as Open Journal Systems (OJS). We offer all the services necessary for the setup, hosting, maintenance and dissemination of scientific journal content.\nAs Crossref sponsors we support publishers for the acquisition of the DOI prefix, content registration, and all other Crossref-related services (Similarity Check, Cited-by, etc.).\nMoreover, we host online scientific events including conferences, symposiums, seminars and workshops to facilitate the evaluation of new ideas and new research in an innovative, resourceful and creative setting.\nContact Contact: contato@acessoacademico.com.br\n+- Biteca SAS (Latin America)\rWe are PKP and Crossref sponsors. We offer editorial services and technical support in Spanish for Open Journal Systems, Open Monograph Press, Open Conference Systems and Open Preprints Server. We have more than 15 years of experience providing services to scientific journals. We are experts in XML-JATS markup (for SciELO and PubMed Central), Crossref technical support for the management of DOI (Digital Object Identifier), Reference Linking, Cited-by and Crossmark. We also develop custom plugins for OJS and mobile apps for iOS and Android for journals.\nSomos patrocinadores en PKP y Crossref. Ofrecemos servicio técnico especializado en español para Open Journal Systems, Open Monograph Press, Open Conference Systems y Open Preprints Server. Contamos con más de 15 años de experiencia prestando servicios a revistas científicas. Somos expertos en marcación XML-JATS (Para Scielo y PubMed Central) , soporte técnico Crossref para la gestión de DOI (Digital Object Identifier), Reference Linking, Cited by y Crossmark. También desarrollamos plugins personalizados para OJS y aplicaciones móviles (APPs) para iOS y Android para revistas.\nContact: Carlos Bermúdez\nhttps://www.biteca.com\nAvenida Caracas # 34 – 86 oficinas 401-402\nBogotá – Colombia – Sur América\nColombia\n+- GeniusDesign Marketing Digital e Editora (Brazil)\rGeniusDesign provides hosting, OJS installation/configuration, technical support, and publishing services to independent publishers, universities, non-profit and for-profit institutions.
We also provide digital marketing services to increase traffic and visibility to scientific journals.\nContact: contato@geniusdesign.com.br\ntelegram.me/geniusdesignbrasil\n+- Journals \u0026amp; Authors SAS (Colombia)\rSomos una empresa que quiere ayudar al aumento de la calidad y mejoramiento de la difusión de la literatura científica. Nuestros esfuerzos se concentran en brindar a los editores de revistas y autores de manuscritos, asesorías y servicios calificados para la redacción, edición, publicación, difusión y posicionamiento de los textos científicos. Nos distinguimos por la calidad y puntualidad de nuestros servicios, el profesionalismo, experiencia y buen trato. Nuestro equipo está conformado por profesionales de las áreas de la ingeniería de sistemas, traducción, salud pública, ciencias sociales y humanas, publicidad, bibliotecología, diseño gráfico y web y la administración financiera.\nPropendemos por una difusión del conocimiento en el marco de buenas prácticas editoriales que eviten las violaciones éticas y favorezcan el crecimiento razonable de la literatura especializada.\nEstamos siempre atentos para conocer las necesidades de nuestros clientes y dispuestos a construir relaciones duraderas basadas en la confianza que nos permitan poner en práctica las estrategias más eficaces e innovadoras para su revista o artículo científico. Nuestros mayores clientes son las Universidades por trabajar con publicaciones científicas, sin embargo, no tenemos exclusiones en el momento.\nOur company aims to contribute to enhancing the quality and dissemination of scientific literature. We are committed to provide researchers and journal editors with qualified services and consulting for drafting, editing, publishing, disseminating and positioning their scientific publications. We stand by the quality and timeliness of our services, professionalism, expertise and friendly treatment. Our team is comprised of professionals in the fields of systems engineering, translation, public health, social and human sciences, advertising, library and information science, graphic and web design and financial administration.\nWe promote access to knowledge within the framework of good editorial practices that prevent ethical violations and encourage the reasonable growth of scientific literature. We are always responsive to the needs of our customers and are willing to build lasting relationships with them based on trust, which allow us to implement the most effective and innovative strategies for their journal or papers.\nOur biggest clients are Universities for working with scientific publications, however, we do not have exclusions at the moment.\nContact Operamos desde Medellín, Colombia. Si usted desea conocer más nuestros servicios por favor contáctenos. Escríbanos a info@jasolutions.com.co y con gusto le daremos toda la información que necesite o visítenos en www.jasolutions.com.co.\nWe are headquartered in Medellin, Colombia. If you wish to quote our services, please write to us to info@jasolutions.com.co or visit www.jasolutions.com.co.\n+- High Rate Consulting (Latin America)\rHigh Rate Consulting is a company committed to the dissemination of scientific knowledge. Our services focus on consulting for the creation and maintenance of journal on the OJS platform, edition of individual and collective books, as well as training associated with these processes. We also provide complementary services in terms of web design and social networks. 
We distinguish ourselves by having highly qualified personnel in each of these tasks. We support good editorial practices in favor of the growth of open access scientific publications, compliance with quality standards for indexing and the promotion of the digital image of both organizations and individuals.\nWe work to develop close processes with our clients in order to provide them with the best services and permanent support in the processes. We are currently focused on supporting emerging scientific journals initiatives that require registration with Crossref to promote open access dissemination, especially those located in Latin America and Africa.\nThe journals that are part of Crossref through us receive ongoing information and advice on improving processes to achieve the objectives of indexing and international positioning, as well as advice on the use of tools such as Similarity Check, Crossmark, Cited-by and Reference Linking, to name a few.\nHigh Rate Consulting es una empresa comprometida con la difusión del conocimiento científico. Nuestros servicios se centran en asesorías para la creación y mantenimiento de revistas en la plataforma OJS, edición de libros individuales y colectivos, así como, adiestramiento asociado a dichos procesos. También prestamos servicios complementarios en cuanto a diseño web y redes sociales. Nos distinguimos por contar con personal altamente calificado en cada una de estas labores. Apoyamos las buenas prácticas editoriales en favor del crecimiento de las publicaciones científicas en acceso abierto, cumplimiento de los estándares de calidad para la indexación y la promoción de la imagen digital tanto de organizaciones como de personas.\nTrabajamos por desarrollar procesos de cercanía con nuestros clientes en función de brindarle los mejores servicios y acompañamiento permanente en los procesos. Actualmente nos enfocamos en apoyar iniciativas de revistas emergentes que requieren la inscripción ante Crossref para impulsar la difusión en acceso abierto, sobre todo aquellas ubicadas en Latinoamérica y África.\nLas revistas que forman parte de Crossref a través de nosotros reciben información y asesoría permanente sobre el mejoramiento de los procesos para el logro de los objetivos de indexación y posicionamiento internacional, además de asesoría en el uso de herramientas como Similarity Check, Crossmark, Cited-by y Reference Linking, por nombrar algunas.\nContact Contact/contacto: https://www.highrateco.com/\nContact person/persona contacto: Wileidys Artigas\nEmail/correo: wile@highrateco.com\n+- Hipertexto-Netizen (Latin America)\rWe are Hipertexto – Netizen, a specialized company that deploys tech solutions, media, platforms, and technologies that help our publishers to strengthen the processes related to the generation, transformation, distribution, and delivery of content. We aim to spread knowledge through technology. We have different operations, including specialized metadata systems and platforms, focusing on academic, universities, and STM publishers. We serve more than 12 countries in LatAm, Spain, Brazil, and Portugal.\nSomos Hipertexto - Netizen, una empresa especializada que despliega soluciones tecnológicas, medios, plataformas y tecnologías que ayudan a nuestros editores a fortalecer los procesos relacionados con la generación, transformación, distribución y entrega de contenidos. Nuestro objetivo es difundir el conocimiento a través de la tecnología. 
Contamos con diferentes operaciones, incluyendo plataformas y sistemas de metadatos especializados, con foco en editoriales académicas, universitarias y STM. Atendemos a más de 12 países en LatAm, España, Brasil y Portugal.\nContact Headquarters:\nCOLOMBIA CIO - Centro Internacional de Operaciones\nCalle 32 A No. 19 - 24 Barrio Teusaquillo\nBogotá - Colombia Tel.: +57 601 643 4389\nMÉXICO Ciudad de México, México. Centro de Operaciones México Avenida Desierto de los Leones 4855, oficina 47-3 Colonia Tetelpan, Delegación Álvaro Obregón 01700 Tel. MX: +52 (55) 7827 7068\nWebsite: https://www.hipertexto.com.co/\nEmail: Dirección Científica SIMEH sac@hipertexto.com.co\n+- InfoEduTec (Peru)\rInfoedutec focuses on supporting small publishers, journal editors, research institutions and universities to improve research and publishing management, increase the quality and performance of scholarly journals, and monitor visibility and scientific impact through the use of open source software and best practices in information and knowledge management.\nInfoedutec is a legally constituted company with operations in Peru under the trade name INFOEDUTEC.COM and registered name INFOEDUTEC - INFORMATION EDUCATION TECHNOLOGIES AND PUBLICATION SERVICES E.I.R.L. The main economic activities of the company include - but are not limited to - the following:\nInformation technology and computer service activities Professional, scientific and technical activities Publishing of newspapers, magazines and other periodicals We offer services and products related to IT TECHNOLOGIES AND SOLUTIONS: digital repositories, e-journals, websites, virtual campuses; library systems, and CRIS systems; SCIENTIFIC PUBLICATION: thesis revision, research advising and article copy-editing; EDUCATION AND TRAINING: virtual courses, on-demand workshops and customized in-house programs; INFORMATION MANAGEMENT: journal indexing, XML JATS formatting; bibliometric reports and Crossref services (membership, DOI registration, etc.).\nInfoedutec apoya a pequeños editores, editores de revistas, instituciones de investigación y universidades públicas o privadas. Se centra en contribuir a mejorar la gestión de la investigación y la publicación, aumentar la calidad y el rendimiento de las revistas académicas y monitorear la visibilidad e impacto científico mediante el uso de software de código abierto y las mejores prácticas de gestión de la información y del conocimiento.\nInfoedutec es un empresa legalmente constituida con operaciones en todo el territorio peruano bajo la denominación comercial INFOEDUTEC.COM y denominación registral INFOEDUTEC – INFORMACIÓN EDUCACIÓN TECNOLOGÍAS Y SERVICIOS DE PUBLICACIÓN E.I.R.L.
Entre las principales actividades económicas de la empresa, pero no limitadas a, se encuentran las siguientes:\nActividades de tecnología de la información y de servicios informáticos Actividades profesionales, científicas y técnicas Edición de periódicos, revistas y otras publicaciones periódicas Ofrecemos servicios y productos relacionados con TECNOLOGÍAS Y SOLUCIONES TI: repositorios digitales, revistas electrónicas, páginas web, campus virtuales; gestión de bibliotecas y sistemas CRIS; PUBLICACIÓN CIENTÍFICA: asesoría de tesis, investigación y revisión de artículos; EDUCACIÓN Y CAPACITACIÓN: cursos virtuales, talleres a demanda y programas in-house a medida; GESTIÓN DE INFORMACIÓN: indización de revistas, marcación XML JATS; informes bibliométricos y trámites Crossref (membresía, registro DOI, etc.).\nContact Website: https://www.infoedutec.com/\nPlease email us at the following addresses:\nContact information: contacto@infoedutec.com\nAdministration and payments: administracion@infoedutec.com\nSupport and technical assistance: sistemas@infoedutec.com\n+- OJSBR (Brazil)\rOJSBR - Brazil OJSBR serves independent publishers, non-profit and for-profit institutions. We are specialists in the OJS platform and perform all related technological services. We also provide training for editors and institutions. As Crossref sponsors, we serve for-profit publishers for the acquisition of the DOI prefix, content registration, and all other Crossref-related services.\nContact Telephone/WhatsApp: +55 11 91261-2688 support@ojsbr.com (technical support)\ncontato@ojsbr.com (budget and questions)\nWebsite: https://ojsbr.com.br\n+- Open Journal Systems Chile (Chile)\rOpen Journal Systems Chile - Chile Open Journal Systems Chile es una empresa creada para brindar apoyo al trabajo editorial de los editores en Chile.\nOpen Journal Systems Chile trabaja exclusivamente con Instituciones y Universidades que utilizan una plataforma Online para la difusión de sus contenidos científicos. Open Journal Systems ofrece todos los servicios que son necesarios para la instalación, mantenimiento y difusión del contenido de las revistas científicas: hosting, Crossref (DOI, Similarity Check, Reference Linking, Cited-by, etc.), indexación, instalación y personalización del sistema OJS, etc.\nCon qué tipo de organización acepta / trabaja como Miembro Patrocinado: Región geográfica específica: Chile\nContact Dirección de contacto para consultas: contacto@openjournalsystems.cl\n+- Open Journal Solutions (Brazil)\rOpen Journal Solutions - Brazil Open Journal Solutions provides hosting services, technical support, editorial support, and training for journals in OJS.\nContact Telephone/Telegram/WhatsApp: +55 11 98959-9988\nE-mail: contato@openjournalsolutions.com.br\n+- Universidad Mayor de San Andres (Bolivia)\rUniversidad Mayor de San Andres - Bolivia Universidad Mayor de San Andrés (Bolivia), brinda servicio a instituciones públicas y privadas, con o sin fines de lucro. Ofrecemos todos los servicios necesarios para la instalación, alojamiento, mantenimiento y difusión de contenidos de revistas científicas. Apoyamos a los editores para la adquisición de DOI, registro de contenido y todos los demás servicios relacionados con Crossref. 
Asimismo, ofrecemos apoyo en buenas prácticas de publicación.\nContact Sitio web: https://iiaren.agro.umsa.bo/\nCorreo electrónico: iiaren.agronomia@umsa.bo\n+- Additional Sponsors (Latin America/Caribbean)\rGaloa (Brazil) AmeliCA Conocimiento Abierto (Mexico) Asociacion Uruguaya de Revistas Academicas (AURA) (Uruguay) Associacao Brasileira de Editores Cientificos do Brasil (ABEC) (Brazil) Corporación Ecuatoriana para el Desarrollo de la Investigación y la Academia (CEDIA) (Ecuador) Dossier Soluciones S.A.S (Colombia) Grupo Anltyk S.A. de C.V. (Mexico) Infotegra S.A.S. (Colombia) Lepidus Tecnologia (Brazil) Meta-datos (Mexico) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) OpenCiencia (Guatemala) Paideia Studio (Argentina) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) Scientificomm LLC (Mexico) Telecomexpert (Ecuador) Terceiro Andar International (Brazil) North Africa and Middle East +- Indexall Data LLC (Algeria)\rIndexall Data LLC offers the following services:\nfacilitating content registration with Crossref on behalf of sponsored members; OJS and OMP web hosting for journals and books; and indexing services and scientific IDs for researchers and journals.\nWe work with publishers from Algeria and all African and Arab states who are using OJS for their journals, OMP for their books, or Eprints for their repositories. However, any similar platforms are also welcome.\nWe support the following record types:\njournals and journal articles; conference papers and proceedings; theses and dissertations; books, book chapters and reference works; datasets; grants; peer reviews, preprints, reports and working papers. We can provide support in English, French and Arabic.\nContact Website: https://www.polimpact.com/ You can contact us by email at editor@maspolitiques.com and by telephone at +21 3794222878.\n+- Open Science Community Iraq (OSCI)\rThe Open Science Community Iraq (OSCI) is dedicated to promoting open science practices among Iraqi researchers and academicians. We provide a range of services, including workshops, seminars, and one-on-one support, to foster a culture of open research and collaboration. Our work primarily targets early-career researchers, academic institutions, and research groups, aiming to enhance their understanding and implementation of open science principles.\nContact Website: https://www.osc-iraq.com/\nEmail: info@osc-iraq.com\n+- Additional Sponsors (North Africa/Middle East)\rDar Almandumah Inc.
(Saudi Arabia) Justech (Tunisia) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) Sub-Saharan Africa +- Additional Sponsors (Sub-Saharan Africa)\rKenya Libraries and Information Services Consortium Sabinet Online Ltd (South Africa) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) US and Canada +- Additional Sponsors (US/Canada)\rAltum (USA) Erudit/Coalition Publica (Canada) Longleaf Services Inc (USA) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) Public Knowledge Project (PKP) (Canada) Western Europe +- Association of Lithuanian Serials (Lithuania)\rThe mission of the Association is to use its best endeavours to unite Lithuanian research journal publishers and to promote the development of innovative technologies in publishing to speed up and facilitate scholarly research.\nThe purposes and tasks of the Association’s activities are to:\nPromote innovations in scholarly journal publishing in Lithuania. Collaborate with international organisations and initiatives to promote Lithuanian journals in the academic community worldwide. Develop and establish ethical principles in scholarly journal publishing. Provide consultancy and training for the members of the Association. Members: https://serials.lt/lithuanian-journal-publishers/\nContact Website: https://serials.lt/\nFor more information: info@serials.lt\n+- Ciencia Avui (Spain)\rCiència Avui es una empresa catalana que trabaja para facilitar el registro de metadatos mediante el OJS o la interfaz estándar Crossref. El registro de metadatos permite aumentar la visibilidad del artículo y el número de referencias, obtener subvenciones y aumentar el prestigio de la publicación. Colaboramos con los principales institutos de investigación, universidades y editoriales en España y Europa. También, ofrecemos los servicios relacionados con la plataforma OJS y bases de datos científicas (WoS, Scopus).\nContact Para saber más, por favor visite nuestra página web https://ciencia-avui.org/contactes/ o escriba directamente al correo electrónico hola@ciencia-avui.org.\n+- Cultural Hosting (Spain)\rCultural Hosting offers web hosting services for cultural heritage organizations, including scientific and academic organizations. To offer a valuable service, we have become a Crossref Sponsor to help small organizations with DOI management.\nThe company works with associations, foundations, publishers, universities and public administrations that use or want to use a software platform to manage their publications or journals, such as Open Journal Systems (OJS).\nAs a Sponsor, we only provide services to entities based in the European Union or in any country in the Americas, in English or Spanish (Portuguese available, but less fluent).\nCultural Hosting ofrece servicios de alojamiento web para instituciones culturales, incluyendo organizaciones académicas y científicas.
Para ofrecer un servicio más completo, hemos alcanzado un acuerdo para ser una Organización Patrocinadora de Crossref y así poder ayudar a las organizaciones más pequeñas con la gestión DOI.\nLa empresa trabaja con asociaciones, fundaciones, editoriales, universidades y administraciones públicas que utilizan o desean utilizar una plataforma de software para administrar sus publicaciones o revistas, como OJS (OpenJournal Systems).\nComo Organización Patrocinadora, solo brindamos los servicios a entidades con sede en la Unión Europea o en cualquier país de América, en inglés o español (portugués disponible, pero con menos fluidez).\nContact If you want more information about our services, please write to info@culturalhosting.com or visit https://culturalhosting.com/\nSi desea información sobre los servicios, escriba a info@culturalhosting.com o visite https://culturalhosting.com\n+- Ediser/mEDRA (Italy)\rEdiser, the service company of the Italian Publishers Association, runs the mEDRA DOI Registration Agency.\nmEDRA has been active since 2004 and provides DOI registration and related services, training and support to over 800 publishers, academic institutions, scientific societies, and research centres in Italy and internationally. mEDRA operates both directly and through local partners, such as MVB (a subsidiary of the German Booktrade Association) and Sinaweb (a specialised service provider for academic publishing in Iran).\nThe mEDRA system can be used by small and large organisations, for commercial and open access publications, whether or not they have technical skills:\nUser-friendly web editors for journal articles, issues, titles, books and chapters, and to deposit citations and references. The web editors are available in English, Italian, and German. XML upload interface and web services for all record types, citations, and references. The XML schemas supported are ONIX for DOI, JATS, and BITS (headers and fulltext) OJS plug-in (mEDRA import/export plug-in) Easy monitoring area The mEDRA team is available to support and train customers individually in Italian, English, French, and German.\nmEDRA also acts as a Crossref Sponsoring Organisation and, thanks to the interoperability layer, the mEDRA system can be used as an interface to the Crossref system.\nCrossref services offered include content registration, cross-linking, Cited-by, and Similarity Check.\nContact Technical support: support@medra.org\nSales support: sales@medra.org\nWebsite: https://www.medra.org/\n+- Open Academia (Sweden)\rOpen Academia is partnering with Public Knowledge Project (PKP) to provide a complete production and publication service that includes processing manuscripts through copyediting, typesetting, proofreading (liaising with authors), online publication, indexing and archiving. The Open Academia staff also serves as the peer review support contact for all system users: editors, section editors, authors, reviewers and readers.\nBasically, we offer support on all core elements of the publishing process. We have clients from all around the globe across all academic disciplines, and support English and Scandinavian languages.\nOpen Academia also offers individual services ranging from editorial support and production to OJS 3 website design and maintenance.
For OJS journals that have recently upgraded to OJS 3, Open Academia can provide training sessions for editorial staff as well as website makeovers.\nWe form partnerships with academic societies and we also support smaller presses looking to publish open access journals.\nRead more at https://openacademia.net.\nMust members use a specific platform? Open Journal Systems (all versions)\nOrganization type (gov’t, university, not for profit) – all of the above! We typically work with academic societies and universities but welcome work with gov’t and non-profit organizations as well.\nContent type – Open Access scholarly journals spanning all sizes and academic disciplines.\nContact info@openacademia.net\n+- Stichting OpenAccess (Netherlands)\rStichting OpenAccess (SOAP) offers affordable and fast solutions for the dissemination of research content in the arts, architecture, built environment and design domain. We host open access e-print, pre-print, conference proceedings, book and journal archives.\nThe definition of \u0026lsquo;research content\u0026rsquo; includes \u0026lsquo;art\u0026rsquo;, \u0026lsquo;design\u0026rsquo;, \u0026lsquo;engineering\u0026rsquo; or \u0026lsquo;planning\u0026rsquo;, as long as that design, engineering or planning contributes to the body of knowledge in their respective fields.\nThe open access label we use for all submitted, accepted and published publications that we handle is CC-BY-4.0. We don\u0026rsquo;t deal with delayed or limited open access content.\nSOAP is also a proud Crossref sponsor. Sponsors provide a solution for small organizations that want to register metadata for their research and participate in Crossref but are not able to join directly due to financial, administrative, or technical barriers.\nSOAP is based in The Netherlands but works internationally. For more information go to https://www.openaccess.ac.\n+- Additional Sponsors (Western Europe)\rGudinfo SL (Spain) Lund University Library (Sweden) National Library of Sweden (Sweden) Open Access Scholarly Publishers Association (OASPA) (based in The Netherlands but work with organizations globally) OpenJournals (The Netherlands) Public Knowledge Project (PKP) (based in Canada but work with organizations globally) Prepress Projects, Ltd. (United Kingdom) The Federation of Finnish Learned Societies (Finland) Thoth (United Kingdom but work with organizations globally) Ubiquity (United Kingdom) University of Zurich (Switzerland) Xercode Media Software, S.L.
(Spain) For more information please contact our membership team.\n", "headings": ["Why join via a Sponsor?","How to join via a Sponsor","Find a Sponsor","Asia Pacific","Central and South Asia","Central/Eastern Europe","Latin America/Caribbean","North Africa/Middle East","Sub-Saharan Africa","US/Canada","Western Europe","Asia Pacific","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Central and South Asia","Contact","Contact","Связаться","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Central and Eastern Europe","Contact","Contact","Contact","Contact","Контакты","Contact","Контакты","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Contact","Latin America and Caribbean","Contact","Contact","Contact","Contact","Contact","OJSBR - Brazil","Contact","Open Journal Systems Chile - Chile","Contact","Open Journal Solutions - Brazil","Contact","Universidad Mayor de San Andres - Bolivia","Contact","North Africa and Middle East","Contact","Contact","Sub-Saharan Africa","US and Canada","Western Europe","Contact","Contact","Contact","Contact","Contact"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/crossmark/", "title": "Crossmark", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/madhura-amdekar/", "title": "Madhura Amdekar", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/research-integrity-roundtable-2024/", "title": "Research Integrity Roundtable 2024", "subtitle":"", "rank": 1, "lastmod": "2024-11-15", "lastmod_ts": 1731628800, "section": "Blog", "tags": [], "description": "For the third year in a row, Crossref hosted a roundtable on research integrity prior to the Frankfurt book fair. This year the event looked at Crossmark, our tool to display retractions and other post-publication updates to readers.\nSince the start of 2024, we have been carrying out a consultation on Crossmark, gathering feedback and input from a range of members. The roundtable discussion was a chance to check and refine some of the conclusions we’ve come to, and gather more suggestions on the way forward.", "content": "For the third year in a row, Crossref hosted a roundtable on research integrity prior to the Frankfurt book fair. This year the event looked at Crossmark, our tool to display retractions and other post-publication updates to readers.\nSince the start of 2024, we have been carrying out a consultation on Crossmark, gathering feedback and input from a range of members. The roundtable discussion was a chance to check and refine some of the conclusions we’ve come to, and gather more suggestions on the way forward. As in previous years, we were able to include a range of organisations, which led to lively and interesting discussions. 
See below for the full participant list.\nCrossmark feedback We started by presenting Crossmark and a summary of the consultation process. There are a number of areas where we have learned more about how the community operates or found that Crossmark needs to adapt. These include:\nImplementation: Our members have struggled to implement Crossmark and uptake is low. At the same time, in many organisations the workflows for handling retractions are not well-defined because they are rarely used, if ever. The responsibility for updating Crossref metadata can be unclear and this may be a factor in the low uptake.\nEducation: There are different levels of understanding about how to handle retractions. Some members are very defensive when asked about retractions, others state they will never make updates to published works. How can we have a constructive conversation where the value of communicating updates appropriately is recognised?\nCommunity engagement: Given the different scales, locations, disciplines, and technologies used by our members, it looks like one size will not fit all when it comes to updates. How can we get continual, representative feedback on new tools and processes?\nMetadata assertions: Crossmark allows the deposit of metadata using custom field names, however this metadata seems to have low usefulness and is not highly valued by the community. Should we continue to collect it? Can we make some of the most-used field names part of our standard schema?\nChanging the Crossmark UI: Although we didn’t specifically ask about it during the consultation, the look of the Crossref logo often came up, and concern that it is not recognised and not well-used. Can we change the look and behaviour so that it has more impact?\nNISO Recommendations Patrick Hargitt represented the NISO group on Communication of Retractions, Removals, and Expressions of Concern (CREC). The group’s recommendations were published earlier this year and cover how retractions are communicated. CREC arose from an earlier project, IRSRS. A large part of the motivation is that retracted works continued to be cited, with citing authors apparently unaware of the retraction. Patrick presented the CREC recommendations, which cover:\nMetadata receipt, display, and distribution, Which metadata elements to communicate, How to implement the recommendations, Discussion of some special cases, Key stakeholders and their responsibilities. The two presentations prompted discussion, which was taken into the first of two workshops.\nFirst workshop: Improving collection of retractions and Crossmark The first workshop looked at proposed changes to Crossmark and how to encourage more members to deposit their retractions, corrections, and other post-publication updates. Several important themes emerged.\nFirst, the question of whose responsibility it should be to provide metadata on retractions and similar updates. Crossref has a responsibility to work with the community to obtain high quality and complete metadata; publishers should take responsibility for handling issues of research integrity and reporting them to relevant downstream services, like Crossref; and platforms need to provide tools that allow easy reporting of retractions.\nThe value of Crossmark appearing in PDFs was reiterated. The fact that a PDF can be downloaded, and years later there is a way to tell whether it has been retracted or not is highly valued. 
There was also the suggestion that the Crossmark logo on web pages can indicate a change before it has been clicked. This is something that we have been considering at Crossref and it was useful to have the idea reinforced. Another suggestion was that a browser plugin would make a good complement to Crossmark.\nImplementation issues with Crossmark were raised, including that it’s difficult to validate whether a specific implementation is complete. There are a number of different changes (to metadata deposit and content, and websites) that need to work together to have Crossmark fully functional. There were several questions and a discussion about Retraction Watch data. Some were about understanding its collection and validation. A number of participants are actively using the data and it was great to see the variety of applications.\nSecond workshop: Community use of retraction metadata The second workshop focused on a broader set of downstream organisations that might want to make use of retraction metadata. We looked at stakeholders and their needs, and attempted to match them up with existing tools. Several gaps were identified as a result, which may provide opportunities for new services or collaborations to fill them.\nWe identified a number of tools available for publishers, editorial systems, metadata researchers, and readers. A good example is reference managers, many of which are now highlighting retracted works to authors. This can help to reduce the number of retracted works being cited. Publishing platforms are also providing support to editors, using tools that include retraction metadata.\nSome of the stakeholders identified have limited tools for identifying retractions that are relevant to them. These include funders, archives and repositories, journalists, and institutions.\nOften, there are pathways for retraction data to be communicated but they are not being sufficiently used. There needs to be a concerted effort to improve the quality of retraction metadata for tools to function better. For example, a second author on a paper might not know that a correction or retraction is planned for their article. If their email or ORCID isn’t included in the metadata, an alerting tool wouldn’t be able to let them know. A similar argument can be made for institutions or funders if they are not well-identified in the metadata.\nThe question of standardisation of metadata was raised. It seems too early to implement a full set of standards at the moment. CREC and similar initiatives have documented and accommodated for a range of practices while providing guidance and principles to work towards. More discussion is needed in the community to work out paths that could be applied across the broad spectrum of scholarly communication.\nConclusion The event was very valuable in bringing up a range of topics related to retraction and communication of post-publication changes to scholarly works. We are grateful to all of the participants for their contributions and sharing their diverse experience and opinions with us.\nResearch integrity is an area of flux, with significant changes over the past few years. While there has been progress, there remain gaps in metadata and tools to communicate retractions. This is something that Crossref will continue to contribute to, and Crossmark clearly still has a role to play.\nSome of the ideas and suggestions from the discussion can be implemented in the near future. Others need further development, and we will continue to engage the community. 
Reading this, there may be topics where you feel you have a role to play. We are keen to partner with other organisations in this space as we continue to improve the transparency and communication of metadata for post-publication updates.\nParticipants Many thanks to the participants. Here is the full list of those that attended:\nName Role Organisation Aaron Wood Head, Product \u0026amp; Content Management American Psychological Association Adya Misra Associate Director, Research Integrity Sage Bianca Kramer Sesame Open Science Constanze Schelhorn Head of Indexing MDPI Guillaume Cabanac Full Professor University of Toulouse Hong Zhou Director of AI Product Wiley Jennifer Wright Head of Publication Ethics and Research Integrity Cambridge University Press Johanssen Obanda Community Engagement Manager Crossref Joris van Rossum Program Director STM Solutions Kathryn Weber-Boer Data \u0026amp; Analytics Digital Science Kornelia Korzec Director of Community Crossref Kruna Vukmirovic Publisher - Journals The Institution of Engineering and Technology Lena Stoll Product Manager Crossref Leslie McIntosh VP, Research Integrity Digital Science Liying Yang Professor CAS Library Luis Montilla Technical Community Manager Crossref Madhura Amdekar Community Engagement Manager Crossref Martyn Rittman Program Lead Crossref Maryna Kovalyova Member Experience Manager Crossref Mina Roussenova Project Manager, Strategic Projects Karger Osnat Vilenchik VP Content Operations Ex Libris, part of Clarivate Patrick Hargitt Senior Director of Product Management Atypon/Wiley Paul Davis Tech Support \u0026amp; R\u0026amp;D Analyst Crossref Sami Benchekroun CEO Morressier Scott Delman Director of Publications Association for Computing Machinery (ACM) Shilpi Mehra Head, Research Integrity \u0026amp; Paperpal Preflight Cactus Communications Sichao Tong Chinese Academy of Sciences, Library ", "headings": ["Crossmark feedback","NISO Recommendations","First workshop: Improving collection of retractions and Crossmark","Second workshop: Community use of retraction metadata","Conclusion","Participants"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-good-is-your-matching/", "title": "How good is your matching?", "subtitle":"", "rank": 1, "lastmod": "2024-11-06", "lastmod_ts": 1730851200, "section": "Blog", "tags": [], "description": "https://0-doi-org.libus.csd.mu.edu/10.13003/ief7aibi\nIn our previous blog post in this series, we explained why no metadata matching strategy can return perfect results. Thankfully, however, this does not mean that it\u0026rsquo;s impossible to know anything about the quality of matching. Indeed, we can (and should!) measure how close (or far) we are from achieving perfection with our matching. Read on to learn how this can be done!\nHow about we start with a quiz?", "content": " https://0-doi-org.libus.csd.mu.edu/10.13003/ief7aibi\nIn our previous blog post in this series, we explained why no metadata matching strategy can return perfect results. Thankfully, however, this does not mean that it\u0026rsquo;s impossible to know anything about the quality of matching. Indeed, we can (and should!) measure how close (or far) we are from achieving perfection with our matching. Read on to learn how this can be done!\nHow about we start with a quiz? Imagine a database of scholarly metadata that needs to be enriched with identifiers, such as ORCIDs or ROR IDs. Hopefully, by this point in our series this is recognizable as a classic matching problem.
In searching for a solution, you identify an externally-developed matching tool that makes one of the below claims. Which of the following would demonstrate satisfactory performance?\n1. It is a cutting-edge, state-of-the-art, intelligent-as-they-come, bullet-proof technology! All the big players are using it. You won\u0026rsquo;t find anything better! 2. The tool was tested on the metadata of 10 articles we authored, and many identifiers were matched. 3. The quality of our matching is 98%.\nOkay, okay, trick question. The correct answer here is to opt for secret answer #4: \u0026ldquo;I wouldn\u0026rsquo;t be satisfied by any of these claims!\u0026rdquo; Let\u0026rsquo;s dig in a bit more to why this is the correct response.\nThe importance of the evaluation Before we decide to integrate a matching strategy, it is important to understand as much as possible about how it will perform. Whether it is used in a semi- or fully automated fashion, metadata matching will result in the creation of new relationships between things like works, authors, funding sources, and institutions. Those relationships will then, in turn, be used by the consumers of this metadata to guide their understanding and perhaps even to make important decisions about those same entities. As organisations providing scholarly infrastructure, we must therefore take it as our paramount responsibility to understand any caveats or shortcomings of the scholarly metadata we make available, including that resulting from matching.\nProper evaluation is what allows us to do this, as it is impossible to know how well a given matching strategy will perform in its absence. This is true no matter how simple or complex a matching strategy may seem. Complex methods can be tailored to data with specific characteristics and might fail when faced with something different from this. Simple methods might only be appropriate for clean metadata or a narrow set of use cases.\nBeyond complexity, matching strategies themselves vary widely in character, inheriting biases from their design, training data, or how a problem has been formulated. Some prioritise avoiding false negatives, while others focus on minimising false positives. Even a generally high-performing strategy might not be perfectly aligned with your specific needs or data. In some cases, the task itself might also be too challenging, or the available metadata too noisy, for any matching strategy to perform adequately.\nEvaluation is, again, how we understand these nuances and make informed decisions about whether to implement matching or avoid it altogether. By now, it should also be clear that the notion \u0026ldquo;we don\u0026rsquo;t need to evaluate\u0026rdquo; is far from ideal! Given its importance, let\u0026rsquo;s explore how evaluation is actually done.\nEvaluation process In general, a proper evaluation procedure involves the following steps:\n1. Preparation of an evaluation dataset containing many examples of matching inputs and the corresponding expected outputs. 2. Applying the strategy to all inputs from the dataset and recording the responses. 3. Comparing the expected outputs with the outputs from the strategy. 4. Converting the results of the above comparison into evaluation metrics.
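To make steps 2-4 concrete, here is a minimal sketch of such an evaluation harness in Python. It is illustrative only: the dataset, the strategies, and all names are hypothetical and not part of any Crossref tool. A strategy is assumed to be a function that takes an input (for example, an affiliation string) and returns an identifier, or None when it declines to match; the harness compares each response with the expected output and converts the comparison into the metrics discussed below.

```python
# Minimal evaluation harness (illustrative sketch, hypothetical names):
# apply a strategy to every input, compare against the expected outputs,
# and convert the comparison into accuracy, precision, and recall.

def evaluate(strategy, dataset):
    """dataset: list of (input, expected) pairs, where expected is an
    identifier string or None when no match should be returned."""
    correct = 0           # examples where the output equals the expected output
    true_positives = 0    # returned matches that equal the expected identifier
    returned = 0          # total matches returned by the strategy
    expected_matches = 0  # total true relationships in the dataset

    for item, expected in dataset:
        predicted = strategy(item)
        if predicted == expected:
            correct += 1
        if expected is not None:
            expected_matches += 1
        if predicted is not None:
            returned += 1
            if predicted == expected:
                true_positives += 1

    return {
        "accuracy": correct / len(dataset),
        "precision": true_positives / returned if returned else None,
        "recall": true_positives / expected_matches if expected_matches else None,
    }

# The toy dataset and the two strategies discussed in the next section.
dataset = [("string 1", "ID 1"), ("string 2", "ID 2"), ("string 3", None)]
strategy_1 = {"string 1": "ID 1", "string 2": "ID 3", "string 3": None}.get
strategy_2 = {"string 1": "ID 1", "string 2": None, "string 3": None}.get

print(evaluate(strategy_1, dataset))  # accuracy 0.67, precision 0.5, recall 0.5
print(evaluate(strategy_2, dataset))  # accuracy 0.67, precision 1.0, recall 0.5
```

With a representative, appropriately sized dataset in place of the toy example above, the same loop produces the accuracy, precision, and recall figures discussed in the rest of this post.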
From this accounting, we can see that there are two primary components of the evaluation process: an evaluation dataset and metrics.\nEvaluation dataset It\u0026rsquo;s useful to conceive of an evaluation dataset as the specification for an ideal matching strategy, describing what would be returned from our forever-elusive perfect matching. When creating such a dataset, what this means in practice is that it should contain a number of real-world example inputs, along with the corresponding ideal or expected outputs, and that all data should be in the same format as the strategy is expected to process. The outputs should themselves also conform to the strategy\u0026rsquo;s overall requirements, for example, by being consistent with its cardinality, meaning whether zero, one, or multiple matches should be returned and under what circumstances. In terms of size, it\u0026rsquo;s generally useful to calculate the ideal number of evaluation examples using a sample size calculator or using standardised measures, but as a quick rule of thumb: fewer than 100 examples is probably insufficient, while more than 1,000 or 2,000 is generally acceptable.\nIt is also important that the evaluation dataset be representative of the data to be matched in order to ensure reliable results. Using unrepresentative data, even if convenient, can lead to biased or misleading evaluations. For example, if matching affiliations from various journals, building an evaluation dataset solely from one journal that already assigns ROR IDs to authors\u0026rsquo; affiliations might be tempting. The data, having already been annotated, allow us to avoid the tedious work of labelling, and we might even know that it is produced by a high-quality source. This is still, unfortunately, a flawed approach. In practice, such datasets are unlikely to represent the entire range of affiliations to be matched, potentially leading to a significant discrepancy between the evaluated quality and the actual performance of the matching strategy when applied to the full dataset. To assess a matching strategy\u0026rsquo;s effectiveness, we have to resist shortcuts and instead do our best to create truly representative evaluation datasets to be confident that we\u0026rsquo;ve accurately measured their performance.\nEvaluation metrics Evaluation metrics are what allow us to summarise the results of the evaluation into a single number. Metrics give us a quick way to get an estimation of how close the strategy was to achieving perfect results. They are also useful if we want to compare different strategies with each other or decide whether the strategy is sufficient for our use case, removing the need to compare countless evaluation examples from different strategies against one another.\nThe simplest metric is accuracy, which can be calculated as the fraction of the dataset examples that were matched correctly. While a commonsense benchmark, accuracy can be misleading, and we generally do not recommend using it. To understand why, let\u0026rsquo;s consider the following small dataset and the responses from two strategies:\nInput | Expected output | Strategy 1 | Strategy 2\nstring 1 | ID 1 | ID 1 | ID 1\nstring 2 | ID 2 | ID 3 | Empty output\nstring 3 | Empty output | Empty output | Empty output\nBoth strategies achieved the same accuracy, 0.67, making one mistake each on the second affiliation string. However, a closer examination reveals that these error types are distinct.
The first strategy matched to an incorrect identifier, while the second refused to return any value, illustrating the limitation of accuracy as a measure: it generally fails to capture important nuances in strategy behaviour. In our example, the first strategy appears more permissive, returning matches even in unclear circumstances, while the second is more conservative, withholding them when uncertain. Although using such a small dataset would preclude drawing any definitive conclusions, it highlights how relying on accuracy alone can obscure differences in performance.\nFor evaluating matching strategies, we instead recommend using two metrics: precision and recall. To recap from our previous blog post:\nPrecision is calculated as the number of correctly matched relationships resulting from a strategy, divided by the total number of matched relationships. It can also be interpreted as the probability that a match is correct. Low precision indicates a high rate of false positives, which are incorrect relationships created by the strategy.\nRecall is calculated as the number of correctly matched relationships resulting from a strategy, divided by the number of true (expected) relationships. It can also be interpreted as the probability that a true (correct) relationship will be created by the strategy. Low recall means a high rate of false negatives, which are relationships that should have been created by the strategy but were not.\nApplying these measures to our prior example, the strategies achieved the following results:\nStrategy 1: accuracy 0.67, precision 0.5, recall 0.5\nStrategy 2: accuracy 0.67, precision 1.0, recall 0.5\nAs we can see, while both strategies have the same accuracy, using precision and recall better describes the difference between the two sets of results. Strategy 1\u0026rsquo;s lower precision indicates it made false positive matches, while Strategy 2\u0026rsquo;s perfect precision shows that it made none. The identical recall scores show both identified half of the possible matches.\nOf course, results calculated using such a small dataset are not very meaningful. If we obtained these scores from a large, representative evaluation dataset, it would indicate to us that Strategy 1 risks introducing many incorrect relationships, while Strategy 2 would be unlikely to do so. In both cases, we would still expect approximately half of the possible relationships to be missing from the strategies\u0026rsquo; outputs.\nWhich one is more important to prioritise, precision or recall? It depends on the use case. As a general rule, if you want to use the strategy in a fully automated way, without any form of manual review or correction of the results, we recommend paying more attention to precision. Privileging precision will allow you to better control the number of incorrect relationships added to your data. If you want to use the strategy in a semi-automated fashion, where there is manual examination of the results and a chance to correct them, pay more attention to recall. Doing so will guarantee that enough options are presented during the manual review stage and fewer relationships will be missed as a result.\nTo get a more balanced estimation of performance, we can also consider both precision and recall at the same time using a measure called F-score. F-score combines precision and recall into a single number, with variable weight given to either aspect.
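As general background (the formula below is the standard weighted F-score and is not spelled out in the post itself), the score can be written as a harmonic mean of precision (P) and recall (R), where the parameter beta controls the relative weight of recall versus precision:

```latex
% F-beta score: beta < 1 weights precision more heavily, beta > 1 weights recall more heavily.
F_\beta = (1 + \beta^2) \cdot \frac{P \cdot R}{\beta^2 \cdot P + R}
% For example, for Strategy 2 above (P = 1.0, R = 0.5):
% F_1 \approx 0.67, \quad F_{0.5} \approx 0.83, \quad F_2 \approx 0.56
```

Setting beta to 0.5, 1, or 2 gives the three commonly used variants described next.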
There are three commonly used types, each calculated as the weighted harmonic mean of precision and recall:\nF0.5: Precision is weighted more heavily. It can be understood as a score that is 50% more sensitive to precision than recall. A high F0.5 score indicates a measure of performance that minimises false positives. F1: Equal weight is given to both precision and recall. It can be interpreted as the most balanced score in this set. High F1 indicates good overall performance, with both false positives and false negatives being minimised equally. F2: Recall is weighted more heavily. It can be understood as a score that is 50% more sensitive to recall than precision. A high F2 score indicates a measure of performance where false negatives are minimised. Each of these variants allows for fine-tuning the evaluation metric to align with your expectations for a specific matching task. Choose whichever reflects the relative importance of precision versus recall for your use case.\nTo summarise, to avoid falling prey to misleading sales pitches or silly quizzes, it is important to have a good understanding of the performance of any strategies you are building or integrating. With thorough evaluation, including a representative dataset and carefully considered metrics, we can estimate the quality of matching and, by extension, its resulting relationships.\nNow that we\u0026rsquo;ve covered how to evaluate effectively, we can move on to some other aspects of metadata matching. Our next blog post will take a final, more holistic view of matching, exploring some complementary considerations to all of the preceding. Stay tuned for more!\n", "headings": ["The importance of the evaluation","Evaluation process","Evaluation dataset","Evaluation metrics"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/crossref-annual-meeting/2024-annual-meeting/", "title": "Crossref annual meeting and board election 2024", "subtitle":"", "rank": 1, "lastmod": "2024-10-29", "lastmod_ts": 1730160000, "section": "Crossref Annual Meeting", "tags": [], "description": "#Crossref2024 online, 29 October 2024 Our annual meeting, #Crossref2024, was held online on 29 October 2024 starting at 8:00 AM UTC to 18:30 PM UTC (universal coordinated time). We invited all our members from 170+ countries, and everyone in our community, to hear the results of our board election and team updates.\nPlease see information from #Crossref2024 below, and cite the outputs as `#Crossref2024 Annual Meeting and Board Election, 29 October 2024 retrieved [date], https://doi.", "content": "#Crossref2024 online, 29 October 2024 Our annual meeting, #Crossref2024, was held online on 29 October 2024 starting at 8:00 AM UTC to 18:30 PM UTC (universal coordinated time). 
We invited all our members from 170+ countries, and everyone in our community, to hear the results of our board election and team updates.\nPlease see information from #Crossref2024 below, and cite the outputs as `#Crossref2024 Annual Meeting and Board Election, 29 October 2024 retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/1KJ1GBDA9B`:\nIf you attended any portion of the meeting, please take our survey to help inform future events.\nSession I\nTime Topic 0:00 Welcome \u0026amp; Crossref updates 1:20 Strategic programs \u0026amp; annual meeting 31:49 Demos 59:08 Updates from the Community I 1:01:12 - Michael Parkin, EMBL-EBI - [Slides] 1:09:14 - Hans de Jonge, Dutch Research Council NWO - [Slides] 1:23:22 - Fred Atherden, eLife - [Slides] 1:32:02 - Brietta Pike, CSIRO - [Slides] 1:54:12 Panel discussion - Opportunities and challenges of the open scholarly infrastructure 3:10:37 Reflections break-outs (ISR, RCFS, Research Nexus, Reflections) Slides Session II\nTime Topic 0:00 Welcome and introduction 1:38 Beyond the basics: Crossref API Workshop 25:08 Metadata Schema 56:10 Resourcing Crossref for Future Sustainability (RCFS) 1:43:30 The state of Crossref 2:13 Board Election 2:32 Updates from the Community II 2:35:16 - Alice Wise, CLOCKSS - [Slides] 2:48:03 - Mark Williams, Sciety - [Slides] 2:58:34 - Arianna Garcia, AmeliCA/Redalyc - [Slides] 3:27:00 Reflections break-outs (ISR, RCFS, Research Nexus, Reflections) 3:32:21 Closing Remarks Posters\nSlide deck\nThe annual meeting archive Browse our archive of annual meetings with agendas and links to previous presentations from 2001 through 2015. It\u0026rsquo;s a real trip down memory lane!\nPlease contact us with any questions.\n", "headings": ["#Crossref2024 online, 29 October 2024","The annual meeting archive"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/amanda-bartell/", "title": "Amanda Bartell", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/fees/", "title": "Fees", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/ryan-mcfall/", "title": "Ryan McFall", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/update-rcfs/", "title": "Update on the Resourcing Crossref for Future Sustainability research", "subtitle":"", "rank": 1, "lastmod": "2024-10-28", "lastmod_ts": 1730073600, "section": "Blog", "tags": [], "description": "We’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. 
This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes.\nCrossref is in a good financial position with our current fees, which haven’t increased in 20 years. This project is seeking to future-proof our fees by:\nMaking fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above.", "content": "We’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes.\nCrossref is in a good financial position with our current fees, which haven’t increased in 20 years. This project is seeking to future-proof our fees by:\nMaking fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above. This is an update on the research and discussions that have been underway with our Membership \u0026amp; Fees Committee and our Board, and what we’ve learned so far in each of these areas.\nGoal 1: More equitable fees. To ensure our fees going into the future are more equitable, we’re carrying out two parallel projects: evaluation of the lowest membership tier, and the review of the basis for deciding the membership tiers and distribution of membership across them.\nProject 1: Evaluate the lowest membership tier and propose a more equitable pricing structure. All Crossref members pay an annual membership fee. These fees are tiered, and different members pay a different fee depending on the annual publishing revenue that their organisation receives (or publishing expenses if they don’t receive any publishing revenue).\nWe entered into this project recognising that we have too many membership tiers and the definition we use to size members is not consistent and can be confusing (e.g. different basis for funders than other organisations, and both are different still from subscribers to our Metadata Plus service). The idea of the membership tiers was to use publishing revenue as a proxy for “ability to pay”. We really want to develop proposals for a more equitable pricing structure. However we don’t know enough about our members’ capacity to pay to be able to model an alternative approach.\nOur current lowest fee tier is $275 (USD) for any organisation with annual publishing revenue (or publishing expenses where the organisation doesn’t receive publishing revenue) of $0 to $1 million, and this is the tier where we focus our attention in our first project of the RCFS program. The difference between an organisation with revenue or expenses of US$0, and an organisation with revenue or expenses of US$1 million, is huge. Hardly any new members have joined in any other tier in the past several years. Of the 21,000 active members, more than 20,000 fall into the US$275 tier - either directly (as an independent member) or indirectly (through a sponsor, where their fees would be lower). 
A fee structure that would fit better with the realities of our community might entail breaking our current $275 fee tier down into two or more, more granular tiers.\nAt the moment, the majority of Crossref’s revenues come from the bottom membership tiers; 65% of membership revenues come from organisations in the $275 tier. We also know that many of those members (86%) are paying more in membership dues than in content registration, whereas other members have the inverse relationship between annual dues and content registration. Overall, the members in the $275 tier contributed 34% of Crossref’s revenue last year, and the members in the \u0026gt;$50mln tier contributed 29%.\nMembers’ survey Between April and May this year, we surveyed all independent members in the $275 tier. We asked questions about their operating size, how they’re funded, and how Crossref’s fees affect them. At the time of the survey, there were 8,027 members in this category. We received 1,054 responses; with a 13% response rate and broad representation globally, we are confident in the sample size. One-third of respondents said they were part of a larger organisation (such as a department or a library in a research institution).\nChart 1: Organisation revenue or funding The majority of respondents in this category (65%) have annual revenue or expense of less than 100,000 USD, with 48% operating with less than 10,000 USD.\nChart 2: Sources of funding When asked about the sources of funding (as an indicator of how stable these organisations might be and how readily accessible their funding is), the most frequent answer was public or government funding, and then article processing charges. If organisations relied on two sources of funding, the most common combination was public funding and article processing charges, and it was relatively rare for these organisations to have multiple sources of funding.\nChart 3: What percentage of expenses do you spend on Crossref fees?\nThe majority (61%) of respondents spend less than 5% of their expenses on Crossref fees. However, we have also learnt that for some volunteer-run publications, Crossref fees might be some of the only expenses they incur. Interestingly, the percentage of expenses spent on Crossref is fairly consistently spread across the continents.\nProject 2: Review the basis and distribution of membership tiers This project examines options for how we define the capacity to pay, how members are distributed across tiers, and the right levels of member fees.\nThere is currently a range of prices for our annual fees, based on an organisation\u0026rsquo;s ability to pay. We have used the metric of annual publishing expense or revenue as an indicator of that ability, but in some cases it doesn’t apply. As per our fee principles, we have not differentiated between organisation types. Nonprofit and commercial entities pay the same price (caveat: research funders still have a separate fee schedule, but that was intended to be temporary).\nWe conducted a review of other annual fee models to benchmark our approach against six like-minded organisations working in the context of scholarly communications and infrastructure. We looked at whether these organisations based their fees on one or more of the following:\nVolume: e.g., research output, # of journals Budget: e.g., total annual revenue or expenses Relevant budget: e.g. publishing revenue Organisation type: e.g. 
variance in fee based on publisher, institution, or funder Country-level economic data: e.g., discounting based on World Bank classification, discounting based on purchasing power calculation. Chart 4: Annual fee schedules comparisons between Crossref and CORE, DOAJ, Dryad, OA Switch-board, OpenCitations and ORCID.\nThere are three consistent themes among our peers: the total annual revenue and volume levels are the most common basis for membership fees among other organisations, and almost all offer discounted fees to accommodate country-based economic circumstances, utilising World Bank’s data (this is currently achieved at Crossref via the GEM program, which we have full intention of incorporating into our future fees whatever other decisions we might take). Only one other organisation uses publishing revenue or expenses as a basis for annual fees, while the potentially more transparent and less ambiguous data point of the total revenue factors in three other annual fee models.\nFor subscribers to our Metadata Plus service, the fee tier is selected based on whichever is the higher between their total annual revenue (including earned and fundraised, e.g. grants) or annual operating expenses (including staff and non-staff, e.g. occupancy, equipment, licences etc.). At present, we have limited understanding of the budgets of our members and how this may compare to their publishing revenues or expenses. We are looking to learn more about this as part of our annual membership data checking process, where we email all our members to ask them to confirm contact details for their organisation and the staff involved in managing their Crossref account. This year, we’re also asking all members about their organisation’s annual operating budget (or planned annual expenses) to help inform our discussions. In our case, the volume of outputs (in this case the number of items and associated metadata registered with Crossref) is recognised by the registration fees mechanism.\nConsulting with organisations outside Crossref membership To help us inform how our fees can be more equitable, it’s important to invite voices of organisations that may currently be unable to join us - due to fees or technical barriers. We hope that learning more about their circumstances will help us make sure that we improve accessibility of Crossref membership to all organisations that publish scholarly and professional works. We commissioned Accucoms to carry out a consultation on our behalf.\nSo far, from a handful of interviews with publishers from Nigeria, DRC, Canada and USA, we’ve learnt that while virtually all offer open access to their publications, the majority has no publishing income, and where the income is derived via APCs it’s modest and only applicable in rare circumstances. Through institutional funding and/or grants, these organisations have modest operational budgets, yet our respondents lacked clarity over the particulars. In terms of participation in professional networks and international publishing organisations, only one of the organisations we interviewed participates in DOAJ, and another is a member of OASPA, in both cases their participation is free. 
Among the interviewees, two organisations were interested in Crossref membership in the past but encountered technical barriers to joining.\nWith only five interviews to date, the consultation is still open and we’re keen to hear from more organisations that are not Crossref members but have considered our membership at some point.\nGoal 2: Simplify complex fees Projects 3 \u0026amp; 4: Review volume and backfile discounts for Content Registration Along with our membership fees, our members also pay usage-based registration fees for records (scholarly works and grants) they register with us. Different content types render different costs for our members, and the fees are subject to discounts related to the age of publication and volume of registrations. Records for items older than two years have a lower fee associated with them, to help incentivise registration of such “backfile” materials with great gains for the Research Nexus. There are also discounts related to the volume of transactions – which again depend on the content types.\nThese discounts are intended to encourage certain behaviours, specifically encouraging members to register older records in large quantities to better complete the scholarly record. Not all content types have backfile or volume discounts, and the rate of discount varies. This creates quite a complex system of fees. To the extent that the discount is successful in encouraging this behaviour, we want to preserve it, but in many cases these discounts see little to no activity.\nFollowing the discussions of the Membership and Fees Committee, chaired by Vincas Grigas, Vilnius University, we are preparing to consult with the small number of members who currently receive volume discounts to discuss what the impact would be if we removed them.\nWe plan to identify and preserve the well-used backfile discounts, which encourage registration of old content, such as books, journal articles, grants. However, there are types of discounts that are hardly ever used and we are considering removing these to simplify the fees. This work will focus on the technical implications of removing some of the underused backfile discounts from the billing code and consulting with members to understand any impact .\nGoal 3: Rebalance revenue sources Project 5: Reflect increase in metadata usage and perceived shift of value toward metadata distribution All Crossref metadata is made freely and openly available to everyone. However, some organisations may be looking for a service level agreement in delivery of the metadata, plus more regular snapshots and priority service/rate limits. For those organisations, we have an optional Metadata Plus service.\nThe final project is looking at the fees for this service. We are interested in making sure that Crossref metadata is available and used by the community where it can contribute to their objectives – related to discovery, analysis, integrity, and more. The optional paid service we offer aims to support the external tools that facilitate business and scholarly processes for the community. We are heartened to see that the appetite for the use of metadata seems to be growing, and the value of open research information is increasingly and widely recognised. 
We want to ensure that the users of metadata contribute proportionally to the maintenance of the records created and curated by our members.\nConclusion At this point, most projects generate a lot of questions and the work is underway to deliver answers related to capacity to pay, discounts as well as available metadata usage, and barriers faced by organisations in our community.\nWhat we have found so far is that two of our goals – simplification and equity – are often at odds with each other, and this is especially true with the $275 tier.\nWe welcome comments, suggestions and questions.\n", "headings": ["Goal 1: More equitable fees.","Project 1: Evaluate the lowest membership tier and propose a more equitable pricing structure.","Members’ survey","Project 2: Review the basis and distribution of membership tiers","Consulting with organisations outside Crossref membership","Goal 2: Simplify complex fees","Projects 3 \u0026amp; 4: Review volume and backfile discounts for Content Registration","Goal 3: Rebalance revenue sources","Project 5: Reflect increase in metadata usage and perceived shift of value toward metadata distribution","Conclusion"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/strategy/", "title": "Strategic agenda and roadmap", "subtitle":"", "rank": 1, "lastmod": "2024-10-16", "lastmod_ts": 1729036800, "section": "Strategic agenda and roadmap", "tags": [], "description": "Welcome to our strategic roadmap\u0026mdash;finalised January 2023\u0026mdash;which sets out Crossref\u0026rsquo;s priorities through 2025 so everyone can see our focus areas and upcoming projects.\nLike others, we envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nCrossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better.", "content": "Welcome to our strategic roadmap\u0026mdash;finalised January 2023\u0026mdash;which sets out Crossref\u0026rsquo;s priorities through 2025 so everyone can see our focus areas and upcoming projects.\nLike others, we envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nCrossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services\u0026mdash;all to help put scholarly content in context.\nThis page presents all our high-level activities, from open governance and sustainability, collaborative projects with different parts of the ever-diversifying scholarly community, work to expand metadata, and delivery of tools and APIs to retrieve works, entities, and their relationships\u0026mdash;all while fostering a strong global team.\nRead on to learn more about where Crossref is heading and let us have your thoughts by starting or joining a discussion in the strategy section of our community forum. 
Review the archived strategic narratives for 2020-2022 and 2018-2020, and read background on our strategic planning approach on our blog.\nView our live and detailed product roadmap at the bottom of this page.\nCrossref\u0026rsquo;s role in the scholarly landscape Governments, funders, institutions, and researchers\u0026mdash;groups who once had tangential involvement in scholarly publishing\u0026mdash;are taking a more direct role in shaping how research is recorded, shared, contextualised, and assessed.\nWe now have more members that self-identify as universities or research institutions than as publishers, and we have seen a rise in library- and academic-led publishing. Many research funders are playing their part by supporting open infrastructure, registering their records with Crossref as members, and seeing this as a direct way of measuring reach and return on their grants and awards.\nAs more people contribute to an evolving scholarly record, Crossref must capture provenance and relationships through metadata, and broaden its scope even further to collect, clean, and deliver metadata in context.\nWe are scaling our systems, tools, and resources to manage this and we are doing it in the open, to demonstrate our commitment to POSI and add some assurance that research is being properly supported, and so we can more easily integrate and co-create with other open infrastructures, whilst making it easy for members to participate and share as much metadata as they have.\nWith a more complete picture of the scholarly record available in the open, everyone will be able to examine the integrity, impact, and outcomes of our collective efforts to progress science and society.\nContribute to an environment where the community identifies and co-creates solutions for broad benefit A sustainable source of complete, open, and global scholarly metadata and relationships Manage Crossref openly and sustainably, modernising and making transparent all operations so that we are accountable to the communities that govern us Foster a strong team\u0026mdash;because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it We want to contribute to an environment in which the scholarly research community identifies shared problems and co-creates solutions for broad benefit We do this in all teams through research and engagement with our expanding global community, responding to and leading trends in scholarly communications.\nSome problems benefit from collective action. As scholarly communications evolves, we need to be proactive within our community to understand how we can help solve shared problems. 
We continue to focus on collaboration with new and long-standing partners, developing programs through cross-team working groups to allow us to move nimbly and involve the community as we do.\nRecently completed Announced plans for merging Funder Registry into ROR Community consultation about Crossmark Explored preservation coverage among Crossref membership and partnering with preservation services Released Participation Reports v1.1 together with CWTS at Leiden University In focus Extend engagement around our Integrity of the Scholarly Record (ISR) program with editors, institutions, and research integrity sleuths Develop Similarity Check to improve the user experience and add features Grow adoption of the Grant Linking System (GLS) with more funders becoming members Align the Open Funder Registry with ROR Work with the Barcelona Declaration signatories to prioritise open metadata efforts Expand and relaunch our Service Provider program Up next Extend support for grant metadata Run a series of API sprints/hackathons to encourage the community to build tools with Crossref metadata. Under consideration Exploring: How do we more efficiently gather metadata corrections and notify the community? Exploring: Monitoring research integrity community developments and tools we could align with We want to be a sustainable source of complete, open, and global scholarly metadata and relationships We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so.\nBuilding a more complete picture of the scholarly record means thinking about our metadata outside the more rigid structures once provided by content containers. In line with community needs, we are building flexible, clearer assertions of metadata provenance, and charge ourselves with improving the accuracy, transparency, and downstream usage of the metadata we collect and ingest from a range of sources. We will support our members in improving the provision of key metadata fields so that they can easily contribute to the growing network of metadata and relationships.\nRecently completed Celebrated five years of the Crossref Grant Linking System (GLS) Launched Crossref-hosted interim pages for grants (see example) Shared metadata development plans in response to community feedback Retired the inconsistent and unreliable subject codes and removed them from the REST API In focus Broad adoption of the GEM Program to include more of the world\u0026rsquo;s metadata Surveying small members and consulting with non-member journals to better understand fee accessibility Extending record registration form to accept articles and other research objects; planning to retire Metadata Manager tool as a result Developing clear metadata development strategy, priorities, and roadmap Hosting a series of webinars to support members’ metadata quality improvements with Participation Reports Co-creating interactive resources for using metadata, e.g. interviews, demos, and tutorials for working with our API Publishing a blog series on metadata matching Investigating strategies for matching affiliation strings to ROR IDs Developing a proof-of-concept work for DOI assignment and management with static site generators Ongoing bug fixes and metadata completeness checks for our REST API Up next Metadata schema updates to include citation types, versions, and contributor roles - share your input and feedback! 
Improvements to the design and functionality of the Crossmark button following community consultation Merge the Open Funder Registry into ROR Incorporate new preprint matching approach in our API Incorporate Retraction Watch data into our API Under consideration Develop a training program and resources on metadata registration best practices Extend the list of metadata elements that can be registered using our helper tools We want to manage Crossref openly and sustainably, modernising and making transparent all operations so that we are accountable to the communities that govern us. We do this by actively broadening board representation, managing the finances responsibly and openly, and improving organisational resilience through modernising systems, processes, and policies.\nWe invest time and resources in embedding open practices in our organisation so that you know what we know. We continuously address technical debt and work to make the infrastructure robust and resilient for the future. We use POSI as a decision framework to help improve ways of working and together with the other adoptees, we uncover more ways to be open and accountable to the community as we evolve.\nRecently completed Improved \u0026lsquo;broad stakeholder governance\u0026rsquo;, and progressed \u0026lsquo;open data\u0026rsquo; (See the self-assessments) Released the 2024 public data file Reached our 12-month contingency goal (see: Sustainability) Worked with the other POSI-adopting infrastructures to release v1.1 of the Principles together Published our employee manuals and policies publicly for transparency In focus Compiling latest self-assessment update on POSI compliance Drafting patent non-assertion policy Updating our by-laws to create a new category of membership to broaden participation Increasing transparency of more of our employment practices and general operations Migrate all databases to open-source database management systems Share data, software, and methods used in recent metadata matching explorations for reuse and reproducibility by the community Up next Move from data center to cloud Establish test environments in AWS Reconsider the structure of our REST API to gain efficiencies Decouple and open up additional parts of our legacy, closed codebases Conduct penetration tests in 2025 and publish a report on the findings Under consideration Exploring: Is our governance structure representative and are the processes efficient? (see: Governance) Continue opening up our technical support conversations and processes We want to foster a strong team\u0026mdash;because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it We do this through fair policies and working practices, a balanced approach to resourcing, and accountability to each other.\nOur commitment to collaboration and transparency in the community is reflected in how we operate as a team as well. 
By making our people operations more transparent, we can ensure that our approach is applied consistently and equitably; potential candidates can get a sense of how we operate; and other organisations can adapt and reuse policies if they wish.\nRecently completed Reviewed and revised our leadership structure to set up our teams to reflect and accommodate Crossref\u0026rsquo;s growth and expansion Agreed and published our new travel and events policy Reviewed recruitment and compensation practices In focus Hiring for two key leadership positions to strengthen our Operations and Programs groups Pursuing a program to better understand how resourcing supports Crossref\u0026rsquo;s sustainability Setting up cross-functional program teams to manage our work more collaboratively Continuously (re-)prioritizing our product roadmap using a rubric of 12 prioritization drivers Automating membership processes to support continued growth Tracking staff carbon emissions and considering ways to reduce our environmental impact Up next Plan 2025 in-person all-staff event Evolve the R\u0026amp;D function into a new Data Science function whose focus will be to analyse and improve how we collect, match, and deliver scholarly metadata Close our last remaining physical office in Lynnfield, MA (USA) Under consideration Exploring: how do we adapt to working in a fully remote environment as a growing organization? Completed activities Recently completed activities are listed on our strategic roadmap above. The following projects and initiatives have been completed since we last revisited our high-level strategy but are no longer considered recent:\nAcquired and opened Retraction Watch database; entered an agreement to grow the service together with Center for Scientific Integrity Brought operational management of ROR in-house, shared across DataCite, CDL, and Crossref Added metadata records with ROR IDs to our API Launched Global Equitable Membership (GEM) program Opened references by default Released new form for funders to register grant records New recommendations from the Preprints Working Group Discovery work on metadata matching and relationships Investigated patterns in and matching of article and preprint metadata Explored preservation coverage among Crossref membership Developed a framework to better track our carbon emissions Developed v2 of the experimental Labs participation reports Paused activities We are not currently actively pursuing the following projects, either because other things have taken priority or because they have been taken as far as they can for now:\nImplement \u0026lsquo;item graph\u0026rsquo; to reflect nuanced relationships and to ensure future schema flexibility Further work on a new API endpoint for relationships including providing a dataset of data citations Build a bridge API to surface information about billing and other internal operations Build a test environment for metadata schema updates Live product roadmap See the full link at Productboard.\n", "headings": ["Crossref\u0026rsquo;s role in the scholarly landscape","Contribute to an environment where the community identifies and co-creates solutions for broad benefit","A sustainable source of complete, open, and global scholarly metadata and relationships","Manage Crossref openly and sustainably, modernising and making transparent all operations so that we are accountable to the communities that govern us","Foster a strong team\u0026mdash;because reliable infrastructure needs committed people who contribute to and realise 
the vision, and thrive doing it","We want to contribute to an environment in which the scholarly research community identifies shared problems and co-creates solutions for broad benefit","Recently completed","In focus","Up next","Under consideration","We want to be a sustainable source of complete, open, and global scholarly metadata and relationships","Recently completed","In focus","Up next","Under consideration","We want to manage Crossref openly and sustainably, modernising and making transparent all operations so that we are accountable to the communities that govern us.","Recently completed","In focus","Up next","Under consideration","We want to foster a strong team\u0026mdash;because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it","Recently completed","In focus","Up next","Under consideration","Completed activities","Paused activities","Live product roadmap"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/events/metadata-sprint/", "title": "Crossref Metadata Sprint 2025", "subtitle":"", "rank": 1, "lastmod": "2024-09-25", "lastmod_ts": 1727222400, "section": "Webinars and events", "tags": [], "description": "Overview Crossref makes research objects easy to find, cite, link, assess, and reuse. We exist to make scholarly communications better. We emphasise the community\u0026rsquo;s role in contributing to this goal and next Spring, we’re opening a new option to collaborate and co-create new research, tools and initiatives – at our first Metadata Sprint. Read on to learn what we’re hoping to achieve and how you can take part.\nStudying and using the ever-growing magnitude of the scholarly record requires collaboration and joint thinking.", "content": "Overview Crossref makes research objects easy to find, cite, link, assess, and reuse. We exist to make scholarly communications better. We emphasise the community\u0026rsquo;s role in contributing to this goal and next Spring, we’re opening a new option to collaborate and co-create new research, tools and initiatives – at our first Metadata Sprint. Read on to learn what we’re hoping to achieve and how you can take part.\nStudying and using the ever-growing magnitude of the scholarly record requires collaboration and joint thinking. In the case of Crossref’s metadata, we are referring to a body of over 162 million records provided by 21,000 members across the globe. This is actively used, on one hand, to power all sorts of scholarly tools, such as reference managers, open catalogues, information dashboards and many more, but also as the raw data for meta-research.\nWe offer open access to this metadata via our REST API, which supports a wide range of queries, facets and filters. The REST API can be used, for example, to look up the metadata for specific records, search for works mentioning an author’s name or find retractions registered with us. It also allows users to filter on several elements, including funder IDs, ORCIDs, dates and more. Despite API being commonly defined as interfaces for machines, the number of people engaging directly with APIs is accelerating and diversifying today. Additionally, we also provide annual public data files, which you can download via BitTorrent protocol and get your hands on the ~212 GB file containing all of Crossref metadata.\nOur upcoming 25th anniversary lends itself as an opportunity for doing something new and increasing support for community-led initiatives. 
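Because the sprint centres on the REST API described above, here is a minimal sketch (in Python; not an official Crossref client, and the author name and email address are placeholders) of the kind of works query the passage mentions, combining a free-text author search with filters and a row limit. The parameter names follow the public REST API documentation, but it is worth checking the current reference before relying on them.

```python
# Minimal sketch of a Crossref REST API works query; not an official client.
# "josiah carberry" and "you@example.org" are placeholders, not real contact details.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "query.author": "josiah carberry",                     # free-text author search
        "filter": "has-orcid:true,from-pub-date:2020-01-01",   # combine filters with commas
        "rows": 5,                                             # records per page
        "mailto": "you@example.org",                           # identifies you for the "polite" pool
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    print(item["DOI"], (item.get("title") or [""])[0])
```

The same pattern covers the other lookups mentioned above, for example swapping the filter for update-type:retraction to find retraction notices, or paging through larger result sets with the cursor parameter.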
We want to bring together community members to engage with our REST API and the scholarly metadata therein, and foster innovation and creativity, solve real-world problems, and promote networking and collaboration. Some examples of projects that we have co-created with the community include the JSON Forms release that supported Vuetify, the Vue.js user interface library we use and, more recently, the update to our Participation Reports. We welcome librarians, research integrity experts, scientometricians, meta-scientists, data scientists, coders, engineers, and other open scholarly infrastructure enthusiasts. We invite the participants to suggest and join projects within the following themes:\nIntegrity of the scholarly record - what signals and patterns of lack of integrity can be detected using scholarly metadata? Quality and completeness - tools and strategies to assess and enhance the quality and completeness of the scholarly metadata. Metadata for everyone - making interacting with our REST API easier and more accessible. Metadata by everyone - tools to help the community contribute to the scholarly record in various ways. Outside the box - projects that don’t fit any of the above themes. You can pitch your project or support one proposed by others in the community. This event is not limited to technical-oriented participants. If you are interested in open infrastructure and don’t have a technical or code-oriented background, we encourage you to register your interest too.\nThe Metadata Sprint will take place in the National Museum of Natural Sciences of Madrid, Spain, on the 8 and 9 of April 2025. There is a limited number of spaces – we hope to welcome 30 participants at the event. We will encourage accepted participants to involve others in the community to collaborate ahead of time via our Community Forum, crowdsourcing ideas and comments on their initiative to support productivity on the day.\nIf you need a refresher on our API and what you can access through it, you can visit the REST API section of our documentation. We also encourage you to explore the API documentation and the latest news about our public data file.\nWe\u0026rsquo;re committed to inclusivity and diversity at our Metadata Sprint and will look to offer assistance for accepted applicants for whom travel costs might be a barrier to participation.\nFinally, we encourage you to read our Code of Conduct, which also applies to this event.\nWe are looking forward to meeting you in Madrid!\nRegistration and submission form Your browser does not support iframes. Direct access to the registration and submission form.\n+- Program\rAgenda coming soon\u0026hellip;watch this space.\n+- FAQS\rWhat should I bring?\nBring your laptop with you. The venue will have wifi access and plenty of power sockets. If you are visiting us from outside the EU, make sure you pack your power adapters.\nIs there a fee to participate?\nRegistration is free, but we do have limited seats.\nCan I join if I don’t have a team?\nAbsolutely. We will have slack channels and host group calls to let the participants start interacting with each other in advance. Ultimately, you can always choose to develop your project on your own.\nCan I propose a different theme or topics?\nSure! 
Be sure to include your proposal in the registration form.\nWhat are the entry requirements for Spain?\nYou can check if you need an entry Visa in this link\nDo you have any hotel suggestions?\nHere’s a selection of hotels near the Museo Nacional de Ciencias Naturales, with options for every budget. All hotels are within walking distance or a short transport ride, making it convenient for your visit. See the map below to see the proximity of the hotels to the venue.\nMap view of all locations shared below.\nNH Collection Madrid Abascal\nDistance: approx. 15-minute walk Price: €260+ a night Google Maps NH Madrid Chamberí\nDistance: approx. 15-minute walk Price: €210+ a night Google Maps Exclusive Rooms in the Heart of the City, Madrid, Spain\nDistance: approx. 15-minute walk Price: €110+ a night Google Maps Hotel NH Madrid Chamberí\nDistance: approx. 15-minute walk Price: €200-€250 a night Google Maps Hotel Suites Barrio De Salamanca\nDistance: approx. 15-minute walk Price: €250-€300 a night Google Maps Hotel NH Madrid Zurbano\nDistance: approx. 9-minute walk Price: €100-€150 a night Google Maps Barceló Emperatriz\nDistance: approx. 11-minute walk Price: €200-€250 a night Google Maps How do I get to the venue?\nWe encourage participants to consider the environment and travel by land wherever practicable. However, if you need to fly, Adolfo Suárez Madrid–Barajas Airport (MAD) is the nearest. By train: Getting to Madrid by Train.\nThe venue is close to the Nuevos Ministerios metro and train station, and it can also be reached by those arriving to the Gregorio Marañón metro station. Both are at less than 10-minute walk.\nContact us If you have any questions, contact events@crossref.org.\nSocial Media Links: LinkedIn, Mastodon, Twitter, Instagram\n", "headings": ["Overview","Registration and submission form","Contact us"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/board/", "title": "Board", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/crossref-live/", "title": "Crossref Live", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/elections/", "title": "Elections", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2024-board-election/", "title": "Meet the candidates and vote in our 2024 Board elections", "subtitle":"", "rank": 1, "lastmod": "2024-09-24", "lastmod_ts": 1727136000, "section": "Blog", "tags": [], "description": "On behalf of the Nominating Committee, I’m pleased to share the slate of candidates for the 2024 board election.\nEach year we do an open call for board interest. 
This year, the Nominating Committee received 53 submissions from members worldwide to fill four open board seats.\nWe maintain a balanced board of 8 large member seats and 8 small member seats. Size is determined based on the organization\u0026rsquo;s membership tier (small members fall in the $0-$1,650 tiers and large members in the $3,900 - $50,000 tiers).", "content": "On behalf of the Nominating Committee, I’m pleased to share the slate of candidates for the 2024 board election.\nEach year we do an open call for board interest. This year, the Nominating Committee received 53 submissions from members worldwide to fill four open board seats.\nWe maintain a balanced board of 8 large member seats and 8 small member seats. Size is determined based on the organization\u0026rsquo;s membership tier (small members fall in the $0-$1,650 tiers and large members in the $3,900 - $50,000 tiers). We have two large member seats and two small member seats open for election in 2024.\nWe were pleased to see the diversity in candidates, with applicants from 24 countries. We also received three applications from research funders, which we specifically identified as a priority in the committee’s remit for this year. The committee was keen to prepare a diverse slate of organization types, individual skills, and global representation.\nThe Nominating Committee presents the following slate.\nThe 2024 slate Tier 1 candidates (electing two seats): Katharina Rieck, Austrian Science Fund (FWF) Lisa Schiff, California Digital Library Ejaz Khan, Health Services Academy, Pakistan Journal of Public Health Karthikeyan Ramalingam, MM Publishers Tier 2 candidates (electing two seats): Aaron Wood, American Psychological Association Dan Shanahan, PLOS Amanda Ward, Taylor and Francis Please read the candidates\u0026rsquo; statements Every member has a vote If your organization is a voting member in good standing as of September 11th, 2024, you are eligible to vote.\nThe voting contact for your organization will receive a ballot from eBallot, a third-party election platform. You should receive your ballot by Wednesday, September 25th, and you will have until 15:00 UTC on October 29th to submit your ballot.\nThe election results will be announced at Crossref2024, our annual online meeting on October 29th, 2024.\nIf you have any questions about our election process, please contact me.\nHappy voting!\n", "headings": ["The 2024 slate","Tier 1 candidates (electing two seats):","Tier 2 candidates (electing two seats):","Please read the candidates\u0026rsquo; statements","Every member has a vote"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/member-briefing/", "title": "Member Briefing", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/crossref-annual-meeting/", "title": "Crossref Annual Meeting", "subtitle":"", "rank": 1, "lastmod": "2024-09-08", "lastmod_ts": 1725753600, "section": "Crossref Annual Meeting", "tags": [], "description": "#Crossref2024 online, 29 October 2024 Our annual meeting, #Crossref2024, was held online on 29 October 2024 starting at 8:00 AM UTC to 18:30 PM UTC (universal coordinated time). 
We invited all our members from 170+ countries, and everyone in our community, to hear the results of our board election and team updates.\nPlease see information from #Crossref2024 below, and cite the outputs as #Crossref2024 Annual Meeting and Board Election, 29 October 2024 retrieved [date], [https://doi.", "content": "#Crossref2024 online, 29 October 2024 Our annual meeting, #Crossref2024, was held online on 29 October 2024 starting at 8:00 AM UTC to 18:30 PM UTC (universal coordinated time). We invited all our members from 170+ countries, and everyone in our community, to hear the results of our board election and team updates.\nPlease see information from #Crossref2024 below, and cite the outputs as #Crossref2024 Annual Meeting and Board Election, 29 October 2024 retrieved [date], [https://0-doi-org.libus.csd.mu.edu/10.13003/1KJ1GBDA9B](https://0-doi-org.libus.csd.mu.edu/10.13003/1KJ1GBDA9B):\nSession I\nTime Topic 0:00 Welcome \u0026amp; Crossref updates 1:20 Strategic programs \u0026amp; annual meeting 31:49 Demos 59:08 Updates from the Community I 1:01:12 - Michael Parkin, EMBL-EBI- [Slides] 1:09:14 - Hans de Jonge, Dutch Research Council NWO - [Slides] 1:23:22 - Fred Atherden, eLife - [Slides] 1:32:02 - Brietta Pike, CSIRO - [Slides] 1:54:12 Panel discussion - Opportunities and challenges of the open scholarly infrastructure 3:10:37 Reflections break-outs (ISR, RCFS, Research Nexus, Reflections) Slides Session II\nTime Topic 0:00 Welcome and introduction 1:38 Beyond the basics: Crossref API Workshop 25:08 Metadata Schema 56:10 Resourcing Crossref for Future Sustainability (RCFS) 1:43:30 The state of Crossref 2:13 Board Election 2:32 Updates from the Community II 2:35:16 - Alice Wise, CLOCKSS- [Slides] 2:48:03 - Mark Williams, Sciety - [Slides] 2:58:34 - Arianna Garcia, AmeliCA/Redalyc - [Slides] 3:27:00 Reflections break-outs (ISR, RCFS, Research Nexus, Reflections) 3:32:21 Closing Remarks Posters\nSlide deck\nThe annual meeting archive Browse our archive of annual meetings with agendas and links to previous presentations from 2001 through 2015. Its a real trip down memory lane!\nPlease contact us with any questions.\n", "headings": ["#Crossref2024 online, 29 October 2024","The annual meeting archive"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/jobs/", "title": "Jobs", "subtitle":"", "rank": 2, "lastmod": "2024-09-07", "lastmod_ts": 1725667200, "section": "Jobs", "tags": [], "description": "Help us achieve our mission to make research outputs easier to find, cite, link, assess, and reuse. We\u0026rsquo;re a small but mighty group working with over 20,000 members from 160 countries, and we have thousands of tools and services relying on our metadata, which sees over 2 billion queries every month on average. We are fully remote and have 46 staff spanning San Diego to Hong Kong and we all like to interact with and co-create with our engaged community.", "content": "Help us achieve our mission to make research outputs easier to find, cite, link, assess, and reuse. We\u0026rsquo;re a small but mighty group working with over 20,000 members from 160 countries, and we have thousands of tools and services relying on our metadata, which sees over 2 billion queries every month on average. 
We are fully remote and have 46 staff spanning San Diego to Hong Kong and we all like to interact with and co-create with our engaged community.\nWe take our work seriously but usually not ourselves\u0026hellip; so come and work with us - where else can you do something a bit geeky and important that is also sometimes fun?!\nCommunity Engagement Manager (Funders), posted December 5, 2024 Data Scientist, posted January 16, 2025 Take a look at our current team and check out the org chart to see where our vacancies fit in. We especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nEqual opportunities commitment Crossref is committed to a policy of non-discrimination and equal opportunity for all employees and qualified applicants for employment without regard to race, colour, religion, sex, pregnancy or a condition related to pregnancy, sexual orientation, gender identity or expression, national origin, ancestry, age, physical or mental disability, genetic information, veteran status, uniform service member status, or any other protected class under applicable law. Crossref will make reasonable accommodations for qualified individuals with known disabilities in accordance with applicable law. We regularly review and revise our code of conduct.\nPlease contact our HR Manager, Michelle Cancel, for any quest\n", "headings": ["Equal opportunities commitment"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/grant-linking-system/", "title": "Grant Linking System (GLS)", "subtitle":"", "rank": 1, "lastmod": "2024-09-04", "lastmod_ts": 1725408000, "section": "Find a service", "tags": [], "description": "The Crossref Grant Linking System (GLS) is a service for research funders to contribute to open science infrastructure. As members of Crossref, funders create unique links and open metadata about their support of all kinds, from financial grants to prizes to use of facilities. This metadata is distributed at scale openly and globally and the unique links are acknowledged in any outputs of the funding, such as publications, preprints, data and code - in order to streamline the reporting process.", "content": " The Crossref Grant Linking System (GLS) is a service for research funders to contribute to open science infrastructure. As members of Crossref, funders create unique links and open metadata about their support of all kinds, from financial grants to prizes to use of facilities. This metadata is distributed at scale openly and globally and the unique links are acknowledged in any outputs of the funding, such as publications, preprints, data and code - in order to streamline the reporting process. Background to the Grant Linking System In 2017, our board agreed that connecting more intentionally with research funding should be a key strategic priority for the Crossref infrastructure which already supported all kinds of research outputs like articles, preprints, standards, datasets, etc. Whilst the scholarly community has long been linking persistently between articles and other objects (Crossref), people (ORCID), and institutions (ROR), the record of the award was not captured in a consistent way across funders worldwide. 
Researchers and publishers have long been acknowledging funders in their publication metadata, but the grants themselves were not easily and persistently linked up with the literature or with researchers or with institutions.\nAfter a board motion in 2017, we reconvened our Funder Advisory Group and worked with other community partners such as Europe PMC to create a new strategic plan. Part of that work was to agree on a sustainability model and fee schedule, and to design a schema that would capture relevant metadata about grants and projects, and in 2019 we launched the Crossref Grant Linking System (GLS).\nNow, over 35 funders have joined as members of Crossref and created over 125,000 grants with metadata records and globally unique persistent links that can be connected with outputs, activities, people, and organizations.\nFeatures of the Grant Linking System Globally unique persistent link and identifier for each grant Connected with 160 million published outputs Funder-designed metadata schema, including project, investigator, value, and award-type information Programmatic or no-code methods to send metadata Thanks to the Gordon and Betty Moore Foundation who funded development of the online grant registration form in 2023. Distributed openly to thousands of tools and services Open search and API for all to discover funding outcomes Crossref-hosted landing pages A global community of ~50 funder advisors and \u0026gt;35+ funders already in the Grant Linking System Membership of Crossref; influence the foundational infrastructure powering open research What our GLS funder members say Research funders are a part of the scholarly communications system. We not only provide the funding to do the actual research but can also be the authoritative source of data about the projects we have funded and the outputs arising from that funding. Increasingly, all these elements – grants, researchers, outputs – are linked with persistent identifiers to ensure that research is findable and accessible. As part of its open science policy, NWO will start participating in the Crossref Grant Linking System from July 2025.\n\u0026ndash; Hans de Jonge, Director of Open Science NL, part of the Dutch Research Council (NWO)\nGrant DOIs enhance the discovery and accessibility of funded project information and are one of the important links in a connected research ecosystem. I\u0026rsquo;m grateful and proud to contribute to the robustness and interconnectedness of the research infrastructure. Few funders are currently participating in the Crossref Grant Linking System, and I encourage others to consider doing so. This adoption follows the \u0026ldquo;network effect,\u0026rdquo; where the value and utility increase as more people participate, encouraging even wider adoption.\n\u0026ndash; Kristin Eldon Whylly, Senior Grants Manager and Change Management Lead at Templeton World Charity Fund (TWCF)\nThe initiative by FCT to assign unique DOIs to national public funding through Crossref is a game-changer for open science, linking funding directly to scientific outcomes and boosting transparency. 
Join us in this effort—let\u0026rsquo;s make every grant count and ensure open access to research information!\n\u0026ndash; Cátia Laranjeira, PTCRIS Program Manager at Fundacao para a Ciencia e a Tecnologia (FCT Portugal)\n+- Benefits for funders\rDuring research management (primarily coming from activity reporting):\nImproved analytics and data quality More complete picture of outputs and impact Better value from investments in reporting services Improved timeliness, completeness and accuracy of reporting More complete information to support analysis and evaluation Streamlined discovery of funded content During reporting and evaluation (with a special component for policy compliance):\nBetter information about publication pathways and policy compliance Better/more comprehensive data about the impact and outcomes of their policies Improved data on policy compliance Improved data on policy progress and impact Streamlined discovery of funded content Better understanding of the effects of investments on the research landscape Clearer data on impact and ‘ROI’ for facility/infrastructure investments Improved analysis and evidence of outcomes of career support Improved publication ethics and research integrity (COIs, funding transparency etc.) Improved picture of long-term ROI and impact +- Benefits for content hosts\rContent hosts include publishers, data repositories, and hosting platforms.\nImproved publication ethics and research integrity Improved services to authors Improved transparency on content access More connections within and between platforms and content New platform opportunities and value-added services Reduced administrative and information management/verification overhead New value-added services Greater ecosystem integration Improved user experiences +- Benefits for research organizations\rThis includes benefits for research administrators and managers, resource managers, project managers, and institutional policy makers.\nResearch administrators and managers benefit from:\nOpportunities to provide additional effective and constructive support for proposal preparation (pre-award) Easier due diligence (pre-award) Reduced overhead in data collection (research management) Reduced overhead in compliance and data checking (research management) Reduced time/effort and improved data quality (reporting and evaluation) Improved evidence for decision making (reporting and evaluation) Better evidence for career and organizational impact (reporting and evaluation) Resource managers benefit from:\nBetter intelligence on funding sources and dynamics (pre-award) Better understanding of who is using their facilities (research management) Clearer links to downstream benefits of their work and provision (reporting and evaluation) Improved reporting/analysis capacity (reporting and evaluation) Improved data quality (research management) Simplified data sharing (research management) Institutional policy makers and strategists benefit from:\nUnderstanding funder portfolios to improve grant targeting (pre-award) Reduced data gathering overhead and improved intelligence about their portfolio of outputs (research management) Richer understandings of their research activity portfolio (research management) Better management of APC budgets (research management) Greater insight and evidence for stronger strategic planning (research management) More complete information to support analysis and evaluation (reporting and evaluation) Improved analytics and data quality (reporting
and evaluation) Better understanding of outcomes of studentships and postdoctoral positions (reporting and evaluation) Improved connections to alumni (reporting and evaluation) Better data for benchmarking (reporting and evaluation) +- Benefits for researchers\rIn applying for funding, researchers benefit from:\nReduced data entry and improved reusability of information in applications Better tailored institutional support Improved targeting and design of career-supporting interventions from funders Improved review Easier completion of applications In conducting research, researchers can benefit from:\nBoosted current awareness Easier access to facilities Reduced administrative overhead In publishing, researchers benefit as authors and as readers from:\nShorter publication delays Simplified acknowledgement processes Critical awareness of any potential bias Richer context and simplified discovery Reduced uncertainty and administration around policy compliance In reporting on their activities to funders:\nImproved reporting experiences A shift from data collation/entry to verification Easier acknowledgement of support for their careers In building their careers:\nBoosted impact and enhanced visibility As collaborators, from better understanding of the contributions of others and improved recognition for their own contributions Clearer, more complete and complex career records Enhanced career recognition and support More diverse data sources for recognition and reward At every stage, the core benefits for researchers include:\nBetter career representations and reputational enhancement Simplified administration, reporting and application processes with reduced overhead and duplication of effort Better intelligence about research support and future opportunities for funding and collaboration Funder-designed metadata model One thing to note about Crossref grant records is that they can be registered for all sorts of support for research, such as awards, use of facilities, sponsorship, training, or salary awards - essentially any form of support provided to a research group. The award type list (agreed by the Funder Advisory Group) is currently:\naward: a prize, award, or other type of general funding contract: agreement involving payment crowdfunding: funding raised via multiple sources, typically small amounts raised online endowment: gift of money that will provide an income equipment: use of or gift of equipment facilities: use of location, equipment, or other resources fellowship: grant given for research or study grant: a monetary award loan: money or other resource given in anticipation of repayment other: award of undefined type prize: an award given for achievement salary-award: an award given as salary, includes intramural research funding secondment: detachment of a person or resource for temporary assignment elsewhere seed-funding: an investor invests capital in exchange for equity training-grant: grant given for training Take a look at the metadata schema described in our schema markup guide for grant metadata, to see the details of how to send (or retrieve) metadata including investigators, funding values and types, unlimited numbers of projects with titles and descriptions, and more.\nOutcomes: funding and outputs connected Matching and analyses Over the years, as the Grant Linking System has evolved, we have been closely watching and analysing the effects of all funding data on the global Crossref infrastructure.
This round-up of some of the community's analyses shows the breadth of applications.\nIn 2024, we updated our matching of award IDs in publications against grants registered in Crossref, showing the linking system in effect.\nnumber of | as of 2023-04-16 | as of 2024-04-16\ngrants | 76,621 | 120,819 (+58%)\nmembers registering grants | 28 | 34 (+21%)\nmatched relationships | 98,593 | 155,475 (+58%)\nmatched grants | 18,114 | 27,199 (+50%)\nSee earlier reports that show the same sort of analyses of grant\u0026lt;-\u0026gt;output matching as the above table, and their results with more explanation, from 2023: The more the merrier, or how more registered grants means more relationships with outputs; and from 2022: Follow the money, or how to link grants to research outputs.\nThe role of publishers Publishers have been including funding acknowledgements in their publication metadata at Crossref for over a decade. But they did not have a persistent link to allow seamless linking between article and grant. Now with the Crossref system they do - and as more and more funders join and register grants with us, more and more publishers will start to pick these up and include them in their article (and other) metadata. In fact, all 20,000 Crossref members have a responsibility to use Crossref links wherever they can, in reference lists, on interfaces, in search engines, etc.\nReal life example from eLife\u0026lt;-\u0026gt;Wellcome of the GLS in action This unique Crossref link https://0-doi-org.libus.csd.mu.edu/10.7554/eLife.90959.3 is for an eLife article that displays another unique Crossref link (https://0-doi-org.libus.csd.mu.edu/10.35802/212242) to the Wellcome grant on the Europe PMC page.\nThe same example appears in the metadata, using the funder-designed metadata schema and the relationship type is-financed-by.\nThat\u0026rsquo;s an example of funders and their grants and publishers and their publications connecting together, using the Grant Linking System within the global open science infrastructure.\nThe role of Grant Management Systems Following Europe PMC\u0026rsquo;s work helping Wellcome and other funders to create Crossref XML and host landing pages on its site, and Altum\u0026rsquo;s ProposalCentral integration with Crossref since 2021, in 2024 the GLS started to see increased interest from other systems in integrating with Crossref. One such example is an open-source plugin for Fluxx, which was kindly funded by OA.Works: https://github.com/oaworks/create-grant-doi-in-fluxx.\nWe\u0026rsquo;re currently reviewing and supporting Crossref integrations within a number of other Grant Management Systems and will add a list of those integrations here soon. Please get in touch if you are able to contribute to this work.\nGetting started If you\u0026rsquo;re reading this far, you must be about ready to get going.
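A quick way to see what the GLS already produces in live metadata is to pull the funder and award information for the eLife example above from our public REST API. The snippet below is a minimal, unofficial sketch in Python; the field layout it reads ("funder", "award") reflects a typical works record and should be checked against a live response.

```python
# Minimal sketch: fetch the Crossref works record for the eLife article discussed
# above and print any funder and award metadata it carries. Uses only the public
# REST API at api.crossref.org; this is not an official client.
import json
import urllib.request

DOI = "10.7554/eLife.90959.3"  # the eLife <-> Wellcome example

with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as response:
    record = json.load(response)["message"]

for funder in record.get("funder", []):
    name = funder.get("name", "unknown funder")
    funder_id = funder.get("DOI", "no funder ID")
    awards = ", ".join(funder.get("award", [])) or "no award numbers"
    print(f"{name} ({funder_id}): {awards}")
```

Award numbers and funder identifiers deposited by the publisher appear in this part of the record; grant links registered as relationships are distributed through the same open APIs.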
You\u0026rsquo;ll be joining Wellcome, European Research Council, NWO - Dutch Research Council, FWF - Austrian Science Fund, FCT - Foundation for Science and Technology, Portugal, JST - Japan Science and Technology Agency, CSIRO, Melanoma Research Alliance, OSTI at the US Department of Energy, and many other Crossref funder members.\nYou will need to be a Crossref member in order to participate in the Grant Linking System and register unique links for your grants.\nOnce you\u0026rsquo;re a member (or in preparation for becoming one), take a look at our documentation on registering grants, which will walk you through what you need to know and what information you can send to Crossref (which makes it globally and openly distributed at scale through our APIs). Some things you will need:\nUnique landing page for each grant, which should always also display the unique link An Open Funder Registry ID (find yours by searching for your organisation at search.crossref.org/funding and noting the ID in the URL, or ask us) Ability to create and send XML metadata OR allocate staff to register using the manual form funded by the Moore Foundation Map your internal award types to the Crossref award types Create communications for your awardees and/or include the Grant Linking System in your agreements. We\u0026rsquo;ve compiled a few examples below. Membership \u0026amp; fees Funders who would like to participate in the Grant Linking System and register their research grants should apply to join as a member. In some cases, your organization may already be a member - so we\u0026rsquo;ll check on that for you as you may be able to register grants under your existing membership. Membership comes with obligations to maintain the metadata record for the long term; our membership terms set these out. You will also be able to participate in Crossref governance, such as voting in or standing for our annual board elections - we very much encourage maintaining funder voices in the oversight of Crossref. Your first year\u0026rsquo;s membership invoice needs to be settled before a DOI prefix is assigned and your grant registrations can begin.\nWe have an introductory fee structure for funders, which includes a much lower annual membership fee (from USD $200 to USD $1200 depending on annual award value) and a higher per-record fee of USD $2.00 for current grants and USD $0.30 for older grants. This fee schedule was proposed by the original Advisory Group of funders and approved by the board in 2018. It allows the cost to be budgeted into the grant itself, rather than through the often non-existent administration or operations budgets. Please see our fees page for more information. Please note that fees may be changed as part of the Resourcing Crossref for Future Sustainability (RCFS) Program that started in 2024.\nCommunicating with grantees Some of our members have shared their standard notifications with the rest of the funder group. Some funders include mention of the Crossref unique link in their contracts, on acceptance, and some in emailed or online guidance. Some specify how the awardee should acknowledge their funding. We\u0026rsquo;ve noted a few examples that can be adapted depending on your process:\nIn an email notification Dear [principal investigator],\nThrough Crossref, [funder name] has assigned a globally unique identifier to your grant: https://0-doi-org.libus.csd.mu.edu/10.#####/#####.
We ask that this link be used in all instances when referencing our funding, such as when submitting a journal article, or when posting other elements of your research (e.g. preprints, data). Please enter this unique identifier link in the Award Number field (or equivalent) if one is available, and in the funding acknowledgement section of your work (sample included). The publisher can then collect it and associate it with your work.\nThis will help to accurately identify and recognize any funding you have received, connect your research outputs with the grant automatically, streamline the reporting process, and track the outputs of the grant. It can also boost the impact of your research and demonstrate your accomplishments to other funders and the rest of the research community.\nPlease use the following text to acknowledge the funding: \u0026ldquo;This publication is based on research supported by [funder name] (open funder registry ID ##########) under the grant https://0-doi-org.libus.csd.mu.edu/10.#####/#####\u0026rdquo;.\nFurther details are available [\u0026hellip;link to guidance as relevant].\nIn an agreement Communications clause: Any publication based on or developed under the Grant must, unless otherwise requested by the Grantor, contain an acknowledgment in the following or similar language that includes the Grantor’s open funder registry identifier and the Grant digital object identifier (DOI): \u0026ldquo;This publication is based on research supported by [funder name] (open funder registry ID ##########) under the grant https://0-doi-org.libus.csd.mu.edu/10.#####/#####\u0026rdquo;.\nAcknowledgements In mid-2024, we celebrated five years of grant linking! While thanks certainly go to our current volunteers from the funding community, we acknowledge that the GLS would not have been possible without early dedicated time and input from the following people and organisations on our working groups for governance and fees, and for metadata modelling:\nYasushi Ogasaka and Ritsuko Nakajima, Japan Science \u0026amp; Technology Agency Neil Thakur and Brian Haugen, US National Institutes of Health Jo McEntyre and Michael Parkin, Europe PMC Robert Kiley and Nina Frentop, Wellcome Alexis-Michel Mugabushaka and Diego Chialva, European Research Council Lance Vowell and Carly Robinson, OSTI/US Dept of Energy Ashley Moore and Kevin Dolby, UKRI/Medical Research Council/Research Councils UK Salvo da Rosa, Children\u0026rsquo;s Tumor Foundation Trisha Cruse, DataCite Please contact our membership specialists with any questions about joining, or our technical support specialists with questions about the grants schema or how to register your grants.\n", "headings": ["Background to the Grant Linking System","Features of the Grant Linking System","What our GLS funder members say","Funder-designed metadata model","Outcomes: funding and outputs connected","Matching and analyses","The role of publishers","Real life example from eLife\u0026lt;-\u0026gt;Wellcome of the GLS in action","The role of Grant Management Systems","Getting started","Membership \u0026amp; fees","Communicating with grantees","In an email notification","In an agreement","Acknowledgements"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/events/metadata-health-check-webinars/", "title": "Metadata health check webinars", "subtitle":"", "rank": 1, "lastmod": "2024-08-28", "lastmod_ts": 1724803200, "section": "Webinars and events", "tags": [], "description": "How good is your metadata?
Are you curious about the quality of the metadata you\u0026rsquo;re depositing with Crossref? Join our hands-on session to learn how to use the Participation Reports tool to assess the quality of your metadata with Crossref. We\u0026rsquo;ll show you why good metadata matters and provide help on improving your metadata completeness and quality. Good metadata makes your content easier to find and use.\nWhy attend? Get practical help and advice to improve your metadata on-the-spot, adding useful information for completeness and quality.", "content": "How good is your metadata? Are you curious about the quality of the metadata you\u0026rsquo;re depositing with Crossref? Join our hands-on session to learn how to use the Participation Reports tool to assess the quality of your metadata with Crossref. We\u0026rsquo;ll show you why good metadata matters and provide help on improving your metadata completeness and quality. Good metadata makes your content easier to find and use.\nWhy attend? Get practical help and advice to improve your metadata on-the-spot, adding useful information for completeness and quality.\nWhat to expect? We’ll guide you through the Participation Reports tool, showing how it helps you to understand the quality of your metadata. Many people who have attended these sessions have found them helpful for improving their practices.\nChoose a session that suits your schedule We will have several one-hour sessions, so watch this space and pick a session that works best for you:\nOJS focused - Thursday, Mar 13, 2025 - 08:30 UTC OJS focused - Thursday, Mar 27, 2025 - 16:00 UTC Here is a helpful time converter.\nFor more details on what we\u0026rsquo;ll cover, check out our Participation Reports page. If you\u0026rsquo;d like to dive deeper into the tool and its benefits, visit our detailed documentation.\nSee you on Zoom!\n", "headings": ["How good is your metadata?","Why attend?","What to expect?","Choose a session that suits your schedule"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-myth-of-perfect-metadata-matching/", "title": "The myth of perfect metadata matching", "subtitle":"", "rank": 1, "lastmod": "2024-08-28", "lastmod_ts": 1724803200, "section": "Blog", "tags": [], "description": "https://0-doi-org.libus.csd.mu.edu/10.13003/pied3tho\nIn our previous instalments of the blog series about matching (see part 1 and part 2), we explained what metadata matching is, why it is important and described its basic terminology. In this entry, we will discuss a few common beliefs about metadata matching that are often encountered when interacting with users, developers, integrators, and other stakeholders. Spoiler alert: we are calling them myths because these beliefs are not true!", "content": " https://0-doi-org.libus.csd.mu.edu/10.13003/pied3tho\nIn our previous instalments of the blog series about matching (see part 1 and part 2), we explained what metadata matching is, why it is important and described its basic terminology. In this entry, we will discuss a few common beliefs about metadata matching that are often encountered when interacting with users, developers, integrators, and other stakeholders. Spoiler alert: we are calling them myths because these beliefs are not true! 
Read on to learn why.\nIf you have stuck with us this far in our series, hopefully, you are at least a bit excited about the possibility of creating new relationships between the works, authors, institutions, preprints, datasets, and myriad other objects in our existing scholarly metadata. Who would not want all of these to be better connected?\nWe have to pause for a moment and be honest with you: metadata matching is a complex problem, and doing it correctly requires significant effort. What is worse, even if we do everything right, our matching won\u0026rsquo;t be perfect. This may be counterintuitive. Perhaps you\u0026rsquo;ve heard that matching is not a hard problem, or have encountered people surprised that a matching strategy returned a wrong or incomplete answer. Sometimes, it is obvious to a person from looking at some specific example that a match should (or should not) have been made, so they naturally assume that a change to account for this has to be simple.\nMisconceptions like these can be problematic. They create confusion around matching, drive users\u0026rsquo; expectations to unreasonable levels, and make people drastically underestimate the effort needed to build and integrate matching strategies. So let\u0026rsquo;s dive right in and debunk a few common myths about metadata matching.\nMyth #1: A metadata matching strategy should be 100% correct Anyone who has built or supported a matching strategy has likely encountered the following belief: it is possible to develop a perfect strategy, meaning one that always returns the correct results, no matter the inputs. The unfortunate truth is that while one\u0026rsquo;s aim should always be to design matching strategies that return correct results, once we move beyond the simplest class of problems or artificially clean data, no strategy can achieve this outcome. In thinking through why this is the case, some inherent constraints become obvious:\nThe inputs to matching are often strings in human-readable formats, which can vary wildly in their structure, order and completeness. Since they\u0026rsquo;re intended to be parsed by people, instead of machines, they\u0026rsquo;re inherently lossy and frequently unstructured, anticipating that a person can infer from the source context what is being referenced. Matching strategies, although built to make sense of unstructured data, unfortunately, don\u0026rsquo;t have the luxury of this flexibility. A strategy has to account for translating a messy, partial, or inconsistent input into a correct and structured match.\nConsider, for example, the following inputs to an affiliation matching strategy:\n\u0026ldquo;Department of Radiology, St. Mary\u0026rsquo;s Hospital, London W2 1NY, UK\u0026rdquo; \u0026ldquo;Saint Mary\u0026rsquo;s Hospital, Manchester University NHS Foundation Trust\u0026rdquo; \u0026ldquo;St. Mary\u0026rsquo;s Medical Center, San Francisco, CA\u0026rdquo; \u0026ldquo;St Mary\u0026rsquo;s Hosp., Dublin\u0026rdquo; \u0026ldquo;St Mary\u0026rsquo;s Hospital Imperial College Healthcare NHS Trust\u0026rdquo; \u0026ldquo;聖マリア病院\u0026rdquo; In order to correctly identify the organisations mentioned here, the matching strategy must be able to distinguish between different ways of representing the same institution, disambiguate multiple institutions that have similar names, and handle variant forms for the parts of each name (Saint/St./St), identify the same name in different languages (\u0026ldquo;聖マリア病院\u0026rdquo; is Japanese for \u0026ldquo;St. 
Mary\u0026rsquo;s Hospital\u0026rdquo;), and make assumptions about partial or ambiguous locations translating to more precise references. While a person reviewing each of these strings might be able to accomplish these tasks, even here there are some challenges. Does \u0026ldquo;St Mary\u0026rsquo;s Hosp., Dublin\u0026rdquo; refer to the hospital in Ireland or a separate hospital in one of the many cities that share this name? Should we presume that because \u0026ldquo;聖マリア病院\u0026rdquo; is in Japanese, this refers to a hospital in Japan? Would someone, by default, be aware that St. Mary\u0026rsquo;s Hospital in London is part of the Imperial College Healthcare NHS Trust, such that inputs one and five refer to the same organisation?\nAn additional challenge lies in the quality of the data, which in the context of matching, encompasses both the input and the dataset being matched against. In real world circumstances, no dataset is fully accurate, complete, or current and certainly not all three. As a result, there will always be functionally random differences between inputs to the strategy and the entities to be matched. A theoretically perfect matching strategy would thus need to distinguish between inconsequential discrepancies resulting from gaps, errors, and variable forms of reference and actual, meaningful differences indicating an incorrect match. As one might imagine, this would require near total knowledge of the meaning and context for all inputs and outputs, a nigh-on impossible task for any person or system!\nAs a consequence, no metadata matching strategy will ever be perfect. It is unreasonable for us to expect them to be. This does not mean, of course, that all strategies are equally flawed or destined to forever return middling results. Some are better than others and we can improve them over time. Which brings us to the next myth:\nMyth #2: It is always a good idea to adapt the matching strategy to a specific input Matching strategies are not static. They can - and should - be improved. There is, however, a deceptive trap that one can fall into when attempting to improve a matching strategy. Whenever we encounter an incorrect or missing result for a specific input, we treat this problem like a software bug and try to adapt the strategy to work better for it, without considering all other cases.\nThe more complicated reality is that the quality of matching results is controlled through a complex set of trade-offs between precision and recall that determine the kind and number of relationships created between items:\nPrecision is calculated as the number of correctly matched relationships resulting from a strategy, divided by the total number of matched relationships. It can also be interpreted as the probability that a match is correct. Low precision indicates a high rate of false positives, which are incorrect relationships created by the strategy. Recall is calculated as the number of correctly matched relationships resulting from a strategy, divided by the number of true (expected) relationships. It can also be interpreted as the probability that a true (correct) relationship will be created by the strategy. Low recall means a high rate of false negatives, which are relationships that should have been created by the strategy but were not made. The diagram depicts false negatives and false positives. 
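To make the two measures just defined concrete, here is a small, purely hypothetical illustration in code (the relationship pairs are invented for the example) before we look at the diagram in more detail.

```python
# Purely hypothetical illustration of precision and recall for a matching strategy.
# "true" relationships are the ones that really exist; "matched" are the ones the
# strategy created. The pairs below are invented for the example.
true_relationships = {("work-1", "org-A"), ("work-2", "org-B"), ("work-3", "org-C")}
matched_relationships = {("work-1", "org-A"), ("work-2", "org-B"), ("work-4", "org-D")}

correct = true_relationships & matched_relationships  # correctly matched relationships

precision = len(correct) / len(matched_relationships)  # 2/3: how many matches are correct
recall = len(correct) / len(true_relationships)        # 2/3: how many true relationships were found

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# ("work-4", "org-D") is a false positive; ("work-3", "org-C") is a false negative.
```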
The ideal outcome would be that the ellipses are identical, matched relationships are exactly the same as true relationships, and there are no false negatives or false positives. In practice, we try to make the intersection as big as possible.\nThe tradeoff between precision and recall roughly means that modifying the strategy to improve recall will decrease precision, and vice versa.\nImagine, for example, we received a report about a relationship that was missed by matching because of a partial, noisy, or ambiguous input. We might be tempted to resolve this issue by relaxing our matching criteria. Unfortunately, this will have a cost of a higher overall rate of false positive matches.\nConversely, if we encounter a case where the matching has returned an incorrect match, we might attempt to make the matching strategy stricter to avoid this result. We should remember, however, that this may have the consequence of causing the strategy to skip many perfectly valid matches.\nThe tradeoff between precision and recall. (a) A strict strategy prioritises precision over recall resulting in more false negatives. (b) A relaxed strategy prioritises recall over precision resulting in more false positives.\nStriking this balance becomes even more difficult when attempting to address multiple issues at once, or considering constraints like the time and resources consumed by each aspect of the strategy. Each choice can compound the individual effects in unanticipated and expensive ways. The aim of matching ultimately then can\u0026rsquo;t be to achieve perfect results for every single case. Fixing one particular situation might not be desirable, as it can result in breaking multiple other cases. Instead, we have to find a locally optimal balance that optimises the strategy\u0026rsquo;s utility, relative to these inherent limitations. This means accepting some level of imperfection as not just inevitable, but necessary for implementing a workable strategy. When you consider all this, you might conclude that…\nMyth #3: We shouldn\u0026rsquo;t do large-scale, unsupervised matching Imperfect matching strategies, when applied automatically to real-world large datasets, might:\nFail to discover some relationships (false negatives), an outcome that may not be terribly problematic. In the worst case scenario, we have wasted a great deal of effort developing matching strategies that do not improve our metadata. Create incorrect relationships between items (false positives), what seems like a potentially larger problem, where we have added incorrect relationships to the metadata. Many have the instinct to avoid false positives at any cost, even if this means missing many additional correct relationships at the same time. They might come to the conclusion that if we cannot have 100% precision (see our previous myth), we simply should not allow matching strategies to act in an automated, unsupervised way on large datasets. While there might be circumstances where this belief is rational, in the context of the scholarly record, this notion is seriously flawed.\nFirst, if you are dealing with any medium to large-sized dataset, it almost certainly contains errors, even before you apply any automated processing to it. Even if data is submitted and curated by users, they can still make mistakes, and might themselves be using automated tools for extracting the data from other sources, without your knowledge. 
It is thus not entirely obvious that applying an (imperfect) matching strategy to create more relationships would actually make the data quality worse.\nSecond, while we cannot eliminate all matching errors, we can place a high priority on precision when developing strategies, with the aim of keeping the number of incorrectly matched results as low as possible. We can also make use of additional mechanisms to correct incorrectly matched results, for example doing so manually in response to error reports.\nFinally, the results of matching should always contain provenance information to distinguish them from those that have been manually curated. This way, the users can make their own decisions about whether to use and trust the matching results, relative to their use case.\nBy applying those additional checks, we can minimise the negative effects of incorrect matching, while at the same time reaping the benefits of filling gaps in the scholarly record.\nMyth #4: We can only ever guess at the accuracy of our matching results In attempting to determine the correctness of our matching, we immediately encounter a number of inherent limitations. The sheer number of entries in many datasets prevents a thorough, manual validation of the results, but if, instead, we use too few or overly specific items as our benchmarks, these are unlikely to be representative of overall performance. The unpredictable nature of future data adds another wrinkle: will our matching always be as successful as when we first benchmarked it, or will its performance degrade relative to some change in the data?\nWith so many unknowns, are we then doomed? No! We have rigorous and scientific tools at our disposal that can help us estimate how accurate our matching will be. How do we use them? Well, that is a big and fairly technical topic, so we will leave you with this little cliffhanger.
See you in the next post!\n", "headings": ["Myth #1: A metadata matching strategy should be 100% correct","Myth #2: It is always a good idea to adapt the matching strategy to a specific input","Myth #3: We shouldn\u0026rsquo;t do large-scale, unsupervised matching","Myth #4: We can only ever guess at the accuracy of our matching results"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/terms/", "title": "Metadata Plus service terms", "subtitle":"", "rank": 1, "lastmod": "2024-08-09", "lastmod_ts": 1723161600, "section": "Find a service", "tags": ["Terms"], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": ["Background"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/best-practices/", "title": "Best Practices", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/lena-stoll/", "title": "Lena Stoll", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/participation-reports/", "title": "Participation Reports", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/re-introducing-participation-reports-to-encourage-best-practices-in-open-metadata/", "title": "Re-introducing Participation Reports to encourage best practices in open metadata", "subtitle":"", "rank": 1, "lastmod": "2024-07-25", "lastmod_ts": 1721865600, "section": "Blog", "tags": [], "description": "We’ve just released an update to our participation report, which provides a view for our members into how they are each working towards best practices in open metadata. Prompted by some of the signatories and organizers of the Barcelona Declaration, which Crossref supports, and with the help of our friends at CWTS Leiden, we have fast-tracked the work to include an updated set of metadata best practices in participation reports for our members.", "content": "We’ve just released an update to our participation report, which provides a view for our members into how they are each working towards best practices in open metadata. Prompted by some of the signatories and organizers of the Barcelona Declaration, which Crossref supports, and with the help of our friends at CWTS Leiden, we have fast-tracked the work to include an updated set of metadata best practices in participation reports for our members. The reports now give a more complete picture of each member’s activity.\nWhat do we mean by ‘participation’? Crossref runs open infrastructure to link research objects, entities, and actions, creating a lasting and reusable scholarly record. 
As a not-for-profit with over 20,000 members in 160 countries, we drive metadata exchange and support nearly 2 billion monthly API queries, facilitating global research communication.\nTo make this system work, members strive to provide as much metadata as possible through Crossref to ensure it is openly distributed throughout the scholarly ecosystem at scale rather than bilaterally, thereby realizing the collective benefit of membership. Together, our membership provides and uses a rich nexus of information— known as the research nexus—on which the community can build tools to help progress knowledge.\nEach member commits to certain terms, such as keeping metadata current, updating links for their DOIs to redirect to, linking references and other objects, and preserving their content in perpetuity. Beyond this, we also encourage members to register as much rich metadata as is relevant and possible.\nCreating and providing richer metadata is a key part of participation in Crossref; we’ve long encouraged a more complete scholarly record, such as through Metadata 20/20, and through supporting or leading initiatives for specific metadata, like open citations (I4OC), open abstracts (I4OA), open contributors (ORCID), and open affiliations (ROR).\nWhich metadata elements are considered best practices? Alongside basic bibliographic metadata such as title, authors, and publication date(s), we encourage members to register metadata in the following fields:\nExample participation report for Crossref member University of Szeged\nReferences A list of all the references used by a work. This is particularly relevant for journal articles but the references can include any type of object, including datasets, versions, preprints, and more. Additionally, we encourage these to be added into relationships, where relevant.\nAbstracts A description of the work. These are particularly useful for discovery systems that will promote the work, and are often used in downstream analyses such as for detecting integrity issues.\nContributor IDs (ORCID) All authors should be included in a work’s metadata, ideally alongside their verified ORCID identifier.\nAffiliations / Affiliation IDs (ROR) Members are able to register contributor affiliations as free text, but we are encouraging everyone to add ROR IDs for affiliations as the recommended best practice, as this differentiates and avoids mistyping. These two fields have newly been added to the participation reports interface in the most recent update.\nFunder IDs (OFR) Acknowledging the organization(s) that funded the work. We encourage the inclusion of Open Funder Registry identifiers to make the funding metadata more usable. This will evolve into an additional use case for ROR over time.\nFunding award numbers / Grant IDs (Crossref) A number or identifier assigned by the funding organization to identify the specific award of funding or other support such as use of equipment or facilities, prizes, tuition, etc. The Crossref Grant Linking System includes a unique persistent link that can be connected with outputs, activities, people, and organizations.\nCrossmark The Crossmark service gives readers quick and easy access to the current status of a record, including any corrections, retractions, or updates, via a button embedded on PDFs or a web article. 
Openly adding corrections, retractions, and errata is a critical part of publishing, and the button provides readers with an easy in-context alert.\nSimilarity Check URLs The Similarity Check service helps editors to identify text-based plagiarism through our collective agreement for the membership to access Turnitin’s powerful text comparison tool, iThenticate. Specific full-text links are required to participate in this service.\nLicense URLs URLs pointing to a license that explains the terms and conditions under which readers can access content. These links are crucial to denote intended downstream use.\nText mining URLs Full-text URLs that help researchers in meta-science easily locate your content for text and data mining.\nWhat is a participation report? Participation reports are a visualization of the data representing members’ participation in the scholarly record, which is available via our open REST API. There’s a separate participation report for each member, and each report shows what percentage of that member’s metadata records include 11 key metadata elements. These key elements add context and richness, and help to open up members’ work to easier discovery and wider and more varied use. As a member, you can use participation reports to see for yourself where the gaps in your organization’s metadata are, and perhaps compare your performance to others. Participation reports are free and open to everyone - so you can also check the report for any other members you are interested in.\nWe first introduced participation reports in 2018. At the time, Anna Tolwinska and Kirsty Meddings wrote:\nMetadata is at the heart of all our services. With a growing range of members participating in our community—often compiling or depositing metadata on behalf of each other—the need to educate and express obligations and best practice has increased. In addition, we’ve seen more and more researchers and tools making use of our APIs to harvest, analyze and re-purpose the metadata our members register, so we’ve been very aware of the need to be more explicit about what this metadata enables, why, how, and for whom.\nAll of that still rings true today. But as the research nexus continues to evolve, so should the tools that intend to reflect it. For example, in 2022, we removed the Open references field from participation reports after a board vote to change our policy and update the membership terms, which meant that all references deposited with Crossref would be open by default. And now we’ve expanded the list of fields again, adding coverage data for contributor affiliation text and ROR identifiers.\nPutting it in practice To find out how you measure up when it comes to participation, type the name of your member organization into the search box.
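If you prefer the raw numbers behind the reports, the same data is exposed through our open REST API. The snippet below is a minimal, unofficial sketch in Python; the route and the "coverage" field layout reflect our understanding of the public members route, so inspect a live response to confirm the exact keys (the member name is simply the example shown above).

```python
# Minimal sketch: look up a Crossref member by name via the public REST API and
# print the coverage figures that participation reports visualise. The "coverage"
# field layout is an assumption about the members route; check a live response.
import json
import urllib.request
from urllib.parse import quote

member_name = "University of Szeged"  # the example member shown above

url = f"https://api.crossref.org/members?query={quote(member_name)}&rows=1"
with urllib.request.urlopen(url) as response:
    items = json.load(response)["message"]["items"]

if items:
    member = items[0]
    print(member.get("primary-name"), "- member ID", member.get("id"))
    for field, value in sorted(member.get("coverage", {}).items()):
        print(f"  {field}: {value}")
else:
    print("No member found for that query.")
```

The participation report interface is simply a friendlier view of this same information.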
You may be surprised by what you find—we often speak to members who thought they were registering a certain type of metadata for all their records, only to learn from their participation report that something is getting lost along the way.\nYou can only address gaps in your metadata if you know that they exist.\nMore information, as well as a breakdown of the now 11 key metadata elements listed in every participation report and tips on improving your scores, is available in our documentation.\nAnd if you have any questions or feedback, come talk to us on the community forum or request a metadata Health Check by emailing the community team.\n", "headings": ["What do we mean by ‘participation’?","Which metadata elements are considered best practices?","References","Abstracts","Contributor IDs (ORCID)","Affiliations / Affiliation IDs (ROR)","Funder IDs (OFR)","Funding award numbers / Grant IDs (Crossref)","Crossmark","Similarity Check URLs","License URLs","Text mining URLs","What is a participation report?","Putting it in practice"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-schema-development-plans/", "title": "Metadata schema development plans", "subtitle":"", "rank": 1, "lastmod": "2024-07-22", "lastmod_ts": 1721606400, "section": "Blog", "tags": [], "description": "It’s been a while, here’s a metadata update and request for feedback In Spring 2023 we sent out a survey to our community with a goal of assessing what our priorities for metadata development should be - what projects are our community ready to support? Where is the greatest need? What are the roadblocks?\nThe intention was to help prioritize our metadata development work. There’s a lot we want to do, a lot our community needs from us, but we really want to make sure we’re focusing on the projects that will have the most immediate impact for now.", "content": "It’s been a while, here’s a metadata update and request for feedback In Spring 2023 we sent out a survey to our community with a goal of assessing what our priorities for metadata development should be - what projects are our community ready to support? Where is the greatest need? What are the roadblocks?\nThe intention was to help prioritize our metadata development work. There’s a lot we want to do, a lot our community needs from us, but we really want to make sure we’re focusing on the projects that will have the most immediate impact for now.\nSeveral projects were proposed, based on community demand over time. All are projects we intend to support long-term.\nProjects The projects included in the survey were:\nAlternate names - We proposed adding a repeatable ‘name’ element to allow for names that aren’t separated by given/family/surname. Updates to funding data -this update will be released in the near future and includes: Expand ROR support - Allow members to supply ROR ID instead of funder ID in funding data and grant records. Include Grant DOIs in funding metadata. Publication typing in citations - Support citation type in citation metadata (for example article, preprint, data, software, etc.). Expand contributor role support - Allow multiple contributor roles to be provided per contributor and add support for external vocabularies (like CRediT) Expand abstract support - We currently require all abstracts to be formatted using JATS. We will be adding new abstract formats, including BITS and ONIX (which have been requested), as well as a generic abstract format (non-JATS). 
Statements - Add support for free text statements such as data availability, acknowledgments, funding, and conflict of interest. Contributor identifiers - Accept contributor identifiers such as ISNI (in addition to ORCID, which is already supported). Conference event IDs - Identifiers for conference events. What’s next? There is a clear preference for publication types in citations and abstract markup, expanded support for multilingual metadata, followed by expanding contributor roles to support multiple roles and the CRediT taxonomy. The results have helped us prioritize our work, and we’re advancing several projects soon based on our readiness to move forward.\nFirst up is publication typing in citations and statements - we hope to be able to make this ready for registration in the coming months, but want to confirm a few things first, primarily the list of ‘types’ to apply to citations, so please review and comment: Metadata updates in need of feedback July 2024\nWe have also been discussing expansions to our support for preprints metadata with our Preprints Advisory Group and have a number of preprint-specific updates that will be rolled out in the coming months as well, including support for versions and status. These proposed changes are also available for comment.\nAnd finally, we will be expanding support for contributor roles to include multiple roles per contributor, as well as adding support for the CRediT taxonomy. This update is yet to be scheduled, but we do have the input and output planning done and welcome any comments on this as well.\nWe will also be continuing work on other projects highlighted in the survey that aren’t quite ready to go:\nMultilingual metadata: Support for multilingual metadata in particular is very important and will require a fairly significant technical effort, so we want to be sure we get this right - at minimum we need to include repeatable fields flagged with language metadata for most items; there may be other considerations as well, such as the scope of languages supported.\nAs we develop new metadata segments we’re keeping language metadata in mind, but I’d like to form a short-term working group to help shape this update - this group will be focused on the details of supporting multilingual metadata in our inputs and outputs, so conversations will be very XML and JSON heavy. If you are interested and available, please contact pfeeney@crossref.org. Abstract markup: we are currently in the research phase of this project but will be proposing updates and asking for input this fall. At the moment, support for BITS and ONIX abstracts has been requested, as well as for an agnostic format.\nExpansion of name and contributor ID support: work is under way for this as well, and I should have inputs and outputs for feedback in the coming months.\nWe anticipate more developments and requests for feedback in the future as we still have other projects from the list above to get to.
I’ve opened up a ‘Metadata Development’ section in our Community Forum to invite discussion and will be kicking off a renewed Metadata Interest Group in the fall.\n", "headings": ["It’s been a while, here’s a metadata update and request for feedback","Projects","What’s next?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/patricia-feeney/", "title": "Patricia Feeney", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/special-programs/research-integrity/", "title": "Integrity of the scholarly record (ISR)", "subtitle":"", "rank": 1, "lastmod": "2024-07-16", "lastmod_ts": 1721088000, "section": "Get involved", "tags": [], "description": "The integrity of the scholarly record is an essential aspect of research integrity. Every initiative and service that we’ve launched since our founding has been focused on documenting and clarifying the scholarly record in an open, machine-actionable and scalable form, and all of this has been done to make it easier for the research community to assess the trustworthiness of scholarly outputs.\nWe want to ensure that we are definitely collecting and distributing the right metadata “trust signals” that the community needs to preserve the integrity of the scholarly record, while ensuring that we are really clear on the role that Crossref can and cannot play in this.", "content": "The integrity of the scholarly record is an essential aspect of research integrity. Every initiative and service that we’ve launched since our founding has been focused on documenting and clarifying the scholarly record in an open, machine-actionable and scalable form, and all of this has been done to make it easier for the research community to assess the trustworthiness of scholarly outputs.\nWe want to ensure that we are definitely collecting and distributing the right metadata “trust signals” that the community needs to preserve the integrity of the scholarly record, while ensuring that we are really clear on the role that Crossref can and cannot play in this. With this in mind, we’ve created a focused program around Integrity of the Scholarly Record - the “ISR Program”.\nThis page lists the rationale behind this program, and our ongoing and planned initiatives under this program.\nBackground The outputs of the research and publishing process create a “scholarly record”. This scholarly record is more than just the published outputs - it’s also a network of inputs, relationships, and contexts. At Crossref, we talk about the “Research Nexus”,\na rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nWhen published outputs are tied to a persistent and unique identifier, they get a persistent record. Maintaining this record for the long term and having a layer of context associated with the record, establishes the integrity of the scholarly record. The context comes from the metadata associated with the work. In simpler words, scholarly metadata tells us:\nWho are the authors of a work? What are the affiliations of the authors? Which funding programs supported the work? 
Which datasets arose out of the work, OR how is dataset A connected to paper A? Was the work updated after its publication, i.e. was it retracted or corrected? And more. By providing this important context, scholarly metadata can act as a signal of trustworthiness.\nCrossref is focused on providing infrastructure which enables those who produce scholarly outputs to provide metadata and relationships (evidence and context) about how they ensure the quality of their content, and how their outputs fit into the wider scholarly record. We do not assess the quality of our members’ content - the presence of a DOI record is not a signal of quality, but the community can use the presence (or absence) of rich metadata associated with a DOI record as evidence of integrity.\nRead more: ISR part one: What is our role in preserving the integrity of the scholarly record\n", "headings": ["Background"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/", "title": "You are Crossref", "subtitle":"", "rank": 1, "lastmod": "2024-07-07", "lastmod_ts": 1720310400, "section": "", "tags": [], "description": "Crossref runs open infrastructure to link research objects, entities, and actions, creating a lasting and reusable scholarly record. As a not-for-profit with over 21,000 members in 160 countries, we drive metadata exchange and support nearly 2 billion monthly API queries, facilitating global research communication, for the benefit of society.", "content": "Crossref runs open infrastructure to link research objects, entities, and actions, creating a lasting and reusable scholarly record. As a not-for-profit with over 21,000 members in 160 countries, we drive metadata exchange and support nearly 2 billion monthly API queries, facilitating global research communication, for the benefit of society.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossmark-community-consultation-what-did-we-learn/", "title": "Crossmark community consultation: What did we learn?", "subtitle":"", "rank": 1, "lastmod": "2024-07-02", "lastmod_ts": 1719878400, "section": "Blog", "tags": [], "description": "In the first half of this year we’ve been talking to our community about post-publication changes and Crossmark. When a piece of research is published it isn’t the end of the journey—it is read, reused, and sometimes modified. That\u0026rsquo;s why we run Crossmark, as a way to provide notifications of important changes to research made after publication. Readers can see if the research they are looking at has updates by clicking the Crossmark logo.", "content": "In the first half of this year we’ve been talking to our community about post-publication changes and Crossmark. When a piece of research is published it isn’t the end of the journey—it is read, reused, and sometimes modified. That\u0026rsquo;s why we run Crossmark, as a way to provide notifications of important changes to research made after publication. Readers can see if the research they are looking at has updates by clicking the Crossmark logo. They also see useful information about the editorial process, and links to things like funding and registered clinical trials. All of this contributes to what we call the integrity of the scholarly record.\nCrossmark has been around a long time and the context around it is constantly changing.
It last had a major update in 2016 and in 2020 we removed fees for its use.\nThe past few years have seen a more intense focus on research integrity among the scholarly communications community, leading to more retractions and calling out large-scale manipulation of editorial processes. At the same time, we haven’t seen an increase in the uptake of Crossmark, which is still used by only a minority of our members. We would like to know why the uptake is low and whether there is more we can do in this area. To dig into this, in the first part of 2024 we reached out to members of our community.\nWhat did we do? We wanted to learn about attitudes towards Crossmark and related aspects of research integrity. This was done in several ways:\nStructured interviews with eight of our members. Round tables at Crossref LIVE events in Bogota and Nairobi Surveying a selection of our members, which led to 94 responses. The topics we asked about were related to how post-publication updates are made and communicated, and which metadata demonstrates good practice.\nWe are extremely grateful to the members who contributed. They provided valuable feedback and have helped to shape the future of Crossmark and our approach to the integrity of the scholarly record.\nWhat did we find? Across the various groups there were a few common themes, which fell into several areas.\nCommunication of updates is highly valued, and seen as the most important role that Crossmark can play. Some of those we spoke to would like readers to see if there is an update as soon as a page opens, without having to open a popup. This could be done by having a logo that changes colour, shape, or size.\nConversely, not as much enthusiasm was shown for the metadata assertions. These are additional fields that can be displayed to readers in the Crossmark popup. There wasn’t a strong consensus on which commonly-made assertions are the most important for research integrity.\nThere is diversity in attitudes towards making updates to published works, what research integrity means, and approaches to workflows for updates. Even within a single organisation, a number of different workflows and multiple staff members might be called on to update published research. This makes things complex and means that it can be difficult to fit Crossmark in.\nThere are technical challenges to getting started with Crossmark. Those responsible for implementing Crossmark are often technical staff who struggle with the documentation we provide in English. There is also no plugin for OJS, a widely-used open source editorial software. It is more difficult to deposit Crossmark metadata for books than journal articles, and many article types don’t permit Crossmark metadata at all. On the other hand, those who successfully installed Crossmark found it easy to use and low-maintenance.\nOverall, it seems that Crossmark still has an important role to play but there are changes and improvements we can make.\nWhat’s next? Here are the main areas we intend to follow up on in the coming months.\nImplementation We need to look at how to make implementation more straight-forward. Can we provide multilingual documentation, plugins, run workshops or webinars, or make changes to Crossmark to lower the barrier to entry?\nUnderstanding workflows Can we collaborate with our members and other organisations to reach a better understanding of how to update published works? Are there alternative workflows we need to support? 
Have we made it too difficult to understand and implement the options we currently have?\nWhile updates are always likely to be rare, we want to help members understand the benefits of making them. We talked to some members who were proud of never having published a retraction or correction, which left us wondering whether they are missing legitimate opportunities to correct the scholarly record. We also know that for some members and many work types (preprints, for example), updates are made without a separate published notification. Can we better understand the role that the published updates play and communicate updates even if there isn’t a published notice?\nOngoing feedback Clearly one size doesn’t fit all when it comes to implementing and communicating updates. We need to find ways of keeping in touch with the community to test new solutions with as broad a range of members as possible. We want to avoid catering to a minority and leaving others struggling to find ways to implement a solution.\nCustom metadata? Is there an ongoing need for metadata assertions? Many of the assertions currently made are possible as standard metadata and others could be included in our deposit schema. We want to consider removing the option to add assertions. This needs more feedback from the community, especially those who currently make use of assertions.\nRedesign the UI Crossmark doesn’t have the recognition with readers we would like. Is there a way we can redesign it to make it more associated with Crossref and accurate metadata? We intend to explore different designs, and test them with members and readers.\n", "headings": ["What did we do?","What did we find?","What’s next?","Implementation","Understanding workflows","Ongoing feedback","Custom metadata?","Redesign the UI"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/celebrating-five-years-of-grant-ids-where-are-we-with-the-crossref-grant-linking-system/", "title": "Celebrating five years of Grant IDs: where are we with the Crossref Grant Linking System?", "subtitle":"", "rank": 1, "lastmod": "2024-07-01", "lastmod_ts": 1719792000, "section": "Blog", "tags": [], "description": "We’re happy to note that this month, we are marking five years since Crossref launched its Grant Linking System. The Grant Linking System (GLS) started life as a joint community effort to create ‘grant identifiers’ and support the needs of funders in the scholarly communications infrastructure.\nThe system includes a funder-designed metadata schema and a unique link for each award which enables connections with millions of research outputs, better reporting on the research and outcomes of funding, and a contribution to open science infrastructure.", "content": "We’re happy to note that this month, we are marking five years since Crossref launched its Grant Linking System. The Grant Linking System (GLS) started life as a joint community effort to create ‘grant identifiers’ and support the needs of funders in the scholarly communications infrastructure.\nThe system includes a funder-designed metadata schema and a unique link for each award which enables connections with millions of research outputs, better reporting on the research and outcomes of funding, and a contribution to open science infrastructure. Our first activity to highlight the moment was to host a community call last week where around 30 existing and potential funder members joined to discuss the benefits and the steps to take to participate in the Grant Linking System (GLS). 
Some organisations at the forefront of adopting Crossref’s Grant Linking System presented their challenges and how they overcame them, shared the benefits they are reaping from participating, and provided some tips about their processes and workflows.\nThe funding organisations whose experiences were shared included Wellcome, FCT (Foundation for Science and Technology, Portugal), and NWO (Dutch Research Council). They were joined by a new group of foundations, research councils, and private research funders from around the world\u0026mdash;from Kenya to Singapore to Estonia\u0026mdash;to have a first introduction to the GLS and connect with colleagues who are further along on their journey.\nWe also heard about tools such as a new open-source Crossref plugin for the Fluxx platform, grant management systems with in-built Crossref integrations such as ProposalCentral, Europe PMC GrantFinder which was first to implement the GLS on Wellcome’s behalf and hosts their grants, and one of the first publishers, eLife, to start referencing Crossref grant links in their publications both online and in the open metadata for others to retrieve.\nRead on for further information or watch the recording of the event.\nWhat is the Crossref Grant Linking System? The Crossref Grant Linking System, conceptualised in 2017 and launched in 2019, captures and helps clarify funding relationships for scholarly outputs. Thanks to interconnectedness with the 160 million metadata records collected and curated by Crossref members, it enables funders as well as scholars to track and analyse funding patterns and evaluate programmes, and it supports assertions about the integrity of scholarly records.\nFeatures of the GLS Globally unique persistent link and identifier for each grant Connected with 160 million published outputs Funder-designed metadata schema, including project, investigator, value, and award-type information Programmatic or no-code methods to send metadata Thanks to the Gordon and Betty Moore Foundation who funded development of the online grant registration form Open search and API for all to discover funding outcomes; all metadata is distributed openly to thousands of tools and services Crossref-hosted landing pages A global community of ~50 funder advisors and 35+ funders already in the Grant Linking System Membership of Crossref; influence the foundational infrastructure powering open research The last five years have seen the GLS grow through membership, metadata, and community contributions.\nThe momentum for this programme is building – as illustrated by increasing numbers of metadata records (and related relationships we’re seeing). The 35 funder members represent over 100 funding programmes and have created 125,000 grant records already.\nDuring last week\u0026rsquo;s call, it was helpful to hear from the community what they see as key benefits of the Crossref Grant Linking System:\nMeaningfully delivering on and supporting Open Science policies and mandates, and contributing ‘their bit’ to the transparency of the evidence trail in the scholarly ecosystem. Reporting and evaluating the funding programmes, essential for the public funders who need to demonstrate the value for money in allocating their funds and other support. Supporting a more holistic assessment of scholarship and scholars, especially as and when metadata becomes included with a full array of outputs, not limited to books and articles.
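The "Open search and API" feature in the list above can be exercised directly. The sketch below assumes grant records are exposed through the standard works route with a type:grant filter and uses a placeholder contact address; it is an illustration, not an official client.

```python
# Minimal sketch: list a few registered grant records via the Crossref REST API.
# Assumes grants appear on the /works route with a type:grant filter; the
# mailto address is a placeholder.
import requests

def sample_grants(rows: int = 5, mailto: str = "you@example.org") -> list[dict]:
    """Return a few grant records registered through the GLS."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": "type:grant", "rows": rows, "mailto": mailto},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

for grant in sample_grants():
    # Each record follows the funder-designed grants schema; the DOI is the
    # globally unique, persistent grant link mentioned above.
    print(grant.get("DOI"))
```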
How the Crossref Grant Linking System supports Open Science policy Since 2020, all the grant records are openly available through our REST API, which is queried more than 1.8 billion times every month, so these metadata records are distributed to thousands of systems across the research enterprise. In a 2022 blog, Ed Pentz and Ginny Hendricks laid out guidelines for research funders to meet open science guidelines using existing open infrastructure such as Crossref, ORCID, and ROR. Syman Stevens, a grantmaking and private philanthropy consultant, highlighted on the call that the funders he works with are increasingly interested in ways to deliver on their open science policy and that participation in the GLS is a tangible thing they can do to meet this goal.\nAs part of its open science policy, NWO will start participating in the Crossref Grant Linking System from July 2025. Research funders are a part of the scholarly communications system; we not only provide the funding to do the actual research but can also be the authoritative source of data about the projects we have funded and the outputs arising from that funding. Increasingly, all these elements – grants, researchers, outputs – are linked with metadata and unique identifiers to ensure that research is findable and accessible.\n\u0026ndash; Hans de Jonge, Director of Open Science NL, part of the Dutch Research Council (NWO)\nHow funders leverage the Grant Linking System in their reporting and assessment Looking back to the origins of the system, it’s important to recognise the work of the initial working groups. Through their contribution, funders helped design the initial metadata schema for grants as well as establish the governance and fees for this service, and our Advisory Group continues to inform further developments. In this way, the Grant Linking System enables the needs and wishes of funders to contribute and see their data as part of the wider ecosystem.\nAn excellent example of that synergy in action is the use case presented by Cátia Laranjeira, manager of the PTCRIS programme at the Foundation for Science and Technology, Portugal (FCT). PTCRIS is the Foundation’s integrated national information ecosystem that supports scientific activity management. Cátia reflected on the relative fragmentation of spaces where the scientific outputs are found, and PTCRIS’s ambition for aggregating metadata in one place to be able to trace and evaluate programmes in light of the related outputs. At the start of the programme, they identified the lack of a persistent identifier for grants as a major shortcoming of the system. Crossref GLS naturally fits in with their goals.\nThe initiative by FCT to assign unique DOIs to national public funding through Crossref is a game-changer for open science, linking funding directly to scientific outcomes and boosting transparency. Join us in this effort—let\u0026rsquo;s make every grant count and ensure open access to research information!\n\u0026ndash; Cátia Laranjeira, PTCRIS Program Manager at Fundação para a Ciência e a Tecnologia (FCT Portugal)\nFCT initially piloted a small subset of their grants (approximately 6,000 recent awards) at the end of 2023. Cátia pointed to researchers’ keen participation in this programme as one of its successes – and thanks to word of mouth, FCT has already been approached by researchers requesting unique Crossref links for their grants!
This appetite for grant IDs will soon be more fully satisfied, as FCT is readying to register all of their grants with Crossref, to enable further insights into funding and outcome flows, supporting them in demonstrating the value for money for the public resources they manage. Via interfaces for grant management and standardised online CVs, the system is also enabling researchers to use the system in their own future reporting and career development.\nIn the ensuing discussion, Rachel Bruce of UKRI mentioned that she’s hopeful that GLS will help funders ‘close the loop’ on more holistic reward and recognition, allowing for inclusion of evidence for a broader set of outputs in those processes.\nHow the community is working to integrate open infrastructure Melissa Harrison, Team Leader at EMBL-EBI, manages Europe PMC and a complementary data science team, who were part of the initial FREYA project – supporting infrastructure delivery for unique identifiers for grants. The team has been adding grant records to Crossref on Wellcome’s behalf since 2019. Melissa highlighted the shortcomings of internal award numbers, which don’t tend to be understood outside of the ecosystem where they are produced (that is the funder’s administrative system), are almost certainly not unique, and don’t resolve to or connect with anything in the wider ecosystem. Therefore internal award numbers can’t signify relationships with other outputs or assets in the wider world. By contrast, Crossref’s Grant IDs are unique, persistent, resolvable, and interrelated with other Crossref metadata, whilst being retrievable for other systems to link to too.\nPersistent identifiers for grants was the next logical step after identifiers for funders - open metadata registered with a PID in a central service like Crossref is invaluable to build the full picture of the research enterprise.\n\u0026ndash; Melissa Harrison, Team Leader, Literature Services at EMBL-EBI)\nEase of execution is important for scaling the Grant Linking System, and enabling its use in a diverse set of circumstances in the open science ecosystem. Altum was the trailblazer, first integrating its grant management platform Proposal Central with GLS. It was good to hear that others are now joining the integration efforts. Syman Stevens talked about the recent work initiated by Joe McArthur at OA Works, to develop a simple, open-source plug-in for any of the major grant management systems, to enable funders to deposit their grant metadata with Crossref GLS with a click of the button. Syman demonstrated the resulting interface in Fluxx, that allows for creating a record and sending grant metadata to Crossref as part of the regular grant management within the platform. He pointed out that, while this integration was developed for Fluxx, all code and documentation is openly available on GitHub and this can potentially be forked or adapted as necessary for reuse in other grant management systems.\nIt is heartening that others in the community are seeing such a need for this that they\u0026rsquo;re funding and creating their own tools to advance participation and use of the GLS.\nFinally, Fred Atherden, Head of Production Operations at eLife, presented how they include Crossref grant identifiers in publication metadata for the version of record of the works published on their platform. eLife is the first publisher to fully integrate Crossref grant identifiers both within the article display and in the metadata. 
Fred shared that in addition to collecting the data from the authors, eLife also attempts matching, albeit using very restrictive methodology, to enable more grant metadata in their publication records. They recognise that so far there are very few publishers including persistent links for grants in this way, and talked about plans to start collecting and including this data further upstream, and including them in the future for reviewed preprints.\nAcknowledgements and how to participate in the GLS Reflecting on the last five years, thanks must go to the \u0026gt;35 funders who are already participating (see logo mashup below), to our current volunteers and to those partners working to promote and make use of the Grant Linking System. We also acknowledge that the GLS would not have been possible without the Crossref board members at the time, our staff including alumni Josh Brown, Jennifer Kemp, Rachael Lammey, and Geoffrey Bilder, or without the early dedicated time and input from the following people and organisations on our working groups for governance and fees, and for metadata modelling:\nYasushi Ogasaka and Ritsuko Nakajima, Japan Science \u0026amp; Technology Agency Neil Thakur and Brian Haugen, US National Institutes of Health Jo McEntyre and Michael Parkin, Europe PMC Robert Kiley and Nina Frentop, Wellcome Alexis-Michel Mugabushaka and Diego Chialva, European Research Council Lance Vowell and Carly Robinson, OSTI/US Dept of Energy Ashley Moore and Kevin Dolby, UKRI (Research Councils UK / Medical Research Council) Salvo da Rosa, Children\u0026rsquo;s Tumor Foundation Trisha Cruse, DataCite To learn more about the Crossref Grant Linking System, the best place to start is our service page. And for the next step, please reach out to us for a conversation about any questions specific to your organisation and any questions that may need to be addressed in order to enable your full participation.\nGrant DOIs enhance the discovery and accessibility of funded project information and are one of the important links in a connected research ecosystem. I\u0026rsquo;m grateful and proud to contribute to the robustness and interconnectedness of the research infrastructure. Few funders are currently participating in the Crossref Grant Linking System, and I encourage others to consider doing so. This adoption follows the \u0026ldquo;network effect,\u0026rdquo; where the value and utility increase as more people participate, encouraging even wider adoption.\n\u0026ndash; Kristin Eldon Whylly, Senior Grants Manager and Change Management Lead at Templeton World Charity Fund (TWCF)\nYou can email me via feedback@crossref.org or set up a call with me when it suits you (you can overlay your own calendar using the toggle at the top right). 
We look forward to welcoming even more funders and to see those relationships in the open science infrastructure grow even further in the coming years.\n", "headings": ["What is the Crossref Grant Linking System?","Features of the GLS","How the Crossref Grant Linking System supports Open Science policy","How funders leverage the Grant Linking System in their reporting and assessment","How the community is working to integrate open infrastructure","Acknowledgements and how to participate in the GLS"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/grants/", "title": "Grants", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/identifiers/", "title": "Identifiers", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/infrastructure/", "title": "Infrastructure", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/research-funders/", "title": "Research Funders", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-anatomy-of-metadata-matching/", "title": "The anatomy of metadata matching", "subtitle":"", "rank": 1, "lastmod": "2024-06-27", "lastmod_ts": 1719446400, "section": "Blog", "tags": [], "description": "https://0-doi-org.libus.csd.mu.edu/10.13003/zie7reeg\nIn our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions.\nBasic terminology Metadata matching is a high-level concept, with many different problems falling into this category.", "content": " https://0-doi-org.libus.csd.mu.edu/10.13003/zie7reeg\nIn our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions.\nBasic terminology Metadata matching is a high-level concept, with many different problems falling into this category. 
Indeed, no matter how much we like to focus on the similarities between different forms of matching, matching affiliation strings to ROR IDs or matching preprints to journal papers are still different in several important ways. At Crossref and ROR, we call these problems matching tasks.\nSimply put, a matching task defines the kind or nature of the matching. Examples of matching tasks are bibliographic reference matching, affiliation matching, grant matching, or preprint matching.\nEvery matching task has an input, which is all the data that is needed to perform the matching. Input data can come in many shapes and forms, depending on the matching task. For example, all of the following could be inputs to a matching task:\nDepartment of Molecular Medicine, Sapporo Medical University, Sapporo 060-8556, Japan \u0026lt;fr:program xmlns:fr=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/fundref.xsd\u0026#34; name=\u0026#34;fundref\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;fundgroup\u0026#34;\u0026gt; \u0026lt;fr:assertion name=\u0026#34;funder_name\u0026#34;\u0026gt; European Union\u0026#39;s Horizon 2020 Research and Innovation Program through Marie Sklodowska Curie \u0026lt;fr:assertion name=\u0026#34;funder_identifier\u0026#34;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.13039/501100000780\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;fr:assertion name=\u0026#34;award_number\u0026#34;\u0026gt;721624\u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:assertion\u0026gt; \u0026lt;/fr:program\u0026gt; Everitt, W. N., \u0026amp; Kalf, H. (2007). The Bessel differential equation and the Hankel transform. Journal of Computational and Applied Mathematics, 208(1), 3–19. { \u0026#34;title\u0026#34;: \u0026#34;Functional single-cell genomics of human cytomegalovirus infection\u0026#34;, \u0026#34;issued\u0026#34;: \u0026#34;2021-10-25\u0026#34;, \u0026#34;author\u0026#34;: [ {\u0026#34;given\u0026#34;: \u0026#34;Marco Y.\u0026#34;, \u0026#34;family\u0026#34;: \u0026#34;Hein\u0026#34;}, {\u0026#34;given\u0026#34;: \u0026#34;Jonathan S.\u0026#34;, \u0026#34;family\u0026#34;: \u0026#34;Weissman\u0026#34;, \u0026#34;ORCID\u0026#34;: \u0026#34;http://orcid.org/0000-0003-2445-670X\u0026#34;} ] } Every matching task also has an output. For our purposes, this is almost exclusively zero or more matched identifiers. In the context of a specific matching task, output identifiers may be of a specific type (e.g. we might match to a ROR ID, and never to an ORCID ID). In some cases, there can be a certain target set as well (i.e. matching only to DataCite DOIs). The output identifiers can have different cardinality depending on the task, meaning that the matching task might allow for zero, one, or more identifiers as a result of matching to a single input.\nA matching strategy defines how the matching is done. Multiple strategies can exist for a specific matching task. Compound strategies can run other strategies and combine their outcomes into a single result.\nIn some cases, we may also want the matching strategy to output a confidence score for each matched identifier. A confidence score represents the degree of certainty or likelihood that the matched identifier is correct, typically expressed as a value between 0 and 1. 
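To make the terminology concrete, here is an illustrative sketch of how a matching task's input, output identifiers, confidence scores, and compound strategies might fit together. This is not Crossref's or ROR's implementation, just a minimal model of the concepts defined above.

```python
# Illustrative data model for the terms defined above: input, zero-or-more
# matched identifiers, optional confidence scores, and compound strategies.
from dataclasses import dataclass, field

@dataclass
class Match:
    identifier: str          # e.g. a DOI or a ROR ID
    confidence: float = 1.0  # degree of certainty, between 0 and 1

@dataclass
class MatchingResult:
    input_data: str | dict                               # unstructured string or structured metadata
    matches: list[Match] = field(default_factory=list)   # zero, one, or many identifiers

class MatchingStrategy:
    """One way of performing a matching task; several strategies may exist for the same task."""
    def match(self, input_data) -> MatchingResult:
        raise NotImplementedError

class CompoundStrategy(MatchingStrategy):
    """Runs other strategies and combines their outcomes into a single result."""
    def __init__(self, strategies: list[MatchingStrategy]):
        self.strategies = strategies

    def match(self, input_data) -> MatchingResult:
        combined = MatchingResult(input_data)
        for strategy in self.strategies:
            combined.matches.extend(strategy.match(input_data).matches)
        # Keep the highest-confidence match per identifier, best first.
        best: dict[str, Match] = {}
        for m in combined.matches:
            if m.identifier not in best or m.confidence > best[m.identifier].confidence:
                best[m.identifier] = m
        combined.matches = sorted(best.values(), key=lambda m: m.confidence, reverse=True)
        return combined
```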
This score may help with post-processing or further interpretation of the results.\nTo summarise, the anatomy of the matching task can be diagrammed as follows:\nHow to specify a matching task Whenever we plan the development or integration of a matching solution, it is good to begin by answering a few basic questions:\nWhat problem do we plan to solve with our matching task? What would we call our matching task and how would we describe it? What do we expect as the input for this matching task? Which input formats do we need to be able to accept? What information do we expect to find in this input? What kind of identifiers should be output? Is there a target set of identifiers? Can our matching output zero/one/or multiple identifiers, and under what conditions might that occur? These sound fairly simple, but the answers to these questions can be remarkably complex. Once one tries to apply these concepts to real-world problems, they might encounter several non-obvious challenges.\nFor example, one common concern is at what level we should define each matching task. Consider the following problems:\nMatching bibliographic reference strings to DOIs. Example input: Everitt, W. N., \u0026amp; Kalf, H. (2007). The Bessel differential equation and the Hankel transform. Journal of Computational and Applied Mathematics, 208(1), 3–19. Matching structured bibliographic reference to DOIs. Example input: { volume: \u0026#34;208\u0026#34;, author: \u0026#34;Everitt\u0026#34;, journal-title: \u0026#34;J. Comput. Appl. Math.\u0026#34;, article-title: \u0026#34;The Bessel differential equation and the Hankel transform\u0026#34;, first-page: \u0026#34;3\u0026#34;, year: \u0026#34;2007\u0026#34;, issue: \u0026#34;1\u0026#34; } Are those discrete matching tasks (unstructured reference matching vs. structured reference matching), or are they the same task (reference matching) that can accept different types of inputs (unstructured or structured)?\nSimilarly, let\u0026rsquo;s compare the following tasks:\nMatching affiliation strings to ROR IDs. Example input: Department of Molecular Medicine, Sapporo Medical University, Sapporo 060-8556, Japan Matching funder names to ROR IDs. Example input: Alexander von Humboldt Foundation Are these different matching tasks (affiliation matching vs. funder matching), or the same task with different inputs (organisation matching)?\nDefining the boundaries of a matching task can also be difficult. Consider, for example, the need to obtain ROR IDs for organisations mentioned in the acknowledgements section of a full-text academic paper. To begin, one may first extract the acknowledgement section from the full text, then run something like a named entity recognition (NER) tool to isolate the organisation names from the extracted text, and finally match these names to ROR IDs. Is this entire process matching, with the input being the full text of a paper? Or perhaps matching starts with the acknowledgement section as the input? Instead, is it only the last phase, where we try to match the extracted name to the ROR ID, that constitutes the matching task, with the extraction phases being completely separate processes?\nThere are also important questions related to the expected behaviour of a matching strategy. Consider, for example, developing an affiliation matching strategy where we define our input as \u0026ldquo;an affiliation string\u0026rdquo;. What should happen when the strategy gets something else on the input, for example, song lyrics? 
Perhaps the strategy should simply return no matches, or an error, or we could say that in such a situation the behaviour is undefined and it simply doesn\u0026rsquo;t matter what is returned. But what should happen if in this input we have the lyrics of Street Life by Roxy Music, a song that mentions the names of a few universities that happen to have ROR IDs?\nIt is likewise important to consider what should happen if different parts of the input match to different identifiers, like in the following example:\nDepartment of Haematology, Eastern Health and Monash University, Box Hill, Australia Here, \u0026ldquo;Eastern Health\u0026rdquo; matches to https://ror.org/00vyyx863 and \u0026ldquo;Monash University\u0026rdquo; to https://ror.org/02bfwt286. Should the matching strategy return all the identifiers, one of them (if so, which one?), or nothing at all?\nSimilar questions arise when it is possible to match to multiple versions (or duplicates) in the target identifier set. This can happen, for example, in the context of bibliographic reference matching or preprint matching. Multiple matches may occur when there are different editions, reprints, or variations of the same publication in the target dataset, each with its own unique identifier.\nIf you are waiting for an answer to these questions, we unfortunately must disappoint you here. These can only be answered in the context of a specific problem, considering who the users are and what it is they need and expect.\nDid you notice any other subtleties related to metadata matching and its concerns? Are there other non-obvious questions that should be considered when planning to develop or integrate metadata matching strategies? Let us know—we\u0026rsquo;d love to hear from you!\n", "headings": ["Basic terminology","How to specify a matching task"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/subscribe-newsletter/", "title": "Receive updates from us", "subtitle":"", "rank": 1, "lastmod": "2024-06-24", "lastmod_ts": 1719187200, "section": "", "tags": [], "description": "If you are already a Primary or Technical contact on a Crossref member or Sponsor account, you will receive our bimonthly newsletter automatically. Otherwise, sign up here.\nSubscribing to email updates from us means that every two months you’ll receive our community newsletter which aims to keep you up-to-date with our latest developments. We may also send you the occasional blog post or other (relevant) communication.\nYou can unsubscribe from these communications at anytime using the ‘unsubscribe’ link located at the bottom of each newsletter.", "content": "If you are already a Primary or Technical contact on a Crossref member or Sponsor account, you will receive our bimonthly newsletter automatically. Otherwise, sign up here.\nSubscribing to email updates from us means that every two months you’ll receive our community newsletter which aims to keep you up-to-date with our latest developments. We may also send you the occasional blog post or other (relevant) communication.\nYou can unsubscribe from these communications at anytime using the ‘unsubscribe’ link located at the bottom of each newsletter.\nNote: if you are using an adblocker, you may not be able to see the form above. 
If this is the case, please temporarily disable your adblocker on this page and refresh.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/api-case-study/", "title": "API Case Study", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/apis/", "title": "APIs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/drawing-on-the-research-nexus-with-policy-documents-overtons-use-of-crossref-api/", "title": "Drawing on the Research Nexus with Policy documents: Overton’s use of Crossref API", "subtitle":"", "rank": 1, "lastmod": "2024-06-15", "lastmod_ts": 1718409600, "section": "Blog", "tags": [], "description": "Update 2024-07-01: This post is based on an interview with Euan Adie, founder and director of Overton._\nWhat is Overton? Overton is a big database of government policy documents, also including sources like intergovernmental organizations, think tanks, and big NGOs and in general anyone who\u0026rsquo;s trying to influence a government policy maker. What we\u0026rsquo;re interested in is basically, taking all the good parts of the scholarly record and applying some of that to the policy world.", "content": "Update 2024-07-01: This post is based on an interview with Euan Adie, founder and director of Overton._\nWhat is Overton? Overton is a big database of government policy documents, also including sources like intergovernmental organizations, think tanks, and big NGOs and in general anyone who\u0026rsquo;s trying to influence a government policy maker. What we\u0026rsquo;re interested in is basically, taking all the good parts of the scholarly record and applying some of that to the policy world. By this we mean finding all the documents, finding what\u0026rsquo;s out there, collecting metadata for them consistently, fitting to our schema, extracting references from all the policy documents we find, adding links between them, and then we also do citation analysis.\nWhat do you mean by the good parts of the scholarly record? What I mean by the good parts of the scholarly record is, from a data perspective, having persistent open metadata for items on different stable, interoperable platforms and being able to build up layers of data to suit specific use cases. That\u0026rsquo;s a better approach than trying to do everything in a silo here and a silo there and trying to do stuff bit by bit or in a hundred different ways.\nThere’s also a bad part, which is less to do with metadata and more around citation analysis and responsible metrics. With all this data… as the famous Spiderman quote goes… with great power comes a great responsibility: once you start systematically collecting this data, it’s very easy to fall into the trap of thinking that if we can put numbers on it, and then maybe we could start reading meaning into those numbers, and then it spirals out of control. 
So the idea for Overton was: can we take the system, some of the infrastructure and apply those ideas? But then come at it already knowing where the later pitfalls are and try to avoid them.\nWhat is your main use of Crossref resources? We rely heavily on Crossref to link policy documents to the scholarly record. The question we’re trying to answer is: does this government document cite academic work? We work a lot with universities, think tanks, and IGOs. They’re asking where is the research we produce ending up? Is it being used by the government? In some countries, like the UK, there\u0026rsquo;s a big impact agenda where it\u0026rsquo;s quite important to demonstrate that for government funding. In the US as well, state universities for example aim to impact the local policy environment. Right? Are we producing things that went on to change life for local residents for the better? And that\u0026rsquo;s really what we\u0026rsquo;re trying to support. And so that\u0026rsquo;s one of the main use cases of the database.\nCan you tell us a little bit more about the story of Overton, how did this idea start? It really came from two things. The first one is that I\u0026rsquo;d always been interested in this area and before Overton, I founded a company called Altmetric.com, which was looking at kind of broader impact metrics for papers. And we looked at Twitter, and news, and blogs, and other things, including policy. But policy wasn\u0026rsquo;t a primary focus.\nWhen I left Altmetric two things were happening in the UK – not that everything is about Brexit, but Brexit was happening, and then COVID happened as well. And in both cases, I think it just drove home to me that other people seemed to be very interested in the evidence that the government has used to make decisions. Be they good decisions like some of the evidence-based initiatives in COVID or bad decisions like Brexit. So, how can you find out what it was? And it is actually very difficult to do. You can\u0026rsquo;t really track back how this decision was made. I thought that there is a growing need for that kind of impact analysis. So the second thing was, can we do something that helps make it easy to see what evidence goes into policy? The scholarly evidence but also the other kind of policy influence that goes into any document or discussion.\nWhat are the main challenges that you face when you are trying to retrieve these policy documents? Well, first is another thing that the scholarly record does well, which is persistence. We have CLOCKSS and all the dark archives1. So the whole idea is that if you have a DOI, if something moves, we can track it and it maintains the ID, and even if the publisher goes bust it\u0026rsquo;ll never disappear. For citing it, then there\u0026rsquo;s always going to be a copy of it somewhere available even if it\u0026rsquo;s in a library or a dark archive.\nOne of the biggest challenges with policy documents is that kind of persistence doesn’t exist\u0026hellip; There are a lot of statistics about link rot2, and they hold true for policy documents as much as anywhere else. Every year a percentage of the links everywhere basically break because websites are redesigned or a government changes, it\u0026rsquo;s even worse because it can be by design. If you think about it, a new government comes into power, they change… let’s say the Department of Agriculture and they merge it with the Department of Fisheries. That would refer to a completely new third thing.
And the other two departments disappear or they start linking off, like, redirecting or whatever.\nOne of the challenges is just keeping track of all the changes in the landscape and constantly trying to stay on top of the data. And that\u0026rsquo;s a big part of what we do. Another challenge for us, and I think about it compared to journals, when you cite something in a scholarly document, you cite it in a given style, but there are no standards for referencing styles in policy documents. So even in the same document, we can see, like, four or five different ways of referring to something, and sometimes they\u0026rsquo;re missing important data and sometimes they\u0026rsquo;re not. And it means when we\u0026rsquo;re using Crossref search, we usually have much more unparsable text.\nHow has your experience been so far using our Crossref API or our services in general? It\u0026rsquo;s been great. I would happily say this anywhere, I always talk about the Crossref API as being one of the best examples of a well-done scholarly infrastructure API. It\u0026rsquo;s well-documented. It\u0026rsquo;s fast. It\u0026rsquo;s clear. The rate limits are clear. It\u0026rsquo;s up when it should be up. I like that you can trust it. So the technical aspect is great. From an organizational aspect, in contrast with a lot of infrastructure in the scholarly world that you don’t know if it\u0026rsquo;s even going to be there in a given time, Crossref is pretty stable.\nWhat would you say are the main challenges or things that we can improve in the future? What other expectations or suggestions do you have? It depends, if we\u0026rsquo;re talking about how the service could be improved versus how the data could be improved. Data-wise, and I appreciate this is a publisher problem, not a Crossref one, but, we still have to pull other data from OpenAlex, for example, for things like affiliations just because it\u0026rsquo;s missing from so many articles. And then equally things like ORCID for authors. And in fact also disambiguation in general. This is a huge problem that either the user doesn’t solve or you end up using a hundred different author disambiguation systems. I don\u0026rsquo;t know if there\u0026rsquo;s necessarily something Crossref wants to get into, but there\u0026rsquo;s definitely not something out there generally accepted already.\nAnother kind of improvement I see is to make sure that changes in one API are reflected in the other, and they don\u0026rsquo;t get out of sync. When somebody updates their ORCID record, I’d like it reflected in the Crossref record if we’re using that as the “canonical” metadata record for the DOI. Retrospectively enriching records.\nI think it\u0026rsquo;s harder than I expected to just find preprints because you can\u0026rsquo;t simply use the item type but I understand that this is maybe a bigger issue. So maybe it\u0026rsquo;s not for a short time.\nFinally, this is very specific, but we experienced friction when going from the snapshots to having something useful, either in Elasticsearch or in, like, Postgres. It might be nice to have some open-source scripts to download and process everything, convert it to relational tables, or send it to an Elasticsearch cluster or something.\nPlatt, C. (2022). What is a Dark Archive? Wiley. Retrieved 10 January, 2024, from\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nLink rot. (n.d.). In Wikipedia. 
Retrieved 10 January, 2024, from https://en.wikipedia.org/wiki/Link_rot.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n", "headings": ["What is Overton?","What do you mean by the good parts of the scholarly record?","What is your main use of Crossref resources?","Can you tell us a little bit more about the story of Overton, how did this idea start?","What are the main challenges that you face when you are trying to retrieve these policy documents?","How has your experience been so far using our Crossref API or our services in general?","What would you say are the main challenges or things that we can improve in the future? What other expectations or suggestions do you have?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/euan-adie/", "title": "Euan Adie", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/luis-montilla/", "title": "Luis Montilla", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/api/", "title": "API", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/patrick-polischuk/", "title": "Patrick Polischuk", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rebalancing-our-rest-api-traffic/", "title": "Rebalancing our REST API traffic", "subtitle":"", "rank": 1, "lastmod": "2024-06-04", "lastmod_ts": 1717459200, "section": "Blog", "tags": [], "description": "Since we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service. It is the result of 20,000 organisations around the world that have worked for many years to curate and share metadata about their various resources, from research grants to research articles and other component inputs and outputs of research.\nThe REST API is relied on by a large part of the research information community and beyond, seeing around 1.", "content": "Since we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service. 
It is the result of 20,000 organisations around the world that have worked for many years to curate and share metadata about their various resources, from research grants to research articles and other component inputs and outputs of research.\nThe REST API is relied on by a large part of the research information community and beyond, seeing around 1.8 billion requests each month. Just five years ago, that average monthly number was 600 million. Our members are the heaviest users, using it for all kinds of information about their own records or picking up connections like citations and other relationships. Databases, discovery tools, libraries, and governments all use the API. Research groups use it for all sorts of things such as analysing trends in science or recording retractions and corrections.\nSo the chances are high that almost any tool you rely on in scientific research has somewhere incorporated metadata through us.\nOptimising performance For some time, we’ve been noticing reduced performance in a number of ways, and periodically we have a flurry of manually blocking/unblocking IP addresses from requesters that are hammering and degrading the service for everyone else, and this is of course only minimally effective and very short term. You can always watch our status page for alerts. This is the current one about REST API performance: https://0-status-crossref-org.libus.csd.mu.edu/incidents/d7k4ml9vvswv.\nAs the number of users and requests has grown, our strategies for serving those requests must evolve. This post discusses how we’re approaching balancing the growth in usage for the immediate term and provides some thoughts about things we could try in the future on which we’ll gladly take feedback and advice.\nLoad balancing In 2018, we started routing users through three different pools (public, polite, and plus). This coincided with the launch of Metadata Plus, a paid-for service with monthly data dumps and very high rate limits. Note that all metadata is exactly the same and real-time across all pools. We also, more recently, introduced an internal pool. Here\u0026rsquo;s more about them:\nPlus: This is the aforementioned premium option; it’s really for ‘enterprise-wide’ use in production services and is not really relevant here. Public: This is the default and is the one that is struggling at the moment. You don’t have to identify yourself and, in theory, we don’t have to work through the night to support it if it’s struggling (although we often do). Public currently receives around 30,000 requests per minute. Polite: Traffic is routed to polite simply by detecting a mailto in the header. Any system or person including an email is being routed to a currently-quieter pool, this means we can always get in touch for troubleshooting (and only troubleshooting). Polite currently receives around 5,000 requests per minute. Internal: In 2021, we introduced a new pool just for our own tools where we can control and predict the traffic. Internal currently receives around 1,000 requests per minute. The volumes of traffic across public, polite and internal pools are very different and yet each pool has always had similar resources. 
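As a practical aside, being routed to the polite pool only requires including contact details with each request. The sketch below puts a placeholder mailto in the User-Agent header and as a query parameter; the tool name, email address, and example query are all illustrative.

```python
# Minimal sketch of a "polite" request: include a mailto so traffic is routed
# to the polite pool and Crossref can reach you for troubleshooting. The
# User-Agent string, email address, and query are placeholders.
import requests

session = requests.Session()
session.headers["User-Agent"] = "my-metadata-tool/1.0 (mailto:you@example.org)"

resp = session.get(
    "https://api.crossref.org/works",
    params={"query": "scholarly metadata", "rows": 3, "mailto": "you@example.org"},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["message"]["items"]:
    print(item["DOI"], (item.get("title") or [""])[0])
```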
The purpose of each of these pools has been long-established but our efforts to ask the community to use polite by default have not been particularly successful and it is clear that we don’t have the right balance.\nThe internal pool has been dedicated to our internal services that have predictable usage and that have requests that are not initiated by external users. The internal pool has previously included reference matching but not Crossmark, Event Data, or search.crossref.org, which all use the polite pool instead, along with the community. We have the capacity on the internal pool to shift all of this “internal” traffic across, and in doing so we will create more capacity for genuine polite users and redefine what we consider to be “internal”.\nCreating more capacity on polite will also give us the opportunity to load-balance requests to both polite and public across the two pools. We are at a point where we cannot eke more performance out of the API without architectural changes. In order to buy ourselves time to address this properly, we will modify the routing of polite and public and evenly distribute requests to the two pools 50/50.\nThe public and polite pools have equal resources at the moment yet handle very different volumes of traffic (30,000 req/min vs 5,000 req/min), and with the proposed changes to internal traffic the polite pool would handle a fraction of this. The result would look something like 31,000 req/min evenly distributed across public and polite.\nRate limiting Our rate-limiting also needs review. We track a number of metrics in our web proxy but only deny requests on one of them - the number of requests per second. On public and polite we limit each IP address to sending 50 req/sec and if this rate is exceeded users are denied access for 10 seconds. These limits are generous and we cannot realistically support this volume of request for all users of the public or polite API.\nHowever, when requests are taking a long time to return, we potentially have a separate problem of high concurrency as hundreds of requests could be sent before the first one has returned. We intend to identify and impose an appropriate rate limit on concurrent requests from each IP to prevent a small number of users from disproportionately affecting all users with long-running queries.\nLonger-term So, in the short-term we will revise our pool traffic as described above. We’ll do that this week. Then we will review the current rate limits and reduce them to something more reasonable for the majority of users. And we’ll identify and introduce a rate limit for concurrent requests from each user.\nLonger-term, we need to rearchitect our Elasticsearch pools so that we can:\nReduce shard sizes to improve performance of queries Balance data shards and replicas more evenly Optimise our instance types for our workload Want to help? Thanks for asking!\nFirstly, please, everyone, do always put an email in your API request headers - while the short term plan will help stabilise performance, this habit will always help us troubleshoot e.g. we can always contact you instead of blocking you!\nSecondly, we know many of you incorporate Crossref metadata, add lots of value to it in order to deliver important services, and also develop APIs of your own. We’d love any comments or recommendations from those of you handling similar situations on scaling and optimising API performance. You can comment on this post which is managed via our Discourse forum. 
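On the client side, a simple way to respect these limits is to cap concurrency and back off when a request is denied. The sketch below is illustrative only: the concurrency cap, retry counts, and example DOIs are placeholders rather than Crossref-recommended values.

```python
# Client-side sketch: a small thread pool caps concurrent requests, and failed
# or denied requests back off before retrying. Numbers here are illustrative.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

MAX_CONCURRENT = 5       # stay well below anything that could look like hammering
RETRY_WAIT_SECONDS = 10  # mirrors the 10-second denial window mentioned above

def fetch_doi(doi: str) -> dict | None:
    url = f"https://api.crossref.org/works/{doi}"
    for attempt in range(3):
        resp = requests.get(url, params={"mailto": "you@example.org"}, timeout=30)
        if resp.status_code == 200:
            return resp.json()["message"]
        # Rate limited or transient error: wait progressively longer, then retry.
        time.sleep(RETRY_WAIT_SECONDS * (attempt + 1))
    return None

dois = ["10.5555/12345678", "10.5555/87654321"]  # hypothetical DOIs
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    for record in pool.map(fetch_doi, dois):
        if record:
            print(record["DOI"], record.get("type"))
```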
We’ll also be adding updates to this thread as well as on status.crossref.org. If you’d like to be in touch with any of us directly, all our emails are firstinitiallastname@crossref.org.\n", "headings": ["Optimising performance","Load balancing","Rate limiting","Longer-term","Want to help?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/stewart-houten/", "title": "Stewart Houten", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-matching-101-what-is-it-and-why-do-we-need-it/", "title": "Metadata matching 101: what is it and why do we need it?", "subtitle":"", "rank": 1, "lastmod": "2024-05-16", "lastmod_ts": 1715817600, "section": "Blog", "tags": [], "description": "https://0-doi-org.libus.csd.mu.edu/10.13003/aewi1cai\nAt Crossref and ROR, we develop and run processes that match metadata at scale, creating relationships between millions of entities in the scholarly record. Over the last few years, we\u0026rsquo;ve spent a lot of time diving into details about metadata matching strategies, evaluation, and integration. It is quite possibly our favourite thing to talk and write about! But sometimes it is good to step back and look at the problem from a wider perspective.", "content": " https://0-doi-org.libus.csd.mu.edu/10.13003/aewi1cai\nAt Crossref and ROR, we develop and run processes that match metadata at scale, creating relationships between millions of entities in the scholarly record. Over the last few years, we\u0026rsquo;ve spent a lot of time diving into details about metadata matching strategies, evaluation, and integration. It is quite possibly our favourite thing to talk and write about! But sometimes it is good to step back and look at the problem from a wider perspective. In this blog, the first one in a series about metadata matching, we will cover the very basics of matching: what it is, how we do it, and why we devote so much effort to this problem.\nWhat is metadata matching? Would you be able to find the DOI for the work referenced in this citation?\nEveritt, W. N., \u0026amp; Kalf, H. (2007). The Bessel differential equation and the Hankel transform. Journal of Computational and Applied Mathematics, 208(1), 3–19. We bet you could! You might begin, for example, by pasting the whole citation, or only the title, into a search engine of your choice. This would probably return multiple results, which you would quickly skim. Then you might click on the links for a few of the top results, those that look promising. Some of the websites you visit might contain a DOI. Perhaps you would briefly compare the metadata provided on the website against what you see in the citation. If most of this information matches (see what we did there?), you would conclude that the DOI from that website is, in fact, the DOI for the cited paper.\nWell done! You just performed metadata matching, specifically, bibliographic reference matching. Matching in general can be defined as the task or process of finding an identifier for an item based on its structured or unstructured \u0026ldquo;description\u0026rdquo; (in this case: finding a DOI of a cited article based on a citation string).\nBut matching doesn\u0026rsquo;t have to just be about citations and DOIs. 
There are many other instances of matching we can think of, for example:\nfinding the ROR ID for an organisation based on an affiliation string, finding the ORCID ID for a researcher based on the person\u0026rsquo;s name and affiliation, finding the ROR ID for a funder based on the acknowledgements section of a research paper, finding the grant DOI based on an award number and a funder name. Matching doesn\u0026rsquo;t have to be done manually. It is possible to develop fully automated strategies for metadata matching and employ them at scale. It is also possible to use a hybrid approach, where automated strategies assist users by providing suggestions.\nDeveloping automated matching strategies is not a trivial task, and if we want to do it right, it takes a great deal of time and effort. This brings us to our next question: is it worth it?\nWhy do we need matching? In short, metadata matching gives us a more complete picture of the research nexus by discovering missing relationships between various entities within and throughout the scholarly record:\nThese relationships are very powerful. They provide important context for any entity, whether it is a research output, a funder, a research institution, or an author. Imagine for a moment the scholarly record without any such relationships, where all bibliographic references, affiliations (institution names and addresses), and funding information (funder names and grant titles) are provided as unstructured strings only. In such a world, how would you calculate the number of times a particular research paper was cited? How would you get a list of research outputs supported by a specific funder? It would be incredibly challenging to navigate, summarise, and describe research activities, especially considering the scale. Thankfully, these and many other questions can be answered thanks to metadata matching that discovers relationships between entities in the scholarly record.\nThere are two primary ways we can use metadata matching in our workflows: as semi-automated tools that help users look up the appropriate identifiers or as fully automated processes that enrich the metadata in various scholarly databases.\nThe first approach is quite similar to the example we described at the beginning. If you are submitting scholarly metadata, for example of a new article to be published, you can use metadata matching to look up identifiers for the various entities and include these identifiers in the submission. For example, with the help of metadata matching, instead of submitting citation strings, you could provide the DOIs for works cited in the paper and instead of the name and address of your organisation, you could provide its ROR ID. To make this easier for people, metadata submission systems and applications sometimes integrate metadata matching tools into user interfaces.\nThe second approach allows large, existing sources of scholarly metadata to be enriched with identifiers in a fully automated way. For example, we can match affiliation strings to ROR IDs using a combination of machine learning models and ROR\u0026rsquo;s default matching service, effectively adding more relationships between people and organisations. We can also compare journal articles and preprints metadata in the Crossref database by calculating similarity scores for titles, authors, and years of publication to match them with each other and provide more relationships between preprints and journal articles. 
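As a toy illustration of the preprint-to-article comparison just described, the sketch below combines simple title, author, and year similarity scores. The weights and threshold are arbitrary, and the production matching strategy is considerably more sophisticated.

```python
# Toy sketch: score a candidate preprint/article pair on title, author, and
# year similarity. Weights and the match threshold are arbitrary examples.
from difflib import SequenceMatcher

def title_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def author_overlap(a: list[str], b: list[str]) -> float:
    a_set, b_set = {x.lower() for x in a}, {x.lower() for x in b}
    return len(a_set & b_set) / max(len(a_set | b_set), 1)

def year_proximity(a: int, b: int) -> float:
    # Preprints usually precede the journal article by a short interval.
    return 1.0 if abs(a - b) <= 2 else 0.0

def match_score(preprint: dict, article: dict) -> float:
    return (0.6 * title_similarity(preprint["title"], article["title"])
            + 0.3 * author_overlap(preprint["authors"], article["authors"])
            + 0.1 * year_proximity(preprint["year"], article["year"]))

preprint = {"title": "Functional single-cell genomics of human cytomegalovirus infection",
            "authors": ["Hein", "Weissman"], "year": 2021}
article = {"title": "Functional single-cell genomics of human cytomegalovirus infection",
           "authors": ["Hein", "Weissman"], "year": 2021}
print(match_score(preprint, article) > 0.8)  # treat high scores as a match
```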
This automated enrichment can be done at any point in time, even after research outputs have been formally published.\nThere are fundamental differences between these two approaches. The first is done under the supervision of a user, and for the second, the matching strategy makes all the decisions autonomously. As a result, the first approach will typically (although not always) result in better quality matches. By contrast, the second approach is much faster, generally less expensive, and scales to even very large data sources.\nIn the end, no matter what approach is used, the goal is to achieve a more complete accounting of the relationships between entities in the scholarly record.\nThis blog is the first one in a series about metadata matching. In the coming weeks, we will cover more detail about the product features related to metadata matching, explain why metadata matching is not a trivial problem, and share how we can develop, assess, compare, and choose matching strategies. Stay tuned!\n", "headings": ["What is metadata matching?","Why do we need matching?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2024-public-data-file-now-available-featuring-new-experimental-formats/", "title": "2024 public data file now available, featuring new experimental formats", "subtitle":"", "rank": 1, "lastmod": "2024-05-14", "lastmod_ts": 1715644800, "section": "Blog", "tags": [], "description": "This year’s public data file is now available, featuring over 156 million metadata records deposited with Crossref through the end of April 2024 from over 19,000 members. A full breakdown of Crossref metadata statistics is available here.\nLike last year, you can download all of these records in one go via Academic Torrents or directly from Amazon S3 via the “requester pays” method.\nDownload the file: The torrent download can be initiated here.", "content": "This year’s public data file is now available, featuring over 156 million metadata records deposited with Crossref through the end of April 2024 from over 19,000 members. A full breakdown of Crossref metadata statistics is available here.\nLike last year, you can download all of these records in one go via Academic Torrents or directly from Amazon S3 via the “requester pays” method.\nDownload the file: The torrent download can be initiated here. Instructions for downloading via the “requester pays” method, along with other tips for using these files, can be found on the “Tips for working with Crossref public data files and Plus snapshots” page.\nIn January, Martin Eve announced that we had been experimenting with alternative file formats meant to make our public data files easier to use by broader audiences. This year’s file will be published alongside the tools that can be used on the public data file to produce two experimental formats: JSON-lines and SQLite (and a bonus Rust version). You can read more about our thinking behind this work in Martin’s blog post, and we are keen to hear your thoughts on these alternatives.\nOur annual public data file is meant to facilitate individuals and organizations interested in working with the entirety of our metadata corpus. 
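If you plan to keep a local copy current after loading the file, the REST API can supply everything indexed after a given date. Here is a minimal sketch (assuming the public /works route with its from-index-date filter and cursor-based deep paging; the email address and the upsert helper are placeholders):

```python
import requests  # assumes the 'requests' package is available

API = "https://api.crossref.org/works"

def updated_since(date: str, email: str = "you@example.org"):
    """Yield works indexed since `date` (YYYY-MM-DD), page by page."""
    cursor = "*"
    while True:
        resp = requests.get(API, params={
            "filter": f"from-index-date:{date}",
            "rows": 500,
            "cursor": cursor,
            "mailto": email,  # identifies you for the 'polite' pool
        })
        resp.raise_for_status()
        message = resp.json()["message"]
        if not message["items"]:
            break
        yield from message["items"]
        cursor = message["next-cursor"]

# e.g. refresh everything indexed after the 2024 file was cut:
# for record in updated_since("2024-05-01"):
#     upsert_local_copy(record)  # hypothetical helper
```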
Starting with the majority of our metadata records in one file should be much easier than starting from scratch with our API, but because Crossref metadata is always openly available, you can use the API to keep your local copy up to date with new and updated records.\nIf you’re curious about what you’ll get with the public data file, we’ve also published a sample version so that you can take a peek before committing to downloading the ~212 gb file. This file includes a random sample of JSON files and is available exclusively via torrent here.\nWe hope you find this public data file useful. Should you have any questions about how to access or use the file, please see the tips below, or share your questions below (you will be redirected to our community forum).\nTips for using the torrent and retrieving incremental updates Use the public data file if you want all Crossref metadata records. Everyone is welcome to the metadata, but it will be much faster for you and much easier on our APIs to get so many records in one file. Here are some tips on how to work with the file.\nUse the REST API to incrementally add new and updated records once you have the initial file. Here is how to get started (and avoid getting blocked in your enthusiasm to use all this great metadata!).\nWhile bibliographic metadata is generally required, because lots of metadata is optional, records will vary in quality and completeness.\nQuestions, comments, and feedback are welcome at support@crossref.org.\n", "headings": ["Tips for using the torrent and retrieving incremental updates"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/isr-online-event-with-research-institutions-2024/", "title": "Integrity of the Scholarly Record (ISR): what do research institutions think?", "subtitle":"", "rank": 1, "lastmod": "2024-05-09", "lastmod_ts": 1715212800, "section": "Blog", "tags": [], "description": "Earlier this year, we reported on the roundtable discussion event that we had organised in Frankfurt on the heels of the Frankfurt Book Fair 2023. This event was the second in the series of roundtable events that we are holding with our community to hear from you how we can all work together to preserve the integrity of the scholarly record - you can read more about insights from these events and about ISR in this series of blogs.", "content": "Earlier this year, we reported on the roundtable discussion event that we had organised in Frankfurt on the heels of the Frankfurt Book Fair 2023. This event was the second in the series of roundtable events that we are holding with our community to hear from you how we can all work together to preserve the integrity of the scholarly record - you can read more about insights from these events and about ISR in this series of blogs.\nResearch institutions are one of the most important stakeholders in the endeavour of research integrity, and any conversation around ISR is incomplete without the views of this key community. This fact was acknowledged at the second ISR roundtable event, and one of the main takeaways from the discussions was to make more focused efforts to hear the viewpoints of researchers and academics.\nAs the first step in this direction, we organised an online discussion on the integrity of the scholarly record, to which we invited: researchers and academics, research integrity experts based at academic institutions, Crossref members, as well as other organisations working on this topic such as COPE and Digital Science. 
The primary objective of this event was to hear from this community their perspectives on preserving and leveraging the integrity of the scholarly record and to identify opportunities for collaboration in this area. To ensure common ground, we also wanted to share information about Crossref metadata, the Research Nexus vision, and our position and role in the integrity of the scholarly record.\nTo facilitate this, the event started with an introduction by Kora Korzec, Head of Community Engagement and Communication at Crossref, to our mission and vision and the importance of capturing the relationships between the objects, people and places involved in research through the Research Nexus. Amanda Bartell, Head of Member Experience, was next and she spoke about the scholarly record and the role that Crossref plays in preserving the record’s integrity. In her presentation, Amanda emphasised that Crossref’s role is not to assess the quality of content deposited by the members but rather to provide infrastructure that enables the community to provide and use metadata about the scholarly content produced by members. It’s important not to put up barriers to entry, but to work with all publishers to encourage best practices.\nDominika Tkaczyk, Head of Strategic Initiatives, shared details of a few Crossref projects that focus on monitoring and improving metadata completeness, thereby supporting ISR. These projects include improving the Participation Reports, using metadata matching to discover new relationships (e.g., preprint published as work, work supported by funder, etc), and importing more retractions and other updates from the Retraction Watch database that was acquired and made openly available by Crossref. Dominika used these examples to highlight the ways in which open and complete metadata can help in uncovering large scale trends and systemic concerns. The final speaker was Amanda French, ROR Technical Community Manager, who introduced the audience to the Research Organization Registry, or ROR.\nTo accomplish the primary aim of the event, which was to hear the community’s viewpoints, the participants were divided into breakout groups for discussions and given three prompts to answer. The rest of the blog is a summary of what we heard from the participants.\n1. Is Crossref’s role what you expected? What surprised you? What are we missing? An overarching sentiment from the academics in the audience was that Crossref does so much more than is known to researchers! They were surprised by the range of activities underway at Crossref. At the same time, there were calls for Crossref to play a bigger role. Suggestions included playing a leadership role in deciding which metadata elements are a priority, providing guidance on the main metadata components important for signalling trust, playing a greater role in connecting various identifiers to ensure that relationships between different content types are preserved well, and to coordinate the efforts being taken by institutions, publishers and service providers around research integrity, by virtue of Crossref’s unique position in the community. There was a broad agreement that by providing the essential infrastructure, Crossref acts as the base upon which other actors in the scholarly community can build.\n2. What metadata elements do you consider important for signalling trust? Many participants spoke about the various ways in which author identity and affiliation are important as trust signals. 
Being able to identify when an author has changed institutions, or being able to make a distinction between authors who have the same name is important. Author affiliations that are authentic and verified would go a long way in establishing trust.\nMultiple assertions, e.g. for affiliations, would be welcome. The use cases for this could be when research starts at one institution and is carried over to another, or when researchers affiliated with an institution may perform part of the research overseas. Some of the participants, who actively investigate research data, shared that abstracts are valuable because they can be used for large scale analyses related to research integrity.\nOther metadata elements that came up during this discussion were data on peer review, ethics approval, patient and donor consent in medical research, editorial boards (especially of special issues), pre-registration, funding metadata, datasets and programming scripts.\n3. What value do you see in the integrity and completeness of the scholarly record in the way you operate? How do you contribute to it? How can it support you to achieve your own goals? Participants acknowledged that integrity of the metadata and the scholarly record is essential. Ensuring this integrity is a dynamic process, much akin to the concept of organised scepticism which is the notion that all scientific work should be trusted subject to its verification. Several ideas were shared on how to progress the integrity and completeness of the scholarly record. One recommendation was to use multiple metadata trust markers as that can make it harder for bad actors to game the system, but this may run the risk of making things complicated. Another suggestion was to make metadata part of the onboarding procedure- by gathering staff ORCID iDs during the onboarding process and sharing the institutional ROR ID with staff to promote its use, institutions can ensure that this information is routinely made available. The metadata deposited with Crossref should be integrated with downstream workflows to better facilitate the use of this rich metadata. An example of this is to integrate Crossmark with other research tools such as reference management software.\nThe participants acknowledged that this discussion underlined for them the fact that having identifiers in itself is not an indicator of quality and that the underlying metadata records and wider context is key to understanding trustworthiness of the content.\nThis event was a good first step towards engaging researchers and academics in the conversation about ISR. It connected folks working in different parts of the world who are united by their interest in research integrity. There was good engagement among all and commitment to continue these conversations in the future, with many participants planning to connect at the World Conference on Research Integrity in June (I’ll be attending as well, for anyone who wants to continue the conversation - along with my colleagues Fabienne and Evans).\nAt Crossref, we plan on continuing these conversations with all segments of the community to understand their needs and perceptions around metadata. The greater the awareness about the importance of metadata and its applications, including for research integrity, the richer the metadata that we are able to collect together. This will lead to building a comprehensive Research Nexus and emergence of more relationships therein. 
Please write in response to this post on our Community Forum if you have any thoughts on this, as we’d love to hear from you.\nList of participants Manu Goyal International Journal of Cancer Panagiotis Kavouras University of Oslo Dorothy Bishop University of Oxford Zhesi (Phil) Shen Centre of Scientometrics, National Science Library, Chinese Academy of Sciences Wouter Vandevelde KU Leuven Leslie McIntosh Digital Science Elizabeth Noonan University College Cork Radek Gomola* Masaryk University Press Queensland University of Technology London School of Hygiene \u0026amp; Tropical Medicine Vilnius Gediminas Technical University Library Committee on Publication Ethics (COPE) Ginny Hendricks Chief Program Officer, Crossref Kornelia Korzec Director of Community, Crossref Amanda Bartell Director of Membership, Crossref Dominika Tkaczyk Director of Data Science, Crossref Amanda French Technical Community Manager, Crossref Madhura Amdekar Community Engagement Manager, Crossref *Note: name added 21-May-2024\n", "headings": ["1. Is Crossref’s role what you expected? What surprised you? What are we missing?","2. What metadata elements do you consider important for signalling trust?","3. What value do you see in the integrity and completeness of the scholarly record in the way you operate? How do you contribute to it? How can it support you to achieve your own goals?","List of participants"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/trustworthiness/", "title": "Trustworthiness", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/membership/", "title": "Membership", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/request-for-information/", "title": "Request for Information", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/seeking-consultancy-understanding-joining-obstacles-for-non-member-journals/", "title": "Seeking consultancy: understanding joining obstacles for non-member journals", "subtitle":"", "rank": 1, "lastmod": "2024-05-01", "lastmod_ts": 1714521600, "section": "Blog", "tags": [], "description": "Crossref is undertaking a large program, dubbed 'RCFS' (Resourcing Crossref for Future Sustainability) that will initially tackle five specific issues with our fees. We haven’t increased any of our fees in nearly two decades, and while we’re still okay financially and do not have a revenue growth goal, we do have inclusion and simplification goals. 
This report from Research Consulting helped to narrow down the five priority projects for 2024-2025 around these three core goals:", "content": "Crossref is undertaking a large program, dubbed 'RCFS' (Resourcing Crossref for Future Sustainability) that will initially tackle five specific issues with our fees. We haven’t increased any of our fees in nearly two decades, and while we’re still okay financially and do not have a revenue growth goal, we do have inclusion and simplification goals. This report from Research Consulting helped to narrow down the five priority projects for 2024-2025 around these three core goals:\nScope of the RCFS Program 2024-2025 GOAL: MORE EQUITABLE FEES Project 1: Evaluate the USD $275 annual membership fee tier and propose a more equitable pricing structure, which might entail breaking this down into two or more different tiers. Project 2: Define a new basis for sizing and tiering members for their capacity to pay GOAL: SIMPLIFY COMPLEX FEES Project 3: Address and adjust volume discounts for Content Registration Project 4: Address and adjust backfile discounts for Content Registration GOAL: REBALANCE REVENUE SOURCES Project 5: Reflect the increasing value of Crossref as a metadata source, likely increasing Metadata Plus fees Work to date As part of the RCFS program, we are working closely with our Membership \u0026amp; Fees Committee to discuss insights, gather feedback, and make recommendations to the Board. As a first step, we have surveyed and received responses from around 1000 of the current 8000 Crossref members in our lowest membership fee tier (USD $275). We are now starting to distill that data and will discuss it on our community call on May 8th and subsequently with the M\u0026amp;F Committee to inform recommendations for fee changes that may go into effect in 2025 or 2026.\nRequest For Information (RFI) about community consultation project While we have useful data from existing Crossref members, we know that there are many thousands of journals that are not (yet) members, and we need to understand this group better, in particular, to document and address the financial obstacles as well as the technical or social challenges.\nWe are looking for community facilitation expertise, with multiple language skills, to conduct a series of focus groups with non-member journals, with a summary and insights report (in English) provided by the end of June 2024.\nAll the data and documentation will be available publicly on the dedicated RCFS Program website.\nAs well as designing, conducting, and summarising the results of some focus groups (participants for which will be gathered via our own contacts and those of partners such as DOAJ, EIFL, and the Free Journal Network), we would like the consultant to review work such as the DIAMAS institutional publishing report, and identify data relevant to Crossref’s fee model.\nIf you would like to respond, please provide the following information and send it to Kora Korzec at feedback@crossref.org by 15th May:\nYour consultancy organisation and your role within it Examples of similar market research undertaken Languages spoken within your team Confirmation that the timeline is workable Approximate fee, likely range, or structure/basis for your fee Equally, if you represent a journal or group of journals, such as Diamond Open Access journals, and are not yet using Crossref, please get in touch and we can include your group in the research.\nThank you!\n", "headings": ["Scope of the RCFS Program 2024-2025","GOAL: MORE 
EQUITABLE FEES","GOAL: SIMPLIFY COMPLEX FEES","GOAL: REBALANCE REVENUE SOURCES","Work to date","Request For Information (RFI) about community consultation project"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/special-programs/resourcing-crossref/", "title": "RCFS Program: Resourcing Crossref for Future Sustainability", "subtitle":"", "rank": 1, "lastmod": "2024-04-29", "lastmod_ts": 1714348800, "section": "Get involved", "tags": [], "description": "Background Following discussions at our July 2023 board meeting, Crossref commenced a large-scale program, dubbed RCFS - ‘Resourcing Crossref for Future Sustainability’ with the following stated purpose:\nCore to any discussion of resourcing Crossref is understanding what makes us sustainable long term. Organizational sustainability aligns the impact Crossref makes with the financial position required to support it. We want to conduct a thoughtful, comprehensive review that centers on our long-term vision and plans, is guided by well-defined problems and principles, includes board, staff, and community input, and details an implementation plan.", "content": "Background Following discussions at our July 2023 board meeting, Crossref commenced a large-scale program, dubbed RCFS - ‘Resourcing Crossref for Future Sustainability’ with the following stated purpose:\nCore to any discussion of resourcing Crossref is understanding what makes us sustainable long term. Organizational sustainability aligns the impact Crossref makes with the financial position required to support it. We want to conduct a thoughtful, comprehensive review that centers on our long-term vision and plans, is guided by well-defined problems and principles, includes board, staff, and community input, and details an implementation plan.\nCrossref is in a healthy financial position. We’ve experienced steady growth in our revenue and operating size over the past 20 years. We’re self-reliant on program revenue. Our growth has come from natural, broad adoption of membership, Content Registration, and the development of services like Metadata Plus and Similarity Check. Core revenue lines, like membership dues and Content Registration, have grown through volume rather than price increases. In fact, basic Content Registration and membership fees haven’t increased in over 20 years.\nBut our fee schedules have also strained under the evolution of our membership. They\u0026rsquo;ve grown complex over time with the addition of new fees, new record types, or new membership and user categories. The complexity in our fees is hard to manage programmatically, and also means it is difficult for the community to reliably predict their costs.\nThe RCFS program is an umbrella for a few related goals:\nMaking fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources Changes might result in fee increases for some, but only to the extent that fee increases are in service to those goals. The ideal outcome would be that all of these changes result in as close to an overall revenue-neutral position as possible while ensuring long term sustainability.\nThroughout all the project discussions and decisions, we are being mindful of Crossref’s fee principles, which the board adopted in 2019.\nScope of the RCFS Program 2024-2025 We engaged Research Consulting in late 2023 to help us identify 11 potential changes that would have the highest positive impact and be most feasible.\nSince then we have narrowed down the 11 projects to five that we will tackle in 2024-2025. 
In February 2024, we started working with our expanded Membership \u0026amp; Fees Committee to progress the discussions. Their remit is to give and assess community input and data in order to make recommendations to the Board.\nThis page sets out the work for each of the five projects under the three core program goals.\nGOAL: MORE EQUITABLE FEES Project 1: Evaluate the USD $275 annual membership fee tier and propose a more equitable pricing structure, which might entail breaking this down into two or more different tiers. Project 2: Define a new basis for sizing and tiering members for their capacity to pay GOAL: SIMPLIFY COMPLEX FEES Project 3: Address and adjust volume discounts for Content Registration Project 4: Address and adjust backfile discounts for Content Registration GOAL: REBALANCE REVENUE SOURCES Project 5: Reflect the increasing value of Crossref as a metadata source, likely increasing Metadata Plus fees read on for more about the goals and the five projects and what\u0026rsquo;s happening with each.\nGOAL: MORE EQUITABLE FEES Membership characteristics have evolved over time, with revenue concentrated on the furthest ends of the membership tiers. The goal of reviewing the tiers is to align them with how membership participation has changed, reduce the number of tiers, and examine how we apply the sliding scale of pricing to an organization’s capacity to pay.\nProject 1: Evaluate the lowest membership tier and propose a more equitable pricing structure, which might entail breaking this down into two or more different tiers. The first project in focus is an analysis of the USD $275 membership tier. The vast majority of Crossref’s revenues come from the top and bottom membership tiers. 65% of membership revenues come from organisations in the USD $275 tier, which is for organisations sized at \u0026lt; USD $1 million, and membership fees account for a much greater share of these organisations’ total payments to Crossref, at 44%. There are currently over 8000 members in that USD $275 tier, so we need to understand them better.\nWork to date In March 2024, we distributed a survey to all members in the USD $275 tier and received over 1000 responses. In May 2024, preliminary survey results were shared and discussed in our community call, and the final findings were considered by the Membership \u0026amp; Fees Committee, who highlighted a number of possible paths that need to be modelled, as well as the need for further research into barriers to Crossref membership. What\u0026rsquo;s next Modelling of future impacts of different approaches to fee changes. In August and September 2024, we are inviting feedback from organisations, who are not Crossref members to understand how our fees can be more accessible. We invite volunteers from publishing organisations to come forward and express their interest in sharing their feedback with this form. Project 2: Define a new basis for sizing and tiering members for their capacity to pay We will review the basis for determining annual membership fee. Currently, tier is based on the higher of:\nTotal annual publishing revenue from all the divisions of your organization (the member is considered to be the largest legal entity) for all types of activities (advertising, sales, subscriptions, databases, article charges, membership dues, etc). Or, if no publishing revenue then: Total annual publishing operations expenses including (but not limited to) staff costs, hosting, outsourcing, consulting, typesetting, etc. 
This criterion has become limiting as it is based on the original premise that all Crossref members are publishers. However, we have government bodies and NGOs, funders, news agencies, museums, pharmaceutical companies, and more - who don’t measure publishing revenue or expenses. It should simply be a way to determine size or capacity to pay.\nAdditionally, even within traditional publishing, so many journals are volunteer-led that it\u0026rsquo;s been tricky for them to size themselves based on either revenue or expenses, since the volunteer group may be very broad but largely involve just a few snatched hours from many different researchers and editors.\nWork to date Conducting market research of sliding scale fee models GOAL: SIMPLIFY COMPLEX FEES Content Registration fees can be broken down into 14 different record types, which can be further split into 42 different content fees, when accounting for current year (CY), back year (BY) discounts, and volume discounts.\nContent registration fees have not changed substantially in 20 years. Fees have been added over time as new record types have been introduced for new communities such as preprint servers and funders.\nThe number of fee variants creates complexity in our code and billing processes; makes it difficult for members to predict their fees because of computation that can currently only be done at the close of the quarter; and inhibits our ability in the future to change our approach to billing or provide accurate running costs of content registration for members.\nProjects 3 \u0026amp; 4: Review volume and backfile discounts for Content Registration A lot of the complexity in our billing can be attributed to underused content type provisions like volume and back year discounts. Discounted prices, like discounts for registering back year records, were introduced to encourage members to register as much content as they had, in order to better capture the scholarly record.\nMany of these discount categories have very little to no activity in them, but a few are still in use and some conceptually encourage best practice in line with fee principles.\nRemoving or reducing discount types would have a significant impact on simplifying our fees without risking significant financial impact, although encouraging previous archival content to be registered is an important incentive. In this review, we\u0026rsquo;re considering to what extent the discounted price encourages registration of high volumes of content and/or back year records.\nWork to date Summary of a recent Membership \u0026amp; Fees committee discussion of the usefulness of discounts:\nWhat\u0026rsquo;s next Analysing recent use trends of the respective content types, volume and back year content types. Considering how these content types are used by members based on their membership tier, join date, and country. GOAL: REBALANCE REVENUE SOURCES Project 5: Reflect the increasing value of Crossref as a metadata source, likely increasing Metadata Plus fees Increasing Metadata Plus fees would help reflect the change in the value of Crossref to the community. Initially, Crossref played primarily an end-point role for members; we are (and will remain) custodians of the scholarly record through metadata. 
Over the last decade—with the growth of search and APIs and the vision of the Research Nexus—Crossref’s role has expanded to be a hub-point for all scholarly stakeholders; we are also now distributors of metadata between and among all parties in scholarly communications who curate or consume metadata.\nIn five years, Crossref has seen the following growth:\nData point | 2018 | 2023\nTotal annual DOI resolutions | 6 billion | 13.8 billion\nTotal annual API/Search calls | 7.6 billion | 14.8 billion\nRebalancing revenue between metadata registration and metadata distribution will more accurately reflect Crossref’s purpose as perceived by the wider community we now serve.\nWork to date We have been in touch with a few Query Affiliate subscribers to understand why they use this service We have started analysing the costs to Crossref of supporting the Plus API We have started gathering usage data in case that is a fee model we could move to What\u0026rsquo;s next Present the data to the Plus subscribers and discuss Present the data and subscriber feedback to the Membership \u0026amp; Fees Committee We invite everyone to comment or ask questions about this program in the dedicated RCFS Program category on our community forum.\n", "headings": ["Background","Scope of the RCFS Program 2024-2025","GOAL: MORE EQUITABLE FEES","GOAL: SIMPLIFY COMPLEX FEES","GOAL: REBALANCE REVENUE SOURCES","GOAL: MORE EQUITABLE FEES","Project 1: Evaluate the lowest membership tier and propose a more equitable pricing structure, which might entail breaking this down into two or more different tiers.","Work to date","What\u0026rsquo;s next","Project 2: Define a new basis for sizing and tiering members for their capacity to pay","Work to date","GOAL: SIMPLIFY COMPLEX FEES","Projects 3 \u0026amp; 4: Review volume and backfile discounts for Content Registration","Work to date","What\u0026rsquo;s next","GOAL: REBALANCE REVENUE SOURCES","Project 5: Reflect the increasing value of Crossref as a metadata source, likely increasing Metadata Plus fees","Work to date","What\u0026rsquo;s next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/this-years-call-for-expressions-of-interest-to-join-our-board/", "title": "This year’s call for expressions of interest to join our board", "subtitle":"", "rank": 1, "lastmod": "2024-04-26", "lastmod_ts": 1714089600, "section": "Blog", "tags": [], "description": "The Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in January 2025. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September.\nExpressions of interest will be due Monday, May 27th, 2024\nThis is an exciting time to join the board, as we have a number of active projects underway: We are considering resourcing Crossref for a sustainable future and board members will be part of deciding any changes to our fees scheme and overseeing its implementation.", "content": "The Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in January 2025. 
The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September.\nExpressions of interest will be due Monday, May 27th, 2024\nThis is an exciting time to join the board, as we have a number of active projects underway: We are considering resourcing Crossref for a sustainable future and board members will be part of deciding any changes to our fees scheme and overseeing its implementation. We\u0026rsquo;re focusing on how our community and metadata can contribute to ensuring the integrity of the scholarly record. We’re broadening our metadata record to capture richer funding and institutional affiliations. We\u0026rsquo;re working towards a future where the scholarly record prioritizes relationships between research outputs to build a holistic research nexus. The board helps guide this work.\nAbout the board elections The board is elected through the “one member, one vote” policy wherein every member organization of Crossref has a single vote to elect representatives to the Crossref board. Board terms are for three years, and this year, there are four seats open for election.\nThe board maintains a balance of seats, with eight seats for smaller members and eight seats for larger members (based on total revenue to Crossref). This is an effort to ensure that the scholarly community\u0026rsquo;s diversity of experiences and perspectives is represented in decisions made at Crossref.\nThis year, we will elect two of the larger member seats (membership tiers $3,900 and above) and two of the smaller member seats (membership tiers $1,650 and below). You don’t need to specify which seat you are applying for; we will provide that information to the nominating committee.\nThe online election will open in September, with results announced at the annual meeting on October 29th, 2024. New members will begin their term in January 2025.\nAbout the Nominating Committee The Nominating Committee reviews the expressions of interest and selects a slate of candidates for election. The slate put forward will exceed the total number of open seats. The committee considers the statements of interest, organizational size, geography, and experience.\n2024 Nominating Committee\nJames Phillpotts*, Director of Content Transformation and Standards, Oxford University Press, committee chair Oscar Donde*, Editor in Chief, Pan Africa Science Journal Rose L’Huillier*, Senior Vice President Researcher Products, Elsevier Ivy Mutambanengwe-Matanga, Chief Operating Officer, African Journals Online Adam Sewell, Chief Technology Officer, IOP Publishing (*) indicates Crossref board member\nWhat is the committee looking for this year The committee looks for skills and experience that will complement the rest of the board. Candidates from countries and regions not currently reflected on the board are strongly encouraged to apply. 
Successful candidates often have some or all of these characteristics:\nDemonstrate a commitment to or understanding of our strategic agenda or the Principles of Open Scholarly Infrastructure; Have expertise that may be underrepresented on the board currently; Hold senior/director-level positions in their organizations; Have experience with governance or community involvement; Represent member organizations that are active in the scholarly communications ecosystem; Demonstrate metadata best practices as shown in the member’s participation report The board is also encouraging Crossref members who are research funders to apply.\nBoard roles and responsibilities Crossref’s services provide a central infrastructure for scholarly communications. Crossref’s board helps shape the future of our services and by extension, impacts the broader scholarly ecosystem. We are looking for board members to contribute their experience and perspective.\nThe role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. The board is representative of our membership base and guides the staff leadership team on trends affecting scholarly communications. The board sets strategic directions for the organization while also providing oversight into policy changes and implementation. Board members have a fiduciary responsibility to ensure sound operations. They do this by attending board meetings as well as joining more specific board committees.\nWho can apply to join the board? Any active member of Crossref can apply to join the board. Crossref membership is open to organizations that produce content, such as academic presses, commercial publishers, standards organizations, and research funders.\nWhat is expected of board members? Board members attend four meetings each year that typically take place in January, March, July, and November. Meetings have taken place in a variety of international locations and travel support is provided when needed. January, March, and November board meetings are held virtually, and all committee meetings take place virtually. Each board member should sit on at least one Crossref committee. Care is taken to accommodate the wide range of time zones in which our board members live.\nWhile the expressions of interest are specific to an individual, the seat that is elected to the board belongs to the member organization. The primary board member also names an alternate who may attend meetings in the event that the primary board member is unable to. There is no personal financial obligation to sit on the board. The member organization must remain in good standing.\nBoard members are expected to be comfortable assuming the responsibilities listed above and to prepare and participate in board meeting discussions.\nHow to apply Please click here to submit your expression of interest. 
We ask for a brief statement about how your organization could enhance our board and a brief personal statement about your interest and experience with Crossref.\nPlease contact me with any questions at lofiesh@crossref.org.\n", "headings": ["About the board elections","About the Nominating Committee","What is the committee looking for this year","Board roles and responsibilities","Who can apply to join the board?","What is expected of board members?","How to apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/common-views-and-questions-about-metadata-across-africa/", "title": "Common views and questions about metadata across Africa", "subtitle":"", "rank": 1, "lastmod": "2024-04-24", "lastmod_ts": 1713916800, "section": "Blog", "tags": [], "description": "This past year has been a captivating journey of immersion within the Crossref community, a mix of online interactions and meaningful in-person experiences. From the engaging Sustainability Research and Innovation Conference in Port Elizabeth, South Africa, to the impactful webinars conducted globally, this has been more than just a professional endeavour; it has been a personal exploration of collaboration, insights, and a shared commitment to pushing the boundaries of scholarly communication.", "content": "This past year has been a captivating journey of immersion within the Crossref community, a mix of online interactions and meaningful in-person experiences. From the engaging Sustainability Research and Innovation Conference in Port Elizabeth, South Africa, to the impactful webinars conducted globally, this has been more than just a professional endeavour; it has been a personal exploration of collaboration, insights, and a shared commitment to pushing the boundaries of scholarly communication.\nWorking collaboratively with research funders and research organisations Cocreation activity in smaller groups at the SRI conference.\nThe adventure began with a significant in-person event, the Sustainability Research and Innovation Conference. In the coastal city of Port Elizabeth, South Africa, I had the honour of hosting a parallel co-creation session titled \u0026ldquo;Connecting Science to Society: A Network Approach to Improving Science Communication in the Global South.\u0026rdquo; The co-creation session addressed research discoverability and accessibility among early-career researchers. Apart from some immediate feedback from the researchers in the room about how they might use co-creation beyond the conference to improve their research experience and outcome, I also had conversations with research funders from the Belmont Forum, Future Earth, and National Research Foundation - South Africa and the National Research Foundation - Mozambique about connecting their grants and grantees with their published outputs referencing Crossref’s Open Funder Registry and research grants registration. A different side conversation was about a community organisation in Botswana that is interested in registering patents with Crossref for proper referencing and protecting the intellectual property of their research on the indigenous communities’ innovations and the associated published work. 
These conversations are ongoing, unveiling a new understanding of unique needs and opportunities to pursue with research funders and research organisations working on indigenous knowledge and innovations.\nLearning from organisations in GEM-eligible countries The journey extended globally through a series of webinars conducted in Bangladesh, Tanzania, Nepal, and Ghana. Collaborating with dedicated Ambassadors and my colleagues leading the Global Equitable Membership (GEM) program, we witnessed an increase in Crossref membership from the GEM countries and initial metadata registration. The GEM Program offers relief from both Crossref membership and Crossref content registration fees for organisations in the least economically advantaged countries in the world, based on the World Bank\u0026rsquo;s IDA list. Susan, in her blog post, \u0026ldquo;The GEM Program: Year One\u0026rdquo;, elaborated on the significance of these efforts and their impact on fostering equitable access to scholarly resources and communication through the expansion of Crossref\u0026rsquo;s membership base in underrepresented regions, such as Bangladesh, Tanzania, Nepal, and Ghana. Specific concerns encountered while presenting the GEM program included feedback expressing reservations about the program\u0026rsquo;s approach, particularly in deciding on eligible countries, and advocating eligibility for the program to be extended to all the non-GEM countries in Africa. Additionally, a conversation with some organisations brought up concerns regarding the program\u0026rsquo;s sustainability, with inquiries about whether GEM was merely a free trial or freemium service, and seeking assurances against future fees. The audience found these sessions helpful, acknowledging that joining fees were no longer going to be a barrier, yet questions about the program\u0026rsquo;s longevity brought out the need for sustained support.\nDiscussing how The Research Nexus can support the community My journey then led me to Makerere University in Uganda for the Consortium of Uganda University Libraries (CUUL 2023) conference and the Forum for Open Research in MENA (FORM 2023) in Abu Dhabi. In Uganda, I noticed how university libraries, institutional repositories, and the research and education network service provider worked in synergy, forming a consortium that played a crucial role in bridging the digital gap and supporting the adoption of open infrastructure. The event was mainly attended by librarians from different universities in Uganda. Most of those I connected with needed more information about Crossref and had questions about how Crossref DOIs are different from ARKs, which they commonly use in their publishing workflows. At FORM 2023, in my presentation titled \u0026ldquo;The Research Nexus: A Rich and Reusable Open Network of Relationships in the Scholarly Record,\u0026rdquo; I shared Crossref\u0026rsquo;s vision for a connected research ecosystem with an audience that comprised researchers, research administrators, and funders, as well as a good number of big publishers like IEEE and Taylor \u0026amp; Francis. The Research Nexus seeks to reveal relationships beyond persistent identifiers, utilising rich metadata to connect various scholarly components. I also took the opportunity at both events to share about The Publishers Learning And Community Exchange (PLACE), an online forum promoting best practices in scholarly publishing. 
The goal was to show attendees how they can actively contribute to and benefit from this vision, fostering a robust and interconnected research community through Crossref\u0026rsquo;s open infrastructure.\nPhoto with Dr. Salwan Abdulateef, Crossref Ambassador - Iraq\nI enjoyed the opportunity to join the National Open Science Dialogue by TCC Africa, which provided crucial insights, emphasising the need for assessing awareness, implementing comprehensive policies, and fostering collaboration around Open Science. Higher education institutions were recognized as influencers in the global Open Science movement, while a call for an inclusive research environment was underscored through open access and data sharing. The dialogue emphasized a collective effort involving policymakers, educators, researchers, and institutions, focusing on inclusivity and collaboration to advance Open Science in East Africa.\nExploring how rich metadata can provide trust signals with members in Kenya Reflecting on the Crossref Nairobi event that happened in February 2024, it was an enriching experience exploring key issues shaping scholarly publishing in Kenya. The discussions also touched on the role of metadata as a trust signal and a tool for the persistence of the scholarly record, particularly in regions where data protection challenges persist. This is exemplified by concerns raised during the event about the fear of data theft, misuse, or loss, especially in places with comparatively weaker data protection laws. The presence of robust metadata, particularly with detailed provenance information, becomes crucial in such contexts, as it enables better identification and handling of potential misuse. Thus, through effective metadata implementation and the persistence facilitated by identifiers, the management of data risks can be significantly improved.\nThe insights from existing Crossref members pointed out contextual challenges, regional differences, and the importance of effective post-publication processes. The conference served as a valuable platform for dialogue, emphasising the collective commitment to continuous improvement of scholarly communication in the country, and the need for continuous awareness and training on making the most of Crossref services. The roundtable discussions during the Crossmark service consultation brought to light various reflections and considerations regarding post-publication changes in publishing workflows. The Crossmark service was a new discovery for most participants, with potential value recognized in facilitating current updates on articles. However, there are existing barriers such as a lack of awareness and technical expertise, suggesting the need for further education to facilitate adoption. Overall, the consultation provided a platform for introspection and exploration of avenues for improving post-publication practices in scholarly publishing.\nCrossref Nairobi group photo\nWe organised the Crossref Nairobi event with the help of colleagues from the outreach team and local Ambassadors, Mercury Shitindo of Kenya, Baraka Ngussa of Tanzania and our Board Member in Kenya, Oscar Donde. It was the first time I saw both my colleagues and Ambassadors in action and working closely together - making presentations and accommodating last-minute facilitation changes to the program. 
Compared to attending or speaking at an event, organising one was a unique experience requiring a lot of planning in advance for logistics and the event program, identifying and keeping in touch with important stakeholders, ushering guests and being on standby for any matters that come up about the event. All of that went very well thanks to the team on the ground and cooperative participants.\nExploring the role of open infrastructure for African universities Attending the recent WACREN 2024 conference was an eye-opening experience, unfolding the role of open infrastructure in addressing challenges faced by African universities. A focus on open access systems and advocacy for decolonizing knowledge were voiced too, including challenges of affordability of DOIs and questions of local ownership amidst global initiatives. Global persistent identifier providers, including ORCID and DataCite, had a presence at the conference, alongside passionate advocates for more locally managed, decentralised infrastructure. These are concerns that Crossref needs to understand better, as we seek to find effective ways of supporting equitable participation in the Research Nexus. The conference resonated with a call for continued work in fostering accessibility, sharing, and leveraging resources to accelerate research and innovation in Africa.\nPhoto with our Ambassadors from West Africa at WACREN 2024 event: Blessing Abumere - Nigeria, Audrey Kenni Nganmeni - Cameroon, Richard Lamptey - Ghana and Oumy Ndiaye - Senegal.\nConversations with Crossref Ambassadors brought about a shared narrative across universities in some African countries. These institutions are actively embracing digital shifts, setting up institutional repositories using platforms like DSpace and OJS. However, challenges persist, particularly in funding and technical capacity. It\u0026rsquo;s heartening to see how national and regional research and education networks step in to help with internet connectivity, opening up collaboration opportunities with other interoperable infrastructure, setting up repositories, providing hosting services and even managing content identifiers.\nDeceptive publishing practices remain a shared concern, and we’ve had requests at these meetings for stricter inclusion criteria for membership of Crossref to ensure quality and trustworthiness of articles accessible through Crossref metadata.\nWe’ve explained to those we’ve met that Crossref doesn’t (and can’t) assess the quality of content or the integrity of the research process. We don’t have the people or the skills, and it isn’t our mission to be the gatekeepers of research quality. A DOI record is just an indication that something was published; it isn’t an indication of quality.\nHowever, we do still have a vital role in preserving the integrity of the scholarly record. We provide the infrastructure which enables those who produce scholarly outputs to provide metadata (effectively evidence) about how they ensure the quality of content and how the outputs fit into the scholarly record. The scholarly record - that network of published outputs, inputs, relationships and contexts - is captured through the metadata records that our members register with us, and that we then distribute freely and openly through our API. The richer and more comprehensive Crossref records are, the more context there is for our members and for the whole scholarly research ecosystem to make their own decisions around trustworthiness. 
Blocking access to the infrastructure creates gaps in the scholarly record, but also potentially blocks legitimate newcomers.\n“Crossref is focused on enriching metadata to provide more and better trust signals while keeping barriers to membership and participation as low as possible to enable an inclusive scholarly record.” Read more about Crossref’s role in preserving the integrity of the Scholarly record in the blog post by Amanda Bartell.\nWhile the landscape of digital scholarly publication witnesses significant strides, a crucial need persists, the importance of preserving and interconnecting metadata to the global scholarly record. It\u0026rsquo;s not just about discoverability, a theme resonating strongly within the community, but about enabling reproducibility, upholding research and editorial integrity, and facilitating reporting and assessment.\nThe path forward As I reflect on this year of immersing myself within the Crossref community, building awareness in new communities, and learning more about the different perceptions across the region, it feels like a personal progression of growth and discovery. From the captivating in-person moments to the global webinars and collaborative efforts to address challenges in scholarly communication, this journey is not just a professional pursuit; it\u0026rsquo;s a personal exploration. The path forward involves continued support, intensified awareness-building, and sustained dialogue, ensuring that the scholarly ecosystem continues to thrive, evolve, and leave a lasting impact.\n", "headings": ["Working collaboratively with research funders and research organisations","Learning from organisations in GEM-eligible countries","Discussing how The Research Nexus can support the community","Exploring how rich metadata can provide trust signals with members in Kenya","Exploring the role of open infrastructure for African universities","The path forward"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/johanssen-obanda/", "title": "Johanssen Obanda", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/outreach/", "title": "Outreach", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/authorization/", "title": "Authorization", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/labs/", "title": "Labs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": 
"https://0-www-crossref-org.libus.csd.mu.edu/authors/martin-eve/", "title": "Martin Eve", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/testing-times/", "title": "Testing times", "subtitle":"", "rank": 1, "lastmod": "2024-04-03", "lastmod_ts": 1712102400, "section": "Blog", "tags": [], "description": "One of the challenges that we face in Labs and Research at Crossref is that, as we prototype various tools, we need the community to be able to test them. Often, this involves asking for deposit to a different endpoint or changing the way that a platform works to incorporate a prototype.\nThe problem is that our community is hugely varied in its technical capacity and level of ability when it comes to modifying their platform.", "content": "One of the challenges that we face in Labs and Research at Crossref is that, as we prototype various tools, we need the community to be able to test them. Often, this involves asking for deposit to a different endpoint or changing the way that a platform works to incorporate a prototype.\nThe problem is that our community is hugely varied in its technical capacity and level of ability when it comes to modifying their platform. Some mega-publishers, for instance, outsource their platforms and so are dependent on third party developers/organizations when they want to make a change. Many smaller publishers, by contrast, use systems such as OJS, which come with Crossref plugins that make life very easy… but that require hard code changes to accommodate prototypes. Such changes are way beyond the technical capacity of most journal editors.\nSo how can we prototype new ideas and test them? One way is by creating new interstitial interfaces that allow people to manually supplement metadata or register for prototype services. Of course, this requires additional work on behalf of the user. Every time they wish to participate they have to visit an extra web page and re-input details that, surely, were included in the original deposit.\nAnother way would be for plugin developers to have an advanced option field that allowed end-users to change their deposit endpoint. It would be excellent to see this feature in OJS, Janeway, and also proprietary systems. This would allow us to work with the community to test new prototype mechanisms, without forcing anyone to edit code. Many systems already include the ability to switch between Crossref’s “test” system and our live deposit API. All I am really suggesting here is the logical next step: allow advanced users to specify a deposit endpoint of their own choosing so that we can give them access to prototype systems.\nOf course, it’s not always that simple. Sometimes, prototype systems will require new data fields on submission, for example. In those cases, there is nothing for it except to modify the plugin or to provide a separate interface. But sometimes, as in the case of the Op Cit project (more on which soon), all the data is already in place; we just need to direct users to a different endpoint. 
Such changes would definitely make testing times less trying.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/tools/", "title": "Tools", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/engineering/", "title": "Engineering", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/joe-wass/", "title": "Joe Wass", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/mending-chestertons-fence-open-source-decision-making/", "title": "Mending Chesterton’s Fence: Open Source Decision-making", "subtitle":"", "rank": 1, "lastmod": "2024-03-18", "lastmod_ts": 1710720000, "section": "Blog", "tags": [], "description": "When each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we\u0026rsquo;re trying to solve, what technical assumptions we\u0026rsquo;re making, what we already tried but didn\u0026rsquo;t work, how much coffee we\u0026rsquo;ve had today. All of these have an effect on the software we write.\nBy the time the next person looks at that code, some of that context will have evaporated.", "content": "When each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we\u0026rsquo;re trying to solve, what technical assumptions we\u0026rsquo;re making, what we already tried but didn\u0026rsquo;t work, how much coffee we\u0026rsquo;ve had today. All of these have an effect on the software we write.\nBy the time the next person looks at that code, some of that context will have evaporated. There may be helpful code comments, tests, and specifications to explain how it should behave. But they don\u0026rsquo;t explain the path not taken, and why we didn\u0026rsquo;t take it. Or those occasions where the facts changed, so we changed our mind.\nSome parts of our system are as old as Crossref itself. Whilst our process still involves coffee, it\u0026rsquo;s safe to say that most of our working assumptions have changed, and for good reasons! We have to be very careful when working with our oldest code. We always consider why it was written that way, and what might have changed since. We\u0026rsquo;re always on the look out for Chesterton\u0026rsquo;s Fence!\nLeaving a Trail We\u0026rsquo;re building a new generation of systems at Crossref, and as we go we\u0026rsquo;re being deliberate about supporting the people who will maintain it.\nWhen our oldest code was written, the software development team all worked in an office with a whiteboard or three, and the code was proprietary. Twenty years later, things are very different. 
The software development team is spread over 8 timezones. Thanks to POSI, all the new code we write is open source, so the next people to read that code might not even be Crossref staff.\nWorking increasingly asynchronously, without that whiteboard, we need to record the options, collect evidence, and peer-review them within the team.\nSo for the past couple of years the software team has maintained a decision register. The first decision we recorded was that we should record decisions! Since then we have recorded the significant decisions as they arise. Plus some historical ones.\nThese aren\u0026rsquo;t functional specifications, which describe what the system should do. It\u0026rsquo;s the decisions and trade-offs we made along the way to get to the how. Look out for another blog post about specifications.\nBy leaving a trail of explanations as we go, we make it easier for people to understand why code was written, and what has changed. We\u0026rsquo;re writing the story of our new systems. This makes it easier to alter the system in future in response to changes in our community, and the metadata they use.\nDifficult Decisions There are some fun challenges to building systems at Crossref. We have a lot of data. Our schema is very diverse, and has a vast amount of domain knowledge embedded in it. It\u0026rsquo;s changed over time to accommodate 20 years of scholarly publishing innovations. Our community is diverse too, from small one-person publishers with a handful of articles, through to large ones that publish millions.\nWhat might be an obvious decision for a database table with a thousand rows doesn\u0026rsquo;t always translate to a million. When you get to a billion, things change again. An initially sensible choice might not scale. And a scalable solution might look over-engineered if we had millions of DOIs, rather than hundreds of millions.\nThe diversity of the data also poses challenges. A very simple feature might get complicated or expensive when it meets the heterogeneity of our metadata and membership. What might scale for journal article or grant metadata might not work for book chapters.\nThe big decisions need careful discussion, experimentation, and justification.\n2NF or not 2NF One such recent decision was how we structure our SQL schema for the database that powers our new \u0026lsquo;relationships\u0026rsquo; REST API endpoint, currently in development.\nThe data model is simple: we have a table of Relationships which connect pairs of Items. And each Item can have properties (such as a type). The way to model this is straightforward, following conventional normalization rules:\nWe built the API around it, and all was well.\nWe then added a feature which lets you look up relationships based on the properties of the subject or object. For example \u0026ldquo;find citations where the subject is an article and the object is a dataset\u0026rdquo;. This design worked well in our initial testing. We loaded more data into it, and it continued to work well.\nAnd then, the context changed. Once we tested loading a billion relationships in the database, the performance dropped. The characteristics of the data: size, shape and distribution, reached a point where the database was unable to run queries in a timely way. The PostgreSQL query planner became unpredictable and occasionally produced some quite exciting query plans (to non-technical readers: databases are neither the time nor the place for excitement).\nThis is a normal experience in scaling up a system. 
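As a rough illustration of the normalized model and the property-based lookup described above, here is a toy sketch using Python's built-in sqlite3 module. The table names, column names, and sample values are invented for the example; they are not the real schema or the real PostgreSQL setup.

```python
# Toy sketch of the normalized "relationships" model discussed above,
# using SQLite purely for illustration; the real schema and names differ.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE item (id TEXT PRIMARY KEY, type TEXT);
    CREATE TABLE relationship (
        subject_id TEXT REFERENCES item(id),
        relation   TEXT,
        object_id  TEXT REFERENCES item(id)
    );
""")
db.executemany("INSERT INTO item VALUES (?, ?)",
               [("10.5555/article-1", "journal-article"),
                ("10.5555/data-1", "dataset")])
db.execute("INSERT INTO relationship VALUES (?, ?, ?)",
           ("10.5555/article-1", "cites", "10.5555/data-1"))

# "Find citations where the subject is an article and the object is a dataset":
rows = db.execute("""
    SELECT r.subject_id, r.relation, r.object_id
    FROM relationship r
    JOIN item s ON s.id = r.subject_id
    JOIN item o ON o.id = r.object_id
    WHERE r.relation = 'cites'
      AND s.type = 'journal-article'
      AND o.type = 'dataset'
""").fetchall()
print(rows)
# At a billion rows, joins and query plans like this became unpredictable,
# which is the scaling problem described next.
```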
We expected that something like this would happen at some point, but you don\u0026rsquo;t know when it will happen until you try. We bounced around some ideas and came up with a couple of alternatives. Each made trade-offs around processing time, data storage and query flexibility. The best way to evaluate them was to use real data at a representative scale.\nOne of the options was denormalisation. This is a conventional solution to this kind of problem, but was not our first choice as it involves extra machinery to keep the data up-to-date, and more storage. It would not have been the correct solution for a smaller dataset. But we had the evidence that the other two approaches would not scale predictably.\nBy combining the data into one table, we can serve up API requests much more predictably, and with much better performance. This code is now running with the right performance. Technical readers note that this diagram is simplified. The real SQL schema is a little different.\nWithout writing this history down, and explaining what we tried, someone might misunderstand the reason for the code and try to simplify it. Decision record DR-0500 guards against that.\nBut one day, when the context changes, future developers will be able to come back and modify the code, because they understand why it was like that in the first place.\n", "headings": ["Leaving a Trail","Difficult Decisions","2NF or not 2NF"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/mending-chestertons-fence-open-source-decision-making/", "title": "Mending Chesterton’s Fence: Open Source Decision-making", "subtitle":"", "rank": 1, "lastmod": "2024-03-18", "lastmod_ts": 1710720000, "section": "Blog", "tags": [], "description": "When each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we\u0026rsquo;re trying to solve, what technical assumptions we\u0026rsquo;re making, what we already tried but didn\u0026rsquo;t work, how much coffee we\u0026rsquo;ve had today. All of these have an effect on the software we write.\nBy the time the next person looks at that code, some of that context will have evaporated.", "content": "When each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we\u0026rsquo;re trying to solve, what technical assumptions we\u0026rsquo;re making, what we already tried but didn\u0026rsquo;t work, how much coffee we\u0026rsquo;ve had today. All of these have an effect on the software we write.\nBy the time the next person looks at that code, some of that context will have evaporated. There may be helpful code comments, tests, and specifications to explain how it should behave. But they don\u0026rsquo;t explain the path not taken, and why we didn\u0026rsquo;t take it. Or those occasions where the facts changed, so we changed our mind.\nSome parts of our system are as old as Crossref itself. Whilst our process still involves coffee, it\u0026rsquo;s safe to say that most of our working assumptions have changed, and for good reasons! We have to be very careful when working with our oldest code. We always consider why it was written that way, and what might have changed since. 
We\u0026rsquo;re always on the look out for Chesterton\u0026rsquo;s Fence!\nLeaving a Trail We\u0026rsquo;re building a new generation of systems at Crossref, and as we go we\u0026rsquo;re being deliberate about supporting the people who will maintain it.\nWhen our oldest code was written, the software development team all worked in an office with a whiteboard or three, and the code was proprietary. Twenty years later, things are very different. The software development team is spread over 8 timezones. Thanks to POSI, all the new code we write is open source, so the next people to read that code might not even be Crossref staff.\nWorking increasingly asynchronously, without that whiteboard, we need to record the options, collect evidence, and peer-review them within the team.\nSo for the past couple of years the software team has maintained a decision register. The first decision we recorded was that we should record decisions! Since then we have recorded the significant decisions as they arise. Plus some historical ones.\nThese aren\u0026rsquo;t functional specifications, which describe what the system should do. It\u0026rsquo;s the decisions and trade-offs we made along the way to get to the how. Look out for another blog post about specifications.\nBy leaving a trail of explanations as we go, we make it easier for people to understand why code was written, and what has changed. We\u0026rsquo;re writing the story of our new systems. This makes it easier to alter the system in future in response to changes in our community, and the metadata they use.\nDifficult Decisions There are some fun challenges to building systems at Crossref. We have a lot of data. Our schema is very diverse, and has a vast amount of domain knowledge embedded in it. It\u0026rsquo;s changed over time to accommodate 20 years of scholarly publishing innovations. Our community is diverse too, from small one-person publishers with a handful of articles, through to large ones that publish millions.\nWhat might be an obvious decision for a database table with a thousand rows doesn\u0026rsquo;t always translate to a million. When you get to a billion, things change again. An initially sensible choice might not scale. And a scalable solution might look over-engineered if we had millions of DOIs, rather than hundreds of millions.\nThe diversity of the data also poses challenges. A very simple feature might get complicated or expensive when it meets the heterogeneity of our metadata and membership. What might scale for journal article or grant metadata might not work for book chapters.\nThe big decisions need careful discussion, experimentation, and justification.\n2NF or not 2NF One such recent decision was how we structure our SQL schema for the database that powers our new \u0026lsquo;relationships\u0026rsquo; REST API endpoint, currently in development.\nThe data model is simple: we have a table of Relationships which connect pairs of Items. And each Item can have properties (such as a type). The way to model this is straightforward, following conventional normalization rules:\nWe built the API around it, and all was well.\nWe then added a feature which lets you look up relationships based on the properties of the subject or object. For example \u0026ldquo;find citations where the subject is an article and the object is a dataset\u0026rdquo;. This design worked well in our initial testing. We loaded more data into it, and it continued to work well.\nAnd then, the context changed. 
Once we tested loading a billion relationships in the database, the performance dropped. The characteristics of the data: size, shape and distribution, reached a point where the database was unable to run queries in a timely way. The PostgreSQL query planner became unpredictable and occasionally produced some quite exciting query plans (to non-technical readers: databases are neither the time nor the place for excitement).\nThis is a normal experience in scaling up a system. We expected that something like this would happen at some point, but you don\u0026rsquo;t know when it will happen until you try. We bounced around some ideas and came up with a couple of alternatives. Each made trade-offs around processing time, data storage and query flexibility. The best way to evaluate them was to use real data at a representative scale.\nOne of the options was denormalisation. This is a conventional solution to this kind of problem, but was not our first choice as it involves extra machinery to keep the data up-to-date, and more storage. It would not have been the correct solution for a smaller dataset. But we had the evidence that the other two approaches would not scale predictably.\nBy combining the data into one table, we can serve up API requests much more predictably, and with much better performance. This code is now running with the right performance. Technical readers note that this diagram is simplified. The real SQL schema is a little different.\nWithout writing this history down, and explaining what we tried, someone might misunderstand the reason for the code and try to simplify it. Decision record DR-0500 guards against that.\nBut one day, when the context changes, future developers will be able to come back and modify the code, because they understand why it was like that in the first place.\n", "headings": ["Leaving a Trail","Difficult Decisions","2NF or not 2NF"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/credential-checking-at-crossref/", "title": "Credential Checking at Crossref", "subtitle":"", "rank": 1, "lastmod": "2024-03-15", "lastmod_ts": 1710460800, "section": "Blog", "tags": [], "description": "It turns out that one of the things that is really difficult at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This is the result of many legacy systems storing various mappings in various different software components, from our Content System through to our CRM. To this end, I wrote a basic application, credcheck, that will allow you to test a Crossref credential against an API.", "content": " It turns out that one of the things that is really difficult at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This is the result of many legacy systems storing various mappings in various different software components, from our Content System through to our CRM. To this end, I wrote a basic application, credcheck, that will allow you to test a Crossref credential against an API.\nThere are two modes of usage. 
First, a command-line interface that allows you to run a basic command and get feedback:\nUsage: cli.py [OPTIONS] USERNAME PASSWORD DOI\nSecond, you can use it as a programmatic library in Python:\nimport cred\ncredential = cred.Credential(username=username, password=password, doi=doi)\nif not credential.is_authenticated():\n\u0026hellip;\nif credential.is_authorised():\n\u0026hellip;\nThe tool splits down authentication (whether the given username and password are valid) and authorisation (whether the valid credentials are usable against a specific DOI/prefix).\nFor technical information, the way this works is by attempting to run a report on the specific DOI in question and then scraping the response page. We hope, at some future point, that there will be a real API for this, but for now this solves the problem as a bridge.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/credential-checking-at-crossref/", "title": "Credential Checking at Crossref", "subtitle":"", "rank": 1, "lastmod": "2024-03-15", "lastmod_ts": 1710460800, "section": "Blog", "tags": [], "description": "It turns out that one of the things that is really difficult at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This is the result of many legacy systems storing various mappings in various different software components, from our Content System through to our CRM. To this end, I wrote a basic application, credcheck, that will allow you to test a Crossref credential against an API.", "content": " It turns out that one of the things that is really difficult at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This is the result of many legacy systems storing various mappings in various different software components, from our Content System through to our CRM. To this end, I wrote a basic application, credcheck, that will allow you to test a Crossref credential against an API.\nThere are two modes of usage. First, a command-line interface that allows you to run a basic command and get feedback:\nUsage: cli.py [OPTIONS] USERNAME PASSWORD DOI\nSecond, you can use it as a programmatic library in Python:\nimport cred\ncredential = cred.Credential(username=username, password=password, doi=doi)\nif not credential.is_authenticated():\n\u0026hellip;\nif credential.is_authorised():\n\u0026hellip;\nThe tool splits down authentication (whether the given username and password are valid) and authorisation (whether the valid credentials are usable against a specific DOI/prefix).\nFor technical information, the way this works is by attempting to run a report on the specific DOI in question and then scraping the response page. We hope, at some future point, that there will be a real API for this, but for now this solves the problem as a bridge.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/subject-codes-incomplete-and-unreliable-have-got-to-go/", "title": "Subject codes, incomplete and unreliable, have got to go", "subtitle":"", "rank": 1, "lastmod": "2024-03-13", "lastmod_ts": 1710288000, "section": "Blog", "tags": [], "description": "Subject classifications have been available via the REST API for many years but have not been complete or reliable from the start and will soon be deprecated. 
The subject metadata element was born out of a Labs experiment intended to enrich the metadata returned via Crossref Metadata Search with All Subject Journal Classification codes from Scopus. This feature was developed when the REST API was still fairly new, and we now recognize that the initial implementation worked its way into the service prematurely.", "content": "Subject classifications have been available via the REST API for many years but have not been complete or reliable from the start and will soon be deprecated. \nThe subject metadata element was born out of a Labs experiment intended to enrich the metadata returned via Crossref Metadata Search with All Subject Journal Classification codes from Scopus. This feature was developed when the REST API was still fairly new, and we now recognize that the initial implementation worked its way into the service prematurely.\nWhile subject classifications in Crossref metadata could be very useful, the current implementation in the REST API is problematic for three primary reasons:\nThey are misleadingly exposed in the API as a property of the work, when in fact they are a property of the container (e.g. a journal or conference proceeding). Just because a journal’s broad topic category is “X” doesn’t mean that a particular article in the journal is about “X.”\nExisting works may have outdated subjects. Originally, subject codes were not updated periodically. However, subjects exposed in the /journals route are now updated once a day. Those exposed via the /works endpoint are indexed along with works, and so when a new subject list is ingested, new DOIs start getting new subjects, but existing works may have outdated subjects. We don’t have a mechanism for forcing updates when incorrect subject values are returned via the REST API, so this data can be stale and incorrect.\nThey are not applied to everything. This is because the Scopus list does not cover all the journals that Crossref has (conversely, the Scopus list contains some journals Crossref does not have), and does not contain other container types.\nThe Labs team investigated options for improving subject classification coverage but ultimately concluded that there are insufficient solutions to the coverage problem. For more, please see Esha Datta’s findings published at Force11’s Upstream: https://0-doi-org.libus.csd.mu.edu/10.54900/n6dnt-xpq48\nWhere does that leave us? Rather than continuing to supply unreliable and misleading subject category metadata, we will be deprecating this feature in the coming weeks. To minimize disruption and avoid breaking changes at this time, we will be removing this data from our index, so the subject element will simply be empty. We may remove the subject element in the future.\nWe know that the community’s desire for subject-based analysis of metadata is very strong, and we have supported efforts to establish a multidisciplinary taxonomy. Inaccurate codes in the meantime do not help but actually hinder these efforts, giving the false impression that they are correct.\nWe aim to deprecate the subject codes in April of this year.\nPlease let us know if you have any questions or concerns by leaving a comment below, which will start a thread in our community forum.\nFrequently asked questions\nQ. Will the subject field continue to be available and functional?\nA. The subject metadata element will continue to be included in the JSON response but will not return any values.\nQ. Will new subject codes be added in the future?\nA. 
We do not have any current plans to add new subject codes in the future.\nQ. I received a notification about this, but we don’t use subject codes. Do I need to do anything?\nA. No, if you do not currently use the subject element, you do not need to do anything about this change.\nQ. I noticed that wrong or inaccurate subject codes were assigned to my works. Is this a solution?\nA. Yes. Until we can identify an accurate and sustainable system for assigning subject codes to Crossref metadata records, we want to stop assigning inaccurate subject codes and remove all existing assignments.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/learning/", "title": "API Learning Hub", "subtitle":"", "rank": 1, "lastmod": "2024-03-07", "lastmod_ts": 1709769600, "section": "API Learning Hub", "tags": [], "description": "We want everybody to have access to the metadata in our API but we also acknowledge that this is not a trivial task and some help does not hurt. Here we will collect some of the tools and resources that our team prepare and that you can freely use to start your metadata exploring adventure.\nChoose your path: First steps API 101 for publishers, researchers, and librarians with Postman and Crossref: Postman offers an friendly interface to build and modify your API queries.", "content": "We want everybody to have access to the metadata in our API but we also acknowledge that this is not a trivial task and some help does not hurt. Here we will collect some of the tools and resources that our team prepare and that you can freely use to start your metadata exploring adventure.\nChoose your path: First steps API 101 for publishers, researchers, and librarians with Postman and Crossref: Postman offers an friendly interface to build and modify your API queries. In this collection you will find templates to which you can add or modify the parameters of your choice. Postman collection | Video tutorial | Slides Intro to Crossref API using code: if your aim is to create workflows to download, analyze, and visualize data, you will probably want to create programs and scripts. We currently have available the following tutorials using R and/or Python that you can use and modify to your convenience. Python notebook | R notebook Retrieve specific metadata Crossref API for funding data: how to query data from the funders endpoint and grant-type records.\nR dashboard Get Crossref citations: this project contains a Jupyter notebook that shows how to compare citation counts from different Crossref endpoints.\nPython notebook Get journal-level metadata from Crossref’s API using R: how to retrieve journal-level metadata from a list of ISSN.\nNotebook Get Retraction Watch metadata from Crossref’s API: how to retrieve Retraction Watch and general updates.\nR notebook Cursor-based pagination using R: how to retrieve long lists of records using cursors.\nR notebook Retrieve all the metadata You have an alternative to the REST API if your goal is to obtain the entire body of Crossref’s records. With the public data file you have access to every DOI ever registered with Crossref. Learn more about the Public Data File\nSome convenient tools: API cheatsheet: we prepared a quick-reference sheet that you can use to get started with building your queries.\nJSON-file viewer: When you make a request to the REST API you will get a JSON file in the output. If you are making requests from your web browser and depending on its version, perhaps you will need a JSON-viewer plugin. 
For example, click this simple request. If you see a string of seemingly disorganized text, you will need to install a plugin. Alternatively, you can use other viewers such as JSON Hero, which provides an extra layer of interactivity.\nRelated resources Crossref Unified Resource API: Here you will find a detailed list of the REST API endpoints and parameters that you can use. Tips for using the Crossref REST API: This section of our documentation should be one of the first resources you check on your API journey. Crossref Gitlab tutorial repository: Download the source code of our selection of tutorials. ", "headings": ["Choose your path:","First steps","Retrieve specific metadata","Retrieve all the metadata","Some convenient tools:","Related resources"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/learning/public-data-file/", "title": "Crossref Public Data File", "subtitle":"", "rank": 1, "lastmod": "2024-03-07", "lastmod_ts": 1709769600, "section": "API Learning Hub", "tags": [], "description": "You have an alternative to the REST API if your goal is to obtain the entire body of Crossref’s records. With the public data file you have access to every DOI ever registered with Crossref.\n2024 Public data file Release notes. Public data file. Data file sample. Tips for working with Crossref public data files and Plus snapshots Important considerations: The records are in JSON. Metadata is supplied by our members and, as such, not all records have the same completeness or quality of metadata.", "content": "You have an alternative to the REST API if your goal is to obtain the entire body of Crossref’s records. With the public data file you have access to every DOI ever registered with Crossref.\n2024 Public data file Release notes. Public data file. Data file sample. Tips for working with Crossref public data files and Plus snapshots Important considerations: The records are in JSON. Metadata is supplied by our members and, as such, not all records have the same completeness or quality of metadata. Every year our metadata corpus grows. The first data file was 65GB and held 112 million records; this year the file weighs in at 212 GB and contains metadata for 156 million records, or all Crossref records registered up to and including April 2024. Decompressing the .tar.gz files will take you several hours. Additional convenient tools: Given the size and the number of files that the public data file comprises, we started experimenting with some additional tools to improve access to the data and repack it into additional formats:\nPacker: A Python application that allows you to repack the Crossref data dump into JSON-L. dois2SQLite: This tool will help you load Crossref metadata into a SQLite database. Crossref Data Dump Repacker: Rust Edition: A Rust application that allows you to repack the Crossref data dump into SQLite. API for Interacting with the Crossref Annual Data File: A Python API for interacting with the Crossref Annual Data File dump, allowing you to build various indexes for working with the annual data dump from Crossref. These tools are experimental, so please remember that they are released without warranties or support, but we are happy to hear about your experience using them. 
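As a small illustration of why the repacked formats are convenient: once the dump is in JSON-Lines form (as a tool like Packer produces), records can be streamed one at a time rather than loading whole files into memory. This is only a sketch, assuming one JSON record per line; the filename is made up.

```python
# Sketch: stream records from a JSON-Lines repack of the public data file.
# Assumes one metadata record per line; adjust for gzip-compressed output.
import json

def iter_records(path):
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if line.strip():
                yield json.loads(line)

# Example: count records that carry a DOI (filename is hypothetical).
# total = sum(1 for record in iter_records("crossref-public-data-2024.jsonl")
#             if record.get("DOI"))
```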
You can read more about them in this blog post\nPrevious releases (Click to expand)\n+- 2023\rRelease notes 2023 March public data file (185GB) +- 2022\rRelease notes 2022 April public data file (167GB) +- 2021\rRelease notes 2021 January public data file (102GB) +- 2020\rRelease notes 2020 March public data file (67GB) ", "headings": ["2024 Public data file","Tips for working with Crossref public data files and Plus snapshots","Important considerations:","Additional convenient tools:","Previous releases"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/collaboration/", "title": "Collaboration", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/doaj-and-crossref-renew-their-partnership-to-support-the-least-resourced-journals/", "title": "DOAJ and Crossref renew their partnership to support the least-resourced journals", "subtitle":"", "rank": 1, "lastmod": "2024-03-06", "lastmod_ts": 1709683200, "section": "Blog", "tags": [], "description": "Crossref and DOAJ share the aim to encourage the dissemination and use of scholarly research using online technologies and to work with and through regional and international networks, partners, and user communities for the achievement of their aims to build local institutional capacity and sustainability. Both organisations agreed to work together in 2021 in a variety of ways, but primarily to ‘encourage the dissemination and use of scholarly research using online technologies, and regional and international networks, partners and communities, helping to build local institutional capacity and sustainability around the world.", "content": "Crossref and DOAJ share the aim to encourage the dissemination and use of scholarly research using online technologies and to work with and through regional and international networks, partners, and user communities for the achievement of their aims to build local institutional capacity and sustainability. Both organisations agreed to work together in 2021 in a variety of ways, but primarily to ‘encourage the dissemination and use of scholarly research using online technologies, and regional and international networks, partners and communities, helping to build local institutional capacity and sustainability around the world.’ Some of the fruits of this labour are:\nDOAJ added support for Crossref XML to make it easier for publishers to upload metadata Closer collaboration between customer/member support at both organisations, making it easier for publishers and journal editors to navigate both service’s technologies the launch of PLACE: ‘a ‘one-stop shop’ for information to support publishers in adopting best practices the industry developed’ (together with other partners) a pilot gap analysis of the journals in DOAJ with the possibility of helping them start to use and resolve DOIs. The new agreement, signed earlier this month, will slightly shift focus to build upon existing collaborations, particularly around metadata. 
One of the primary sections of the MOU is enhancing support for the least-resourced journals by:\nAssigning DOIs and depositing the metadata with Crossref Finding ways to improve their DOAJ application experience to help them become indexed Collect and ingest their Crossref metadata into DOAJ Help them to get preserved via JASPER or similar initiatives Help identify other local partners, such as Crossref Sponsoring Organisations, to support their use of Crossref services It’s great that we can further underpin what is already a good working relationship. Both Crossref and DOAJ are central to discovery so it’s a natural partnership. Helping journals meet better standards and become indexed to make them more discoverable on a global scale is at the heart of our strategy. This agreement opens up a new avenue that allows the community to really focus on supporting those journals and the research they publish.’\n\u0026ndash; Joanna Ball, Managing Director of DOAJ\n‘The collaborations with DOAJ so far only reconfirmed our shared goal to help make the global scholarly communications system more equitable wherever we can. Our joint projects aim to seek out and devise support for resource-constrained journals in multiple ways. DOAJ’s work is essential in helping journals to develop good practice, while Crossref offers an open infrastructure to ensure all journals can be included and discoverable in the global scholarly record.’\n\u0026ndash; Ginny Hendricks, Director of Member and Community Outreach at Crossref\n\u0026mdash;\u0026mdash;\u0026ndash; END \u0026mdash;\u0026mdash;\nAbout DOAJ DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer reviewed journals. DOAJ deploys around one hundred carefully selected volunteers from the community of library and other academic disciplines to assist in curating open access journals. This independent database contains over 20,400 peer-reviewed open access journals covering all areas of science, technology, medicine, social sciences, arts and humanities. DOAJ is financially supported worldwide by libraries, publishers and other like-minded organizations. DOAJ services (including the evaluation of journals) are free for all, and all data provided by DOAJ are harvestable via OAI/PMH and the API. See https://doaj.org/ for more information.\nAbout Crossref Crossref is a global community-governed open scholarly infrastructure that makes all kinds of research objects easy to find, assess, and reuse through a number of services critical to research communications, including an open metadata API that sees over 1.5 billion queries every month. Crossref’s ~20,000 members come from 155 countries and are made up of universities, publishers, funders, government bodies, libraries, and research groups. 
Their ~155 million DOI records contribute to the collective vision of a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nFor more information please contact: dominic@doaj.org and rclark@crossref.org\n", "headings": ["About DOAJ","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/staff/", "title": "Staff", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-do-we-know-about-dois/", "title": "What do we know about DOIs", "subtitle":"", "rank": 1, "lastmod": "2024-02-29", "lastmod_ts": 1709164800, "section": "Blog", "tags": [], "description": "Crossref holds metadata for approximately 150 million scholarly artifacts. These range from peer reviewed journal articles through to scholarly books through to scientific blog posts. In fact, amid such heterogeneity, the only factor that unites such items is that they have been assigned a digital object identifier (DOI); a unique identification string that can be used to resolve to a resource pertaining to said metadata (often, but not always, a copy of the work identified by the metadata).", "content": "Crossref holds metadata for approximately 150 million scholarly artifacts. These range from peer reviewed journal articles through to scholarly books through to scientific blog posts. In fact, amid such heterogeneity, the only factor that unites such items is that they have been assigned a digital object identifier (DOI); a unique identification string that can be used to resolve to a resource pertaining to said metadata (often, but not always, a copy of the work identified by the metadata).\nWhat, though, do we actually know about the state of persistence of these links? How many DOIs resolve correctly? How many landing pages, at the other end of the DOI resolution, contain the information that is supposed to be there, including the title and the DOI itself? How can we find out?\nThe first and seemingly most obvious way that we can obtain some of these data is by working through the most recent sample of DOIs and attempting to fetch metadata from each of them using a standard Python script. This involves using the httpx library to attempt to resolve each of the DOIs to a resource, visiting that resource and seeing what the landing page yields.\nEven this is not straightforward. Landing pages can be HTML resources or they can be PDF files, among other things. In the case of PDF files, detecting a run of text is not simple, as a single line break can be enough to foil our search. 
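For a sense of what that first pass looks like, here is a much-simplified sketch using httpx. The title and DOI checks are reduced to naive substring matching and the function name is illustrative; the real crawl handles PDFs, redirects, and blocking far more carefully.

```python
# Simplified sketch of resolving a DOI and checking its landing page,
# in the spirit of the crawl described above. Not the production code.
import httpx

def check_doi(doi, title):
    url = f"https://doi.org/{doi}"
    with httpx.Client(follow_redirects=True, timeout=30) as client:
        response = client.get(url)
    content_type = response.headers.get("content-type", "")
    page = response.text if "html" in content_type else ""
    return {
        "doi": doi,
        "status": response.status_code,                 # e.g. 200, 403, 404
        "title_found": title.lower() in page.lower(),   # naive substring match
        "doi_found": f"https://doi.org/{doi}" in page,  # recommended display format
    }

# print(check_doi("10.5555/12345678", "Toward a Unified Theory of High-Energy Metaphysics"))
```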
Nonetheless, when using this strategy we find the following statistics:\nTotal DOI count in sample: 5000\nNumber of HTTP 200 response: 3301*\nPercentage of HTTP 200 responses: 66.02%\nNumber of titles found on landing page: 1580\nPercentage of titles found on landing page: 31.60%\nNumber of DOIs in recommended format found on landing page: 1410\nPercentage of DOIs in recommended format found on landing page: 28.20%\nNumber of titles and DOIs found on landing page: 929\nPercentage of titles and DOIs found on landing page: 18.58%\nNumber of PDFs found on landing page: 1469\nPercentage of PDFs found on landing page: 29.38%\nPercent of PDFs found on landing pages that loaded: 44.50%\n* an HTTP 200 response means that the web page loaded correctly\nWhile these numbers look quite low, the problem here is that a large number of scholarly publishers use Digital Rights Management techniques on their sites that block a crawl of this type. We can use systems like Playwright to remote control browsers to do the crawling, so that the request looks as much like a genuine user as possible and to evade such detection systems. However, lots of these sites detect headless browsers (where the browser is invisible and running on a server) and block them with a 403 Permission Denied error.\nThere\u0026rsquo;s a great Github javascript suite that aims to help evade headless detection. The tests it uses are:\nUser Agent: in a browser running with puppeteer in headless mode, user agent includes Headless.\nApp Version: same as User Agent above.\nPlugins: headless browsers don\u0026rsquo;t have any plugins. So we can say that if it has plugin it\u0026rsquo;s headful, but not otherwise since some browsers, like Firefox, don\u0026rsquo;t have default plugins.\nPlugins Prototype: check if the Plugin and PluginsArray prototype are correct.\nMime Type: similar to Plugins test, where headless browsers don\u0026rsquo;t have any mime type\nMime Type Prototype: check if the MimeType and MimeTypeArrayprototype are correct.\nLanguages: all headful browser has at least one language. So we can say that if it has no language it\u0026rsquo;s headless.\nWebdriver: this property is true when running in a headless browser.\nTime elapse: it pops an alert() on page and if it\u0026rsquo;s closed too fast, means that it\u0026rsquo;s headless.\nChrome element: it\u0026rsquo;s specific for chrome browser that has an element window.chrome.\nPermission: in headless mode Notification.permission and navigator.permissions.query report contradictory values.\nDevtool: puppeteer works on devtools protocol, this test checks if devtool is present or not.\nBroken Image: all browser has a default nonzero broken image size, and this may not happen on a headless browser.\nOuter Dimension: the attributes outerHeight and outerWidth have value 0 on headless browser.\nConnection Rtt: The attribute navigator.connection.rtt,if present, has value 0 on headless browser.\nMouse Move: The attributes movementX and movementY on every MouseEvent have value 0 on headless browser.\nUsing the stealth plugin for Playwright also allows us to evade most of these checks. This just leaves Mouse Move and Broken Image detection, which I thought would not outweigh all the other factors. We can also jitter the connection with arbitrary delays so that it should appear to be coming at random intervals, rather than a robotic crawl.\nYet the basic fact is that we are still blocked from crawling many sites. 
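For reference, driving a real, headful browser with jittered delays looks roughly like the sketch below, using Playwright's Python sync API. This is illustrative only and is not the actual Shelob code; the stealth-plugin step is omitted and the delay values are arbitrary.

```python
# Rough sketch of fetching landing pages with a headful browser and
# randomised delays, as discussed above. Not the actual Shelob code.
import random
import time
from playwright.sync_api import sync_playwright

def fetch_landing_pages(urls):
    pages = {}
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)   # headful: far fewer blocks
        page = browser.new_page()
        for url in urls:
            time.sleep(random.uniform(2, 10))         # jitter to avoid a robotic cadence
            response = page.goto(url, wait_until="domcontentloaded")
            pages[url] = (response.status if response else None, page.content())
        browser.close()
    return pages

# fetch_landing_pages(["https://doi.org/10.5555/12345678"])
```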
This does not happen when we put the browser into headful mode, so current detection techniques have clearly evolved in the past half decade (since Detect Headless) was designed.\nIf, however, we run the browser in a headful mode, the results are somewhat stunningly different:\nTotal DOI count in sample: 5000\nNumber of HTTP 200 response: 4852\nPercent of HTTP 200 responses: 97.04%\nNumber of titles found on landing page: 2547\nPercentage of titles found on landing page: 50.94%\nNumber of DOIs in recommended format found on landing page: 2424\nPercentage of DOIs in recommended format found on landing page: 48.48%\nNumber of titles and DOIs found on landing page: 1574\nPercentage of titles and DOIs found on landing page: 31.48%\nNumber of PDFs found on landing page: 2085\nPercentage of PDFs found on landing page: 41.70%\nPercentage of PDFs found on landing pages that loaded: 42.97%\nLet\u0026rsquo;s talk about the resolution statistics. Other studies, looking at general links on the web, have found a link-rot rate of about 60%-70% over a ten-year period (Lessig, Zittrain, and Albert 2014; Stox 2022). The DOI resolution rate that we have, with 97% of links resolving (or a 3% link-rot rate), is far better and more robust than a web link in general.\nIs 3% a good or a bad number? It\u0026rsquo;s more robust than the web in general, but it still means that for every 100 DOIs, just under 3 will fail to resolve. We also cannot tell whether these DOIs are resolving to the correct target, except by using the metadata detection metrics (are the title and DOI on the landing page, which we could only detect at a far lower rate). It is entirely possible for a website to resolve with an HTTP 200 (OK) response, but for the page in question to be something very different to what the user expected, a phenomenon dubbed content drift. A good example is domain hijacking, where a domain name expires and spam companies buy them up. These still resolve to a web page, but instead of an article on RNA, for a hypothetical example, the user gets adverts for rubber welding hose. That said, other studies are also prone to this and there is no guarantee that content drift doesn\u0026rsquo;t affect a huge proportion of supposedly good links in the other studies, too.\nOf course, one of the most frustrating elements of this exercise is having to work around publisher blocks on content when visiting using a server-only robot script. It\u0026rsquo;s important for us periodically to monitor the uptime rate of the DOI system. We also recognise, though, that publishers want to block malicious traffic. However, we can\u0026rsquo;t perform our monitoring in an easy, automatic way if headless scripts are blocked from resolving DOIs and visiting their respective landing pages. This is not even a call for open access; it\u0026rsquo;s just saying that current anti-bot techniques, sometimes implemented for legitimate reasons, stifle our ability to know the landscape. Even if the bot resolved a DOI to just a paywall, it would be easier for us to monitor this than it is now. Similarly, CAPTCHA systems such as Cloudflare that would seem to offer an easy way to distinguish between humans (good) and robots (bad) can make life very difficult at the monitoring end. 
We would certainly be grateful for any proposed solution that could help us to work around these mechanisms.\nConclusion The context in which I wanted to know this information was so that we can take a snapshot of a page and then, at a later stage, determine whether it is down or has changed substantially. To do this, we are developing Shelob, an experimental content drift spider system; that\u0026rsquo;s what we\u0026rsquo;ve used so far to conduct this analysis. Over time, Shelob will evolve, we hope, to give us a way to detect when content has drifted or gone offline. If, however, we can\u0026rsquo;t detect whether an endpoint is good in the first place, then we likewise cannot detect when things have gone wrong. On the other hand, if, when we first visit, we find the DOI and title on the landing page, but at some future point this degrades, we might be able to say with some confidence that the original has died. I, personally, would encourage publishers not to block automated crawlers, because it\u0026rsquo;s good when we can determine these types of figures.\nWorks Cited Lessig, Lawrence, Jonathan Zittrain, and Kendra Albert. 2014. \u0026lsquo;Perma: Scoping and Addressing the Problem of Link and Reference Rot in Legal Citations\u0026rsquo;. Harvard Law Review 127 (4). https://harvardlawreview.org/forum/vol-127/perma-scoping-and-addressing-the-problem-of-link-and-reference-rot-in-legal-citations/.(https://www.zotero.org/google-docs/?970bfS\nStox, Patrick. 2022. \u0026lsquo;Ahrefs Study on Link Rot\u0026rsquo;. SEO Blog by Ahrefs. 29 April 2022. https://ahrefs.com/blog/link-rot-study/.\n", "headings": ["Conclusion","Works Cited"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-do-we-know-about-dois/", "title": "What do we know about DOIs", "subtitle":"", "rank": 1, "lastmod": "2024-02-29", "lastmod_ts": 1709164800, "section": "Blog", "tags": [], "description": "Crossref holds metadata for approximately 150 million scholarly artifacts. These range from peer reviewed journal articles through to scholarly books through to scientific blog posts. In fact, amid such heterogeneity, the only singular factor that unites such items is that they have been assigned a document object identifier (DOI); a unique identification string that can be used to resolve to a resource pertaining to said metadata (often, but not always, a copy of the work identified by the metadata).", "content": "Crossref holds metadata for approximately 150 million scholarly artifacts. These range from peer reviewed journal articles through to scholarly books through to scientific blog posts. In fact, amid such heterogeneity, the only singular factor that unites such items is that they have been assigned a document object identifier (DOI); a unique identification string that can be used to resolve to a resource pertaining to said metadata (often, but not always, a copy of the work identified by the metadata).\nWhat, though, do we actually know about the state of persistence of these links? How many DOIs resolve correctly? How many landing pages, at the other end of the DOI resolution, contain the information that is supposed to be there, including the title and the DOI itself? How can we find out?\nThe first and seemingly most obvious way that we can obtain some of these data is by working through the most recent sample of DOIs and attempting to fetch metadata from each of them using a standard python script. 
This involves using the httpx library to attempt to resolve each of the DOIs to a resource, visiting that resource and seeing what the landing page yields.\nEven this is not straightforward. Landing pages can be HTML resources or they can be PDF files, among other things. In the case of PDF files, to detect a run of text is not simple as a single line break can be enough to foil our search. Nonetheless, when using this strategy we find the following statistics:\nTotal DOI count in sample: 5000\nNumber of HTTP 200 response: 3301*\nPercentage of HTTP 200 responses: 66.02%\nNumber of titles found on landing page: 1580\nPercentage of titles found on landing page: 31.60%\nNumber of DOIs in recommended format found on landing page: 1410\nPercentage of DOIs in recommended format found on landing page: 28.20%\nNumber of titles and DOIs found on landing page: 929\nPercentage of titles and DOIs found on landing page: 18.58%\nNumber of PDFs found on landing page: 1469\nPercentage of PDFs found on landing page: 29.38%\nPercent of PDFs found on landing pages that loaded: 44.50%\n* an HTTP 200 response means that the web page loaded correctly\nWhile these numbers look quite low, the problem here is that a large number of scholarly publishers use Digital Rights Management techniques on their sites that block a crawl of this type. We can use systems like Playwright to remote control browsers to do the crawling, so that the request looks as much like a genuine user as possible and to evade such detection systems. However, lots of these sites detect headless browsers (where the browser is invisible and running on a server) and block them with a 403 Permission Denied error.\nThere\u0026rsquo;s a great Github javascript suite that aims to help evade headless detection. The tests it uses are:\nUser Agent: in a browser running with puppeteer in headless mode, user agent includes Headless.\nApp Version: same as User Agent above.\nPlugins: headless browsers don\u0026rsquo;t have any plugins. So we can say that if it has plugin it\u0026rsquo;s headful, but not otherwise since some browsers, like Firefox, don\u0026rsquo;t have default plugins.\nPlugins Prototype: check if the Plugin and PluginsArray prototype are correct.\nMime Type: similar to Plugins test, where headless browsers don\u0026rsquo;t have any mime type\nMime Type Prototype: check if the MimeType and MimeTypeArrayprototype are correct.\nLanguages: all headful browser has at least one language. So we can say that if it has no language it\u0026rsquo;s headless.\nWebdriver: this property is true when running in a headless browser.\nTime elapse: it pops an alert() on page and if it\u0026rsquo;s closed too fast, means that it\u0026rsquo;s headless.\nChrome element: it\u0026rsquo;s specific for chrome browser that has an element window.chrome.\nPermission: in headless mode Notification.permission and navigator.permissions.query report contradictory values.\nDevtool: puppeteer works on devtools protocol, this test checks if devtool is present or not.\nBroken Image: all browser has a default nonzero broken image size, and this may not happen on a headless browser.\nOuter Dimension: the attributes outerHeight and outerWidth have value 0 on headless browser.\nConnection Rtt: The attribute navigator.connection.rtt,if present, has value 0 on headless browser.\nMouse Move: The attributes movementX and movementY on every MouseEvent have value 0 on headless browser.\nUsing the stealth plugin for Playwright also allows us to evade most of these checks. 
This just leaves Mouse Move and Broken Image detection, which I thought would not outweigh all the other factors. We can also jitter the connection with arbitrary delays so that it should appear to be coming at random intervals, rather than a robotic crawl.\nYet the basic fact is that we are still blocked from crawling many sites. This does not happen when we put the browser into headful mode, so current detection techniques have clearly evolved in the past half decade (since Detect Headless) was designed.\nIf, however, we run the browser in a headful mode, the results are somewhat stunningly different:\nTotal DOI count in sample: 5000\nNumber of HTTP 200 response: 4852\nPercent of HTTP 200 responses: 97.04%\nNumber of titles found on landing page: 2547\nPercentage of titles found on landing page: 50.94%\nNumber of DOIs in recommended format found on landing page: 2424\nPercentage of DOIs in recommended format found on landing page: 48.48%\nNumber of titles and DOIs found on landing page: 1574\nPercentage of titles and DOIs found on landing page: 31.48%\nNumber of PDFs found on landing page: 2085\nPercentage of PDFs found on landing page: 41.70%\nPercentage of PDFs found on landing pages that loaded: 42.97%\nLet\u0026rsquo;s talk about the resolution statistics. Other studies, looking at general links on the web, have found a link-rot rate of about 60%-70% over a ten-year period (Lessig, Zittrain, and Albert 2014; Stox 2022). The DOI resolution rate that we have, with 97% of links resolving (or a 3% link-rot rate), is far better and more robust than a web link in general.\nIs 3% a good or a bad number? It\u0026rsquo;s more robust than the web in general, but it still means that for every 100 DOIs, just under 3 will fail to resolve. We also cannot tell whether these DOIs are resolving to the correct target, except by using the metadata detection metrics (are the title and DOI on the landing page, which we could only detect at a far lower rate). It is entirely possible for a website to resolve with an HTTP 200 (OK) response, but for the page in question to be something very different to what the user expected, a phenomenon dubbed content drift. A good example is domain hijacking, where a domain name expires and spam companies buy them up. These still resolve to a web page, but instead of an article on RNA, for a hypothetical example, the user gets adverts for rubber welding hose. That said, other studies are also prone to this and there is no guarantee that content drift doesn\u0026rsquo;t affect a huge proportion of supposedly good links in the other studies, too.\nOf course, one of the most frustrating elements of this exercise is having to work around publisher blocks on content when visiting using a server-only robot script. It\u0026rsquo;s important for us periodically to monitor the uptime rate of the DOI system. We also recognise, though, that publishers want to block malicious traffic. However, we can\u0026rsquo;t perform our monitoring in an easy, automatic way if headless scripts are blocked from resolving DOIs and visiting their respective landing pages. This is not even a call for open access; it\u0026rsquo;s just saying that current anti-bot techniques, sometimes implemented for legitimate reasons, stifle our ability to know the landscape. Even if the bot resolved a DOI to just a paywall, it would be easier for us to monitor this than it is now. 
CAPTCHA and bot-challenge systems such as Cloudflare\u0026rsquo;s, which would seem to offer an easy way to distinguish between humans (good) and robots (bad), can similarly make life very difficult at the monitoring end. We would certainly be grateful for any proposed solution that could help us to work around these mechanisms.\nConclusion The context in which I wanted to know this information was so that we can take a snapshot of a page and then, at a later stage, determine whether it is down or has changed substantially. To do this, we are developing Shelob, an experimental content drift spider system; that\u0026rsquo;s what we\u0026rsquo;ve used so far to conduct this analysis. Over time, Shelob will evolve, we hope, to give us a way to detect when content has drifted or gone offline. If, however, we can\u0026rsquo;t detect whether an endpoint is good in the first place, then we likewise cannot detect when things have gone wrong. On the other hand, if, when we first visit, we find the DOI and title on the landing page, but at some future point this degrades, we might be able to say with some confidence that the original has died. I, personally, would encourage publishers not to block automated crawlers, because it\u0026rsquo;s good when we can determine these types of figures.\nWorks Cited Lessig, Lawrence, Jonathan Zittrain, and Kendra Albert. 2014. \u0026lsquo;Perma: Scoping and Addressing the Problem of Link and Reference Rot in Legal Citations\u0026rsquo;. Harvard Law Review 127 (4). https://harvardlawreview.org/forum/vol-127/perma-scoping-and-addressing-the-problem-of-link-and-reference-rot-in-legal-citations/.\nStox, Patrick. 2022. \u0026lsquo;Ahrefs Study on Link Rot\u0026rsquo;. SEO Blog by Ahrefs. 29 April 2022. https://ahrefs.com/blog/link-rot-study/.\n", "headings": ["Conclusion","Works Cited"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/conferences-projects/", "title": "PIDs for Conferences & Projects", "subtitle":"", "rank": 1, "lastmod": "2024-02-26", "lastmod_ts": 1708905600, "section": "Working groups", "tags": [], "description": "This group ran through 2020 but is currently inactive.\nThis group aims to establish a persistent identifier (PID) system and registry for scholarly conferences. PIDs enable creation of a persistent metadata record about a conference and, when applied to published proceedings, enable more efficient decision making for researchers, libraries, publishers, funding and evaluation bodies. Longer term, it also helps to identify fraudulent and/or low-quality conferences. This group initially intended to research PIDs for Conferences and projects, but has limited the scope to Conferences for the first phase.", "content": "This group ran through 2020 but is currently inactive.\nThis group aims to establish a persistent identifier (PID) system and registry for scholarly conferences. PIDs enable creation of a persistent metadata record about a conference and, when applied to published proceedings, enable more efficient decision making for researchers, libraries, publishers, funding and evaluation bodies. Longer term, it also helps to identify fraudulent and/or low-quality conferences. This group initially intended to research PIDs for Conferences and projects, but has limited the scope to Conferences for the first phase.\nMetadata specifications were circulated for comment in Spring 2018, and a set of metadata to define conferences has been decided. 
The working group held a kick-off meeting at CERN in February 2019. Both DataCite and Crossref will be implementing this metadata set to allow conferences to be registered as DOIs.\nCurrently the group involves a good number of representative publishers and has regular calls every 1-2 months. You can follow the group activity on twitter using #confpid tag or via the DataCite blog and our own.\nPlease see blog posts Taking the con out of conferences and Towards persistent identification for conferences for background information.\nGroup Members Chair: Aliaksandr Birukou, Springer Nature\nFacilitator: Patricia Feeney, Crossref\nMarcel R. Ackermann, DBLP / Schloss Dagstuhl Nhora Cortes-Comerer, ASME Philip DiVietro, ASME Martin Fenner, DataCite Gerry Grenier, IEEE Christina Hoppermann, Springer Nature Bethan Keall, Elsevier Anna Lacson, ACM Andres Mori, Digital Science Lisa Nienhaus, Springer Nature Craig Rodkin, ACM Sweitze Roffel, Elsevier Judy Salk, Elsevier Kruna Vukmirovic, The IET Alexander Wagner, DESY Please contact Patricia Feeney with any questions or to apply to join the working group.\n", "headings": ["Group Members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/crossref/", "title": "Crossref", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-lammey-effect/", "title": "The Lammey Effect", "subtitle":"", "rank": 1, "lastmod": "2024-02-16", "lastmod_ts": 1708041600, "section": "Blog", "tags": [], "description": "We’re equally sad and proud to report that Rachael Lammey is moving on in her career to the very lucky team at 67Bricks. Her last day at Crossref is today, Friday 16th February. Which is too soon for us, but very exciting for her!\nIt’s hard to overstate Rachael\u0026rsquo;s impact on Crossref’s growth and success in her 12 years here. She started as a Product Manager where she developed that role into a broad and central function, and soon moved into the newly-formed community team as International Outreach Manager where she grew important programs such as Sponsors, Ambassadors, a series of ‘LIVE’ events around the world, and she went on to manage her own team and establish some of the most important strategic relationships that Crossref now feels fortunate to have.", "content": "We’re equally sad and proud to report that Rachael Lammey is moving on in her career to the very lucky team at 67Bricks. Her last day at Crossref is today, Friday 16th February. Which is too soon for us, but very exciting for her!\nIt’s hard to overstate Rachael\u0026rsquo;s impact on Crossref’s growth and success in her 12 years here. She started as a Product Manager where she developed that role into a broad and central function, and soon moved into the newly-formed community team as International Outreach Manager where she grew important programs such as Sponsors, Ambassadors, a series of ‘LIVE’ events around the world, and she went on to manage her own team and establish some of the most important strategic relationships that Crossref now feels fortunate to have.\nRachael was a significant part of the growth and adoption of new initiatives such as Crossmark, Similarity Check, the REST API, preprints, grants, data citation, and ROR. 
She's contributed to numerous organisations such as EASE, ALPSP, SSP, ISMTE, STM, and most recently co-Chaired the NISO working group on retractions and corrections. As Head of Strategic Initiatives, and most recently, Director of Product, Rachael has shown dedication and leadership, supporting and strengthening not just her own teams but all of us across the organisation, encouraging us to do better while being one of the easiest people to work with.\nThe \u0026lsquo;butterfly effect\u0026rsquo; is the notion that the world is deeply interconnected and that one small occurrence can influence a much larger complex system. Rachael embodies that notion, having created positive ripples and waves\u0026mdash;and certainly many connections\u0026mdash;in the scholarly record, in our organisation, and across the community.\nMessages from colleagues Rachael, I was saddened when I first heard the news that you were moving on to another opportunity. Your professionalism, work ethic, and positive attitude have been inspirational to work around. I have enjoyed the opportunities we have had to collaborate. As you move on to a new experience I wish you success and happiness in your future endeavors. Your presence will be missed at Crossref! Best Wishes.\n\u0026ndash; Ryan\nI will miss you, Rachael. It has been great working with you for the few months that I have been at Crossref. I also cannot forget kayaking together with you and capsizing on the return to the shore, but almost professionally recovering. We would have made the best team this time around. I wish you all the best and many wins in your new role.\n\u0026ndash; Obanda\nI feel like the luckiest human to have worked with Rachael over the last 4 years. She’s the perfect mix of smart and funny and knowing how to get things done. Rachael is a big part of what makes Crossref culture so special — I’ve never felt so supported in a role as when Rachael was my manager and for that I am very grateful. I will miss her wit and humor and her pragmatic approach to work and life!\n\u0026ndash; Sara\nOne of my first ‘Crossref LIVE’ events was with Rachael in Brazil in 2016. At the time, my role mostly focused on membership, and we had just started working more closely with ABEC, a large organisation in Brazil that sponsored quite a few members. Rachael managed the sponsor program then and thought this would be a good opportunity to collaborate with a sponsor on an event, and she asked me to join her. There is so much planning for these - venues, local partners, presentations, meetings - and she had all the details in order and made the event such a success. Rachael was supportive, encouraging, and I learned so much from working with her. The Brazil trip was such a positive experience that I realised I wanted to focus more closely on community engagement. Rachael encouraged me to do so.\nShe and I went on to partner on more LIVE events together. Our time in Indonesia was perhaps one of the most memorable for me - as well as our LIVE event, we had an unexpected tour of Yogyakarta with our Indonesian hosts, involving a tour of Prambanan Temple (see photo below), batik fabric shopping, visiting a few universities, and a stop at our hosts’ home. All the while trying not to let the winding car ride and traffic get the better of us. Our event the next day went perfectly, and I told her, half-jokingly, that the whole experience renewed my faith in humanity. 
Of note, we also drank the only bottle of wine available in the hotel bar.\nRachael was also my Crossref running buddy, and we spent quite a few miles together - in Brazil, NH, Maine, Oxford, and Spain. During our runs, topics ranged from Game of Thrones to Idris Elba to sportsing, but not so much about work. The next time I find myself in England, we will run a few more miles together, followed by a pint. Thank you for everything!\n\u0026ndash; Susan\nMany have pointed out how talented, wise, or skilled you are and I certainly will not contradict a single word of it but that\u0026rsquo;s not what comes to mind first for me. Those traits, while true, pale in comparison to the person you are. Your positive, bright demeanor and the way everyone always feels better just being around you. I have dreaded some meetings from time to time. But whenever I\u0026rsquo;ve been involved in something with you, I\u0026rsquo;ve always left feeling better than when I started (no matter how grumpy I may have entered). You have been a consistent bright light in the Crossref constellation and you will truly be missed.\n\u0026ndash; Jon\nRachael! You are the best at cutting through all the bulls**t to get at what really needs to happen and why! Your knowledge is broad and deep, as is your institutional memory for all things Crossref and scholarly publishing. And your unflappability in pretty much any context is admirable and inspiring. We’ll miss you big time! Wishing you all the success at 67Bricks and otherwise.\n\u0026ndash; Shayn\nHey Rachael - I’m happy to be writing this note of Congratulations!! to you, particularly because it would be awkward to explain this bit of verklempt I’m feeling. Our interactions have been limited, but my impressions of you are of confidence, calm, capability, and collegiality. Thanks so much for your work with the Billing team. I’m sorry we are losing you, and am also so glad to know that you are out there at the forefront of inspiring others elsewhere, not only in the work you do, but also how you go about it.\n\u0026ndash; Laura\nHey Rachael, Just a big THANK YOU for helping me out all this time. I've had so many questions, and you've always been there to answer them. I always knew I could count on you. Thanks for those heartening chats when I needed a boost, and for including me in webinars and recordings - it really helped me improve. Remember that funny mistake I made on a recording when I called us 'Rochael'? We sure had a good laugh! I'm gonna miss those times and working with you. Can't wait to catch up with you over a drink the next time I'm in town. Wishing you all the very best and once again, thanks for everything! \u0026ndash; Rosa\nI am happy we got to enjoy some delicious vegetarian/vegan meals and wine together. I guess I should also mention that I enjoyed recruiting, HR and business fun with you too. Thank you for being such a big part of Crossref for 12 years! Have fun conquering your new chapter. Congratulations!\n\u0026ndash; Michelle\nRachael! You will be missed. I have really enjoyed our chats and work together. I will miss our wide ranging talks about food, books, and your descriptions of all the sportsing, which I would admire because I can barely manage a short run. :-) Thanks so much for being you and let’s stay in touch! Congratulations on your new endeavor, you’re going to be great.\n\u0026ndash; Esha\nWhen Rachel joined Crossref she brought a lot of enthusiasm and interest in learning about all that we were doing and also about what we could do. 
Her ideas and engaging leadership are wonderful for creating interest and drive to make projects happen. It has been wonderful to work with her over the 12 years here. I always look forward to seeing her and hearing what she has been doing outside of Crossref as well as inside. I will miss her but I know she will be doing great things wherever she may be.\n\u0026ndash; Tim\nWe’ve had a number of opportunities to reminisce, gassing each other up about how great it has been to work together, so I won’t do too much more of that here. But we will continue building on all of your contributions at Crossref and will carry forward your truth-telling and problem-solving approach to the work we do here. Best of luck with all the future has to offer, and we will certainly miss having you on the team.\n\u0026ndash; Patrick P\nRachael - I will miss you. I’ve really enjoyed working with you, hanging out while traveling, and getting recommendations on good books to read. Crossref won’t be the same without you. I think you have worked in the most different areas of Crossref and on the most projects of anybody, ever. Your commitment, professionalism and humour helped make Crossref what it is today. Your sportsing is also very impressive. All the best. \u0026ndash; Ed\nNot all heroes wear capes! Rachael defines that saying so much with her ethic of getting things DONE! I know she loves to get things done but the speed and quality in which that happens is second-to-none. Rachael will be massively missed at Crossref and 67Bricks don’t yet know what they have found. I enjoyed working with Rachael throughout my tenure at Crossref, she has helped me a huge amount in developing my programming skills and has always been encouraging throughout, especially with the \u0026rsquo;toil-bashing’ which is substantial and overwhelming at times.\nOn a more personal note, she is a great drinking buddy and always motivates me to be more active… by making me feel lazy. The number of hours Rachael would work was crazy, but then I always thought that anyone who gets up that early to go for runs must be a little crazy! AIl the best in your future endeavors and don’t be a stranger.\n\u0026ndash; Paul\nWhen I started at Crossref in March 2015—at the UKSG conference in Glasgow—Rachael was leading a workshop on text mining, showing off in full glory her ‘unicorn’ mix of skills from her technical knowledge of metadata and APIs to her facilitation techniques with a large group of people, clearly a community whose needs she knew inside out. Later that evening, Rachael took it upon herself to induct me in the ways of Crossref. One of the most important things she thought I should know was that we were all trusted and treated like adults - there was no micromanagement and I was to feel completely free to challenge the status quo. After one of the first ‘LIVE’ events, in Vilnius, I realised that it was Rachael who had created and embodied that trusted vibe through her own approach. She has been entrusted with so many programs, projects, teams, and tricky situations. Almost every launch, release, announcement, or achievement at Crossref very likely had Rachael’s eye on it at some stage, certainly the ‘actually-getting-it-done’ stage. Our close working relationship over the last nine years grew into a great friendship and I’m not quite sure how I’ll feel when the reality sets in and she’s not here for a quick chat, always a reality check. 
Working with Rachael has been inspiring, exciting, reassuring, and hilarious (that dry 'Norn Iron' humour!). 67Bricks is so fortunate and I can’t wait to watch her help them go from strength to strength, just like she has done for Crossref. See you soon, Ranty Rachael, no doubt putting the world to rights over a bottle of Malbec and many eyerolls 🙄. \u0026ndash; Ginny\nAlthough our time working together only overlapped the short span of two years, I appreciate how much of a champion you were for ROR and everything else you did at Crossref! I’m sure you’ll continue to do the same, among many other great things, in your new journey at 67 Bricks. You will be missed!\n\u0026ndash; Adam\nRachael, It has been wonderful working with you!! You are truly a special person. I always looked forward to when we chatted over slack, had a call together, or got to spend time together in person. You are sure to do amazing things on your next adventure. You will truly be missed!! I hope we can stay in touch! Good luck, Rachael!!! Fondly, Amy.\n\u0026ndash; Amy\nI am happy I got to meet Rachael when I joined Crossref in December 2023. We spoke generally about the Products team at Crossref, the differences and similarities between the African and British culture and upcoming projects on automation. You were really patient towards explaining and providing great information on metadata and research. Thank you so much for always responding swiftly to my requests pertaining to Finance issues. I have no doubt that you would be missed at Crossref and would keep doing great things into the future!!! Congratulations Rachael.\n\u0026ndash; Patience\nI will greatly miss working with you, Rachael; you have been a stalwart of reliability and enthusiasm during my time at Crossref and the organization will not be the same without you. That said, of course, I wish you all the best of luck and success in your future endeavours!\n\u0026ndash; Martin\nRachael- Congratulations on this new opportunity, I am thrilled for you! I am also very sorry that our time at Crossref did not overlap much and I am grateful for all the chances I had of interacting with you (including being able to meet you in London recently)- you were always very helpful and kind to me. I am hopeful that our paths will cross again in the future. We will definitely miss you here, and I wish you all the best for all the exciting things ahead.\n\u0026ndash; Madhura\nMy third week at Crossref back in 2017 was at the annual meeting in Singapore, and not getting into the timezone and not sleeping for 4 days was eased by our visit to a rooftop nightclub on the penultimate night - just before you headed off to Indonesia for a series of meetings with members and sponsors. I still don’t know where you get all your energy! I’m so sorry you’re leaving - I’ll really miss your honesty, your approach to getting things done, and of course seeing Rosie on our zoom calls. Looking forward to seeing what’s next for 67 Bricks - exciting times!\n\u0026ndash; Amanda B\nRachael, it’s been a pleasure to work with you. You’re always ready to help and ever full of information. We’ve only just got coordinated on the perennial challenge of timelines! You took things on and got them done, as you said. The world of schol comms won’t even know how much it has to thank you for, probably chiefly for seeing the Retraction Watch data acquisition through and opening it up for all. 
I will miss your honesty and energy, and the opportunity to challenge you again on the amount of food consumed in one sitting… I don’t think you’ll need luck in your next place, but I wish you that it is all you want it to be.\n\u0026ndash; Kora\nI’m so glad to have met you in person over these couple of days in London shortly after I joined Crossref and it’s such a shame we didn’t have much time to work together more and spend more time (not working) together. Thanks for the introduction to the Scampi Fries - you’ve changed my life forever (for the best obviously)!\n\u0026ndash; Maryna\nThank you for your collaboration and friendship over the past decade! You will be missed. We've worked on a long series of abbreviations, acronyms, and portmanteaus! Thanks for organizing countless things, from conference satellites to conference rooms. Your long record as fire warden was unblemished. 67bricks will benefit from your singular drive and attention to detail. All the best! \u0026ndash; Joe\nRachael! One thing I admire most in a person is a facility with metaphor accompanied by the ability to see to the heart of a matter, and hoo boy do you have those qualities in spades. I remember so clearly your talk at the Crossref team meeting in Spain in 2023 in which you clarified the Big Picture for us all in an extremely enlightening way, and then, in a smaller but equally impressive achievement, casually mentioning in a Funder Registry meeting that funders should start \u0026ldquo;stretching and warming up\u0026rdquo; for the transition to ROR – boy did I latch on to that terrific image. I wish you all the best at 67 Bricks.\n\u0026ndash; Amanda F\nRachael, thank you so much for all the support, patience, honesty, and determination. I will certainly miss our chats, work-related and non-work related. I wish you all the best in your new ventures! \u0026ndash; Dominika\nRachael - thank you for your boundless patience, generosity, and sense of humour. I’m very grateful I got to learn the Crossref ropes (cropes?) from you. Looking forward to randomly running into you on the Bristol karaoke circuit in 10 years’ time and performing an epic duet of Dancing in the Dark together. There’s a joke in there somewhere about you being the boss.\n\u0026ndash; Lena\nRachael, Congrats on your new opportunity. You will be greatly missed here. Through the years we have only been at the same events in person a handful of times but I will always remember your amazing personality and sense of humor. I am thankful to have spent some time with you at 2020 PIDapalooza.\n\u0026ndash; Maria\nThank you, Rachael. Thank you. I know everyone is telling you that they’re sad to see you go (I am too; we all are). I keep thinking if I delay telling you that, maybe the day won’t come when you walk out the Crossref doors. But here it is. Just wanted to you to know that I appreciate you. I appreciate you pushing us forward. I appreciate you being an advocate for all things Crossref. We’ll all miss you. Best of luck at 67bricks!\n\u0026ndash; Isaac\nOn one of our first meet ups together, I drove us from the Lynnfield office to Logan airport in rush hour, and we managed to survive the Bostonian road rage in one piece. We spent the ride talking through the intricacies of a sponsoring organization’s agreement. Rachael has been a safe set of hands and an encyclopedia of institutional memory for Crossref for 12 years. Rachael is one of those people who’s as equally competent as she is a pleasure to work with. 
She’s an innate leader because people want to get behind her. She shows her depth of understanding while also inviting input from everyone in the room. I’ll miss our Zoom calls, our marathon Friday sessions, and our post-meeting pub visits.\n\u0026ndash; Lucy\nHello! Here\u0026rsquo;s to hoping your new workplace appreciates you as much as you were here – they\u0026rsquo;re lucky to have you. I only wish we had the chance to interact more. Many hugs!\n\u0026ndash; Luis\nRachael, I will really really miss you, professionally and personally (but you know this already !). I'll miss all our work, dog, book and putting the world to rights chats. You'll be brilliant whatever you do and wherever you go (67Bricks have no idea how lucky they are !). Just keep 'getting stuff done' and have fun 😀 \u0026ndash; Fabienne\nYou will be sorely missed but can be very proud of what you’ve done during your time at Crossref, I’m sure you’ll continue to have a big impact. You’ve always been a pleasure to work with: efficient, supportive, and always with a sense of fun and enjoyment. That’s probably one of the things that drew me to Crossref even before we worked together as colleagues. Thanks for the support and positivity you’ve brought on many, many occasions and best wishes for the future!\n\u0026ndash; Martyn\nHey Rachael! I might have not had the chance to meet with you much while still around but I’ll definitely miss your jokes and the good vibes you were bringing to each call! Looking forward to taking over your place for board games when around Bristol ;) Wishing you a great start in the new place!\n\u0026ndash; Panos\nThunderCats are on the move. ThunderCats are loose. Says it all, really. Best of luck in your new endeavours.\n\u0026ndash; Mike\nCrossref won\u0026rsquo;t be the same without Rachael and we wish her well on her way to even greater things. Good luck, Lammey! ", "headings": ["Messages from colleagues","Crossref won\u0026rsquo;t be the same without Rachael and we wish her well on her way to even greater things.","Good luck, Lammey!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ed-pentz-accepts-the-2024-niso-miles-conrad-award/", "title": "Ed Pentz accepts the 2024 NISO Miles Conrad Award", "subtitle":"", "rank": 1, "lastmod": "2024-02-13", "lastmod_ts": 1707782400, "section": "Blog", "tags": [], "description": "Great news to share: our Executive Director, Ed Pentz, has been selected as the 2024 recipient of the Miles Conrad Award from the USA\u0026rsquo;s National Information Standards Organization (NISO). The award is testament to an individual\u0026rsquo;s lifetime contribution to the information community, and we couldn\u0026rsquo;t be more delighted that Ed was voted to be this year\u0026rsquo;s well-deserved recipient.\nDuring the NISO Plus conference this week in Baltimore, USA, Ed accepted his award and delivered the 2024 Miles Conrad lecture, reflecting on how far open scholarly infrastructure has come, and the part he has played in this at Crossref and through numerous other collaborative initiatives.", "content": "Great news to share: our Executive Director, Ed Pentz, has been selected as the 2024 recipient of the Miles Conrad Award from the USA\u0026rsquo;s National Information Standards Organization (NISO). 
The award is testament to an individual\u0026rsquo;s lifetime contribution to the information community, and we couldn\u0026rsquo;t be more delighted that Ed was voted to be this year\u0026rsquo;s well-deserved recipient.\nDuring the NISO Plus conference this week in Baltimore, USA, Ed accepted his award and delivered the 2024 Miles Conrad lecture, reflecting on how far open scholarly infrastructure has come, and the part he has played in this at Crossref and through numerous other collaborative initiatives.\nEstablished in 1965, the Miles Conrad Award gives recognition to those who\u0026rsquo;ve made substantial contributions to the information community over a lifetime. Named after the founder of the National Federation of Abstracting and Indexing Services (NFAIS)—an association that since merged with NISO—the award encourages innovation in content management and dissemination. Over the years, leaders and innovators who have significantly influenced the field of information exchange have been honored with the award. Ed has joined an illustrious group!\nEd’s leadership in collaboration and diplomacy has led to Crossref\u0026rsquo;s success in making research objects more accessible and useful to a wide global audience, including publishers, researchers, funders, societies, libraries, and more. Crossref\u0026rsquo;s founding purpose is stated as:\n“To promote the development and cooperative use of new and innovative technologies to speed and facilitate scientific and other scholarly research”.\nAcknowledging his privilege as a Western, university-educated, white man, which he comments has helped his career, Ed prioritises collaboration, open communication, teamwork, and equity in creating a positive, trusted environment that has brought together a diverse team of 49 colleagues from 11 countries. The organisation’s culture allows everyone to grow and contribute to the mission of a connected research nexus by including and developing solutions for community members across the globe.\nBefore his journey with Crossref, Ed held a number of roles at Harcourt Brace, including launching Academic Press\u0026rsquo;s first online journal. This experience led to his involvement with the DOI-X pilot project, which became the foundation for Crossref. Since its launch in 2000, under his leadership, Crossref has become an important component of the research ecosystem, an open scholarly infrastructure with nearly 20,000 members across more than 150 countries. Crossref is now the main source of \u0026gt;155 million records about all kinds of research objects and this open metadata registry is relied upon by thousands of tools and services across the whole research system.\nEd’s influence is also evident throughout the wider world of open scholarly infrastructure; aside from establishing Crossref, he co-founded ROR and was a founding member of ORCID, where he also served as board Chair. Further, he has engaged with the community by holding various advisory positions, including the DOI Foundation, the Digital Object Naming Authority (DONA), and the Coalition for Diversity in Scholarly Publishing (C4DISC).\nEd also emphasised that the long-term success of community initiatives lies in patience and the ability to agree on high-level principles of purpose and governance, which oil the wheels of collaboration, encourage participation, and enable more progressive change that builds and lasts over time. 
He says, \u0026ldquo;to solve collective problems it takes collaboration and diplomacy, bringing together a group of stakeholders, balancing their different concerns, building trust, and reaching consensus.\u0026rdquo;\nThe adoption of the Principles of Open Scholarly Infrastructure (POSI), along with (so far) 14 other organisations, was a key turning point for Crossref, Ed said, and one which has already paved the way for more openness of key metadata for the community, including references and retractions, as well as closer partnerships with many of the other POSI adoptees, given their shared understanding and experience.\nReferencing the current \u0026ldquo;peak hype\u0026rdquo; around artificial intelligence (AI), Ed points to the challenge of research integrity and the \u0026ldquo;growing field of science sleuthing\u0026rdquo; as a forthcoming area that Crossref and open metadata may help tackle at scale, including through Crossref\u0026rsquo;s Integrity of the Scholarly Record (ISR) Program and\u0026mdash;of course\u0026mdash;community-wide collaboration.\nIn concluding his talk, Ed describes his hopes and dreams for scholarly communications in the future. He would like to see more balance in diversity in the leadership of open scholarly infrastructure, extended integrations among the various foundational infrastructures, and a fully connected system where the scholarly record is inclusive globally.\nEd, on behalf of all your proud colleagues at Crossref, thank you and congratulations! ", "headings": ["Ed, on behalf of all your proud colleagues at Crossref, thank you and congratulations!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/gem/", "title": "GEM", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/frankfurt-isr-roundtable-event-2023/", "title": "ISR Roundtable 2023: The future of preserving the integrity of the scholarly record together", "subtitle":"", "rank": 1, "lastmod": "2024-02-06", "lastmod_ts": 1707177600, "section": "Blog", "tags": [], "description": "Metadata about research objects and the relationships between them form the basis of the scholarly record: rich metadata has the potential to provide a richer context for scholarly output, and in particular, can provide trust signals to indicate integrity. Information on who authored a research work, who funded it, which other research works it cites, and whether it was updated, can act as signals of trustworthiness. Crossref provides foundational infrastructure to connect and preserve these records, but the creation of these records is an ongoing and complex community effort.", "content": "Metadata about research objects and the relationships between them form the basis of the scholarly record: rich metadata has the potential to provide a richer context for scholarly output, and in particular, can provide trust signals to indicate integrity. Information on who authored a research work, who funded it, which other research works it cites, and whether it was updated, can act as signals of trustworthiness. Crossref provides foundational infrastructure to connect and preserve these records, but the creation of these records is an ongoing and complex community effort. 
Crossref has always shown a deep commitment to preserving the integrity of the scholarly record in an open and scalable manner.\nGiven the increasing concerns in the community about matters of research integrity and integrity of the scholarly record (ISR), we at Crossref have been engaging with community members to understand what developments are needed. In 2022, we organised a roundtable discussion to talk about our role and the applicability of Crossref’s services in preserving and assessing the integrity of the scholarly record. We’ve acted on much of that feedback since, and so in October 2023, we organised a follow-up event, once more gathering representatives of publishers, research integrity experts, policy-makers, academic institutions, funders, and researchers (the full list of participants can be found in the appendix). This post aims to offer insight into the discussions at this event and the next steps. The objective of this event was to take the conversation forward by:\nSharing the progress made by Crossref on matters related to ISR since the last roundtable event. Sharing information about how metadata contributes to the Research Nexus, and can act as trust markers for research outputs. Apprising the community about the latest membership trends and examples of activities that we see, such as title transfer disputes, unregistered DOIs, requests for deleting records, and sneaked references . Building upon the ideas discussed during the 2022 roundtable event to progress the conversation about issues related to ISR. Learning from the participants about their experiences of pursuing research integrity initiatives. Last but no less importantly, hearing from the participants their perspectives on strategies for preserving the integrity of the scholarly record, and opportunities for collaborating to leverage metadata to assess the integrity of the scholarly record. The event was kicked off by Ed Pentz, who spoke to the participants about how integrity is key to Crossref’s mission, and Crossref’s vision of the Research Nexus. Next, Amanda Bartell, the Head of Member Experience at Crossref, shared the recent developments and trends in community behaviour. She expanded upon the actions taken by Crossref as part of its ISR program since the last roundtable event, which include:\nAcquisition and opening of the Retraction Watch database, which makes it easier to access information on retractions and corrections. Increased participation in the Global Equitable Membership (GEM) program, enabling a wider section of the community to provide and access trust signals. Newer developments around metadata that act as trust signals: e.g. 120K grants or awards now have a Crossref DOI, and the planned transition of the Open Funder Registry into ROR. Recruitment of a Community Manager to focus on working with publishers and editors, including on ISR (that’s me!), and recruitment of a Technical Community Manager to enable greater use of our APIs. Amanda highlighted that all Crossref members should be using ROR IDs to provide affiliations for authors (along with ORCID iDs) in their Crossref metadata. 
She also shared some latest examples of community behaviours that we have seen, such as requests from authors to delete records of works that were published without their permission, title ownership disputes between publishers, and the recent instance of sneaked references.\nIvan Oransky, co-founder of Retraction Watch, and Lena Stoll, Product Manager at Crossref, were next, and they spoke about the future of the Retraction Watch database, and about the Crossmark service. After this, some of the other roundtable participants shared initiatives that they have undertaken that support ISR:\nJodi Schneider from the University of Illinois Urbana-Champaign spoke about NISO’s CREC Working Group that has created a Recommended Practice that should be followed by relevant stakeholders for communicating retracted research (Crossref’s Director of Product Rachael Lammey was the co-chair of that group). Kihong Kim from the Korean Council of Science Editors shared information about the workshops that the Council has organised for researchers on publishing in journals. Alberto Martín-Martín from Universidad de Granada presented his thoughts on how to reconcile the publishing system and the institutional view of tracking research outputs. Bianca Kramer from Sesame Science spoke about her analysis of and the implications of sneaked references, duplicate references, and missing references for citation integrity. Joris van Rossum from STM Solutions spoke about the STM Integrity Hub and the integrity tools that are being developed in collaboration with some publishers. Some of the most valuable reflections stemmed from discussions in small groups on these three key questions:\nWhat value do you see in the integrity and completeness of the scholarly record in the way you operate? How do you contribute to it? How can it support you to achieve your own goals? Are you aware of Crossref services? What are the barriers to more uptake? What are the challenges and opportunities? What information is essential and nice to have for you in the scholarly records to support trust signalling and ascertaining trustworthiness? As groups shared their discussions, a few themes became apparent that I would like to elaborate on further.\nWhat is “complete”? Given the prompt to talk about the value of completeness of the scholarly record, an immediate reaction at most tables was: how much metadata qualifies as “complete” metadata? Can the scholarly record be considered complete if some publishers or journals do not use Crossref? What is the optimum level of metadata that should be deposited by members - should a minimum data standard be defined by disciplines, or should there be standard data requirements for all? The composition of metadata appears to change over time, too, as the processes change and our ability to record their facets increases. While there were spirited discussions about what constitutes a complete scholarly record, everyone agreed that “completeness” of metadata, as much as is possible, should be the aim. Unambiguous and consistent standards may help with this, for example, the Metadata 20/20 community creation of principles and best practices, and potentially also using a set of recognition standards and reproducibility badges.\nGlobal participation is equally important for a truly “complete” scholarly record. In order to enable as many in the scholarly community as possible to participate in Crossref services and metadata, Crossref launched the Global Equitable Membership (GEM) program in 2023. 
Under this initiative, membership and content registration fees are waived for members from the least economically advantaged countries. We are seeing the first signs that this initiative meaningfully lowers the barriers to participation for organisations based in those countries, and allows the global community to contribute towards the building of a comprehensive research ecosystem.\nAt the end of the day, it is important to recognize that rich metadata is crucial because it can be used for all kinds of analysis, which in turn can drive decision-making. Even if some of the metadata components are sporadically missing, that could be acceptable, because every piece of data counts!\nCorrections and Retractions Similar to last year, retractions and corrections continued to be a topic of great interest in this year’s roundtable. This was not surprising given their relevance as trust indicators as well as the recent development with the acquisition of the Retraction Watch database by Crossref. Having heard from Ivan about the Retraction Watch taxonomy of reasons for retractions and the metadata included in the database, participants expressed the need to investigate this taxonomy as a community standard. While the Retraction Watch taxonomy is not widely known, we at Crossref are working to map the Crossmark taxonomy to the Retraction Watch taxonomy, which will enable complete integration of the Retraction Watch database with the Crossref database.\nIt would also be useful to add more information to retraction notices. Having more information about the reasons for retraction will not only destigmatize retractions, but certain additional information, such as submission dates for those outputs, might help with ethical investigations to determine whether manuscripts were being submitted to multiple publishers simultaneously.\nOn the topic of retractions, another aspect that came up in the room was incentives for researchers to publish as much and as quickly as possible. If researchers indulge in unethical publishing practices due to this pressure to publish, that is hugely detrimental to the cause of research integrity and to the progress of scientific research in general. However, there is a distinction to be made between the integrity of the research and the integrity of the scholarly record - unethical research and publishing practices, including but not limited to data falsification, fabrication, and plagiarism, affect research integrity, while the integrity of the scholarly record is affected by unavailability of metadata, outdated metadata, incomplete metadata records, and incorrect metadata (e.g. as seen in the case of sneaked references).\nThere was a lot of discussion about Crossmark, a cross-platform service provided by Crossref that allows readers to discover whether an item has been updated, corrected, or retracted just by clicking a button that is standardised across publication platforms. While most participants acknowledged its importance, they also pointed out that its uptake has been limited and publishers do not use it as much, perhaps because it is difficult to implement and because more clarity about it needs to be provided to readers. There were suggestions to add a notification system to Crossmark such that every time a published output is retracted, a notification goes out. This seemed of particular interest to funders, whose grievance was that they are usually the last to find out when research that they have funded is retracted. 
They would welcome notifications that would alert them to such events.\nWe already have plans to consult with the community more specifically about what changes they’d like to see to Crossmark that will enable them to implement it easily and use it more frequently. Take a look at this thread on our community forum and add your thoughts for our next steps on Crossmark.\nThe importance of education There was an overwhelming sentiment that there was a need for collective arbitration of research integrity issues. However, everyone recognized that this is not a role for Crossref. We can act as a “trust broker” by bridging different metadata and identifiers that otherwise might not interact, creating a network of research outputs whose credibility can be verified by others. Many participants called for Crossref to increase its efforts in educating community members about the importance of metadata and how different pieces can be linked together to make meaningful connections.\nResearch practices vary between countries, and between institutions. Correspondingly, the metadata being provided by diverse Crossref members may also vary. There is an opportunity here for the global research community to work together to increase awareness about ethical standards, so that a lack of specific metadata or its variances (e.g. unusually formatted metadata, or non-standard metadata fields) may not be construed as “lower quality” metadata. Many felt that the greatest need for education about metadata is for the academic community – although individual researchers contribute a wealth of metadata associated with any published research output, they do not necessarily understand how metadata contributes to the completeness of the scholarly record. There is a further opportunity to talk to the academic community about how different metadata components link together to form a rich network, supporting visibility and confidence in their work. A greater awareness about these topics is likely to encourage researchers to provide more metadata and identifiers.\nWhile most participants at the roundtable event agreed about the need for this conversation and the educational opportunities here, if Crossref were to lead these efforts, it would represent, in some eyes, a diversion from its mission. We do have several initiatives already to support our communities. As part of the Crossref Ambassadors program, volunteers from the international scholarly community who believe in Crossref’s mission liaise with our team to conduct training in their communities about using Crossref services and, generally, about the importance of metadata. In 2023, we also launched a new online public forum, the PLACE, in collaboration with the Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ), and the Open Access Scholarly Publishers Association (OASPA). This forum is a place where new publishers can connect with these organisations and learn about best practices in scholarly publishing via discussion posts and by asking questions, as they get started. 
Another initiative that is designed to help new Crossref members is the “Managed Member Journey”: as members join and move through the various stages of membership, key information is shared with them during each of these stages in the form of triggered automated emails, web pages, and webinars.\nWhile Crossref\u0026rsquo;s direct interactions with researchers are limited, we welcome the community\u0026rsquo;s recognition of the need to raise awareness about these matters. We have started engaging more closely with the reporters of metadata issues, in many cases investigators and ‘sleuths’ in the area of research integrity, and plan some closer collaborations with this group in 2024. We are open to supporting community efforts to inform other stakeholders about the importance and uses of metadata.\nIncentives for the community Another theme that was heard repeatedly was “incentives”: incentives for researchers to contribute to a “complete” scholarly record, incentives for publishers to improve metadata, and incentives for everyone to report on and register retractions.\nAs I mentioned before, a shared sentiment is that researchers may not be aware of the value of rich metadata. While more publications, increased citations, and greater grant funding are some examples of incentives that are part of the current academic settings, the right incentives probably do not exist for researchers to provide complete metadata. With the diverse set of participants present at this meeting, some groups also discussed how the current research assessment system can change to incorporate other metrics, perhaps those based on open science and open data.\nWhat could be the incentives for publishers to improve the metadata collected and deposited by them? One suggestion was that clearly defined benefits of rich metadata can incentivise publishers. Being aware of what funders are mandating, can be another incentive. On the same note, funders will benefit from knowing what metadata is being provided by publishers. This metadata is available through our open API, and nine key checks on members’ activity are available through our public Participation Reports.\nRetractions featured again in the discussion on the topic of incentives. As shared by Ivan, retractions are on the rise every year, with about 43k retractions currently in the Retraction Watch database. On the other hand, retractions registered in Crossmark at the time of the meeting numbered just 14k and have recently jumped up to 25k thanks to Hindawi/Wiley’s dedication to good open metadata. Besides the fact that the uptake of Crossmark by Crossref members is limited, another reason for the low number of retractions being registered is the associated stigma. Corrections and errata are usually conflated with retractions, and all these terms, which represent different kinds of updates that may happen to a published item, have a stigma associated with them in the academic community. There is a need to destigmatize retractions, and perhaps incentivize them by noting that these updates are essential to uphold the integrity of the scholarly record and to highlight the publishers that are showing leadership in addressing the issues openly through up-to-date Crossref metadata.\nWhat metadata is nice to have in the scholarly record? We asked everyone what information they think is essential as well as “nice to have” in the scholarly record to support trust signalling, and we heard a range of answers. Peer review information was recognized to be important. 
This would include data on who the peer reviewers were, and the standard peer review terminology that has been published by NISO. More generally, as much metadata as possible about the main actors of the peer review process was considered important - such as designating who the corresponding author is, and who the handling editor or the decision-making editor was.\nAs guest-edited special issues in journals have attracted attention of late due to the uncovering of irregularities in some of them, one of the first suggestions in this context was more metadata about special issues. Participants thought that it would be useful to collect and distribute information on handling/guest editors of special issues, peer reviewers, as well as submission and acceptance dates. Recently, COPE has released guidance on “best practices for guest-edited collections”, highlighting that this topic is at the forefront of concerns for the scholarly information industry.\nAdding information on ethical approvals provided by institutional review boards would add more nuance to the research outputs. Metadata about clinical trials helps to add transparency to research in a field where reproducibility is of primary importance. Conflicts of interest are another factor that could be a cause of concern if not reported accurately; these declarations were mentioned by the participants as important for signalling trust.\nRecognizing that it is the relationships in the metadata that add context to research output, participants echoed that better interlinking between preprints and their published versions is required. To aid with all of this, it has been suggested that a complete list of all metadata that can be deposited with Crossref be made available in a simple format, so that members have more visibility about all the possibilities that exist for providing metadata.\nNext steps We asked all participants if the discussions prompted them to plan to take any actions in the near future. Several attendees reflected that the discussion encouraged them to go back and review the metadata that they are depositing with Crossref, and how they can make more use of the data openly available from Crossref. We also heard how some found training opportunities therein - discussion points from the event could be included in workshops for affiliated researchers, and in COPE guidance for members. As encouraged by members of NISO’s CREC Working Group, some participants were looking to respond to the (then open) consultation on the draft Recommended Practice, NISO RP-45-202X, Communication of Retractions, Removals, and Expressions of Concern (CREC). One message resonated loud and clear: preserving the integrity of the scholarly record cannot be a lone endeavour and has to be a community effort. Attendees expressed their commitment to continue these conversations, with the next most opportune time being at STM week. Everyone recognised that collaboration in this space is the need of the hour: facilitating information and data sharing across all the players in the ecosystem would be crucial to progressing this topic. 
As Bianca Kramer declared during her presentation, “I am committed to using only open data in my research, as access to data is important for the community to detect problems at scale”.\nAt our end, we are looking to act on suggestions that are specific to Crossref:\nConsultation with the community about Crossmark One of the first things that we are doing in early 2024 is to consult with our community about the developments needed in the Crossmark service. Our key aim with this exercise would be to understand how we can enable a more effective uptake of this service so that Crossref members can easily fulfil their obligation of keeping their records updated. We are keen to understand what we can do to help our members to send us metadata about updates to an output, and how we can help downstream services that use this data. Insights from this consultation will also help inform how the Retraction Watch data can be most effectively integrated into Crossmark and communicated to users. Please visit the discussion and add your thoughts here: https://0-community-crossref-org.libus.csd.mu.edu/t/communicating-post-publication-updates-inviting-feedback-on-the-next-steps-for-crossmark/.\nDevelopment of resources for using our API As there is clearly no dearth of metadata components that the community thinks would be “nice to have” for signalling trust, it is equally important to equip users and downstream service providers to be able to access the rich metadata that is available with Crossref. This rich metadata opens up new avenues for the development of services and resources that can benefit the scholarly community. On account of this, we plan to prioritise development of resources for using Crossref APIs. These efforts would include making available workbooks with a variety of API use cases - ranging from how to use basic API queries, to how to use APIs for obtaining grant information or for obtaining citation data and so on, as well as retrieving corrections, retractions, and update information, especially when the Retraction Watch dataset merges in with the rest of the Crossref metadata.\nWorking group to facilitate community efforts for preserving ISR We are looking to set up a working group that will facilitate the various stakeholders in the scholarly ecosystem to work together towards preserving the integrity of the scholarly record. One direction for the group could be to consider the role and impact of Crossref metadata in ISR. Another area of focus will be to enrich information about retractions, corrections, and expressions of concern. Raising industry-wide awareness about the current concerns in upholding the integrity of the scholarly record, and how comprehensive metadata can act as markers of trust about research output, would be another focal point.\nContinued community outreach We will continue our efforts to engage with the community on the very important issues surrounding ISR. We are particularly keen to redouble our efforts to include more funders and institutions in these conversations. Preserving the integrity of the scholarly record needs to be a truly inclusive effort and will benefit from diverse voices in the community. With that in mind, consulting with the community in Asia is next on our radar.\nWe look forward to working with the community further on this important topic - if you are keen to participate in these discussions and want to contribute towards preserving the integrity of the scholarly record, we would love to hear from you. 
Please write to us at feedback@crossref.org if you have any suggestions on this topic.\nAppendix: Participant list Name Role Organization Ed Pentz Executive Director Crossref Amanda Bartell Head of Member Experience Crossref Madhura Amdekar Community Engagement Manager Crossref Luis Montilla Technical Community Manager Crossref Lena Stoll Product Manager Crossref Kora Korzec Head of Community Engagement and Communications Crossref Ivan Oransky Co-Founder Retraction Watch Jennifer Wright Research Integrity Manager Cambridge University Press Guntram Bauer Director of Science Policy \u0026amp; Communications Human Frontier Science Program Wendy Patterson Scientific Director Beilstein-Institut Sarah Jenkins Director, Research Integrity \u0026amp; Publishing Ethics Elsevier Helene Stewart Director, Editorial Relations Web of Science Clarivate Bianca Kramer Advisor, Research Analyst, Facilitator Sesame Open Science Adya Misra Research Integrity and Inclusion Manager Sage Andrew Joseph Wits University Press Theodora Bloom Executive Editor BMJ Alberto Martín-Martín Assistant Professor Universidad de Granada Aaron Wood Head, Product \u0026amp; Content Management American Psychological Association Fred Atherden Head of Production Operations eLife Kihong Kim Korean Council of Science Editors David Flanagan Senior Director, Data Science Wiley Chiara Di Giambattista Communications Director OpenCitations Scott Delman Director of Publications ACM Chi Wai (Rick) Lee General Manager World Scientific Publishing Co (WSPC) Leslie McIntosh VP, Research Integrity Digital Science Adam Day Director Clear Skies Damaris Critchlow Project Manager Karger Tamara Welschot Head of Research Integrity, Prevention Springer Nature Kathryn Dally Research Integrity and Policy Lead Research Services, University of Oxford Masahiko Hayashi Director, JSPS Bonn Office Japan Society for the Promotion of Science Simone Taylor Chief, Publishing American Psychiatric Association Christna Chap Head of Editorial Development Karger Publishers Coromoto Power Febres Research Integrity Manager Emerald Publishing Carole Chapin Project Manager French Office for Research Integrity Jodi Schneider Associate Professor of Information Sciences University of Illinois Urbana Champaign Oliver Koepler Head of Lab Linked Scientific Knowledge TIB - Leibniz Information Centre for Science and Technology Heather Staines Delta Think Eri Anno JSPS Bonn office Joris van Rossum STM Solutions Anita de Waard VP Research Collaborations Elsevier ", "headings": ["What is “complete”?","Corrections and Retractions","The importance of education","Incentives for the community","What metadata is nice to have in the scholarly record?","Next steps","Consultation with the community about Crossmark","Development of resources for using our API","Working group to facilitate community efforts for preserving ISR","Continued community outreach","Appendix: Participant list"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/metadata-retrieval/", "title": "Metadata Retrieval", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/", "title": "Metadata Retrieval", "subtitle":"", "rank": 1, "lastmod": "2024-02-03", "lastmod_ts": 1706918400, "section": 
"Find a service", "tags": ["Metadata Retrieval", "APIs", "API Case Study", "Metadata Search", "REST API", "Research Nexus"], "description": "Analyse Crossref metadata to inform and understand research\nCrossref is the sustainable source of community-owned scholarly metadata and is relied upon by thousands of systems across the research ecosystem and the globe.\nSome of the typical users (outer) and uses (inner) of Crossref metadata\nShow image\n× People using Crossref metadata need it for all sorts of reasons including metaresearch (researchers studying research itself such as through bibliometric analyses), publishing trends (such as finding works from an individual author or reviewer), or incorporation into specific databases (such as for discovery and search or in subject-specific repositories), and many more detailed use cases.", "content": " Analyse Crossref metadata to inform and understand research\nCrossref is the sustainable source of community-owned scholarly metadata and is relied upon by thousands of systems across the research ecosystem and the globe.\nSome of the typical users (outer) and uses (inner) of Crossref metadata\nShow image\n× People using Crossref metadata need it for all sorts of reasons including metaresearch (researchers studying research itself such as through bibliometric analyses), publishing trends (such as finding works from an individual author or reviewer), or incorporation into specific databases (such as for discovery and search or in subject-specific repositories), and many more detailed use cases.\nAll Crossref metadata is open and available for reuse without restriction. Our 160,104,382 records include information about research objects like articles, grants and awards, preprints, conference papers, book chapters, datasets, and more. The information covers elements like titles, contributors, descriptions, dates, references, connecting identifiers such as Crossref DOIs, ROR IDs and ORCID iDs, together with all sorts of metadata that helps to determine provenance, trust, and reusability\u0026mdash;such as funding, clinical trial, and license information.\nTake a look at a list of some of the organizations who rely on our REST API and read some of the case studies from a selection of users. Download the metadata retrieval fact sheet or read more about the types of metadata and records we have.\nAnyone can retrieve and use 160,104,382 records without restriction. So there are no fees to use the metadata but if you really rely on it then you might like to sign up for Metadata Plus which offers greater predictability, higher rate limits, monthly data dumps in XML and JSON, and access to dedicated support from our team.\nOptions for retrieving metadata All Crossref metadata is completely open and available to all. Whatever your experience with metadata, there are several tools, techniques, and support guides to help\u0026mdash;whether you\u0026rsquo;re just beginning, exploring occasionally, or need an ongoing reliable integration.\nBEGINNING? You\u0026rsquo;ve heard Crossref metadata might be useful and want to know where to start.\nWe recommend you start with metadata search, funder search, or simple text query for matching references to DOIs. Also take a look at the tips for querying our REST API in a browser which only needs you to get a JSON plugin to view the results. We are also building up tutorials to demonstrate the possibilities, starting with a funder metadata workbook. 
If it\u0026rsquo;s retractions and corrections that you need, check out the frequently-updated csv file of the Retraction Watch dataset that we acquired and opened in 2023.\nEXPLORING? You have some specific queries and want a lightweight way to use Crossref metadata.\nTake a look at the in-depth interactive documentation for our REST API at api.crossref.org. If you\u0026rsquo;re comfortable with tar data files you can download our latest annual public data file. You may be interested in how relationships between research objects are represented, so have a look at the Colab notebook of the upcoming endpoint for relationship metadata, currently in beta at https://0-api-crossref-org.libus.csd.mu.edu/beta/relationships. Our Crossref Learning Hub is also another helpful resource.\nINTEGRATING? You rely on Crossref metadata and need to incorporate it into your product at scale.\nYou might want to jump straight to subscribing to Metadata Plus, which is our premium service for the REST API that comes with monthly data dumps in JSON and XML, higher rate limits, and fast support. But we always recommend that you try out the public version first to make sure it will work for your product. If you\u0026rsquo;re looking for a single DOI record in multiple formats (e.g. RDF, BibTex, CSL) you can use content negotiation.\nWatch the animated introduction to metadata retrieval English 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil READ THE DOCS\nLearn more by visiting our learning hub. \u0026nbsp;\u0026nbsp;CROSSREF LEARNING HUB\u0026nbsp;\u0026nbsp; ", "headings": ["Options for retrieving metadata","BEGINNING?","EXPLORING?","INTEGRATING?","Watch the animated introduction to metadata retrieval","Learn more by visiting our learning hub."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/metadata-search/", "title": "Metadata Search", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/open-funder-registry/", "title": "Open Funder Registry", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/rachael-lammey/", "title": "Rachael Lammey", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/ror/", "title": "ROR", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/roring-ahead-using-ror-in-place-of-the-open-funder-registry/", "title": "RORing ahead: using ROR in place of the Open 
Funder Registry", "subtitle":"", "rank": 1, "lastmod": "2024-01-30", "lastmod_ts": 1706572800, "section": "Blog", "tags": [], "description": "A few months ago we announced our plan to deprecate our support for the Open Funder Registry in favour of using the ROR Registry to support both affiliation and funder use cases. The feedback we’ve had from the community has been positive and supports our members, service providers and metadata users who are already starting to move in this direction.\nWe wanted to provide an update on work that’s underway to make this transition happen, and how you can get involved in working together with us on this.", "content": "A few months ago we announced our plan to deprecate our support for the Open Funder Registry in favour of using the ROR Registry to support both affiliation and funder use cases. The feedback we’ve had from the community has been positive and supports our members, service providers and metadata users who are already starting to move in this direction.\nWe wanted to provide an update on work that’s underway to make this transition happen, and how you can get involved in working together with us on this.\nOverall, we are building more comprehensive support for ROR into Crossref’s services. Some of this work is specifically to support using ROR to identify funding organisations in place of funder registry IDs. We have a number of parallel, complementary projects underway to support different elements of this work:\nWe are evolving our metadata schema so that we can collect ROR IDs in places where we currently support the collection of Funder IDs. We are analysing the coverage of Funder ID to ROR ID mappings and testing the way we expose them in our APIs. We are developing new matching strategies to match text strings to ROR IDs. 1. Schema updates Everything flows from being able to get ROR IDs into the Crossref metadata!\nWe are evolving our metadata schema so that we can collect ROR IDs in places where we already support the collection of Funder IDs – for instance, in the funding section of the metadata for works and in the funder section for grants.\nWe’re working with members and service providers so that they can try sending us this data via a pipeline our Labs team has built to test schema updates before they go live. We are actively recruiting members to help us test our new pipeline by providing sample XML for registration. Planned metadata inputs and outputs are detailed in Including ROR as a funder identifier in your metadata (metadata prototyping instructions), we’d encourage you to provide feedback on these in the document, ideally in the next two weeks. We’re aiming to release an updated schema that supports these changes in Q1 2024.\n2. Modelling ROR ID/Funder ID mappings in our metadata model We have integrated the ROR registry into our evolving metadata model, and we have started work to integrate the Funder Registry. The aim is to create more flexibility in how Crossref’s metadata can be supplemented and queried, and give more clarity as to which party asserted or created a metadata element.\nWe’re working on an early iteration of how the model handles ROR IDs, funder IDs and their equivalencies. Once we have something to share, we’ll welcome community feedback on this approach and on the metadata model in general.\n3. 
Developing new matching strategies to match text strings to ROR IDs Ideally, everyone would always use persistent identifiers to exchange information about contributor and awardee affiliations, organisations related to works, as well as funders supporting the research. In practice, this information is often exchanged as data without identifiers, such as affiliation strings (e.g. “University of Virginia, Charlottesville, VA”), funder names, or even funding acknowledgements (e.g. “Funding and support generously provided by the Ford Foundation”). In such situations, a good metadata matching strategy can help map these to persistent identifiers.\nCurrently, we are focused on developing reliable strategies for matching affiliation strings to ROR IDs. In the future, we will adapt the strategies to support funder names and funding acknowledgements as well. All the strategies will be rigorously evaluated using real-life data. We will make the strategies, as well as the evaluation datasets and evaluation results, publicly available for anyone to use. If you are interested in collaborating on the development or the evaluation of the matching strategies, please get in touch!\nIn the future, we might also apply some of the new matching strategies at Crossref, to the metadata our members send us. This would allow us to insert matched identifiers to the metadata to better connect organisations with other items in the scholarly record. We already have a process that matches the names of funders supporting research against the Funder Registry and enriches the metadata with matched Funder Registry IDs. Developing and evaluating reliable matching strategies will allow us to modify this process to use ROR IDs instead, and extend it to support other use cases, such as contributor affiliations.\nWhat will the transition mean for you? We do recommend that you begin looking at what it will take to integrate ROR into your systems and workflows for identifying funders. Talk to your service providers about this to ready them for this change. To reiterate the point from the earlier post, in the short term, and even in the medium term, Funder IDs aren’t going away and the Funder IDs will continue to resolve – they are persistent, after all. Eventually, however, the Funder Registry will cease to be updated, so any new funders will only be registrable in Crossref metadata with ROR IDs. Legacy Funder IDs and their mapping to ROR IDs will be maintained, so if Crossref members submit a legacy Funder ID, it will get mapped to a ROR ID automatically. Note, too, that Crossref is committed to maintaining the current funder API endpoints until ROR IDs become the predominant identifier for newly registered content. We also know that there are questions that we’ll want to tackle with the community as we all make progress, some we know and some we don’t know. With that in mind:\nTell us what you need! We want to hear from you! We have set up several channels of communication meant to ensure that you can tell both ROR and Crossref what will make this transition easier for you and that you can get answers to your questions.\nFirst, we are conducting a series of Open Funder Registry user interviews designed to deepen our understanding of where Funder IDs are being used in workflows and systems. Write community@ror.org if you\u0026rsquo;d like to participate in these interviews to show and tell us how you\u0026rsquo;re using Funder IDs.\nSecond, in 2024, we will be running a follow-up to the funding data workshop we ran in June 2023. 
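As an aside, while we develop and evaluate those strategies, anyone can already experiment with matching: ROR's public API offers an affiliation parameter that suggests candidate organisations for a raw text string. The sketch below assumes the v1 response layout (a list of items with a score, a chosen flag, and an organization record), so please check ROR's documentation before relying on it; it illustrates the general idea rather than the matching strategies we are building.
import requests

# A minimal sketch: ask ROR's public API to suggest a ROR ID for an affiliation string.
# Assumes the v1 "affiliation" parameter and its response layout (score / chosen / organization);
# this illustrates the idea and is not the matching strategy described above.
affiliation = "University of Virginia, Charlottesville, VA"
resp = requests.get(
    "https://api.ror.org/organizations",
    params={"affiliation": affiliation},
    timeout=30,
)
for candidate in resp.json().get("items", []):
    if candidate.get("chosen"):
        org = candidate["organization"]
        print(affiliation, "->", org["id"], org["name"])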
Please get in touch if your organisation would be interested in participating in the discussion.\n", "headings": ["1. Schema updates","2. Modelling ROR ID/Funder ID mappings in our metadata model","3. Developing new matching strategies to match text strings to ROR IDs","What will the transition mean for you?","Tell us what you need!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/content-registration/", "title": "Content Registration", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/evans-atoni/", "title": "Evans Atoni", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/isaac-farley/", "title": "Isaac Farley", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/kathleen-luschek/", "title": "Kathleen Luschek", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/open-support/", "title": "Open Support", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/paul-davis/", "title": "Paul Davis", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/persistence/", "title": "Persistence", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/poppy-riddle/", "title": "Poppy Riddle", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": 
"https://0-www-crossref-org.libus.csd.mu.edu/categories/references/", "title": "References", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/reports/", "title": "Reports", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/shayn-smulyan/", "title": "Shayn Smulyan", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/solving-your-technical-support-questions-in-a-snap/", "title": "Solving your technical support questions in a snap!", "subtitle":"", "rank": 1, "lastmod": "2024-01-25", "lastmod_ts": 1706140800, "section": "Blog", "tags": [], "description": "My name is Isaac Farley, Crossref Technical Support Manager. We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post. We’d love to hear from you!\n", "content": "My name is Isaac Farley, Crossref Technical Support Manager. We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post. We’d love to hear from you!\nA little about us and what drives the team I’m fortunate to manage a great team - Evans, Kathleen, Paul, Poppy, and Shayn - who enjoy and are hardwired to guide. We have different strengths and interests, but the thing that unites us is that we are energized when we can unpick tricky problems for all of you, our members and users. In 2023, the technical support team answered around 11,000 questions from all of you. We do that with one-to-one requests sent to us via email and within our support center (using a closed-source software called Zendesk). 
And, we’ve been providing more and more support in our Community Forum, where we’re aiming for open interactions, so we can all learn from the rich exchanges with all of you (the Forum has an integration with Zendesk, so posts made in the Forum are delivered to us there, so our team won’t miss any of your questions).\nWe established in the previous paragraph that we have a great technical support team who all pride themselves on helping you. But we’re also human; the reality is that many of those ~11,000 technical support questions asked of us in 2023 were repetitive, and there are always trends in the questions asked. That’s another important reason why we’re hoping to have more and more of these questions asked and answered within our Community Forum; again, so we can all learn from one another. We know certain parts of content registration, metadata retrieval, and everything in between are, well, complicated. The Crossref learning curve can be steep for all of us. Collectively, our technical support team has more than 25 years of Crossref experience, and we’re continuously learning new things about the Research Nexus and the scholarly ecosystem from one another and all of you.\nLearning through this complexity is one of the most enriching parts of our days. Our daily stand-up, modeled off of different software development methodologies, where together we troubleshoot tangly questions from all of you, share ideas, and just keep up-to-date on the latest from across the organization leads to a lot of knowledge exchange. So, years ago, we decided to transform the issues we discuss in those stand-ups into public-facing posts in our Community Forum. It gave us the opportunity to share much-needed examples in a new community space; and, we knew, since these were the issues we were all discussing and learning from ourselves, that many of you would also benefit from us surfacing the topics openly. We call these posts tickets of the month, since the majority of topics we discuss have originated from tickets in our support center.\nExamples of some of the most popular topics in the last two-plus years have been:\nGetting started with REST API queries and the follow-up post Using Postman for API Queries Content Registration: Did it work? The new Labs Reports are here Are you an OJS user? Are the below questions familiar? Get Citation Counts for all Articles in a Particular Journal Snapping our fingers Like I said, these posts originated from real-life questions of us from our community members. In most cases, we’ve been asked these questions by many of you. These Community Forum posts are our attempts to unlock understanding of our services, rich metadata, or the larger Research Nexus. Said another way: we all see value in putting in the effort to post one more example or answer that nuanced question. Perhaps one of our posts will include an example that really resonates with you and/or your work.\nIn that spirit, I asked Evans, Kathleen, Paul, Poppy, and Shayn to answer this question below (yes, I’m going to weigh in, too):\nWhat’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?\nEvans, Technical Support Specialist As a publisher and a Crossref member, at one point or another, you might have made a mistake in the metadata deposited for a given DOI. 
I’m sure after the slight ‘shock’, the next question you had in mind was, ‘How can I correct this mistake?’ Well, here is a simplified guide on how to do that correction/update!\nCan I modify/ update the metadata of a registered DOI? As indicated by my colleague Shayn below in this blog post, Crossref DOIs are designed to be persistent (and cannot be changed/deleted once registered). And YES, you can update the metadata associated with any of your registered DOIs whenever necessary, at no additional fee.\nHow can I perform a standard metadata update? To add, change, or remove any metadata element from your existing records, you generally just need to resubmit your complete metadata record with the correct/new changes included. How you choose to update a DOI metadata record is highly dependent on the content registration tool/platform you are using/comfortable working with, as described below:\nOJS: Navigate to the article record you wish to update, add in your new metadata/delete relevant metadata fields, and deposit it again using the Crossref import/export plugin. You must be running at least OJS 3.1.2 and have the Crossref import/export plugin enabled.\nWeb deposit form: Open the web deposit form, and re-enter all the metadata, including the new changes - leave the relevant field blank to delete it, or add in your new metadata to update it - and resubmit the form (note: there are a handful of exceptions to this for the web deposit form).\nDepositing XML files with Crossref: Make changes to the relevant XML file and resubmit it to Crossref via the admin tool. When making an update, you must supply all the bibliographic metadata for the record being updated, not just the fields that need to be changed. During the update process, we overwrite the existing metadata with the new information you submit, and insert null values for any fields not supplied in the update. This means, for example, that if you’ve supplied an online publication date in your initial deposit, you’ll need to include that date in subsequent deposits if you wish to retain it. Note that the value included in the element must be incremented each time a DOI is updated.\nIf you’re looking for real-life examples of other members who have updated their metadata, the Community Forum is a great starting point. If you have follow-up questions on any of the existing threads, I invite you to post a message today.\nKathleen, Technical Support Specialist One of my favorite types of queries to tackle are those regarding content registration problems. I love a good mystery and getting to the bottom of why that pesky submission just didn\u0026rsquo;t succeed. Sometimes members come to us with an error message and specific questions about what has gone awry. But, in fact, two of the most common questions we receive are: 1) I deposited something; did it work? and 2) I deposited something; why isn\u0026rsquo;t it showing up?!\nTo address the first question of whether your submission went through or not, I wrote a forum post back last June talking about how to use the admin tool to see whether your registration was successful or not. We know there are also email alerts and perhaps status messages within your own registration platform, but using the admin tool is a great way to concretely check where your submission has ended up. If it\u0026rsquo;s not there, we didn\u0026rsquo;t get it!\nUsing the admin tool is also a great way to get more details about the submission and more information in case the submission happened to fail. 
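If you prefer a script to clicking around in the admin tool, the processing result for a submission can also be fetched over HTTPS. The sketch below assumes the long-standing submissionDownload endpoint and its usr, pwd, doi_batch_id and type parameters; please check our documentation for the current details, and keep credentials out of any code you share.
import requests

# A minimal sketch: fetch the processing result for one submission by its batch ID.
# Assumes the submissionDownload endpoint and parameter names (usr, pwd, doi_batch_id, type);
# the credentials and batch ID below are placeholders.
resp = requests.get(
    "https://doi.crossref.org/servlet/submissionDownload",
    params={
        "usr": "USERNAME",           # placeholder credentials
        "pwd": "PASSWORD",
        "doi_batch_id": "12345678",  # the batch ID from your deposit
        "type": "result",
    },
    timeout=60,
)
print(resp.text)  # the XML processing log for that submission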
You may have had the experience in which you contacted us with a question about a failed deposit, and we asked you for the submission ID. You can find that info in the admin tool! And we ask for that, because that helps us get to the bottom of those error message mysteries.\nAnd, as for the second question of when will your DOI be active, my colleague, Paul, wrote a fantastic post on the forum (with an excellent flowchart and all!), explaining when you can expect to see your DOI up and running. Often members will submit a deposit and expect the DOI to resolve immediately. When that doesn\u0026rsquo;t happen, many think that something has gone wrong or perhaps there is an error, but, in fact, our systems may still be updating and processing the metadata.\nI recommend giving these two posts a read if you\u0026rsquo;re at all concerned about whether you\u0026rsquo;re depositing your content correctly or not. Hopefully, they\u0026rsquo;ll help ease your content-registration worries.\nIsaac, Technical Support Manager Oh, thanks for asking! Many of our members, after receiving one of our reports, will respond to us in support with a message similar to: ‘What did I do wrong? Please help me fix this. I don’t want to be out of compliance!’\nThe receipt of one of our reports does not necessarily mean that you’ve done anything wrong. In truth, the reports we send to our official member contacts are produced using very simple logic. It’s true that they may signal larger, more complicated problems, but we really need your help to determine next steps (and, in some cases, no action is needed because there is no issue for members to fix (e.g., many failed resolutions within the resolution reports)).\nLet’s look at the conflict and resolution reports since those are the reports we get the most questions about:\nConflict reports are the most complicated of our reports to navigate. But, the reports are generated using simple logic: if you register two or more DOIs with matching bibliographic metadata, we’ll flag those DOIs as being in conflict, which will generate a warning message at the time of registration and a subsequent conflict report. When members receive this report, we often get the sense that members simply want us, the technical support team, to tell them how to fix it. The problem is we don’t know your content, so we don’t know if the two DOIs do represent a duplicate, or if both DOIs, while having very similar bibliographic metadata, are legitimate and will be maintained going forward (e.g., for errata). Paul wrote a great post in our community forum about what conflicts are and how to resolve them.\nResolution reports, like conflict reports, are generated using simple logic: a resolution is the result of a click on that DOI. If a DOI has been registered, that click results in a successful resolution. If that DOI has not been registered, that click results in a failed resolution. Our monthly report is a count of those resolutions - successful and failed. Failures can represent content registration errors in a member’s workflow. Or, they can signal that an end user has made a mistake when attempting to click the DOI in question. So, for example, an end user perhaps added an extra period onto their DOI link. Instead of trying to resolve https://0-doi-org.libus.csd.mu.edu/10.5555/cupnfcm2wj, a legitimate DOI, they added a period to the end and tried to resolve https://0-doi-org.libus.csd.mu.edu/10.5555/cupnfcm2wj. instead. 
That extra period at the end of the DOI has made it a completely different DOI that is not registered with us, thus they get a failed resolution. This is pretty common. For members with content being regularly clicked, there will be user errors in the logs appearing as failed resolutions. The first question members should ask themselves when reviewing the failed .csv report within the resolution report is: ‘are any of these DOIs legitimate DOIs that I thought we had registered?’ We have more on the basics of resolution reports also over in our Community Forum.\nPaul, Technical Support Specialist \u0026amp; R\u0026amp;D Support Analyst I know we were asked to name “one thing” but I have two that are closely related. May I snap my fingers twice and fix two issues? [Of course, Paul! Take it away!]\nPaul’s first snap\nOne of the most asked questions we get in support is “why is my DOI not working?” 90% of the time it is down to a failed submission. A good proportion of those failures are a result of title mismatches between the deposited container title and the one we have stored on the system here. There are other error messages that occur, too, which I wrote about back in 2020.\nSo, “why do we fail submissions because of title differences?” You might ask.\nWell, the title and ISSN/ISBN and/or the title level DOIs act like locks to the title record, which need the right keys to unlock the title so that you can add or update the records against it. So if you don’t match what was in the original submission, you get a failure. Without that stringent check, we would have way too many iterations of titles and matching to those would be a nightmare. Not to mention sorting those DOIs into one container in the REST API.\nIsaac wrote a great forum post about these title-level issues as well.\nIf a title update is required due to an error with an original title deposit, then these need to be made by the support team, so get in touch with us on the Community Forum.\nAnd, a second\nPermissions against titles and DOIs: Lots of our members don’t realise that each DOI has its own permissions against the prefix that currently ‘owns’ or is associated with that DOI in the background.\nIt would be fair to assume you can tell just by looking at a DOI who the current publisher is, based on the prefix at the start —but that’s not always the case. Things can (and often do) change. Individual journals get purchased by other publishers, and whole organizations get bought and sold.\nWhat you can tell from looking at a DOI prefix is who originally registered it, but not necessarily who it currently belongs to. That’s because if a journal (or whole organization) is acquired, DOIs don’t get deleted and re-registered to the new owner. The update will of course be reflected in the relevant metadata, but the prefix on the DOI will stay the same. It never changes—and that’s the whole point, that’s what makes the DOI persistent. Isaac also wrote this in much more detail and explains the internal Crossref processes in his blog “What can often change, but always stays the same?“\nThese permissions are very important to understand when it comes to title transfers and working with updating your metadata against transferred DOIs to prevent duplicate DOIs for the same work.\nPoppy, Technical Support Contractor As a researcher myself, I’d like to talk about references in a journal article, book, conference paper, etc. (I’ll just use ‘article’ going forward for simplicity). 
These are the references included in an article by the author. References in one article result in citations for another article. It\u0026rsquo;s the thing every author dreams of and accruing citations can be a big deal for authors, journals, and publishers.\nFor readers, articles with no references can be less discoverable using systems that use citation links for relevance, and that discoverability is of critical importance for our members who decide to register references with us. We all want your content to be shared, cited, linked, and used far and wide.\nWe receive many questions from authors asking why citations don’t show up; it\u0026rsquo;s usually due to metadata deposits with no references included. There may be an assumption that our process is like Google Scholar, which crawls full text and websites. This misunderstanding has a big impact on references and citation counts. However, as we do not store a copy of the paper, our intake system does not extract references from the article, regardless if they have a DOI. This is one of the main reasons that Crossref citation counts are lower than services that use extraction methods. We only store the data that a publisher registers and maintains with us. On deposit of a metadata record that includes references, our system performs a matching process - if there is a match, a cited-by connection is applied to the metadata. With deposits with no references, however, there is no data to match to other articles (and, therefore limitations on the discoverability and no cited-by count increase).\nAn article with no references has big impacts for the authors, the journal, the publisher, researchers, and ultimately, the readers. This can mean decreased distribution of the content itself, reduced citation counts for cited articles, lower impact metrics for journals, and can ultimately affect value for publishers. For example, researchers just don’t include articles without references for scientometric analysis.\nOur documentation on references includes the elements for both structured and unstructured data. Including the DOI in the structured data is best practice as it provides a precise location with rich data for matching. If the matcher does not see a link between the deposited DOI and the cited DOI at the time of deposit, then the references are stored to be crawled with other matching algorithms later. So, we\u0026rsquo;re always working to create those rich cited-by linkages between works (raising the content’s profile and overall discoverability), no matter when you register reference metadata. You can also see how your publisher is doing on depositing references by viewing their Participation Report. If you are an author, you can check if your DOIs that were registered contained any references by using our REST API. Don’t see them? You can always contact the editor of the journal or the publisher that published your paper and ask them to add them. Didn’t hear back? Just drop us a line in the Community Forum, we’re happy to help.\nShayn, Technical Support Specialist Let\u0026rsquo;s \u0026lsquo;zoom out\u0026rsquo; to the big picture. What are DOIs for? What makes them useful? What are we all doing here anyway?!\nThere are a lot of different answers to those questions. It\u0026rsquo;s a complex picture. But, way back in the late ‘90s, the DOI system was designed in order to allow for the creation of unique and persistent identifiers. Crossref members use these identifiers to represent their research outputs and publications. 
This allows for reliable linking to those items, and the ability to identify and communicate the relationships between them, notably (but not exclusively!) citation relationships.\nSo, what do we mean when we say that Crossref DOIs should be unique and persistent? In basic terms, unique means that there is only a single Crossref DOI registered for a given citable research output. And, persistent meaning that the DOI associated with a given research output today will continue to be associated with, and link to, that same research output indefinitely into the future.\nYes, there are some grey areas, and we know that everything doesn\u0026rsquo;t always work 100% perfectly all the time. But, the more deviations from persistence and uniqueness, the harder it becomes for end-users, publishers, Crossref, and other services which make use of our metadata to reliably find research outputs and reliably relate them to one another. It weakens the value and utility of DOIs for everyone.\nSo, what does this mean in practice?\nBe certain that every item you register with Crossref is something you can maintain in the long-term. Have an arrangement with an archive that can take responsibility for your content if your organization stops hosting it or ceases to exist. Don’t register things that you know will only exist for a short time. When you\u0026rsquo;re about to register new content, be absolutely sure that it hasn’t been registered already, either by your organization or any other organization. If you acquire a new journal from another publisher, have a process in place to check what content has already been registered and adopt the use of the DOIs registered by the prior publisher for that content. We can always provide a list of the existing DOIs for a journal. If you publish books, and have a co-publishing agreement with another publisher, distributor, or hosting platform, be aware that one of those other parties may have already registered DOIs for your books. Adopt the use of those DOIs rather than assigning and registering new ones. And, if you don’t want them to do that going forward, communicate that to your co-publishing partners. When mistakes happen, inadvertently resulting in duplicate DOIs for a single item, identify them quickly. Alias the new duplicate DOI to the long-standing original DOI, and remove all instances of the new DOI from your website or platform. Ensure that your publishing software, platform, or journal management system can accommodate DOIs with various prefixes for the same publication. You should be able to use (display, link, update metadata and URLs for) the DOIs registered for older content by any prior publishers as easily as you use the DOIs that you registered yourself for more recent content. Things like persistence and uniqueness can sound like theoretical abstractions, but they actually play an important role in the day-to-day grind of your publishing operations. Their impact on linking, citing, discovery, and analysis of your content is concrete and important. Thus, it’s not surprising that we often hear from members and others in the research community who share this commitment to persistence, uniqueness, and overall rich, accurate metadata. You’ll see that play out in the Community Forum where members and users get involved to troubleshoot issues, compare notes, and share ideas with us and one another. We appreciate the commitment to the Research Nexus and the overall spirit to serve in this growing community. 
Like we said at the top, we’re all wired to contribute in this way, so building an open, welcoming space that moves us forward excites us.\nAgain, we invite you to join the discussion on this and many other Crossref-related topics over in our Community Forum.\n", "headings": ["A little about us and what drives the team","Snapping our fingers","Evans, Technical Support Specialist","Kathleen, Technical Support Specialist","Isaac, Technical Support Manager","Paul, Technical Support Specialist \u0026amp; R\u0026amp;D Support Analyst","Poppy, Technical Support Contractor","Shayn, Technical Support Specialist"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/equity/", "title": "Equity", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/susan-collins/", "title": "Susan Collins", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-gem-program-year-one/", "title": "The GEM program - year one", "subtitle":"", "rank": 1, "lastmod": "2024-01-24", "lastmod_ts": 1706054400, "section": "Blog", "tags": [], "description": "In January 2023, we began our Global Equitable Membership (GEM) Program to provide greater membership equitability and accessibility to organisations located in the least economically advantaged countries in the world. Eligibility for the program is based on a member\u0026rsquo;s country; our list of countries is predominantly based on the International Development Association (IDA). Eligible members pay no membership or content registration fees.\nThe list undergoes periodic reviews, as countries may be added or removed over time as economic situations change.", "content": "In January 2023, we began our Global Equitable Membership (GEM) Program to provide greater membership equitability and accessibility to organisations located in the least economically advantaged countries in the world. Eligibility for the program is based on a member\u0026rsquo;s country; our list of countries is predominantly based on the International Development Association (IDA). Eligible members pay no membership or content registration fees.\nThe list undergoes periodic reviews, as countries may be added or removed over time as economic situations change. Sri Lanka was added to the GEM program in March 2023 as they were recategorised to the IDA classification by the World Bank.\nWhen the program launched, we had 214 existing members eligible for the program who then were no longer charged for membership or content registration. 
Since the program began, we have welcomed an additional 131 new members into the program, including our first members from Cambodia and Togo.\nCountry | As of 1/1/2023 (start of GEM) | Additions in 2023 (end of first year of GEM) | Total\nAfghanistan | 6 | 4 | 10\nBangladesh | 56 | 33 | 89\nBenin | 1 | 1 | 2\nBhutan | 4 | 2 | 6\nBurkina Faso | 2 | 0 | 2\nBurundi | 1 | 0 | 1\nCambodia | 0 | 2 | 2\nCentral African Republic | 1 | 0 | 1\nCongo, Democratic Republic | 1 | 11 | 12\nEthiopia | 4 | 6 | 10\nGhana | 14 | 7 | 21\nGuyana | 1 | 1 | 2\nHaiti | 1 | 0 | 1\nKosovo | 2 | 2 | 4\nKyrgyz Republic | 22 | 3 | 25\nLaos | 1 | 0 | 1\nMadagascar | 1 | 1 | 2\nMalawi | 1 | 0 | 1\nMaldives | 1 | 0 | 1\nMali | 2 | 0 | 2\nMauritania | 1 | 0 | 1\nMyanmar | 1 | 0 | 1\nNepal | 20 | 18 | 38\nNicaragua | 1 | 0 | 1\nRwanda | 4 | 1 | 5\nSenegal | 3 | 3 | 6\nSomalia | 2 | 2 | 4\nSri Lanka | 13 | 5 | 18\nSudan | 9 | 2 | 11\nTajikistan | 5 | 1 | 6\nTanzania | 9 | 7 | 16\nTogo | 0 | 1 | 1\nUganda | 3 | 6 | 9\nYemen | 16 | 12 | 28\nZambia | 5 | 0 | 5\nWith help from our ambassadors based in GEM countries, we organised and co-hosted several webinars to introduce the program, along with an introduction to Crossref, and the benefits of including all kinds of research objects in the Research Nexus.\nIn April, our team, together with ambassador Binayak Raj Pandey, provided an overview of Crossref for members and organisations in Nepal. Our team and ambassadors, Dr Md Jahangir Alam and Shaharima Parvin, hosted two webinars in May for members and organisations in Bangladesh. The first webinar provided an introduction to Crossref, our services, and the GEM Program. The second webinar focused on the methods to register content and how to add and update metadata. In September, ambassador Baraka Manjale Ngussa joined us for an introductory webinar aimed at organisations in Tanzania.\nIn November, CARLIGH (the Consortium of Academic and Research Libraries in Ghana), Crossref, and EIFL co-hosted a webinar for librarians and journal editors in Ghana with a discussion on the GEM program and Crossref services.\nIn 2024, we will continue to collaborate with our ambassadors and other members of the community to offer more opportunities for organisations in GEM-eligible countries to learn about the program and the benefits of membership for content discovery.\nThe program was initially met with scepticism by some organisations in GEM-eligible countries, who wanted to be certain that it wasn\u0026rsquo;t a free trial, that there were no hidden fees, and that they would not be required to pay later for other services. Others expressed concern that Crossref would introduce fees after a year or two. Though we were able to clarify these aspects of the program, we understand the concerns and are working to ensure we provide clarity and transparency about the program. Additionally, we will be conducting a complete review of our fees in 2024, and we will ensure that GEM-eligible members have input.\nAlthough the program offers relief from fees, many organisations require technical assistance and language support. The GEM program would benefit from an increase in local Sponsors to facilitate membership and provide support, particularly in countries with the highest growth, such as Bangladesh, Nepal, Yemen, Kyrgyz Republic, and Ghana. Though we have Sponsors working with members who are in GEM countries (e.g. PKP), we do not yet have any Sponsors who are based in a GEM country.\nWe will be working with relevant like-minded organisations, such as PKP, DOAJ, INASP, OASPA, EIFL, and others, to help identify suitable candidates for new Sponsors in underserved regions and engage them proactively. Additionally, we will consult with our ambassadors in GEM countries to help identify potential Sponsors. 
We are beginning the year by making the most of the momentum created in African countries (Uganda, Ghana, Tanzania) and looking to develop new networks in other parts of the world in Q2-Q4 of this year.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/increasing-crossref-data-reusability-with-format-experiments/", "title": "Increasing Crossref Data Reusability With Format Experiments", "subtitle":"", "rank": 1, "lastmod": "2024-01-19", "lastmod_ts": 1705622400, "section": "Blog", "tags": [], "description": "Every year, Crossref releases a full public data file of all of our metadata. This is partly a commitment to POSI and partly just what we do. We want the community to re-use our metadata and to find interesting ends to which they can be put!\nHowever, we have also recognized, for some time, that 170GB of compressed .tar.gz files, spread over 27,000 items, is not the easiest of formats with which to work.", "content": "Every year, Crossref releases a full public data file of all of our metadata. This is partly a commitment to POSI and partly just what we do. We want the community to re-use our metadata and to find interesting ends to which they can be put!\nHowever, we have also recognized, for some time, that 170GB of compressed .tar.gz files, spread over 27,000 items, is not the easiest of formats with which to work. For instance, there\u0026rsquo;s no indexing capacity on these files, meaning that it is virtually impossible simply to pull out the record for a DOI. Decompressing the .tar.gz files takes a good three hours or more even on high-end hardware, without any additional processing.\nTo that end, the Crossref Labs team has been experimenting with different formats for trial release that might allow us to reach broader audiences, including those who have not previously worked with our metadata files. The two new formats, alongside the existing data file format, with which we have been experimenting, are JSON lines and SQLite.\nJSON-L\nThe first format with which we\u0026rsquo;ve been experimenting is JSON-L (JSON lines). With one JSON entry per line, as opposed to one giant JSON file/block, JSON-L lends itself to better parallelisation in systems such as SPARC, because the data can easily be partitioned.\nThis data format also has the benefit of being appendable, one line at a time. Unlike conventional JSON, which requires the entire structure to be parsed in-memory before an append is possible, JSON-L can simply be written to and updated. It\u0026rsquo;s also possible to do multi-threaded write operations on the file, without each thread having to parse the entire JSON structure and then sync with other threads.\nIn our experiments, JSON-L came with substantial parallelisation benefits. Our routines to calculate citation counts can be completed in ~20-25 minutes. Calculating the number of resolutions per container title takes less than half an hour.\nSQLite\nSQLite is a library written in C with client bindings for Python, Java, C#, and many other languages that produces an on-disk, portable, single-file SQL database. You can produce the SQLite file using our openly available Rust program, rustsqlitepacker. We also have a Python script that can produce the final SQLite file, for those happier working in this language.\nThe resultant SQLite file is approximately 900GB in size, so it requires quite a lot of free disk space to create in the first place (alongside storage of the data file that is needed to build it). 
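Once the file has been built, a lookup needs only Python's built-in sqlite3 module; here is a minimal sketch assuming the works table and metadata column described below (the filename is just a placeholder for your local copy of the database).
import json
import sqlite3

# A minimal sketch: look up one DOI and pull a field out of the stored JSON.
# Assumes the "works" table with "doi" and "metadata" columns described below;
# "crossref.sqlite" is a placeholder filename for your local copy.
con = sqlite3.connect("crossref.sqlite")
row = con.execute(
    "SELECT metadata FROM works WHERE doi = ?",
    ("10.1080/10436928.2020.1709713",),
).fetchone()
if row:
    record = json.loads(row[0])
    print(record.get("title"))
con.close()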
However, queries are snappy when looking up by DOI and other indexes can be constructed (the indexing part of the procedure takes about 1.5 hours per field).\nThe database structure, at present, is the bare minimum that will work. It contains a list of fields for searching/indexing \u0026ndash; DOI, URL, member, prefix, type, created, and deposited \u0026ndash; and a metadata field that contains the JSON response that would be returned by the API for this value.\nThis allows for the processing and extraction of individual JSON elements using SQLite\u0026rsquo;s built-in json_extract method. For example, to get just the title of an item, you can use:\nSELECT json_extract(metadata, '$.title') FROM works WHERE doi = '10.1080/10436928.2020.1709713';\nThe balance that we have had to strike here is between flattening the JSON so that more fields are indexable and searchable, as against the trade-off in time and processing that this takes to create the database in the first place. The first draft version of our experiment was wildly ambitious in flattening all the records and using an Object Relation Mapper (ORM) to present Python models of the database. Like painting the Forth Bridge, this initial attempt would not finish in any sane length of time. Indeed, by the time we\u0026rsquo;d created this year\u0026rsquo;s data file, we\u0026rsquo;d need to begin work on the next.\nWhat are the anticipated use cases here? When people need to do an offline metadata search on an embedded device, for instance, the portability and indexed lookup of the SQLite database can be very appealing. One of our team has even got the database running on a Raspberry Pi 5. You can also load the database into Datasette if you want to explore it visually.\nWhere do we go from here with this? It would be good to flatten a few more fields, but we would welcome feedback on use cases that we haven\u0026rsquo;t anticipated for SQLite and we\u0026rsquo;d love to hear whether this is already too unwieldy (at 900GB).\nData Files\nAs usual, we will be releasing the annual data file in the next few months. As an experiment this year, we will also be releasing the tools that can be used on that file to produce these alternative file formats. We will consider releasing the final data files for each of these formats, too.\nWhat we would like to hear from the community is whether there are other data file formats that you might wish to use. Are there use cases that we haven\u0026rsquo;t anticipated? 
What would you ideally like in terms of file formats?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/140a/", "title": "140A", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/abstracts/", "title": "Abstracts", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/bianca-kramer/", "title": "Bianca Kramer", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/i4oa-hall-of-fame-2023-edition/", "title": "I4OA Hall of Fame - 2023 edition", "subtitle":"", "rank": 1, "lastmod": "2024-01-09", "lastmod_ts": 1704758400, "section": "Blog", "tags": [], "description": "The Initiative for Open Abstracts (I4OA) was launched in September 2020 to advocate and promote the unrestricted availability of the abstracts of the world\u0026rsquo;s scholarly publications, particularly journal articles and book chapters, in trusted repositories where they are open and machine-accessible. I4OA calls on all scholarly publishers to open the abstracts of their published works and, where possible, to submit them to Crossref.\n", "content": "The Initiative for Open Abstracts (I4OA) was launched in September 2020 to advocate and promote the unrestricted availability of the abstracts of the world\u0026rsquo;s scholarly publications, particularly journal articles and book chapters, in trusted repositories where they are open and machine-accessible. I4OA calls on all scholarly publishers to open the abstracts of their published works and, where possible, to submit them to Crossref.\nSince the launch of I4OA, we have been tracking the openness of abstracts for all Crossref members over time (for data and code, see this GitHub repository). For a subset of 40+, mostly larger, publishers, the proportion of current journal articles (published in the current year and preceding two years) that have abstracts deposited in Crossref is shown in a chart on the I4OA website, which is updated quarterly (Figure 1).\nFigure 1: Proportion of current journal articles from selected publishers that have open abstracts in Crossref. Data collected on January 1, 2024 for publication years 2021-2023. 
Publishers already supporting I4OA are shown in orange.\nThese longitudinal data and accompanying visualisations allow us to identify and highlight good examples from 2023: publishers (both large and small) who newly started to make abstracts openly available last year and/or who managed to get the proportion of their articles with open abstracts close to 100%1.\nWhile we highlight some of these examples below in our \u0026lsquo;Hall of Fame\u0026rsquo;, it\u0026rsquo;s important to also acknowledge all the publishers that already were depositing abstracts to Crossref for most or all of their journal articles prior to 2023, thereby contributing to the availability of abstracts as part of a rich ecosystem of open metadata, for others to use and build upon. Hall of Fame - Part 1: publishers included in I4OA visualisation For the set of (mostly larger) publishers included in the visualisation on the I4OA website, Figure 2 shows the difference in the proportion of abstracts available in Crossref between January and December 2023 for journal articles published in 2021-2023.\nA number of publishers stand out from this figure:\nWiley announced in October 2022 that it was joining I4OA and would be making abstracts available through Crossref. In August 2023, Wiley started to deposit abstracts to Crossref, and at the end of 2023, the proportion of current journal articles with open abstracts was 77%.\nThis makes Wiley the first of the four largest traditional commercial publishers to deposit abstracts for the majority of journal articles they publish. Springer Nature does this only for their current open access articles, while Elsevier and Taylor \u0026amp; Francis2 do not yet provide abstracts to Crossref at all. SAGE, the fifth largest traditional commercial publisher, was a founding member of I4OA and has open abstracts for 85% of current journal articles. Among society publishers, the American Geophysical Union (AGU) went from 7% to 99% open abstracts for current journal articles last year, which is a great achievement. The publishing arm of the American Institute of Physics (AIP Publishing) joins them in reaching close to 100% open abstracts, going from 41% to 95% in 2023.\n1Depending on the type of journal(s) of a given publisher, the maximal coverage of open abstracts will often be somewhat below 100%, as in Crossref, all journal content is assigned the type ‘journal article’. This includes e.g. editorials, letters to the editor and other publication types that are not always expected to have abstracts.\n2The numbers for Wiley and Taylor \u0026amp; Francis do not include Hindawi and F1000 Research, respectively, as these have separate Crossref member IDs. As most full open access publishers, both Hindawi and F1000 Research have high proportions of open abstracts (81% and 98%, respectively).\nCAIRN and Project Muse, two publishing platforms in the humanities and social sciences representing a number of individual publishers, both started including abstracts in the metadata they provide to Crossref in 2023. At the end of 2023, CAIRN had abstracts available for 41% of current journal articles, while Project Muse was just starting out at 5%. Both will hopefully increase further this coming year.\nReturning to traditional commercial publishers, Wolters Kluwer Health, part of Wolters Kluwer, had seen a slow growth in the proportion of journal articles with open abstracts in the years prior to 2023, going from 2% to 10%. 
However, they showed a rapid increase in 2023, ending the year with 52% open abstracts.\nWhile it is good to see publishers who have publicly committed their support for I4OA follow through with opening their abstracts (like Wiley and AIP), it is also very encouraging to see publishers who are not (yet) listed as I4OA supporters do so. This shows a growing awareness and action on this issue beyond advocacy through I4OA alone. And of course, we would love to list these publishers on our website as official supporters of I4OA!\nFigure 2 also shows some cases where the proportion of open abstracts has gone down during the year. This can be due to temporary technical issues in depositing abstracts (as was the case for Hindawi). Theoretically, the proportion of open abstracts can also go down when publishers stop providing abstracts altogether during the year, but we have not observed that to be the case.\nFigure 2: Development in the proportion of open abstracts in 2023 for current journal articles (publication years 2021-2023) from selected publishers. Publishers already supporting I4OA are shown in orange. Light orange/blue dots show the proportion of open abstracts in January 2023, and dark orange/blue dots in December 2023.\nHall of Fame - Part 2: other publishers Among the many publishers not included in the limited selection shown in the I4OA visualisation, there are also some interesting highlights of publishers either starting out to deposit abstracts (and reaching a sizeable proportion) or having deposited open abstracts for almost all their current journal articles in 2023. The examples below drew our attention in 2023; they include a number of medium-sized publishers as well as a group of smaller publishers that deserve special attention.\nThe European Molecular Biology Organization (EMBO) went from 0% to 42% open abstracts in 2023. However, from January 2024 onwards, several EMBO journals were transferred to Springer Nature, so EMBO can no longer be tracked at publisher level in Crossref. It will still be possible to look at the development of open abstracts for individual EMBO journals.\nThe Institution of Engineering and Technology (IET), a medium-sized publisher, started to deposit abstracts in 2023, reaching 33% open abstracts for current journal articles at the end of the year.\nThe Acoustical Society of America (ASA) had open abstracts for almost all their current journal articles at the end of 2023, increasing from 50% to 97%.\nFinally, in the second quarter of 2023, a group of over 200 smaller Turkish publishers saw large increases in their coverage of open abstracts, resulting in open abstracts for 95%-100% of their current journal articles. Consultation with Crossref pointed to the potential supporting role of DergiPark, one of the largest Crossref sponsors in Turkey. This is a great example of developments in open metadata at smaller publishers.\nLooking forward At the beginning of 2024, the proportion of current journal articles published by Crossref members with open abstracts has reached 49.7%, up from 20.7% when I4OA was launched in September 2020. This is thanks to a growing number of publishers who are depositing abstracts to Crossref, often depositing open abstracts for close to 100% of their journal articles.\nThis blog post has highlighted a number of publishers who contributed to this growth in the availability of open abstracts in 2023. 
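As a rough illustration of how anyone can check this kind of coverage for themselves, the sketch below uses the public Crossref REST API to compare a member's recent journal articles that carry abstracts against their total output. It is a simplification, not the I4OA tracking pipeline (that code lives in the GitHub repository mentioned above), and the member ID and date window are placeholders.

```python
import requests

API = "https://api.crossref.org/members/{member}/works"

def abstract_coverage(member_id, from_date="2021-01-01", mailto="you@example.org"):
    """Share of a member's journal articles published since from_date
    that have an abstract deposited with Crossref."""
    def count(extra_filter=""):
        filters = f"type:journal-article,from-pub-date:{from_date}{extra_filter}"
        resp = requests.get(
            API.format(member=member_id),
            params={"filter": filters, "rows": 0, "mailto": mailto},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["message"]["total-results"]

    total = count()
    with_abstracts = count(",has-abstract:true")
    return with_abstracts / total if total else 0.0

# e.g. abstract_coverage(1234)  # replace 1234 with the member ID you are interested in
```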
We hope these examples will inspire other publishers to start doing the same and are looking forward to following the growth in the availability of open abstracts in 2024.\nFor publishers that started to deposit abstracts in recent years and are doing so for newly published articles only, our data on open abstracts for current journal articles will look better in 2024 than in 2023, as only articles published in the current year and two preceding years are taken into account.\nHowever, the benefits of having abstracts openly available from a central location such as Crossref (both for direct usage and for integration in other open scholarly infrastructures) are not limited to recent publications only. Hopefully, publishers currently depositing abstracts to Crossref will continue to do so both for newly published articles as well as for the backfiles of journal articles already published.\nPublishers who would like to be added to the list of I4OA supporters, or who would like more information on how to deposit abstracts for both new and existing journal articles, are very welcome to reach out to I4OA. More information about open abstracts in general, and I4OA in particular, can also be found in the FAQ on the I4OA website.\nThe author would like to thank Ludo Waltman (CWTS) and Ginny Hendricks (Crossref) for useful feedback on an earlier draft of this post.\nThis blog post is published under a CC BY 4.0 license. The header image is an adaptation of an image by Adam Jones available from Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Interior_02_of_Rock_%26_Roll_Hall_of_Fame_and_Museum,_Cleveland_%28by_Adam_Jones%29.jpg) and is shared under a CC BY-SA license.\n", "headings": ["Hall of Fame - Part 1: publishers included in I4OA visualisation","Hall of Fame - Part 2: other publishers","Looking forward "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/working-at-crossref/policies/", "title": "Organizational policies & procedures", "subtitle":"", "rank": 4, "lastmod": "2024-01-04", "lastmod_ts": 1704326400, "section": "Operations & sustainability", "tags": [], "description": "Our policies and procedures handbook details organizational policies related to how we work. Along with our employee manual, we share these policies for inspection or reuse by others.", "content": "Our policies and procedures handbook details organizational policies related to how we work. Along with our employee manual, we share these policies for inspection or reuse by others.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/working-at-crossref/", "title": "Working at Crossref", "subtitle":"", "rank": 4, "lastmod": "2024-01-04", "lastmod_ts": 1704326400, "section": "Operations & sustainability", "tags": [], "description": "Crossref is powered by a team of about 50 people who are based all over the world. We aim to provide a consistent employment experience, while also complying with the labour practices of the countries where our staff live.\nOur employee handbook details the benefits we offer the team. This document, along with our organizational policies \u0026amp; procedures, Code of conduct, and respective employment contracts govern how we work together.", "content": "Crossref is powered by a team of about 50 people who are based all over the world. 
We aim to provide a consistent employment experience, while also complying with the labour practices of the countries where our staff live.\nOur employee handbook details the benefits we offer the team. This document, along with our organizational policies \u0026amp; procedures, Code of conduct, and respective employment contracts govern how we work together.\nTo the extent possible, we make these policies publicly available for inspection or reuse by others.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2023/", "title": "2023", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/discovering-relationships-between-preprints-and-journal-articles/", "title": "Discovering relationships between preprints and journal articles", "subtitle":"", "rank": 1, "lastmod": "2023-12-07", "lastmod_ts": 1701907200, "section": "Blog", "tags": [], "description": "In the scholarly communications environment, the evolution of a journal article can be traced by the relationships it has with its preprints. Those preprint–journal article relationships are an important component of the research nexus. Some of those relationships are provided by Crossref members (including publishers, universities, research groups, funders, etc.) when they deposit metadata with Crossref, but we know that a significant number of them are missing. To fill this gap, we developed a new automated strategy for discovering relationships between preprints and journal articles and applied it to all the preprints in the Crossref database. We made the resulting dataset, containing both publisher-asserted and automatically discovered relationships, publicly available for anyone to analyse.\n", "content": "In the scholarly communications environment, the evolution of a journal article can be traced by the relationships it has with its preprints. Those preprint–journal article relationships are an important component of the research nexus. Some of those relationships are provided by Crossref members (including publishers, universities, research groups, funders, etc.) when they deposit metadata with Crossref, but we know that a significant number of them are missing. To fill this gap, we developed a new automated strategy for discovering relationships between preprints and journal articles and applied it to all the preprints in the Crossref database. We made the resulting dataset, containing both publisher-asserted and automatically discovered relationships, publicly available for anyone to analyse.\nTL;DR We have developed a new, heuristic-based strategy for matching journal articles to their preprints. It achieved the following results on the evaluation dataset: precision 0.99, recall 0.95, F0.5 0.98. The code is available here.\nWe applied the strategy to all the preprints in the Crossref database. It discovered 627K preprint–journal article relationships.\nWe gathered all preprint–journal article relationships deposited by Crossref members, merged them with those discovered by the new strategy, and made everything available as a dataset. 
There are 642K relationships in the dataset, including:\n296K provided by the publisher and discovered by the strategy, 331K new relationships discovered by the strategy only, 15K provided by the publisher only. In the future, we plan to replace our current matching strategy with the new one and make all discovered relationships available through the Crossref REST API.\nIntroduction Relationships between preprints and journal articles link different versions of research outputs and allow one to follow the evolution of a publication over time. The Crossref deposit schema allows Crossref members to provide these relationships for new publications, either as a has-preprint relationship deposited with a journal article, or an is-preprint-of relationship deposited with a preprint.\nTo assist members who deposit preprints, we also try to connect deposited journal articles with preprints. The current method looks for an exact match between the title and first authors. We send possible matches as suggestions to the preprint server, which decides whether to update the metadata with the relationship.\nAt the time of writing, 137,837 journal articles in the Crossref database have a has-preprint relationship1, and 562,225 works of type posted-content (preprints belong to this type) have an is-preprint-of relationship2.\nWe suspected that many preprint–journal article relationships are missing, as some members inevitably fail to deposit them, even after suggestions from the current matching strategy. Another factor is that the current strategy is fairly conservative, and probably misses a significant number of relationships. For these reasons, we decided to investigate whether we could improve on the current process. Doing so would allow us to infer missing relationships on a large scale, similar to how we automatically match bibliographic references to DOIs.\nThis preprint matching task can be defined in two directions:\nWe start with a journal article and we want to find all its preprints. We start with a preprint and we want to find a subsequently published journal article. On the one hand, matching from journal articles to preprints would allow us to enrich the database continually with new relationships, either periodically or every time new content is added. Since journal articles tend to appear in the database later than their preprints, it makes sense for a new journal article to trigger the matching and not the other way round. This way we can expect the potential matches to be already in the database at the time of matching.\nOn the other hand, matching from preprints to journal articles can be useful in a situation where we want to add relationships in an existing database retrospectively. In our case, the database contains many more journal articles than preprints, so for performance reasons it is better to start with preprints.\nIn both cases we are dealing with structured matching, meaning that we match a metadata record of a work (preprint or journal article), rather than unstructured text.\nAs a result of matching a single preprint or a single journal article, we should expect zero or more matched journal articles/preprints. Multiple matches occur when:\nthere are multiple versions of the matched preprint and/or matched works have duplicates. The image shows the result of matching a journal article to two versions of a preprint:\nMatching strategy Our matching strategy uses the following workflow:\nGathering a short list of candidates using the Crossref REST API. 
Scoring the similarity between the input item and each candidate. A final decision about which candidates, if any, should be returned as matches. Gathering candidates is done using the Crossref REST API\u0026rsquo;s query.bibliographic parameter. The query is a concatenation of the title and authors\u0026rsquo; last names of the input item. We filter the candidates based on their type, to leave only preprints or only journal articles, depending on the direction of the matching. In the future, instead of getting the candidates from the REST API, we will be using a dedicated search engine, optimised for preprint matching.\nScoring candidates is heuristic-based. Similarities between titles, authors, and years are scored independently, and the final score is their average. Titles are compared in a fuzzy way using the rapidfuzz library. Authors are compared pairwise using the ORCID ID, or first/last names if ORCID ID is not available. The similarity score between issued years is 1 if the article was published no earlier than one year before the preprint and no later than three years after the preprint, or 0 otherwise.\nThe final decision is made based on two parameters: minimum score and maximum score difference, both chosen based on a validation dataset. The following diagram depicts the results of applying these two parameters in all possible scenarios. First, any candidate scoring below the minimum score is rejected (grey area in the diagram). Second, the scores of the remaining candidates are compared with the score of the top candidate. If the score of a candidate is close enough to the score of the top candidate, it is returned as a match (blue area).\nThis process can result in the following scenarios:\nScenario A: there is no candidate above the minimum score. This means nothing matches sufficiently, so nothing is returned. Scenario B: there is only one candidate above the minimum score. This means it is the best match and we don\u0026rsquo;t have much of a choice, so it is returned. Scenario C: there are multiple candidates above the minimum score, and they all have similar scores. This means they all are similarly good matches, so all are returned. Scenario D: there are multiple candidates above the minimum score, but their scores differ a lot. In this case, we don\u0026rsquo;t want to return all of them, but only those that are close to the top match. Intuitively, we don\u0026rsquo;t want to return less-than-great matches if we have really great ones. This is when the maximum score difference comes into play: we return the candidates with the “score distance” to the top candidate lower than the maximum score difference. We evaluated this strategy on a test set sampled from the Crossref metadata records. The test set contains 3,000 pairs (journal article, set of corresponding preprints). Half of the journal articles have known preprints and the other half don\u0026rsquo;t. The test set can be accessed here.\nWe used precision, recall, and F0.5 as evaluation metrics:\nPrecision measures the fraction of the matched relationships that are correct. Recall measures the fraction of the true relationships that were matched. F0.5 combines precision and recall in a way that favours precision. The strategy achieved the following results: precision 0.9921, recall 0.9474, F0.5 0.9828. 
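To make the heuristic a little more tangible, here is a simplified sketch of the kind of scoring described above, using the rapidfuzz library. It is not the production code: the record structure, the equal weighting, and the pairwise author comparison are assumptions based on the description in this post.

```python
from rapidfuzz import fuzz

def year_score(article_year, preprint_year):
    # 1 if the article appeared no earlier than one year before and no later
    # than three years after the preprint, as described above; 0 otherwise.
    return 1.0 if -1 <= article_year - preprint_year <= 3 else 0.0

def author_score(article_authors, preprint_authors):
    # Pairwise comparison: use ORCID iDs when both sides have them,
    # otherwise fall back to fuzzy matching on "given family" names.
    if not article_authors or not preprint_authors:
        return 0.0
    scores = []
    for a, p in zip(article_authors, preprint_authors):
        if a.get("ORCID") and p.get("ORCID"):
            scores.append(1.0 if a["ORCID"] == p["ORCID"] else 0.0)
        else:
            a_name = f"{a.get('given', '')} {a.get('family', '')}"
            p_name = f"{p.get('given', '')} {p.get('family', '')}"
            scores.append(fuzz.ratio(a_name, p_name) / 100.0)
    return sum(scores) / len(scores)

def candidate_score(article, preprint):
    # The final score is the average of the title, author, and year similarities.
    title = fuzz.ratio(article["title"], preprint["title"]) / 100.0
    authors = author_score(article.get("authors", []), preprint.get("authors", []))
    years = year_score(article["year"], preprint["year"])
    return (title + authors + years) / 3.0
```

Candidates scoring above the minimum score, and within the maximum score difference of the top candidate, would then be returned as matches, following the scenarios above.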
The average processing time was 0.96s.\nWe have made this strategy (journal article -\u0026gt; preprints) available through the (experimental) API: https://0-marple-research-crossref-org.libus.csd.mu.edu/match?task=preprint-matching\u0026strategy=preprint-sbmv\u0026input=10.1109/access.2022.3213707. The input is the DOI of a journal article we want to match to preprints, and the output is a list of matches found, along with the score for each.\nWe have investigated other approaches to making decisions about which candidates to return as matches (step 3 above), including using machine learning. At present none have outperformed the heuristic approach described above. The heuristic method is also preferred because of its fast performance.\nPreprint–journal article relationship dataset We applied the strategy to the entire Crossref database:\nWe selected all preprints published until the end of August 2023. This included only works with type posted-content and subtype preprint, as reported by the REST API. There were 1,050,247 of them. We ran the matching strategy (preprint -\u0026gt; journal article) on them. This resulted in 627,011 preprint–journal article relationships. The resulting relationships were combined with the relationships deposited by the Crossref members. We included relationships of types has-preprint or is-preprint-of, where both sides of the relationship exist in our database, were published until the end of August 2023, and are of proper types and subtypes (type=journal-article for the journal article and type=posted-content, subtype=preprint for the preprint). The resulting dataset is a single CSV file with the following fields:\npreprint DOI (string) journal article DOI (string) whether the publisher of the journal article deposited this relationship (boolean) whether the publisher of the preprint deposited this relationship (boolean) the confidence score returned by the strategy (float, empty if the strategy did not discover this relationship) The dataset contains:\n641,950 relationships in total, including 580,532 preprints and 565,129 journal articles, 14,939 of them were deposited by the Crossref members, but not discovered by the strategy, 330,826 of them were discovered by the strategy, but not provided by any Crossref member, 296,185 of them were both deposited by a Crossref member and discovered by the strategy. The dataset can be downloaded here.\nConclusions and what\u0026rsquo;s next Overall, based on the number of existing and newly discovered preprint–journal article relationships, it seems that employing automated matching strategies would approximately double the number of these relationships in the Crossref database. In the future, we would like to match new journal articles on an ongoing basis. We also plan to make all discovered relationships available through the REST API.\nIn the meantime, we will be publishing the discovered relationships in the form of datasets, and we invite anyone interested to further analyse this data. 
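If you would like to try the matching yourself, here is a minimal sketch of calling the experimental endpoint mentioned above with the requests library. The exact shape of the JSON response is not documented in this post, so treat the parsing as an assumption and inspect the raw payload before relying on specific field names.

```python
import requests

# Experimental matching endpoint, as given in this post
MARPLE = "https://0-marple-research-crossref-org.libus.csd.mu.edu/match"

def match_preprints(article_doi):
    """Ask the experimental matching API for preprints of a journal article."""
    resp = requests.get(
        MARPLE,
        params={
            "task": "preprint-matching",
            "strategy": "preprint-sbmv",
            "input": article_doi,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # matches and their scores; inspect before parsing further

# e.g. match_preprints("10.1109/access.2022.3213707")
```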
And if you find out something interesting about preprints and their relationships, do let us know!\nhttps://api.crossref.org/types/journal-article/works?filter=relation.type:has-preprint\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://api.crossref.org/types/posted-content/works?filter=relation.type:is-preprint-of\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n", "headings": ["TL;DR","Introduction","Matching strategy","Preprint–journal article relationship dataset","Conclusions and what\u0026rsquo;s next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/preprints/", "title": "Preprints", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/perspectives/", "title": "Perspectives", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/perspectives-madhura-amdekar-meeting-community-pursuing-research-integrity/", "title": "Perspectives: Madhura Amdekar on meeting the community and pursuing passion for research integrity", "subtitle":"", "rank": 1, "lastmod": "2023-12-05", "lastmod_ts": 1701734400, "section": "Blog", "tags": [], "description": "The second half of 2023 brought with itself a couple of big life changes for me: not only did I move to the Netherlands from India, I also started a new and exciting job at Crossref as the newest Community Engagement Manager. In this role, I am a part of the Community Engagement and Communications team, and my key responsibility is to engage with the global community of scholarly editors, publishers, and editorial organisations to develop sustained programs that help editors to leverage rich metadata.", "content": "\rThe second half of 2023 brought with itself a couple of big life changes for me: not only did I move to the Netherlands from India, I also started a new and exciting job at Crossref as the newest Community Engagement Manager. In this role, I am a part of the Community Engagement and Communications team, and my key responsibility is to engage with the global community of scholarly editors, publishers, and editorial organisations to develop sustained programs that help editors to leverage rich metadata.\nThis represents an exciting phase in my professional journey, as I now have the chance to learn and develop new skills, broaden my understanding of the publishing landscape, and at the same time be able to leverage the experience I gained so far. I originally trained as an ecologist, obtaining a PhD studying colour change in a tropical agamid lizard in India at the Indian Institute of Science (Bengaluru, India). Having immensely enjoyed the process of writing manuscripts based on the data that resulted from my PhD thesis, I was drawn to working in the scholarly publishing industry. I worked for 3.5 years as a Senior Associate at Wiley, overseeing an editor support service by devising strategic scale-up planning and process improvement initiatives.\nI then moved countries as well as jobs and joined Crossref. 
The world of scholarly communications is a rapidly changing ecosystem, that is ably supported by scholarly infrastructure - the sets of tools and services that support this industry. Being a part of Crossref, a global organisation that provides open scholarly infrastructure, allows me to work with and make an impact on the broad scholarly community that ranges from publishers of all shapes and sizes, funders, to academic institutions, and researchers.\nSo far, the integrity of the scholarly record (ISR) has been the focus of my work. Now more than ever, the community is cognizant of the need to uphold the integrity of the scholarly output. Metadata and relationships between research outputs can support this endeavour in a substantial manner because information such as who contributed to a research output, who funded it, who cites it, whether it was updated after publication, aids provenance and provides signals about whether the output is trustworthy.\nMost of Crossref’s tools and services play a key role here: be it reference linking to allow researchers to increase discoverability of their work, tracking post-publication updates to research outputs via Crossmark, or detecting text plagiarism via Similarity Check. We noticed that not all editors and editorial teams associate metadata as signals of integrity, and might be unaware of the benefits of rich metadata. Therefore, my priority is to utilise opportunities to engage with editors about how metadata can provide trust indicators about a research output. I aim to empower editors to collect and leverage rich metadata.\nWhile I am no stranger to the world of scholarly communications, engaging with the broader Crossref community has been a new experience for me. In my day to day work, I employ a range of different skills such as program design and management, content planning and outreach, networking, and meeting facilitation. I have also been participating in trainings to enhance my skill set – I recently completed a training course on Community Engagement Fundamentals, which has equipped me with a better understanding of the concepts and strategies that I will need as a community manager. Additionally, I also underwent the Group Facilitation Methods training course led by the Institute of Cultural Affairs (ICA) where I learnt a couple of effective methods for group facilitation and leading workshops.\nEquipped with these skills, I have moderated a few community events already – most prominently the community call about Crossref and Retraction Watch to discuss Crossref’s acquisition and opening up of the Retraction Watch database. It was a valuable experience to contribute to the planning of an online event and host a panel of distinguished guests.\nI was also fortunate to be able to meet our community members in-person: I supported the organisation of the Frankfurt roundtable event that was held as part of Crossref’s Integrity of the Scholarly Record (ISR) program, where we engaged with community members to get their perspectives on how to work together towards preserving the integrity of the scholarly record (keep watching this space for a forthcoming blog summarising the outcomes from this event!). Additionally, I attended the Frankfurt Book Fair – the experience of getting to meet our members and to hear from them first-hand about all things Crossref, was unparalleled! I used this opportunity to meet several of our publisher members and discuss their view points about engaging with editors on ISR. 
The idea was received positively: we heard specific suggestions of metadata that would be of interest to readers of scientific manuscripts, and our members also expressed interest in finding out more about how metadata can act as markers of trust for a research output. I plan to use the insights from these meetings for the development of the ISR editor engagement program.\nAs I reflect on the past three months, there are a few things that have stood out to me. In terms of work, no two days are the same. My work plan for the day can range from making presentations for outreach activities, creating content such as this blogpost, working on an engagement strategy, to planning events, attending online or offline community meetings, facilitating or moderating some of those events, and networking with community members. This variety in work keeps me motivated to give my best each day. I am also grateful that I have the ability to make an impact with my work in an area that I am passionate about. In my previous job, I had developed a good understanding of research integrity and publication ethics. As a community manager now, I’m looking to work with editorial teams on the integrity of the scholarly record. This role gives me an opportunity to further nurture this interest of mine.\nAt times, working from home remotely has been a challenge. However, I have enjoyed attending in-person events as they are not just a chance to meet our community members, but also a chance to meet my colleagues and connect with them.\nI feel privileged to be able to connect with research communities all over the world and make a meaningful contribution towards supporting the discoverability and impact of their work. I am particularly excited to work at the forefront of shaping the future of preserving the integrity of the scholarly record, in tandem with our community. If this is a topic that excites you as well, I am keen to hear from you. It has been a wonderful three months at Crossref so far and I look forward to future collaborations with our community to develop effective ways of supporting and empowering editors to make the most of metadata for their publications.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/datacite/", "title": "DataCite", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/datacite/", "title": "DataCite", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/joint-statement-on-research-data/", "title": "Joint Statement on Research Data", "subtitle":"", "rank": 1, "lastmod": "2023-11-28", "lastmod_ts": 1701129600, "section": "Blog", "tags": [], "description": "STM, DataCite, and Crossref are pleased to announce an updated joint statement on research data.\nIn 2012, DataCite and STM drafted an initial joint statement on the linkability and citability of research data. 
With nearly 10 million data citations tracked, thousands of repositories adopting data citation best practices, thousands of journals adopting data policies, data availability statements and establishing persistent links between articles and datasets, and the introduction of data policies by an increasing number of funders, there has been significant progress since.", "content": "\rSTM, DataCite, and Crossref are pleased to announce an updated joint statement on research data.\nIn 2012, DataCite and STM drafted an initial joint statement on the linkability and citability of research data. With nearly 10 million data citations tracked, thousands of repositories adopting data citation best practices, thousands of journals adopting data policies, data availability statements and establishing persistent links between articles and datasets, and the introduction of data policies by an increasing number of funders, there has been significant progress since. It now seems appropriate to focus on providing updated recommendations for the various stakeholders involved in research data sharing.\nThe premise of the original joint statement still stands: most stakeholders across the spectrum of researchers, funders, librarians and publishers agree about the benefits of making research data available and findable for reuse by others. This improves utility and rigor of the scholarly record. Still, research data sharing is not yet a self-evident step in the research lifecycle. We now have sufficient scholarly communication infrastructure in place to bring about widespread change and believe momentum is building for collective action.\nIt is in this context that DataCite, a global membership community working with over 2800 repositories around the world, and STM, whose membership consists of over 140 scientific, technical, and medical publishing organizations, are issuing this joint statement. Crossref, a nonprofit open infrastructure with over 18,000 institutional members from 150 countries, joins this call, recognising the need for an amplified focus on data citation. The aim of this statement is to accelerate adoption of best practices and policies, and encourage further development of critical policies in collaboration with a wide group of stakeholders.\nSignatories of this statement recommend the following as best practice in research data sharing:\nWhen publishing their results, researchers deposit related research data and outputs in a trustworthy data repository that assigns persistent identifiers (DOIs where available). Researchers link to research data using persistent identifiers. When using research data created by others, researchers provide attribution by citing the datasets in the reference section using persistent identifiers. Data repositories enable sharing of research outputs in a FAIR way, including support for metadata quality and completeness. Publishers set appropriate journal data policies, describing the way in which data is to be shared alongside the published article. Publishers set instructions for authors to include Data Citations with persistent identifiers in the references section of articles. Publishers include Data Citations and links to data in Data Availability Statements with persistent identifiers (DOIs where available) in the article metadata registered with Crossref. In addition to Data Citations, Data Availability Statements (human- and machine-readable) are included in published articles where appropriate. 
Repositories and publishers connect articles and datasets through persistent identifier connections in the metadata and reference lists. Funders and research organizations provide researchers with guidance on open science practices, track compliance with open science policies where possible, and promote and incentivize researchers to openly share, cite and link research data. Funders, policymaking institutions, publishers and research organizations collaborate towards aligning FAIR research data policies and guidelines. All stakeholders collaborate in the development of tools, processes, and incentives throughout the research cycle to enable sharing of high-quality research data, making all steps in the process clear, easy and efficient for researchers by providing support and guidance. Stakeholders responsible for research assessment take into account data sharing and data citation in their reward and recognition system structures. We, the following signatories shall adopt and promote the relevant best practices laid out above. We hope that our action inspires the community, including researchers, research funders, research institutions, data repositories and publishers, to join us in making it easy for researchers to share, link and cite research data.\nEndorse the statement here.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/linked-data/", "title": "Linked Data", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/stm/", "title": "STM", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-was-the-talk-of-crossref2023/", "title": "What was the talk of #Crossref2023?", "subtitle":"", "rank": 1, "lastmod": "2023-11-21", "lastmod_ts": 1700524800, "section": "Blog", "tags": [], "description": "Have you attended any of our annual meeting sessions this year? Ah, yes – there were many in this conference-style event. I, as many of my colleagues, attended them all because it is so great to connect with our global community, and hear your thoughts on the developments at Crossref, and the stories you share.\nLet me offer some highlights from the event and a reflection on some emergent themes of the day.", "content": "Have you attended any of our annual meeting sessions this year? Ah, yes – there were many in this conference-style event. I, as many of my colleagues, attended them all because it is so great to connect with our global community, and hear your thoughts on the developments at Crossref, and the stories you share.\nLet me offer some highlights from the event and a reflection on some emergent themes of the day. You can browse the recordings and slides archived on our Annual Meeting page.\nGinny Hendricks opened the meeting by reminding everyone about the research nexus vision, and the work that’s underway to bring us closer to it. 
Ginny went on to highlight progress in metadata and relationships being registered by our members, and mentioned members that have particularly rich metadata records – with the special joint recognition for learned societies of South Korea. Participation statistics can be reviewed in our Labs Member Metadata Metrics Tables.\nSince 2018 we’ve seen a 512% increase in the number of abstracts included in the metadata, with Wiley’s recent addition of millions of abstracts to their records largely contributing to this change. On the relationships side, in the same period, we’ve noted a staggering 3004% growth in preprint-to-article links, and we’re pleased to report a growing number of funding relationships being made available thanks to more and more funders registering Crossref DOIs for grants.\nFor those who couldn’t join us at such an early hour, Ed Pentz included some of these highlights in his own strategic update later in the day. However, he focused on our activity and plans towards fulfilling our four strategic goals:\nTo contribute to an environment where the community identifies and co-creates solutions for broad benefit To be a sustainable source of complete, open, and global scholarly metadata and relationships To be publicly accountable to the Principles of Open Scholarly Infrastructure (POSI) practices of sustainability, insurance, and governance To foster a strong team—because reliable infrastructure needs committed people who contribute to and realise the vision, and thrive doing it Speakers from across our global community shared their initiatives too. Most of these talks have been accompanied by posters or abstracts shared on our Community Forum and still available for preview and discussion:\nMaking data citations available at scale: The Global Open Data Citation Corpus by Iratxe Puebla; “Who Cares?” Defining Citation Style in Scholarly Journals by Vincas Grigas and Pavla Vizváry; DOI registration for scholarly blogs by Martin Fenner; Enhancing Research Connections through Metadata: A Case Study with AGU and CHORUS by Tara Packer, Kristina Vrouwenvelder, Shelley Stall; Index Crossref, Integrity, Professional And Institutional Development by Engjellushe Zenelaj; Brazilian retractions in the Retraction Watch Database - RWDB by Edilson Damasio; and Now that you’ve published, what do you do with Metadata? - by Joann Fogleson. In addition to these updates, we’ve heard from:\nIzabela Szyprowska (OP, European Commission), Nikolaos Mitrakis (RTD, European Commission), and Paola Mazzucchi (mEDRA), who talked about the process and rationale of implementing Crossref DOIs for grants at the European Commission; and Amanda French from ROR/Crossref about the new ‘ROR / Open Funder Registry overlap’ tool. We also assembled a diverse panel and invited the community to discuss “What we still need to build a robust Research Nexus?” The discussion ranged from how different parts of our community currently use existing metadata, to how we can come together to make improvements, especially in the area of standards and equitability, and touched on metadata priorities.
I’ll highlight some of the threads below, but it’s certainly worth engaging with the full recording of the discussion and offering your own perspective on the Community Forum by commenting below.\nHaving participated in the whole day of talks, I found that a few themes emerged as popular in the community: data citations, making it easier to register metadata, making better use of metadata, retractions, and equity of participation in the research nexus.\nData citations With the advances in the Crossref API relationships endpoint, Martyn Rittman demonstrated how we’re now providing more comprehensive support for data citations. You can follow his demonstration in the Colab notebook he used for the demo and shared for your perusal. He also mentioned that the developments in this feature of our API will soon replace the current service provided via the Events API. Feel free to connect with Martyn on the community forum and comment with questions and suggestions.\nAs noted above, DataCite’s Iratxe Puebla mentioned the Make Data Count initiative and the leaky pipeline of data citations we’ve got at the moment in the scholarly literature, obscuring the true picture of data reuse. This prevents the community from recognising and incentivising data creation and reuse appropriately. One way of addressing this is the Global Open Data Citation Corpus. Crossref and DataCite collaborate closely in connecting and making that data available.\nLinking datasets, as well as software, was reported as part of the AGU and CHORUS initiative in Enhancing Research Connections through Metadata.\nData sharing and citing is as much a culture problem as a technology problem. As Iratxe Puebla admitted, there are many norms and processes for capturing and sharing that information, and DataCite is interested to hear about different use cases. As highlighting data’s relationship with works is a growing interest for our community, hopefully more understanding and perhaps even commonality can be built soon.\nMaking it easier to register metadata As part of the Demonstrations session, we’ve seen two developments to support members with registering their metadata more easily.\nCrossref’s Lena Stoll shared plans for the new version of the Crossref Registration Form, the helper tool for manual registration of metadata, which translates the submission into XML for inclusion in the Crossref database. At the moment, the form only accepts grant registrations, but it will be bolstered before the end of the year to include journal articles, then other record types, in time.\nErik Hanson from PKP demonstrated the latest OJS version, commenting on specific changes made in the new version in response to the key pain points reported by users of the previous release.\nIn addition, we’ve heard of two independent projects by Martin Fenner and Esha Data to enable metadata registration and Crossref DOIs for scholarly blogs.\nMaking better use of metadata Supported by the beginner’s demo of our REST API by Luis Montilla, there were many voices about opportunities for making good use of Crossref’s open metadata.
Nikolaos Mitrakis of the European Commission talked about the implementation of Crossref IDs for grants as a step towards tracing and connecting the grants with not just academic but also societal outcomes of the awards, and the plans for using those in the evaluation and steering of their funding programmes.\nJoann Fogleson of the American Society of Civil Engineers gave a buzzy metaphor of publishers’ role in their work with metadata being comparable with that of a pollinator – collecting the metadata at one end, then registering, displaying and making it available to different services, in order to enable a richer scholarly environment for discovery.\nMany of the major themes have found their way to the discussion of what is still needed to build a robust network of connections between scholarly objects, institutions and individuals. One of the ways Ludo Waltman of CWTS, Leiden University, intends to use our open metadata is as part of the upcoming open-source version of the Leiden rankings, and he invited the community to contribute and help optimise this project to provide an alternative to closed and selective databases.\nPanellists also spoke of new opportunities in the light of data mining and machine learning. Ran Dang of Atlantis Press, speaking as a publisher, shared a concern about the standard of metadata across cultures and disciplines, and the need to digitise past publications – which can then help better leverage multi-lingual scholarship. Matt Buys of DataCite pointed to the Global Data Citation Corpus they are developing, which leverages a SciBERT model to pull out data citations, brought together with Crossref/DataCite citation metadata.\nOpening the data is essential to enabling its wider use, and here Ludo gave the example of the fantastic outcome for references metadata, which has been made open by default for the entire corpus of Crossref-registered works. He hopes that this can inspire us to make similar progress in other areas.\nSomewhat tangential to metadata use, yet an excellent example of the community making progress together, Ginny pointed out how ROR is becoming a new standard for solving the longstanding problem of standardising affiliation metadata.\nRetractions Perhaps not entirely surprisingly, given Crossref’s recent acquisition of the Retraction Watch database and the opening up of that data, retractions featured in a few different talks at the meeting. First, Lena Stoll and Martin Eve from Crossref shared how that data can be accessed – that is, as the CSV file from https://0-api-labs-crossref-org.libus.csd.mu.edu/data/retractionwatch?your-email@here (add your email as indicated) – and the Crossref Labs API also displays information about retractions in the /works/ route when metadata is available. There are plans for incorporating this information into our REST API in the future. A short sketch of loading the CSV from Python follows below.\nEd and Ginny have shown stats for increases in retraction metadata registered in Crossmark but commented on limited participation in Crossmark overall. Recording retraction information in this way is still important: alongside the Retraction Watch data, it allows for multiple assertions of that information and increases confidence in its accuracy.
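Here is a small sketch of the access route Lena and Martin described, assuming pandas is available; the column names of the CSV are not assumed here, so inspect the frame before building on it.

```python
import pandas as pd

# Labs endpoint as given above; append your contact email as the query string.
URL = "https://0-api-labs-crossref-org.libus.csd.mu.edu/data/retractionwatch"

def load_retraction_watch(email):
    # pandas reads the CSV straight from the URL; keep everything as strings
    return pd.read_csv(f"{URL}?{email}", dtype=str)

# df = load_retraction_watch("your-email@example.org")
# df.head()  # check the column names before building anything on them
```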
We’re preparing to consult with the community at large about the future direction of the Crossmark service, to make it easier to implement and more useful for the readers.\nFinally, Edilson Damasio from State University of Maringá-UEM, Brazil, and a long-time Crossref Ambassador, presented the analysis of Brazilian records in the Retraction Watch data, and he promises further analysis to come, comparing the situation across geographies.\nEquity of participation in the research nexus Amanda Bartell opened the research nexus discussion with a reminder of what that vision entails and pointing out commonality of goals in the community – “Like others, Crossref has a vision of a rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society. We call this interconnected network the Research Nexus, but others in the community have different names for it, such as knowledge graph or PID graph.”\nThe richness of this network depends upon the participation of all those who produce and publish scholarship, so naturally the topic of equality emerged in that discussion. In addition to Ran Dang’s concern for multilingualism and digitisation of past publications from all parts of the world, Mercury Shitindo of St Paul\u0026rsquo;s University, Kenya talked of the need for more education, training and accessible resources for her community, to be able to participate more effectively in this ecosystem. She can see that affiliations and citations are of priority there, as these enable transparency and facilitate collaborations. Matt Buys of DataCite echoed her point, talking about the importance of the role of contributors “It\u0026rsquo;s important not to lose sight of people and places – to recognise the importance of contributor roles in the PID-graph”.\nEarlier in the day, we mentioned the launch of our Global Equitable Membership, or GEM programme. Since January, 110 new organisations from eligible countries have joined Crossref fee-free. Ginny was quick to admit that the need for a fee-waiver programme like this stems from the regular fees schedule not being in tune with our global membership, and she mentioned the upcoming fees review.\nFinancial barriers are often what get attention, yet reducing barriers to participation with technology is equally important for building a robust research nexus. With the planned changes to our registration form, we’ll make it easier to register works for those who don’t regularly use XML.\nJohanssen Obanda took time to show the examples of community activity and events organised by our global network of Ambassadors, and to thank all our advocates and partners for their tireless work. They are also helping tackle barriers, supporting our members to actively participate in the research nexus with their metadata, and help enable the community to make good use of the network of relationships that data denotes.\nShowcasing our “One member one vote” truth, the Board election was the focal point of the annual meeting, as always. We closed the ballot and announced the results, with seven members selected to join the Board in 2024.\nThe event went very smoothly overall. Talks were delivered efficiently, the panellists shared diverse perspectives and we elected our new Board members. 
Huge thanks to Rosa Clark, our Communications and Events Manager, who orchestrated the event and has been a constant behind-the-scenes presence supervising the entire show. I’m grateful to all colleagues at Crossref, who helped make it an enjoyable experience and an informative event for our community. Finally – it wouldn’t be a real meeting without the active participation of the speakers and panellists, who shared their metadata stories, and even joined us for some relaxed unplugged chats.\n", "headings": ["Data citations","Making it easier to register metadata","Making better use of metadata","Retractions","Equity of participation in the research nexus"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/perspectives-luis-montilla-sci-fi-concepts-reality-scholarly-ecosystem/", "title": "Perspectives: Luis Montilla on making science fiction concepts a reality in the scholarly ecosystem", "subtitle":"", "rank": 1, "lastmod": "2023-11-20", "lastmod_ts": 1700438400, "section": "Blog", "tags": [], "description": "Hello, readers! My name is Luis, and I\u0026rsquo;ve recently started a new role as the Technical Community Manager at Crossref, where I aim to bridge the gap between some of our services and our community awareness to enhance the Research Nexus. I\u0026rsquo;m excited to share my thoughts with you.\nMy journey from research to science communications infrastructure has been a gradual transition. As a Masters student in Biological Sciences, I often felt curious about the behind-the-scenes after a paper is submitted and published.", "content": "\rHello, readers! My name is Luis, and I\u0026rsquo;ve recently started a new role as the Technical Community Manager at Crossref, where I aim to bridge the gap between some of our services and our community awareness to enhance the Research Nexus. I\u0026rsquo;m excited to share my thoughts with you.\nMy journey from research to science communications infrastructure has been a gradual transition. As a Masters student in Biological Sciences, I often felt curious about the behind-the-scenes after a paper is submitted and published. For example, the fate of data being stored in the drawer or copied and forgotten in the hard drive after the paper is online. I come from a university that shares its name with at least three completely different universities in Latin America, and that also is pretty similar to another one with multiple offices across the region, which made me wonder if there was a standard way of identifying our affiliations. And then we have the topic of our names in hispanoamerica. We use two family names, and more often than not, we have a middle name (and then I could tell you stories about multiple-word middle names), which inevitably leads to authors having many combinations of full names and hyphenations.\nThis curiosity led me to volunteer in the Journal of the Venezuelan Society of Ecology. This role has been a transformative experience because my goal was to learn more about the publishing aspect of science. Still, today I realize that this is a fraction of what the scholarly ecosystem represents. The experience allowed me to grasp the importance of having a community with a sense of belonging, the relevance of multilingualism, and the importance of having access to an open infrastructure that allows smaller communities to be participants in the global dynamics. 
Moreover, it seemed to me that a research paper is more than the capstone of a building that we place and then move on to the next project or the next experiment; instead, it is a node in the vast network of human knowledge, connected to other papers through references, but also to all the other elements that are produced as part of the research, namely datasets, protocols, code, presentations, posters, preprints, peer-review reports and more. In short, the research metadata extends the life of the research output and makes it visible to the rest of the community.\nThis brings us to my onboarding to the Crossref team. At Crossref, I became part of a team and a driving force whose idea of the Research Nexus 1 aligns perfectly with my aspirations. And to explain myself better, I\u0026rsquo;ll draw an analogy using one of my favorite authors. In Isaac Asimov\u0026rsquo;s Second Foundation, a character shows to another a wall covered to the last millimeter with equations and writings. He describes his contribution to \u0026ldquo;The Plan\u0026rdquo; as follows: \u0026ldquo;\u0026hellip;Every red mark you see on the wall is the contribution of a man among us who lived since Seldon\u0026rdquo;.2 This idea sounded fascinating to me and only possible in a sci-fi book; a massive integrated research ecosystem where scientists focused more on how their contributions fit in the big picture. Today I have come to think that metadata helps materialize this idea by interconnecting all knowledge, and more importantly, in stark contrast to Asimov\u0026rsquo;s plan developed and guarded by a secret society, Crossref\u0026rsquo;s research nexus is a \u0026ldquo;reusable open network,\u0026rdquo; \u0026ldquo;a scholarly record that the global community can build on forever.\u0026rdquo; In a world with undeniably unequal access to resources, providing open access and fostering community efforts to contribute to this growing collective effort is a fundamental condition to empower and visualize underrepresented voices.\nWe make available a series of tools to access and probe this data, including our REST API, but we know its potential is far from being realized. As Technical Community Manager at Crossref, my primary responsibility is to understand the needs of our community members who interact with our REST API. I aim to build and maintain relationships with new and existing metadata users to promote the effective usage of our API. I will also be working closely with organizations such as hosting platforms, manuscript submission systems, and general publisher services. In essence, I want to ensure that our community across the globe is aware of the vast possibilities that imply using and contributing to the Research Nexus.\nI am committed to fostering an engaged and collaborative technical community. As we move forward, I look forward to sharing insights, experiences, and knowledge with all of you. Stay tuned for more updates, and let\u0026rsquo;s explore the world of APIs, metadata, and scholarly communities together!\nCrossref (2021) The research nexus. Accessed on 20 October 2023.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nAsimov, I. (1953) Second Foundation. 
Gnome Press.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/fabienne-michaud/", "title": "Fabienne Michaud", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/similarity-check/", "title": "Similarity Check", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/similarity-check-update-a-new-similarity-report-and-ai-writing-detection-tool-soon-to-be-available-to-ithenticate-v2-users/", "title": "Similarity check update: A new similarity report and AI writing detection tool soon to be available to iThenticate v2 users", "subtitle":"", "rank": 1, "lastmod": "2023-11-01", "lastmod_ts": 1698796800, "section": "Blog", "tags": [], "description": "In May, we updated you on the latest changes and improvements to the new version of iThenticate and let you know that a new similarity report and AI writing detection tool were on the horizon.\nOn Wednesday 1 November 2023, Turnitin (who produce iThenticate) will be releasing a brand new similarity report and a free preview to their AI writing detection tool in iThenticate v2. The AI writing detection tool will be enabled by default and account administrators will be able to switch it off/on.", "content": "In May, we updated you on the latest changes and improvements to the new version of iThenticate and let you know that a new similarity report and AI writing detection tool were on the horizon.\nOn Wednesday 1 November 2023, Turnitin (who produce iThenticate) will be releasing a brand new similarity report and a free preview to their AI writing detection tool in iThenticate v2. The AI writing detection tool will be enabled by default and account administrators will be able to switch it off/on.\nTurnitin will be running a webinar on their new similarity report and AI writing detection tool on Tuesday 28 November (EDIT 23/11/16: Monday 11 December 2023). More information on the webinar and how to register will be communicated by Turnitin in the coming weeks.\nNew similarity report On Wednesday, all iThenticate v2 users will have access to the new version of the similarity report which will include:\na word count and the number of text blocks for each matched source the ability to include or exclude overlapping sources from the overall similarity score a clearer colour differentiation between the different sources improved accessibility features Enabling the new similarity report The new similarity report will be enabled as a default for all your journals. 
Account administrators wishing to switch off the new similarity report can do so by going to Settings and selecting from the General tab, under the New Similarity Report Experience heading, the Disable option.\nClassic view / new view As this will be a significant change to your current experience, Turnitin have provided access for a period of time to the ‘classic view’ and you will be able to toggle between the original interface and the new one by clicking on ‘Switch to the classic view’ or ‘Switch to the new view’ buttons at the top of your report.\nThe similarity score will continue to be available at the top right-hand corner of the similarity report.\nExclusions By clicking on the Filters button you’ll be able to check and/or adjust your report’s section and repository exclusions.\nPlease note that the exclusions previously set up by account administrators should be unchanged by this release.\nSources / Match Groups view The Sources view will be the default view and will list all sources. By using the on/off button next to ‘Show overlapping sources’, you’ll be able to include or exclude overlapping sources. This will be ‘off’ as a default.\nThe Match Groups view is completely new and may not suit everyone’s needs. It is divided into four categories ‘Not Cited or Quoted’, ‘Missing Quotations’, ‘Missing Citation’ and ‘Cited and Quoted’ and will highlight matches found in your text.\nPDF report You’ll also now find the PDF report in the top right-hand corner of the similarity report, by clicking on the ‘download’ icon.\nSubmission details ‘Submission Details’ is located now under the ‘i’ icon in the top right-hand corner of your report. This is where you will find the oid (or unique number) for your manuscript which Turnitin will ask you to provide when you are reporting a technical issue.\nTurnitin’s documentation for the new similarity report\nAI writing detection tool Many of you have been concerned about the use of AI writing in the research papers you’ve received since the launch of ChatGPT last November and have been in touch to enquire about the availability of an AI writing detection tool for Crossref members.\nYou will also have read that Turnitin have developed an AI writing detector tool and have made it available to their education sector customers since April. Turnitin have published an update in May, a helpful video and further information on the false positive rates in June based on the feedback they’ve received from the education community.\nI am pleased to announce that Turnitin’s AI writing detection tool will be available as a free preview to iThenticate v2 users, via the new version of the similarity report, from Wednesday 1 November until the end of December 2023.\nEnabling AI writing detection Our preference was to have the new AI writing detection tool turned ‘off’ as a default, however this hasn’t been possible. Account administrators can turn this feature off by going to Settings and selecting the Crossref Web tab and scrolling down to the AI Writing section at the very bottom of the page. The feature is applied to all submissions when it is enabled.\nPlease note that AI Writing detection is only available in the new similarity report.\nIntegrations There is currently no integration between manuscript tracking systems and the AI writing detection tool. However the AI score will be available via the similarity report. 
If the AI writing detection tool has been set as ‘off’ by the account administrator, there will be no score and the ‘AI Writing’ heading will not be visible on the similarity report:\nFile requirements Turnitin have made some important file requirements available for the tool to run a report:\nMust be written in English A minimum 300 words A maximum of 15,000 words The file size must be less than 100 MB Accepted file types are .docx, .pdf, .rtf and .txt If your file does not meet the above requirements, iThenticate v2 will display the following message:\nTurnitin’s AI writing detection tool has been developed to detect GPT 3, 3.5, 4 and other variants. More information on this is available on their FAQs page.\nTurnitin have provided the following guidance regarding the AI scores:\n\u0026ldquo;Blue with a percentage between 0 and 100: The submission has processed successfully. The displayed percentage indicates the amount of qualifying text within the submission that Turnitin’s AI writing detection model determines was generated by AI. As noted previously, this percentage is not necessarily the percentage of the entire submission. If text within the submission was not considered long-form prose text, it will not be included.\nOur testing has found that there is a higher incidence of false positives when the percentage is between 1 and 20. In order to reduce the likelihood of misinterpretation, the AI indicator will display an asterisk (*) for percentages between 1 and 20 to call attention to the fact that the score is less reliable.\nTo explore the results of the AI writing detection capabilities, select the indicator to open the AI writing report. The AI writing report opens in a new tab of the window used to launch the Similarity Report. If you have a pop-up blocker installed, ensure it allows Turnitin pop-ups.\u0026rdquo;\nPlease note that unlike the similarity report, the AI writing report will only provide a score and highlight the blocks of texts likely to have been written by an AI tool and will not list source matches.\nWe encourage you to test the writing detection tool as much as possible during the free preview period (1 November-31 December 2023).\nNext Paraphrase detection Turnitin are planning to release a beta version of their new paraphrase detection tool at the end of this year/Q1, 2024. It will be initially available as a free preview for a short period of time. (EDIT 23/11/16: There is currently no timeline available for Turnitin\u0026rsquo;s paraphrase detection tool which is having a knock-on effect on the availiblity of the AI writing and paraphrase detection bundle and associated fees previously mentioned in this post)\nAI and paraphrase detection bundle (EDIT 23/11/16: AI writing detection tool) Once the free preview period ends, Turnitin would like to offer Crossref members an AI and paraphrase detection bundle (EDIT 23/11/16: are planning to make their AI writing detection tool available) from 2024 - this means that if you choose to subscribe to this new service, you will be charged an additional fee each time you upload a manuscript.\nFixes Many of you have been waiting for fixes to the aggregation of URLs issues in the matched sources of the similarity report and to the doc-to-doc PDF report in iThenticate v2. 
Turnitin are planning to release fixes for these before the end of 2023.\n✏️ Do get in touch via support@crossref.org if you have any questions about iThenticate v1 or v2 or start a discussion by commenting on this post below or in our Community Forum.\n", "headings": ["New similarity report","Enabling the new similarity report","Classic view / new view","Exclusions","Sources / Match Groups view","PDF report","Submission details","AI writing detection tool","Enabling AI writing detection","Integrations","File requirements","Next","Paraphrase detection","AI and paraphrase detection bundle (EDIT 23/11/16: AI writing detection tool)","Fixes"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/special-programs/register-references/", "title": "Register references", "subtitle":"", "rank": 1, "lastmod": "2023-10-24", "lastmod_ts": 1698105600, "section": "Get involved", "tags": [], "description": "Make your references count: register today Improving the discoverability and impact of research across the globe is central to the future of open research. We understand that your time is limited. That’s why we have a clear process for registering your references with Crossref using our Simple Text Query, built with you in mind.\nGive your content the recognition it deserves We know how hard it is to find time. Once your content is registered through the web deposit form, use our Simple Text Query to add your references.", "content": "Make your references count: register today Improving the discoverability and impact of research across the globe is central to the future of open research. We understand that your time is limited. That’s why we have a clear process for registering your references with Crossref using our Simple Text Query, built with you in mind.\nGive your content the recognition it deserves We know how hard it is to find time. Once your content is registered through the web deposit form, use our Simple Text Query to add your references. Learn more about how to use our Simple Text Query using either of our handy guides:\nStep-by-step instructions Instructional video +- Video transcript\rTranscript for the video \u0026ldquo;How to use the Crossref Simple Text Query form to register your references with Crossref”\nHi there. This is a tutorial on how to use Crossref\u0026rsquo;s simple text query form. This is a form that allows you to discover the DOIs for your references and add them to the metadata for a content item that has already been registered with CrossRef and already has a DOI registered for it.\nWhat this does is create a relationship between the citing item and each of the various items that it cites. Be aware that this method will overwrite any references that have already been deposited for that content item. So we recommend that you deposit all of an item\u0026rsquo;s references at once when you use the simple text query.\nTo start, let\u0026rsquo;s head on over to Crossref\u0026rsquo;s website, crossref.org. Here in the search box, start typing \u0026ldquo;simple text query.\u0026rdquo; You\u0026rsquo;ll want the second entry here: documentation, simple text query. This page has a whole lot of useful background and information about the simple text query form. For our purposes, we just want to access it right now. So we\u0026rsquo;ll head down here to number one and click this link. Voilà, the simple text query form. It is indeed pretty simple.\nYou\u0026rsquo;ll want to paste your list of references here in the text field. 
A couple of things to note about this:\nIt doesn\u0026rsquo;t matter which citation style you\u0026rsquo;re using. What\u0026rsquo;s important is that you\u0026rsquo;re using a consistent citation style for all of your references. Each reference should be on its own line, ideally in a numbered list form, although you won\u0026rsquo;t break anything by not having the numbers. You want to avoid line breaks within individual references as this can result in poorer citation matching from the form. So watch out for that if you\u0026rsquo;re copying the references over from a .doc or a PDF file. When all your references are in here, click Submit, and the simple text query will match them in a couple of seconds to a minute or two, depending on how many references you have, and it can handle up to 1000 at a time.\nAnd there you have it. It looks like it\u0026rsquo;s found DOIs for seven of the eight references that we included here. Of course, not every cited item has a DOI, but the great feature of the simple text query is that even if something doesn\u0026rsquo;t have a DOI, the form will save it and periodically run checks to see if a DOI has been added for this item.\nEvery so often, it\u0026rsquo;s a good practice to spot check a few of the DOIs that the simple text query finds to make sure they correspond to the article. When you\u0026rsquo;re happy with the results, click Deposit down here at the bottom. In the Email Address field, you\u0026rsquo;ll enter your email address because you\u0026rsquo;ll be receiving a report after you finish the deposit.\nThe parent DOI is the DOI for the content item whose references you\u0026rsquo;re registering here—a reminder that you need to have registered a DOI for the content item first before using the simple text query. So if you haven\u0026rsquo;t yet done that, go back and register the DOI and then come back to the simple text query to deposit its references.\nThe username is either going to be your organization\u0026rsquo;s shared role credential, which will be a four, five, or six alphanumeric character string, or your individual user credential, which will be an email address followed by the role. And your password is your password. Click Deposit. Great. The simple text query has sent your references to CrossRef.\nYou\u0026rsquo;ll now receive two emails: one with the XML of the references you just deposited, and the second, which will confirm if the deposit was a success or a failure. That\u0026rsquo;s it. That\u0026rsquo;s the simple text query. We hope that you\u0026rsquo;ll find that it\u0026rsquo;s easier than ever to deposit references with CrossRef and that you\u0026rsquo;ll start doing it today if you haven\u0026rsquo;t already. Thanks so much.\nMake your research more visible, evaluated, and likely to be cited. By registering your references you improve the discoverability of your work, facilitate evaluation, and assist with citation counts and accuracy.\nReferences matter:\nUsed by thousands of organisations Guide researchers to your content Contribute to the evaluation of research Progress your metadata strategy Start registering references today.\nAlready submitted your article metadata? Now register your article references to improve discoverability of your article: https://0-apps-crossref-org.libus.csd.mu.edu/SimpleTextQuery/ Ready to submit your article metadata? Add your metadata using our Web Deposit form and then register your references with our Simple Text Query. Need more help? 
Explore our step by step instructions.\nWhy is registering references important? By registering your references, you provide context and help readers understand the methodology and sources in your content, supporting the scholars to build upon it. Make your content more accessible and discoverable to a wider audience and increase the chances of it being cited.\nHelp with research assessment. Your authors will benefit from accurate citations as it contributes to the impact of their research. Many citation-counting services use references as a way of tracking citations.\nThanks to the recent changes to our Citedby service, which is now open for all members to use, your references not only support your readers in their research, but now more than ever before – they can help guide others to your content as well.\nTop tips for submitting your references: Work with authors to make sure that your references are complete and accurate. This includes each source\u0026rsquo;s author, title, publisher, publication date, and –– if available –– the DOIs. Use a consistent citation style. This will make it easier for readers to find your references and understand how to cite them in their own work. Submit your references straight after you register your content DOIs so they are included in citation databases, and citations are counted accurately. Looking for support? Use our community forum or email support@crossref.org.\nHelpful resources: Our blogs about references Amendments to membership terms to open reference distribution and include UK jurisdiction Linking references is different from depositing references ", "headings": ["Make your references count: register today","Give your content the recognition it deserves","Step-by-step instructions","Instructional video","Why is registering references important?","Top tips for submitting your references:","Helpful resources:","Our blogs about references","Amendments to membership terms to open reference distribution and include UK jurisdiction","Linking references is different from depositing references"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/special-programs/", "title": "Special programs", "subtitle":"", "rank": 1, "lastmod": "2023-10-24", "lastmod_ts": 1698105600, "section": "Get involved", "tags": [], "description": "Discover our special programs At Crossref, our mission is not only about making research easier to find, cite, link, assess, and reuse. We are constantly widening our horizon to create special programs that focuses on different elements of the Research Nexus vision – a vision that aspires to create a rich, reusable open network of relationships connecting research elements for the benefit of society.\nThese programs focus on making connections between all kinds of research information, including journal articles, book chapters, grants, and preprints.", "content": "Discover our special programs At Crossref, our mission is not only about making research easier to find, cite, link, assess, and reuse. We are constantly widening our horizon to create special programs that focuses on different elements of the Research Nexus vision – a vision that aspires to create a rich, reusable open network of relationships connecting research elements for the benefit of society.\nThese programs focus on making connections between all kinds of research information, including journal articles, book chapters, grants, and preprints. 
We want to connect these different elements to make it easier for everyone involved in research to find and understand the information they need. We aim to involve everyone - like researchers, funding organizations, and governments - in building a better, more open network for sharing research.\nExplore our programs designed to uphold our Research Nexus vision.\nSpecial Programs Resourcing Crossref for Future Sustainability (RCFS) A multi-year program to make participation more accessible through our fees becoming more equitable, less complex, and rebalanced to ensure they match our mission and future.\nRegister references Registering your references with Crossref helps citation linking, creates a more interconnected scholarly network, and improves the discoverability and impact of your research.\nIntegrity of the scholarly record (ISR) Our efforts ensuring that research is accurately documented and trustworthy by improving the quality and context of metadata, without directly judging the content itself. It supports the research community by providing reliable signals to assess the integrity of scholarly work.\nMore to come\u0026hellip;watch this space.\n", "headings": ["Discover our special programs","Special Programs","Resourcing Crossref for Future Sustainability (RCFS)","Register references","Integrity of the scholarly record (ISR)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/audrey-kenni-nemaleu/", "title": "Audrey Kenni-Nemaleu", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/kora-korzec/", "title": "Kora Korzec", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/perspectives-audrey-kenni-nemaleu-on-scholarly-communications-in-cameroon/", "title": "Perspectives: Audrey Kenni-Nemaleu on scholarly communications in Cameroon", "subtitle":"", "rank": 1, "lastmod": "2023-10-05", "lastmod_ts": 1696464000, "section": "Blog", "tags": [], "description": "\rOur Perspectives blog series highlights different members of our diverse, global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nNotre série de blogs Perspectives met en lumière différents membres de la communauté internationale de Crossref. Nous en apprenons davantage sur leur vie et sur la manière dont ils ont appris à nous connaître et à travailler avec nous, et nous entendons parler du paysage de la recherche universitaire dans leur pays, des défis auxquels ils sont confrontés et de leurs projets pour l\u0026rsquo;avenir.\n", "content": "\rOur Perspectives blog series highlights different members of our diverse, global community at Crossref. 
We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nNotre série de blogs Perspectives met en lumière différents membres de la communauté internationale de Crossref. Nous en apprenons davantage sur leur vie et sur la manière dont ils ont appris à nous connaître et à travailler avec nous, et nous entendons parler du paysage de la recherche universitaire dans leur pays, des défis auxquels ils sont confrontés et de leurs projets pour l\u0026rsquo;avenir.\nToday, we meet Audrey Kenni-Nemaleu, Crossref Ambassador in Cameroon and Assistant Editor of the Pan-African Medical Journal (PAMJ). Audrey is excited about engaging Crossref\u0026rsquo;s community in French West Africa. Please take a moment to read and listen to Audrey\u0026rsquo;s perspective.\nAujourd\u0026rsquo;hui, nous rencontrons Audrey Kenni-Nemaleu, ambassadrice Crossref au Cameroun et rédactrice adjointe du Pan-African Medical Journal (PAMJ). Audrey est enthousiaste à l\u0026rsquo;idée d\u0026rsquo;impliquer la communauté Crossref en Afrique occidentale française. Veuillez prendre un moment pour lire et écouter le point de vue d\u0026rsquo;Audrey.\nEnglish\nFrançais\nTell us a bit about your organization, your objectives, and your role\nPouvez-vous nous parler de votre organisation, vos objectifs et votre rôle ?\nMy name is Audrey Kenni Nganmeni-Nemaleu, assistant editor for the Pan-African Medical Journal. I am specifically responsible for editing the articles in terms of form, ensuring that they meet the journal\u0026rsquo;s standards. Furthermore, I am the focal point of my journal for Crossref, that is to say I am responsible for managing all the problems that all publishers may encounter with DOIs and the various Crossref services to which our journal has subscribed. My role is also to manage all the conflicts that we may encounter with the DOIs submitted to Crossref. I train our journal staff in using Crossref services. I am also the focal point of my journal for COPE (Committee of Publications Ethics) which is an organization that helps to regulate ethical publishing practices. It is in this capacity that I participate COPE\u0026rsquo;s webinars on behalf of our journal.\nJe m’appelle Audrey Kenni Nganmeni Nemaleu, éditrice assistante pour le Pan African Medical Journal. Je m’occupe précisément de traiter les articles sur le plan de la forme en m’assurant qu’ils respectent les normes du journal. Par ailleurs je suis point focal de mon journal pour Crossref c’est-à-dire je suis chargée de gérer tous les problèmes que l’ensemble des éditeurs peuvent rencontrer avec les DOIs et les différents services de Crossref auxquels notre journal a souscrit. Mon rôle également c’est de gérer tous les conflits qu’on peut rencontrer avec les DOIs soumis à Crossref. Je forme également le personnel de notre journal à l’utilisation des services de Crossref. Je suis aussi point focal de mon journal pour COPE (Committee of Publications ethics) qui est un organisme qui aide dans la régulation des pratiques éthiques en matière de publication. 
C’est dans ce cadre que je participe à tous les webinaires de cette organisation afin qu’il y ait toujours au moins une personne qui participe à ces webinaires pour le compte de notre journal.\nWhat is one thing that others should know about your country and its research activity?\nQue doivent savoir les autres sur les activités de recherche dans votre pays ?\nIn my country, Cameroon, the research activity is still young. There are few scientific journals and we are actually the most influential journal in our country and subregion. There are also few schools or institutions that focus especially on research. For the time being, research activities in my country mainly revolve around congresses and conferences where researchers can exhibit their works. There is very little support for scientific research in my country.\nDans mon pays, le Cameroun, la recherche scientifique est encore jeune. Il existe peu de revues scientifiques et nous sommes en fait le journal le plus influent de notre pays et de notre sous-région. Il existe également peu d\u0026rsquo;écoles ou d\u0026rsquo;nstitutions qui spécialisées sur la recherche. Pour l\u0026rsquo;instant, les activités de recherche dans mon pays s\u0026rsquo;articulent principalement autour de congrès et de conférences où les chercheurs peuvent exposer leurs travaux. Il y a très peu de soutien à la recherche scientifique dans mon pays.\nAre there trends in scholarly communications that are unique to your part of the world?\nExiste-t-il des tendances particulières en matière de recherche scientifique dans votre région ?\nIn this part of the world, we do our best to follow the code of ethics of the various organizations in which we are a member: Committee of publication ethics (COPE), World Association of Medical Editors (WAME), Open Access Scholarly Publishing Association (OASPA). What we have seen emerging recently is the organization, by professional scientific societies, of small conferences, workshops and meetings to exchange information. These small events are less costly to organize, hence their gain in popularity. We support these activities through sponsorship, and use them as opportunities to strengthen young researchers\u0026rsquo; capacities in areas such as scientific writing, publication ethics. We also use those opportunities to introduce to young researchers concepts such as Open Access, Open Science, DOIs and other modern publishing services.\nDans notre pays, nous nous efforçons de suivre le code de déontologie des différentes organisations dont nous sommes membre : Committee of publication ethics (COPE), World Association of Medical Editors (WAME), Open Access Scholarly Publishing Association (OASPA). Ce que l\u0026rsquo;on a vu émerger récemment, c\u0026rsquo;est l\u0026rsquo;organisation, par des sociétés scientifiques professionnelles, de petits colloques, ateliers et réunions d\u0026rsquo;échange d\u0026rsquo;informations. Ces petits événements sont moins coûteux à organiser, d\u0026rsquo;où leur gain en popularité. Nous soutenons ces activités par le sponsoring et les utilisons comme des opportunités pour renforcer les capacités des jeunes chercheurs dans des domaines tels que l\u0026rsquo;écriture scientifique, l\u0026rsquo;éthique de la publication. 
Nous utilisons également ces opportunités pour leur présenter des concepts tels que le libre accès, la science ouverte, les DOIs et d\u0026rsquo;autres services d\u0026rsquo;édition modernes.\nWhat about any political policies, challenges, or mandates that you have to consider in your work?\nQuels sont les politiques, défis ou mandats auxquels vous faites face dans votre travail ?\nOperating a journal in our context is challenging. The critical challenges are as basic as constant availability of electricity or stable and fast internet connectivity. How to maintain a stable stream revenue to support the journal is also a critical challenge. Most of our authors are young, self-funded and with limited resources. Most cannot afford the amount we charge for article publishing fees, which in comparison, is very limited. So we have to be extremely creative to operate.\nFaire fonctionner une revue dans notre contexte est difficile. Les défis critiques sont aussi fondamentaux que la disponibilité constante de l\u0026rsquo;électricité ou une connexion Internet stable et rapide. Comment maintenir un flux stable des revenus pour soutenir la revue constitue également un défi crucial. La plupart de nos auteurs sont jeunes, autofinancés, avec des ressources limitées et par conséquent n’arrivent pas à payer les frais de publication d\u0026rsquo;articles pourtant très bas. Nous devons donc être extrêmement créatifs pour gérer nos charges.\nHow would you describe the value of being part of the Crossref community; what impact has your participation had on your goals?\nComment décririez-vous la valeur de faire partie de la communauté Crossref ? Quel est l’impact de votre participation sur vos objectifs ?\nAs a Crossref ambassador, I talk about Crossref around me, among my colleagues whether they are in Kenya or Cameroon. I shared the links to participate in Crossref webinars with my colleagues. I invited them to become ambassadors by sharing with them the links to join the community. I participated in several ambassador training webinars on different themes including: how to submit DOI to Crossref, ORCID. I participated in a Crossref event in Nairobi, Kenya. It was a memorable moment where I was able to meet other ambassadors. We were able to have a small meeting on the difficulties we encountered in growing the Crossref community in Africa. We produced a document to this effect which we submitted to Crossref in 2022. For the moment, I have not yet been able to organize an event as an ambassador, but I would like to with the help of Crossref. But being an ambassador is not the easiest thing because sometimes in our context people do not understand the use of Crossref\u0026rsquo;s services because we are in an environment where the DOI is not yet very well known, and where even publishers know nothing about this. A question I am often asked is whether this work is paid and are discouraged when they learn that it is voluntary work.\nComme ambassadrice de Crossref, je parle autour de moi de Crossref, parmi mes collègues qu’ils soient au Kenya ou au Cameroun. J’ai partagé les liens pour participer à des webinaires de Crossref à mes collègues. Je les ai invités à devenir des ambassadeurs en partageant avec eux les liens pour rejoindre la communauté. J’ai participé à plusieurs webinaires de formation des ambassadeurs sur différents thèmes notamment ORCID. J’ai également participe à un évènement de Crossref à Nairobi au Kenya. C’était un moment mémorable ou j’ai pu rencontrer d’autres ambassadeurs. 
Nous avons pu faire une petite réunion sur les difficultés que nous rencontrons pour faire grandir la communauté Crossref en Afrique. Nous avons d’ailleurs produit un document à cet effet que nous avons soumis à Crossref en 2022. Pour l’instant, je n’ai pas encore pu organiser d’évènement dans le cadre d’ambassadeur, mais j’aimerais avec l’aide de Crossref voir comment le faire. Etre ambassadrice n’est pas la chose la plus facile car parfois dans notre contexte les gens ne comprennent pas le bien-fondé des services de Crossref car on est dans un environnement ou le DOI n’est pas encore très connu, et où beaucoup de journaux et même d’editeurs ne savent rien de cela. Une question qu’on me pose souvent est celle savoir si ce travail est remunere et se découragent quand ils apprennent que c’est du bénévolat.\nFor you, what would be the most important thing Crossref could change (do more of/do better in)?\nPour vous, quelle serait la chose la plus importante que Crossref pourrait changer (faire plus/faire mieux) ?\nCrossref could invest in more capacity building, events, and communications in this part of the world. Why not localize Crossref in the francophone part of Africa? Crossref could offer continuing educational activities to professionals in order to improve their skills or acquire new knowledge in metadata and correlative disciplines. Crossref could also sponsor/support journal publishing and scholarship in Africa.\nCrossref pourrait investir dans davantage de renforcement des capacités, d\u0026rsquo;événements et de communications dans cette partie du monde. Pourquoi ne pas localiser Crossref dans la partie francophone de l’Afrique ? Crossref pourrait proposer des activités de formation continue aux professionnels afin d\u0026rsquo;améliorer leurs compétences ou d\u0026rsquo;acquérir de nouvelles connaissances dans les métadonnées et les disciplines corrélatives. Crossref pourrait également sponsoriser/soutenir la publication de revues et les bourses d’études en Afrique.\nWhich other organizations do you collaborate with or are pivotal to your work in open science?\nAvec quelles autres organisations collaborez-vous ou alors quelles sont les organismes pivot au cœur de votre travail en science ouverte ?\nI collaborate with various institutions such as COPE (Committee on Publication Ethics), AJOL African Journals Online, and OASPA (Open Access Scholarly Publishing Association). I attend webinars of these organizations on behalf of my journal.\nJe collabore avec diverses institutions telles que COPE (Committee on Publication Ethics), AJOL African Journals Online, et OASPA (Open Access Scholarly Publishing Association). 
J\u0026rsquo;assiste à des webinaires de ces organisations au nom de ma revue.\nWhat are your plans for the future?\nQuels sont vos plans pour l\u0026rsquo;avenir ?\nMy plan for the future is to continue working in science communication with various other organizations, and more within my community.\nMon plan pour l\u0026rsquo;avenir est de continuer à travailler dans le domaine de la communication scientifique avec différentes autres organisations, et davantage au sein de ma communauté.\nThank you, Audrey!\nMerci, Audrey !\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/crossref-labs/", "title": "Crossref Labs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/feedback-on-automatic-digital-preservation-and-self-healing-dois/", "title": "Feedback on automatic digital preservation and self-healing DOIs", "subtitle":"", "rank": 1, "lastmod": "2023-09-28", "lastmod_ts": 1695859200, "section": "Blog", "tags": [], "description": "Thank you to everyone who responded with feedback on the Op Cit proposal. This post clarifies, defends, and amends the original proposal in light of the responses that have been sent. We have endeavoured to respond to every point that was raised, either here or in the document comments themselves.\nWe strongly prefer for this to be developed in collaboration with CLOCKSS, LOCKSS, and/or Portico, i.e. through established preservation services that already have existing arrangements in place, are properly funded, and understand the problem space.", "content": "Thank you to everyone who responded with feedback on the Op Cit proposal. This post clarifies, defends, and amends the original proposal in light of the responses that have been sent. We have endeavoured to respond to every point that was raised, either here or in the document comments themselves.\nWe strongly prefer for this to be developed in collaboration with CLOCKSS, LOCKSS, and/or Portico, i.e. through established preservation services that already have existing arrangements in place, are properly funded, and understand the problem space. There is a low level of trust in the Internet Archive, also given a number of ongoing court cases and erratic behavior in the past. People are questioning the sustainability and stability of IA, and given it is not funded by publishers or other major STM stakeholders, there is low confidence in IA setting their priorities in a way that is aligned with that of the publishing industry.\nWe acknowledge that some of our members have a low level of trust in The Internet Archive, but many of our (primarily open access) members work very closely with the IA and our research has shown that, without the IA, the majority of our smaller open access members would have almost no preservation at all. We have already had conversations with CLOCKSS and Portico about involvement in the pilot and thinking through what a scale-to-production would look like. That said, for a proof-of-concept, the Internet Archive presents a very easy way to get off the ground, with a stable system that has been running for almost 30 years.\nThis seems to be a service for OA content only, but people wonder for how long.
Someone already spotted an internal CrossRef comment on the working doc that suggested “why not just make it default for everything \u0026amp; everyone”, and that raises concern.\nThe primary audience for this service is small OA publishers that are, at present, poorly preserved. These publishers present a problem for the whole scholarly environment because linking to their works can prove non-persistent if preservation is not well handled. Enhancing preservation for this sector therefore benefits the entire publishing industry by creating a persistent linking environment. We have no plans to make this the “default for everything and everyone” because the licensing challenges alone are massive, but also because it isn’t necessary. Large publishers like Elsevier are doing a good job of digitally preserving their content. We want this service to target the areas that are currently weaker.\nCrossref will always respect the content rights of our members. We never force our members to release content through Crossref that they have not asked us to release.\nThe purpose of the Op Cit project is to make it easier for our members to fulfil commitments they already made when they joined Crossref.\nCrossref is fundamentally an infrastructure for preserving citations and links in the scholarly record. We cannot do that if the content being cited or linked to disappears.\nWhen signing the Crossref membership agreement, members agree to employ their best efforts to preserve their content with archiving services so that Crossref can continue to link citations to it even in extremis – for example, if they have ceased operations.\nSome of our members already do this well. They have already made arrangements with the major archiving providers. They do not need the Op Cit service to help them with archiving. However, the Op Cit service will still help them ensure that the DOIs that they cite continue to work. So it will still benefit them even if they don\u0026rsquo;t use it directly.\nHowever, our research shows that many of our members are not fulfilling the commitments they made when joining Crossref. Over the next few years, we will be trying to fix this, primarily through outreach – encouraging members to set up archiving arrangements with the archives of their choice and to record them with Crossref.\nBut we know some members will find this too technically challenging and/or costly. [And frankly, given what we\u0026rsquo;ve learned of the archiving landscape, we can see their point.] The proposed Op Cit service is for these members. The vast majority of these members are Open Access publishers, so the \u0026ldquo;rights\u0026rdquo; questions are far more straightforward, making the implementation of such a service much more tractable.\nSomeone asked what this means for the publisher-specific DOI prefix for this content. Will this be lost?\nNo.\nThere is concern about the interstitial page that Crossref would build that gives the user access options. The value of Crossref to publishers is adding services that are invisible and beneficial to users, not adding a visible step that requires user action.\nThere is nothing in Crossref’s terms that says that we have to be invisible. The basic truth is that detecting content drift is really hard and several efforts to do so before have failed. A minimal sketch below illustrates part of the difficulty.
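To make that difficulty concrete, here is a minimal sketch, in Python, of the kind of naive drift check that tends to fail in practice. It is purely illustrative and not part of the Op Cit proposal or any Crossref system; the URL, stored fingerprint, and helper names are hypothetical. The sketch fingerprints the visible text of a landing page and compares it with a fingerprint recorded at deposit time, so any change to the page template, adverts, or navigation flips the hash even though the scholarly content is untouched.

```python
# Hypothetical illustration only -- not part of the Op Cit proposal or Crossref's systems.
# A naive content-drift check: hash the visible text of a landing page and compare it
# with a hash recorded when the DOI was deposited. Because any template, advert, or
# navigation change also changes the hash, this flags "drift" almost everywhere.

import hashlib
import re
import urllib.request


def text_fingerprint(html: str) -> str:
    """Crudely strip markup, normalise whitespace, and hash the remaining text."""
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def has_drifted(landing_page_url: str, stored_fingerprint: str) -> bool:
    """Fetch the current landing page and compare it with the stored fingerprint."""
    with urllib.request.urlopen(landing_page_url, timeout=30) as response:
        html = response.read().decode("utf-8", errors="replace")
    return text_fingerprint(html) != stored_fingerprint


# Usage (hypothetical values):
# drifted = has_drifted("https://example.org/article/123", "ab12cd34...")
```

Even this trivial check requires fetching and parsing every landing page, and anything smarter (extracting only the bibliographic metadata, or fuzzy-matching the article text) quickly runs into the scraping and heuristics problems raised in the comments below.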
Without a reliable way of knowing whether we should display the interstitial page, which may become possible in future, we have to display something for now, or the preservation function will not work.\nCrossref has also supported user-facing interstitial services for over a decade, including:\nMultiple Resolution, Coaccess, CrossMark, Crossref Metadata Search, and the REST API. So we have a long track record of non-B2B service provision.\nThere is confusion about why Crossref seems to want to build the capacity to “lock” records in the absence of flexibility. People feel no need for Crossref to get involved here.\nThis is a misunderstanding of the terminology. The Internet Archive allows the domain owner to request content to be removed. This would mean that, in future, if a new domain owner wanted, they could remove previously preserved material from the archive, thereby breaking the preservation function. When we say we want to “lock” a record, we mean that a future domain owner cannot remove content from the preservation archive. This also prevents domain hijackers from compromising the digital preservation.\nThere is concern about the possibility of hacking this system to give uncontrolled access to all full-text content by attacking publishing systems and making them unavailable. This is an unhappy path scenario but something on people’s minds.\nThe system only works on content that is provided with an explicitly stated open license (see response above).\nI think this project would be improved by better addressing the people doing the preservation maintenance work that this requires. Digital preservation is primarily a labor problem, as the technical challenges are usually easier than the challenge of consistently paying people to keep everything maintained over time. Through that lens, this is primarily a technical solution to offload labor resources from small repositories to (for now) the Internet Archive, where you can get benefits from the economies of scale. There are definitely cases where that could be useful! But I think making this more explicit will further a shared understanding of advantages and disadvantages and help you all see future roadblocks and opportunities for this approach.\nThis consultation phase was designed precisely to ensure that those working in the space could have their say. While this is a technical project, we recognize that any solution must value and understand labor. That means that any scaling to production must and will also include a funding solution to address the social labor challenge.\nIs there any sense in polling either the IA Wayback Machine or the LANL Memento Aggregator first to determine if snapshot(s) already exist?\nWe could do this, but it would add an additional hop/lookup on deposit. Plus, we want to store the specific version deposited at the specific time it is done, including re-deposits.\nI would encourage looking at a distributed file system like IPFS (https://en.wikipedia.org/wiki/InterPlanetary_File_System). This would allow easy duplication, switching and peering of preservation providers. Correctly leveraged with IPNS, resolution, version tracking and version immutability also become benefits.
Later, after beta, the IPNS metadata could be included as DOI metadata.\nWe had considered IPFS for other projects, but really, for this, we want to go with recognised archives, not end up running our own infrastructure for preservation.\nIt might be useful to look into the 10320/loc option for the Handle server: https://0-www-handle-net.libus.csd.mu.edu/overviews/handle_type_10320_loc.html. I can imagine a use case where a machine agent might want to access an archive directly without needing to go to an interstitial page.\nIt is good to see reference to the HANDLE system and alternative ways that we might use it. We will consult internally on the technical viability of this.\nIn general, though, we prefer to use web-native mechanisms when they are available. We already support direct machine access via HTTP redirects and by exposing resource URLs in the metadata that can be retrieved via content negotiation. In this case, we would be looking at supporting the 300 (multiple choice) semantics. A brief sketch of this kind of machine access follows below.
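As a concrete illustration of the existing machine-access route mentioned above, here is a minimal sketch, in Python, of retrieving bibliographic metadata for a DOI via HTTP content negotiation. The DOI shown is a placeholder, and the media type is one of several that DOI resolvers already support; this illustrates the current mechanism, not the proposed Op Cit behaviour.

```python
# Minimal sketch of DOI content negotiation: ask the DOI resolver for machine-readable
# metadata instead of the landing page by sending an Accept header.

import json
import urllib.request

doi = "10.5555/12345678"  # placeholder DOI -- substitute a real one
request = urllib.request.Request(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},  # request CSL JSON
)

# The resolver redirects the request to the registration agency's metadata service,
# which returns the bibliographic record rather than the article landing page.
with urllib.request.urlopen(request, timeout=30) as response:
    metadata = json.load(response)

print(metadata.get("title"))
```

In the Op Cit context, this is where 300 (multiple choice) semantics could come in: rather than a single redirect, a resolver could advertise both the publisher's copy and an archived copy and let the client choose.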
I\u0026rsquo;m curious to see how this will work for DOI versioning mechanisms like in Zenodo, where you have one DOI to reference all versions as well as version-specific DOIs. If your record contains metadata + many files and a new version just versions one of the several files, my assumption is that within the proposed system an entire new set (so all files) is archived. In theory this could also be a logical package, where simply the delta is stored, but I guess in a distributed preservation framework like the one proposed here, this would be hard to achieve.\nThis is a good point and it could lead to many more frustrating hops before the user reaches the content. We will conduct further research into this scenario, but we also note that Zenodo\u0026rsquo;s DOIs do not come from Crossref, but from DataCite.\nThere\u0026rsquo;s a decent body of research at this point on automated content drift detection. This recent paper: https://ceur-ws.org/Vol-3246/10_Paper3.pdf likely has links to other relevant articles.\nWe have no illusions about the difficulty of detecting semantic drift but this is helpful and interesting. We will read this material and related articles to appraise the current state of content drift detection.\nOut of curiosity, will we be using one type of archive (i.e., IA or CLOCKSS or LOCKSS or whatever) or will it possibly be a combination of a few archives? Reading the comments, it looks like some of them charge a fee, so I see why we\u0026rsquo;d use open source solutions first. Also, eventually could it be something that the member chooses? i.e. which archive they might want to use. Again, the latter question isn\u0026rsquo;t something for the prototype, but I\u0026rsquo;m curious about this use case. Also, I wonder about the implementation details if it is more than one archive. The question is totally moot of course, if we\u0026rsquo;re sticking with one archive for now.\nThe design will allow for deposit in multiple archives – and we will have to design a sustainability model that will cover those archives that need funding. As above, this is an important part of the move to production.\nIt will be good for future interoperability to make sure at least one of the hashes is a SoftWare Hash IDentifier (see swhid.org). The ID is not really software specific and will interoperate with the Software Heritage Archive and git repositories.\nWe will certainly ensure best practices for checksums.\nComments on the Interstitial Page\nI\u0026rsquo;d keep the interstitial page without planning its eradication. (See why in the last paragraph.) I\u0026rsquo;d even advocate for it to be a beautiful and useful reminder to users that \u0026ldquo;This content is preserved\u0026rdquo;. I\u0026rsquo;d go further and recommend that publishers deposit alternate URLs of other preservation agents like PMC etc., that would also be displayed. This page could even be merged with the multi-resolution system.\nThe why: I\u0026rsquo;m concerned about hackers and predatory publishers exploiting the spider heuristics by hijacking small journals and keeping just enough metadata in them to fool the resolver, and then adding links to whatever products, scams and whatnots\u0026hellip;\nTechnical. Scraping landing pages is hard. We\u0026rsquo;ve had a lot of projects to do this over the years. You can mitigate the risk by tiering / heuristics. Maybe even a feedback loop to publishers to encourage them to put the right metadata on the landing page.\nThis is the only part of this proposal that I don\u0026rsquo;t like. People are used to DOIs resolving directly to content, and I don\u0026rsquo;t think that should be changed unless absolutely necessary. I would prefer that the DOI resolves to the publisher\u0026rsquo;s copy if it exists, and the IA copy otherwise.\nWe will continue the discussion about the interstitial page. The basic technical fact, as above, is that detecting content drift is hard and so we may need, at least, to start with the page. However, some commentators presented reasons for keeping it.\nWe have also already supported interstitial pages for multiple resolution and co-access for over a decade.\nIt is the member\u0026rsquo;s choice whether they wish to deposit alternative URLs, and we already have a mechanism for this.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/preservation/", "title": "Preservation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2023-board-election-slate/", "title": "2023 board election slate", "subtitle":"", "rank": 1, "lastmod": "2023-09-27", "lastmod_ts": 1695772800, "section": "Blog", "tags": [], "description": "I’m pleased to share the 2023 board election slate. Crossref’s Nominating Committee received 87 submissions from members worldwide to fill seven open board seats.\nWe maintain a balance of eight large member seats and eight small member seats. A member’s size is determined based on the membership fee tier they pay. We look at how our total revenue is generated across the membership tiers and split it down the middle. Like last year, about half of our revenue came from members in the tiers $0 - $1,650, and the other half came from members in tiers $3,900 - $50,000.", "content": "I’m pleased to share the 2023 board election slate. Crossref’s Nominating Committee received 87 submissions from members worldwide to fill seven open board seats.\nWe maintain a balance of eight large member seats and eight small member seats.
A member’s size is determined based on the membership fee tier they pay. We look at how our total revenue is generated across the membership tiers and split it down the middle. Like last year, about half of our revenue came from members in the tiers $0 - $1,650, and the other half came from members in tiers $3,900 - $50,000. We have two large member seats and five small member seats open for election in 2023.\nThe Nominating Committee presents the following slate.\nThe 2023 slate Tier 1 candidates (electing five seats): Beilstein-Institut, Wendy Patterson Korean Council of Science Editors, Kihong Kim Lujosh Ventures Limited, Olu Joshua NISC Ltd, Mike Schramm OpenEdition, Marin Dacos Universidad Autónoma de Chile, Dr. Ivan Suazo Vilnius University, Vincas Grigas Tier 2 candidates (electing two seats): Association for Computing Machinery (ACM), Scott Delman Oxford University Press, James Phillpotts Public Library of Science (PLOS), Dan Shanahan University of Chicago Press, Ashley Towne Here are the candidates\u0026rsquo; organizational and personal statements You can be part of this important process by voting in the election If your organization is a voting member in good standing of Crossref as of September 10th, 2023, you are eligible to vote when voting opens on September 27th, 2023.\nHow can you vote? Your organization’s designated voting contact will receive an email from eBallot the week of September 25th with the Formal Notice of Meeting and Proxy Form with concise instructions on how to vote. The email will include a username and password with a link to our voting platform.\nThe election results will be announced at the LIVE23 online meeting on October 31st, 2023. Save the date! Incoming members will take their seats at the March 2024 board meeting.\n", "headings": ["The 2023 slate","Tier 1 candidates (electing five seats):","Tier 2 candidates (electing two seats):","Here are the candidates\u0026rsquo; organizational and personal statements","You can be part of this important process by voting in the election","How can you vote?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/geoffrey-bilder/", "title": "Geoffrey Bilder", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/news-release/", "title": "News Release", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/news-crossref-and-retraction-watch/", "title": "News: Crossref and Retraction Watch", "subtitle":"", "rank": 1, "lastmod": "2023-09-12", "lastmod_ts": 1694476800, "section": "Blog", "tags": [], "description": "https://0-doi-org.libus.csd.mu.edu/10.13003/c23rw1d9\nCrossref acquires Retraction Watch data and opens it for the scientific community Agreement to combine and publicly distribute data about tens of thousands of retracted research papers, and grow the service together\n12th September 2023 —\u0026ndash; The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource.", "content": " https://0-doi-org.libus.csd.mu.edu/10.13003/c23rw1d9\nCrossref acquires Retraction Watch data and opens it for
the scientific community Agreement to combine and publicly distribute data about tens of thousands of retracted research papers, and grow the service together\n12th September 2023 —\u0026ndash; The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource. An agreement between the two organisations will allow Retraction Watch to keep the data populated on an ongoing basis and always open, alongside publishers registering their retraction notices directly with Crossref.\nBoth organisations have a shared mission to make it easier to assess the trustworthiness of scholarly outputs. Retractions are an important part of science and scholarship regulating themselves and are a sign that academic publishing is doing its job. But there are more journals and papers than ever, so identifying and tracking retracted papers has become much harder for publishers and readers. That, in turn, makes it difficult for readers and authors to know whether they are reading or citing work that has been retracted. Combining efforts to create the largest single open-source database of retractions reduces duplication, making it more efficient, transparent, and accessible for all.\nProduct Director Rachael Lammey says, “Crossref is focused on documenting and clarifying the scholarly record in an open and scalable form. For a decade, our members have been recording corrections and retractions through our infrastructure, and incorporating the Crossmark button to alert readers. Collaborating with Retraction Watch augments publisher efforts by filling in critical gaps in our coverage, helps the downstream services that rely on high-quality, open data about retractions, and ultimately directly benefits the research community.”\nThe Center for Scientific Integrity and the Retraction Watch blog will remain separate from Crossref and will continue their journalistic work investigating retractions and related issues; the agreement with Crossref is confined to the database only and Crossref itself remains a neutral facilitator in efforts to assess the quality of scientific works. Both organisations consider publishers to be the primary stewards of the scholarly record and they are encouraged to continue to add retractions to their Crossref metadata as a priority.\n“Retraction Watch has always worked to make our highly comprehensive and accurate retraction data available to as many people as possible. We are deeply grateful to the foundations, individuals, and members of the publishing services industry who have supported our efforts and laid the groundwork for this development,” said Ivan Oransky, executive director of the Center for Scientific Integrity and co-founder of Retraction Watch. “This agreement means that the Retraction Watch Database has sustainable funding to allow its work to continue and improve.”\nPlease join Crossref and Retraction Watch leadership, among other special guests, for a community call on 27th September at 1 p.m. UTC to discuss this new development in the pursuit of research integrity.\nSupporting details Crossref retractions number 14k, and the Retraction Watch database currently numbers 43k. There is some overlap, making a total of around 50k retractions. 
The full dataset has been released through Crossref’s Labs API, initially as a .csv file to download directly: https://0-api-labs-crossref-org.libus.csd.mu.edu/data/retractionwatch?name@email.org (add your ‘mailto’). Edit: 2024-10-10: The full dataset is available in a git repository at https://gitlab.com/crossref/retraction-watch-data. The Crossref Labs API also displays information about retractions in the /works/ route when metadata is available, such as https://0-api-labs-crossref-org.libus.csd.mu.edu/works/10.2147/CMAR.S324920?name@email.org (add your ‘mailto’). If you don\u0026rsquo;t have a .json viewer, please see below for screenshot. Crossref is paying an initial acquisition fee of USD $175,000 and will pay Retraction Watch USD $120,000 each year, increasing by 5% each year. The initial term of the contract is five years. The full text of the contract will be made public in the coming fortnight. EDIT 2023-09-26: Here is the signed agreement. There will be a community call on 27th September at 1 p.m. UTC (your time zone here). Please register. An open FAQ document is available to collect questions to be answered at the webinar. This announcement will always be accessible via Crossref DOI https://0-doi-org.libus.csd.mu.edu/10.13003/c23rw1d9; please use this persistent link for sharing. About Retraction Watch and The Center for Scientific Integrity The Center for Scientific Integrity is a U.S. 501(c)3 non-profit whose mission is to promote transparency and integrity in science and scientific publishing, and to disseminate best practices and increase efficiency in science. In addition to maintaining and curating the Retraction Watch Database, the Center is the home of Retraction Watch, a blog founded in 2010 that reports on scholarly retractions and related issues in research integrity.\nAbout Crossref Crossref is a global community infrastructure that makes all kinds of research objects easy to find, assess, and reuse through a number of services critical to research communications, including an open metadata API that sees over 1.1 billion queries every month. Crossref’s \u0026gt;19,000 members come from 151 countries and are predominantly university-based. 
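As a rough sketch (not part of the original announcement), the snippet below shows one way to call the Labs API routes listed in the supporting details above: downloading the retraction dataset and looking up a single DOI on the /works/ route. It uses the public api.labs.crossref.org host rather than the proxied URLs shown above, the requests library, and a placeholder contact address; the exact response fields are an assumption to verify against the live Labs API output.

```python
# Hedged sketch of the Labs API calls described above; endpoints are taken
# from the announcement, the mailto value and output handling are placeholders.
import requests

LABS = "https://api.labs.crossref.org"   # public Labs API host (assumed, non-proxied)
MAILTO = "name@email.org"                # replace with your own contact address

# 1. Download the full retraction dataset as CSV.
resp = requests.get(f"{LABS}/data/retractionwatch", params={"mailto": MAILTO}, timeout=120)
resp.raise_for_status()
with open("retractionwatch.csv", "wb") as fh:
    fh.write(resp.content)

# 2. Fetch a single work record that carries a Retraction Watch assertion.
doi = "10.2147/CMAR.S324920"             # example DOI from the announcement above
work = requests.get(f"{LABS}/works/{doi}", params={"mailto": MAILTO}, timeout=60)
work.raise_for_status()
print(work.json())                       # inspect the record for the retraction metadata
```

If you prefer not to write code, the dataset link above can simply be opened in a browser with your mailto appended, as the announcement notes.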
Their ~150 million DOI records contribute to the collective vision of a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nEnquiries For Retraction Watch/Center for Scientific Integrity: Ivan Oransky, ivan@retractionwatch.com For Crossref: Ginny Hendricks, ginny@crossref.org A screenshot of an example Labs API metadata record with a Retraction Watch-asserted retraction\n", "headings": ["Crossref acquires Retraction Watch data and opens it for the scientific community","Supporting details","Enquiries"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/retractions/", "title": "Retractions", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/amanda-french/", "title": "Amanda French", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/maria-gould/", "title": "Maria Gould", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/open-funder-registry-to-transition-into-research-organization-registry-ror/", "title": "Open Funder Registry to transition into Research Organization Registry (ROR)", "subtitle":"", "rank": 1, "lastmod": "2023-09-07", "lastmod_ts": 1694044800, "section": "Blog", "tags": [], "description": "Today, we are announcing a long-term plan to deprecate the Open Funder Registry. For some time, we have understood that there is significant overlap between the Funder Registry and the Research Organization Registry (ROR), and funders and publishers have been asking us whether they should use Funder IDs or ROR IDs to identify funders. It has therefore become clear that merging the two registries will make workflows more efficient and less confusing for all concerned.", "content": "Today, we are announcing a long-term plan to deprecate the Open Funder Registry. For some time, we have understood that there is significant overlap between the Funder Registry and the Research Organization Registry (ROR), and funders and publishers have been asking us whether they should use Funder IDs or ROR IDs to identify funders. It has therefore become clear that merging the two registries will make workflows more efficient and less confusing for all concerned. 
Crossref and ROR are therefore working together to ensure that Crossref members and funders can use ROR to simplify persistent identifier integrations, to register better metadata, and to help connect research outputs to research funders.\nJust yesterday, we published a summary of a recent workshop between funders and publishers on funding metadata workflows that we convened with the Dutch Research Council (NWO) and Sesame Open Science. As the report notes, \u0026ldquo;open funding metadata is arguably the next big thing\u0026rdquo; [in Open Science]. That being the case, we think this is the ideal time to strengthen our support of open funding metadata by beginning this transition to ROR.\nComparing the features of ROR and the Funder Registry Let\u0026rsquo;s look at some of the major similarities and differences between the two registries, including their history, features, scope, and usage, since there are important nuances and distinctions that are helpful to understand.\nOverview\nROR | Funder Registry\nLaunched in 2019 | Launched in 2013\nPrimary use case is contributor affiliation | Primary use case is funding acknowledgement\n105k+ records | 35k+ records\nCC0 data | CC0 data\nREST API | REST API\nFree to use | Free to use\nEntire registry downloadable as JSON and CSV | Entire registry downloadable as RDF; funder names and IDs downloadable as CSV\nRecords contain mappings to other IDs | Records do not contain mappings to other IDs\nOrganization relationships and hierarchy | Organization relationships and hierarchy\n8 organization types | 2 funder types, 8 funder subtypes\nOpen source code and multiple open-source tools available | Open source code\nWeb-based registry search | Web-based search for works in Crossref associated with each Funder ID\nWeb-based landing pages for each ROR record | JSON landing pages for each Funder Registry record\nUpdated monthly | Updated monthly\nPublic curation process | Private curation process\nAnyone can request changes and additions | Anyone can request changes and additions\nStable financial support | Stable financial support\nBeginning to be supported in funding and publishing workflows | Somewhat well supported in most funding and publishing workflows\nCurrently used by 260+ Crossref members 1 | Currently used by 2100+ Crossref members 2\nHistory The Open Funder Registry was launched as FundRef over a decade ago to enable the community to cite research financing and assert it within the scholarly record, acknowledging the organizations granting their support. Elsevier generously donated the seed data for the Funder Registry and has managed its curation for the last ten years, while we have maintained the technical operations and promoted community adoption of the Funder Registry.\nThe Research Organization Registry (ROR) was introduced in 2019 by the California Digital Library, DataCite, and Crossref to enable the community to cite contributor affiliations and assert them within the scholarly record, acknowledging the organizations that housed or performed the research. 
Digital Science generously donated the seed data for the Research Organization Registry from its Global Research Identifier Database (GRID) initiative, and Crossref, DataCite, and the California Digital Library have contributed labor and resources to turn ROR into a mature, independent, freely available offering.\nScope One key difference between the registries is that ROR has always included funding organizations, and ROR records have always included mappings to Funder IDs where available, while the reverse is not true: the Funder Registry includes only funding organizations, not other kinds of organizations, and Funder Registry records do not currently include mappings to ROR IDs or other identifiers. It therefore makes sense to expand ROR\u0026rsquo;s initial contributor affiliation use case to include the function of identifying research financing.\nUsage More Crossref members use Funder IDs than use ROR IDs, to be sure. You can see from the table above that the number of Crossref members using Funder IDs in Crossref records is higher by almost a factor of 10 than the number of Crossref members using ROR IDs in Crossref records. But note too that the current rate of adoption is far higher for ROR than it is for the Funder Registry. Since January of 2022, we\u0026rsquo;ve seen a gratifying number of publishers and service providers beginning to use ROR identifiers for contributor affiliations in Crossref. In the last year, the number of Crossref members depositing ROR IDs has increased by 356%, while the number depositing Funder IDs has increased only by 12%. As evidenced by its ballooning API traffic, too, with more than 20 million requests last month,3 ROR is clearly being used by many scholarly research systems for many purposes. The more systems that use an identifier, the more valuable that identifier becomes as a vehicle for exchanging information.\nEven though ROR\u0026rsquo;s primary use case has been to identify contributor affiliations, ROR is in fact already being used by funders. Nineteen funding organizations are depositing ROR IDs in their grant records with Crossref to denote principal investigator affiliations,4 and, following a meeting of the Crossref Funder Advisory Group last month, all eighty funder members are primed to start using ROR IDs to identify themselves in grant records. DataCite has allowed ROR IDs as a funding identifier since 20195, and while there are currently over 877,000 DataCite records that use Funder IDs to identify funders,6 there are also over 161,000 DataCite records that use ROR IDs to identify funders.7\nTools and services Both the Funder Registry and ROR offer open data and open source code, but we think that ROR\u0026rsquo;s suite of free and open source utilities (some of which were developed by Crossref staff) gives it a competitive advantage. We know that publishers and their service providers have ongoing challenges in collecting and matching funding information from authors and in validating Funder IDs. With ROR’s extensive toolkit, publishers and their technology providers who adopt ROR will be in a much better position to improve the accuracy of funding acknowledgements in metadata, which can in turn enable the development of reliable analytics, tools, and services for funders, regulators, research facilities, and the public.\nCrossref has built tools based on OpenRefine for both the Funder Registry and ROR: the Open Funder Registry Reconciliation Service and the ROR Reconciler are both useful ways to clean messy data. 
ROR, however, also offers a much-used API endpoint that helps match organization names to ROR IDs, and several third parties have also developed and shared open source matching tools and services for ROR. Crossref and ROR are also collaborating on new strategies for affiliation matching that will be able to match funding references.\nCommunity engagement models The Funder Registry has been curated for over a decade through time and expertise generously donated by Elsevier. ROR offers more transparency and community involvement; it is openly governed by Crossref, DataCite, and the California Digital Library and is advised by a global network of community stakeholders through its Steering Group and Community Advisory Group. ROR is openly curated and is aided by a global Curation Advisory Board of volunteers.\nSummary For all of the above reasons, then, we believe that in the long term ROR will serve the community better as an identifier for funders. In a future post, we\u0026rsquo;ll do an even deeper dive into comparing the Funder Registry and ROR, comparing the metadata and data in each registry and giving statistics on funder assertions in our metadata.\nWhat will this mean for you? The many organizations whose tools, services, and workflows have been architected to use Funder Registry IDs will find this transition a challenge, and we don\u0026rsquo;t want to make light of that issue. Over the last ten years, we have encouraged the community to adopt Funder IDs, and the community has demonstrably recognized the benefits of doing so. Publishers have put a great deal of time, thought, and effort into collecting funder data and including it in Crossref metadata, and they have built internal reports and workflows around the Funder Registry. Both Crossref and ROR are committed to making the transition from the Funder Registry to the Research Organization Registry as simple as possible for those who have adopted the Funder Registry.\nIf you are not already using the Funder Registry and are planning to begin standardizing funding data, we recommend that you use ROR to identify funders. If you are currently using the Funder Registry in your systems and workflows, don\u0026rsquo;t worry! In the short term, and even in the medium term, Funder IDs aren\u0026rsquo;t going away. Eventually, however, the Funder Registry will cease to be updated, so any new funders will only be registrable in Crossref metadata with ROR IDs. Legacy Funder IDs and their mapping to ROR IDs will be maintained, so if Crossref members submit a legacy Funder ID, it will get mapped to a ROR ID automatically. Note, too, that Crossref is committed to maintaining the current funder API endpoints until ROR IDs become the predominant identifier for newly registered content.\nIn short, if you are already using Funder IDs, you can and should continue to do so. However, we do recommend that you begin looking at what it will take to integrate ROR into your systems and workflows for identifying funders. Think of it as warming up before a workout: it\u0026rsquo;s time to start swinging your arms and stretching your hamstrings.\nWe face challenges in this transition, too. Of these, we think the largest will be (1) completing the reconciliation work involved in mapping Funder IDs to ROR IDs, and (2) overhauling Crossref\u0026rsquo;s schemas, APIs, and deposit tools to support ROR IDs in all the ways we currently support Funder IDs. 
We\u0026rsquo;ll discuss both of these challenges in future blog posts, but it\u0026rsquo;s worth saying that any challenges pale in comparison to the benefit of enabling the whole community to use a single open identifier in multiple places in the scholarly record.\nTell us what you need! We want to hear from you. You can use our Community Forum to talk to us about the Crossref Funder Registry, and you can join the ROR Slack to talk to the ROR team and community. You can also contact Crossref via our request form or email ROR at info@ror.org, and you can attend online Crossref events and ROR events to get updates from us and ask us your questions.\nOne of the major messages we\u0026rsquo;re already hearing from funders and publishers is expressed in yesterday\u0026rsquo;s post on open funding metadata: \u0026ldquo;While many concluded that there was still a long way to go to solve the many technical challenges related to funding metadata, attendees were unanimous on its importance.\u0026rdquo; We look forward to beginning this important work together.\n1. Crossref API works with ROR IDs faceted by publisher name\n2. Crossref API works with Funder IDs faceted by publisher name\n3. ROR API Public API Usage Insights\n4. Crossref API works of type \u0026ldquo;Grant\u0026rdquo; with ROR IDs faceted by publisher name\n5. DataCite Metadata Schema 4.3 release notes, August 2019\n6. DataCite API Funder ID in funding reference\n7. DataCite API ROR ID in funding reference\n", "headings": ["Comparing the features of ROR and the Funder Registry","Overview","History","Scope","Usage","Tools and services","Community engagement models","Summary","What will this mean for you?","Tell us what you need!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/hans-de-jonge/", "title": "Hans De Jonge", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/open-funding-metadata-community-workshop-report/", "title": "Open funding metadata through Crossref; a workshop to discuss challenges and improving workflows", "subtitle":"", "rank": 1, "lastmod": "2023-09-06", "lastmod_ts": 1693958400, "section": "Blog", "tags": [], "description": "Ten years on from the launch of the Open Funder Registry (OFR, formerly FundRef), there is renewed interest in the potential of openly available funding metadata through Crossref. And with that: calls to improve the quality and completeness of that data. Currently, about 25% of Crossref records contain some kind of funding information. Over the years, this figure has grown steadily. A number of recent publications have shown, however, that there is considerable variation in the extent to which publishers deposit these data to Crossref.", "content": "Ten years on from the launch of the Open Funder Registry (OFR, formerly FundRef), there is renewed interest in the potential of openly available funding metadata through Crossref. And with that: calls to improve the quality and completeness of that data. 
Currently, about 25% of Crossref records contain some kind of funding information. Over the years, this figure has grown steadily. A number of recent publications have shown, however, that there is considerable variation in the extent to which publishers deposit these data to Crossref. Technical but also business issues seem to lie at the root of this. Crossref - in close collaboration with the Dutch Research Council NWO and Sesame Open Science - brought together a group of 26 organizations from across the ecosystem to discuss the barriers and possible solutions. This blog presents some anonymized lessons learned.\nThere is no Open Science without open metadata The interest in the potential of this open-source funding metadata seems to be entering a new stage. When registering (or updating) a DOI record for a publication, publishers can include information about the funding of the research. The Open Funder Registry grew out of recommendations in the report from the US Scholarly Publishing Roundtable in 2010. During the Annual Meeting of Crossref that year, Frederick Dylla, CEO of the American Institute of Physics, argued that in order to make research funding information in publications accessible, it needed to be presented in a standard way and stored in a central location.\nThe benefits of having open funding metadata available, listed by Dylla in his presentation 13 years ago, are still very valid:\nResearchers benefit because it increases transparency of their funding sources and supports the requirements they already have from their funders. For funders, having this data available is essential because it allows them to identify the published outcomes of publicly funded research. Essential to monitor compliance with open access policies, but also important given the pressures funders face to account for their spending of public money. For publishers, funding metadata provides a valuable service, as it provides insight into how the research they publish is funded. Although Crossref has been collating funding metadata for many years, there seems to be a renewed interest in this service. Publishers have long expressed a desire to solve the challenges, meta-researchers need this information in order to analyze research on research, editors are concerned with research integrity, including funding trends, and funders themselves need to track the reach and return of their support.\nOpen Science seems to be an important driver: As we move to an ecosystem built on Open Science principles, not only publications, data, and software need to be openly available, but also the metadata associated with those scholarly outputs. Indeed, in an Open Science world, all meta information should be open, and academia should not be dependent anymore on data from proprietary bibliographic databases. Indicators for research assessment and policy development should be open indicators, derived from open metadata. Much has been done in this area already, in the context of Open Citations and Open Abstracts. While many in the community have focused on the bigger picture of advocating for all open metadata, e.g. Metadata 20/20, open funding metadata is arguably the next big thing. Open Research Information, including open metadata, must be a strategic priority for science and society.\nRoom for improvement After ten years of collecting funding metadata, 25% of records in Crossref contain some kind of funding information, and this figure was reached by a steady growth over that time. 
A number of recent studies have shown, however, that there is room for improvement. A case study published by two of the present authors has shown that the extent to which publishers deposit funding information to Crossref varies considerably. Some larger society presses - American Chemical Society (ACS), American Physical Society (APS), and Royal Society of Chemistry (RSC) - perform exceptionally well, with almost 100% of publications containing funding information. But there is still a large number of publishers - among them large legacy publishers - that attain substantially lower figures or do not seem to deposit funding metadata at all. Our case study has shown that often this cannot be explained by the fact that authors have not provided any funding information, as often this information is available in the acknowledgement sections of the papers. Somehow, however, this data does not find its way to Crossref.\nWorkflows and challenges: collect, retain, validate, deposit In order to chart the challenges that publishers face when collecting this information, we organized a roundtable session. 26 organizations were invited from across the ecosystem. These included: major publishers (American Chemical Society, British Medical Journal, Elsevier, IOP Publishing, PLOS, Royal Society of Chemistry, Sage, Springer Nature, Taylor \u0026amp; Francis, and Wiley), funders (European Research Council, Austrian Research Council, Dutch Research Council, OSTI-DOE, UKRI, and Michael J Fox Foundation) as well as service providers (Aries Editorial Manager, PKP / OJS, Scholastica, and eJournal Press).\nIn order to map the potential barriers and challenges publishers face, participants were presented with a workflow scheme representing a hypothetical production process.\nThis workflow outlined the steps in the production process at which funder information would potentially be handled, as well as some of the considerations that might be at play at each step.\ncollecting funder information (upon submission or acceptance) extracting funder information from full text retaining funder information through the production workflow including funder information in article metadata making metadata and/or full text available for indexing Participants were invited to comment on this workflow and place digital dots in the scheme to identify challenges in the collection, retention, and deposit of funding information. These pain points were afterwards fleshed out in break-out groups.\nLessons learned 1. Still a lack of awareness among editors and authors For many journals and publishers, collecting funding information starts when papers are submitted through submission systems. Many publishers use the same systems: ScholarOne and Editorial Manager, though many have multiple systems in place for different portfolios of journals. Around 25,000 journals use PKP’s Open Journal System, and Scholastica and eJournal Press are growing in popularity and importance. All of them provide the possibility for authors to enter funder information but this does not by all means mean that all journals make use of it. Submission systems are highly customizable, and publishers tend to tailor systems to the needs and wishes of their journals. Editors who do not see much value in collecting funding metadata therefore present a first ‘weak link’. Publishers and tech providers agreed that more outreach is needed about the importance of funding metadata among editors and authors.\n2. 
Improvements are needed in submission systems Where journals and publishers agree on asking authors to register funding information through the submission systems, many express a tension between collecting structured metadata and making it as easy as possible for authors. Many are hesitant to use mandatory input fields. Instead, funding metadata is often collected as free text, giving rise to a plethora of ambiguities. Most systems provide suggestions from the Open Funder Registry based on the author’s input. A lot seems to go wrong at this stage. Authors often persist in the wrong spelling of their funder and do not choose predefined suggestions, making it very difficult to match input to Funder IDs. Publishers estimated the number of non-matches at up to 50%. Trivial issues like “Bill \u0026amp; Melinda” versus “Bill and Melinda” or “Netherlands Organization” versus “Netherlands Organisation” result in errors. Here, autocomplete techniques seem to be in dire need of improvement. Based on a preliminary analysis of funder name variants used in Crossref, adding up to 3 of the most frequently used name variants to the list of ‘alternative funder names’ in the Funder Registry could solve around 60% of missed matches.\n3. A lot can be learned from how some publishers have changed and organized their workflows Faced with these issues, the Royal Society of Chemistry has invested in innovative workflows to enhance the availability of funding metadata. RSC presented to the group the details of how they have tackled the issue rather than relying solely on the free-text input of the authors. In addition to author-provided acknowledgements, they work with third-party production vendors to programmatically extract information from the acknowledgement section of papers. Data from the two sources are compared, and when differences or conflicts are noted, the data is fixed, completed, and reformatted. The next step is crucial - the newly-cleansed funding data is fed back to the author for validation, and retained during the production phase of the paper. Implementation of this validation stage has increased the availability of funding metadata by 30%. In 2023, 80% of papers published by RSC have some kind of structured funding metadata. An additional benefit of this feedback loop was its educational effect, alerting authors to the importance of correct funding information. But even RSC continues to struggle with issues of funder name ambiguity, use of acronyms, authors reporting grant or award names instead of funder names, issues with phraseology of funding acknowledgements, and frustrations with the user experience of the service provider integrations with the OFR.\nMany publishers agreed that collecting funding information from full-text papers is the preferred option, not only because it lowers the burden for authors, but also because it potentially yields better data, as this is where authors are expected to include this information as part of their funder’s commitments.\n4. Retaining information and submitting: no big deal At the beginning of the workshop, it was expected that the retention of funding information and its propagation through various interlinked systems might pose problems for publishers. However, this was not identified as a problem by participants. Nor was there mention of any challenges in depositing information to Crossref, nor of downstream databases having difficulties retrieving the metadata.\n5. 
There is a genuine interest across the ecosystem to improve funding information in Crossref While many concluded that there was still a long way to go to solve the many technical challenges related to funding metadata, attendees were unanimous on its importance. Participants agreed that these improvements would require investments from publishers. A willingness to make those investments was expressed, but also a sense that publishers who do should be incentivised for it, maybe as part of the agreements they have with library consortia. JISC’s recent contract with Taylor \u0026amp; Francis (page 164, Section 7a (iii)) is a good example of how consortia can successfully negotiate the supply of high-quality metadata, including funding metadata. It was agreed that another solution could be to allow the additional deposit of the free-text acknowledgement section as a metadata field in Crossref. Instead of educating authors to enter their data correctly, or relying on publishers and tech providers to improve their systems to turn free-text funder acknowledgements into structured data, text mining and machine learning could facilitate the improvement of this data.\nNext steps For this workshop, we concentrated on the collection and registration of funding metadata by publishers and did not go into the important, related issue of the Crossref Grant Linking System (Grant IDs), nor the plans to further align Funder IDs with ROR IDs, both projects that help the community to better record funding information.\nNext steps resulting from this community workshop are as follows:\nFunders are encouraged to join and register their grants with Crossref DOIs so that registered grants can in future be linked directly to publications and other outputs. About 50 funders have already created around 90,000 grant records. The more grant DOIs that are created by funders, the more likely publishers will be able to prioritize collecting them in their own publication metadata.\nPublishers are encouraged to work with their service providers to prioritize the quality of the open funding metadata through Crossref, which is a source for downstream analyses and inclusion by many thousands of tools and services.\nOther stakeholders are also offering opportunities to focus on funding metadata, showing a growing interest in the completeness of funder metadata. For example, OA Switchboard’s funder pilot, which also looks at the potential to feed enriched metadata back to Crossref to make it publicly available, and the Open Research Funders Group’s work to promote the improvement of tracking research outputs, including funding metadata, which includes an active working group in this area.\nCrossref will continue to work with publishers and service providers to encourage and make it easier to include funder information in article metadata, including the use of grant identifiers and funder identifiers. Work is underway to bring the Open Funder Registry closer to ROR (Research Organization Registry), and there are plans, at some point in the future, to merge the OFR into ROR, as ROR has a much wider scope and is more broadly community-governed. Crossref has also begun some work on collecting ROR IDs where we currently collect Funder IDs. More technical information is available in this ticket.\nWe would like to thank all the participants of the workshop for their openness and commitment to working through these issues together. 
It was a rare opportunity to share insights from publishers, service providers, funders, and researchers - and a useful first step in co-creating a shared understanding of the challenges and charting a path forward.\n", "headings": ["There is no Open Science without open metadata","Room for improvement","Workflows and challenges: collect, retain, validate, deposit","Lessons learned","1. Still a lack of awareness among editors and authors","2. Improvements are needed in submission systems","3. A lot can be learned from how some publishers have changed and organized their workflows","4. Retaining information and submitting: no big deal","5. There is a genuine interest across the ecosystem to improve funding information in Crossref","Next steps"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/", "title": "Become a member", "subtitle":"", "rank": 5, "lastmod": "2023-09-01", "lastmod_ts": 1693526400, "section": "Become a member", "tags": [], "description": "If you’re a Funder, learn more here. Organisations need to be members of Crossref to create metadata records that identify, describe, and locate their work. You don\u0026rsquo;t need to be a member to retrieve metadata (read about our open metadata retrieval tools).\nBenefits of membership You can create and steward rich metadata records, adding relationships, provenance, contributor, and funding information, ensuring accuracy and persistence over time. By doing so, you are adding to and benefiting from reciprocal connections among a global network of research objects, co-creating a Research Nexus for the benefit of society including future generations.", "content": " If you’re a Funder, learn more here. Organisations need to be members of Crossref to create metadata records that identify, describe, and locate their work. You don\u0026rsquo;t need to be a member to retrieve metadata (read about our open metadata retrieval tools).\nBenefits of membership You can create and steward rich metadata records, adding relationships, provenance, contributor, and funding information, ensuring accuracy and persistence over time. By doing so, you are adding to and benefiting from reciprocal connections among a global network of research objects, co-creating a Research Nexus for the benefit of society including future generations.\nWorks are more likely to be discovered if they have a Crossref record. Your organisation will be joining the world\u0026rsquo;s largest registry of Digital Object Identifiers (DOIs) and metadata for scholarly research; your work is connected with 160 million other records from \u0026gt;20,000 other members from ~155 countries. Crossref facilitates an average of 1.1 billion DOI resolutions (clicks of a DOI link) every month, which is 95% of all DOI activity.\nYour metadata is freely and openly shared in a consistent, machine-readable way. You don\u0026rsquo;t have to duplicate the information for multiple parties as your metadata is discoverable by thousands of other systems in the global research ecosystem; there are over 1 billion calls to our API every month.\nYour organisation can have access to unique tools that support research integrity, such as incorporating Crossmark status buttons on your landing pages and PDFs, and using Similarity Check to screen for text-based plagiarism.\nYou can vote in our board elections and/or stand for a seat, participating in the governance of Crossref. 
There are also many opportunities to get involved with the whole community, to co-create solutions to shared problems, help shape and influence our roadmap, and to join the discussions.\nIt\u0026rsquo;s so much more than just getting a DOI.\nAre you eligible? Many types of organisations register their research objects with Crossref. You could be a research institution, a publisher, a government agency, a research funder, or a museum! In order to become a member, you need to meet the criteria set out in our membership terms approved by our governing board which defines eligibility as:\nMembership in Crossref is open to organisations that produce professional and scholarly materials and content.\n(NB: We are bound by international sanctions so membership is restricted in certain countries; see our sanctions page for details.)\nIn most cases, if your work is likely to be cited in the research ecosystem and you consider it part of the evidence trail, then you’re eligible to join.\nMember obligations When you apply for membership, you agree to our membership terms which include several obligations. Some of the important ones to note, are:\n1. Register only what you have the legal rights to Our terms (3 (c)) stipulate that \u0026ldquo;The Member will not deposit or register Metadata for any Content for which the Member does not have legal rights to do so\u0026rdquo;.\n2. Deposit metadata and create DOI links All members join in order to create records with metadata and DOIs and share these throughout Crossref infrastructure for others to use. If you publish journals, you are obliged to create records for all articles going forward.\n3. Maintain and update your metadata and landing page URLs over the long term You\u0026rsquo;ll need to maintain and update your metadata, including updating URLs if your content moves or changes, and adding rich metadata as you collect more. Look into archiving or other preservation services to ensure your records live on after you.\n4. Make sure you have a unique landing page for each metadata record/DOI and follow our DOI display guidelines Make sure that your DOI links resolve to a unique landing page URL. For example, if you are registering journal articles, each article needs to have a unique landing page URL. It is the same if you are, for example, a funder: each DOI link needs to resolve to a page with information about the specific grant.\nFollow our DOI display guidelines and position your DOI links on the page or PDF near the descriptive information such as title and contributors. Always use DOI links to communicate and share your work across the web whether in documents or databases.\n5. Link references Reference linking was the original reason that the community decided to start Crossref. It is so that members don\u0026rsquo;t have to establish bilateral linking agreements with each other and instead can create and exchange connections through the Crossref infrastructure. If your work is the kind that typically has a list of references (e.g. articles or preprints), you are obliged to retrieve and use the DOI links that your fellow Crossref members create. You can do this using our reference linking tools.\nWe also strongly encourage you to include references within your own metadata records. This benefits you and others by further maximising the network effect.\n6. 
Pay your fees Crossref is sustained by fees and not reliant on grant funding, which would conflict with our sustainability model, guided by our commitment to the Principles of Open Scholarly Infrastructure (POSI). Your fees keep the organization afloat and the system running. We are a not-for-profit organisation so any surpluses are invested back into improving the infrastructure and solving new problems for the community. Please pay your invoices on time 👍🏽\nWhat are the fees? All our fees are set out on the fees page.\nIndependent membership Members pay an annual membership fee, which is tiered depending on your organisation\u0026rsquo;s revenue or expenses. After you apply, we will send you a pro-rated membership invoice for the remainder of the current year and this is due to be paid before your membership can be activated. You then pay the same annual membership fee each year, every January. Membership renews automatically unless you actively cancel.\nThere are also one-off registration fees for each new metadata record and DOI that you create. There are never any fees to add to and update the metadata. We send you this invoice quarterly, after calculating the quantity and type of records you have registered that quarter.\nOrganisations located in the least economically-advantaged countries in the world do not pay an annual membership fee and do not pay registration fees - find out more on our Global Equitable Membership (GEM) program page.\nIf you have further questions about billing, visit our fees page and our billing FAQs page.\nMembership via a Sponsor The Sponsor Program is for members who do not have the resources or capabilities to work directly with and pay Crossref. More than half of our members have joined via a Sponsor, which means they don\u0026rsquo;t pay Crossref a membership fee, and they receive technical support and expertise locally, through their sponsor.\nIf a sponsored member is eligible for the Global Equitable Membership (GEM) program, we do not charge the sponsor any membership or registration fees. Find out more about working with a sponsor.\n(NB: some Sponsors may charge you for their services, so it\u0026rsquo;s important that you clarify their terms before joining).\nReady to apply? Option 1: Apply as an independent member Option 2: Find a Sponsor to join through If you have questions, please consult our forum at community.crossref.org or open a ticket with our membership team where we\u0026rsquo;ll reply within a few days.\n", "headings": ["Benefits of membership","Are you eligible?","Member obligations","1. Register only what you have the legal rights to","2. Deposit metadata and create DOI links","3. Maintain and update your metadata and landing page URLs over the long term","4. Make sure you have a unique landing page for each metadata record/DOI and follow our DOI display guidelines","5. Link references","6. Pay your fees","What are the fees?","Independent membership","Membership via a Sponsor","Ready to apply?","Option 1: Apply as an independent member","Option 2: Find a Sponsor to join through"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/", "title": "Get involved", "subtitle":"", "rank": 4, "lastmod": "2023-08-05", "lastmod_ts": 1691193600, "section": "Get involved", "tags": [], "description": "It takes a village. Crossref only works as long as the scholarly research community wants to work together globally, across all disciplines, for integrity and openness of the scientific record. 
Our community includes tens of thousands of organisations and systems in over 160 countries. Over 20,000 organisations create identifiers for metadata records that describe and locate their research. They share them through Crossref so that they don\u0026rsquo;t have to duplicate the information for the many thousands who consume and use it downstream throughout the research ecosystem.", "content": "It takes a village. Crossref only works as long as the scholarly research community wants to work together globally, across all disciplines, for integrity and openness of the scientific record. Our community includes tens of thousands of organisations and systems in over 160 countries. Over 20,000 organisations create identifiers for metadata records that describe and locate their research. They share them through Crossref so that they don\u0026rsquo;t have to duplicate the information for the many thousands who consume and use it downstream throughout the research ecosystem.\nCrossref was founded in 2000 by some established scientific societies and publishers. Now, our membership comprises just 35% publishers or societies, with the largest membership group (40%) self-identifying as research institutions and universities. The other 25% is made up of funders (who started joining to record grants, use of facilities, and other support) alongside hundreds of museums, government organisations, libraries, data and subject repositories, conference providers, standards bodies, individual scholars, and news outlets. Our members register any research object that might be part of the scholarly evidence trail - from grants, articles, books, preprints, and reports to data, software, video, and physical objects.\nCrossref is the world\u0026rsquo;s largest registry of Digital Object Identifiers (DOIs) and metadata for the scholarly research community. Unlike other DOI agencies, we encompass all research stakeholders and all geographies. We facilitate an average of 1.1 billion DOI resolutions (clicks of a DOI link) every month, which is 95% of all DOI activity. And our APIs see over 1 billion queries of our metadata every month.\nThis global network effect means your work is more likely to be found, if it\u0026rsquo;s in Crossref.\nEveryone is welcome, and there are lots of opportunities to participate and help prioritise our strategic agenda and roadmap, to get together, to learn or advise, or to more formally contribute to open science communication through co-creating and using our Research \u0026lsquo;Nexus\u0026rsquo; of metadata and relationships, backed by our strong commitment to the POSI principles for broad governance, transparent and forkable operations, and financial sustainability.\nSo get involved with Crossref to\u0026hellip;\nIdentify and describe any research object You can maintain and enrich the scholarly record in perpetuity to contribute evidence and make your work discoverable globally. If you publish scholarly or professional materials, content, or any kind of research object, you are eligible to become a member of Crossref. Membership allows you to create DOI records that persistently identify and describe your work.\nAlways use the DOI links wherever you communicate about your work, following the display guidelines available at https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy, so that the community can refer to and build upon your work. 
You agree to certain obligations that help safeguard the system for everyone, including future generations.\nThere are fees attached as part of our not-for-profit sustainability model, and we also have a number of ways to reduce barriers, such as the Global Equitable Membership (GEM) program, or joining via a Sponsor organisation. If you\u0026rsquo;re just starting out in your journey, visit the Publishers Learning And Community Exchange (The PLACE) for more general orientation including information about ethical publishing and open access, which is broader than just Crossref.\nAdd more metadata to make a difference You can curate and steward the record and include metadata and assert relationships between other objects so your work can be found, cited, linked, assessed, and reused by the whole community. You can add context and match research to funding, affiliations, translations, references, contributors, and more.\nCrucially, you can also put updates, corrections, and retractions on the record so these notices accompany the work to better inform users and readers in the future.\nAll of this added context helps co-create the \u0026lsquo;Research Nexus\u0026rsquo;, which is the vision of a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society\nAnalyse metadata to inform and understand research Crossref is the sustainable source of community-owned scholarly metadata and is relied upon by thousands of systems across the research ecosystem and the globe.\nTake a look at our metadata and search for anything at search.crossref.org and specifically by funder at search.crossref.org/funding. Or explore how to use our API with these sample queries.\nPeople using Crossref metadata need it for all sorts of reasons including metaresearch (researchers studying research itself such as through bibliometric analyses), publishing trends (such as finding works from an individual author or reviewer), or incorporation into specific databases (such as for discovery and search or in subject-specific repositories), and many more detailed use cases.\nAnyone can retrieve and use over 150 million records without restriction. So there are no fees to use the metadata but if you really rely on it then you might like to sign up for Metadata Plus which offers greater predictability and higher rate limits.\nAdditionally, we facilitate screening of text-based research for originality through collective licensing terms with leading software tools such as iThenticate from Turnitin.\nCo-create solutions to shared problems Crossref co-developed the initial system for what is now known as ORCID, we co-founded and help operate the ROR Registry, and we run the Scholix endpoint for data citation.\nWe are active on dozens of boards and working groups, such as JATS, Open Research Funders Group, African Journals Online (AJOL), software citation, and OA Switchboard - to list just a few. We collaborate with others on several community initiatives, such as co-founding Metadata 20/20 and we support other campaigns for richer open metadata, such as I4OC and I4OA.\nWe have especially close integrations and partnerships with other POSI adopters, such as DOAJ, the Public Knowledge Project, Europe PMC, and DataCite.\nWe’re keen for volunteers to join our advisory and working groups which help to support and give input on our different programs and initiatives. 
If you want to help test new metadata schema, suggest or join a working group, or have thoughts about what Crossref should prioritise, please get in touch to make a suggestion. We have a large agenda and roadmap but we\u0026rsquo;re always open to ideas.\nJoin the discussions We host webinars and in-person events which are all open and free to attend. Through these we aim to inform and update people with what we’re working on, but we also want your feedback - what’s happening with you and what do you need from us? What shall we collaborate on next? Who in the community would you like to hear more from, and what about?\nWith an eye towards the future of our planet with a promise to reduce travel, we\u0026rsquo;re doing more events online than in-person so please come and chat with us over on Mastodon, interact with the whole community and ask for support on our forum, and read and comment on our blog (or volunteer a guest post).\nIf you\u0026rsquo;re passionate about open research infrastructure and your community doesn\u0026rsquo;t feel represented in Crossref, please tell us. Consider applying to become an Ambassador. You\u0026rsquo;d be joining many others around the world in becoming a Crossref expert and helping us understand how to support your community better.\nYou can browse and contribute to our open code on GitLab and in fact you can even update this website by opening issues or merge requests in the public repository.\nOur team is looking forward to hearing from you; thanks for getting involved.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/ror/", "title": "Research Organization Registry (ROR)", "subtitle":"", "rank": 1, "lastmod": "2023-07-31", "lastmod_ts": 1690761600, "section": "Get involved", "tags": [], "description": "ROR IDs and Affiliations of authors can now be tracked in Participation Reports! Check your own Participation Report to see how many of your publications have author affiliations and ROR IDs in Crossref metadata. If you deposit metadata via XML, see our guide on Affiliations and ROR for instructions on how to include affiliations and ROR IDs in your metadata. Crossref encourages our members to include ROR IDs in metadata in order to help make research organization information clear and consistent as it is shared between systems.", "content": " ROR IDs and Affiliations of authors can now be tracked in Participation Reports! Check your own Participation Report to see how many of your publications have author affiliations and ROR IDs in Crossref metadata. If you deposit metadata via XML, see our guide on Affiliations and ROR for instructions on how to include affiliations and ROR IDs in your metadata. Crossref encourages our members to include ROR IDs in metadata in order to help make research organization information clear and consistent as it is shared between systems. 
ROR IDs are essential to realize a rich and complete Research Nexus because they enable connections between research outputs and the organizations that support researchers.\n\u0026ldquo;At Scholastica, we care about taking steps to enrich metadata – like adding ROR IDs, for example, on behalf of our customers, so they don’t have to worry about the technical aspects of metadata collection or creation and can instead focus on maximizing the discovery benefits.\u0026rdquo; \u0026ndash; Cory Schires, Co-founder and Chief Technology Officer, Scholastica\nRead the case study on ROR in Scholastica publishing products.\n\u0026ldquo;If we’re talking about misconduct, then you might need to be able to contact the institution that the author is from. On an individual manuscript, it doesn’t matter if there’s no identifier – an address will do. But if you find some signal that is on manuscripts at scale, and you’ve got thousands of them, well, you need an identifier. You can’t go through them and try and search for every single one of those institutions.\u0026rdquo; \u0026ndash; Adam Day, CEO, Clear Skies Ltd.\nRead the case study on why Clear Skies uses ROR to help detect paper mills.\nWhat is ROR? ROR is the Research Organization Registry \u0026ndash; a global, community-led registry of open persistent identifiers for research organizations. The registry currently includes globally unique persistent identifiers and associated metadata for more than 110,000 research organizations.\nROR IDs are specifically designed to be implemented in any system that captures institutional affiliations and to enable a richer networked research infrastructure. ROR IDs are interoperable with other organization identifiers, including GRID (which provided the seed data that ROR launched with), the Open Funder Registry, ISNI, and Wikidata. ROR data is available under a CC0 Public Domain waiver and can be accessed at no cost via a public API and a data dump.\nROR record for the University of St. Andrews\nWho is ROR? ROR is operated as a joint initiative by Crossref, DataCite, and the California Digital Library, and was launched with seed data from GRID in collaboration with Digital Science. These organizations have invested resources into building an open registry of research organization identifiers that can be embedded in scholarly infrastructure to effectively link research to organizations. ROR is not a membership organization (or an organization at all!) and charges no fees for use of the registry or the API. Read more about ROR\u0026rsquo;s sustainability model.\nWhy ROR IDs are an important element of Crossref metadata For a long time, Crossref only collected affiliation metadata as free-text strings, which made for ambiguity and incomplete data. An author affiliated with the University of California at Berkeley might give the name of the university in any of several common ways:\nUniversity of California, Berkeley University of California at Berkeley University of California Berkeley UC Berkeley Berkeley And likely more … While it isn’t too difficult for a human to guess that “UC Berkeley,” “University of California, Berkeley,” and “University of California at Berkeley” are all referring to the same university, a machine interpreting this information wouldn’t necessarily make the same inference. 
If you are trying to easily find all of the publications associated with UC Berkeley, you would need to run and reconcile multiple searches at best, or, at worst, miss some data completely.\nThis is where an organization identifier comes in: a single, unambiguous, standardized identifier that will always stay the same. For UC Berkeley, that would be https://ror.org/01an7q238.\nIn 2019, Crossref members indicated that the ability to associate research outputs with organizations in a clean and consistent fashion was one of their most desired improvements to Crossref metadata. In January of 2022, therefore, Crossref added support for ROR IDs in its metadata schema and APIs. Since then, more and more Crossref members have been including ROR IDs in DOI metadata.\nPublishers and service providers can implement ROR in their systems so that submitting authors and co-authors can easily choose their affiliation from a ROR-powered list instead of typing in free text. Authors themselves do not have to provide a ROR ID or even know that a ROR ID is being collected. This affiliation information can then be sent to Crossref alongside other publication information.\nDemo of collecting ROR IDs in a typeahead field\nIf the submission system you use does not yet support ROR, or if you don\u0026rsquo;t use a submission system, you\u0026rsquo;ll still be able to provide ROR IDs in your Crossref metadata. ROR IDs can be added to JATS XML, and Crossref helper tools will start to support the deposit of ROR IDs. There\u0026rsquo;s also an OpenRefine reconciler that can map your internal identifiers to ROR identifiers.\nROR IDs for affiliations stand to transform the usability of Crossref metadata. While it’s crucial to have IDs for affiliations, it’s equally important that the affiliation data can be easily used. The ROR dataset is CC0, so ROR IDs and associated affiliation data can be freely and openly used and reused without any restrictions.\nThe ROR IDs registered by members in their Crossref metadata are available via Crossref’s open APIs so that they can be detected, analyzed, and reused by anyone interested in linking research outputs to research organizations. Examples include\nInstitutions who want to monitor and measure their research output by the articles their researchers have published Funders who want to be able to discover and track the research and researchers they have supported Academic librarians who want to find all of the publications associated with their campus Journals who want to know where authors are affiliated so they can determine eligibility for institutionally sponsored publishing agreements The inclusion of ROR IDs in Crossref metadata will eventually help all these entities make all these connections much more easily.\nGet ready to ROR 🦁! ROR is already working with publishers, funders and service providers who are integrating ROR in their systems, mapping their affiliation data to ROR IDs, and/or including ROR IDs in publication metadata. Libraries and institutional repositories are also beginning to build ROR into their systems and to send ROR IDs to Crossref in their metadata. 
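To make the free-text problem above concrete, here is a minimal sketch that asks ROR's public API to match the UC Berkeley variants to a single identifier. The affiliation-matching endpoint and the "chosen" flag are part of ROR's public API as documented at the time of writing; the input strings are just examples, and matching behaviour may evolve with the registry.

```python
import requests

# Minimal sketch: match free-text affiliation strings to ROR IDs via the public ROR API.
# Results flagged "chosen" are the API's confident pick; anything else needs human review.
variants = [
    "UC Berkeley",
    "University of California, Berkeley",
    "University of California at Berkeley",
]

for affiliation in variants:
    resp = requests.get("https://api.ror.org/organizations",
                        params={"affiliation": affiliation}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("items", [])
    chosen = next((i for i in items if i.get("chosen")), None)
    if chosen:
        org = chosen["organization"]
        print(f"{affiliation!r} -> {org['id']} ({org['name']})")
    else:
        print(f"{affiliation!r} -> no confident match; review the top candidates manually")
```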
See the growing list of active and in-progress ROR integrations for more stakeholders who are supporting ROR.\nIf you deposit metadata with Crossref via XML, see our guide on Affiliations and ROR for instructions on how to include author affiliations and ROR IDs.\nFor further information on how ROR IDs are supported in the Crossref metadata, you can take a look at this .xsd file (under the ‘institution’ element) or in this journal article example XML. ROR also has some great help documentation for publishers and anyone else working with the ROR Registry.\nGet in touch with ROR if you have questions or want to be more involved in the project. If you have questions about adding ROR IDs to your Crossref metadata, get in touch with our support specialists or ask on the Crossref Community Forum.\n", "headings": ["What is ROR?","Who is ROR?","Why ROR IDs are an important element of Crossref metadata","Get ready to ROR 🦁!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/perspectives-my-thoughts-on-starting-my-new-role-at-crossref/", "title": "Perspectives: My thoughts on starting my new role at Crossref", "subtitle":"", "rank": 1, "lastmod": "2023-07-06", "lastmod_ts": 1688601600, "section": "Blog", "tags": [], "description": "My name is Johanssen Obanda. I joined Crossref in February 2023 as a Community Engagement Manager to look after the Ambassadors program and help with other outreach activities. I work remotely from Kenya, where there is an increasing interest in improving the exposure of scholarship by Kenyan researchers and ultimately by the wider community of African researchers. In this blog, I’m sharing the experience and insights of my first 4 months in this role.", "content": "My name is Johanssen Obanda. I joined Crossref in February 2023 as a Community Engagement Manager to look after the Ambassadors program and help with other outreach activities. I work remotely from Kenya, where there is an increasing interest in improving the exposure of scholarship by Kenyan researchers and ultimately by the wider community of African researchers. In this blog, I’m sharing the experience and insights of my first 4 months in this role.\nRight before joining Crossref, I was working as Stakeholder Manager with AfricArXiv, a community-led digital archive for African research communication. I transitioned to working with Crossref to take up a more challenging role, so I can apply the community-building and social innovation skills I gained over the last five years in my profession.\nWhat surprised me the most here is realising that such a robust infrastructure is being administered by a relatively small team. I wondered how the team keeps the services running and builds new solutions for the community. However, I am impressed by the collaborative culture, positive and healthy work environment, and great systems.\nI work within the Community Engagement and Communications team, where we collaboratively address members’ questions and challenges, plan events, create helpful content for our community and keep in touch with them. We help grow our community and create a better experience using our products and services.\nMy main focus has been the Ambassador programme, which started in 2018 and currently comprises 48 Ambassadors globally. The Ambassadors are our trusted contacts who support and engage our communities locally to make scholarly communications better. Through one-on-one virtual interaction with most of them, I noted that there was little interaction among the Ambassadors. 
Most of our Ambassadors want to connect more, both face-to-face and online. In the coming months, we aim to design our meetings together with the Ambassadors to encourage better exchange and relationships.\nI value Crossref’s insistence on diversity, equity and inclusion, and I enjoy contributing to those activities. Working with my colleagues in the outreach team to organise webinars and activities for the Global Equitable Membership (GEM) programme has been an exciting experience. I particularly enjoyed engaging with our Ambassadors Shaharima Parvin and Jahangir Alam from Bangladesh, and Binayak Pandey from Nepal, in organising the initial webinars for the GEM program in their countries. I feel it is one of the ways of creating more in-depth connections between our communities and our Ambassadors while making it possible for more institutions to be part of Crossref and contribute to scholarly communication.\nI have made a few webinar presentations online and recently did one in-person poster presentation in South Africa at the Sustainability, Research and Innovation conference. I gained more confidence interacting with the wider Crossref community and a deeper understanding of Crossref’s services. I look forward to more opportunities to discuss Crossref’s mission with the community and to collaborate with like-minded organisations, contributing to joint initiatives, such as the upcoming Better Together webinar series with ORCID and DataCite, and the Forum for Open Research in MENA events.\nI experienced the challenges of working remotely in many ways. A couple of days, there was no power, other days the internet connection was painfully slow, and hopping from one restaurant to another was something I had to deal with from time to time, with the hopes of finding quiet most times to have a good meeting with my colleagues, until I had more dependable work station. On the positive side, coordinating meeting times with colleagues, taking on tasks asynchronously and collaborating in real-time across different tools are making me more agile, patient and empathetic with myself and my colleagues.\nI am driven by the impact I want to contribute in my career working with Crossref, which is to build an inclusive research ecosystem where researchers across the globe can easily access scientific knowledge and make meaningful connections. And I feel confident about my colleagues, our systems and infrastructure and my capabilities to be part of a thriving community and organisation.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-request-for-comment-automatic-digital-preservation-and-self-healing-dois/", "title": "A Request for Comment - Automatic Digital Preservation and Self-Healing DOIs", "subtitle":"", "rank": 1, "lastmod": "2023-06-29", "lastmod_ts": 1687996800, "section": "Blog", "tags": [], "description": "Digital preservation is crucial to the \u0026ldquo;persistence\u0026rdquo; of persistent identifiers. Without a reliable archival solution, if a Crossref member ceases operations or there is a technical disaster, the identifier will no longer resolve. This is why the Crossref member terms insist that publishers make best efforts to ensure deposit in a reputable archive service. This means that, if there is a system failure, the DOI will continue to resolve and the content will remain accessible.", "content": "Digital preservation is crucial to the \u0026ldquo;persistence\u0026rdquo; of persistent identifiers. 
Without a reliable archival solution, if a Crossref member ceases operations or there is a technical disaster, the identifier will no longer resolve. This is why the Crossref member terms insist that publishers make best efforts to ensure deposit in a reputable archive service. This means that, if there is a system failure, the DOI will continue to resolve and the content will remain accessible. This is how we protect the integrity of the scholarly record.\nI will write another post, soon, on the reality of preservation of items with a Crossref DOI, but recent work in the Labs team has determined that we have a situation of drastic under-preservation of much scholarly material that has been assigned a persistent identifier. In particular, content from our smaller Crossref members, with limited financial resources, is often precariously preserved. Further, DOI URLs are not always updated, even when, for instance, the underlying domain has been registered by a different third party. This results in DOIs pointing to new, hijacked, and elapsed content that does not reflect the metadata that we hold.\nWe (Geoffrey) have (has) long-harboured ambitions to build a system that would allow for automatic deposit into an archive and then to present access options to the resolving user. This would ensure that all Crossref content had at least one archival solution backing it and greatly contribute to the improved persistent resolvability of our DOIs. We refer to this, internally, as \u0026ldquo;Project Op Cit\u0026rdquo;. And we\u0026rsquo;re now in a position to begin building it.\nHowever, we need to get this right from the design phase out. We need input from librarians working in the digital preservation space. We need input from members on whether they would use such a service. We are not digital preservation experts and we are acutely aware that we need the expertise of those who are, particularly where we\u0026rsquo;ve had to take some shortcuts. For instance: we are aware that the Internet Archive is perhaps not the first choice of many digital preservation librarians and specialists, who opt for specific scholarly-communications solutions. However, it is easy, open, and free. Hence, we propose for the prototype to use IA, on the assumption that this will be a proof-of-concept only, which we will expand to other archives if there is demand and once it works.\nSo: please do read the below and add your comments and questions to this thread in the community forum (link below), or send me queries/concerns by email. It would be excellent if we could receive comments by mid-August 2023. If you would rather comment on a Google doc, that\u0026rsquo;s also possible.\nIf enough people are interested, we could also host a community call to discuss this design and its prototyping. Do please, when emailing, let me know if this is of interest.\nProject Op Cit (Self-Healing DOIs) Request for Comment This document sets out the problem statement, a proposed prototype solution, and a transition path to production if successful.\nProposed Prototype Solution For members who opt-in to the service, We have a special class of DOI (only for open-access content) where, when the DOI is registered:\nWe immediately make an archive of the item with any archiving services that care to participate in the project (minimally, the Internet Archive, which is the easiest for us to begin with, but a modular/pluggable archival system). The Internet Archive Python Library should let us submit to them. 
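For illustration only, a minimal sketch of what that submission step could look like with the internetarchive Python library. The item identifier, file name, and metadata values are invented for this example, and the real deposit flow would be driven by the registration proxy described below; the library also assumes Internet Archive credentials have already been configured (for example via `ia configure`).

```python
from internetarchive import upload

# Minimal sketch: push an open-access fulltext file to the Internet Archive.
# The identifier, file name, and metadata below are hypothetical placeholders.
item_identifier = "crossref-opcit-10-5555-example"

result = upload(
    item_identifier,
    files=["10.5555-example.pdf"],
    metadata={
        "title": "Example article archived for Op Cit",
        "external-identifier": "doi:10.5555/example",
        "licenseurl": "https://creativecommons.org/licenses/by/4.0/",
    },
)
print([r.status_code for r in result])  # one HTTP response per uploaded file
```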
We could pursue other arrangements with CLOCKSS, LOCKSS, and Portico.\nWe update the XML to reflect the archives to which it has been submitted.\nThe DOI landing page is redirected to an interstitial page that we control. This page gives the user access options.\nWe develop processes to determine whether the original URL \u0026ldquo;works\u0026rdquo;. The heuristics that define whether a resource has changed substantially or works need long-term consideration and real-world testing. Using the interstitial page approach will allow us to refine this, with a long-term goal of eradicating it.\nFigure 1: The Deposit Process\nFigure 2: The Resolution Process\nPotential Challenges Content drift. It would be extremely difficult to detect content change vs. (eg) page structure change, except in the case of binary fulltext. However, we can poll for the DOI at an HTML endpoint and detect when binary fulltext items, such as a PDF, change.\nLatency on resolver if lookup is real-time. For this reason, we need a periodic crawler so that resolvers do not wait for real-time detection on access.\nIf using Internet Archive, the domain owner (at the present moment) can request the removal of content. We would need the capacity to \u0026ldquo;lock\u0026rdquo; records that are being used as Op Cit redirection archival copies. This requires a further conversation with the Internet Archive.\nPrototype Components/Architecture Registration Proxy and Database (\u0026ldquo;Fleming\u0026rdquo;) The registration proxy implements a pass-through to the deposit API and hosts a relational database of self-healing DOIs (Postgres). It will be hosted at api.labs.crossref.org/deposit/opcit and clients will have to use this endpoint to deposit. Simultaneously, the proxy will:\nDetermine the license status of the incoming item.\nIf the license is open and fulltext is provided, deposit a copy in selected digital preservation archives. Store proof of licensing attestation.\nIn the case of binary files (fulltext PDF), store a hash of the content.\nStore the DOI, binary hash, and all URLs in a relational database under \u0026ldquo;pending\u0026rdquo; state.\nPass through the request to Crossref\u0026rsquo;s content registration system.\nMonitor the result of this request and remove stored data if registration fails.\nRe-registration through Fleming will update existing entries and re-fix their data against content drift at this time.\nSpider (\u0026ldquo;Shelob\u0026rdquo;) A series of components that:\nCheck that \u0026ldquo;pending\u0026rdquo; DOIs have been successfully registered. Remove those that have not and move those that have to \u0026ldquo;active\u0026rdquo; state.\nDereference \u0026ldquo;active\u0026rdquo; DOIs and ensure that we have the most current URL in case updates have gone directly to the live resolver.\nPeriodically crawl URLs in the self-healing database.\nOn HTTP 301 code, update database entry to point to new permanent URL.\nOn HTTP 302 code, follow the temporary redirect expecting the original content.\nOn HTTP 4xx codes, mark the entry as dead.\nOn HTTP 200 code of HTML landing page, parse the page for the presence of the DOI. 
If the DOI is not present, mark the entry as dead.\nResolver Proxy (\u0026ldquo;Hippocrates\u0026rdquo;) Display an interstitial landing page with archival versions and an explanation.\nAt some future point, for active entries, resolve to the stored URL (faster but could be de-synced) or pass the request to the live resolver (requires an extra hop but will always be in-sync with deposit).\nObservability and statistics Metrics we will collect:\nCount of DOIs using Op Cit\nCount of visitors arriving on Op Cit landing pages\nUsage count of each outgoing link/access option\nA daily report will present:\nNewly \u0026ldquo;failed\u0026rdquo; entries that we believe have died\nThese will be checked extensively, particularly at first, to ascertain whether our failure heuristics are valid\nEntries that have recovered\nErrors will be logged and monitored via Grafana.\nDocumentation and Automated Tests Core assumptions and new behaviours of the platform will be documented as part of the prototype.\nAutomated tests will be written, especially for the spider (\u0026ldquo;Shelob\u0026rdquo;), which must handle a diverse variety of real-world situations.\nPrototype Architecture Requirements Postgres RDS for resolution/self-healing DOI data (AWS).\nFastAPI hosting for passthrough proxy (fly.io).\nEC2 hosting for the spider (AWS).\nFastAPI hosting for resolver proxy (fly.io).\nTransition to Production If this prototype garners popular appeal, a transition to production would need to keep some prototype components and rewrite others.\n\u0026ldquo;Fleming\u0026rdquo; would need to be rewritten as a deposit module / integrated with Manifold\u0026rsquo;s (the next-generation system at Crossref) deposit. If this would create too much overhead, it need not be a blocking process in the deposit.\n\u0026ldquo;Shelob\u0026rdquo; would continue to need to run continuously and to scale with the adoption of self-healing DOIs unless one of the other options were used.\nPrototype architecture will be written so that spidering can be distributed between several servers, if required.\n\u0026ldquo;Hippocrates\u0026rdquo; would need to be integrated into the live link resolver. Depending on how a field for a self-healing DOI is embedded in Manifold, this may not need any additional database hits.\nBack Content We also have a database of back content stored by the Internet Archive, mapped to DOIs where they have been able to do so. 
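Looking back at the crawl heuristics listed for the spider ("Shelob") above, a minimal sketch of the status-code handling might look like the following. The state labels and the DOI-in-page check are simplified assumptions for illustration, not the production logic, and binary fulltext (such as a PDF) would be hash-compared rather than string-matched.

```python
import requests

def classify(doi: str, url: str) -> tuple[str, str]:
    """Minimal sketch of Shelob-style link checks: returns (state, current_url)."""
    resp = requests.get(url, allow_redirects=False, timeout=30)

    if resp.status_code == 301:                      # permanent move: record the new URL
        return "active", resp.headers["Location"]
    if resp.status_code == 302:                      # temporary move: follow, expect original content
        resp = requests.get(url, allow_redirects=True, timeout=30)
    if 400 <= resp.status_code < 500:                 # gone or blocked: mark as dead
        return "dead", url
    if resp.status_code == 200 and "text/html" in resp.headers.get("Content-Type", ""):
        # crude presence check; the real heuristics need long-term refinement, as noted above
        return ("active", url) if doi.lower() in resp.text.lower() else ("dead", url)
    return "pending", url                             # anything else: leave for a later pass
```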
This data source could be used to enable self-healing DOIs on all content in this archive.\n", "headings": ["Project Op Cit (Self-Healing DOIs)","Request for Comment","Proposed Prototype Solution","Potential Challenges","Prototype Components/Architecture","Registration Proxy and Database (\u0026ldquo;Fleming\u0026rdquo;)","Spider (\u0026ldquo;Shelob\u0026rdquo;)","Resolver Proxy (\u0026ldquo;Hippocrates\u0026rdquo;)","Observability and statistics","Documentation and Automated Tests","Prototype Architecture Requirements","Transition to Production","Back Content"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/technology/", "title": "Technology", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-research-and-development-releasing-our-tools-from-the-ground-up/", "title": "Crossref Research and Development: Releasing our Tools from the Ground Up", "subtitle":"", "rank": 1, "lastmod": "2023-06-21", "lastmod_ts": 1687305600, "section": "Blog", "tags": [], "description": "This is the first post in a series designed to showcase what we do in the Crossref R\u0026amp;D group, also known as Crossref Labs, which over the last few years has been strengthened, first with Dominika Tkaczyk and Esha Datta, last year with part of Paul Davis’s time, and more recently, yours truly. Research and development are, obviously, crucial for any organization that doesn’t want to stand still. The R\u0026amp;D group builds prototypes, experimental solutions, and data-mining applications that can help us to understand our member base, in the service of future evolution of the organization.", "content": "This is the first post in a series designed to showcase what we do in the Crossref R\u0026amp;D group, also known as Crossref Labs, which over the last few years has been strengthened, first with Dominika Tkaczyk and Esha Datta, last year with part of Paul Davis’s time, and more recently, yours truly. Research and development are, obviously, crucial for any organization that doesn’t want to stand still. The R\u0026amp;D group builds prototypes, experimental solutions, and data-mining applications that can help us to understand our member base, in the service of future evolution of the organization. One of the strategic pillars of Crossref is that we want to contribute to an environment in which the scholarly research community identifies shared problems and co-creates solutions for broad benefit. We do this in all teams through research and engagement with our expanding community.\nFor example, if the metadata team wants to implement a new field in our schema, it helps to have a prototype to show to members. The Labs team would implement such a prototype. If we want to know the answer to a question about the 150m or so metadata records we have – e.g. how many DOIs are duplicates? – it’s the Labs team that will work on this.\nWhen building such prototypes, which can often seem esoteric and one-off, though, it can be easy to believe that there is no way anybody else would re-use our components. At the same time, we find ourselves consistently working with the same infrastructures, re-using different code blocks across many applications. 
One of the tasks I have been working on is to extract these duplicated functions and to get them into external code libraries.\nWhy is this important? As many readers doubtless know, Crossref is committed to The Principles of Open Scholarly Infrastructure. For reasons of insurance, everything we do and newly develop is open source and we want our members to be able to re-use the software that we create. It’s also important because, if we centralize these low-level building blocks, we make it much easier to fix bugs when they occur, which would otherwise be distributed across all of our projects.\nAs a result, Crossref Labs has a series of small code libraries that we have released for various service interactions. We often find ourselves needing to interact with AWS services. Indeed, Crossref’s live systems are in the process of transitioning to running in the cloud, rather than our own data centre. It makes sense, therefore, for prototype Labs systems to run on this infrastructure, too. However, the boto3 library is not terribly Pythonic. As a result, many of our low-level tools interact with AWS. These include:\nCLAWS: the Crossref Labs Amazon Web Services toolkit. The CLAWS library gives speedy and Pythonic access to functions that we use again and again. This includes downloading files from and pushing data to S3 buckets (often in parallel/asynchronously), fetching secrets from AWS Secrets Manager, generating pre-signed URLs, and more. Longsight: A range of common logging functions for the observability of Python AWS cloud applications. Less mature than CLAWS, this is the starting point for observability across Labs applications. It supports running in AWS Lambda function contexts or pushing your logs to AWS Cloudwatch from anywhere else. It also supports logging metrics in structured forms. Crucially, the logs are all converted into machine-readable JSON format. This allows us to export the metrics into Grafana dashboards to visualize failure and performance. Distrunner: decentralized data processing on AWS services. Easily the least mature and experimental of these libraries, distrunner is one of the ways that we distribute the workloads of our recurrent data processing. A number of the Labs projects require us to run recurrent data-processing tasks. For instance, my colleague Dominika Tkaczyk has developed the sampling framework that is regenerated once per week. We use Apache Airflow (and, specifically, Amazon Managed Workflows for Apache Airflow) to host these periodic tasks. This is useful because it gives us quick, visual oversight if tasks fail. However, the Airflow worker instances on AWS are quite severely underpowered and unsuitable for large in-memory activities. Hence, the sampling framework fires up a Spark instance for its processing. Often, though, we do not need the parallelization of Spark and just want to be able to run a generic Python script in a more powerful environment. That’s what distrunner is designed to do. The current version uses Coiled but this may change in the future. While these tools will be useful to nobody except programmers – and this has been quite a technical post – there is a broader philosophical point to be made about this approach, in which everything is available for re-use, “from the ground up”. The point is: we also try, in Labs and in the process of “R\u0026amp;Ding”, to work without privileged access. That is: I don’t get “inside” access to a database that isn’t accessible to external users. 
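To make the point about boto3 boilerplate concrete, here is the kind of raw call that a toolkit like CLAWS wraps. This is plain boto3, not the CLAWS API itself; the bucket, key, and secret names are invented, and the calls assume AWS credentials are already configured in the environment.

```python
import boto3

# Plain boto3 calls of the kind a Labs toolkit would wrap; the resource names are hypothetical.
s3 = boto3.client("s3")
secrets = boto3.client("secretsmanager")

# Generate a pre-signed URL for a stored object, valid for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-labs-bucket", "Key": "samples/latest.json"},
    ExpiresIn=3600,
)

# Fetch a credential from AWS Secrets Manager.
api_key = secrets.get_secret_value(SecretId="example/labs/api-key")["SecretString"]

print(url)
```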
I have to work with the same APIs and systems as would an end-user of our services. This means that, when we develop internal libraries, it’s worth releasing them. Because they use systems that are accessible to any of our users.\nI should also say that our openness is more than unidirectional. While we are putting a lot of effort into ensuring that everything new we put out is openly accessible, we are also open to contributions coming in. If we’ve built something and you make changes or improve it, please do get in touch or submit a pull request. Openness has to work both ways if projects are truly to be used by the community.\nFuture posts – coming soon! – will introduce some of the technologies and projects that we have been building atop this infrastructure. This includes a Labs API system; new functionality to retrieve unpaginated datasets of whole API routes; a study of the preservation status of DOI-assigned content; and a mechanism for modeling new metadata fields.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/presevation/", "title": "Presevation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/committees/audit/", "title": "Audit committee", "subtitle":"", "rank": 2, "lastmod": "2023-06-01", "lastmod_ts": 1685577600, "section": "Committees", "tags": [], "description": "The Audit Committee is made up of three board members who aren\u0026rsquo;t officers. They oversee our accounting and financial reporting processes and the audit of our financial statements. The committee also appoints an independent auditor, reviews the results of the audit and oversees the compliance with any conflict of interest or whistleblower policies. You can see our financial statements in our annual report that we produce in the November of each year.", "content": "The Audit Committee is made up of three board members who aren\u0026rsquo;t officers. They oversee our accounting and financial reporting processes and the audit of our financial statements. The committee also appoints an independent auditor, reviews the results of the audit and oversees the compliance with any conflict of interest or whistleblower policies. You can see our financial statements in our annual report that we produce in the November of each year.\n2025 Audit Committee members Staff facilitator: Lucy Ofiesh\nAshley Towne, University of Chicago Press (Chair) committee in formation ", "headings": ["2025 Audit Committee members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/our-annual-call-for-board-nominations/", "title": "Our annual call for board nominations", "subtitle":"", "rank": 1, "lastmod": "2023-05-30", "lastmod_ts": 1685404800, "section": "Blog", "tags": [], "description": "The Crossref Nominating Committee invites expressions of interest to join the Board of Directors of Crossref for the term starting in March 2024. 
The committee will gather responses from those interested and create the slate of candidates that our members will vote on in an election in September.\nExpressions of interest will be due Monday, June 26th, 2023.\nAbout the board elections The board is elected through the “one member, one vote” policy wherein every member organization of Crossref has a single vote to elect representatives to the Crossref board.", "content": "The Crossref Nominating Committee invites expressions of interest to join the Board of Directors of Crossref for the term starting in March 2024. The committee will gather responses from those interested and create the slate of candidates that our members will vote on in an election in September.\nExpressions of interest will be due Monday, June 26th, 2023.\nAbout the board elections The board is elected through the “one member, one vote” policy wherein every member organization of Crossref has a single vote to elect representatives to the Crossref board. Board terms are for three years; this year, seven seats are open for election.\nThe board maintains a balance of seats, with eight seats for smaller members and eight seats for larger members (based on total revenue to Crossref). This is to ensure that the diversity of experiences and perspectives of the scholarly community are represented in decisions made at Crossref.\nThis year we will elect two of the larger member seats (membership tiers $3,900 and above) and five of the smaller member seats (membership tiers $1,650 and below). You don’t need to specify which seat you are applying for. We will provide that information to the nominating committee.\nThe election takes place online, and voting will open in September. Election results will be shared at the annual meeting on October 31st. New members will commence their term in March 2024.\nAbout the Nominating Committee The Nominating Committee reviews the expressions of interest and selects a slate of candidates for election. The slate put forward will exceed the total number of open seats. The committee considers the statements of interest, organizational size, geography, and experience.\n2023 Nominating Committee:\nAaron Wood, American Psychological Association, chair* Oscar Donde, Pan Africa Science Journal* David Haber, American Society for Microbiology Rose L’Huillier, Elsevier* Marie Souliere, Frontiers (*) indicates Crossref board member\nWhat does the committee look for The committee looks for skills and experience that will complement the rest of the board. Candidates from countries and regions that are not currently reflected on the board are strongly encouraged to apply. Successful candidates often have some or all of these characteristics:\ndemonstrate a commitment to or understanding of our strategic agenda or the Principles of Open Scholarly Infrastructure; have expertise that may be underrepresented on the board currently; hold senior/director-level positions in their organizations; have experience with governance or community involvement; represent member organizations that are active in the scholarly communications ecosystem; demonstrate metadata best practices as shown in the member’s participation report Board roles and responsibilities Crossref’s services provide a central infrastructure to scholarly communications. Crossref’s board helps shape the future of our services and, by extension, impacts the broader scholarly ecosystem. 
We are looking for board members to contribute their experience and perspective.\nThe role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. The board is representative of our membership base and guides the staff leadership team on trends affecting scholarly communications. The board sets strategic directions for the organization while also providing oversight into policy changes and implementation. Board members have a fiduciary responsibility to ensure sound operations. Board members do this by attending board meetings, as well as joining more specific board committees.\nWho can apply to join the board? Any active member of Crossref can apply to join the board. Crossref membership is open to organizations that produce content, such as academic presses, commercial publishers, standards organizations, and research funders.\nWhat is expected of board members? Board members attend three meetings each year that typically take place in March, July, and November. Meetings have taken place in various international locations, and travel support is provided when needed. March and November board meetings are held virtually, and all committee meetings take place virtually. Each board member should sit on at least one Crossref committee. Care is taken to accommodate the wide range of timezones in which our board members live.\nWhile the expressions of interest are specific to an individual, the seat that is elected to the board belongs to the member organization. The primary board member also names an alternate who may attend meetings if the primary board member cannot. There is no personal financial obligation to sit on the board. The member organization must remain in good standing.\nBoard members are expected to be comfortable assuming the responsibilities listed above and to prepare and participate in board meeting discussions.\nHow to apply Please click here to submit your expression of interest. We ask for a brief statement about how your organization could enhance the Crossref board and a brief personal statement about your interest and experience with Crossref.\nPlease contact me with any questions at lofiesh@crossref.org\n", "headings": ["About the board elections","About the Nominating Committee","What does the committee look for","Board roles and responsibilities","Who can apply to join the board?","What is expected of board members?","How to apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-connects-the-global-community-summary-of-our-community-update-2023/", "title": "Metadata connects the global community – summary of our Community update 2023", "subtitle":"", "rank": 1, "lastmod": "2023-05-12", "lastmod_ts": 1683849600, "section": "Blog", "tags": [], "description": "We were delighted to engage with over 200 community members in our latest Community update calls. We aimed to present a diverse selection of highlights on our progress and discuss your questions about participating in the Research Nexus. 
For those who didn’t get a chance to join us, I’ll briefly summarise the content of the sessions here and I invite you to join the conversations on the Community Forum.\nYou can take a look at the slides here and the recordings of the calls are available here.", "content": "We were delighted to engage with over 200 community members in our latest Community update calls. We aimed to present a diverse selection of highlights on our progress and discuss your questions about participating in the Research Nexus. For those who didn’t get a chance to join us, I’ll briefly summarise the content of the sessions here and I invite you to join the conversations on the Community Forum.\nYou can take a look at the slides here and the recordings of the calls are available here.\nTL;DR The membership is growing, including that in the GEM programme countries, and we focus on adding new Sponsors in areas where we have insufficient coverage to support prospective members The grant registration form is available for funders who don’t use XML, and we’re working to expand to other record types The preview of the Relationship API endpoint is available – start exploring relationships between different records and record types, from citations to funding, and more Usefulness of metadata records for inferring integrity of the content or publisher relies on all members of the community contributing to this effort. Crossref will continue to enrich our schema to capture new types of relevant information and to promote the best metadata practices. Cited-by is now open for everyone to use 🎉 – no need for additional authorisation steps – Registering your references will have even greater impact now! The Labs participation report is available and it’s been a hit. Please note that this tool is still underdevelopment – new functionalities can be added but there might also be bugs that we are yet to resolve, so don’t hold off with feedback. We’ve received close to 1,000 responses in our first ever Metadata Priorities Survey. It’s still open until 18th of May and we encourage all members to take it. So far we’ve learnt that majority of our respondents are keen to deposit as much metadata as possible – and some would like to register more than we currently enable. Metadata completeness and integrity A key theme of the call was encouraging greater participation in the Research Nexus and the importance of complete metadata. One particular benefit of a rich and transparent metadata network is the opportunity to infer judgments on the integrity of the scholarly record (ISR). Amanda Bartell, Head of Member Experience, highlighted that the community agrees that availability of information about relationships between research outputs, institutions and other elements of the scholarly ecosystem together provide essential context for deciding about trustworthiness of organisations and their published content. Conversely, it can make it harder for parties to pass off information as trustworthy when that context is missing. Amanda summarised community feedback related to Crossref’s role in the integrity of the scholarly record in her recent blog post.\nOur members can contribute to that rich network of relationships by curating their metadata and providing contextual information – especially the highly sought for elements highlighted in the presentation.\nOur community Since LIVE22, we have had 1,130 new members join us. That includes 51 organisations from countries included in our Global Equitable Membership (GEM) programme. 
You can find out more in the latest news about the programme on our Community Forum from Susan Collins, Community Engagement Manager.\nWe see great opportunities in enriching our metadata corpus with works carried out in some of the least economically-advantaged regions of the world. Registering their content with us will increase its discoverability for global scholarship, while adding important relationships into the Research Nexus. We’re glad to see new members joining us under the auspices of the Global Equitable Membership (GEM) programme, and we’re reaching out to existing and new communities with our Ambassadors to encourage more metadata registrations.\nOur Sponsors and Ambassadors, alongside our Outreach and Membership Team, support members to participate as effectively as possible in the Research Nexus. We’re delighted to see both programmes growing, with eight new Sponsors and seven new Ambassadors having joined us since October.\nSimultaneously, we’re working with like-minded organisations to provide useful resources for the growing and changing scholarly communications community. The recent launch of The PLACE, the online forum for new publishers seeking to learn about best practices in the industry, is another way in which we hope to support wider participation in the Research Nexus and promote open and sustainable practices.\nWith our growing community, there’s always interest in the basics of working with us, so we have planned a webinar later this month to provide an overview of Crossref – including member benefits and obligations, and how to use our services.\nService news References metadata is essential for connecting works with one another. It enables the provision of citation information and aids discoverability for researchers, as well as assessment and evaluation for institutions and funders. It’s almost a year since all references metadata deposited with Crossref was made openly available. At the moment, 52.0% of journal articles and 44.5% of all works have references. Martyn Rittman, Product Manager for the Cited-by service, says: “It’s not bad, but we can do better!”\nWith three different mechanisms available to our members, we hope that everyone has a suitable tool to fit their needs. You can register references with XML via HTTPS POST (structured or unstructured), with the dedicated OJS Plugin if you’re an OJS user, or with our Simple Text Query (unstructured text) – the latter is especially relevant to Web Deposit Form users. We find that journal articles with deposited references seem to be cited more than those without, and by a lot: 21.8 vs. 6.1 incoming citations on average!\nWe have now made our Cited-by service open to all. To realise its full benefit, it is essential to register your references.\nThere were concerns in the community about references ‘lost’ as part of supplementary material that may not be registered in its own right. Colleagues advised that if the data has an identifier, such as a DataCite DOI, you can add a relationship to say that it\u0026rsquo;s supplementary material (see https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/markup-guide-metadata-segments/relationships/) or add them as a reference. Martyn is curious to hear from others in the community on this topic. There is an increasing focus on data citations and we\u0026rsquo;d like to see how we can better support them.\nMany members have questions related to plans for replacing Metadata Manager. 
Rachael Lammey, Director of Product, explained that we’re working on broadening our new Grant Registration Form to include more record types over the course of 2023. It has a few advantages over the current Web Deposit Form. It allows you to save a local copy when you first register a piece of content. It makes updating your records easier: you can drop that file onto the form to populate the metadata, update it, and redeposit rather than having to fill out the information all over again. We have also started adding automatic lookup fields to help users populate affiliation information more accurately using ROR IDs. We will keep you posted on progress with new developments and ask for beta testers for new record types as they are added.\nMetadata about an individual work is not as useful as the opportunity to interrogate the relationships between works and within the global scholarly output. [The preview of the Relationship API endpoint](https://0-community-crossref-org.libus.csd.mu.edu/t/relationships-are-here/3523), modest as it is at this stage – with only 1% of our relationship metadata included (or 10 million relationships) – offers a powerful demonstration of the way in which metadata contextualises research outputs within the entangled network of ever-progressing scholarship.\nWe’ve also mentioned the recent transition of our website to GitLab, which allows everyone to contribute by creating merge requests and issues. Through this open collaboration, which supports our commitment to meet the Principles of Open Scholarly Infrastructure, we aim to cultivate a sense of ownership among contributors and make our information and documentation more useful and efficient for everyone.\nLabs participation report Organisations who wish to keep a close eye on their metadata – to understand what they deposit, how that compares with other members, and what could be improved – can start using our Labs participation reports. We encourage you to test this not-yet-finished tool and let us know your feedback. Participants at our updates found it very informative, with the opportunity to preview the contents of recent deposits, see participation breakdowns by prefix, and improved data visualisation. We had questions about how data citation counts are generated in the report. Martyn Rittman explained: “This is a prototype and that\u0026rsquo;s one of the issues we need to tidy up! We know via Event Data and our Scholix endpoint what is a dataset, but that hasn\u0026rsquo;t yet been incorporated to the Labs Reports”. There was also a suggestion of enabling export of simple lists of all a member’s DOIs with their respective URLs from the report, and the team might look into that. That said, lists of DOIs missing specific metadata types are already downloadable.\nTo learn more about the reports, try them out, and provide feedback, please take a look at the information shared recently by Paul Davis, Tech Support Specialist \u0026amp; R\u0026amp;D Support Analyst.\nMetadata priorities Patricia Feeney, Head of Metadata, shared some updates about the current metadata corpus registered with Crossref, and some recent trends.\nShe then went on to summarise some preliminary results of our ongoing metadata priorities survey, which all members are encouraged to take part in by 18th of May. So far, we’ve received close to 1,000 responses. We’ve learnt that the majority of our respondents are keen to deposit as much metadata as possible – and some would like to register more than we currently enable. 
Close to half of the respondents who did not express an interest in sharing all metadata are still interested in learning more about the value of their metadata.\nThe survey asks our members about their preferences for developing any of the potential projects under consideration:\nContributor IDs Contributor roles/ CRediT Alternate names Multilingual metadata Expand abstract support Citation types (content) Conference event IDs It appears that support for citation types is the strongest among our respondents, while very polarised views have been shared about multilingual metadata and expanding support for abstracts. Among other suggestions, we received a lot of comments related to keywords. Overall, support for all projects was strong.\nThe verdicts are not in yet – there is still time to respond to the survey and make your metadata priorities known!\nThank you and keep in touch With much of the content shared ahead of time through our Community Forum, the sessions were bubbling with questions and valuable comments from the community. We look forward to continuing the conversations asynchronously on the Community Forum. Please don’t hesitate to share your thoughts and ask further questions. We’d also love to hear suggestions for topics of most interest for our future updates.\nThe more complete the metadata we collect together, the more connections in the ecosystem become transparent. This creates opportunities for discovery and collaborations, and greater insights about the scholarly process. Our community is growing in numbers, diversity, and technical capacity for building the Research Nexus together. We welcome your questions and suggestions for initiatives that support the fullest participation possible.\n", "headings": ["TL;DR","Metadata completeness and integrity","Our community","Service news","Labs participation report","Metadata priorities","Thank you and keep in touch"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2023-public-data-file-now-available-with-new-and-improved-retrieval-options/", "title": "2023 public data file now available with new and improved retrieval options", "subtitle":"", "rank": 1, "lastmod": "2023-05-02", "lastmod_ts": 1682985600, "section": "Blog", "tags": [], "description": "We have some exciting news for fans of big batches of metadata: this year’s public data file is now available. Like in years past, we’ve wrapped up all of our metadata records into a single download for those who want to get started using all Crossref metadata records.\nWe’ve once again made this year’s public data file available via Academic Torrents, and in response to some feedback we’ve received from public data file users, we’ve taken a few additional steps to make accessing this 185 gb file a little easier.", "content": "We have some exciting news for fans of big batches of metadata: this year’s public data file is now available. 
Like in years past, we’ve wrapped up all of our metadata records into a single download for those who want to get started using all Crossref metadata records.\nWe’ve once again made this year’s public data file available via Academic Torrents, and in response to some feedback we’ve received from public data file users, we’ve taken a few additional steps to make accessing this 185 gb file a little easier.\nFirst, we’re proactively hosting seeds in a few locations around the world to improve torrent download performance in terms of both speed and reliability.\nAnd second, we’ve added an option to download this year’s public data file directly from Amazon S3 for a small transaction fee paid by the recipient, bypassing the need to use the torrent altogether. The fee just covers the AWS cost of the download. Instructions for downloading the public data file via the \u0026ldquo;Requester Pays\u0026rdquo; method are available on the \u0026ldquo;Tips for working with Crossref public data files and Plus snapshots\u0026rdquo; page.\nThe 2023 public data file features over 140 million metadata records deposited with Crossref through the end of March 2023, including over 76,000 grant records. Because Crossref metadata is always openly available, you can use our API to keep your local copy of our metadata corpus up to date with new and updated records.\nIn previous years, closed and limited references were removed from the public data file. Since we updated our membership terms to make all deposited references open in 2022, the 2023 public data file for the first time includes all references deposited with us.\nWe hope you find this public data file useful. Should you have any questions about how to access or use the file, please see the tips below, or bring your questions to our community forum.\nTips for using the torrent and retrieving incremental updates Use the public data file if you want all Crossref metadata records. Everyone is welcome to the metadata, but it will be much faster for you and much easier on our APIs to get so many records in one file. Here are some tips on how to work with the file.\nUse the REST API to incrementally add new and updated records once you have the initial file. Here is how to get started (and avoid getting blocked in your enthusiasm to use all this great metadata!).\nWhile bibliographic metadata is generally required, because lots of metadata is optional, records will vary in quality and completeness.\nQuestions, comments, and feedback are welcome at support@crossref.org.\n", "headings": ["Tips for using the torrent and retrieving incremental updates"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/similarity-check-look-out-for-a-refreshed-interface-and-improvements-for-ithenticate-v2-account-administrators/", "title": "Similarity Check: look out for a refreshed interface and improvements for iThenticate v2 account administrators", "subtitle":"", "rank": 1, "lastmod": "2023-05-01", "lastmod_ts": 1682899200, "section": "Blog", "tags": [], "description": "In 2022, we flagged up some changes to Similarity Check, which were taking place in v2 of Turnitin\u0026rsquo;s iThenticate tool used by members participating in the service. We noted that further enhancements were planned, and want to highlight some changes that are coming very soon. 
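Picking up the tip above about keeping a local copy current after downloading the public data file, here is a minimal sketch of cursor-based paging over recently indexed records via the REST API. The date, row count, and contact address are placeholders; a real harvester would persist the cursor, respect rate limits, and upsert records into its own store.

```python
import requests

# Minimal sketch: page through records indexed since a given date using deep-paging cursors.
# The filter date, rows value, and mailto address below are placeholders.
BASE = "https://api.crossref.org/works"
params = {
    "filter": "from-index-date:2023-04-01",
    "rows": 1000,
    "cursor": "*",
    "mailto": "you@example.org",
}

while True:
    message = requests.get(BASE, params=params, timeout=60).json()["message"]
    items = message["items"]
    if not items:
        break
    for record in items:
        print(record["DOI"])           # replace with an upsert into your local copy
    params["cursor"] = message["next-cursor"]  # continue from where this page left off
```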
These changes will affect functionality that is used by account administrators, and doesn\u0026rsquo;t affect the Similarity Reports themselves.\nFrom Wednesday 3 May 2023, administrators of iThenticate v2 accounts will notice some changes to the interface and improvements to the Users, Groups, Integrations, Statistics and Paper Lookup sections.", "content": "In 2022, we flagged up some changes to Similarity Check, which were taking place in v2 of Turnitin\u0026rsquo;s iThenticate tool used by members participating in the service. We noted that further enhancements were planned, and want to highlight some changes that are coming very soon. These changes will affect functionality that is used by account administrators, and doesn\u0026rsquo;t affect the Similarity Reports themselves.\nFrom Wednesday 3 May 2023, administrators of iThenticate v2 accounts will notice some changes to the interface and improvements to the Users, Groups, Integrations, Statistics and Paper Lookup sections.\nLogging in iThenticate v2 account administrators and browser users will see a new login page when logging in to iThenticate v2:\nA refreshed interface Once logged in to iThenticate v2, account administrators will see an updated design, with improved notifications to let them know whether a task/action has been successfully completed or not.\nUsers There will be improvements to the user management system for account administrators, including a much clearer navigation menu for managing active, pending and deactivated users.\nThere will also be a filtering option on the Users page to search for active, pending and deactivated users by first name, last name, email address, group and date added. In addition coloured labels will be introduced to easily identify the level of access (or \u0026lsquo;Role\u0026rsquo;) for each user.\nAn improved bulk user import process will be available, with clearer guidance on any issues that may arise during the upload. This new development will also include new screens for adding and editing users with more notifications to help prevent mistakes.\nIntegrations For account administrators managing peer review management system integrations and needing to generate API keys, the Integrations page will be improved to make copying API keys simpler.\nStatistics iThenticate v2 administrators will also notice some improvements to the Statistics page. Usage data should load faster and will be sortable by user group. They will also be able to generate large usage reports of over 100k submissions.\nPaper lookup The Paper lookup will allow iThenticate v2 account administrators to find submissions that have been made from any integration connected to their iThenticate v2 account. They can be found by searching the paper ID (or oid number) of the submission.\nPlease note: the ability to search for submissions by the user\u0026rsquo;s name is available for manuscripts submitted via the iThenticate v2 website only and not for papers submitted via an integration.\nNew password requirements To improve the security of users\u0026rsquo; accounts, new password requirements will be introduced, including a minimum of 8 symbols, 1 special symbol, 1 upper case letter, and 1 number.\nNext in iThenticate v2 Turnitin, who produce iThenticate, are currently working on a number of new features and developments including an improved similarity report, paraphrase and AI writing detection. 
A detailed timeline is not yet available but we\u0026rsquo;ll be updating you on these new developments in the coming months.\n✏️ Do get in touch via support@crossref.org if you have any questions about iThenticate v1 or v2 or start a discussion by commenting on this post below.\n", "headings": ["Logging in","A refreshed interface","Users","Integrations","Statistics","Paper lookup","New password requirements","Next in iThenticate v2"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/isr-part-four-working-together-as-a-community-to-preserve-the-integrity-of-the-scholarly-record/", "title": "ISR part four: Working together as a community to preserve the integrity of the scholarly record", "subtitle":"", "rank": 1, "lastmod": "2023-04-26", "lastmod_ts": 1682467200, "section": "Blog", "tags": [], "description": "We\u0026rsquo;ve been spending some time speaking to the community about our role in research integrity, and particularly the integrity of the scholarly record. In this blog, we\u0026rsquo;ll be sharing what we\u0026rsquo;ve discovered, and what we\u0026rsquo;ve been up to in this area.\nWe’ve discussed in our previous posts in the “Integrity of the Scholarly Record (ISR)” series that the infrastructure Crossref builds and operates (together with our partners and integrators) captures and preserves the scholarly record, making it openly available for humans and machines through metadata and relationships about all research activity.", "content": "We\u0026rsquo;ve been spending some time speaking to the community about our role in research integrity, and particularly the integrity of the scholarly record. In this blog, we\u0026rsquo;ll be sharing what we\u0026rsquo;ve discovered, and what we\u0026rsquo;ve been up to in this area.\nWe’ve discussed in our previous posts in the “Integrity of the Scholarly Record (ISR)” series that the infrastructure Crossref builds and operates (together with our partners and integrators) captures and preserves the scholarly record, making it openly available for humans and machines through metadata and relationships about all research activity. This Research Nexus makes it easier and faster for everyone involved in research performance, management, and communications to understand information in context and make decisions about the trustworthiness of organizations and their published research outputs. Conversely, it can make it harder for parties to pass off information as trustworthy when the information doesn\u0026rsquo;t include that context.\nThe community needs open scholarly infrastructure that can adapt to the changes in scholarly research and communications, and we’ve been changing and adapting already by building on the concept of the scholarly record with our vision:\nLike others, we envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nWe don’t assess the quality of the work that our members register, and we keep the barriers to membership deliberately low to ensure that we are capturing as much of the scholarly record as possible and encouraging best practice. We are careful to talk about Crossref’s specific role being with the Integrity of the Scholarly Record (ISR), and not the broader area of ‘research integrity’ (i.e. 
the integrity of the research process or content itself).\nBut there are many challenges and threats to research integrity and the integrity of the scholarly record, and there are tradeoffs with keeping the barriers to membership low. With that in mind, we have been dedicating more time to speaking with the community to explore what part we are and should in future play to help the community assess and improve trustworthiness in the scholarly record. We also want to work out where we can make use of our neutral, central role to convene different groups in scholarly communications to work together on these challenges.\nA revealing afternoon in Frankfurt Our starting point was a roundtable discussion in Frankfurt in October 2022. We organized it to coincide with the Frankfurt Book Fair, but the invited participants were from a wider spectrum than just publishers. The 40 invited participants represented editors, funders, research integrity professionals at publishers, representatives of ministries of science, and other partner organizations such as OASPA, COPE, STEM and DOAJ.\nThis half-day session enabled us to sense-check our thinking with the community and get input into whether our position is the best one for their needs.\nEd Pentz introduced the session by reminding participants that integrity is key to Crossref’s mission and is the basis of the shared Research Nexus vision. Amanda (that’s me) talked through our current membership processes, recent membership trends, and why wider participation is key and also the sort of questions the community comes to Crossref to solve (eg title ownership disputes). And finally, Ginny Hendricks talked through the specific services and metadata that Crossref has already developed to support the community as signals of trustworthiness, and introduced some new activities and ideas.\nYou can check out the slide deck and for more background, read our previous posts in the ISR series.\nParticipants then split into small groups representing a mix of communities, and we asked them to discuss three key questions:\nIs Crossref’s role what you expected? What surprised you? What are we missing? Are you aware of Crossref services? What are the barriers to more uptake? What are the challenges and opportunities? What more could Crossref or its members do? After discussion, each small group fed back to the room, and we followed up with a whole group discussion, before ending the day with a post-it note exercise for what Crossref should start doing, stop doing, and continue doing.\nHere\u0026rsquo;s what we learned.\nThe importance of whole community involvement in research integrity and ISR The need for all parts of the community to come together to solve the problems of research integrity came through loud and clear - there is no single group that can solve this problem on its own.\nPublishers expressed frustration that responsibility for research integrity has been placed seemingly solely in their hands when institutions and funders can “unwittingly incentivise bad behaviour”. But it was clear that funders are just as concerned with research integrity issues, with many having made a dedicated trip for the roundtable. There were comments that bringing publishers and funders together around these issues was a rare but important opportunity, and there were calls for this to be an annual event. 
Both funders and publishers called for more involvement from and inclusion of research institutions in the discussion.\nThe group agreed that Crossref’s main focus should continue to be capturing and sharing the scholarly record, and that metadata and relationships are key for attribution, evidence, and provenance. One participant commented that “you can’t make open science work unless the metadata is complete” and that this would only happen with efforts throughout the community. Accurate and complete metadata needs to be:\npushed for by funders and institutions (through advocacy and policy) provided by the authors and other contributors collated, curated, and registered by the publishers and repositories collected, matched, (sometimes cleansed), and distributed by Crossref. (and we would add “prioritised by all who want to support open infrastructure over commercial alternatives”) Interestingly, this echoes the ‘metadata personas’ output of the Metadata 20/20 initiative which defined roles in the community’s collective metadata effort:\nMetadata Creators: providing descriptive information (metadata) about research and scholarly objects. Metadata Curators: classifying, normalising, and standardising this descriptive information to increase its value as a resource. Metadata Custodians: storing and maintaining this descriptive information and making it available for consumers. Metadata Consumers: knowingly or unknowingly using the descriptive information to find, discover, connect, cite, and assess research objects. Importance of whole-publisher involvement A few participants, particularly those in editorial or integrity roles at publishing organisations, had not previously made the connection that metadata could be important signals of integrity. This highlighted a key problem - working with Crossref is seen by publishers as a technical/production workflow issue, and so knowledge of the benefits of metadata can be siloed within those teams. Crossref needs to reach out to editorial and research integrity teams to explain that good metadata isn’t just an end in itself and reinforce the impact it has on research integrity. This buy-in from across publisher organisations is vital.\nWe’re currently recruiting a Community Engagement Manager with editorial or research integrity experience to dedicate time to this area, to advocate for richer metadata within the editorial community, and progress this important conversation.\nAgreement of the importance of metadata but an acknowledgment that this brings extra cost Most participants agreed that rich metadata and relationships provide a core tool in establishing and protecting integrity. But they also acknowledged that collecting and registering more metadata often comes with an extra cost - whether that’s from system changes or just extra staff time. This is particularly true where publishers are working with third-party platforms and suppliers where there may be additional costs for adding fields and functionality to collect more metadata and register it with Crossref. 
Where knowledge of metadata is siloed in technical and production teams, and the wider benefits aren’t acknowledged, it can be hard to get internal buy-in for these extra costs and efforts.\nThe Frankfurt group also pointed out that the benefits of more comprehensive metadata (and what this means for ISR) are spread across the research ecosystem, but it is the publisher that usually bears the costs.\nNeed to define which metadata elements are trust signals and make it easier for the community to provide and access them Through the course of the discussion, various elements were determined to be important to capture as “trust signals” and to identify relationships such as for retractions, conferences, reviewers, data, and when Crossref membership has been revoked for cause. We need to spend time identifying and prioritising these so that our members can do the same.\nWe need to make it easier for smaller, less technically-resourced members to provide this metadata, both through our tools and our documentation, as “doing this work can be very geeky and the documentation isn’t easy to understand as a layperson”.\nThere was also a discussion about where the metadata comes from - should community members be able to contribute metadata and assertions to other members’ records? If the provenance is captured then yes.\nOnce the metadata is captured, there remain challenges for users in where to start with the 145 million Crossref records. The groups asked Crossref to make it easier for community members to understand and use these records to make informed decisions, including by creating and sharing sample queries, libraries, and case studies.\nWe’re currently recruiting a Technical Community Manager to help improve the support we provide in this area to API users, service providers, and other metadata integrators .\nThe importance of retractions/corrections information There was a lot of discussion about retractions and their importance as trust indicators. The group was surprised by how few retractions are currently registered with Crossref through Crossmark (12k). There was a lot of discussion around why Crossmark isn’t currently being adopted, and interest in taking this forward.\nThis needs to be a focus for Crossref, to encourage members to register retractions, corrections, and updates, and to make it easier for smaller publishers. There are new and emerging publishers who really want tools to help them demonstrate the legitimacy of their research, and an easy way for them to record corrections and retractions is key.\nIn their paper Towards a connected and dynamic scholarly record of updates, corrections, and retractions (September 17th, 2022), Ginny Hendricks, Rachael Lammey, and Martyn Rittman discuss how retraction information could be more effectively used - for example, letting a preprint reader know that the resulting article has been retracted, or letting the author of an article know the data that they’ve based their work on has been withdrawn.\nCollecting the information is just the start - cascading retraction information throughout the research ecosystem is the main goal, and Crossref plays a central role here. 
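(As a hedged illustration of how that cascade could start from the open metadata: the sketch below polls the public REST API for recently indexed retraction notices and lists the DOIs they update. The update-type filter and the update-to field are used here as we understand them from the public API; an integration should confirm the exact field names against the current documentation.)

```python
# Illustrative sketch only: surface recently registered retraction notices and
# the DOIs they update, using the public Crossref REST API. The update-type
# filter and the "update-to" field on work records are assumptions based on
# the public API docs; verify them before relying on this in a real workflow.
import requests

API = "https://api.crossref.org/works"

def recent_retraction_notices(rows=10):
    resp = requests.get(
        API,
        params={"filter": "update-type:retraction",
                "sort": "indexed", "order": "desc", "rows": rows},
        timeout=60,
    )
    resp.raise_for_status()
    for notice in resp.json()["message"]["items"]:
        for update in notice.get("update-to", []):
            # Each entry names a DOI that this notice retracts/updates
            print(f"{update.get('DOI')} is updated by {notice['DOI']}")

recent_retraction_notices()
```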
As noted in the Information Quality Lab’s project Reducing the inadvertent spread of retracted science: Shaping a research and implementation agenda, “Many retracted papers are not marked as retracted on publisher and aggregator sites, and retracted articles may still be found in readers’ PDF libraries, including in reference management systems such as Zotero, EndNote, and Mendeley”.\nIt’s particularly important that this information is fed back to funders and institutions, and the group discussed having push notifications to these audiences for retractions. Some funders even employ staff members whose main purpose is to identify retractions.\nIt was pointed out that there may be good sources of retraction information (such as Retraction Watch) that Crossref could incorporate and match in our metadata.\nGaps in ‘ownership’, and Crossref’s role The group discussed the many gaps in ownership for elements of research integrity, and some groups wondered if Crossref should actually change our approach and take on more responsibility for vetting content. However, after discussion, the group mostly agreed that this would mean a change of mission (and more staff) for Crossref and potentially limit global participation, thus making the metadata corpus less useful. Crossref should provide the widest possible metadata in an easy-to-consume format, and “other organisations can provide the verification layer”.\nIt was acknowledged that it would be easy for Crossref to get overwhelmed, so we ended the day by discussing not only what we should start doing, but also what we should stop doing. Unsurprisingly, there was a lot more to continue or start doing than stop doing!\nHowever, the fact remains that there are gaps in ownership - for example, there is no central arbiter of who ‘owns’ a journal. Also, where do you go if you have a problem with a journal? Often the Committee on Publication Ethics (COPE) is seen as a solution, but they can’t solve this problem alone - it needs a coordinated effort from funders, institutions, publishers, and other partner organisations such as the Open Access Scholarly Publishing Association (OASPA), the Directory of Opena Access Journals (DOAJ), and like-minded organisations.\nMany noted that Crossref is well-positioned to convene horizontal multi-stakeholder discussions to start to find solutions.\nWe also know that there are other industry initiatives aimed at supporting this work. The STM Association’s work on an Integrity Hub is gathering pace and aims to provide, among other things ‘a cloud-based environment for publishers to check submitted articles for research integrity issues’.\nWhat happened next? Turns out, it really is all about relationships… Since this meeting in Frankfurt last October, we’ve been focusing on relationships - thinking about how we capture them in our metadata, and working in partnership with other organisations to bolster our support for ISR.\nThe rest of this blog post highlights some of the activities underway:\nIncreasing participation in Crossref In January 2023, we launched our new GEM Program, which offers relief from fees for members in the least economically-advantaged countries in the world. 
By opening up participation even further, we aim to extend the corpus of open metadata, giving opportunities for more connections, more context, and more relationships.\nSupporting members in meeting best practices ISR blog 2 explained more about how we help new members become “good Crossref citizens” with automated onboarding emails, extensive documentation, events and webinars, and help from our support team, Ambassadors, and other members in our Community Forum.\nWe’ve recently joined forces with COPE, DOAJ, and OASPA to create a new online public forum for organisations interested in adopting best practices in scholarly publishing. At the Publishers Learning And Community Exchange or The PLACE, new scholarly publishers can access information from multiple agencies in one place, ask questions of the experts, and join conversations with each other. Do take a look!\nBeing clearer on the impact of better metadata As discussed earlier, better metadata can sometimes bring extra costs, and it’s helpful to understand the impact of this investment. We know from our ongoing outreach work that it’s difficult for our members to keep hearing that Crossref needs more and better metadata. They ask us for resources and increasingly want to see hard evidence of benefits to them. We recently showcased the journey of the American Society for Microbiology which went from ‘zero to hero’ in terms of metadata participation and completeness in Crossref. They describe their efforts to increase their registered metadata over the last few years, and note a significant increase in their average monthly successful DOI resolutions from ~390,000 in 2015 to an average of ~3.7 million in 2022. They found that “the more metadata we push out into the ecosystem, the more it appears to be used… Remembering that your publishing program benefits as much as everyone else’s when you deposit more metadata can help refine your short-term and long-term priorities.”\nWe know we sound like a broken record sometimes, but now other members can take it from ASM!\nEncouraging better metadata and more relationships and identifying \u0026rsquo;trust signals' We’re trying to make it easier for members to accurately register key metadata fields, with the launch of our new grants registration form which will be extended to journals and other record types soon. This includes a ROR lookup - adding this unique identifier for research organisations gives even better context for the metadata.\nWe are also working to make it possible for anyone to contribute to metadata records, and have the provenance of these contributions clearly asserted.\nMetadata adoption is still a key goal for our staff; indeed our new 2023-2025 strategic roadmap specifies…\n“We want to be a sustainable source of complete, open, and global scholarly metadata and relationships. We are working towards this vision of a ‘Research Nexus’ by demonstrating the value of richer and connected open metadata, incentivising people to meet best practices, while making it easier to do so.”\n… with item number one under projects ‘in focus’, being: “Adoption activities to focus on top metadata adoption priorities, which are:\nreferences; abstracts; grants; and ROR”. We’re continuing to talk with the community to work out which metadata elements are most useful as trust signals, and we’re trying to prioritise some of the schema changes required to capture new elements. 
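(On the theme of checking your own coverage: the Participation Reports described a little further down visualise per-member coverage figures that are also available programmatically. The sketch below is a rough illustration using the public REST API's members route; the member ID is a placeholder and the coverage key names are assumptions to check against the live API response.)

```python
# Rough sketch: fetch the per-member metadata coverage that Participation
# Reports visualise, via the public Crossref REST API members route. The
# member ID is a placeholder, and the coverage key names are assumptions to
# be checked against the current API documentation.
import requests

MEMBER_ID = 1234  # placeholder: substitute a real Crossref member ID

resp = requests.get(f"https://api.crossref.org/members/{MEMBER_ID}", timeout=60)
resp.raise_for_status()
member = resp.json()["message"]

coverage = member.get("coverage", {})
for element in ("references-current", "abstracts-current",
                "orcids-current", "funders-current", "licenses-current"):
    share = coverage.get(element)
    if share is not None:
        print(f"{member.get('primary-name')}: {element} = {share:.0%}")
```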
If you haven’t already, please respond to Patricia Feeney’s metadata priorities survey.\nThinking about retractions and corrections We’ve been closely involved with the NISO CREC working group, and they should be making the initial draft recommendations public soon - watch this space!\nMaking it easier to view and compare metadata and expand the relationships Our Participation Reports provide a visualisation of the metadata that’s available via our free REST API. There’s a separate Participation Report for each member, and it shows what percentage of that member\u0026rsquo;s content includes nine key metadata elements. It’s an important tool to help those in the community understand our metadata more easily.\nWe have been working on a new version of Participation Reports, allowing more comparison between members, and extra metadata elements to communicate trustworthiness, including whether each member has thought about the long-term preservation of their content, and whether it has been added to a repository. There is a test version to look at in our Labs sandbox. Do take a look and provide feedback.\nWe’ve also made public our list of members whose membership was revoked for contravention of the membership terms.\nContinuing to work with funders We’re continuing to work with funders through our growing funder membership, the Funder Advisory Group and other groups, including the Open Research Funders Group, the HRA, Altum, Europe PMC, and the ORCID Funder Interest Group. And we’re continuing to build the important relationships between funding and outputs (see Dominika Tkaczyk’s recent report) and engage with this key audience for research integrity.\nDiscussions with the community We’ll be talking about ISR at our next community update on May 3rd - there are two versions of the meeting depending on your timezone - do sign up if you haven’t already. And if you’re attending the SSP conference in June, do come along to our panel “Working together to preserve the integrity of the scholarly record in a transparent and trustworthy way”.\n", "headings": ["A revealing afternoon in Frankfurt","The importance of whole community involvement in research integrity and ISR","Importance of whole-publisher involvement","Agreement of the importance of metadata but an acknowledgment that this brings extra cost","Need to define which metadata elements are trust signals and make it easier for the community to provide and access them","The importance of retractions/corrections information","Gaps in ‘ownership’, and Crossref’s role","What happened next? 
Turns out, it really is all about relationships…","Increasing participation in Crossref","Supporting members in meeting best practices","Being clearer on the impact of better metadata","Encouraging better metadata and more relationships and identifying \u0026rsquo;trust signals'","Thinking about retractions and corrections","Making it easier to view and compare metadata and expand the relationships","Continuing to work with funders","Discussions with the community"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/jobs/", "title": "Jobs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/michelle-cancel/", "title": "Michelle Cancel", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/were-hiring-new-technical-community-and-membership-roles-at-crossref/", "title": "We’re hiring! New technical, community, and membership roles at Crossref", "subtitle":"", "rank": 1, "lastmod": "2023-04-21", "lastmod_ts": 1682035200, "section": "Blog", "tags": [], "description": "Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge.\nWe are recruiting for three new staff positions, all new roles and all fully remote and flexible. See below for more about our ethos and what it\u0026rsquo;s like working at Crossref.\n🚀 Technical Community Manager, working with our \u0026lsquo;integrators\u0026rsquo; so all repository/publishing platforms and plugins, all API users incl.", "content": "Do you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge.\nWe are recruiting for three new staff positions, all new roles and all fully remote and flexible. See below for more about our ethos and what it\u0026rsquo;s like working at Crossref.\n🚀 Technical Community Manager, working with our \u0026lsquo;integrators\u0026rsquo; so all repository/publishing platforms and plugins, all API users incl. managing contracts with subscribers, and generally helping a very nice bunch of RESTful API dabblers, both novice and intermediate. The goal is to offer more interactive engagement such as sprints, and more technical consultation to help the community with things like query efficiency, public data dump ingestion, etc. Thousands of users exist, from individual researchers and small academic tools to giant technology companies. 
Researching and analysing usage and building tools to meet their needs is key, so this role works closely with Product and R\u0026amp;D colleagues and likely needs a developer or developer-advocacy background.\n🎯 Member Experience Manager, ramping up to handle the mammoth operation that is\u0026hellip; membership, currently 18,000 members from 150 countries, and onboarding the ~180 new joiners we welcome monthly, mostly from Africa and Asia. This role involves lots of education and relationship management, but because of the scale, we also need someone with a real business process/analysis approach, improving how our systems function so that the operation flows seamlessly and isn\u0026rsquo;t a pain for people (both members and staff). This role manages two full-time Member Support Specialists (UK and Indonesia) and three part-time contractors (USA, France, and one other as yet unknown).\n🎈 Community Engagement Manager, working with the global community of scholarly editors at a time when research integrity is top of mind for our entire ecosystem. This is a classic community role for someone keen to cross over from managing or editing journals or books and perhaps make your volunteer work official. Activities will include program and project management, event and working group facilitation, communications and content creation. You\u0026rsquo;d be interacting with groups like the Asian Council of Science Editors, the European Association of Science Editors, and the Council of Science Editors, plus many more that you\u0026rsquo;d identify. It\u0026rsquo;s all about helping editors, who work hand-in-hand with authors, to think about metadata as signals of trust and better use available services, such as those for retraction management or plagiarism checking, and helping to define needs for emerging activity too, such as machine-generated content.\nWorking at Crossref We’re a not-for-profit membership organization that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nCrossref sits at the heart of the global exchange of research information, and our job is to make it possible—and easier—to find, cite, link, assess, and reuse research, from journals and books, to preprints, data, and grants. Through partnerships and collaborations we engage with members in 150 countries (and counting) and it’s very important to us to nurture that community.\nWe’re about 45 staff and remote-first. This means that we support our teams working asynchronously and with flexible hours. We are dedicated to an open and fair research ecosystem and that’s reflected in our ethos and staff culture. We like to work hard but we have fun too! We take a creative, iterative approach to our projects, and believe that all team members can enrich the culture and performance of our whole organisation. Check out the organisation chart.\nWe are active supporters of ongoing professional development opportunities and promote self-learning at every opportunity. Crossref has a healthy financial situation and we only continue to grow. 
While we won’t have a clear hierarchical path for staff to follow, there are always evolving opportunities to progress and be challenged.\nWe especially encourage applications from people with backgrounds historically under-represented in research and scholarly communications.\nBookmark our jobs page to watch for future opportunities!\n", "headings": ["Working at Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/accessibility/", "title": "Accessibility", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-place-for-new-publishers-a-one-stop-shop-for-information-and-a-friendly-community/", "title": "The PLACE for new publishers – a one-stop-shop for information and a friendly community", "subtitle":"", "rank": 1, "lastmod": "2023-04-17", "lastmod_ts": 1681689600, "section": "Blog", "tags": [], "description": "The Publishers Learning And Community Exchange (PLACE) at theplace.discourse.group is a new online public forum created for organisations interested in adopting best practices in scholarly publishing. New scholarly publishers can access information from multiple agencies in one place, ask questions of the experts and join conversations with each other.\nScholarly publishing is an interesting niche of an industry – it appears at the same time ancillary and necessary to the practice and development of scholarship itself.", "content": "The Publishers Learning And Community Exchange (PLACE) at theplace.discourse.group is a new online public forum created for organisations interested in adopting best practices in scholarly publishing. New scholarly publishers can access information from multiple agencies in one place, ask questions of the experts and join conversations with each other.\nScholarly publishing is an interesting niche of an industry – it appears at the same time ancillary and necessary to the practice and development of scholarship itself. The sooner and more easily a piece of academic work is shared, the greater the chance that others will find and build upon it. Many practices of the publishing industry have been developed to support discovery and integrity of the scholarship that produces shareable works, and as the landscape of scholarly communications constantly evolves, a number of agencies arose to promote and continuously update the standards and best practices within it.\nWe realise that the sheer number of agencies involved in regulating and preserving scholarly content is in itself a challenge and can be confusing. Newer publishers may find it difficult to know where to go to find the right information, what policies they need to follow or international criteria they need to meet and how to go about doing so. When time or finances are tight, it’s not easy to try to reinvent the wheel.\nFollowing the long-established practice of signposting organisations between us, we’ve worked together with the Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ), and the Open Access Scholarly Publishers Association (OASPA) to establish the PLACE. We share values and goals to work more effectively to better support the needs of our communities. 
Each organisation is taking actions to lower barriers to participation and provide greater support for the organisations that publish scholarly and professional content that we work with.\nHence, we envisaged the PLACE as a ‘one stop shop’ for access to more consolidated and plainly put information, to support publishers in adopting best practices the industry developed. We also hope that by setting the information service as a forum, we will encourage open exchange with publishers who aspire to do things right, as well as between them.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/renewed-persistence/", "title": "Renewed Persistence", "subtitle":"", "rank": 1, "lastmod": "2023-04-01", "lastmod_ts": 1680307200, "section": "Blog", "tags": [], "description": "We believe in Persistent Identifiers. We believe in defence in depth. Today we\u0026rsquo;re excited to announce an upgrade to our data resilience strategy.\nDefence in depth means layers of security and resilience, and that means layers of backups. For some years now, our last line of defence has been a reliable, tried-and-tested technology. One that\u0026rsquo;s been around for a while. Yes, I\u0026rsquo;m talking about the humble 5¼ inch floppy disk.", "content": "We believe in Persistent Identifiers. We believe in defence in depth. Today we\u0026rsquo;re excited to announce an upgrade to our data resilience strategy.\nDefence in depth means layers of security and resilience, and that means layers of backups. For some years now, our last line of defence has been a reliable, tried-and-tested technology. One that\u0026rsquo;s been around for a while. Yes, I\u0026rsquo;m talking about the humble 5¼ inch floppy disk.\nThis may come as surprise to some. When things go well, you\u0026rsquo;re probably never aware of them. In day to day use, the only time a typical Crossref user sees a floppy disk is when they click \u0026lsquo;save\u0026rsquo; (yes, some journals still require submissions in Microsoft Word).\nHistory But why?\nLet me take you back to the early days of Crossref. The technology scene was different. This data was too important to trust to new and unproven technologies like Zip disks, CD-Rs or USB Thumb Drives. So we started with punched cards.\nIBM 5081-style punched card.\nPunched cards are reliable and durable as long as you don\u0026rsquo;t fold, spindle or mutilate them. But even in 2001 we knew that punched cards\u0026rsquo; days were numbered. The capacity of 80 characters kept DOIs short. Translating DOIs into EBCDIC made ASCII a challenge, let alone SICIs. We kept a close eye on the nascent Unicode.\nBreathing Room In 2017 the change of DOI display guidelines from http://0-dx-doi-org.libus.csd.mu.edu to https://0-doi-org.libus.csd.mu.edu shortened each DOI by 2 characters, buying us some time. But eventually we knew we had to upgrade to something more modern.\nSo we migrated to 5¼ inch floppy disks.\n5¼ Floppy disk in drive\nAt 640 KB per disk these were a huge improvement. We could fit around 20,000 DOIs on one floppy. Today we only need around 10,000 floppy disks to store all of our DOIs (not the metadata, just the DOIs). Surprisingly this only takes about 20 metres of shelf space to store.\nTypical work from home setup. Getting ready to backup some DOIs!\nThe move to working-from-home brought an unexpected benefit. 
Staff mail floppy disks to each other and keep them in constant rotation, which produces a distributed fault tolerant system.\nPersistence Means Change But it can\u0026rsquo;t last forever. DOIs registration shows no sign of slowing down. It\u0026rsquo;s clear we need a new, compact storage medium. So, after months of research, we\u0026rsquo;ve invested in new equipment.\nToday we announce our migration to 3½ inch floppies.\nIf it goes to plan you won\u0026rsquo;t even notice the change.\nImage credits Punched card: IBM 5081-style punched card. Derived from public domain by Gwern.\n", "headings": ["History","Breathing Room","Persistence Means Change","Image credits"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/citation/", "title": "Citation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/data-citation/", "title": "Data Citation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/start-citing-data-now.-not-later/", "title": "Start citing data now. Not later", "subtitle":"", "rank": 1, "lastmod": "2023-03-23", "lastmod_ts": 1679529600, "section": "Blog", "tags": [], "description": "Recording data citations supports data reuse and aids research integrity and reproducibility. Crossref makes it easy for our members to submit data citations to support the scholarly record.\nTL;DR Citations are essential/core metadata that all members should submit for all articles, conference proceedings, preprints, and books. Submitting data citations to Crossref has long been possible. And it’s easy, you just need to:\nInclude data citations in the references section as you would for any other citation Include a DOI or other persistent identifier for the data if it is available - just as you would for any other citation Submit the references to Crossref through the content registration process as you would for any other record And your data citations will flow through all the normal processes that Crossref applies to citations.", "content": "Recording data citations supports data reuse and aids research integrity and reproducibility. Crossref makes it easy for our members to submit data citations to support the scholarly record.\nTL;DR Citations are essential/core metadata that all members should submit for all articles, conference proceedings, preprints, and books. Submitting data citations to Crossref has long been possible. And it’s easy, you just need to:\nInclude data citations in the references section as you would for any other citation Include a DOI or other persistent identifier for the data if it is available - just as you would for any other citation Submit the references to Crossref through the content registration process as you would for any other record And your data citations will flow through all the normal processes that Crossref applies to citations. And it will be distributed openly to the community (including DataCite!) via Crossref’s services and APIs. 
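(One way to sanity-check that data citations registered this way have flowed through is to look them up in the work's open reference list via the REST API. Below is a minimal sketch using the eLife article and Dryad dataset cited as an example later in this post; the layout of the "reference" field is as we understand it from the public API and is worth confirming against the documentation.)

```python
# Minimal sketch: confirm that a registered data citation appears in a work's
# open reference list via the public Crossref REST API. DOIs are taken from
# the eLife/Dryad example later in this post; the "reference" field layout is
# an assumption to verify against the current API documentation.
import requests

ARTICLE_DOI = "10.7554/eLife.26410"   # eLife article
DATASET_DOI = "10.5061/dryad.854j2"   # Dryad dataset it cites

resp = requests.get(f"https://api.crossref.org/works/{ARTICLE_DOI}", timeout=60)
resp.raise_for_status()
references = resp.json()["message"].get("reference", [])

cited = [r for r in references if r.get("DOI", "").lower() == DATASET_DOI]
print(f"Dataset citation found in {ARTICLE_DOI}: {bool(cited)}")
```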
All data citations deposited with Crossref will be exposed in the (soon-to-be launched) Data Citation Corpus.\nAnd then, you can sit back and congratulate yourself for making your publication more useful to researchers who want to be able to reuse the data underlying your publications.\nBackground You might ask, “So if submitting Data Citations to Crossref has long been possible, why do you have to write this?”\nHistorically, authors did not cite data in the way they cited publications. Instead, they would often refer to the data in the main text of the article. This has made it hard to determine what data lay behind the research and/or access the data.\nBut the research community has increasingly recognized that data is a first-class research output and that we should treat it as such. In short, we should formally cite data.\nBut because citing data is a comparatively new practice, it has been subject to a lot of new analysis. And unsurprisingly, people analyzing data citation have discovered that there is a lot of nuance to citation of any kind.\nThere are lots of reasons for citing something. There are lots of internalized conventions for citing things. And there are different conventions for citation for different research objects. And SSH citation practice differs from STEM. And legal citation practices are different from scholarly citation practices. And citation practices even vary by subdiscipline and by journal.\nThose who have been looking at what it means to “cite data” have naturally stumbled into a thicket of divergent practices - some of which are historical holdovers, some of which are stylistic preferences, and some of which are clearly adaptations to deal with the specific needs of certain research objects/containers or different disciplines.\nThe temptation has been to try and rationalize this before extending the practice of citation to data.\n“Maybe because data is a distinct record type, we should include the fact that it is a data citation in the citation itself?”\n“Maybe because people cite data for different reasons, we should include a typology of citation types in all data citations?”\nAnd so you may hear some people say, “hold off on data citation - we don’t have an optimal way to do it yet, and it can be very complicated.”\nBut guess what?\nWe currently don’t label citations to monographs as “citation to monograph.”\nAnd we don’t currently include the reason for citation when we are citing a journal article.\nIt would be very cool if we did. And it would likely make citations even more useful if we did.\nBut citations are already useful even without these features. And so, to delay citing data indefinitely because we have an opportunity to improve the act of citation is just perverse. Our community has always opted for progress over perfection.\nFor one thing - the efforts are not mutually exclusive. We can start citing data with the current limitations of citation practices and simultaneously propose mechanisms for making citation more useful in the future, including new guidelines to deal with the unique issues that citing data poses.\nBut in the meantime, we will be doing researchers a giant favour if we at least include our imperfect and ambiguous, and unconventional references to data in the references section of an article so that they can be accessed and processed along with all the other imperfect, ambiguous and variant citations that we find so useful.\nSome of our members are already doing this. They have been for a long time. 
And they haven’t found it any more complicated than managing non-data references in the past.\nJoin them and make your metadata more useful.\nCite data now. Don’t put it off.\nAnd Crossref will continue to work with DataCite and the rest of the community to make the distribution even easier and more useful.\nSo who is already citing data? Top 10 members depositing data citations from November-May 2022 (broken down by DOI prefix, which is why you see some publishers listed twice):\nPrefix Member name Data citations deposited 10.1038 Springer Science and Business Media LLC 7174 10.1016 Elsevier BV 6527 10.1007 Springer Science and Business Media LLC 4748 10.5194 Copernicus GmbH 3017 10.1080 Informa UK Limited 2346 10.1177 SAGE Publications 2082 10.1002 Wiley 2048 10.1111 Wiley 1888 10.1108 Emerald 1876 10.3390 MDPI AG 1827 Top 10 data citations per deposited work (again, broken down by prefix)\nMember name Prefix Data citations deposited Data citations per work Consortium Erudit 10.7202 580 1.149 SLACK, Inc. 10.3928 462 0.646 S. Karger AG 10.1159 1653 0.532 Proceedings of the National Academy of Sciences 10.1073 973 0.502 American Academy of Pediatrics (AAP) 10.1542 486 0.397 F1000 Research Ltd 10.12688 552 0.341 American Association for the Advancement of Science (AAAS) 10.1126 952 0.317 Springer Science and Business Media LLC 10.1038 7174 0.231 JMIR Publications Inc. 10.2196 864 0.187 American Geophysical Union (AGU) 10.1029 692 0.166 These are for the prefixes with the most data citations deposited (\u0026gt;500 in 6 months) so there might be smaller members doing better than this.\nSummaries are great, but I want to see some actual examples! Here are some examples showing how data is cited by our members:\nThis eLife article: https://0-doi-org.libus.csd.mu.edu/10.7554/eLife.26410 cites this dataset in Dryad https://0-doi-org.libus.csd.mu.edu/10.5061/dryad.854j2. This Copernicus article: https://0-doi-org.libus.csd.mu.edu/10.5194/acp-22-7105-2022 cite to this dataset https://0-doi-org.libus.csd.mu.edu/10.24381/cds.bd0915c6 This Sciendo article: https://0-doi-org.libus.csd.mu.edu/10.2478/plc-2021-0008 cites this APA-hosted language competence test https://0-doi-org.libus.csd.mu.edu/10.1037/t15159-000 This De Gruyter article: https://0-doi-org.libus.csd.mu.edu/10.1515/opth-2020-0160 cites this bibliography at Oxford Bibliographies: https://0-doi-org.libus.csd.mu.edu/10.1093/OBO/9780195396584-0012 And here are some example API requests for discovering more metadata citations. 
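(The example requests that follow can also be scripted directly. As a small sketch, here is the first one - finding the works that cite a given dataset via the Event Data API - wrapped in Python, using only the query parameters shown in the URLs below; the exact shape of the scholix-style response isn't asserted here, so the sketch simply prints whatever list the message contains for inspection.)

```python
# Small sketch wrapping the first example request below: find events linking
# works to a given dataset via the Crossref Event Data API. The parameters
# (rows, scholix, obj-id) mirror the example URLs in this post; the shape of
# the scholix-style response is not asserted, so we print the payload lists.
import json
import requests

EVENTS_API = "https://api.eventdata.crossref.org/v1/events"

def citation_events_for(dataset_doi, rows=20):
    resp = requests.get(
        EVENTS_API,
        params={"rows": rows, "scholix": "true", "obj-id": dataset_doi},
        timeout=60,
    )
    resp.raise_for_status()
    message = resp.json().get("message", {})
    # Print the first few entries of each list in the payload; inspect them to
    # decide which fields your integration needs.
    for key, value in message.items():
        if isinstance(value, list):
            print(key, json.dumps(value[:3], indent=2))

citation_events_for("10.5061/dryad.854j2")  # Dryad dataset from the eLife example
```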
You can use these API requests as examples and adapt to your own needs.\nFind all the DOIs that cite Dataset X (identified by DOI) https://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?rows=20\u0026amp;scholix=true\u0026amp;obj-id=10.5061/dryad.854j2\nFind all data citations from Crossref member X (identified by member prefix) https://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?rows=20\u0026amp;scholix=true\u0026amp;subj-id.prefix=10.7202\nFind papers with supplementary data https://0-api-crossref-org.libus.csd.mu.edu/v1/works?filter=prefix:10.3390,relation.type:is-supplemented-by\nFind all data citations to Crossref member X https://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?rows=20\u0026amp;scholix=true\u0026amp;obj-id.prefix=10.7202\nFind all data citations to DataCite member X https://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?rows=20\u0026amp;scholix=true\u0026amp;obj-id.prefix=10.5061\n", "headings": ["TL;DR","Background","So who is already citing data?","Top 10 members depositing data citations from November-May 2022","Top 10 data citations per deposited work","Summaries are great, but I want to see some actual examples!","Find all the DOIs that cite Dataset X (identified by DOI)","Find all data citations from Crossref member X (identified by member prefix)","Find papers with supplementary data","Find all data citations to Crossref member X","Find all data citations to DataCite member X"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/david-haber/", "title": "David Haber", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/deborah-plavin/", "title": "Deborah Plavin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/shooting-for-the-stars-asms-journey-towards-complete-metadata/", "title": "Shooting for the stars – ASM’s journey towards complete metadata", "subtitle":"", "rank": 1, "lastmod": "2023-03-14", "lastmod_ts": 1678752000, "section": "Blog", "tags": [], "description": "At Crossref, we care a lot about the completeness and quality of metadata. Gathering robust metadata from across the global network of scholarly communication is essential for effective co-creation of the research nexus and making the inner workings of academia traceable and transparent. We invest time in community initiatives such as Metadata 20/20 and Better Together webinars. We encourage members to take time to look up their participation reports, and our team can support you if you’re looking to understand and improve any aspects of metadata coverage of your content.", "content": "At Crossref, we care a lot about the completeness and quality of metadata. Gathering robust metadata from across the global network of scholarly communication is essential for effective co-creation of the research nexus and making the inner workings of academia traceable and transparent. We invest time in community initiatives such as Metadata 20/20 and Better Together webinars. 
We encourage members to take time to look up their participation reports, and our team can support you if you’re looking to understand and improve any aspects of metadata coverage of your content.\nIn 2022, we have observed with delight the growth of one of our members from basic coverage of their publications to over 90% in most areas, and no less than 70% of the corpus is covered by all key types of metadata Crossref enables (see their own participation report for details). Here, Deborah Plavin and David Haber share the story of ASM’s success and lessons learnt along the way.\nCould you introduce your organization? The American Society for Microbiology publishes 16 peer-reviewed journals advancing the microbial sciences, from food microbiology, to genomics and the microbiome, comprising 14% of all microbiology articles. Six of those are open-access journals, and 56% of ASM’s published papers are open access. Together, our journals contribute 25% of all microbiology citations.\nWould you tell us a little more about yourselves? DH: David Haber, Publishing Operations Director at the American Society for Microbiology. I live in a century-old house that is in a perpetual state of renovation due to my inability to stop starting new projects before I complete old ones.\nDP: Deborah Plavin, Digital Publishing Manager at the American Society for Microbiology. Following David’s example, my apartment in Washington D.C. is just up the block from one of the homes Duke Ellington lived in https://www.hmdb.org/m.asp?m=142334.\nWhat value do society publishers in general see in metadata in your view? DP: In my view, robust metadata allows publishers to look at changes over time, do comparative analysis within and across research areas, more easily identify trends, and plan for future analysis (e.g., if we deposit data citation information and we change our processes to make it more straightforward, do we see any change in the percentage of articles that include that information, etc.).\nDH: To echo Deborah\u0026rsquo;s point, to be able to name something distinctly and clearly identify its specific attributes is vital to understanding past research and planning for future possibilities. One of our fundamental roles as a publisher for a non-profit society is to properly lay this metadata foundation so that we can provide services and new venues for our members, authors, and readers that match their needs and track with the trends in research. Without good and robust metadata, it is impossible to truly understand the direction in which our community is pointing us.\nMetadata for your own research outputs in the last year has grown rapidly. Why such focus on metadata in 2022? DP: This is something that ASM has been chipping away at over time. Years ago we found that it wasn’t always easy to take advantage of deposits that included new kinds of metadata. That was either because we needed to work out how and where to capture it in the process or because platform providers weren’t always ready — coming up with ways to process the XML that publishers supply in many different ways takes time. These back-end processes that feed the infrastructure aren’t usually of great interest to stakeholders, and so it allowed us to play around, flounder, fail, refine, and try again.\nWe looked at having 3rd parties deposit metadata for us, and while that helped expand the kind of metadata we were delivering, it created workflow challenges of its own. 
What turned out to be most effective was budgeting for content cleanup projects and depositing updated and more robust metadata to Crossref.\nWe also benefited from a platform migration, which allowed us to take advantage of additional resources during that process.\nDH: Coming from a production background, I have always been fascinated with the when and how of capturing key metadata during the publishing process. When are those data good and valuable, and when should they be tossed or cleaned up for downstream deliveries? Because Deborah and ASM directors saw a more complete Crossref metadata set for our corpus as a truly valuable target, we were able to really think hard about what kind of data we were capturing and when, how those requirements may have influenced our various policies and copyediting requirements over the years, and how best to re-engineer our processes with the goal of good metadata capture throughout our publishing workflows. From our perspective, Crossref gave us a target, a “this-is-cool-bit-of-info\u0026quot; that Crossref can collect in a deposit; therefore, how can we capture that during our processes while driving further efficiencies? ASM journals had been so driven by legacy print workflows that such a change in perspective (toward metadata as a publishing object) really allowed us to re-imagine almost everything we do as a publisher.\nHas the OSTP memo influenced your effort? DP: I think that the Nelson memo hasn’t changed our focus; instead, I think it’s been another data point supporting our efforts and work in this area.\nDH: Deborah is exactly right. The release of this memo only re-affirmed our commitment to creating complete and rich metadata. The Nelson memo points to many possible paths forward, in terms of both Open Access and Open Science, but we feel our work on improving our metadata outputs positions us well to pick a path that best suits our goals as a non-profit society publisher.\nHow big was this effort? Could you draw us a picture of how many colleagues or parts of the organization were involved? Did you involve any external stakeholders, such as authors, editors, or others? DH: It was simple. Took five minutes… In all seriousness, the key is having the support of the organization as a whole. To do this properly, it is vitally important to know the end from the beginning, so to speak. It is one thing to say let’s start capturing ORCID IDs and deliver them to Crossref, but it is completely another to create a cohesive process in which those IDs are authenticated and validated throughout the workflow. So something as simple as a statement “ORCID IDs seem cool, let’s try to capture them” could affect how researchers submit files, how reviewers log into various systems (i.e., ORCID as SSO), how data are passed to production vendors, what copyeditors and XML QC people need to be focused on, and what integrations authors may expect at the time of publication. Being part of an organization that embraced such change allowed us to proceed with care with each improvement to the metadata we made.\nBut that is more about incremental improvement. The beginning of this process started when we were making upgrades to our online publishing platform, and we were trying to figure out how best to get DOIs registered for our older content. When we started looking at this, we soon realized that, sure, we could do the bare minimum and just assign DOIs to this older content outside the source XML/SGML, but did that make sense? 
Wouldn’t it make more sense, especially since we were updating the corpus to a new DTD, to populate the source content with these newly assigned DOIs? Once we decided that we were going to revise the older content with DOIs, it made sense for us to create a custom XSL transform routine to generate Crossref deposits that would capture as much metadata as possible. So, working with a vendor to clean and update our content for one project (an online platform update) allowed us also to make massive improvements to our Crossref metadata as a side benefit.\nOf course, I do have to apologize to the STM community for the Crossref outages in late 2019. That was just me depositing thousands of records in batches one sleepless night.\nWhat were the key challenges you encountered in this project, and how did you overcome them? DH: Resources and time are always an issue. Much of the work was done in-house in spare moments captured here and there. But there are great resources on GitHub and at Crossref to help focus on defining what is important and what is possible in such a project. And, honestly, defining what was important and weighing that against the effort to find said important bit in the corpus of articles we have was the most challenging part of this process. In other words, limiting the focus. Once one decides to start looking at the inconsistencies in older content, it is hard not to say: “Oh, look. That semi-important footnote was treated as a generic author note rather than a conflict-of-interest statement; let’s fix that.” Once you start down that path, you can spend years fiddling with stuff. For me, a key mantra was: “We now have access to the content. We can always do another Crossref metadata update if things change or shift over time.”\nHave there been any important milestones along the way you were able to celebrate? Or any set-backs you had to resolve in the process? DP: For as long as I can remember, the importance of good metadata has been among the loudest messages of best practice in the industry. I don’t think that I have been able to really quantify/demonstrate the value of that work. Looking at the consistent increases in the Crossref monthly resolution reports that we saw between 2015 and 2022 and looking at our participation reports has helped provide some measure of progress. For example, the number of average monthly successful resolutions in that Crossref report in 2015 was ~390,000. The last time I checked, the 2022 numbers were ~3.7 million. In 2023, I hope that we will be able to leverage Event Data for this as well.\nThe setbacks have fallen into two categories: timing and process. Our internal resourcing to get this done within our preferred time frame, to have the content loaded and delivered, and to triage problems—it’s a battle between the calendar and competing priorities.\nDH: When Deborah first shared those stats with me, I was floored. I don’t think either of us suspected such an increase was possible. For me, the biggest setback was mistakenly sending about 50,000 DOI records to the queue and watching them all fail because I grabbed the wrong batch. Oops. I never made that mistake again, though.\nWas any specific type of metadata or any part of the schema particularly easy or particularly difficult to get right in ASM’s production process? DH: For us, the most difficult piece of metadata revolves around data availability and how we capture linked data resources (outside of data citation resources).
Because of our current editorial style (which had been print-centric for years), we did not do a good job of identifying whether there are data associated with published content in a consistent machine-readable way. We did some experiments with one of our journals to capture this outside of our normal Crossref deposit routine, but that was not as accurate or sustainable as we would have liked. But, in that experiment, we learned a few things about how we treat these data throughout our publishing process and we have plans to create a sustainable, integrated workflow to capture resource/data linkages in our Crossref deposits.\nWhat were your thoughts on last year’s move to open references metadata? Has that impacted on your project in any way? DP: We were really excited about this; based on the rather limited approach to sorting out impact at the moment, the more metadata we push out into the ecosystem, the more it appears to be used. In my view, that is at the core of what society publishers want to do—ensure that research is accessible and discoverable wherever our users expect to find it.\nDH: 100% agree.\nHow did you keep motivated and on-course throughout? DP: These kinds of things are never done; for example, we have placeholders for CRediT roles, and getting ready for that work as part of a DTD migration will be the next big thing. The motivation for that is really meeting our commitment to the community, seeing the impact of the author metadata versus article metadata, and seeing what we can learn.\nDH: Metadata at its core is one of the pillars of our service as a publisher. To provide the best service, we need to provide the best metadata possible. Just remembering that this can be incremental allows us to celebrate the large moments and the small. And whether one is partying with a massive 7-layer cake or a smaller cake pop, both are sweet and motivating.\nNow that the project is completed, are you seeing the benefits you were hoping to achieve? DP: This is a hard one to answer as we are using limited measurements at this time. At a high level, I am pleased. While I am eager to leverage event data in the coming year, it would be really helpful to get feedback from the community on how we can improve as well as other ways to evaluate impact.\nDH: I want to take up this idea of metadata as a service once more. I don’t mean in terms of discoverability or searchability, either. Let’s take ORCID deposited into Crossref as an example. When done properly (with the proper authentication and validation occurring in the background), we are able to integrate citation data directly into an author\u0026rsquo;s ORCID profile. We have found that this small service is really appreciated.\nIs there any metadata that you’d like to be able to include with your publishing records in the future that isn’t possible currently? What would it be and why? DP: CRediT roles would be great because they could give greater insight into collaboration within and across disciplines, allow for some automation and integration opportunities in the peer review process, and maybe help visualize aspects of authors’ careers.\nDH: I second capturing CRediT roles. What would be really interesting is also creating a standard that quantifies the accessibility conformance/rating of content and passing that into Crossref.\nWhat was the key lesson you learned from this project?
DP: Incremental change can be just as challenging as a massive overhaul, and so it’s important to reevaluate your goals along the way—things always change. There have been cases where we were able to do things that we hadn’t initially thought were feasible.\nDH: Always keep the larger goal in mind and remember that any project can birth a new project. Everything does not happen at once.\nWhat’s your next big challenge for 2023? DP: There is a lot to contend with in the industry right now, and in addition to that we are going through some serious infrastructure changes in our program. With all that madness comes many opportunities. For that reason, when I take a step back from the tactical implications of all that and what we are interested in doing, I think our biggest challenge in 2023 will be identifying what has made an impact and why.\nDH: In the short-term, it is making sure that none of our production process changes has negatively affected the past metadata work we spent so much time honing. Once that settles down, it will be determining the best way forward from a publishing perspective in handling true versioning and capturing accurate event data.\nBased on your experience, what would be your advice for colleagues from other scholarly publishing organizations? DP: It can seem daunting, but the small wins can create momentum and do not have to be expensive. Remembering that your publishing program benefits as much as everyone else’s when you deposit more metadata can help refine your short-term and long-term priorities.\nDH: Don’t be afraid of making a mess of things. Messes are okay. They aren’t risky. They just reveal the clutter. And clutter gives one reason to clean things up.\nTHANK YOU for the interview!\nAbout the American Society for Microbiology The American Society for Microbiology is one of the largest professional societies dedicated to the life sciences and is composed of 30,000 scientists and health practitioners. ASM\u0026rsquo;s mission is to promote and advance the microbial sciences.\nASM advances the microbial sciences through conferences, publications, certifications and educational opportunities. It enhances laboratory capacity around the globe through training and resources. It provides a network for scientists in academia, industry and clinical settings. Additionally, ASM promotes a deeper understanding of the microbial sciences to diverse audiences. For more information about ASM visit asm.org.\n", "headings": ["Could you introduce your organization?","Would you tell us a little more about yourselves?","What value do society publishers in general see in metadata in your view?","Metadata for your own research outputs in the last year has grown rapidly. Why such focus on metadata in 2022?","Has the OSTP memo influenced your effort?","How big was this effort? Could you draw us a picture of how many colleagues or parts of the organization were involved? Did you involve any external stakeholders, such as authors, editors, or others?","What were the key challenges you encountered in this project, and how did you overcome them?","Have there been any important milestones along the way you were able to celebrate? Or any set-backs you had to resolve in the process?","Was any specific type of metadata or any part of the schema particularly easy or particularly difficult to get right in ASM’s production process?","What were your thoughts on last year’s move to open references metadata? 
Has that impacted on your project in any way?","How did you keep motivated and on-course throughout?","Now that the project is completed, are you seeing the benefits you were hoping to achieve?","Is there any metadata that you’d like to be able to include with your publishing records in the future that isn’t possible currently? What would it be and why?","What was the key lesson you learned from this project?","What’s your next big challenge for 2023?","Based on your experience, what would be your advice for colleagues from other scholarly publishing organizations?","About the American Society for Microbiology"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/committees/nominating/", "title": "Nominating committee", "subtitle":"", "rank": 2, "lastmod": "2023-03-10", "lastmod_ts": 1678406400, "section": "Committees", "tags": [], "description": "The Nominating Committee is defined in the by-laws. There are five members of the committee and they can be either representatives of organizations on the board or other regular members. Common practice is for membership to be made up of three board members not up for election that year, and two regular (non-board) members. The purpose of this committee is to review and create the slate each year for nominations to the board, ensuring fair representation of membership.", "content": "The Nominating Committee is defined in the by-laws. There are five members of the committee and they can be either representatives of organizations on the board or other regular members. Common practice is for membership to be made up of three board members not up for election that year, and two regular (non-board) members. The purpose of this committee is to review and create the slate each year for nominations to the board, ensuring fair representation of membership.\nThe Committee meets to discuss the charge, process, criteria, and potential candidates, and puts forward a slate that is equal to or greater than the number of Board seats up for election. The slate may or may not include Board members up for re-election.\n2025 Nominating Committee members Staff facilitator: Lucy Ofiesh\nJames Phillpotts*, Director of Content Transformation and Standards, Oxford University Press, committee chair Committee in formation (*) Indicates Crossref board member\nPlease contact Lucy Ofiesh with any questions.\n", "headings": ["2025 Nominating Committee members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/", "title": "Board & governance", "subtitle":"", "rank": 1, "lastmod": "2023-03-09", "lastmod_ts": 1678320000, "section": "Board & governance", "tags": [], "description": "Crossref is registered as Publishers International Linking Association, Inc. (PILA) in New York, USA. You can view our certificate of incorporation and by-laws. We have tax exempt status in the US as a 501(c)(6) organization. Here is our antitrust policy (pdf) and our conflict of interest policy (pdf).\nThe board of directors governs Crossref. They meet three times a year and oversee the organization, set its strategic direction and make sure that Crossref fulfills its mission.", "content": "Crossref is registered as Publishers International Linking Association, Inc. (PILA) in New York, USA. You can view our certificate of incorporation and by-laws. We have tax exempt status in the US as a 501(c)(6) organization. Here is our antitrust policy (pdf) and our conflict of interest policy (pdf).\nThe board of directors governs Crossref.
They meet three times a year and oversee the organization, set its strategic direction and make sure that Crossref fulfills its mission. A list of motions from every board meeting is available.\nOur members elect the board. Voting takes place online and election results are announced at the annual business meeting during the Crossref LIVE conference each November. There is a nominating committee made up of three board members not up for election, and two non-board members. This committee puts forward a slate in September of each year for the entire membership to vote on. Please contact us if you would like to know more.\nTo ensure effective governance and get input from a wide range of members and other stakeholders in the scholarly communications community, we have committees.\nOfficers Chair: Lisa Schiff, California Digital Library (CDL) Treasurer: Rose L\u0026rsquo;Huillier, Elsevier Secretary: Lucy Ofiesh, Crossref Chief Operating Officer Assistant Secretary: Ed Pentz, Crossref Executive Director Board members Board member Representative Alternate Location Org Type Term APA Aaron Wood Jasper Simons USA Society 2025-2027 Austrian Science Fund (FWF) Katharina Rieck Falk Reckling Austria Research Funder 2025-2027 Beilstein-Institut Wendy Patterson Carsten Kettner Germany Nonprofit 2024-2026 California Digital Library (CDL) Lisa Schiff Chad Nelson US Library 2025-2027 Clarivate Christine Stohn Francesca Buckland USA Company 2023-2025 Elsevier BV Rose L\u0026rsquo;Huillier Alok Pendse Netherlands Publisher 2023-2025 Korean Council of Science Editors Kihong Kim Sue Kim South Korea Nonprofit 2024-2026 MIT Press Nick Lindsay Amy Brand US University Press 2023-2026 Open Edition Marin Dacos Marie Pellen France Nonprofit 2021-2024 Oxford University Press James Phillpotts John Campbell UK University Press 2024-2026 Pan Africa Science Journal Oscar Donde Kenneth Onditi Kenya Nonprofit 2023-2025 Springer Nature Anjalie Nawaratne Nick Campbell UK Publisher 2023-2026 Taylor and Francis Amanda Ward Stewart Gardiner UK Publisher 2025-2027 Universidad Autónoma de Chile Ivan Suazo Chile University 2024-2026 University of Chicago Press Ashley Towne US University 2024-2026 Vilnius University Vincas Grigas Arūnas Gudinavičius Lithuania University 2024-2026 2024 Motions passed November 2024 Board meeting To approve the agenda for the November 12-13, 2024 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To approve Crossref’s FY 2025 budget as presented. To provide the following guidance to the Nominating Committee: To achieve balance between Revenue Tiers by proposing a 2025 slate consisting of Revenue Tier 1 seats and Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between Board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref’s Bylaws); and To provide the following further guidance to the Nominating Committee with respect to the choice of the slate of candidates for election to the Board at the 2025 annual meeting: Construct a slate of nominees that is at least equal to, and exceeds by no more than two, the number of available seats in each of the Revenue Tier categories (as defined in Crossref’s Bylaws); Prioritize maintaining representation of members including business models (commercial and non-commercial), geography, and sector (such as research institutions, libraries, funders, etc.)
in addition to continuing to seek balance of personal characteristics of the individual representative, such as gender, language, and racial and ethnic background; Work with staff to develop a call for interest that reflects those areas of skill that are functional priorities for the Board, and clearly describes requirements with non-native English speakers in mind; and July 2024 Board meeting To approve the agenda for the July 9-10, 2024 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To approve the FY2023 audited financial statements. To create a non-voting class of Crossref membership, open only to otherwise-eligible organizations for whom, as determined by Crossref’s legal counsel, international sanctions law prohibits voting membership but permits non-voting membership; and further To agree to waive the 10 days notice requirement for bylaws changes in exchange for seven days’ notice of the proposed changes; and further To adopt the necessary changes to Crossref’s Bylaws, as set forth in the meeting materials. (APA, Oxford University Press, and Springer Nature abstaining.) To restrict $500,000 to the capital reserve fund. (AIP, Center for Open Science, Elsevier, and Oxford University Press against; Clarivate abstaining.) March 2024 Board meeting To approve the agenda for the March 12-13, 2024 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To appoint Lucy Ofiesh as Secretary and Ed Pentz as Assistant Secretary. January 2024 Board meeting Lisa Schiff as Chair of the Board. (CDL abstaining.) Rose L’Huillier as Treasurer. (Elsevier abstaining.) To appoint each of Wendy Patterson, James Phillpotts, and Christine Stohn to the Executive Committee. (Beilstein Institut, Oxford University Press, and Clarivate abstaining) To appoint Nick Lindsay as Chair of the Audit Committee. (MIT Press abstaining) To appoint Vincas Grigas as Chair of the Membership \u0026amp; Fees Committee. (Vilnius University abstaining) To appoint James Phillpotts as Chair of the Nominating Committee. (Oxford University Press abstaining) To approve the Minutes of the November 2023 meeting of the Board of Directors. 2023 Motions passed November 2023 Board meeting To approve the agenda for the November 14-15, 2023 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To approve Crossref’s FY 2023 budget as presented. To provide the following guidance to the Nominating Committee: To achieve balance between Revenue Tiers by proposing a 2024 slate consisting of Revenue Tier 1 seats and Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between Board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref’s Bylaws); and To provide the following further guidance to the Nominating Committee with respect to the choice of the slate of candidates for election to the Board at the 2024 annual meeting: Construct a slate of nominees that is at least equal to, and exceeds by no more than two, the number of available seats in each of the Revenue Tier categories (as defined in Crossref’s Bylaws); Prioritize maintaining representation of members including business models (commercial and non-commercial), geography, and sector (such as research institutions, libraries, funders, etc.) 
in addition to continuing to seek balance of personal characteristics of the individual representative, such as gender, language, and racial and ethnic background; Work with staff to develop a call for interest that reflects those areas of skill that are functional priorities for the Board, and clearly describes requirements with non-native English speakers in mind; and Encourage and recruit Crossref members who are research funders to apply to the Board. To amend Art. V. of Sec. 9 of the Crossref Bylaws to replace the words “until the next annual meeting” with the words “until the end of the term which the director was elected or appointed to fill, or for a term to be determined by the Board which ends at an annual meeting (but in no event longer than three (3) years).” To adopt the following remit for the Membership and Fees Committee: The Membership and Fees Committee (M\u0026amp;F Committee) is charged with working with staff to regularly review the policies and fees for all our services and making recommendations to the board about any changes. The M\u0026amp;F Committee will review the proposed policies and fees (if any) for new services while they are being developed and make recommendations to the board. The Committee ensures that all policies and fees are consistent with Crossref’s mission, fee principles, and the Principles of Open Scholarly Infrastructure (POSI). In addition, the board can also ask the committee to address specific issues about policies and services. 2024 committee scope: Recruit committee members as needed to represent Crossref stakeholders Work with the leadership team on the project approved at the November 2023 board meeting to assess Crossref’s long-term resourcing model Review and provide feedback on project outputs, including SWOT analysis, modelling of new fees, and impact/effort assessments of fee changes. Support staff in getting feedback from the community on fee models and possible changes to current fees. Committee work might also include advising on how we engage the community in the process, such as reviewing RFPs for a project consultant, refining the questions we ask, and reviewing the input. Make recommendations to the board for any proposed fee changes Share findings publicly with the community July 2023 Board meeting To approve the agenda for the July 11-12, 2023 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To ratify the election of Mike Schramm to fill the seat vacated by the resignation of Marc Hurlbert, to serve out the remainder of the current term of the vacated seat or until his respective successor is duly appointed and qualified. (NISC abstaining.) To approve the FY2022 audited financial statements. To restrict $2,700,000 of surplus funds to Crossref’s capital reserve fund. To amend Article I, Section 5 of the Bylaws to add the words “or required by applicable international sanctions compliance” after the words “non-payment of dues and fees,” and to delegate to the Executive Director the authority to carry out such suspensions or expulsions. To approve the revocation of membership of those Crossref members listed in the meeting materials as having been identified as being located in jurisdictions or industry areas subject to applicable sanctions regimes, effective as of December 31, 2023 unless sooner lifted. To ratify the membership revocations previously approved by the ExCo of the Crossref members listed in the meeting materials. 
To authorize Crossref personnel, further to the July 2019 resolution of the Board authorizing the for-cause termination of OMICS Publishing Group and several related entities, to terminate the membership of any additional Crossref member that is identified by Crossref staff as a likely affiliate of OMICS or controlled by its founder, where the member is unable to provide evidence of independent ownership. March 2023 Board meeting Lisa Schiff as Chair of the Board. (CDL abstaining.) Rose L’Huillier as Treasurer. (Elsevier abstaining.) To appoint each of Wendy Patterson and Christine Stohn to the Executive Committee. (Beilstein Institut and Clarivate abstaining) To appoint James Phillpotts to the Executive Committee. (Oxford University Press abstaining) To appoint Lucy Ofiesh as Secretary and Ed Pentz as Assistant Secretary. To appoint Aaron Wood as Chair of the Nominating Committee. (APA abstaining) To appoint Penelope Lewis as Chair of the Audit Committee. (AIP abstaining) To approve the agenda for the March 8-9, 2023 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To limit the 2023 Board slate to a number of nominees that is at least equal to, and exceeds by no more than two, the number of available seats in each of the Revenue Tier categories. To provide the following guidance to the Nominating Committee: To achieve balance between Revenue Tiers by proposing a 2023 slate consisting of Revenue Tier 1 seats and Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between Board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref’s Bylaws); and To provide the following further guidance to the Nominating Committee with respect to the choice of the slate of candidates for election to the Board at the 2023 annual meeting: Construct a slate of nominees that is at least equal to, and exceeds by no more than one, the number of available seats in each of the Revenue Tier categories (as defined in Crossref’s Bylaws); Prioritize maintaining representation of members including business models (commercial and non-commercial), geography, and sector (such as research institutions, libraries, funders, etc.) in addition to continuing to seek balance of personal characteristics of the individual representative, such as gender, language, and racial and ethnic background; Work with staff to develop a call for interest that reflects those areas of skill that are functional priorities for the Board, and clearly describes requirements with non-native English speakers in mind; and Prepare strategic issues for discussion at the July board meeting about nominating and governance goals and practices. 2022 Motions passed November 2022 Board meeting To approve the agenda for the November 8-9, 2022 meeting of the Board of Directors. (APA, Open Edition abstaining.) To approve the Consent Agenda as set forth in the meeting materials. (APA, Open Edition abstaining.) To approve Crossref’s FY 2023 budget as presented. (APA abstaining.) To approve the proposal to evolve the fee assistance program into a more expansive Global Equitable Membership (GEM) Program involving: waiving annual membership fees as well as content registration fees; and offering this by default to all eligible countries irrespective of joining through any specific Sponsor, or independently, subject to annual review. 
July 2022 Board meeting All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the July 12-13, 2022 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. (APA abstaining.) To ratify the appointment of Aaron Wood to fill the American Psychological Association seat vacated by Jasper Simons, to serve out the remainder of the current term or until his respective successor is duly appointed and qualified. (APA abstaining.) To approve the FY2021 audited financial statements, and to accept the recommendations from the Audit Committee. Crossref commits to support, along with the California Digital Library and DataCite, the long term sustainability of ROR by funding a share of ROR operating costs through its normal expense budget on an ongoing basis, along with joint governance and sustainability oversight. Crossref will include support for ROR in the normal budget proposal presented to the board in November. (CDL abstaining.) March 2022 Board meeting All motions passed unanimously except as otherwise noted.\nTo elect Reshma Shaikh as Chair of the Board. (Springer Nature abstaining) To elect Damian Pattison as Treasurer. (eLife abstaining) To appoint each of Rose L’Huillier and Wendy Patterson to the Executive Committee. (Elsevier and Beilstein Institut abstaining) To appoint Lucy Ofiesh as Secretary and Ed Pentz as Assistant Secretary of the Corporation. To appoint Jasper Simons as Chair of the Audit Committee. (APA abstaining) To appoint Abel Packer as Chair of the Membership \u0026amp; Fees Committee. (SciELO abstaining) To appoint Todd Toler as Chair of the Membership \u0026amp; Fees Committee. To ratify the appointment of Marc Hurlbert to fill the Melanoma Research Alliance seat vacated by Kristin Mueller, and of Damian Pattison to fill the eLife seat vacated by Melissa Harrison, in each case to serve out the remainder of the current term or until their respective successor is duly appointed and qualified. (eLife abstaining) To approve the agenda for the March 9-10, 2022 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. That, based on a technical assessment, Crossref will change its reference distribution policy so that all references registered with Crossref are treated the same as other metadata, following a planned transition. To limit the 2022 Board slate to a number of nominees that is at least equal to, and exceeds by no more than one, the number of available seats in each of the Revenue Tier categories. 
(6 in favor; 5 opposed; Clarivate abstaining) To provide the following guidance to the Nominating Committee: To achieve balance between Revenue Tiers by proposing a 2022 slate consisting of Revenue Tier 1 seats and Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between Board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref’s Bylaws); and To provide the following further guidance to the Nominating Committee with respect to the choice of the slate of candidates for election to the Board at the 2022 annual meeting: Construct a slate of nominees that is at least equal to, and exceeds by no more than one, the number of available seats in each of the Revenue Tier categories (as defined in Crossref’s Bylaws); Prioritize maintaining representation of members having both commercial and non-commercial business models, in addition to continuing to seek balance across factors such as gender, ethnic and racial background, geography, and sector; Work with staff to develop a call for interest that reflects those areas of skill that are functional priorities for the Board; and Take into account any recommendations from the previous year’s Nominating Committee. To appoint Liz Allen to the Executive Committee. (Taylor \u0026amp; Francis abstaining) 2021 Motions passed November 2021 Board meeting All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the November 9-10, 2021 meeting of the Board of Directors. To approve the Consent Agenda as set forth in the meeting materials. To adopt the proposed funds framework and investment policy changes as set forth in the meeting materials; and To authorize (1) the Investment Committee to establish a set of high-level guidelines for ethical and sustainable investment practices, and (2) the Leadership Team and the Investment Committee to implement Crossref’s capital investments consistent with those guidelines. To adopt Crossref’s proposed travel and events commitments, including budget levels, set forth in the meeting materials. To authorize the Leadership Team, supported by counsel, to proceed with negotiations with Turnitin, LLC based on the recommendations set forth in the Board meeting materials, as well as to (1) further investigate options for image detection and (2) continue to promote and cultivate longer-term alternative solutions within the plagiarism detection market. To approve Crossref’s FY 2022 budget as presented. July 2021 Board meeting All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the July 13-14, 2021 meeting of the Board of Directors. To approve the consent agenda as set forth in the meeting minutes. To approve the FY2020 audited financial statements, and to accept the recommendations from the Audit Committee. To ratify the Executive Committee’s termination of Graduate School of Economics and Management membership. To terminate the subsequent membership of University of Economic and Management. To disallow membership for any applications that the Leadership Team determines are the same entity or organization trying to regain membership. To approve starting the recruitment and hiring for an additional infrastructure position as soon as possible. March 2021 Board meeting: All motions passed unanimously except as otherwise noted.\nTo elect Scott Delman as Chairman of the Board. (ACM abstaining) To elect Catherine Mitchell as Treasurer.
(CDL abstaining) To appoint each of Melissa Harrison, Rose L’Huillier, and Reshma Shaikh to the Executive Committee. (eLife, Springer Nature, and Elsevier abstaining) To appoint Lucy Ofiesh as Secretary and Ed Pentz as Assistant Secretary of the Corporation. To appoint Jasper Simons as Chair of the Audit Committee. (APA abstaining) To appoint Todd Toler as Chair of the Membership \u0026amp; Fees Committee. (Wiley abstaining) To ratify the appointment of Dean Sanderson to fill the AIP seat vacated by Jason Wilde for the remainder of AIP’s term until March 2021. (AIP abstaining) To approve the agenda for the March 9-10, 2021 meeting of the Board of Directors. To approve the consent agenda as set forth in the meeting minutes. To appoint Liz Allen as Chair of the Nominating Committee. (Taylor \u0026amp; Francis abstaining) To instruct the Nominating Committee to (1) put forward a slate to fill three Tier 1 seats and two Tier 2 seats with one additional candidate per tier, for a total of four Tier 1 candidates and three Tier 2 candidates; and (2) to propose at least one name from a funder member for the current round of elections, with the Crossref Board to review this approach following the 2021 Board election. (APA and AJOL abstaining.) 2020 Motions passed November 2020 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the November 10-11, 2020 meeting of the Board of Directors.\nTo approve the Minutes of the July 2020 meeting of the Board of Directors.\nTo approve the Minutes of the October 2020 strategy session of the Board of Directors.\nTo hold the March 2021 Crossref Board meeting, in virtual format, on March 9-10, 2021.\nTo adopt the Audit Committee’s report as presented.\nTo endorse the Principles of Open Scholarly Infrastructure, as set forth in the meeting materials. (APA voting against; Elsevier abstaining.)\nTo approve the 2020 budget as presented.\nThat the Crossref Board supports another organization’s taking ownership of the Distributed Usage Logging (“DUL”) initiative. Crossref will support the DUL proof of concept, as is, until the March 2021 Crossref Board meeting or until ownership has been transitioned, whichever is sooner. 
Crossref will provide adequate transition support when the service migrates, and continue to support the registration of article-level DUL endpoint metadata if another organization takes over and maintains the service.\nJuly 2020 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the July 8-9, 2020 meeting of the Board of Directors.\nTo approve the Minutes of the March 2020 meeting and May and June 2020 strategy sessions of the Board of Directors, with the revisions proposed by participants.\nTo adopt the Minutes of the Crossref Executive Committee meetings of April 3, April 17, May 1, May 15, May 29, and June 12, 2020.\nTo adopt the auditors’ recommendations with respect to further updating and documenting Crossref’s internal controls.\nTo direct the Crossref leadership team to analyze the options with respect to type and frequency of regular audits of Crossref’s internal controls, and to present a proposal to the Board at its November 2020 meeting.\nTo adopt the 2019 audited financial statements of the Company.\nTo direct the organization to create a standing Investment Committee to provide guidance to the board and leadership team on managing Crossref’s financial assets, which committee shall (1) seek Board approval for any change in investment strategy and (2) work closely with the leadership team with respect to communications regarding Crossref’s asset management policies and related matters.\nTo revise the Day 2 Board meeting agenda as proposed.\nThat Crossref should proactively lead an effort to explore, with other infrastructure organizations and initiatives, how we can improve the scholarly research ecosystem. Crossref is committed to the collaborative development of open scholarly infrastructure for the benefit of our members and the wider research community. Abstaining: Open Edition, SciELO\nThat the exploration referenced in the foregoing resolution should consider a range of options looking at operational, governance, technical, and product and service issues and how the organizations could take advantage of synergies, efficiencies, and opportunities for the benefit of the wider research community by working more closely together.\nTo establish an ad hoc Exploratory Committee to determine, within six weeks, concrete next steps in exploring broader scholarly infrastructure partnerships under the auspices of Crossref; said committee to include the Crossref leadership team and board members at large with relevant experience and no conflicts of interest.\nTo constitute the ad hoc Exploratory Committee to consist of five members of the Crossref leadership team and four members of the Crossref Board. 
Abstaining: ACM, SciELO\nThat the Executive Committee’s May 29, 2020 termination of two sponsoring members, as more particularly set forth in the meeting materials, is hereby ratified, and staff directed to work with counsel to explore alternative modes of Crossref participation for similarly-situated members consistent with US and other applicable legal constraints.\nThat the Executive Committee’s May 29, 2020 termination of certain Crossref members linked with former Crossref member OMICS is hereby ratified.\nTo accept with pleasure, subject to appropriate legal review and documentation, the rescission of Ed Pentz’s resignation as Executive Director of Crossref.\nMarch 2020 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the March 11-12, 2020 meeting of the Board of Directors.\nTo ratify the appointment of Rose L’Huillier to fill the Elsevier seat vacated by Chris Shillum, and of Andrew Smeall to fill the Hindawi seat vacated by Paul Peters, in each case to serve out the remainder of the current term or until his/her successor is duly appointed and qualified.\nTo appoint Lucy Ofiesh as Secretary of the Corporation.\nTo appoint each of Amy Brand, Catherine Mitchell, and Reshma Shaikh to the Executive Committee. (MIT Press, CDL, and Springer Nature abstaining.)\nTo appoint Andrew Smeall as Chair of the Audit Committee. (Hindawi and Springer Nature abstaining.)\nTo adopt the recommendation of the Executive Committee with respect to acquisitions of Crossref members to provide that (1) any consolidated enterprise cannot occupy more than one Board seat, but can permit a subsidiary organization to occupy its Board seat; (2) wholly-owned subsidiaries should not be treated as separate Crossref members, but should be part of a single member whose fees are based on enterprise-level revenues; and (3) pursuant to the foregoing principles, Taylor \u0026amp; Francis and F1000 Research will be treated as a single member following the acquisition of F1000 Research by Taylor \u0026amp; Francis. (AJOL, SciELO, and Taylor \u0026amp; Francis abstaining.)\nTo approve the Minutes of the November 2019 meeting of the Board of Directors. (AJOL, Springer Nature, Clarivate Analytics, Elsevier, Wiley, and Hindawi abstaining.)\nTo provide the following guidance to the Nominating Committee: To achieve balance between Revenue Tiers by proposing a 2020 slate consisting of four Revenue Tier 1 seats and two Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between Board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref’s Bylaws).\nTo provide the following further guidance to the Nominating Committee with respect to the choice of the slate of candidates for election to the Board at the 2020 annual meeting:\nConstruct a slate of nominees that is at least equal to, and exceeds by no more than two, the number of available seats in each of the Revenue Tier categories (as defined in Crossref’s Bylaws); Prioritize maintaining representation of members having both commercial and non-commercial business models, in addition to continuing to seek balance across factors such as gender, ethnic and racial background, geography, and sector; and Work with staff to develop a call for interest that reflects those areas of technical skill that are functional priorities for the Board. To adopt the proposed 2020 scope of work for the Membership \u0026amp; Fees Committee as set forth in the meeting materials. 
(AJOL abstaining.)\nTo retire Crossref’s Text and Data Mining Click Through Service due to lack of uptake, working in consultation with those Crossref members currently utilizing the service to accommodate their reasonable timing needs.\nThat, with respect to the search process for Crossref’s next Executive Director, (1) Crossref’s Executive Committee, in consultation with Crossref’s staff directors, will conduct the search; (2) the Executive Committee will propose one candidate to the Board for ratification; and (3) in the event that the Executive Committee cannot agree on a single candidate to propose to the Board, it will seek broader assistance from the Board and senior staff, at the discretion of the Executive Committee, in order to arrive at a single candidate to propose for Board ratification.\n2019 Motions passed November 2019 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the November 12-13, 2019 meeting of the Board of Directors. To approve the Minutes of the July 2019 meeting of the Board of Directors. To adopt the amended Whistleblower Policy. That staff, working with the ad hoc Strategic Working Group, will develop the framework for a discussion of key strategic questions, including alternatives to evaluate, to be held at the March 2020 Board meeting, with facilitation if appropriate. That any resource request associated with the Distributed Usage Logging Initiative will be submitted to Crossref’s Executive Committee for review and approval. That the Membership \u0026amp; Fees Committee will prepare a revised M\u0026amp;F Committee Charter, to include elements addressing committee membership criteria and expectations for committee membership. To finalize the Board’s previous provisional decision to eliminate the Crossmark fee, in light of the presentation of Crossref’s proposed 2020 budget. To approve the 2020 budget as presented. July 2019 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the July 10-11, 2019 meeting of the Board of Directors. To approve the minutes of the March 2019 meeting of the Board of Directors, as revised to reflect participant feedback. To ratify the appointment of Melissa Harrison to fill the eLife seat vacated by Mark Patterson, to serve out the remainder of the current term until November 2019 or her successor is duly appointed and qualified. (eLife abstaining.) To institute a membership fee of $5 per Sponsored Member. (AJOL abstaining.) To keep Event Data in maintenance mode and defer sustainability model work until July 2020, and require staff to present a plan to the Board in July 2020 for Event Data Plus. To take the following actions: (1) to implement a practice whereby, when a member registers a content item for which it pays the standard Content Registration fee, then no additional content fee will be assessed for a subsequent registration by the same member of a version of the original content, provided that the appropriate isVersionOf designator is used in the metadata; (2) to treat the isTranslationOf relationship type in the same fashion; (3) to treat corrections and retractions in the same fashion when and to the extent technically feasible; and (4) to direct the Membership \u0026amp; Fees Committee to examine Crossref’s other relationship types to determine which others, if any, should be treated in the same fashion.
To approve the following Crossref Fee Principles: Crossref’s fees should: (1) Enable us to fulfill our mission to make research outputs easy to find, cite, link, assess, and reuse. (2) Encourage best practice and discourage bad practice, as Crossref policies and obligations advise. (3) Be non-discriminatory, encouraging broad participation from organizations of all sizes and types. (4) Support the long-term persistence of our services and infrastructure, so long as relevant and valuable to the community. (5) Deliver value to our members. (6) Be transparent and openly available, recommended by the Membership \u0026amp; Fees Committee and approved by the Board. (7) Be the same for all, not discounted or negotiated individually, to ensure fairness. (8) Be independent of our members’ own business models. (9) Not always be necessary, e.g., new record types are not usually separate services. (10) Be based on services not metadata. To approve the elimination of the Crossmark fee, subject to the 2020 budgeting process. To adopt the 2018 audited financial statements of the Company. To ratify the account termination, for cause, of OMICS Publishing Group (Member ID 2674); Ashdin Publishing (Member ID 2853); Scitechnol Biosoft Pvt. Ltd. (Member ID 9225); and Herbert Publications PVT LTD (Member ID 4912). To amend Art. I Sec. 5 of Crossref’s Bylaws by replacing the second sentence thereto in its entirety with the following text: “Suspension or expulsion shall be by a vote of the Board (or by action of the Executive Committee, to take effect at the time specified in such Executive Committee action and to be reviewed and ratified by a vote of the Board at the next subsequent Board meeting), except where the suspension or expulsion is the result of the non-payment of dues and fees, in which event the Board may delegate such authority to the Executive Director.” To approve a policy of causing the DOIs of permanently terminated member accounts to resolve first to an interstitial page with a message indicating the DOI has resolved to content of a party that is no longer a member, and then to the original version of the content (or, if no original, an archived version). (AIP, eLife abstaining.) To form a temporary ad hoc Strategic Review Working Group to examine (1) the feedback on strategic matters provided by the Board breakout groups at the July 2019 Board meeting; and (2) results of Crossref’s value research project once available. March 2019 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the March 6-7, 2019 meeting of the Board of Directors. To elect Paul Peters as Chairman of the Board. (Hindawi abstaining.) To elect Scott Delman as Treasurer. (ACM abstaining.) To appoint each of Amy Brand, Wim van der Stelt, and Jason Wilde to the Executive Committee. (MIT Press, Springer Nature, and AIP abstaining.) To appoint Chris Shillum as Chair of the Audit Committee. (Elsevier abstaining.) To appoint Jasper Simons as Chair of the Nominating Committee. (APA abstaining.) To appoint each of Graham McCann and Mark Patterson to the Audit Committee. (IOP and eLife abstaining.) To appoint each of Scott Delman and Catherine Mitchell to the Nominating Committee. (ACM and CDL abstaining.) To appoint Lisa Hart as Secretary of the Corporation. To approve the minutes of the November 2018 meeting of the Board of Directors. To amend Art. VII, Sec. 
2 of Crossref’s Bylaws by inserting the following language after the second sentence thereof: Each such slate will be comprised such that, as nearly as practicable, one-half of the resulting Board shall be composed of Directors designated by Members then representing Revenue Tier 1; and one-half of the resulting Board shall be composed of Directors designated by Members then representing Revenue Tier 2. “Revenue Tier 1” means all consecutive membership dues categories, starting with the lowest dues category, that, when taken together, aggregate, as nearly as possible, to fifty percent (50%) of Crossref’s annual revenue. “Revenue Tier 2” means all membership dues categories above Revenue Tier 1. To adopt the Crossref Board Election Campaign Policy as proposed in the Board materials. To provide the following guidance to the Nominating Committee: To achieve balance between Revenue Tiers by proposing a 2019 slate consisting of one Revenue Tier 1 seat and four Revenue Tier 2 seats, and a 2020 slate consisting of four Revenue Tier 1 seats and two Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref’s Bylaws). To provide the following further guidance to the Nominating Committee with respect to the choice of the slate of candidates for election to the Board at the 2019 annual meeting: Construct a slate of nominees that is at least equal to, and exceeds by no more than two, the number of available seats in each of the Revenue Tier categories (as defined in Crossref’s Bylaws); Prioritize maintaining representation of members having both commercial and non-commercial business models, in addition to continuing to seek balance across factors such as gender, ethnic and racial background, geography, and sector; and Work with staff to develop a call for interest that reflects those areas of technical skill that are functional priorities for the Board. To authorize the Treasurer, Executive Director, and Secretary to open a new account at a mutually agreed-upon bank, and formally close Crossref’s Citizen’s Bank account. To ratify the appointment of Ingrida Kasperaitienė to fill the VGTU seat vacated by Eleonora Dagiene, to serve out the remainder of their current term. 2018 Motions passed November 2018 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the November 15, 2018 meeting of the Board of Directors. To approve the minutes of the July 2018 meeting of the Board of Directors. To approve the recommendation of the Membership \u0026amp; Fees Committee that Crossref begin admitting funders as members and register grant identifiers, and to approve the fee structure proposed. To approve the proposed Definitive Agreement with Turnitin, LLC, subject to (1) revisions to address Board member comments summarized in the Minutes and (2) further discussions to accommodate the content licensing concern expressed by Board meeting participants. To approve the adoption of a governance structure pursuant to which Board seats will be, as nearly as practicable, designated by revenue tier, with two categories (large and small), defined so as to roughly reflect half of Crossref’s revenue and registered content items in each category. (PASSED with one abstention: MIT Press.)
To approve the promulgation of a policy on campaigning in Board elections, with policy language to be developed by staff to reflect, at a minimum, a prohibition on negative campaigning and member expenditure of funds on campaigns and a distinction between passive versus active campaigning. To approve the process proposed by the Governance Committee for nominations to each of the following positions: Chair, Treasurer, Executive Committee members, the Nominating Committee Chair, and the Audit Committee Chair. To amend Art. VII of the Bylaws to remove Section 3 (Independent Nominations). To amend Art. I Sections 2 and 3 of the Bylaws to reflect the current means and sequence of member acceptance, as more particularly set forth in the Board meeting materials under “Proposed Amendments to the Crossref Bylaws.” To approve the 2019 budget as proposed, with the addition of up to $50,000 for the Distributed Usage Logging initiative. July 2018 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the July 11-12, 2018 meeting of the Board of Directors. To approve the minutes of the March 2018 meeting of the Board of Directors. To adopt the 2017 audited financial statements of the Company. To approve, with recommended revisions, the revised form of Crossref Membership Terms. There was general consensus to make three revisions to the draft terms: using the term “Crossref Infrastructure and Services” throughout the document; qualifying a member’s obligation to comply with the Crossref Display Guidelines; and clarifying the GDPR compliance language. To approve the proposed Term Sheet with Turnitin, LLC, subject to revisions: (1) to expressly retain the parties’ existing terms with respect to full text use and reuse by Turnitin; and (2) to provide that the final agreement will include required milestones and deliverables coupled with express remedies, including termination of exclusivity, for certain milestone/deliverable failures. To amend Art. I Sec. 1 of Crossref’s Bylaws by replacing the text of Art. I Sec. 1 in its entirety with the following text: “Membership in Crossref shall be open to any organization that publishes professional and scholarly materials and content and otherwise meets the terms and conditions of membership established from time to time by the Board of Directors, and to such other entities as the Board of Directors shall determine from time to time.” To amend Art. V Sec. 4 to replace the phrase “on the day after” with the phrase “during the next calendar quarter immediately following”. To promulgate a policy on board alternate participation, pursuant to which (1) alternates are encouraged and welcomed to attend each meeting of Crossref’s Board of Directors that is held concurrently with Crossref’s annual meeting; (2) only a director or their alternate, but not both, may attend any other meeting of the Crossref Board of Directors; and (3) Crossref’s need-based reimbursement policy will explicitly cover the participation of alternates. March 2018 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the March 7-8, 2018 meeting of the Board of Directors. To appreciate the contributions of Lois Wasoff to the Company. To approve the minutes of the November 2017 Board meeting. 
Board authorizes Crossref to make a proposal to DataCite and ORCID to proceed with an Org ID initiative on the following principles: OrgID activities would be conducted through a new legal entity (or whatever structural approach is optimal from a legal standpoint), with a governing body consisting of Crossref, DataCite, and ORCID, and potentially other nonprofit representative bodies. Other interested parties who wish to contribute financially to the entity would be welcome to participate in a non-governance role. Crossref is willing to commit $300,000 as follows: $30,000 in 2018; and $270,000 in additional startup funding, contingent on raising an additional $400,000 from other stakeholders by mid-October (so that results are available by Crossref’s November 2018 meeting). Further funding beyond Year 1 will be contingent on a full business plan being developed and approved by the Crossref Board at its November 2018 meeting. To give the Nominating Committee the following guidance with respect to the choice of the slate of candidates for election to the Board at the 2018 annual meeting: Create a slate of nominees that encourages engagement in the election of directors through a contested election; maintain stability by constructing a slate that exceeds the number of available seats by no more than two. Construct a slate based on the quality of the expressions of interest from candidates and maintain balance across organizational size, gender, and geography. To create an ad hoc Finance Committee of the Board to project the financial profile of the Company on a two- to three-year prospective basis; and analyze and respond to the financial and revenue implications of the Company’s various strategic initiatives. 2017 Motions passed November 2017 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the July 2017 Board Meeting. To accept the minutes of the Executive Committee telephone meetings on October 6, October 27 and November 7, 2017. To elect Paul Peters as Chair of the Board of Directors, after a Board vote done through written ballots in which Paul Peters and Chris Shillum were each nominated as Chair and a majority of the votes were cast for Paul. To elect Scott Delman as Treasurer, Lisa Hart as Secretary, and Ed Pentz as Executive Director and Assistant Secretary. To elect Jason Wilde, Chris Shillum and John Shaw to the Executive Committee. To elect Wim van der Stelt as Chair of the Audit Committee, and Duncan Campbell and Helen King as members of the Audit Committee. To elect Mark Patterson as the Chair of the Nominating Committee, and to defer the appointment of other Nominating Committee members until the March Board meeting. To elect Graham McCann as the Chair of the Membership \u0026amp; Fees Committee. To authorize Crossref staff to reply to the Organization Identifier Working Group’s request for information (RFI) in accordance with the recommendations from the staff report, with specific guidance that the response should state Crossref’s willingness to take a leading role in the development of an independent organization identifier registry, that there is a strong preference that a new joint venture collaboration be formed and not a new non-profit organization, and that Crossref can make resources available to support the joint venture collaboration along with grants and funding from other organizations. To approve the 2018 budget as proposed.
To create an ad hoc Governance Committee, to comprise Paul Peters, Mark Patterson, Chris Shillum, Ian Bannerman, Wim van der Stelt and Lisa Hart Martin with Lois Wasoff as counsel, which will make specific recommendations to the Board at the March meeting. To create an ad hoc Technical Committee to look at infrastructure issues, the membership of which will be determined by seeking expressions of interest from the members of the Board. July 2017 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the March 2017 Board Meeting, as corrected. To accept the minutes of the Executive Committee telephone meetings on April 21 and June 23, 2017. To approve the creation of an ad hoc Finance Committee, the members of which will be appointed at the November Board meeting. To formally recognize Bernie Rous’s contributions to Crossref and to scholarly publishing over his 40-year career. To approve the recommendations of the Membership \u0026amp; Fees Committee with respect to volume discounts for current deposits of posted content. (PASSED with AIP, IEEE, Elsevier opposed; and ACM abstaining.) To approve the recommendations of the Membership \u0026amp; Fees Committee with respect to the creation of “peer review” as a new record type, with specific metadata schema and a bundled fee of $1.25 to be charged, with the clarifications that (i) the original, unaccepted author manuscript is to be excluded; and (ii) although the number of peer reviews that can be registered at the bundled fee will be unlimited now, Crossref may make changes in future based on actual experience with the new record type. To approve the recommendations of the Membership \u0026amp; Fees Committee with respect to updating the metadata delivery offering to have a single agreement that covers all metadata APIs/delivery routes, to adopt a single (updated) fee structure, and to remove case-by-case opt-outs for metadata. To approve the audited financials. March 2017 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the November 2016 Board Meeting, as corrected. To accept the minutes of the Executive Committee telephone meetings on January 24 and February 9, 2017. To appoint Eric Merkel-Sobotta to fill the vacancy on the Audit Committee. To approve the proposed charge to the Audit Committee, with the addition of language giving the Audit Committee the responsibility of overseeing technical and security audits of the company’s systems. 2016 Motions passed November 2016 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the July 2016 Board Meeting. To accept the minutes of the Executive Committee telephone meetings on September 30 and October 21, 2016. To elect Bernard Rous as Chairman and President, Gerry Grenier as Treasurer and Vice Chairman, Ed Pentz as Executive Director and Assistant Secretary, and Lisa Hart as Secretary. To elect Ian Bannerman, Chris Shillum and Jason Wilde to serve on the Executive Committee along with Bernard Rous and Gerry Grenier. To appoint John Shaw as Chair, and Paul Peters, Reny Guida (IEEE) and Mark Patterson as members, of the Nominating Committee with authority to appoint two additional members who represent companies that are not on the Board. 
To appoint James Walker as Chair, and Wim van der Stelt and Jasper Wilde as members of the Audit Committee. To appoint Scott Delman as Chair of the Membership \u0026amp; Fees Committee. To approve the recommendation of the Membership \u0026amp; Fees Committee with respect to pricing for the registration of DOIs for preprints, with volume discounts to be applied only to backfile deposits. To approve ongoing participation in the working group being formed to look at Organization Identifiers and the expenditure of up to US$30,000 from capital reserves to support that participation. To approve the 2017 budget as proposed. July 2016 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the March 2016 Board meeting. To accept the minutes of the Executive Committee telephone meetings on April 26 and June 24, 2016. To amend Section 2, Article VII of the bylaws to add the words “at least” before the words “equal in number” in the second sentence, so that the sentence will read “The Nominating Committee shall designate a slate of candidates for each election that is at least equal in number to the number of Directors to be elected at such election.” (PASSED, with four opposing: Springer, Elsevier, IEEE and Sage; ACM abstaining.) To adopt the revised financial policy. To accept the report from the auditors. To approve the business model for Event Data recommended by the Membership and Fees Committee subject to the clarification of the definition of “reseller” and to delegate approval of that clarification to the Executive Committee. March 2016 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the November 2015 Board meeting. To accept the minutes of the Executive Committee telephone meeting on January 27, 2016, as corrected. To appoint Jasper Simons as the Chair of the Nominating Committee, to appoint Jason Wilde and Paul Peters as committee members, and to authorize the Nominating Committee to identify two additional committee members representing companies that are not on the Board. To delete the first sentence of Section 4a of PILA’s financial policy # 3 (Approval Authority), to eliminate the requirement that salaries and other compensation of all non-officer persons who report directly to the Executive Director must be jointly approved by the Executive Director, Treasurer and President, and to delegate the authority to set such compensation to the Executive Director. (PASSED with one abstention (IOP).) To apportion 10% of the capital reserve fund to be invested in accordance with a different investment policy from the rest of the capital reserve fund, in a professionally managed portfolio consisting of dividend paying stocks and other instruments. To remove from the current investment policy the requirement that maturity of securities be targeted at January 31 of each year. To disband the Taxonomies Interest Group. 2015 Motions passed November 2015 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the July 2015 Board Meeting as corrected. To accept the minutes of the Executive Committee telephone meeting on September 25, 2015. To elect Bernard Rous as Chairman and President, Gerry Grenier as Treasurer and Vice Chairman, Ed Pentz as Executive Director and Assistant Secretary, and Lisa Hart as Secretary.
To elect Ian Bannerman, Kathleen Keane and Chris Shillum to serve on the Executive Committee along with Bernard Rous and Gerry Grenier. To appoint Jasper Simons as Chair of the Nominating Committee. To appoint James Walker as Chair of the Audit Committee and Carsten Buhr and Renny Guida as members of the Audit Committee. To appoint Scott Delman as Chair of the Membership \u0026amp; Fees Committee. To approve the recommendation of the Membership \u0026amp; Fees Committee with respect to pricing for registration of DOIs for standards (PASSED with Elsevier abstaining). To approve the language changes to the membership rules to cover registration of preprints as proposed by staff. To approve the 2016 budget as proposed. July 2015 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting. To approve the minutes of the March 2015 Board Meeting. To accept the minutes of the Executive Committee telephone meetings on April 13 and June 12, 2015. To accept the recommendation of the Membership \u0026amp; Fees Committee that the member fees be unchanged for 2016. To approve the audited financials. To approve the recommendation to create a new Director of Product Management position. To designate the interest generated by the Capital Reserve Fund as part of the fund. To change the minimum cash balance from 3 months operating expenses to 4 months operating expenses. With respect to the proposed DOI Event Tracking (DET) service: To support the launch of the DET service and to authorize staff to move forward with the development of a plan; To ask the M\u0026amp;F Committee to review and refine the proposed sustainable revenue model; and To establish a Crossref DET Committee to oversee and guide the ongoing development of the DET service and report back to the board. To change current membership rules 12 and 13 to eliminate inconsistencies and reflect current member practice and allow assignment of DOIs to preprints in accordance with the procedures described in the duplicative works report submitted to the board. (PASSED with PLOS, VGTU Press, Hindawi Limited, IOP, de Gruyter, Johns Hopkins, Springer, and IEEE voting in favor; ACM, AIP, Sage and APA voting against; and Elsevier abstaining). March 2015 Board meeting: All motions passed unanimously except as otherwise noted.\nTo approve the agenda for the meeting with the addition of the discussion of a possible acquisition to the second day’s agenda. To approve the minutes of the November 2014 Board Meeting, as amended at the meeting. To accept the minutes of the Executive Committee telephone meeting on February 13, 2015. To give the Nominating Committee the following guidance with respect to the choice of the slate of candidates for election to the Board at the 2015 annual meeting: In designating the slate of candidates, take into account issues of Board composition and balance, with the goal that the board fairly represent the membership; Look at the balance between large, medium and small members, the balance between non-profit and commercial organizations and the geographic location of Board members; Look at issues such as board meeting attendance, committee participation and serving as an officer when considering candidates for the slate; Complete its work sufficiently in advance of the annual meeting to permit independent nominations. To appoint Chris Shillum as a member of the Executive Committee to fill the vacancy created by Carol Richman’s retirement. 
To adopt the whistleblower policy as presented to the board. Earlier motions Here are the motions passed by the board from 2010 to 2014.\nMotions 2014 PDF Motions 2013 PDF Motions 2012 PDF Motions 2011 PDF Motions 2010 PDF Policy on term limits The board adopted the following policy in November 2009:\nNon-officer members of the Executive Committee (that is, members of the Executive Committee other than the Chairman and the Treasurer) may serve no more than three (3) consecutive one-year terms on the Executive Committee. After a break in service of at least one (1) year, the term-limited director shall again be eligible to serve on the Executive Committee. Years of service on the Executive Committee as an officer shall not be included in calculating the number of consecutive terms served. A director may serve no more than three (3) consecutive one-year terms as Chair. After a break in service of at least one (1) year, the term-limited director shall again be eligible to serve as Chair. A director may serve no more than three (3) consecutive one-year terms as Treasurer. After a break in service of at least one (1) year, the term-limited director shall again be eligible to serve as Treasurer. The limitations set forth in this Board Policy apply to the directors as representatives of their member companies, and not as individuals. For avoidance of doubt, this means that if a member company designates a successor representative to serve on the Board, as set forth in the By-Laws, the years of service of that member company’s prior representative on the Executive Committee, as Chair or as Treasurer (as applicable), will be included in determining whether the newly appointed representative is term-limited under this policy. This Board Policy is being implemented pursuant to resolutions adopted at the July 2009 meeting of the PILA Board of Directors and is in effect as of the date of that meeting. For purposes of determining whether a director is term-limited under this policy, each director will be deemed to have commenced service in the relevant capacity as of November 2008.
Please contact our operations director with any questions about our governance.\n", "headings": ["Officers","Board members","2024 Motions passed","November 2024 Board meeting","July 2024 Board meeting","March 2024 Board meeting","January 2024 Board meeting","2023 Motions passed","November 2023 Board meeting","July 2023 Board meeting","March 2023 Board meeting","2022 Motions passed","November 2022 Board meeting","July 2022 Board meeting","March 2022 Board meeting","2021 Motions passed","November 2021 Board meeting","July 2021 Board meeting","March 2021 Board meeting:","2020 Motions passed","November 2020 Board meeting:","July 2020 Board meeting:","March 2020 Board meeting:","2019 Motions passed","November 2019 Board meeting:","July 2019 Board meeting:","March 2019 Board meeting:","2018 Motions passed","November 2018 Board meeting:","July 2018 Board meeting:","March 2018 Board meeting:","2017 Motions passed","November 2017 Board meeting:","July 2017 Board meeting:","March 2017 Board meeting:","2016 Motions passed","November 2016 Board meeting:","July 2016 Board meeting:","March 2016 Board meeting:","2015 Motions passed","November 2015 Board meeting:","July 2015 Board meeting:","March 2015 Board meeting:","Earlier motions","Policy on term limits"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/in-the-know-on-workflows-the-metadata-user-working-group/", "title": "In the know on workflows: The metadata user working group", "subtitle":"", "rank": 1, "lastmod": "2023-02-28", "lastmod_ts": 1677542400, "section": "Blog", "tags": [], "description": "What’s in the metadata matters because it is So.Heavily.Used.\nYou might be tired of hearing me say it but that doesn’t make it any less true. Our open APIs now see over 1 billion queries per month. The metadata is ingested, displayed and redistributed by a vast, global array of systems and services that in whole or in part are often designed to point users to relevant content. It’s also heavily used by researchers, who author the content that is described in the metadata they analyze.", "content": "What’s in the metadata matters because it is So.Heavily.Used.\nYou might be tired of hearing me say it but that doesn’t make it any less true. Our open APIs now see over 1 billion queries per month. The metadata is ingested, displayed and redistributed by a vast, global array of systems and services that in whole or in part are often designed to point users to relevant content. It’s also heavily used by researchers, who author the content that is described in the metadata they analyze. It’s an interconnected supply chain of users large and small, occasional and entirely reliant on regular querying.\nTl;dr Crossref recently wrapped up our first Working Group for users of the metadata, a group that plays a key role in discoverability and the metadata supply chain. You can jump directly to the stakeholder-specific recommendations or take a moment to share your use case or feedback.\nWhy a metadata user group? Why now? A majority of Crossref metadata users rely on our free, open APIs and many are anonymous. A small but growing group of users pay for a guaranteed service level option and while their individual needs and feedback have long been integrated into Crossref’s work, as a group they provide a window into the workflows and use cases for the metadata of the scholarly record. 
As this use grows in strategic importance to both Crossref and the wider community, it became clear that we might be overdue for a deeper dive into user workflows.\nIn 2021, we surveyed these subscribers for their feedback and brought together a few volunteers over a series of 5 calls to dig into a number of topics specific to regular users of metadata. This group, the first primarily non-member working group at Crossref, wrapped up in December 2022, and we are grateful for their time:\nAchraf Azhar, Centre pour la Communication Scientifique Directe (CCSD) Satam Choudhury, HighWire Press Nees Jan van Eck, CWTS-Leiden University Bethany Harris, Jisc Ajay Kumar, Nova Techset David Levy, Pubmill Bruno Ohana, biologit Michael Parkin, European Bioinformatics Institute (EMBL-EBI) Axton Pitt, Litmaps Dave Schott, Copyright Clearance Center (CCC) Stephan Stahlschmidt, German Centre for Higher Education Research and Science Studies (DZHW) This post is intended to summarize the work we did, to highlight the role of metadata users in research communications, to provide a few ideas for future efforts and, crucially, to get your feedback on the findings and recommendations. Though this particular group set out to meet for a limited time, we hope this report helps facilitate ongoing conversations with the user community.\nSurvey Highlights If you’re looking for an easy overview of users and use cases, here’s a great starting point.\n[Graphic: overview of metadata users and their use cases] If you interpret this graphic to mean that there is a lot of variety centered on a few high level use cases, the survey and our experiences with users certainly support that. A few key takeaways from the 2021 survey may be useful context:\nFrequency of use: At least 60% of respondents query metadata on a daily basis Use cases Finding and enhancing metadata as well as using it for general discovery are all common use cases For most users, matching DOIs and citations is a common need but for a significant group, it is their primary use case Analyzing the corpus for research was a consistent use case for 13% of respondents Metadata of particular interest Abstracts are the most desirable non-bibliographic metadata, followed by affiliation information, including RORs Some other elements (beyond citation information) that respondents find useful are: Corrections and retractions Relationship metadata Book chapters Grant information NB: The survey did not ask about references but we are frequently asked why they’re not included more often.\nIt’s also worth noting that about a third of respondents said that correct metadata is more important to them than any particular element.\nThere is more to this survey that isn’t covered here but it was kept fairly short to help with the response rate. Knowing we would have some focused time to discuss issues too numerous or nuanced to reasonably address in a survey, we compiled a long list of questions and topics for the Working Group, then followed up with a second, more detailed survey to kick off the meeting series.\nWhat we set out to address We had three primary goals for this Working Group:\nHighlight the efforts of metadata users in enabling discovery and discoverability Determine direction(s) for improved engagement Inform the Crossref product development roadmap for metadata retrieval services Of course, everyone involved had some questions and topics of interest to cover, including (but not limited to):\nUnderstanding publisher workflows How best to introduce changes, e.g.
for a high volume of updated records Understanding the Crossref schema Query efficiencies, i.e. ‘tips and tricks’ (here for the REST API) Which scripts, tools and/or programs are used in workflows What other metadata sources are used What kind of normalization or processing is done on ingest How metadata errors are handled What did we learn? Workflows\nI started with the admittedly ambitious goal of collecting a library of workflows. After a few years of working with users, I learned never to assume what a user was doing with the metadata, why or how. For example, some subscribers use Plus snapshots (a monthly set of all records) regularly or occasionally, and some don’t use them at all. Understanding why users make the choices they do is always helpful.\nIn my experience, workflows are frequently characterized as “set it and forget it.” It’s hard to know how often and how easily they might be adapted when, for example, a new record type like peer review reports becomes available. So, it’s worth exploring when and how to highlight to users changes that might be of interest.\nAs it turned out, half the group had their workflows mostly or fully documented. The rest were partially documented, not documented at all, or the availability of documentation was unknown. Helping users document their workflows, to the extent possible, should be a mutually beneficial effort to explore going forward. We\u0026rsquo;re doing similar work with the aim of making ours more transparent and replicable.\nFeedback on subscriber services\nUser feedback might be the most obvious and directly consequential work of this group, at least for Crossref - understanding how well the services they use meet their needs and what might be improved.\nOne frequent suggestion for improvement is faster response time on queries. This is an area we’ve focused on for some time, because refining queries to be more efficient is often the most straightforward way to improve response times, and it is one reason for the emphasis on workflows (a short illustrative query appears below).\nWe also discussed whether and how to notify users of changes of interest. Just defining “change” is complex, since changes are so frequent and often very minor. We’ve been experimenting a bit over the past few years with notifying these users in cases where we’re aware of upcoming large volumes of changes, which is sometimes the case when landing page URLs are updated due to a platform change, for example. It was incredibly useful to discuss with the group what volume of records would be a useful threshold to trigger a notification (100K if you’re curious).\nBut perhaps the most common feedback we get from all users is on the metadata itself and the myriad quality issues involved. The group spent a fair amount of time discussing how this affects their work and shared a few examples of notable concerns:\nAuthor name issues, e.g. ‘Anonymous’ is an option for authors but that or things like ‘n/a’ are sometimes used in surname fields Invalid DOIs are sometimes found in reference lists Garbled characters from text not rendering properly Affiliation information is often not included or incomplete (e.g. doesn’t include RORs) Inconsistencies in commonly included information, e.g. ISSNs It’s worth noting that a common misunderstanding - not just among users - is what is required in the metadata. Users nearly always expect more metadata and more consistency than is actually available.
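On the query-refinement point above, here is a minimal sketch of what a polite, efficient REST API request can look like. It is illustrative only and not taken from the working group or this post; the filter values, selected fields, and email address are placeholders to adapt.

```python
# Illustrative sketch only: a "polite", efficient query against the Crossref REST API.
# Filter values, selected fields, and the mailto address are placeholders.
import requests

BASE = "https://api.crossref.org/works"
params = {
    "filter": "from-index-date:2023-01-01,type:journal-article",
    "select": "DOI,title",        # request only the fields you actually need
    "rows": 200,                  # larger pages mean fewer round trips
    "cursor": "*",                # deep paging with cursors rather than offsets
    "mailto": "you@example.org",  # identifies you for the polite pool
}

while True:
    message = requests.get(BASE, params=params, timeout=60).json()["message"]
    if not message["items"]:
        break
    for item in message["items"]:
        pass  # process each work record here
    params["cursor"] = message["next-cursor"]  # resume where the last page ended
```

Asking only for needed fields, using larger pages, and paging with cursors are the kinds of refinements that tend to reduce both response times and load.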
The introduction of Participation Reports a few years ago was a very useful start to what is an ongoing discussion about the variable nature of metadata quality and completeness.\nUsers in the metadata supply chain\nA few years ago, our colleague Joe Wass used Event Data to put together this chart of referrals from non-publisher sources in 2015.\nThe role of metadata users in discoverability of content is key in my view and one that often doesn’t get enough attention, especially given that the systems and services that use this information often use it to point their own users to relevant resources. And because they work so closely with the metadata, users frequently report errors and so serve as a sort of de facto quality control. So, unfortunately, the effects of incomplete or incorrect metadata on these users might be the most powerful way to highlight the need for more and better metadata.\nWhat are the recommendations? In discussions with the Working Group, a few themes emerged, largely around best practices, which, by their nature, tend to be aspirational.\nIf you’re not already familiar with the personas and Best Practices and Principles of Metadata 2020, that is a useful starting point (I am admittedly biased here!) and many are echoed in the following recommendations:\nFor users:\nDocument and periodically review workflows Report errors to members or to Crossref support and reflect corrections when they’re made (metadata and content) Understand what is and isn’t in the metadata Follow best practices for using APIs For Crossref:\nDefine a set of metadata changes, e.g. to affiliations, to further the discussion around thresholds for notifying users of ‘high volumes’ of changes Provide an output schema. Continue refining the input schema to include information like preprint server name, journal article sub types (research article, review article, letter, editorial, etc.), corresponding author flags, raw funding statement texts, provenance information, etc. Collaborate on improving processes for reporting metadata errors and making corrections and enhancements For metadata providers (publishers, funders and their service providers):\nFollow Metadata 2020 Metadata Principles and Practices Consistency is important, e.g. using the same, correct relationship for preprint to VoR links for all records Workarounds such as putting information into a field that is ‘close’ but not meant for it can be considered a kind of error Understand the roles and needs of users in amplifying your outputs Respond promptly to reports of metadata errors Whenever possible, provide PIDs (ORCID IDs, ROR IDs, etc.) in addition to (not as a substitute for) textual metadata What is still unclear or unfinished? Honestly, a lot. We knew from the outset that the group would conclude with much more work to be done, in part because there is so much variety under the umbrella of metadata users and many answers lead to more questions and in part because the metadata and the user community will continue to evolve. Even without a standing group that meets regularly, it’s very much an ongoing conversation and we invite you to join it.\nNow it’s your turn–can you help fill in the blanks? Does any or all of this resonate with you? Do you take exception to any of it? Do you have suggestions for continuing the conversation?\nSpecifically, can you help fill in any of the literal blanks? We\u0026rsquo;ve prepared a short survey that we hope can serve as a template for collecting (anonymous) workflows. 
Please take just a few minutes to answer a few short questions such as how often you query for metadata.\nIf you are willing to share examples of your queries or have questions or further comments, please get in touch.\n", "headings": ["Tl;dr","Why a metadata user group? Why now?","Survey Highlights","What we set out to address","What did we learn?","What are the recommendations?","What is still unclear or unfinished?","Now it’s your turn–can you help fill in the blanks?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/jennifer-kemp/", "title": "Jennifer Kemp", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/users/", "title": "Users", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/mohamad-mostafa/", "title": "Mohamad Mostafa", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/perspectives-mohamad-mostafa-on-scholarly-communications-in-uae/", "title": "Perspectives: Mohamad Mostafa on scholarly communications in UAE", "subtitle":"", "rank": 1, "lastmod": "2023-02-27", "lastmod_ts": 1677456000, "section": "Blog", "tags": [], "description": "\rOur Perspectives blog series highlights different members of our diverse, global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nتسلط سلسلة مدونة توقعات - وجهات نظر الخاصة بنا الضوء على أعضاء مختلفين من مجتمعنا العالمي المتنوع في كروس رف .نتعلم المزيد عن حياتهم وكيف تعرفوا وعملوا معنا، ونسمع رؤى حول مشهد البحث العلمي في بلدهم، والتحديات التي يواجهونها، وخططهم للمستقبل. ", "content": "\rOur Perspectives blog series highlights different members of our diverse, global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nتسلط سلسلة مدونة توقعات - وجهات نظر الخاصة بنا الضوء على أعضاء مختلفين من مجتمعنا العالمي المتنوع في كروس رف .نتعلم المزيد عن حياتهم وكيف تعرفوا وعملوا معنا، ونسمع رؤى حول مشهد البحث العلمي في بلدهم، والتحديات التي يواجهونها، وخططهم للمستقبل. As we continue with our Perspectives blog series, today, we meet Mohamad Mostafa, Crossref Ambassador in the UAE and Production Manager at Knowledge E. Mohamad is passionate about helping improve the discoverability of research through rich metadata. 
We invite you to read and listen to what Mohamad has to say!\nبينما نواصل سلسلة مدونة توقعات - وجهات نظر الخاصة بنا، نلتقي اليوم مع محمد مصطفى، سفير كروس رف في الإمارات العربية المتحدة ومدير الإنتاج في نوليدج اي . محمد متحمس للمساعدة في تحسين إمكانية اكتشاف البحث من خلال البيانات الوصفية الغنية. ندعوكم لقراءة ما يقوله محمد والاستماع إليه! English عربي Tell us a bit about your organization, your objectives, and your role\nأخبرنا قليلاً عن مؤسستك وأهدافك ودورك\nMy name is Mohamad Mostafa, and I am the Production Manager at Knowledge E. Within our publishing program, we publish around 2000 articles across 13 titles that are fully Open Access, which is something that I really value. اسمي محمد مصطفى، وأنا مدير الإنتاج في نولدج إي. ضمن برنامج النشر الخاص بنا، ننشر حوالي 2000 مقالة عبر 13 عنوانًا مفتوح الوصول بالكامل، وهو أمر أقدره حقًا. In a world that’s moving faster than ever, the availability, quality, and pursuit of knowledge are fundamental for advancement. Knowledge E, in line with its vision of developing a more knowledgeable world, helps institutions advance the quality of their research; move towards teaching excellence; upgrade library technology, services, and practices; and advance scholarship through journal publication, management, and training. In other words, it works with higher education institutions, research centres, ministries, publishers, and scholars to solve our society’s most significant challenges. في عالم يتحرك بشكل أسرع من أي وقت مضى، يعد توافر المعرفة وجودتها والسعي وراءها أمورًا أساسية للتقدم. إن نوليدج إي، تماشياً مع رؤيتها لتطوير عالم أكثر معرفة ودراية، تساعد المؤسسات على تحسين جودة أبحاثها؛ التحرك نحو التميز في التدريس؛ ترقية مكتباتها الرقمية والخدمات والممارسات المتعلقة بها؛ ودعم المنح الدراسية المتقدمة من خلال نشر المجلات وإدارتها والتدريب. بمعنى آخر، تعمل شركة نولدج إي مع مؤسسات التعليم العالي ومراكز البحث والوزارات والناشرين والعلماء لحل أهم التحديات التي تواجه مجتمعنا.\nI am also a Crossref Ambassador. As part of the ambassador program, we aim to raise awareness about Crossref services among librarians, publishers, editors, and authors in the Middle East and North Africa region. As part of this, we run workshops in English and Arabic, emphasizing the importance of comprehensive metadata and persistent identifiers. We also help research communities improve their understanding of how to use Crossref services. The importance of making regional research objects easy to find, cite and reuse encouraged me to join the ambassador program.\nأنا أيضًا سفير كروس رف. كجزء من برنامج السفراء، نهدف إلى زيادة الوعي حول خدمات Crossref بين أمناء المكتبات والناشرين والمحررين والمؤلفين في منطقة الشرق الأوسط وشمال إفريقيا. وكجزء من هذا، فإننا ندير ورش عمل باللغتين الإنجليزية والعربية، للتأكيد على أهمية البيانات الوصفية الشاملة والمعرفات المستمرة. نحن أيضًا نساعد مجتمعات البحث على تحسين فهمهم لكيفية استخدام خدمات .Crossref شجعتني أهمية تسهيل العثور على عناصر البحث الإقليمية والاستشهاد بها وإعادة استخدامها على الانضمام إلى برنامج سفراء كروس رف.\nWhat is one thing that others should know about your country and its research activity?\nما هو الشيء الذي يجب أن يعرفه الآخرون عن بلدك ونشاطه البحثي؟\nA lot of regional research is being produced (in Arabic) and even without proper infrastructure (the lack of language support within the international publishing ecosystems such as peer review systems, indexes, citations databases, submissions systems, etc.) 
and the inadequate awareness about the various services (such as Crossref solutions) that can help with the discoverability and visibility of this research, the Arab region is increasingly recognised as a global leader in research outputs. Generally, these are some of the challenges and frustrations associated with the MENA (Middle East/North Africa) region. يتم إنتاج الكثير من الأبحاث الإقليمية (باللغة العربية) وحتى بدون بنية تحتية مناسبة (نقص الدعم اللغوي داخل أنظمة النشر الدولية مثل أنظمة مراجعة الأقران، والفهارس، وقواعد بيانات الاستشهادات، وأنظمة التقديم، وما إلى ذلك) وعدم كفاية الوعي حول الخدمات المختلفة (مثل حلول(Crossref التي يمكن أن تساعد في اكتشاف هذه البحوث وإبرازها، يتم الاعتراف بالمنطقة العربية بشكل متزايد كرائد عالمي في مخرجات البحث. بشكل عام، هذه بعض التحديات والإحباطات المرتبطة بمنطقة الشرق الأوسط وشمال إفريقيا. Are there trends in scholarly communications that are unique to your part of the world?\nهل توجد اتجاهات في الاتصالات العلمية فريدة من نوعها في الجزء الذي تعيش فيه من العالم؟\nIn general, Open Access and Open Research are getting more and more attention in our region currently. We have recently launched the Forum for Open Research in MENA to raise awareness about all the new scholarly communications trends and support the Middle East and North Africa movement towards Open Science. بشكل عام، يحظى الوصول الحر والبحث المفتوح باهتمام متزايد في منطقتنا حاليًا. لقد أطلقنا مؤخرًا منتدى الأبحاث المفتوحة في منطقة الشرق الأوسط وشمال إفريقيا لزيادة الوعي حول الاتصالات العلمية الجديدة ودعم حركة الشرق الأوسط وشمال إفريقيا نحو العلوم المفتوحة. The Forum for Open Research in MENA (FORM) is a non-profit membership organisation supporting the advancement of open science policies and practices in research communities and institutions across the Arab world. منتدى البحوث المفتوحة في الشرق الأوسط وشمال إفريقيا (FORM) هو منظمة غير ربحية ذات عضوية تدعم النهوض بسياسات وممارسات العلوم المفتوحة في المجتمعات والمؤسسات البحثية في جميع أنحاء العالم العربي. We believe the Arab world has the resources and capability to play a pivotal role in the global transition towards more accessible, sustainable, and inclusive research and education models. And we want to support all our research communities and stakeholder groups in the journey towards a more ‘open’ world. Our vision is to help unlock research for and in the Arab world. Our mission is to support the advancement of open science practices in research libraries and universities across the Arab world by facilitating the exchange of actionable insights and developing practical policies. نعتقد أن العالم العربي لديه الموارد والقدرة على لعب دور محوري في التحول العالمي نحو نماذج بحث وتعليم أكثر سهولة واستدامة وشمولية. ونريد دعم جميع مجتمعاتنا البحثية ومجموعات أصحاب المصلحة في رحلتنا نحو عالم أكثر \"انفتاحًا\". رؤيتنا هي دعم الوصول الحر والبحوث المفتوحة في العالم العربي. ومهمتنا هي دعم تقدم ممارسات العلوم المفتوحة في مكتبات البحث والجامعات في جميع أنحاء العالم العربي من خلال تسهيل تبادل الأفكار القابلة للتنفيذ وتطوير السياسات العملية. Our first Annual Forum was held in Cairo in October 2022 (as part of the global Open Access Week initiative). The event was a huge success, with over 1,100 delegates from over 48 countries across the globe. The next Annual Forum will be hosted in the UAE in October 2023, and details will be available shortly on our website. عقد المنتدى السنوي الأول في القاهرة في أكتوبر 2022 (كجزء من مبادرة أسبوع الوصول الحر العالمي). حقق الحدث نجاحًا كبيرًا، حيث حضره أكثر من 1100 مندوب من أكثر من 48 دولة حول العالم. 
سيتم استضافة المنتدى السنوي القادم في دولة الإمارات العربية المتحدة في أكتوبر 2023، وستتوفر التفاصيل قريبًا على موقعنا. How would you describe the value of being part of the Crossref community; what impact has your participation had on your goals?\nكيف تصف قيمة أن تكون جزءًا من مجتمعCrossref ؟ ما هو تأثير مشاركتك على أهدافك؟\nI have been a Crossref ambassador for more than 5 years now, and I can really say that it has been a great experience being part of such an amazing and collaborative community. We got the chance to interact with different publishers and service providers and participate in different Crossref annual events. It’s also perfectly aligned with our vision of supporting Open Research. لقد كنت سفيرًا لـ Crossref لأكثر من 5 سنوات حتى الآن، ويمكنني حقًا أن أقول إنها كانت تجربة رائعة أن أكون جزءًا من هذا المجتمع المذهل والتعاوني. لقد أتيحت لنا الفرصة للتفاعل مع مختلف الناشرين ومقدمي الخدمات والمشاركة في الأحداث السنوية المختلفة لـ .Crossref كما أنه يتماشى تمامًا مع رؤيتنا لدعم البحث المفتوح. Recently, we have delivered a series of three Arabic webinars that offered basic metadata information and advanced insights about the role of metadata and how Crossref services can help an institution. These webinars have been well received by the community of regional publishers, university presses, and librarians. Dozens of questions have been answered, and technical enquires have been resolved. It was a great experience, and it was good to see that kind of interest in our community. Also, more educational webinars are yet to come! قدمنا مؤخرًا سلسلة من ثلاث ندوات عربية عبر الإنترنت تمحورت حول معلومات البيانات الوصفية الأساسية ورؤى متقدمة حول دور البيانات الوصفية وكيف يمكن لخدمات Crossref أن تساعد المؤسسات البحثية. لقيت هذه الندوات عبر الإنترنت استحسان مجتمع الناشرين الإقليميين دور النشر الجامعية وأمناء المكتبات. تمت الإجابة على عشرات الأسئلة، وتم الرد على الاستفسارات الفنية. لقد كانت تجربة رائعة، وكان من المفرح أن نرى هذا النوع من الاهتمام في مجتمعنا. بالإضافة إلى ذلك، سيتم تقديم المزيد من الندوات التعليمية على الإنترنت في المستقبل. For you, what would be the most important thing Crossref could change (do more of/do better in)?\nبالنسبة لك، ما هو الشيء الأكثر أهمية الذي يمكن لـ Crossref تغييره (القيام بالمزيد / القيام بعمل أفضل في)؟\nLanguage is still a barrier in some parts of the Arab region, so producing more educational content in different formats (webinars, flyers, videos with subtitles, etc.) would be highly appreciated here. لا تزال اللغة تشكل حاجزًا في بعض المناطق العربية، لذا سيكون إنتاج المزيد من المحتوى التعليمي بتنسيقات مختلفة (ندوات عبر الإنترنت، ونشرات، ومقاطع فيديو مع ترجمة، وما إلى ذلك) موضع تقدير كبير هنا. Which other organizations do you collaborate with or are pivotal to your work in open scholarship?\nما هي المنظمات الأخرى التي تتعاون معها أو التي تلعب دورًا محوريًا في عملك في مجال الابحاث المفتوحة؟\nWe work closely with ORCiD and invite them to our events, support DOAJ via our charitable Foundation, and rely heavily on PKP products mainly the Open Journal Systems (OJS) with plans to expand and start using Open Monograph Press (OMP). إننا نعمل عن كثب مع ORCiD ونقدر دعمهم لفاعلياتنا، كما ندعم DOAJ عبر موقعنا ومؤسستنا الخيرية، ونعتمد بشكل كبير على منتجات مشروع المعرفة العامة وخاصة المجلة المفتوحة أنظمة (OJS) كما أننا نود التوسع والبدء في استخدام Open Monograph Press (OMP). 
What are the post-pandemic challenges/hopes you are facing and how are you adapting to them/what you’re looking forward to?\nما هي التحديات / الآمال التي تواجهها في فترة ما بعد الجائحة وكيف تتكيف معها / ما الذي تتطلع إليه؟\nWe aim for more face-to-face meetings and onsite workshops/conferences as the world opens up again. In addition, we have launched the Forum for Open Research in MENA (FORM) (a non-profit membership organisation supporting the advancement of Open Science policies and practices in research communities and institutions across the Arab region.) نحن نهدف إلى المزيد من الاجتماعات وجهًا لوجه وورش العمل / المؤتمرات. بالإضافة إلى ذلك، أطلقنا منتدى البحث المفتوحMENA (FORM) ، وهي منظمة غير ربحية ذات عضوية تدعم النهوض بسياسات وممارسات العلوم المفتوحة في مجتمعات ومؤسسات البحث في جميع أنحاء المنطقة العربية. A catalyst for positive action, we work with key stakeholders to develop and implement a pragmatic programme to facilitate the transition toward more accessible, inclusive, and sustainable research and education models in the Arab region. Our driving focus is on building the resources, the membership, the organisational structures, and the broader community to support the advancement of Open Science in research communities and research institutions across the Arab world. كمحفز للعمل الإيجابي، نحن نعمل مع أصحاب المصلحة الرئيسيين للتطوير وتنفيذ برنامج عملي لتسهيل الانتقال نحو المزيد من نماذج البحث والتعليم الشاملة والمستدامة والتي يسهل الوصول إليها في المنطقة العربية. ينصب تركيزنا الدافع على بناء الموارد، والعضوية، والهياكل التنظيمية، والمجتمع الأوسع لدعم تقدم العلوم المفتوحة في المجتمعات البحثية والمؤسسات البحثية عبر العالم العربي. Following the huge success of our 2022 Annual Forum (held in Cairo with the support and endorsement of UNESCO and the Egyptian Knowledge Bank), which attracted over 1100 delegates from 48 countries, our 2023 Annual Forum will be held in Abu Dhabi in the UAE. For more details about the event and the call for papers, see our website: https://forumforopenresearch.com\nبعد النجاح الكبير لمنتدى 2022 السنوي (الذي عقد في القاهرة مع دعم وتأييد اليونسكو وبنك المعرفة المصري)، التي اجتذبت أكثر من 1100 مندوب من 48 دولة، المنتدى السنوي لعام 2023 سيعقد في أبو ظبي في دولة الإمارات العربية المتحدة. لمزيد من التفاصيل حول الحدث والدعوة للمشاركة، راجع موقعنا على الإنترنت:https://forumforopenresearch.com\nWhat are your plans for the future?\nما هي خططك المستقبلية؟\nKeep working with different global and regional stakeholders to help the transition of our region towards Open Science. استمر في العمل مع مختلف الشركاء العالميين والإقليميين للمساعدة في انتقال منطقتنا العربية نحو العلوم المفتوحة. Thank you, Mohamad! شكرا لك يا محمد! ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-more-the-merrier-or-how-more-registered-grants-means-more-relationships-with-outputs/", "title": "The more the merrier, or how more registered grants means more relationships with outputs", "subtitle":"", "rank": 1, "lastmod": "2023-02-22", "lastmod_ts": 1677024000, "section": "Blog", "tags": [], "description": "One of the main motivators for funders registering grants with Crossref is to simplify the process of research reporting with more automatic matching of research outputs to specific awards. In March 2022, we developed a simple approach for linking grants to research outputs and analysed how many such relationships could be established. In January 2023, we repeated this analysis to see how the situation changed within ten months. Interested? 
Read on!", "content": "One of the main motivators for funders registering grants with Crossref is to simplify the process of research reporting with more automatic matching of research outputs to specific awards. In March 2022, we developed a simple approach for linking grants to research outputs and analysed how many such relationships could be established. In January 2023, we repeated this analysis to see how the situation changed within ten months. Interested? Read on!\nTL;DR The overall numbers changed a lot between March 2022 and January 2023:\nthe total number of registered grants doubled (from ~38k to ~76k) the total numbers of relationships established between grants and research outputs quadrupled (from 21k to 92k) the percentage of linked grants increased substantially (from 10% to 23%) Most of this growth can be attributed to one funder, the European Union. They started registering grants with us in December 2022, and:\ntheir grants constitute 47% of all grants registered by January 2023 and 95% of grants registered between March 2022 and January 2023 72% of all established relationships involve their grants We have further work planned both internally and with the community to consolidate and build out important relationships between funding and research outputs.\nIntroduction When we started to develop, think and talk about grant registration at Crossref back in 2017, one of the key things we expected this to support was easier, more efficient, accurate analysis of research outputs funded by specific awards.\nThis is backed up by conversations with funders who are keen to fill in gaps in the map of the research landscape with new data points and better quality information, search for grants, investigators, projects or organisations associated with awards and simplify the process of research reporting and with automatic matching of outputs to grants.\nThis is in keeping with and informed our recent recommendations about how funding agencies can meet open science guidance using existing open infrastructure, which included input from ORCID and DataCite. It\u0026rsquo;s also in keeping with recent studies on how important funding and grant metadata is to help the community use this information in their own research.\nTo meet these expectations, we need not only identifiers and metadata of grants, but also relationships between them and research outputs supported by them. Unfortunately, our schema does not make it easy to directly deposit such relationships, and so there are only a handful of them available. But we wouldn\u0026rsquo;t let such a minor obstacle stop us! In March 2022 we analysed the metadata of registered grants and developed a simple matching approach to automatically link grants to research outputs supported by them. Back then, we were able to find 20,834 relationships, involving 17,082 research outputs and 3,858 grants (which was 10% of all registered grants).\nNow that we are seeing the accumulation of grant metadata being registered with Crossref, we have a bigger dataset to test these expectations against than we did a year ago. So we decided to do the analysis again. And the results are in, they\u0026rsquo;re open, and they\u0026rsquo;re positive. We\u0026rsquo;ll explain below. The methodology To spare you from having to read the old analysis in detail, here is a very brief summary of the matching methodology. 
To find relationships between grants and research outputs, we iterated over all registered grants, and for each grant we searched for research outputs that looked like they might have been supported by this grant. We established a relationship between a grant and a research output if one of the following three scenarios was true:\n1. The research output contained the DOI of the grant (deposited as the award number).\n2. The award number in the grant was the same as the award number in the research output, the research output contained the funder ID, and one of the following was true:\na. Funder ID in the grant was the same as the funder ID in the research output\nb. Funder ID in the grant replaced or was replaced by the funder ID in the research output\nc. Funder ID in the grant was an ancestor or the descendant of the funder ID in the research output\n3. The award number in the grant was the same as the award number in the research output, the research output did not contain the funder ID, and one of the following was true:\na. Funder name in the research output was the same as the funder name in the grant\nb. Funder name in the research output was the same as the name of a funder that replaced or was replaced by the funder in the grant\nc. Funder name in the research output was the same as the name of an ancestor or a descendant of the funder in the grant\nNote that the replaced/replaced-by relationships and ancestor/descendant hierarchy are taken from the Funder Registry. (A minimal, illustrative sketch of this matching logic appears near the end of this post.)\nCurrent results Since March 2022, six additional funders have started registering grants with us. As a result, the total number of grants doubled, and the total number of established relationships between grants and research outputs, linked grants, and linked research outputs quadrupled. Here is the comparison of the total numbers of grants, established relationships, linked grants, and linked research outputs in March 2022 and in January 2023:\n95% of grants registered within ten months between March 2022 and January 2023 were registered by one funder: the European Union. This suggests that this funder contributed a lot to this rapid increase in the number of established relationships. It looks like this funder\u0026rsquo;s grant metadata is of high quality and corresponds well to the funding information given in the research outputs supported by this funder\u0026rsquo;s grants.\nLet\u0026rsquo;s also compare the breakdowns of all established relationships by the matching method:\nThe distributions are a bit different. Currently, the percentage of relationships established based on the replaced/replaced-by relationship is much smaller than before, suggesting that newer data uses correct funder IDs instead of deprecated ones. Also, the percentage of the relationships matched by the funder ID increased from 40% to 48%, which is great, because this is the most reliable way of matching.\nAnd here we have the statistics broken down by grant registrants. Only funders with at least 100 registered grants are included. The table shows the number of relationships, linked research outputs, grants, and linked grants, and is sorted by the percentage of linked grants.\nfunder | relationships | linked research outputs | grants | linked grants\nEuropean Union | 66,562 | 60,630 | 35,530 | 12,688 (36%)\nGordon and Betty Moore Foundation | 93 | 92 | 113 | 33 (29%)\nJapan Science and Technology Agency (JST) | 15,584 | 13,464 | 9,923 | 2,323 (23%)\nJames S. McDonnell Foundation | 519 | 513 | 577 | 121 (21%)\nMelanoma Research Alliance | 188 | 185 | 425 | 82 (19%)\nMuscular Dystrophy Association | 50 | 50 | 178 | 25 (14%)\nParkinson\u0026rsquo;s Foundation | 30 | 29 | 107 | 15 (14%)\nAsia-Pacific Network for Global Change Research | 127 | 127 | 560 | 70 (13%)\nThe ALS Association | 96 | 90 | 477 | 58 (12%)\nWellcome | 8,868 | 6,436 | 17,537 | 1,735 (10%)\nAmerican Cancer Society | 19 | 19 | 266 | 15 (6%)\nTempleton World Charity Organization | 2 | 2 | 281 | 2 (0.7%)\nOffice of Scientific and Technical Information (OSTI) | 73 | 69 | 8,723 | 62 (0.7%)\nChildren\u0026rsquo;s Tumor Foundation | 1 | 1 | 662 | 1 (0.1%)\nThere are substantial differences between the percentages of linked grants from different funders. One of the newest registrants, the European Union, is at the top of the table with 36% of their grants linked to research outputs. This further confirms the high quality of the metadata registered by this member. It is worth noting that this member is responsible for the majority of the growth reported here as they cover Horizon Europe, the European Research Council, and many other funding bodies and schemes. Why are these percentages so low for some funders? It could be caused by systematic discrepancies between the award numbers attached to the grants and those reported in research outputs. It could also be the case that most grants registered by a given funder are new grants, and the research outputs supported by them simply have not been published yet. Time will tell! What\u0026rsquo;s next We\u0026rsquo;re dedicating lots of time in 2023 to examine, evolve, and expose the matching we do and can do at Crossref across different metadata fields. We then plan to incorporate matching improvements into our services so that everyone can benefit.\nThis isn\u0026rsquo;t a standalone piece of work. As you can see, the more award metadata we have connected to grants by funders and connected to outputs by those who post or publish research, the better we\u0026rsquo;ll be able to do this. To make it easier for more funders to participate, and based on funder feedback, we\u0026rsquo;ve built a simple tool for members to register their grants. We will also work to help incorporate grant identifiers into publishing and funder workflows, and further our discussions with the funders in our Funder Advisory Group and the wider community, including working together with the Open Research Funders Group, the HRA, Altum, Europe PMC, the OSTP, and the ORCID Funder Interest Group. And there will be more to come as we work together to consolidate and build out important relationships between funding and outputs - for everyone.\nFollow-up Every new thing takes time to get off the ground and to show evidence of its value. We\u0026rsquo;ve seen a significant step forward recently with funders joining and contributing to the research nexus. Publishers have been contributing funding data for years, and it\u0026rsquo;s now becoming much clearer to see how these two communities and these two sets of metadata are coming together to make research smoother and easier to manage and evaluate.
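As referenced in the methodology above, here is a minimal, illustrative sketch of the three matching scenarios. This is not the code used for the analysis: the record fields, registry structure, and helper names are hypothetical simplifications, with replaced/replaced-by links and the funder hierarchy assumed to come from the Funder Registry.

```python
# Illustrative sketch only: simplified stand-ins for grant records, work funding
# metadata, and a Funder Registry lookup. Field names here are hypothetical.

def normalise(value):
    """Compare award numbers and funder names loosely: strip spaces, lowercase."""
    return "".join(str(value).split()).lower() if value else ""

def related_funder_ids(funder_id, registry):
    """The funder itself plus funders linked via replaced/replaced-by links or
    the ancestor/descendant hierarchy (simplified Funder Registry lookup)."""
    entry = registry.get(funder_id, {})
    return ({funder_id}
            | set(entry.get("replaces", []))
            | set(entry.get("replaced_by", []))
            | set(entry.get("ancestors", []))
            | set(entry.get("descendants", [])))

def related_funder_names(funder_id, registry):
    """Normalised names of every funder returned by related_funder_ids."""
    return {normalise(registry.get(fid, {}).get("name", ""))
            for fid in related_funder_ids(funder_id, registry)}

def grant_supports_output(grant, output, registry):
    """True if one of the three scenarios links the grant to the research output."""
    grant_award = normalise(grant["award_number"])
    for funding in output.get("funding", []):
        awards = {normalise(a) for a in funding.get("awards", [])}

        # Scenario 1: the grant DOI was deposited as the award number.
        if normalise(grant["doi"]) in awards:
            return True
        if grant_award not in awards:
            continue

        funder_id = funding.get("funder_id")
        if funder_id:
            # Scenario 2: matching award number plus the same, a replacing/
            # replaced, or a hierarchically related funder ID.
            if funder_id in related_funder_ids(grant["funder_id"], registry):
                return True
        else:
            # Scenario 3: matching award number, no funder ID on the output,
            # so fall back to the names of the grant's funder and its relatives.
            name = normalise(funding.get("funder_name"))
            if name and name in related_funder_names(grant["funder_id"], registry):
                return True
    return False
```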
If you are ready to register grants, talk about linking up your outputs, or just want to learn more about this work, we\u0026rsquo;d love to hear from you.\n", "headings": ["TL;DR","Introduction","The methodology","Current results","What\u0026rsquo;s next","Follow-up"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/data/", "title": "Data", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dont-take-it-from-us-funder-metadata-matters/", "title": "Don’t take it from us: Funder metadata matters", "subtitle":"", "rank": 1, "lastmod": "2023-02-16", "lastmod_ts": 1676505600, "section": "Blog", "tags": [], "description": "Why the focus on funding information? We are often asked who uses Crossref metadata and for what. One common use case is researchers in bibliometrics and scientometrics (among other fields) doing meta analyses on the entire corpus of records. As we pass the 10 year mark for the Funder Registry and 5 years of funders joining Crossref as members to register their grants, it’s worth a look at some recent research that focuses specifically on funding information.", "content": "Why the focus on funding information? We are often asked who uses Crossref metadata and for what. One common use case is researchers in bibliometrics and scientometrics (among other fields) doing meta analyses on the entire corpus of records. As we pass the 10 year mark for the Funder Registry and 5 years of funders joining Crossref as members to register their grants, it’s worth a look at some recent research that focuses specifically on funding information. After all, there is funding behind so much scholarly work that it seems obvious it would be routinely documented in the scholarly record. But it often isn’t, and that’s a problem. These sources make clear the need for accurate funding information and the problems that the lack of it creates.\nFirst, a few notes for context on these sources and the issues they discuss:\nThe percentage of records with funding information reached about 25% as of 2021. Not all registered items are the result of funding, but the share that are is surely much higher than 25%, so there is considerable room for improvement. The authors cite publishers that omit funding information as well as those that include it routinely. Overall, society publishers are at the top of the list of those that do it well. Three of the four sources found problems, in some cases, when confirming funding information from the metadata against the original sources. This initially surprised me, though less so once I thought about the strange nature of metadata workflows. The complexity of fully and correctly acknowledging multiple sources of funding in any given publication is a recurring theme. All of the sources mention the need for manual work in analyzing funding and publication information. The first two papers are from the same 2022 issue of Quantitative Science Studies and are complementary.\nAlexis-Michel Mugabushaka, Nees Jan van Eck, Ludo Waltman; Funding COVID-19 research: Insights from an exploratory analysis using open data infrastructures. Quantitative Science Studies 2022; 3 (3): 560–582.
doi: https://0-doi-org.libus.csd.mu.edu/10.1162/qss_a_00212\nThis first paper tackles the timely question of determining which funders have supported publications of COVID-19 research and compares coverage of funding data in Crossref to that in Scopus and Web of Science. Even with so much urgent attention focused on the pandemic, the authors found that only 17% of publications in the COVID-focused CORD-19 database have funding identified in their Crossref records. We’re often asked about differences in the metadata (and citation counts) between Crossref and other sources such as Scopus. In this case, both proprietary sources studied have more funder coverage. If you are disappointed in these results or want to learn more, I encourage you to read the authors’ recommendations for improving funding data in Crossref or get in touch with us.\nBianca Kramer, Hans de Jonge; The availability and completeness of open funder metadata: Case study for publications funded by the Dutch Research Council. Quantitative Science Studies 2022; 3 (3): 583–599. doi: https://0-doi-org.libus.csd.mu.edu/10.1162/qss_a_00210\nThis next paper focuses on a set of outputs funded by the NWO (the Dutch Research Council). Since the funder is already known, the authors could look at multiple sources (Crossref and others) to see whether or where the NWO is correctly identified as the funder. This study also found better coverage than Crossref in proprietary sources like Web of Science. Knowing that not all outputs are the result of funded research, this paper provides a new and useful baseline for comparing percentages of coverage. Discussions of research funding so often focus on the physical and life sciences so it’s very good to see that 37% of works in this study are in the humanities and social sciences.\nBorst, T., Mielck, J., Nannt, M., Riese, W. (2022). Extracting Funder Information from Scientific Papers - Experiences with Question Answering. In: , et al. Linking Theory and Practice of Digital Libraries. TPDL 2022. Lecture Notes in Computer Science, vol 13541. Springer, Cham. https://0-doi-org.libus.csd.mu.edu/10.1007/978-3-031-16802-4_24\nGiven the considerable effort required to conduct these analyses, it’s only logical to consider automating as much of the work as possible. This next paper focuses on automatic recognition of funders in economics papers in digital libraries. An interesting complication described here is the inclusion of funding for open access fees in acknowledgments and while the authors conclude that automated text mining of funder information performs better than manual curation, they also state that manual indexing is still necessary “for a gold standard of reliable metadata.”\nHabermann, T. (2022). Funder Metadata: Identifiers and Award Numbers. https://metadatagamechangers.com/blog/2022/2/2/funder-metadata-identifiers-and-award-numbers\nFinally, this concise blog post looks at RORs as well as funder names and acronyms. The author shows how acronyms contribute to the need for manual analysis. He also spends some time on award numbers, which is one of the three funding elements publishers can (and, as we’ve seen, should) include in their metadata. Award numbers are also a focus of this work and, unfortunately, another frequent reason for additional manual work.\nA common theme: More metadata needed Though collectively, this research paints a fairly dim picture of the current availability, completeness and accuracy of existing funding information in publication metadata, all is not lost. 
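Before moving on, one way to get a feel for the coverage problem these papers describe is to compare the total number of registered works with the number carrying any funder metadata at all. Here is a rough, unofficial sketch against the public REST API; the counts change over time and say nothing about the quality of the funding data deposited.

```python
# Rough coverage check (unofficial sketch): compare the total number of works
# with the number carrying any funder metadata, via the public Crossref REST API.
import requests

BASE = "https://api.crossref.org/works"
PARAMS = {"rows": 0, "mailto": "your-email@example.org"}  # rows=0 returns counts only

total = requests.get(BASE, params=PARAMS, timeout=30).json()["message"]["total-results"]
with_funder = requests.get(
    BASE, params={**PARAMS, "filter": "has-funder:true"}, timeout=30
).json()["message"]["total-results"]

print(f"{with_funder:,} of {total:,} works ({with_funder / total:.1%}) have funder metadata")
```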
This is a good opportunity to point out the value and availability of grant records since unique, persistent identifiers for grants (yes, DOIs for grants) paired with more and better funding metadata from publishers go a very long way to realizing the vision of the Research Nexus. And it certainly would make things a whole lot easier for the researchers who use this open metadata to analyze the scholarly record for the rest of us.\n", "headings": ["Why the focus on funding information?","A common theme: More metadata needed"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/membership-operations/sanctions/", "title": "Sanctions compliance", "subtitle":"", "rank": 4, "lastmod": "2023-02-14", "lastmod_ts": 1676332800, "section": "Operations & sustainability", "tags": [], "description": "Crossref\u0026rsquo;s mission is to support global research in and between all countries and we will always do that to the maximum extent possible. But we are also bound by laws in the US, the UK, and the EU. Where sanctions apply, we have to comply with these.\nDue to extensive sanctions, we’re currently unable to accept applications for membership from organizations based in (or with significant links with) the following countries or regions:", "content": "Crossref\u0026rsquo;s mission is to support global research in and between all countries and we will always do that to the maximum extent possible. But we are also bound by laws in the US, the UK, and the EU. Where sanctions apply, we have to comply with these.\nDue to extensive sanctions, we’re currently unable to accept applications for membership from organizations based in (or with significant links with) the following countries or regions:\nCuba Iran North Korea Syria And because of sanctions instituted in response to the war in Ukraine, we are currently unable to accept applications for membership from organizations:\nThat are part of the Russian Government, or are under sanctions by the US, UK or EU. That are located or have physical operations in the disputed oblasts of Crimea, Donetsk, Kherson, Luhansk or Zaporizhzhia. All member applications from organizations based in Russia or Belarus are subject to sanctions checks, and in some cases a more detailed and extended review will be necessary. Because of this, these applications may take longer to process than other new member applications. (We are also currently unable to offer the Similarity Check service to members based in Russia).\nMember organizations based in other countries are able to work with journals and authors in these countries, but they are obligated to check the organizations and individuals that they are working with extremely carefully. By accepting our membership terms, all members confirm that:\n“\u0026hellip;neither it nor any of its affiliates, officers, directors, employees, or members is (i) a person whose name appears on the list of Specially Designated Nationals and Blocked Persons published by the Office of Foreign Assets Control, U.S. Department of Treasury (\u0026ldquo;OFAC\u0026rdquo;), (ii) a department, agency or instrumentality of, or is otherwise controlled by or acting on behalf of, directly or indirectly, any such person; (iii) a department, agency, or instrumentality of the government of a country subject to comprehensive U.S. 
economic sanctions administered by OFAC; or (iv) is subject to sanctions by the United Nations, the United Kingdom, or the European Union.\nSponsoring Organizations (and any remaining Sponsoring Members) are also required to ensure that their engagement with any Sponsored Member or Sponsored Organization is fully compliant with sanctions in the US, UK and EU.\nIf we discover that an existing member may be subject to sanctions and/or has misrepresented their location, identity, ownership or any other material information on their application form in order to evade sanctions or a prior revocation of membership, or any other illicit reason, we will follow our defined processes to revoke membership. Sanctions also change over time, and we periodically review our membership against the list of sanctioned organizations and individuals. This may lead to membership suspension or revocation of membership in accordance with these same processes.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/", "title": "Documentation", "subtitle":"", "rank": 1, "lastmod": "2023-02-07", "lastmod_ts": 1675728000, "section": "Documentation", "tags": [], "description": "Take a look at the topics on the right to browse and page through our documentation, or search using the box above the topic list Common questions; how do I\u0026hellip;? Construct DOI suffixes Verify a metadata registration Update an existing metadata record Interpret and act on reports Query the API to retrieve metadata See system status and maintenance If you have questions please consult other users on our forum at community.", "content": "Take a look at the topics on the right to browse and page through our documentation, or search using the box above the topic list Common questions; how do I\u0026hellip;? Construct DOI suffixes Verify a metadata registration Update an existing metadata record Interpret and act on reports Query the API to retrieve metadata See system status and maintenance If you have questions please consult other users on our forum at community.crossref.org or open a ticket with our technical support team where we\u0026rsquo;ll reply within a few days. Please also visit our status page to find out about scheduled (and unscheduled) maintenance and subscribe to updates.\n", "headings": ["Common questions; how do I\u0026hellip;?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/refocusing-our-sponsors-program-a-call-for-new-sponsors-in-specific-countries/", "title": "Refocusing our Sponsors Program; a call for new Sponsors in specific countries", "subtitle":"", "rank": 1, "lastmod": "2023-02-06", "lastmod_ts": 1675641600, "section": "Blog", "tags": [], "description": "Some small organizations who want to register metadata for their research and participate in Crossref are not able to do so due to financial, technical, or language barriers. To attempt to reduce these barriers we have developed several programs to help facilitate membership. One of the most significant\u0026mdash;and successful\u0026mdash;has been our Sponsor program.\nSponsors are organizations that are generally not producing scholarly content themselves but work with or publish on behalf of groups of smaller organizations that wish to join Crossref but face barriers to do so independently.", "content": "Some small organizations who want to register metadata for their research and participate in Crossref are not able to do so due to financial, technical, or language barriers. 
To attempt to reduce these barriers, we have developed several programs to help facilitate membership. One of the most significant\u0026mdash;and successful\u0026mdash;has been our Sponsor program.\nSponsors are organizations that are generally not producing scholarly content themselves but work with or publish on behalf of groups of smaller organizations that wish to join Crossref but face barriers to do so independently. Sponsors work directly with Crossref in order to provide billing, technical, and, if applicable, language support to Members.\nBecause Sponsors are important partners in facilitating membership, there is a high bar to meet to be accepted as a Sponsor. To ensure that an organization can accurately represent Crossref and has the resources to be successful, we created a set of criteria that must be met to be considered.\nOur Sponsors program has grown considerably over the last decade and has now become the primary route to membership for emerging markets and small or academic-adjacent publishing operations.\nThe program began in 2012 with four Sponsors, based primarily in South Korea and Turkey, representing fewer than 100 members. In the next stage of development, the program covered Brazil, India, and Ukraine, and nearly 1300 members. At the end of 2022, the program had grown to over 100 sponsors from 45 countries representing over 11,000 of our members.\nThough the program continues to expand, there are still regions where we lack Sponsors, while having an abundance in others. We are working with members, ambassadors, and the community to help identify organizations that may be a fit with the Sponsor program and that are based in those regions where coverage is lacking.\nThis January we announced our Global Equitable Membership (GEM) Program, which offers relief from membership and content registration fees for members in the least economically-advantaged countries in the world. Eligibility for the program is based on a member\u0026rsquo;s country on our curated list.\nThough the GEM program reduces financial barriers to becoming a member, many organizations still require technical assistance and local language support. Working with a Sponsor would help organizations overcome these burdens. However, there is little or no Sponsor coverage for organizations located in most GEM-eligible countries. That means that in places like Bangladesh, Nepal, and Senegal, where we\u0026rsquo;ve seen a lot of growth, more organizations could join us if a suitable local Sponsor could support them.\nWe have made the decision to pause accepting new Sponsors that are based in regions where Sponsor numbers are already very high or that are not based in a GEM region. By doing so we can focus on growing the program in areas where there is the greatest need.\nWe are also going to focus on how best to support our current 100+ Sponsors and work with them to evaluate ways to improve the program. We will bolster training, resources, and outreach activities, and solicit feedback on additional ways we can help.\nWe would love to hear from organizations based in GEM countries who might consider becoming a Sponsor. But our invitation for Sponsors is not limited to supporting the GEM program. There are countries where the GEM program won\u0026rsquo;t apply, but where growth is high and no Sponsor is present. In particular, we seek support in the following countries where member numbers are growing but could be better supported.\nCountry/state Region No. 
Crossref members Nigeria Sub-Saharan Africa (Western) 99 Philippines South-eastern Asia 81 Kenya Sub-Saharan Africa (Eastern) 40 Egypt Northern Africa 26 Sri Lanka Southern Asia 13 If your organization is based in one of these regions and supports or provides services to scholarly publishers in one of the above countries \u0026mdash;please take a look at the criteria set out on our website and do get in touch to start the conversation if you think you can meet them. We\u0026rsquo;re excited to hear from you!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/sponsors/", "title": "Sponsors", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/books/", "title": "Books", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/lettie-conrad/", "title": "Lettie Conrad", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/measuring-metadata-impacts-books-discoverability-in-google-scholar/", "title": "Measuring Metadata Impacts: Books Discoverability in Google Scholar", "subtitle":"", "rank": 1, "lastmod": "2023-01-25", "lastmod_ts": 1674604800, "section": "Blog", "tags": [], "description": "This blog post is from Lettie Conrad and Michelle Urberg, cross-posted from the The Scholarly Kitchen.\nAs sponsors of this project, we at Crossref are excited to see this work shared out.\nThe scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce.", "content": "This blog post is from Lettie Conrad and Michelle Urberg, cross-posted from the The Scholarly Kitchen.\nAs sponsors of this project, we at Crossref are excited to see this work shared out.\nThe scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce.\nIn 2021, we embarked on a Crossref-sponsored study designed to measure how metadata impacts end-user experiences and contributes to the successful discovery of academic and research literature via the mainstream web. 
Specifically, we set out to learn if scholarly books with DOIs (and associated metadata) were more easily found in Google Scholar than those without DOIs.\nInitial results indicated that DOIs have an indirect influence on the discoverability of scholarly books in Google Scholar \u0026ndash; however, we found no direct linkage between book DOIs and the quality of Google Scholar indexing or users’ ability to access the full text via search-result links. Although Google Scholar claims not to use DOI metadata in its search index, the results of our mixed-methods study of 100+ books (from 20 publishers) demonstrate that books with DOIs are generally more discoverable than those without DOIs.\nAs we finalize our analysis, we are sharing some early results and inviting input from our community. What relevant lessons can we glean from this exercise? What changes might book publishers consider based on the outcomes of this study?\nBackground on the study This study was designed to evaluate metadata impacts \u0026amp; benefits to users. Given its popularity with a range of stakeholders in our industry, we set out to measure metadata impacts on discoverability in the mainstream web – namely, Google Scholar.\nOur test method and analysis rubric were developed based on our own information-user research, in particular how readers search and retrieve scholarly ebooks, as well as published studies about academic information experiences and research practices. We rated the search performance of more than 100 scholarly books using preset test queries (two for each title). The books tested in this study came from publishers of all sorts and sizes, and represent both monographs and edited volumes from a range of fields; some were open access and others were published under traditional licensing models.\nWe developed and executed known-item test searches that were designed to simulate common researcher practices. Heuristic analysis of the search results was used to rate the search performance on a 5-point scoring rubric, which was designed to measure the degree of friction in locating the book in question. This method allowed us to compare specific book and metadata attributes by their search performance scores and so assess the impact of book metadata on content discoverability in Google Scholar.\nResults and findings In this study, we learned that high-value fields include the primary title paired with subtitles, author/editor surnames and/or field of study. Queries using full book titles performed the best across the board. Those using publication dates and/or author/editor surnames and/or publisher names, but without the book title, were the lowest performers.\nSurprisingly, our discoverability scores show no significant variation in performance by the type of book, whether edited or authored. Open-access titles performed somewhat better than traditional ones. Books covering humanities and social science fields performed a bit better than STM books, but only by a slim margin that is not statistically significant.\nWe primarily tested the discoverability of book titles, from equal numbers of books with and without chapter-level DOIs. We ran similar tests for chapter-title discoverability but found that the majority of test queries for chapters led users to the full book itself. While books without title-level DOIs were found to be less discoverable, we did not find a measurable difference between books with or without chapter-level DOIs. 
(Note: All books in this study with chapter-level DOIs assigned also carried a title-level DOI, which was found to be fairly common.)\nBased on these results, we are developing a theory that books with DOIs perform better in Google Scholar because they benefit from the structured, open metadata associated with those DOIs – which are used by hundreds of platforms and services, and therefore are “seeded” throughout the mainstream web, which Scholar may draw on for indexing, linking, etc. That said, however, these results also suggest that publishers are best served by a metadata strategy that is well attuned to the protocols expected of each channel for book search and discovery. In a recent conversation about our findings, Anurag Acharya himself noted that these results underscore the need for publishers to invest in the robust construction and broad distribution of book metadata.\nIn this study, we have observed that the metadata protocols surrounding Google Scholar are not fully integrated into our industry’s established scholarly information standards bodies, like NISO, or infrastructure organizations, like Crossref. While some mainstream data standards prevail in the Scholar index, like the use of schema.org and HTTP, some key metadata attributes seem to be lacking. For example, an indicator of the type of scholarly book (monograph, handbook, etc.) would improve Google Scholar’s search index and could be used to filter search results, thereby improving users’ experiences discovering scholarly books. One clear challenge for book publishers today is the fact that Google Scholar operates outside of our community-governed scholarly information infrastructure.\nWhat comes next While this study focused on Google Scholar, the results and lessons learned are applicable to other mainstream channels of information seeking/discovery. Our report, due out spring 2023, will contribute to the literature intended to support user-centric information systems design and content architecture by scholarly publishers and service providers.\nAs we write up our findings, we intend to develop a framework that can help publishers and others measure the impact of their work to enrich and distribute scholarly metadata. We hope this first systematic review of the impacts of metadata on the discoverability of books in Google Scholar will provide valuable insights for this community. In the meantime, please share your thoughts and questions in the comments below \u0026ndash; or reach out to us directly (see Lettie’s profile here and Michelle’s profile here).\nAcknowledgments: The authors would like to thank Jennifer Kemp at Crossref for the inspiration to take this dive into the metadata literature and reflect on its impact on research information experiences. 
Special thanks to Anurag Acharya at Google Scholar for his consultation during this study.\n", "headings": ["Background on the study","Results and findings","What comes next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/michelle-urberg/", "title": "Michelle Urberg", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/search/", "title": "Search", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/preprints/", "title": "Preprint advisory group", "subtitle":"", "rank": 3, "lastmod": "2022-12-16", "lastmod_ts": 1671148800, "section": "Working groups", "tags": [], "description": "The purpose of the preprint Advisory Group is to support Crossref to collect and improve the quality of metadata for preprints. The group is comprised of both our members as well as non-members (third party platforms and organizations) who are interested in preprints.\nGroup Members Chair: Oya Rieger, Ithaka Facilitator: Martyn Rittman, Crossref\nAlainna Wrigley, California Digital library Alex Mendonca, SciELO Ben Mudrak, ChemRxiv Bianca Kramer, Sesame Open Science Ioana Craciun, Preprints Dasapta Erwin Irawan, RINarxiv David Woodworth, OCLC Elisa Pettinelli Barrett, Research Square Emily Marchant, Cambridge University Press Ginny Hendricks, Crossref Gunther Eysenbach, JMIR Jeff Beck, NCBI, US National Library of Medicine Jingyu Liu, ChinaXiv Johanna Havemann, AfricaArxiv Johannes Wagner, Copernicus Katharine Hancox, IET Katie Corker, ASAPbio Frederick Atherden, Elife Michael Evans, F1000 Research Michael Parkin, Europe PMC Michele Avissar-Whiting, Research Square Nici Pfeiffer, Center for Open Science Patricia Feeney, Crossref Richard Sever, BioRxiv Richard Wynne, Rescognito Robin Dunford Shirley Decker-Lucke, SSRN Tony Alves, HighWire Press Thomas Lemberger, EMBO Wendy Patterson, Beilstein-Institut How the group works (and the guidelines) The preprint Advisory Group is led by a Chair and a Crossref Facilitator, who together help to develop meeting agendas, lead discussions, outline group actions and rally the community outside of the Advisory Group for support with the service where appropriate.", "content": "The purpose of the preprint Advisory Group is to support Crossref to collect and improve the quality of metadata for preprints. 
The group is comprised of both our members as well as non-members (third party platforms and organizations) who are interested in preprints.\nGroup Members Chair: Oya Rieger, Ithaka Facilitator: Martyn Rittman, Crossref\nAlainna Wrigley, California Digital library Alex Mendonca, SciELO Ben Mudrak, ChemRxiv Bianca Kramer, Sesame Open Science Ioana Craciun, Preprints Dasapta Erwin Irawan, RINarxiv David Woodworth, OCLC Elisa Pettinelli Barrett, Research Square Emily Marchant, Cambridge University Press Ginny Hendricks, Crossref Gunther Eysenbach, JMIR Jeff Beck, NCBI, US National Library of Medicine Jingyu Liu, ChinaXiv Johanna Havemann, AfricaArxiv Johannes Wagner, Copernicus Katharine Hancox, IET Katie Corker, ASAPbio Frederick Atherden, Elife Michael Evans, F1000 Research Michael Parkin, Europe PMC Michele Avissar-Whiting, Research Square Nici Pfeiffer, Center for Open Science Patricia Feeney, Crossref Richard Sever, BioRxiv Richard Wynne, Rescognito Robin Dunford Shirley Decker-Lucke, SSRN Tony Alves, HighWire Press Thomas Lemberger, EMBO Wendy Patterson, Beilstein-Institut How the group works (and the guidelines) The preprint Advisory Group is led by a Chair and a Crossref Facilitator, who together help to develop meeting agendas, lead discussions, outline group actions and rally the community outside of the Advisory Group for support with the service where appropriate.\nThe group is currently active. Please contact Martyn Rittman with any questions.\nOutputs Over its first year of operation, the Advisory Group has developed recommendations in four key areas of preprint metadata. These are:\npreprint withdrawal and removal preprints as an article type versioning of preprints preprint relationship metadata. In July 2022, the AG published a set of recommendations (https://0-doi-org.libus.csd.mu.edu/10.13003/psk3h6qey4) and invited public comment, available on our Community Forum. An in-depth report of discussions of the AG is available at https://0-doi-org.libus.csd.mu.edu/10.31222/osf.io/qzusj.\nMinutes Minutes of the group are available on this page.\n", "headings": ["Group Members","How the group works (and the guidelines)","Outputs","Minutes"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2022/", "title": "2022", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/introducing-our-new-global-equitable-membership-gem-program/", "title": "Introducing our new Global Equitable Membership (GEM) program", "subtitle":"", "rank": 1, "lastmod": "2022-12-07", "lastmod_ts": 1670371200, "section": "Blog", "tags": [], "description": "When Crossref began over 20 years ago, our members were primarily from the United States and Western Europe, but for several years our membership has been more global and diverse, growing to almost 18,000 organizations around the world, representing 148 countries.\nAs we continue to grow, finding ways to help organizations participate in Crossref is an important part of our mission and approach. 
Our goal of creating the Research Nexus\u0026mdash;a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society\u0026mdash;can only be achieved by ensuring that participation in Crossref is accessible to all.", "content": "When Crossref began over 20 years ago, our members were primarily from the United States and Western Europe, but for several years our membership has been more global and diverse, growing to almost 18,000 organizations around the world, representing 148 countries.\nAs we continue to grow, finding ways to help organizations participate in Crossref is an important part of our mission and approach. Our goal of creating the Research Nexus\u0026mdash;a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society\u0026mdash;can only be achieved by ensuring that participation in Crossref is accessible to all. Building a network for the global community must include input from all of the global community. Although Crossref membership is open to all organizations that produce scholarly and professional materials, cost and technical challenges can be barriers to joining for many organizations. To address some of these challenges, we created our Sponsors Program, which provides technical, financial and local language support. We also collaborate with the Public Knowledge Project on the Open Journals Platform to develop plugins for OJS users.\nAdditionally, we had a limited \u0026lsquo;fee assistance\u0026rsquo; program to waive the content registration fees for members working under specific Sponsor arrangements, including INASP, and African Journals Online (AJOL). Learning from the experiences of such successful partnerships, starting in January 2023, we are expanding this program to provide greater membership equitability and accessibility to organizations located in the least economically-advantaged countries in the world through our Global Equitable Membership (GEM) Program. This new scheme now encompasses the annual fee as well as the content registration fees.\nEligibility for the program is based on a member\u0026rsquo;s country. We have curated the list, predominantly based on the International Development Association (IDA) list and excluding anywhere we are bound by international sanctions. From January 2023, organizations based in countries listed in our GEM program will be eligible to join Crossref and contribute with their metadata to a robust scholarly record at no cost. 
This also applies to 187 existing members in eligible countries who will no longer be charged for Crossref membership or content registration.\nExisting Crossref members in GEM-eligible countries Bangladesh (54) Burundi (1) Kiribati (0) Kyrgyz Republic (20) Central African Republic (1) Lesotho (0) Nepal (19) Democratic Republic of the Congo (1) Liberia (0) Ghana (15) Guyana (1) Marshall Islands (0) Yemen (10) Haiti (1) Mauritania (0) Sudan (7) Honduras (1) Micronesia (0) Tanzania (7) Laos (1) Mozambique (0) Afghanistan (6) Madagascar (1) Nicaragua (0) Ethiopia (5) Malawi (1) Niger (0) Zambia (5) Maldives (1) Samoa (0) Bhutan (4) Myanmar (1) Sao Tome and Principe (0) Rwanda (4) Cambodia (1) Sierra Leone (0) Tajikistan (4) Chad (1) Solomon Islands (0) Kosovo (3) Comoros (1) South Sudan (0) Senegal (3) Cote d’Ivoire (1) Togo (0) Uganda (3) Djibouti (1) Tonga (0) Burkina Faso (2) Eritrea (1) Tuvalu (0) Mali (2) Gambia (1) Vanuatu (0) Somalia (2) Guinea (1) Benin (1) Guinea-Bissau (1) The list of countries will undergo an annual review, to follow the latest guidance from IDA, which uses the somewhat simplistic World Bank income classifications but applies a more granular blend of criteria for economic health, thereby allowing for greater nuance, such as indicating countries where the gap between rich and poor is very wide.\nThe program results from our experience working with and knowing the communities through Sponsors and working with past members who have struggled to pay. It aims to bring us closer to our vision of building an inclusive, rich and open network of relationships underpinning the scholarly record. With the support of the Membership and Fees Committee, the launch of the program was confirmed with the recent unanimous vote of our Board to evolve our fee assistance program into a more expansive scheme. GEM presents a more comprehensive and equitable solution than our former arrangements. It involves an opportunity to join Crossref and contribute scholarly metadata to our global community on a zero-fee basis for membership and content registration. This offering will be applied by default to organizations based in all eligible countries, irrespective of joining through any specific Sponsor, or independently.\nWhile the GEM Program will alleviate financial barriers, and we hope to see the numbers above grow significantly, the GEM program will not necessarily help ease technical or administrative burdens. We still need our valued Sponsors for that and we seek new Sponsors in the above locations. We would love to hear from organizations based in GEM countries who might consider becoming a Sponsor or otherwise support local colleagues in building experience of metadata and working with global open scholarly infrastructure systems like Crossref. Please reach out to me to discuss ideas or with any other questions or comments.\n", "headings": ["Existing Crossref members in GEM-eligible countries"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-funding-agencies-can-meet-ostp-and-open-science-guidance-using-existing-open-infrastructure/", "title": "How funding agencies can meet OSTP (and Open Science) guidance using existing open infrastructure", "subtitle":"", "rank": 1, "lastmod": "2022-11-17", "lastmod_ts": 1668643200, "section": "Blog", "tags": [], "description": "In August 2022, the United States Office of Science and Technology Policy (OSTP) issued a memo (PDF) on ensuring free, immediate, and equitable access to federally funded research (a.k.a. the “Nelson memo”). 
Crossref is particularly interested in and relevant for the areas of this guidance that cover metadata and persistent identifiers—and the infrastructure and services that make them useful.\nFunding bodies worldwide are increasingly involved in research infrastructure for dissemination and discovery.", "content": "In August 2022, the United States Office of Science and Technology Policy (OSTP) issued a memo (PDF) on ensuring free, immediate, and equitable access to federally funded research (a.k.a. the “Nelson memo”). Crossref is particularly interested in and relevant for the areas of this guidance that cover metadata and persistent identifiers—and the infrastructure and services that make them useful.\nFunding bodies worldwide are increasingly involved in research infrastructure for dissemination and discovery. While this post does respond to the OSTP guidelines point-by-point, the information here applies to all funding bodies in all countries. It will be equally useful for publishers and other systems that operate in the scholarly research ecosystem.\nIn response to calls from our community for more specifics, this post:\nProvides an overview of the specific ways that Crossref (along with organisations and initiatives like DataCite, ORCID, and ROR) helps U.S. federal agencies\u0026mdash;and indeed any other funder\u0026mdash;meet critical aspects of the recommendations. Restates our intent to collaborate with all stakeholders in the scholarly research ecosystem, including the OSTP, the US federal agencies, our existing funder, publisher, and university members, to support the recommendation as plans develop. References the work and adoption of Crossref Grant DOIs, including analyses of existing metadata matching funding to outputs. Highlights that what’s outlined in the memo aligns with our longstanding mission to capture and maintain the scholarly record and our vision of the Research Nexus, as we describe in our current blog series, regarding our role in preserving the integrity of the scholarly record (ISR). Infrastructure already exists to support funder goals; it just needs more adoption Ensuring free, immediate, and equitable access to metadata that captures the scholarly record is an essential part of meeting the aims of the memo but also supporting Open Science globally.\nIn September, Crossref ORCID, DataCite, and ROR participated in the 2022 Forum on Global Grants Management run by Altum and the summary provides a good example of the importance of open infrastructure and open metadata to the goals of Open Science:\nOpen Science begins with open infrastructure: Attendees agreed that Open Science relies on many other \u0026lsquo;opens’ – most notably, open metadata, open infrastructure, and open governance. Metadata and DOIs (digital object identifiers) for publications, grants, and research outputs, are essential to illuminate the connections that exist between funding and outcomes. That metadata runs on infrastructure powered by organizations such as Crossref, ORCID, ROR, and DataCite. As a foundational scholarly infrastructure committed to meeting the Principles of Open Scholarly Infrastructure (POSI) of governance, insurance, and sustainability, Crossref plays an essential role in implementing and supporting key aspects of the guidance. 
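The "open metadata" point above is easy to see in practice: any registered record, including its funding information, can be retrieved without restriction from the public REST API. Here is a small, unofficial sketch; the DOI is a placeholder, and the funder array will only be present where that metadata was actually deposited.

```python
# Unofficial sketch: fetch a registered work's open metadata from the public
# Crossref REST API and print any funding information that was deposited.
import requests

doi = "10.5555/12345678"  # placeholder DOI; substitute a real one
resp = requests.get(
    f"https://api.crossref.org/works/{doi}",
    params={"mailto": "your-email@example.org"},
    timeout=30,
)
resp.raise_for_status()
work = resp.json()["message"]

for funder in work.get("funder", []):
    print(funder.get("name"), funder.get("DOI"), funder.get("award", []))
```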
For many years, we have been focused on the integrity of the scholarly record (ISR), and the shared vision to collectively achieve what we call the Research Nexus, which is described as\nA rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nMetadata\u0026mdash;including persistent identifiers and relationships between different research objects\u0026mdash;is the foundation of the Research Nexus and is critical to openly and sustainably fulfilling the OSTP memo\u0026rsquo;s recommendations.\nThis topic of open metadata and identifiers isn’t just an issue for research resulting from US federal funding. We are working to implement open scholarly infrastructure globally, bringing significant benefits to the whole scholarly research ecosystem.\nThe current situation brings to mind the William Gibson quote, “The future is already here - it’s just not evenly distributed yet”. Much of the open infrastructure to support the identifier, metadata and reporting requirements of the OSTP memo already exists, but it is unevenly implemented. Increased collaboration and effort will be needed to bring this all to fruition.\nWe set out below some steps that all stakeholders can take to meet not just the OSTP guidelines, but Open Science goals more broadly, and globally.\nWhat does ‘adoption’ look like? How exactly do funders and other stakeholders work with this infrastructure? The OSTP memo calls for specific actions concerning metadata and identifiers where, fortunately, open and global solutions already exist.\nFor example, item 4 a) says, “Collect and make publicly available appropriate metadata associated with scholarly publications and data resulting from federally funded research.” Crossref and DataCite make metadata, including persistent identifiers (DOIs to be specific), openly available for a broad range of research objects from publications to data. Item 4 b) reads, “Assign unique digital persistent identifiers to all scientific research and development awards and intramural research protocols”. Again, federal agencies and other funders are already joining to register awards and grants and distribute these records openly through Crossref. However, this is an example of uneven adoption as registering awards and grants with DOIs is only being done by a few funders so far, which needs to increase.\nHere is an ideal workflow that funders and publishers can already follow Funders join Crossref to register grants and awards (or indeed any other object such as reports). They apply on our website, accept our terms, and provide key information such as contact details. An annual membership fee ranges from $200-$1200 USD. Funders and publishers collect ROR IDs and authenticated ORCID iDs for all authors/awardees and their affiliations. Funders register a Crossref DOI for the award/grant, including awardees’ ORCID iDs and ROR IDs. They send us XML information about the grant (note that we will imminently release an online form to make it easier for the less technical funders). Many funder members register the metadata through a third party, such as Altum (if they use ProposalCentral) or Europe PMC. At the same time, funders update the awardees’ ORCID record directly with the Crossref Grant DOI and metadata. Grantees produce research objects and outputs such as data, protocols, code, preprints, articles, conference papers, book chapters, etc. 
These objects are registered with Crossref or DataCite, and DOIs are created by the publisher or repository members who include ORCID iDs, Crossref Grant DOIs (gathered from the author), ROR IDs for affiliations for all contributors, and other key metadata such as licensing information, and in the case of publications - references and abstracts. Note that the publisher works its magic (actually, publishers do a lot of editorial and production work, such as including data citations in the references using DataCite DOIs for the data in data repositories). On the Crossref side, we do a bunch of processing and matching and are planning to refine this and do more. Sometimes relationships are notified and added, such as data citation, preprints related to articles or funding acknowledgements converted from free text to Open Funder Registry IDs and names. Grant records with Crossref DOIs are now part of the scholarly record. All stakeholders may retrieve the open metadata and relationships through our public APIs. Crossref and DataCite will always provide open metadata, as safeguarded by our respective commitments to POSI. Anyone can use the open metadata registered with Crossref, DataCite and ORCID as connections have been established between (ideally all) research objects and entities through open metadata and identifiers. This means that:\nFunding agencies can monitor compliance with their policies Publishers can identify the funder and meet their requirements Funding agencies can assess and report on the reach and return of their funding programs The provenance and integrity of the scholarly record is preserved and discoverable, benefitting all stakeholders. Suggestions for meeting OSTP and Open Science guidance, point by point OSTP Recommendation Publishers should… Funding agencies should… 4 a) Collect and make publicly available appropriate metadata associated with scholarly publications and data resulting from federally funded research For scholarly publications: register comprehensive metadata \u0026 DOIs with Crossref. For scholarly data: register comprehensive metadata and DOIs with DataCite. Use Crossref’s API to retrieve publication and other metadata. Use DataCite’s API to retrieve data/repository metadata. i) all author and co-author names, affiliations, and sources of funding, referencing digital persistent identifiers, as appropriate; Collect and validate the following from authors at manuscript submission: ROR \u0026 ORCiD IDs, Crossref Grant DOIs. Include data citations in reference lists, preferably with DataCite DOIs. Register awards and grants with Crossref and create DOI records for them. Use ORCID’s API to retrieve validated contributor metadata. Update contributors’ ORCID records with Crossref Grant DOIs and metadata. Use ROR API to retrieve and verify affiliation metadata. Recommend data citations be included in published outputs. ii) the date of publication; and, Include acceptance and publication dates in Crossref metadata. Use Crossref’s API to retrieve publication dates. iii) a unique digital persistent identifier for the research output; For scholarly publications and research outputs: register full metadata \u0026 DOIs with Crossref. For scholarly data: register full metadata and DOIs with DataCite. Use Crossref and DataCite APIs to retrieve DOIs for research outputs. 
4 b) Instruct federally funded researchers to obtain a digital persistent identifier that meets the common/core standards of a digital persistent identifier service defined in the NSPM-33 Implementation Guidance, include it in published research outputs when available, and provide federal agencies with the metadata associated with all published research outputs they produce, consistent with the law, privacy, and security considerations. Collect ORCID iDs on manuscript submission for all authors. Register Crossref and DataCite DOIs and metadata for research outputs, including data. Recommend that researchers applying for funding obtain an ORCID iD and collect them upon grant application for all applicants. Prepopulate grant applications with CV and publication information from applicants’ ORCID records. ORCID iDs should be included in the grants registered by the agencies with Crossref. Agencies can use our open APIs to retrieve the metadata on publications and data rather than ask researchers to do it, saving time and effort. 4 c) Assign unique digital persistent identifiers to all scientific research and development awards and intramural research protocols that have appropriate metadata linking the funding agency and their awardees through their digital persistent identifiers. Join Crossref to register Crossref Grant DOIs, including ROR IDs and ORCID iDs Ensure grant proposal and assessment systems integrate with Crossref, ROR for affiliations and with ORCID for applicants/awardees. 5 a) coordinate between federal science agencies to enhance efficiency and reduce redundancy in public access plans and policies, including as it relates to digital repository access; Work with agencies to ensure a smooth, automated workflow. Using and supporting existing open scholarly infrastructure and using open identifiers will avoid duplication of effort and make the overall ecosystem more efficient . 5 b) improve awareness of federally funded research results by all potential users and communities; Collect Crossref Grant DOIs from authors and use them to link from publications to grant information. Communicate your Crossref Grant DOIs and open grant metadata widely via human and machine interfaces. Inclusion in the Crossref API will enhance dissemination and discoverability Update contributors’ ORCID records with Crossref Grant DOIs and metadata 5 c) consider measures to reduce inequities in the publishing of, and access to, federally funded research and data, especially among individuals from underserved backgrounds and those who are early in their careers; Registering grants and sharing metadata through Crossref means it’s part of the world’s largest open community-governed metadata exchange and makes it available to the entire world without restriction. 5 d) develop procedures and practices to reduce the burden on federally funded researchers in complying with public access requirements; Ensure your systems and those you work with make it as easy as possible for authors to provide the necessary metadata and persistent identifiers - work towards as much automation as possible and pulling from other systems rather than asking for data to be re-keyed. Ensure the platforms you work with, such as grant proposal or assessment systems, retrieve and prepopulate ROR IDs, ORCID iDs, and Crossref and DataCite DOIs and associated metadata whenever possible so that the researchers don’t have to manually rekey or reformat data. 
5 e) recommend standard consistent benchmarks and metrics to monitor and assess implementation and iterative improvement of public access policies over time; Ensure that platforms and systems integrate with ROR, ORCID, Crossref, and DataCite so that this open metadata can lead to the creation of benchmarks and metrics. 5 f) improve monitoring and encourage compliance with public access policies and plans; Use open infrastructure to help authors easily comply with public access and funder/institution policies. Automate systems as much as possible. Using the open infrastructure, metadata, and identifiers outlined in this post will make monitoring more straightforward and compliance easier for all stakeholders. The community can build services on open infrastructure and metadata. 5 g) coordinate engagement with stakeholders, including but not limited to publishers, libraries, museums, professional societies, researchers, and other interested non-governmental parties on federal agency public access efforts; Work with the global open infrastructure organisations (Crossref, DataCite and ORCID) whose members include funding agencies, societies, publishers, universities, libraries, repositories, museums, NGOs, and many other stakeholders - all looking to improve the efficiency of the research ecosystem. Work with the global open infrastructure organisations (Crossref, DataCite and ORCID) whose members include funding agencies, societies, publishers, universities, libraries, repositories, museums, NGOs, and many other stakeholders - all looking to improve the efficiency of the research ecosystem. 5 h) develop guidance on desirable characteristics of—and best practices for sharing in—online digital publication repositories; Support automated systems that use metadata and identifiers to populate repositories automatically. Collaborate with publishers, Crossref and others to develop automated systems to populate repositories. 5 j) develop strategies to make federally funded publications, data, and other such research outputs and their metadata are findable, accessible, interoperable, and re-useable, to the American public and the scientific community in an equitable and secure manner. Provide and support a range of discovery services based on open infrastructure. Encourage discovery services - and develop services - that use the open infrastructure, metadata and persistent identifiers to enable. Everybody needs to play their part A lot of the work on making the above happen is already underway, and there is widespread adoption of open identifiers and metadata, but as noted above, funders are still early in the adoption journey, and implementation among all stakeholders is patchy.\nCritical parts of the infrastructure rely on third-party platforms that supply tools and systems to authors, funders, and publishers - so coordinating the support for the appropriate metadata and identifiers in these systems and tools is very important.\nWe are emphasising how our existing open scholarly infrastructure systems are helping. But we also know that it’s not all perfect yet. Infrastructure is always evolving, metadata is never complete, refactoring workflows and systems can be costly, and integration can always be smoother. 
But our existing open infrastructure has already delivered significant benefits, and broader adoption will bring additional benefits to the whole scholarly research and communications ecosystem and help achieve the promise of Open Science in advancing human knowledge.\nWhile working on this coordination and integration, we all try to remember that it should minimise work for researchers, and processes should be as automated as possible.\nCollaboration is key to making this all work.\nWe already work with many funders through our Advisory Group, our 30 funder members, 25 of whom have so far collectively registered around 40,000 Crossref Grant DOIs, retrievable from our open API. Some grants are even matched to resulting outputs already, and some funders have recently dug into Crossref metadata to analyse outcomes from their investments, such as the Dutch Research Council (NWO), which presents findings and makes a case for greater emphasis on Crossref funding metadata.\nWe also work closely with partners Europe PMC and Altum, and we engage in community research and discussion, for example, through the Open Research Funders Group.\nAlongside our fellow infrastructures and open identifier registries ORCID, DataCite, and ROR, we integrate with and support each other operationally and out in the community.\nWe will continue focusing our resources and efforts on engaging with funders, including US federal agencies responding to the OSTP guidelines, and all stakeholders to support the entire global scholarly research ecosystem.\nEveryone has a part to play, and we must all pull together to prioritize this work. Who’s in?\nPlease get in touch with Ed, Ginny, or Jennifer (or indeed DataCite or ORCID or ROR) if you’d like to have a discussion about the workflows described here, or just to make sure you’re up to date on the latest developments and opportunities we describe. We look forward to working with all funding agencies to support them as they develop their plans.\n", "headings": ["Infrastructure already exists to support funder goals; it just needs more adoption","What does ‘adoption’ look like? How exactly do funders and other stakeholders work with this infrastructure?","Here is an ideal workflow that funders and publishers can already follow","Suggestions for meeting OSTP and Open Science guidance, point by point","Everybody needs to play their part","Everyone has a part to play, and we must all pull together to prioritize this work."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/better-preprint-metadata-through-community-participation/", "title": "Better preprint metadata through community participation", "subtitle":"", "rank": 1, "lastmod": "2022-11-09", "lastmod_ts": 1667952000, "section": "Blog", "tags": [], "description": "Preprints have become an important tool for rapidly communicating and iterating on research outputs. There is now a range of preprint servers, some subject-specific, some based on a particular geographical area, and others linked to publishers or individual journals in addition to generalist platforms. In 2016 the Crossref schema started to support preprints and since then the number of metadata records has grown to around 16,000 new preprint DOIs per month.", "content": "Preprints have become an important tool for rapidly communicating and iterating on research outputs. 
There is now a range of preprint servers, some subject-specific, some based on a particular geographical area, and others linked to publishers or individual journals in addition to generalist platforms. In 2016 the Crossref schema started to support preprints and since then the number of metadata records has grown to around 16,000 new preprint DOIs per month.\nPreprints aren’t the same as journal articles, books, or conference papers. They have unique features, and how they are viewed and integrated into the publishing process has evolved over the past six years. For this reason, we have been revisiting the preprint metadata schema and decided that the best approach would be to form an advisory group (AG) of preprint practitioners and experts to help us.\nThe AG has identified a number of areas in which preprint metadata could be improved. Four of these were considered to have the highest priority:\nWithdrawal and removal of preprints. Preprints as an article type (not a subtype of posted content) in the schema. Relationships between preprints and other outputs. Versioning of preprints. The members of the AG set to work with great enthusiasm, sharing perspectives and expertise. This led to a first tranche of recommendations shared for feedback earlier this year, and we’re grateful for engagement and feedback from the community over the last few months.\nWhat did the community say? Some of the points raised in the feedback were:\nCould the origin of a withdrawal be included in the metadata, in particular whether it was requested by an author or another party? Can the metadata represent when a preprint has been submitted to a journal and what stage it is in the editorial process? Crossref is not alone in looking at preprint metadata, and several NISO groups are also engaged in related work. Interoperability and the ability to create relationships with identifiers beyond DOIs is important to maintain an accurate and comprehensive record of research outputs. These will form the basis for ongoing discussions.\nWhat happens next? There are three next steps that we will be taking.\nThe recommendations outline only the outcomes of discussions in a relatively brief format. We have been working on a more detailed paper to communicate more about what was discussed and provide some extra justification and alternatives. The AG will continue to meet and discuss the points raised during consultation on the recommendations, along with topics that were considered a lower priority at an earlier stage. We will draw up a set of proposals for specific changes to the metadata schema that will reflect the outcomes of the recommendations and discussions. Although the initial period for feedback on preprint metadata has ended, we welcome feedback at any time. 
If you would like to get in touch, please contact me or any member of the advisory group.\n", "headings": ["What did the community say?","What happens next?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/dois/", "title": "DOIs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/forming-new-relationships-contributing-to-open-source/", "title": "Forming new relationships: Contributing to Open source", "subtitle":"", "rank": 1, "lastmod": "2022-10-19", "lastmod_ts": 1666137600, "section": "Blog", "tags": [], "description": "TL;DR One of the things that makes me glad to work at Crossref is the principles to which we hold ourselves, and the most public and measurable of those must be the Principles of Open Scholarly Infrastructure, or POSI, for short. These ambitions lay out how we want to operate - to be open in our governance, in our membership and also in our source code and data. And it\u0026rsquo;s that openness of source code that\u0026rsquo;s the reason for my post today - on 26th September 2022, our first collaboration with the JSON Forms open-source project was released into the wild.", "content": "TL;DR One of the things that makes me glad to work at Crossref is the principles to which we hold ourselves, and the most public and measurable of those must be the Principles of Open Scholarly Infrastructure, or POSI, for short. These ambitions lay out how we want to operate - to be open in our governance, in our membership and also in our source code and data. And it\u0026rsquo;s that openness of source code that\u0026rsquo;s the reason for my post today - on 26th September 2022, our first collaboration with the JSON Forms open-source project was released into the wild.\nLike most organisations, we depend heavily on open-source software for our operations - the software is universally available, generally high quality and \u0026lsquo;free\u0026rsquo;. And it\u0026rsquo;s easy to take that dependency, and the associated dependency on free time and effort on the part of the maintainers, for granted - but that\u0026rsquo;s not very sustainable. In fact, we believe relying on open-source software without helping to sustain it is an anti-pattern, and this project marks the start of our efforts to make funding open-source software a standard part of our technology budget.\nThis isn\u0026rsquo;t the first time we\u0026rsquo;ve supported or released open-source software. Indeed for the past few years, all our new software is open source, and we\u0026rsquo;re in the process of replacing old closed code with new, so that eventually all our code will be open source. But this is the first time we\u0026rsquo;ve contributed extensively to something that isn\u0026rsquo;t focussed primarily on us, and our services. This is a project that we will find very useful, but it is a general purpose tool, and it\u0026rsquo;s already gaining traction in the community.\nBackground and motivations A while back, I was tasked to do a quick spike of work on testing the theory that we could use automated form generation tools to bring new interfaces to our users more quickly, and make them easier for \u0026ldquo;people who aren\u0026rsquo;t devs\u0026rdquo; to adapt and manage. 
We wanted to build a new user interface for registering content, and especially we wanted to make it easier for funders to register the grants they were awarding. As well as being more approachable by a less-technical audience, we also wanted these forms to be accessible (in terms of a11y and users of assistive technology) and localisable - we wanted a solution that would cater to the needs of our rapidly diversifying membership.\nEnter JSON Schema We were clear about one side of the puzzle - we knew that we had to look beyond the XML ecosystem upon which much of our existing system is built - and landed on JSON Schema. JSON Schema is a \u0026lsquo;vocabulary that allows you to annotate and validate JSON documents\u0026rsquo;. This means you can describe the shape you expect your data to take, and apply constraints-based validation to that. Which means, in terms of a form library, that you can infer the structure of the form and test that the data entered into it matches what you expect. More than that, you can use that built-in validation to provide error messages to help people get the data right, first time.\nWorking backwards from the outcome, the argument for adopting JSON Schema is compelling. It provides a mechanism for checking that data you are handling (for example, receiving input from a form) conforms to the constraints that you declare, but also allows you to tell people up-front, in a human and machine-readable way, what structure and format you will accept. This closed-loop of data annotation and validation gets more appealing when you look at the wide adoption of JSON Schema across languages and libraries. You can pretty much guarantee that for whatever client or server -side technology you are using, there will be a JSON Schema validator for it. Being able to share schemas across your systems (and equally importantly, with third parties) moves JSON schema from \u0026lsquo;just\u0026rsquo; being about data validation, to a key supportive technology.\nBuilding a form derived from a JSON Schema is an equally attractive prospect. JSON Schema was conceived during the AjaxWorld conference in 2007 as a \u0026lsquo;JSON-based format for defining the structure of JSON data\u0026rsquo;, and its use as a form-generation tool is relatively new, but there is growing community interest. There is even a discussion about how to best create a JSON Schema vocabulary, specifically geared towards addressing some of the needs of form generation users. However, even in its current form, a JSON Schema can be passed to a library, and a very serviceable user interface appears. The devil is always in the detail, and the client-side libraries differ in their abilities to customise areas such as layout (you may not always want your form fields to appear in exactly the same order as they do in your JSON Schema), custom elements (you might want something that wasn\u0026rsquo;t a form input, or that changes based on user input) and localisation. The ability to flexibly customise the appearance and behaviour of the interface was a key factor in our selection of a client-side form generation library.\nChoosing a library The other side of the puzzle was less clear - choosing a UI library that would take this JSON Schema, and turn it into a useful, and usable, form. I made the prototype using the venerable React JSON Schema form. 
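To make that closed loop of annotation and validation concrete, here is a minimal sketch in TypeScript using Ajv, one widely used JSON Schema validator in the JavaScript ecosystem. The schema describes a made-up, grant-like record for illustration only; it is not Crossref's actual grants schema.

```typescript
import Ajv from "ajv";

// A toy schema for a hypothetical grant-like record (illustration only).
const schema = {
  type: "object",
  properties: {
    title: { type: "string", minLength: 1 },
    funderName: { type: "string" },
    awardAmount: { type: "number", minimum: 0 },
  },
  required: ["title", "funderName"],
  additionalProperties: false,
};

const ajv = new Ajv({ allErrors: true }); // report every violation, not just the first
const validate = ajv.compile(schema);

// Data as it might arrive from a form submission.
const formData = { title: "", awardAmount: -5 };

if (!validate(formData)) {
  // These error objects are what a schema-driven form can turn into field-level
  // messages that help people get the data right, first time.
  for (const err of validate.errors ?? []) {
    console.log(`${err.instancePath || "(root)"} ${err.message}`);
  }
}
```

The same schema object, handed to a form library such as the React JSON Schema form prototype mentioned above, is also what drives the generated user interface.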
This worked well as a proof of concept, but veered dramatically off our chosen Frontend stack of VueJS and Vuetify, and had some architectural constraints that would limit the scope of customisations we could make to our forms. So I went off looking for libraries that would work with our stack and came up with Vuetify JSON Schema Form, and JSON Forms.\nVuetify JSON Schema Form matched our stack perfectly, but made some interesting decisions about the layout of data within the form, and that wouldn\u0026rsquo;t suit our purposes without dramatic modification.\nJSON Forms was an abstracted library, with a core handling the JSON Schema transformation and validation, and separate rendering libraries to handle the form generation. This was great - they had renderers for Angular, React, and even some support for VueJS. But not Vuetify.\nClearly, we were going to have to make something.\nWe made contact with the maintainers of both short-listed libraries to see how we could collaborate in creating a tool that would meet all of our (and hopefully, much of the wider community\u0026rsquo;s) requirements. Both maintainers were very helpful, and we had constructive discussions in both cases. In the end, we decided that the abstracted nature of the JSON Forms project was a better fit for our needs, providing a flexible platform on which we - and others - could extend. We were fortunate to receive funding from the Gordon and Betty Moore Foundation (Grant Agreement #10485) in order to accelerate this work, so we could provide a Grant Registration UI more quickly. We paid a large portion of that funding to the library maintainers, and Crossref contributed a portion of my time on the project. This allowed us to enter into an agreement with EclipseSource, the maintainers of JSON Forms, to collaboratively develop the new VueJS and Vuetify renderer library. Stefan Dirix, the lead maintainer, worked with me to build it.\nWe didn\u0026rsquo;t forget about Vuetify JSON Schema Form though, and by way of appreciation for their help in the early stages, Crossref made a contribution towards the continued development of that library.\nJSON Forms - now with Vuetify Work started on the JSON Forms Vuetify renderer set in September 2021 - Stefan quickly created the first early prototypes of the new form renderers - but then we had a stroke of luck. Our repository received more input from the community. The one that made us sit up and take real notice was the news that someone else had already ported the JSON Forms React renderer set to Vue/Vuetify - and was offering this as a contribution. Krasimir Chobantonov\u0026rsquo;s fantastic first contribution got merged in at the end of the month. This propelled the project forward massively, and was an early validation of the value of working in the open. Needless to say, we were very grateful. Another example of the open source value chain was that Stefan - as the maintainer - could take the time to carefully review and tidy up the incoming code, so what was merged was the product of two great developers.\nHaving this great head start meant we could turn our attention to one of the other big areas we wanted to get right - localisation. Traditionally, JSON Schema -generated forms have handled localisation (translation of text and adjustment of date and numerical formats) by wholesale duplication and translation of the schema. 
This is cumbersome, and doesn\u0026rsquo;t integrate very well with custom error messages, nor external sources of interface messages (think form labels, descriptions, placeholders). So Stefan came up with a proposal, which we accepted, to add complete i18n support to the library. We now have a mechanism by which you can hook up a translation engine of your choice, and JSON forms will use that to lookup messages, before falling back to the validator (also localised!) and finally, the JSON Schema\u0026rsquo;s defaults. This gives much stronger integration and allows the community to plug in their existing localisation methods - no wasted effort.\nSince the localisation addition, we\u0026rsquo;ve been working on fine-tuning the layout engine, making bug fixes, and integrating more closely with the underlying Vuetify library. This allows developers to more easily use the existing Vuetify parameters to change the style and behaviour of their form widgets. Again, no wasted effort. We\u0026rsquo;re lucky to have an active community - @kchobantonov continues to make great contributions and push the library forward in unexpected ways - and the library is gaining popularity, with an average of a few hundred downloads per day. Some of our funder members have already seen this work in action, and given their feedback on early iterations of the user interface that supports registering grant records. We\u0026rsquo;ll be releasing this publicly very soon to get feedback from members - and then using that feedback to iterate on the grants registration form, and look towards extending it to other record types. Open source POSItivity A continuous theme throughout this project has been the willingness of people working on these open source projects to be generous with their time and experience. Whether it has been form generation libraries, the JSON Schema project or maintainers of localisation plug-ins - help, advice and encouragement have never been far away. And that\u0026rsquo;s appreciated. But it\u0026rsquo;s not something that we, or any other organisation who relies on the software they produce, should take for granted. Open source software helps everyone who uses it, and there\u0026rsquo;s a real opportunity within our community to make meaningful steps towards supporting its sustainability. Ironically, it\u0026rsquo;s often the most-used general purpose tools that get the least attention. 
We can change that.\nLook out for more Look out for more posts from the engineering team, coming soon!\nReferences JSON Binpack: A space-efficient schema-driven and schema-less binary serialization specification based on JSON Schema (Chapter 3.2.1 History and Relevance)\nhttps://web.archive.org/web/20071026190426/http://www.json.com/2007/09/27/json-schema-proposal-collaboration/\n", "headings": ["TL;DR","Background and motivations","Enter JSON Schema","Choosing a library","JSON Forms - now with Vuetify","Open source POSItivity","Look out for more","References"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/interoperability/", "title": "Interoperability", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/isr-part-three-where-does-crossref-have-the-most-impact-on-helping-the-community-to-assess-the-trustworthiness-of-the-scholarly-record/", "title": "ISR part three: Where does Crossref have the most impact on helping the community to assess the trustworthiness of the scholarly record?", "subtitle":"", "rank": 1, "lastmod": "2022-10-17", "lastmod_ts": 1665964800, "section": "Blog", "tags": [], "description": "Ans: metadata and services are all underpinned by POSI.\nLeading into a blog post with a question always makes my brain jump ahead to answer that question with the simplest answer possible. I was a nightmare English Literature student. \u0026lsquo;Was Macbeth purely a villain?\u0026rsquo; \u0026lsquo;No\u0026rsquo;. *leaves exam*\nJust like not giving one-word answers to exam questions, playing our role in the integrity of the scholarly record and helping our members enhance theirs takes thought, explanation, transparency, and work.", "content": "Ans: metadata and services are all underpinned by POSI.\nLeading into a blog post with a question always makes my brain jump ahead to answer that question with the simplest answer possible. I was a nightmare English Literature student. \u0026lsquo;Was Macbeth purely a villain?\u0026rsquo; \u0026lsquo;No\u0026rsquo;. *leaves exam*\nJust like not giving one-word answers to exam questions, playing our role in the integrity of the scholarly record and helping our members enhance theirs takes thought, explanation, transparency, and work.\nSome of the elements Amanda outlines in the previous posts in this series (Part 1, Part 2) really resonated from a product perspective:\nWe must be cautious that our best practices for demonstrating legitimacy and identifying deceptive behaviour do not raise already-high barriers for emerging publications or organizations that present themselves in ways that some may not recognize as professional standards. Disruption is different from deception. Crossref has an opportunity to think about how to identify deceptive actions and pair that with our efforts to bring more people on board and support their full participation in our ecosystem.\nWe don\u0026rsquo;t have the means or desire to be the arbiter of research quality (whatever that means). However, we operate neutrally, at the center of scholarly communications, and we can help develop a shared consensus or framework. Our metadata elements and tools can be positioned to signal or detect trustworthiness. 
An important distinction is that we can play a role in assessing legitimacy (activities of the actors) but not in quality (calibre of the content itself).\nCrossref has lots of plans (and lots to do) to improve our role in ISR Rather than a long list of things we want to do in terms of tools, services, and functionality, it feels more manageable to break this work into three key areas.\n1. Collecting better information in better ways We think many elements of the metadata our members record with us help expose important information about the research, e.g., authors, publication dates, and abstracts. We also help our members assess submissions for originality via our Similarity Check service, and the ongoing migration to iThenticate V2 aims to better support this aspect of the publication process.\nBeyond this, as Amanda points out, \u0026lsquo;once members start registering their content, their metadata speaks about their practices\u0026rsquo;. Seeing who published a work along with the metadata they provide; validated ORCID IDs to identify the authors, reference lists and links to related research and data, and important updates to the work via Crossmark, all contribute to showing not just the \u0026lsquo;what\u0026rsquo; but the \u0026lsquo;how\u0026rsquo; so that the community can use that information to support their decision-making.\nI always want to stress that this work is not just an \u0026lsquo;ask\u0026rsquo; for our members. We are moving in the same direction as we improve the things we do to support organizations in registering their records with us, answering their questions, working with partner organizations like PKP, consulting with our community on pain points, and thinking about how we can better enhance and facilitate their work. We\u0026rsquo;ve been fortunate that our community has taken the time to engage in discussions with Turnitin on iThenticate improvements, do user testing sessions as we build simple user interfaces to record grants, lead calls and conversations on improving grant metadata and supporting the uptake of ROR and data citation, and provide thoughtful feedback on our recent preprint on CRE metadata. This all helps us to explain, structure, and prioritize our product work.\nThere are also some closely related R\u0026amp;D-led projects that are already informing our thinking:\nA more responsive version of participation reports so that it\u0026rsquo;s easier for members to identify gaps in their metadata and compare against others. Making it easier to get metadata back in a format where members can easily redeposit it. Better matching to help us and our members augment the metadata they send us to add value to the work we all do. We said in the previous blog posts that we\u0026rsquo;ll pose questions about what kinds of metadata give what kind of levels of trustworthiness, and have previously highlighted the following activities:\nReporting corrections and retractions through Crossmark metadata. We know that our members are collecting this information, but often it isn\u0026rsquo;t making it through metadata workflows to us. We\u0026rsquo;re part of the NISO CREC (Communication of Retractions, Removals, and Expressions of Concern) working group with many of our members and metadata users, as this feels like something critical to address.\nAssessing originality using Similarity Check. On average, we\u0026rsquo;re seeing 320 new Similarity Check subscribers each year, with over 10 million checks being done each year by our members. 
Establishing provenance and stakeholders through ORCID and ROR. At the time of writing, we have over 30,000 ROR IDs in Crossref, and this is growing steadily across different record types. ROR is keen to support adoption and so are we. Acknowledging funding and other support through the use of the Open Funder Registry and registering grants metadata. This has improved in quality and completeness since we launched the Funder Registry in 2014 and with more comprehensive support for grants in more recent years. But we still have work to do, as this paper by Kramer and de Jonge points out: The availability and completeness of open funder metadata.\nCiting data for transparency and reproducibility, including linking to related research data. Scholix, MDC and STM Research Data groups. Demonstrating open peer review by registering peer review reports. Members have already recorded over 300,000 peer reviews with Crossref, opening up this information on their processes.\nIn your organization, what weight do you give these? We know that some of our members register some of these things in more volume than others - is that due to their perceived value, technical limitations, or \u0026lsquo;we\u0026rsquo;re working on it, give us time?\u0026rsquo; Do you think of them in the context of the integrity of the record or are we off the mark? Are there other things we haven\u0026rsquo;t mentioned in this blog that we could capture, report on and highlight? 2. Disseminating this information and supporting its downstream use We want to make it as easy as possible for everyone to access and use the metadata our members register with us. Especially as some of the biggest metadata users are our members and, more selfishly, us! But there\u0026rsquo;s no point collecting metadata to support ISR if it\u0026rsquo;s unwieldy and difficult to access and use.\nWe\u0026rsquo;re working on a project, described in the mid-year community update by a number of my colleagues to break down internal metadata silos and model it in a more flexible way. This will lend itself to better information collection and exchange, and support of the Research Nexus by building a relationships API to let anyone see all of the relationships Crossref can see between a given work and well, anything else related to it (citations, links to preprints, links to data to name but a few).\nPart of that work will involve supplementing the metadata our members register with high-quality, curated data from selected sources, making it clear where those assertions have come from.\nWe want our API to perform consistently and well, to contain all the metadata our members register, handle it appropriately, and be able to keep the information in it up-to-date.\nOur API will underpin the reports we provide our members (among other things) so that we can provide simple interfaces for organizations to check how they\u0026rsquo;re doing along with more functional requests. Do their DOIs resolve? Are they submitting metadata updates when they publish a correction? How much will they be billed in a given quarter? We have a lot of internal reporting and need to build more, and if we want to use these, chances are many others do too, so we should open those up.\n3. Trying to live up to POSI to underpin this work When I see a new project, initiative, tool or service in the research ecosystem the first thing I want to do is find out about the organization itself so that I can base some decisions on that. 
Lateral reading in action.\nAt Crossref, we want to show who we are beyond just our tools, services, and products and be transparent about our values. That\u0026rsquo;s why we have adopted the Principles of Open Scholarly Infrastructure or POSI for short. Now we need to meet these principles and we\u0026rsquo;re working towards that. POSI proposes three areas that an Open Infrastructure organization like Crossref can address to garner the trust of the broader scholarly community: accountability (governance), funding (sustainability), and protection of community interests (insurance). POSI also proposes a set of concrete commitments that an organization can make to build community trust in each area.\nSo POSI isn\u0026rsquo;t just opening code and metadata, it\u0026rsquo;s telling our community how we handle membership, governance, product development, technical and financial stability and security, holding our hands up when we\u0026rsquo;ve got something wrong, and actively looking to improve upon the things we do.\nAre you still reading? If so, you\u0026rsquo;ve done better than many of my examiners, I\u0026rsquo;m sure. So stay with us as we work together to ensure we bring quality, transparency, and integrity to the work we all do.\nThe next part in this series will report back on the feedback and discussions and potentially propose some new or adjusted priorities. Join us at the Frankfurt Book Fair this week (hall 4.2, booth M5) or comment on this post below.\n", "headings": ["Crossref has lots of plans (and lots to do) to improve our role in ISR ","1. Collecting better information in better ways","2. Disseminating this information and supporting its downstream use","3. Trying to live up to POSI to underpin this work"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/financials/", "title": "Financials", "subtitle":"", "rank": 4, "lastmod": "2022-10-10", "lastmod_ts": 1665360000, "section": "Operations & sustainability", "tags": [], "description": "In recent years, we have operated on a budget of around $12 million (USD). About one-third of our revenue comes from annual dues (e.g., membership fees, subscriptions) and two-thirds from services (e.g., Content Registration, Similarity Check document checking). Our fees are set and reviewed by the Membership \u0026amp; Fees committee, which includes our staff, board, and community members. This group also created a set of fee principles which were approved by the board in 2019.", "content": "In recent years, we have operated on a budget of around $12 million (USD). About one-third of our revenue comes from annual dues (e.g., membership fees, subscriptions) and two-thirds from services (e.g., Content Registration, Similarity Check document checking). Our fees are set and reviewed by the Membership \u0026amp; Fees committee, which includes our staff, board, and community members. This group also created a set of fee principles which were approved by the board in 2019.\nAbout 80% of our expenses are related to people - staff, benefits, and contracted support. 20% of our costs are everything else - hosting costs, licensing fees, events, and costs to do business like banking fees and insurance.\nEach year we strive to generate a small operating net and have been able to do so nearly every year.\nWe also maintain a reserve fund to support long-term sustainability.
An Investment Committee was formed in 2021 to update our investing policies, and we will share more later this year.\nBelow is a look at how our operations have changed over time.\nAnnual financial reporting As a not-for-profit, we are tax-exempt, and to maintain that status, we undergo a financial audit each year by an independent accounting firm. Our auditors prepare our Form 990, which the US IRS requires and is made publicly available. It gives an overview of what we do, how we are governed, and detailed financial information.\nBelow are our recent Form 990s.\n2017 Form 990\n2018 Form 990\n2019 Form 990\n2020 Form 990\n2021 Form 990\n2022 Form 990\n", "headings": ["Annual financial reporting"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/isr-part-two-how-our-membership-approach-helps-to-preserve-the-integrity-of-the-scholarly-record/", "title": "ISR part two: How our membership approach helps to preserve the integrity of the scholarly record", "subtitle":"", "rank": 1, "lastmod": "2022-10-10", "lastmod_ts": 1665360000, "section": "Blog", "tags": [], "description": "In part one of our series on the Integrity of the Scholarly Record (ISR), we talked about how the metadata that our members register with us helps to preserve the integrity of the record, and in particular how \u0026rsquo;trust signals\u0026rsquo; in the metadata, combined with relationships and context, can help the community assess the work. In this second blog, we describe membership eligibility and what you can and cannot tell simply from the fact that an organisation is a Crossref member; why increasing participation and reducing barriers actually helps to enhance the integrity of the scholarly record; and how we handle the very small number of cases where there may be a question mark.", "content": "In part one of our series on the Integrity of the Scholarly Record (ISR), we talked about how the metadata that our members register with us helps to preserve the integrity of the record, and in particular how \u0026rsquo;trust signals\u0026rsquo; in the metadata, combined with relationships and context, can help the community assess the work. In this second blog, we describe membership eligibility and what you can and cannot tell simply from the fact that an organisation is a Crossref member; why increasing participation and reducing barriers actually helps to enhance the integrity of the scholarly record; and how we handle the very small number of cases where there may be a question mark.\nWho can become a Crossref member and do we check new applicants? Membership is open to organisations that \u0026ldquo;produce professional and scholarly materials and content\u0026rdquo;, and this is deliberately defined broadly. We’re a global community of members with content in all disciplines, in many formats, with all kinds of business models - research institutions, publishers, government agencies, research funders, banks, museums and many more. Essentially, if your content is likely to be cited in the research ecosystem and you consider it part of the evidence trail, then you’re eligible to join.\nWe ask organisations to complete an online application form and accept our member terms. On receipt of the application, we run a few very basic checks to ensure that:\nThe applicant can meet the membership criteria and seems to have the capacity to fulfill the obligations (and follow our code of conduct). 
We are legally permitted to accept them as a member (for example, we can’t accept applications from some countries due to sanctions). They haven\u0026rsquo;t previously had their Crossref membership revoked. They haven\u0026rsquo;t misrepresented themselves in the application (such as their location). The applicant or an affiliate is not already a member of Crossref (so that we can advise they join under a single membership fee). As long as the applicant can meet these requirements, and as long as they are able to pay any membership fees upfront for their first year of membership, they are able to become a Crossref member, get a DOI prefix, and start registering their metadata to share it with the global scholarly community. We are aware that some organisations in some regions may not be able to join Crossref independently. There may be barriers for them - the cost of membership fees, the fact that we only accept payment in US dollars, language barriers or technical barriers. To help increase participation globally, we work with sponsors in some regions. All sponsors facilitate membership for organisations who wish to participate in Crossref. They pay one central membership fee on behalf of all the members they work with, and they also pay content registration fees on behalf of their members. Many sponsors register content on behalf of their members, and even if they don’t, most provide local language and technical support. Sponsors are able to charge for their services, but it can be a very economical route for a member to join. In the last year, out of the 2,322 new members that we’ve welcomed, almost 58% joined via a sponsor.\nWe also waive registration fees for members in certain lower income countries who join via three of our sponsors, and we are planning to expand this program soon (pending board approval in November). [EDIT 2022-November-23: The new Global Equitable Membership (GEM) Program was approved and takes effect 1st January 2023]\nThe importance of keeping barriers to entry low As you can see, the checks that we run on new applicants are fairly limited in scope. In the last year, we’ve welcomed 2,322 new members and we only declined 39 applications. And 34 of these declined applications were effectively from one organisation whose membership was revoked in 2019.\nEven this minimal set of checks takes a lot of research and keeps our member support specialists very busy - thank you Sally Jennings and Robbykha Rosalien (as well as contractors Kim and Collin). So why shouldn\u0026rsquo;t we run more extensive checks on new member applicants? Why don’t we check the quality of their content, or that they are following best practices? Why don’t we decline membership for organisations that can’t demonstrate editorial integrity or that aren’t meeting 100% of the membership obligations from the start?\nNever mind the additional capacity that more extensive checks on the over 200 applicants we receive per month would entail; it\u0026rsquo;s more fitting with our mission to:\nenable equitable participation; and focus on evidence. Equitable participation Inclusivity is very important to us - after all, one of our organisational truths (the guiding principles for everything we do) is “come one, come all”, and this is mirrored in the POSI principles that commit us to broad stakeholder representation. We know that for new organisations, it may take them a while to be able to completely fulfil the membership obligations.
We support them with information to help them understand what being a participant in the Crossref community entails. These organisations would have less of a chance of developing better practices if we were to limit membership in Crossref to \u0026lsquo;proven\u0026rsquo; candidates. Besides, it would introduce a race condition; if joining and sharing metadata through Crossref is widely considered best practice, new entrants need to join Crossref in order to show that they are adopting best practices.\nTrust signals and the Research Nexus Secondly, it\u0026rsquo;s not our role to make such a call; we don’t have the expertise to decide if an organisation would be considered “good” at what they are producing; there are other organisations guiding in this area, such as with the Principles of Transparency and Best Practice in Scholarly Publishing. Instead, we focus on the decision-making tools, metadata, and relationships that can help provide trust signals for the community.\nOnce members start registering their content, their activity and metadata speak about their practices – others in the community can process that metadata, combined with its wider context, and identify trust signals to make their own decisions. That metadata can only be shared in an open and machine-readable way if an organisation joins Crossref and starts registering their records and underpinning data with us. To paint a more detailed picture of the scholarly record, our priority is to get more and varied organisations contributing to the research nexus, rather than putting up barriers and blockers until they are performing perfectly. If they aren’t acting in the best interests of the scholarly community, then having the metadata available to assess will quickly make that obvious and hopefully encourage changes - sunlight being the best disinfectant, as the saying goes.\nAs we said in the first ISR blog:\n“Crossref itself doesn’t assess the quality of content or the integrity of the research process but rather enables those who produce scholarly outputs to provide metadata (effectively evidence) about how they ensure the quality of content and how the outputs fit into the scholarly record.”\nIn our next post in the series, we\u0026rsquo;ll talk more about the workflow and decision-making tools we have in place and are planning to develop. We\u0026rsquo;ll pose questions about what kinds of metadata give what kind of levels of trustworthiness.\nHelping new members become “good Crossref citizens” Once an applicant becomes a member, we help them to completely fulfil the membership terms - ensuring that, for example, they register and display DOIs, keep their metadata up to date, and implement reference linking properly. We have a lot of documentation on our website, we run regular events and webinars, and we have a series of automated onboarding emails for new members to help them move through the key stages of the member journey from set up and onboarding to levelling up and using additional services like Crossmark and Similarity Check. Our staff are also on hand alongside Ambassadors and other members in our Community Forum. Speaking of POSI (and transparent operations) we receive around 3,000 emails per month with support requests so we are gradually moving support from closed 1:1 email to the more public and efficient community support forum.\nWe work with members who aren’t fulfilling the obligations to understand challenges and help explain what they need to do. 
This is currently reactive, but we have plans to automate checks on whether members are meeting the membership terms in future.\nOutside of confirming that our members are behaving as “good Crossref citizens”, there aren’t many other areas where the membership team typically gets involved. Our mission is to help preserve the integrity of the scholarly record by making the metadata provided by our members openly available in a machine-readable format. We don’t investigate our members’ business practices or take a deep dive into their editorial processes (such as peer review), and there are many areas where we aren’t able to get involved. For example, we cannot arbitrate title ownership disputes. It’s all about preserving the integrity of the scholarly record We do sometimes revoke membership, but this is for limited reasons: unpaid invoices; legal sanctions or judgments against the member or its home country; or contravention of the membership terms. Membership revocation due to unpaid invoices We spend a lot of time communicating with members who haven’t paid their invoices and ensuring they have the information they need to solve the problem. Revoking membership due to unpaid fees is an absolute last step for us, but financial sustainability means we can keep the organisation afloat and keep our infrastructure running.\nWhere members have unpaid fees, we eventually suspend their access to register new records and then ultimately revoke their membership if the fees remain unpaid. Once an organisation’s membership has been revoked, they would need to re-apply if they wanted to become a member again in the future. If accepted, the applicant would need to pay all outstanding invoices before re-joining. In March 2022, we revoked membership for around 140 members due to unpaid invoices (out of a total of over 17,000 active members). Membership revocation due to sanctions Occasionally, we are informed of sanctions that we need to comply with, such as the recent case of Russia invading Ukraine where each Russian member needed to be checked for individual sanctions and some were revoked. Such revocations have to be voted on by the Executive Committee and then ratified by the board. Read more information on our sanctions process. Membership revocation for cause Very occasionally there may be evidence that a member is in contravention of the membership terms. This may include:\nMisrepresentation in the original membership application Fraudulent use of identifiers or metadata Contravening the code of conduct Any other basis set forth in our governing documents. We always try to work together with the member to solve problems, and again, revoking membership is an absolute last step. The revocation has to be voted on by the Executive Committee and then ratified by the board. Our first ever revocation for cause was in July 2019 for OMICS, after the board voted that the US Federal Trade Commission\u0026rsquo;s ruling against them amounted to a cause for revocation. There have been a handful of cases since. For example, most recently in September this year we revoked membership for a member who was registering DOIs for journals with the ISSNs of similarly-named publications.\nThere’s more information about our processes to revoke membership on our website.\nMore participation for the win In conclusion, we believe that the more parties able to participate in Crossref and provide metadata and context for the research nexus, the more robust this makes the scholarly record.\nBut do you agree? 
Are these measures enough? What other information about our membership operations would help us be more transparent? As we said in our first blog, we need your help to establish whether our approach is still the right one, if we are missing anything and what else we might be able to do.\nHere’s how you can help:\nJoin the discussion about the integrity of the scholarly record on our community forum. Keep an eye out for future blog posts and meetings. We are having a small, in-person discussion prior to the Frankfurt Book Fair and will report on this in a future blog post. Sign up to attend Crossref LIVE22 for updates on these topics and all things Crossref. Join and support initiatives and organisations that we partner with or who use our metadata to look at ethical practices, for example, COPE, DOAJ, and OASPA, and review the Principles of Transparency in Scholarly Publishing, which these organisations worked on with WAME. ", "headings": ["Who can become a Crossref member and do we check new applicants?","The importance of keeping barriers to entry low ","Equitable participation","Trust signals and the Research Nexus","Helping new members become “good Crossref citizens”","It’s all about preserving the integrity of the scholarly record","Membership revocation due to unpaid invoices","Membership revocation due to sanctions","Membership revocation for cause","More participation for the win"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/operations/", "title": "Operations", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/isr-part-one-what-is-our-role-in-preserving-the-integrity-of-the-scholarly-record/", "title": "ISR part one: What is our role in preserving the integrity of the scholarly record?", "subtitle":"", "rank": 1, "lastmod": "2022-09-22", "lastmod_ts": 1663804800, "section": "Blog", "tags": [], "description": "The integrity of the scholarly record is an essential aspect of research integrity. Every initiative and service that we have launched since our founding has been focused on documenting and clarifying the scholarly record in an open, machine-actionable and scalable form. All of this has been done to make it easier for the community to assess the trustworthiness of scholarly outputs. Now that the scholarly record itself has evolved beyond the published outputs at the end of the research process – to include both the elements of that process and its aftermath – preserving its integrity poses new challenges that we strive to meet\u0026hellip; we are reaching out to the community to help inform these efforts.", "content": "The integrity of the scholarly record is an essential aspect of research integrity. Every initiative and service that we have launched since our founding has been focused on documenting and clarifying the scholarly record in an open, machine-actionable and scalable form. All of this has been done to make it easier for the community to assess the trustworthiness of scholarly outputs. 
Now that the scholarly record itself has evolved beyond the published outputs at the end of the research process – to include both the elements of that process and its aftermath – preserving its integrity poses new challenges that we strive to meet\u0026hellip; we are reaching out to the community to help inform these efforts.\nScholarly research, and therefore scholarly communications, are rapidly changing with the development of new approaches, technologies, and models. We need open scholarly infrastructure that can adapt to these changes and provide trust signals that enable assessment of the integrity of the research and reflect the ways that research is changing. Crossref has been changing and adapting by building on the concept of the scholarly record with our vision of the Research Nexus:\n\u0026ldquo;a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society\u0026rdquo;.\nThe foundation of the scholarly record and Research Nexus is metadata and relationships - the richer and more comprehensive the metadata and relationships in Crossref records, the more context there is for our members and for the whole scholarly research ecosystem. This will lead to a range of benefits from better discovery and saving researchers time to the assessment of research impact and research integrity. This is why Crossref is focused on enriching metadata to provide more and better trust signals while keeping barriers to membership and participation as low as possible to enable an inclusive scholarly record.\nWe want to engage with the community to emphasise this role, share our plans for the future, and get feedback to establish if we are heading in the right direction.\nThis blog explains our current position and will be followed by subsequent posts exploring all our services and plans in this area, as well as more details on our membership operations and policies.\nWhat is “Integrity of the Scholarly Record” (ISR), and how does it feed into Research Integrity? The US National Institutes of Health (NIH) defines research integrity as a set of values in scientific research: honesty; accuracy; efficiency; and objectivity. It’s concerned with the soundness of the process of science. As a subset of that, the outputs of the scholarly publishing process create a “scholarly record” which allows those in the community to find evidence and context to help confirm whether these values have been adhered to. The scholarly record is Crossref’s focus. This means that Crossref itself doesn’t assess the quality of content or the integrity of the research process but rather enables those who produce scholarly outputs to provide metadata (effectively evidence) about how they ensure the quality of content and how the outputs fit into the scholarly record (through reference links, ORCID iDs for authors, ROR IDs for affiliations, funding and licensing information, etc.).\nCrossref members include any organisation that produces research objects and materials (publishers, societies, universities, funders, research institutions, scholars) so they can establish a persistent record—tied to a persistent and unique identifier—for these outputs and supply metadata about this content in an open, machine-readable way. 
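To see what such an open, machine-readable record looks like in practice, here is a minimal TypeScript sketch that retrieves one work's metadata from the public REST API. The DOI is a placeholder example (substitute any registered DOI), and the handful of fields printed is illustrative rather than exhaustive.

```typescript
// Fetch the open, machine-readable metadata record for a single registered work.
// The DOI below is a placeholder example; substitute any registered DOI.
const doi = "10.5555/12345678";

async function showRecord(): Promise<void> {
  const res = await fetch(`https://api.crossref.org/works/${encodeURIComponent(doi)}`);
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  const record = (await res.json()).message;

  // A few of the fields that carry the contextual evidence discussed above.
  console.log("Title:", record.title?.[0]);
  console.log("Type:", record.type);
  console.log("References deposited:", record["reference-count"]);
  console.log("Cited by:", record["is-referenced-by-count"]);
  console.log("Funders listed:", (record.funder ?? []).length);
  console.log("Licenses listed:", (record.license ?? []).length);
}

showRecord();
```

Anyone, human or machine, can issue the same request without authentication, which is what open and machine-readable means in practice here.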
Maintaining this record for the long term, and adding in an important layer of context, establishes the integrity of the scholarly record as well as ensuring it is something that can be used by the whole community to improve scholarly research for generations to come.\nThe scholarly record is about more than just published outputs - it’s also a network of inputs, relationships, and contexts In the past, the Scholarly Record was seen as just the published outputs at the end of the research process - for example, journal articles or book chapters. But as the OCLC Research Group notes in their 2014 report on The Evolving Scholarly Record:\n“The boundaries of the scholarly record are in flux, as they stretch to extend over an ever-expanding range of materials.”\nOCLC describes how outputs at the “process” and “aftermath” stages of the research process are becoming increasingly important alongside the outputs at the traditional “outcomes” stage.\nWe like to take this even further. We think the evolving Scholarly Record is about more than just recording different types of works. As the above report notes “The scholarly record is evolving to have greater emphasis on collecting and curating context of scholarly inquiry […] One can imagine an article in quantitative biology published in a Wiley journal, the data for which resides in Dryad; the e-print in arXiv; and the conference poster in F1000. All of these materials may be considered part of the scholarly record, but no single institution will collect them all. Instead, access is achieved through a coordination of stewardship roles in which the scholarly record is decomposed into discrete, interrelated units that organisations specialize in collecting, preserving, and making available.”\nIt’s this interrelatedness that we think is important, and Crossref plays an important role in collecting, matching, and sharing those relationships. We now focus on this ‘nexus’ - so no longer primarily the different types of objects, but increasingly the interplay and relationships between them. The context, rather than the individual metadata elements, is what’s key.\nMartin Eve explores this idea further in his blog What is the Scholarly Record, suggesting “the scholarly record is a decentralized network of evolving truth assertions” and “Whether a truth assertion is part of the scholarly record is determined by another set of distributed assertions and their power configurations (say, through institutional affiliation) of the individuals who make such assertions.”\nBarbara Fister\u0026rsquo;s excellent talk about the importance of lateral reading as a way to understand information systems discusses how professional fact checkers “engaged in “lateral reading,” check other sources for context before spending time reading and analyzing a source.”\nFister highlights the “SIFT” approach from A Curriculum for Civic Online Reasoning, created by a group of educators at Stanford University for students to evaluate online content. And she argues that this approach is also useful for assessing scholarly materials noting\n“The networked, social nature of scholarship is worth making explicit”.\nWhere does Crossref fit in? Where do we have the most impact and opportunity? To address the question of our role in the integrity of the scholarly record, we need to understand several aspects that Crossref has to balance in this capacity, such as\nWe don’t have the means or desire to be the arbiter of research quality. 
However, we operate neutrally, at the centre of scholarly communications, and we can help develop a shared consensus or framework. Our metadata elements and tools can be positioned to signal or detect trustworthiness. An important distinction is that we can play a role in assessing legitimacy but not in assessing quality. We must be cautious that our best practices for demonstrating legitimacy and handling less-than-legitimate behaviour do not raise already-high barriers for emerging publications or organisations that present in ways that some may not recognise as professional standards. Disruption is different from deception. In discussions with our board this point has come out strongly: that Crossref has an opportunity to think about how to help the community identify deceptive actions and pair that with our efforts to bring more people on board. Addressing this issue may involve changes to our membership eligibility and processes, bylaws, policies, staff resources, and technical and metadata solutions; actually, a combination of all these aspects. Many of these are projects that are already planned and we have ideas for extending these. We regularly review the process we use for evaluating when and why to revoke membership for reasons other than non-payment. The volume of cases that we believe justify membership revocation\u0026mdash;while a tiny fraction of members\u0026mdash;is growing and does take staff and legal resources to address. Crossref and our members already help preserve the integrity of the scholarly record in significant ways Almost all of our services in some way touch on enabling people to express and evaluate trustworthiness; our mission statement commits us to “making research objects easy to find, cite, link, assess, and reuse [\u0026hellip;] all to help put research in context.”\nWe have, of course, specific tools and services that augment this activity too. Many members are active in:\nReporting corrections and retractions through Crossmark metadata. Assessing originality using Similarity Check. Conveying their stewardship via the public participation report. Establishing provenance and stakeholders through funding metadata, ORCID, and ROR. Acknowledging funding through the use of the Open Funder Registry and registering grants metadata. Citing data for transparency and reproducibility, including linking to related research data via Event Data. Demonstrating open peer review by registering peer review reports. As recently concluded in this Nature editorial calling for us to think beyond open references,\n“Depositing all relevant metadata in Crossref should become the norm in scholarly publishing.”\nFor those members just starting out on their journey, there are some immediate specific things that all members are able to do. Check your participation report and start registering more metadata to add that contextual layer:\nReferences Abstracts Corrections and retractions via Crossmark License links ORCID IDs for authors ROR IDs for affiliations Grant IDs for funding acknowledgements Cite data (preferably using DataCite DOIs in reference lists) Register all related objects such as versions and translations via relationships Register grants with Crossref (funder members).
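Beyond the participation report itself, anyone can get a rough sense of where the gaps are by querying the public REST API directly. Here is a rough sketch in TypeScript, not a Crossref tool: the member ID is a placeholder, the filters shown are the has-* filters from the public REST API documentation, and unlike the participation report this simple version makes no distinction between current and backfile records.

```typescript
// Rough metadata-coverage check for one member's registered works, using public
// Crossref REST API filters. MEMBER_ID is a placeholder; substitute a real member ID.
const MEMBER_ID = "1234";

const checks: Record<string, string> = {
  references: "has-references:true",
  abstracts: "has-abstract:true",
  "ORCID iDs": "has-orcid:true",
  licenses: "has-license:true",
  "funding information": "has-funder:true",
};

async function count(filter: string): Promise<number> {
  const res = await fetch(`https://api.crossref.org/works?filter=${filter}&rows=0`);
  const body = await res.json();
  return body.message["total-results"];
}

async function coverage(): Promise<void> {
  const total = await count(`member:${MEMBER_ID}`);
  for (const [label, filter] of Object.entries(checks)) {
    const n = await count(`member:${MEMBER_ID},${filter}`);
    console.log(`${label}: ${((100 * n) / total).toFixed(1)}% of ${total} records`);
  }
}

coverage();
```

Similar filters exist for several of the other elements in the list above (for example has-update-policy for Crossmark participation); the REST API documentation has the full set.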
By enabling our members to register their research objects and create metadata records about them that are freely and openly shared with the scholarly community, we facilitate them in being able to communicate the context and trustworthiness of that object.\nAnd within that metadata, they can create relationships not just between research objects but also between research stakeholders - the individuals, affiliations, funders, and other players involved. That’s why we work so closely with other parts of foundational scholarly infrastructure (ORCID, DataCite, ROR) and why we now have more than 30 funders registering grants with us. We want to help to capture, identify, and link together all these important elements and more to deliver context for the scholarly record.\nWe started this blog by talking about the changes that are taking place in the world of research and how the infrastructure needs to adapt and change. Although we have extensive plans in place to improve our contribution to ISR, we need your help to establish whether our role is still the right one, whether we are missing anything and what else we might be able to do.\nJoin the discussion about the integrity of the scholarly record, and the Research Nexus on our Community Forum. Keep an eye out for future blog posts and meetings. We are having a small, in-person discussion prior to the Frankfurt Book Fair and will report on this in a future blog post. Sign up to attend Crossref LIVE22 for updates on these topics and all things Crossref. Join and support initiatives and organisations that we partner with or who use our metadata to look at ethical practices in publishing, for example, COPE, DOAJ, and OASPA, and review the Principles of Transparency in Scholarly Publishing, which these organisations worked on with WAME. In the coming weeks, we will post more about our product and metadata plans and also about the specifics of membership operations and cases we see and how we’re currently addressing them.\nPlease share your thoughts! ", "headings": ["What is “Integrity of the Scholarly Record” (ISR), and how does it feed into Research Integrity?","The scholarly record is about more than just published outputs - it’s also a network of inputs, relationships, and contexts","Where does Crossref fit in? Where do we have the most impact and opportunity?","Crossref and our members already help preserve the integrity of the scholarly record in significant ways","Please share your thoughts!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2022-board-election/", "title": "2022 Board Election", "subtitle":"", "rank": 1, "lastmod": "2022-09-16", "lastmod_ts": 1663286400, "section": "Blog", "tags": [], "description": "I’m pleased to share the 2022 board election slate. Crossref’s Nominating Committee received 40 submissions from members worldwide to fill five open board seats.\nWe maintain a balance of eight large member seats and eight small member seats. A member’s size is determined based on the membership fee tier they pay. We look at how our total revenue is generated across the membership tiers and split it down the middle. Like last year, about half of our revenue came from members in the tiers $0 - $1,650, and the other half came from members in tiers $3,900 - $50,000.", "content": "I’m pleased to share the 2022 board election slate. Crossref’s Nominating Committee received 40 submissions from members worldwide to fill five open board seats.\nWe maintain a balance of eight large member seats and eight small member seats.
A member’s size is determined based on the membership fee tier they pay. We look at how our total revenue is generated across the membership tiers and split it down the middle. Like last year, about half of our revenue came from members in the tiers $0 - $1,650, and the other half came from members in tiers $3,900 - $50,000. We have four large member seats and one small member seat open for election in 2022.\nThe Nominating Committee presents the following slate.\nThe 2022 slate Tier 1 candidates (electing one seat): eLife, Damian Pattinson, Executive Director Pan Africa Science Journal, Oscar Donde, Editor in Chief Tier 2 candidates (electing four seats): Clarivate, Christine Stohn, Director of Product Management Elsevier, Rose L’Huillier, Senior Vice President Researcher Products The MIT Press, Nick Lindsay, Journals and Open Access Director Springer Nature, Anjalie Nawaratne, VP Data Transformation \u0026amp; Chief Business Architect Wiley, Allyn Molina, Group Vice President, Research Publishing Here are the candidates\u0026rsquo; organizational and personal statements You can be part of this important process by voting in the election If your organization is a voting member in good standing of Crossref as of September 6th, 2022, you are eligible to vote when voting opens on September 20th, 2022.\nHow can you vote? Your organization’s designated voting contact will receive an email the week of September 19th with the Formal Notice of Meeting and Proxy Form with concise instructions on how to vote. You will also receive a username and password with a link to our voting platform.\nThe election results will be announced at the LIVE22 online meeting on October 26th, 2022. Save the date! Incoming members will take their seats at the March 2023 board meeting.\n", "headings": ["The 2022 slate","Tier 1 candidates (electing one seat):","Tier 2 candidates (electing four seats):","Here are the candidates\u0026rsquo; organizational and personal statements","You can be part of this important process by voting in the election","How can you vote?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/elections/", "title": "Election process and results", "subtitle":"", "rank": 1, "lastmod": "2022-09-15", "lastmod_ts": 1663200000, "section": "Board & governance", "tags": [], "description": "About board elections Our board terms are three years, and one third of the board is eligible for election every year. There is a Nominating Committee consisting of three board members not up for re-election, and two Crossref members that are not on the board. The purpose of this committee is to review and create the slate each year for nominations to the board, ensuring a fair representation of membership.", "content": "About board elections Our board terms are three years, and one third of the board is eligible for election every year. There is a Nominating Committee consisting of three board members not up for re-election, and two Crossref members that are not on the board. The purpose of this committee is to review and create the slate each year for nominations to the board, ensuring a fair representation of membership.\nThe Nominating Committee meets to discuss the charge, process, criteria, and potential candidates, and puts forward a slate which is at least equal to the number of board seats up for election. 
Each year members are notified of the election schedule, but generally the election opens online in late September and election results are announced at our annual meeting in November.\nIn 2017 the Nominating Committee introduced two things for the first time: to issue an open call for expressions of interest; and to propose a slate with more candidates than seats available.\nIn 2018, the board voted to balance the board with equal numbers of small and large revenue category members starting from the 2019 election.\nPlease see the full current board list.\nThe election takes place in October or November each year during the Annual Meeting. Each member organization designates a voting contact when they join, and that member receives a unique link each September to place their vote on behalf of their organization.\nCurrent election After an open call for expressions of interest, the Nominating Committee selected the following slate of candidates for the 2024 election. The voting contact for each member organization will receive a ballot in September of 2024. The election results will be provided at the annual meeting on October 29th, 2024.\nCandidate statements | Election procedures | Proxy\nTier 1, Small member seats (electing two candidates) Katharina Rieck, Austrian Science Fund (FWF) Lisa Schiff, California Digital Library Ejaz Khan, Health Services Academy, Pakistan Journal of Public Health Karthikeyan Ramalingam, MM Publishers Tier 2, Large member seats (electing two candidates) Aaron Wood, American Psychological Association Dan Shanahan, PLOS Amanda Ward, Taylor and Francis Past elections 2023 election The following organizations were elected to the board for three-year terms commencing March 2024:\nBeilstein-Institut, Wendy Patterson Korean Council of Science Editors, Kihong Kim OpenEdition, Marin Dacos Vilnius University, Vincas Grigas Universidad Autónoma de Chile, Dr. 
Ivan Suazo Oxford University Press, James Phillpotts University of Chicago Press, Ashley Towne Candidate statements | Election procedures | Proxy\n2022 election The following organizations were elected to the board for three-year terms commencing March 2023:\nPan Africa Science Journal, Oscar Donde Clarivate, Christine Stohn Elsevier, Rose L’Huillier The MIT Press, Nick Lindsay Springer Nature, Anjalie Nawaratne Candidate statements | Election procedures | Proxy: in English, em Português, en Español, 한국어, en Français\n2021 election The following organizations were elected to the board for three-year terms commencing March 2022:\nCalifornia Digital Library, University of California, Lisa Schiff Center for Open Science, Nici Pfeiffer Melanoma Research Alliance, Kristen Mueller AIP Publishing (AIP), Penelope Lewis American Psychological Association (APA), Jasper Simons Candidate statements | Election procedures | Proxy: in English, em Português, en Español, 한국어, en Français\n2020 election The following organizations were elected to the board for three-year terms commencing March 2021:\nOpenEdition, Marin Dacos (France) Korean Council of Science Editors, Kihong Kim (South Korea) Scientific Electronic Library Online (SciELO), Abel Packer (Brazil) Beilstein-Institut, Wendy Patterson (Germany) Taylor \u0026amp; Francis/F1000, Liz Allen (United Kingdom) Oxford University Press, James Phillpotts (United Kingdom) Candidate statements | Election procedures\n2019 elections The following organizations were elected to the board for three-year terms:\nClarivate Analytics, Nandita Quaderi eLife, Melissa Harrison Elsevier, Chris Shillum Springer Nature, Reshma Shaikh Wiley, Todd Toler Candidate statements | Election procedures\n2018 election The following organizations were elected to the board for three-year terms:\nAfrican Journals OnLine, Susan Murray California Digital Library, Catherine Mitchell Association for Computing Machinery, Scott Delman Hindawi, Paul Peters American Psychological Association, Jasper Simons Proxy | Candidate statements | Election procedures\n2017 election The following organizations were elected to the board for three-year terms:\nAIP Publishing Inc., Jason Wilde F1000, Liz Allen MIT Press, Amy Brand SciELO, Abel Packer Vilnius Gediminas Technical University Press, Eleonora Dagiene Proxy | Candidate Statements | Election Procedures\n2016 election The following organizations were elected to the board for three-year terms:\nBMJ, Helen King eLife, Mark Patterson Elsevier, Chris Shillum IOP, James Walker Springer Nature, Wim van der Stelt Proxy | Notice | Candidates | Candidate Statements | Election Procedures\n2015 election The following organizations were elected to the board for three-year terms:\nIan Bannerman, Informa UK Paul Peters, Hindawi Bernie Rous, ACM Peter Marney, Wiley John Shaw, Sage Publications 2014 election The following organizations were elected to the board for three-year terms:\nJason Wilde, AIP Publishing Inc. Gary VandenBos, American Psychological Association Gerry Grenier, IEEE Eleonora Dagiene, Vilnius Gediminas Technical University Press Carsten Buhr, Walter de Gryuter Y. H. 
(Helen) Zhang, Zhejiang University Press 2013 election The following organizations were elected to the board for three-year terms:\nChris Shillum, Elsevier James Walker, IOP Publishing Kathleen Keane, Johns Hopkins University Press Kristen Fisher Ratan, PLOS Wim van der Stelt, Springer 2012 election The following organizations were elected to the board for three-year terms:\nIan Bannerman, Informa UK (Chair) Bernard Rous, ACM (Treasurer) Ahmed Hindawi, Hindawi Robert Campbell, John Wiley \u0026amp; Sons, Inc. Carol Richman, Sage Publications Sven Fund, Walter de Gruyter has replaced Rebecca Simon, University of California Press and will serve out their term expiring in 2014. 2011 election The following organizations were elected to the board for three-year terms:\nTerry Hulbert, American Institute of Physics Linda Beebe, American Psychological Association Gerry Grenier, IEEE Patricia Shaffer, INFORMS Rebecca Simon, University of California Press Chi Wai Lee, World Scientific Publishing 2010 election The following organizations were elected to the board for three-year terms:\nKaren Hunter, Elsevier Steven Hall, IOP Publishing Howard Ratner, Nature Publishing Group Stuart Taylor, The Royal Society Wim van der Stelt, Springer Science + Business Media 2009 election The following organizations were elected to the board for three-year terms:\nBernard Rous, ACM Ian Bannerman, Informa UK Robert Campbell, John Wiley \u0026amp; Sons Inc. Carol Richman, Sage Publications Ahmed Hindawi, Hindawi 2008 election The following organizations were elected to the board for three-year terms:\nTim Ingoldsby, American Institute of Physics Linda Beebe, American Psychological Association Paul Reekie, CSIRO Publishing Anthony Durniak, IEEE Patricia Shaffer, INFORMS Rebecca Simon, University of California Press 2007 election The following organizations were elected to the board for three-year terms:\nBeth Rosner, AAAS (Science) Karen Hunter, Elsevier Howard Ratner, Nature Publishing Group Thomas Connertz, Thieme Publishing Group Wim van der Stelt, Springer Science + Business Media Jerry Cowhig, IOP, who was appointed by the board to fill the Blackwell vacancy, was elected to serve out the term expiring in 2008. 2006 election The following organizations were elected to the board for three-year terms:\nJohn R. White, Association for Computing Machinery Nawin Gupta, University of Chicago Press Carol Richman, Sage Publications Ian Bannerman, Informa UK Limited (Taylor \u0026amp; Francis) Eric A. Swanson, John Wiley \u0026amp; Sons, Inc. 
2005 election The following organizations were elected to the board for three-year terms:\nMarc Brodsky, American Institute of Physics Linda Beebe, American Psychological Association Robert Campbell, Blackwell Publishing Anthony Durniak, IEEE Rebecca Simon, University of California Press Paul Weislogel, Wolters Kluwer 2004 election The following organizations were elected to the board for three-year terms:\nBeth Rosner, AAAS (Science) Karen Hunter, Elsevier Science Annette Thomas, Nature Publishing Group Thomas Connertz, Thieme Publishing Group Ruediger Gebauer, Springer Science + Business Media Please contact Lucy Ofiesh with any questions.\n", "headings": ["About board elections","Current election","Tier 1, Small member seats (electing two candidates)","Tier 2, Large member seats (electing two candidates)","Past elections","2023 election","2022 election","2021 election","2020 election","2019 elections","2018 election","2017 election","2016 election","2015 election","2014 election","2013 election","2012 election","2011 election","2010 election","2009 election","2008 election","2007 election","2006 election","2005 election","2004 election"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/membership-operations/", "title": "Membership operations", "subtitle":"", "rank": 4, "lastmod": "2022-09-13", "lastmod_ts": 1663027200, "section": "Operations & sustainability", "tags": [], "description": "Your organization needs to be a member of Crossref in order to get a DOI prefix so you can create Crossref DOIs and register content. All members agree to our membership terms to help ensure the persistence of our infrastructure.\nOn this page, find out more about our membership operations:\nNew member applications The membership terms Applications to become a sponsor Our billing cycle Canceling your membership When Crossref revokes membership Membership and legal sanctions Organizations that claim to be Crossref members New member applications We ask new applicants to complete an application form, and we then check that:", "content": "Your organization needs to be a member of Crossref in order to get a DOI prefix so you can create Crossref DOIs and register content. All members agree to our membership terms to help ensure the persistence of our infrastructure.\nOn this page, find out more about our membership operations:\nNew member applications The membership terms Applications to become a sponsor Our billing cycle Canceling your membership When Crossref revokes membership Membership and legal sanctions Organizations that claim to be Crossref members New member applications We ask new applicants to complete an application form, and we then check that:\nYou meet our membership criteria and can commit to fulfil the member terms. We are legally able to accept your organization as a member. Your organization hasn’t previously been a member of Crossref whose membership was revoked. Your organization hasn’t misrepresented themselves in the application. Your organization is not already a member of Crossref. Membership in Crossref is open to organizations that produce professional and scholarly materials and content, and we treat this broadly - “come one, come all” is one of our organizational truths. We’re a global community of members with content in all disciplines, in many formats, with all kinds of business models - research institutions, publishers, government agencies, research funders, museums and many more. 
But it’s important that members are able to meet the obligations of membership and work in a way which reflects our code of conduct.\nAs an organization that’s based in the US (and with significant activities in the UK and Europe) there are also some legal limits on our activity - for example, we cannot accept applications for membership from organizations based in some countries due to sanctions. Find out more about sanctions which impact on Crossref membership and the countries which are affected.\nWe may not be able to accept applications for membership from organizations which have previously had their membership revoked or who have misrepresented themselves in their application.\nAnd finally, if your organization is already a member of Crossref (for example, a different department of the same university) we may add you to the existing membership. This ensures that the same organization is not paying multiple membership fees.\nMembership terms New Crossref members agree to these member terms by ticking a box in the application form. These terms then remain in effect permanently - there is no need to renew each year. The terms will only be canceled if one of the following things happens:\nThe member cancels their membership - how to cancel your membership. Crossref revokes your membership - find out more. The terms are updated. Updating the membership terms If we change the Crossref membership terms, we will email the contact that each active member organization has identified as their Primary contact (previously known as \u0026ldquo;Business contact\u0026rdquo;) to let them know about the change. This will happen no fewer than sixty days prior to effectiveness. Members may cancel their membership if they don’t want to accept the new terms.\nApplications to become a sponsor Some small organizations want to register metadata for their research and participate in Crossref but are not able to join directly due to financial, administrative, or technical barriers. Sponsors are organizations who facilitate membership for these organizations by providing administrative, billing, technical, and—if applicable—language support to these organizations. There is quite a high bar to becoming a Crossref sponsor, so not all organizations who apply will be eligible.\nFind out more about becoming a sponsor\nOur billing cycle We send out annual membership fees (for members and sponsors) and annual subscription fees (for service providers and those subscribing to Metadata Plus or other paid-for metadata services) each January. We invoice for content registration on a quarterly basis. All invoices have a term of 45 days. We recommend that members pay using a credit or debit card through our payment portal, but other payment methods are available.\nFind out more about:\nOur billing process What happens if you don’t pay your invoices Canceling your Membership You need to contact us to cancel your membership. You aren’t able to pause your membership, so if you just stop using the service and don’t tell us that you want to cancel, we will still continue to send you annual membership fee invoices. And if you then want to start using the service again in future years, you will need to pay these outstanding membership invoices. If you actually contact us to cancel, we can stop these annual membership invoices from being created.\nHow to cancel your membership.\nRevoking membership As an organization that’s obsessed with persistence, we really try to avoid revoking membership. 
However, there are limited times when we have to. Find out more.\nMembership and legal sanctions Crossref’s mission is to support global research in and between all countries and we will always do that to the extent possible. But we are also bound by laws in the US, the UK, and the EU, and where sanctions apply we have to comply with these. Find out more\nOrganizations that claim to be Crossref members Occasionally authors contact us because a publisher they are working with has claimed to be a Crossref member, or has claimed to register DOIs, but instead just displays unregistered, DOI-like strings on their website.\nWe don\u0026rsquo;t currently have a list of Crossref members, and as it\u0026rsquo;s the parent publishing organization that joins Crossref (rather than individual journals) it\u0026rsquo;s sometimes hard to work out if a journal is published by a Crossref member or not.\nIf you are an author planning to submit a manuscript to a publisher who claims to be a Crossref member, check the DOIs that they currently display on their website. Copy the DOI as a link, and paste this into a browser. If the DOI resolves to an article, then that DOI has been registered with us, or with another Registration Agency. If the DOI resolves to the DOI Foundation Error page, then that DOI has not been registered.\nFor an example of a DOI that has NOT been registered, copy and paste https://0-doi-org.libus.csd.mu.edu/10.1234/56yuip into a browser. You will arrive at the DOI Foundation Error page, rather than a scholarly work.\nAlways refer to the Think, Check, Submit website to help you decide whether to submit to a particular journal. If you have any problems with a publisher, do speak to your university or the team at the Committee On Publication Ethics (COPE) for support and guidance.\nThis is by no means comprehensive, but we do keep a list of websites that we have been informed are displaying fake DOIs (e.g. unregistered, DOI-like strings), and/or claiming to be Crossref members. (Do contact us if you are one of these organizations and wish to be removed from the list).\nWe also hold a list of ex-members who have had their membership revoked as they contravened the membership terms.\n", "headings": ["New member applications","Membership terms","Updating the membership terms","Applications to become a sponsor","Our billing cycle","Canceling your Membership","Revoking membership","Membership and legal sanctions","Organizations that claim to be Crossref members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/accessibility-for-crossref-doi-links-call-for-comments-on-proposed-new-guidelines/", "title": "Accessibility for Crossref DOI Links: Call for comments on proposed new guidelines", "subtitle":"", "rank": 1, "lastmod": "2022-09-06", "lastmod_ts": 1662422400, "section": "Blog", "tags": [], "description": "Our entire community \u0026ndash; members, metadata users, service providers, community organizations and researchers \u0026ndash; create and/or use DOIs in some way so making them more accessible is a worthy and overdue effort.\nFor the first time in five years and only the second time ever, we are recommending some changes to our DOI display guidelines (the changes aren’t really for display but more on that below). 
We don’t take such changes lightly, because we know it means updating established workflows.", "content": "Our entire community \u0026ndash; members, metadata users, service providers, community organizations and researchers \u0026ndash; create and/or use DOIs in some way so making them more accessible is a worthy and overdue effort.\nFor the first time in five years and only the second time ever, we are recommending some changes to our DOI display guidelines (the changes aren’t really for display but more on that below). We don’t take such changes lightly, because we know it means updating established workflows. We appreciate the questions that prompted us to make this recommendation and we know it’s critical that we get community input on the proposed updates.\nTL;DR Here is a quick overview:\nDOIs and URLs themselves don’t really tell readers much. People with visual impairments rely on screen readers to read out loud the contents of a page. We’re asking for the title of each DOI to be added, in an ARIA (Accessible Rich Internet Applications) attribute, so these users understand what these links are for. Accessible text, as this kind of description is known, should be included for all links, but at this time, we’re specifically recommending it for landing pages of newly registered records. It’s not required, yet. We’re proposing a 2 year recommendation period and we want your feedback on the particulars, including timing and how we can help. Please take a short survey and/or get in touch and share your thoughts. We’ll finalize these recommendations after assessing the feedback. Please check back for updates. What is changing, when and why The proposed updates are meant to improve overall usability, particularly for people with visual impairments, by aligning our guidelines with modern accessibility requirements such as the new W3C recommendations and the European Accessibility Act. This means that assistive technologies such as screen readers can interpret DOI links.\nWhy are changes being recommended?\nDOIs are unique and persistent links to items in the scholarly record so it makes sense that they link to the full URLs for the associated content –for example, a journal article. The issue for people who rely on screen readers is that a DOI link doesn’t provide title or other information to give that link context. Users of screen readers need to know what the destination of a link is.\nThese users often lack the context that other users have; in fact, they may be presented with links in a document as a list. That\u0026rsquo;s why all links, not just DOI links, need what is called \u0026ldquo;accessible text.” Providing additional information for links requires ARIA (Accessible Rich Internet Applications) techniques. This speaks to the Web Content Accessibility Guidelines (WCAG), the standard guidelines for accessibility across the web, specifically success criterion 2.4.4 - Link Purpose (In Context), which aims to ‘help users understand the purpose of each link so they can decide whether they want to follow the link.’\nFor your feedback: recommended draft changes\nWe recommend the addition of an aria-label attribute for DOI links, containing as its value the descriptive title of the content represented by the DOI, so that screen readers can interpret DOI links. 
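As a rough illustration of how a platform might generate that markup, here is a minimal sketch in Python. It is not an official Crossref tool: it assumes the third-party requests library, the helper name build_accessible_doi_link is ours, and it simply looks up the registered title via the public REST API and emits an anchor element whose aria-label follows the "DOI for [title]" pattern shown in the example later in this post.

# Illustrative sketch only, not an official Crossref tool.
import html
import requests

def build_accessible_doi_link(doi: str) -> str:
    # Look up the registered title for this DOI via the public REST API.
    record = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    record.raise_for_status()
    titles = record.json()["message"].get("title") or ["Untitled"]
    # The proposed aria-label carries the descriptive title of the content.
    label = html.escape(f"DOI for {titles[0]}", quote=True)
    url = f"https://doi.org/{doi}"
    return f'<a href="{url}" aria-label="{label}">{url}</a>'

print(build_accessible_doi_link("10.5555/12345678"))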
This means that, while the DOI display itself doesn’t actually change, the link is enhanced with additional, contextual information for the user of assistive technology, in one of two ways, either:\nan aria-label attribute, described as ‘a way to place a descriptive text label on an object,’ identifying the destination, or an aria-describedby attribute pointing to where the destination is identified in the surrounding text. The updated HTML for a journal article*, for example, would be:\n\u0026lt;a href=\u0026quot;https://0-doi-org.libus.csd.mu.edu/10.5555/12345678\u0026quot; aria-label=\u0026quot;DOI for Toward a Unified Theory of High-Energy Metaphysics: Silly String Theory\u0026quot;\u0026gt;https://0-doi-org.libus.csd.mu.edu/10.5555/12345678\u0026lt;/a\u0026gt;\nHere the aria-label has been set to the value of the ‘title’ property as retrieved from the Crossref REST API at https://0-api-crossref-org.libus.csd.mu.edu/v1/works/10.5555/12345678.\n*Note that fields may vary slightly for different record types.\nThis proposed solution allows screen readers to read aloud to users the value of the aria-label attribute, instead of the full DOI in the link text.\nAt this time, we are recommending the change for landing pages in particular, but it can and should be applied to wherever DOI links appear, whenever feasible (more on this below).\nOur guidelines will continue to state that the DOI should always be displayed as a full URL link\u0026ndash;that will not change. Neither will content registration\u0026ndash;we are not asking for additional information in your deposits.\nIt’s not perfect, but it’s very worthwhile\nThis recommendation has some limitations worth noting but it must be said that there is no perfect solution.\nDOI links appear in lots of places - PDFs for one notable example. We reviewed and tested the recommendation with Bill Kasdorf, Principal, Kasdorf \u0026amp; Associates, LLC, Richard Orme, CEO, DAISY Consortium, and George Kerscher, Chief Innovations Officer, DAISY Consortium-Senior Officer, Global Literacy, Benetech, who graciously provided their time and expertise. EPUBs and websites proved to be easy to update; other formats, notably PDFs, less so. Widespread adoption of accessible DOIs is so important and we don’t want confusion or frustration to get in the way of making progress. We support and welcome efforts to include an ARIA attribute wherever DOI links appear, but we recommend focusing on landing pages, for now.\nPatrick Vale, Crossref Senior Front End Developer, explains that:\n”DOI links serve a very specific purpose: to provide the persistent link to an item in the scholarly record. And as such, they present an unusual set of requirements when balancing accurately presenting the information they encode - the persistent link - and making that link accessible, and understandable. With these proposed changes, we hope to strike this balance.“\nWe know it will be a challenge (more on that below) but we think it’s absolutely a worthwhile effort. Indeed, we are undertaking a project to update our own website to meet these recommendations and to review overall accessibility.\nAs Bill Kasdorf notes:\n“Most people have no idea how many people with visual impairments there are. Not only is it unfair to those people not to provide accessible text for links, the authors and publishers of the linked resource are missing a lot of readers. 
This update is a great move by Crossref, and every bit aligned with its mission to make scholarly content discoverable and consumable.”\nWe propose the following timeline, also for your feedback\nOnce finalized, following community feedback, the updated guidelines will be issued as a recommendation for a suggested period of two years starting next year, 2023. Beginning in 2025, the changes will be required for landing pages of newly registered content (and strongly recommended for existing registered content). Feedback on this approach and timeline is also encouraged.\nHelp us help you We are conscious that adding descriptive information to DOI links places a significant responsibility on the members and Service Providers creating and hosting these links. Therefore, we are also considering the creation of a tool to help with implementation. Initial discussions suggest this could be a JavaScript helper tool, which could be included on member websites. We also welcome feedback as to how such a tool might be implemented, and how it would best integrate with existing sites and workflows.\nCall for comments - by 1st November We hope that this proposal is a welcome one and that the timing is good for moving forward together toward greater accessibility of the scholarly record. We welcome questions, feedback and suggestions through 1st November via the survey below or by email to feedback@crossref.org\nLoading... Small changes, big impact We’re excited to make changes that improve accessibility and we look forward to the community’s response to our proposal. We will share aggregated feedback in an updated post later this year.\nA note on language Multiple sources were consulted to find the most appropriate and inclusive term(s) for users of screen readers in this context. “Print disabled,” for example, seemed to be a good candidate but was ultimately deemed likely to be confusing to a very global publishing audience, who often don’t physically print anything. Sources differ slightly, for example between the US and UK and of course, this English text may well be translated into other languages. 
Feedback on the terms used here is also very welcome.\nAdditional resources The Inclusive Publishing Hub (DAISY Consortium) National Center on Disability and Journalism (Arizona State University, US) Inclusive Language guidance (UK government) The American Psychological Association (APA) Bias-Free Language Disability Guide The Open Access Books Network (OABN) ", "headings": ["TL;DR","What is changing, when and why","Help us help you","Call for comments - by 1st November","Small changes, big impact","A note on language","Additional resources"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/doi-display-guidelines/", "title": "DOI Display Guidelines", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/martin-paul-eve-is-joining-our-rd-group-as-a-principal-developer/", "title": "Martin Paul Eve is joining our R&D group as a Principal Developer", "subtitle":"", "rank": 1, "lastmod": "2022-08-26", "lastmod_ts": 1661472000, "section": "Blog", "tags": [], "description": "I\u0026rsquo;m delighted to say that Martin Paul Eve will be joining Crossref as a Principal R\u0026amp;D Developer starting in January 2023.\nAs a Professor of Literature, Technology, and Publishing at Birkbeck, University of London, Martin has always worked on issues relating to metadata and scholarly infrastructure. In joining the Crossref R\u0026amp;D group, Martin can focus full-time on helping us design and build a new generation of services and tools to help the research community navigate and make sense of the scholarly record.", "content": "I\u0026rsquo;m delighted to say that Martin Paul Eve will be joining Crossref as a Principal R\u0026amp;D Developer starting in January 2023.\nAs a Professor of Literature, Technology, and Publishing at Birkbeck, University of London, Martin has always worked on issues relating to metadata and scholarly infrastructure. In joining the Crossref R\u0026amp;D group, Martin can focus full-time on helping us design and build a new generation of services and tools to help the research community navigate and make sense of the scholarly record.\nMartin himself explains the logic of this move on his own blog, so I won\u0026rsquo;t attempt to do the same here other than to say:\npraxis makes perfect.\n(mic drop)\nCreated with DALL·E, an AI system by OpenAI with the prompt: \u0026lsquo;A bookwheel in the style of the 16th-century illustration by Agostino Ramelli and where the books are replaced by open laptops\u0026rsquo;\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/contact-thank-you/", "title": "Thank you for contacting us", "subtitle":"", "rank": 1, "lastmod": "2022-08-04", "lastmod_ts": 1659571200, "section": "", "tags": [], "description": "Thank you Thanks for getting in touch. We will reply by email as soon as we can. In the meantime, please check out our forum where other community members may be able to help sooner.", "content": "Thank you Thanks for getting in touch. We will reply by email as soon as we can. 
In the meantime, please check out our forum where other community members may be able to help sooner.\n", "headings": ["Thank you"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/arley-soto/", "title": "Arley Soto", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/flies-in-your-metadata-ointment/", "title": "Flies in your metadata (ointment)", "subtitle":"", "rank": 1, "lastmod": "2022-07-25", "lastmod_ts": 1658707200, "section": "Blog", "tags": [], "description": "Quality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, these create problems that get compounded down the line. No wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages we receive into the technical support team (we encourage you to continue to report these metadata errors).\nWe make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services - thus, we all want it to be accurate.", "content": "Quality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, these create problems that get compounded down the line. No wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages we receive into the technical support team (we encourage you to continue to report these metadata errors).\nWe make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services - thus, we all want it to be accurate. Manuscript tracking services, search services, bibliographic management software, library systems, author profiling tools, specialist subject databases, scholarly sharing networks - all of these (and more) incorporate scholarly metadata into their software and services. They use our APIs to help them get the most complete, up-to-date set of metadata from all of our publisher members. And of course, members themselves are able to use our free APIs too (and often do; our members account for the vast majority of overall metadata usage).\nWe know many organizations use Crossref metadata. We highlighted several different examples in our API case study blog series and user stories. Now, consider how errors could be (and often are) amplified throughout the whole research ecosystem.\nWhile many inaccuracies in the metadata have clear consequences (e.g., if an author’s name is misspelled or their ORCID iD is registered with a typo, the ability to credit the author with their work can be compromised), there are others, like this example of typos in the publication date, that may seem subtle, but also have repercussions. When we receive reports of metadata quality inaccuracies, we review the claims and work to connect metadata users with our members to investigate and then correct those inaccuracies.\nThus, while Crossref does not update, edit, or correct publisher-provided metadata directly, we do work to enrich and improve the scholarly record, a goal we’re always striving for. 
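A quick way to see exactly what has been registered for any record is to pull it straight from the public REST API. The following is an illustrative sketch only; it assumes the Python requests library and uses the example DOI on our test prefix, and the field names are those returned by the works endpoint.

# Illustrative sketch: fetch the registered metadata for one DOI and print
# a few of the fields this post goes on to discuss.
import requests

doi = "10.5555/12345678"  # example DOI on Crossref's test prefix
response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
response.raise_for_status()
work = response.json()["message"]

for field in ("title", "author", "page", "volume", "issue"):
    print(f"{field}: {work.get(field)}")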
Let’s look at a few common examples and how to avoid them.\nPagination faux pas First page marked as 1 In the XML registered \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;1\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;1\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; Related REST API query https://0-api-crossref-org.libus.csd.mu.edu/works?filter=type:journal-article\u0026select=DOI,title,issue,page\u0026sample=100\nMore on the problem Very little content begins and ends on page 1. Especially journal articles. But, many members may not know what the page range of the content will be when they register the content with us (perhaps the content in question is an ahead-of-print journal article and the member intends to update this page range later). The issue here is that page range is an important piece of the metadata that we use for citation matching. If the pagination registered with us is incorrect, and it differs from the pagination stated in the citation, our matching process is challenged. Thus, we might fail to establish a citation link between the two works. The page range beginning with page 1 is the most common pagination error that the technical support team sees.\nMore metadata does not mean better metadata.\nOther pagination errors In the XML registered \u0026lt;item_number item_number_type=\u0026#34;article-number\u0026#34;\u0026gt;1\u0026lt;/item_number\u0026gt; More on the problem Like first pages beginning with 1, few internal article numbers are 1. We see a disproportionate number of article number 1s in the metadata. Again, this can prevent citation matching. Mistakes happen in all aspects of life, including metadata entry. That said, if you, as a member, don’t use internal article numbers or other metadata elements that can be registered, a recommendation we’d make is: if you don’t know what the metadata element is, omit it. More metadata does not mean better metadata. If you’d like to know more about what the elements are, bookmark our schema documentation in Oxygen or review our sample XML files.\nIn the XML registered \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;121-123\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;129\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; More on the problem This content either begins on page 121, 122, or 123. It cannot start on all three pages. Ironically, registering a first page of 121-123 ensures that we will not match the article if it is included in a citation for another DOI with a first page of 121, 122, or 123.\nAuthor naming lapses Examples: Titles (Dr., Prof. etc.) in the given_name field; Suffixes (Jr., III, etc.) 
in the surname field; superscript number, asterisk, or dagger after author names (usually carried over from website formatting that references affiliations); full name in surname field\nIn the XML registered \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;DOCTOR KATHRYN\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;RAILLY\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;DOCTOR JOSIAH S.\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;CARBERRY\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;surname\u0026gt;Mahmoud Rizk\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;additional\u0026#34;\u0026gt; \u0026lt;surname\u0026gt;Asta L Andersen(\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; Related REST API queries https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=professor https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=doctor https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=ingeniero https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=junior https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=III More on the problem Neither Josiah nor Kathryn’s official given name includes ‘doctor,’ thus it should be omitted from the metadata. Including ‘doctor’ in the metadata and/or capping the authors’ names in the metadata does not result in additional accreditation or convey status. Instead, the result is to muddle the metadata record. As with page numbers in the metadata, accurate author names are crucial for citation matching.\nOrganizations as authors slip-ups Examples: The contributor role for person names is for persons, not organizational contributors, but we see this violated from time to time. Unfortunately, no persons are being credited with contributing to content that have these errors present in the metadata record.\nIn the XML registered \u0026lt;contributors\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;surname\u0026gt;Society\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; \u0026lt;person_name contributor_role=\u0026#34;author\u0026#34; sequence=\u0026#34;first\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;University of Melbourne\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;University of Melbourne\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;/contributors\u0026gt; Related REST API queries https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=society https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=university More on the problem We love seeing inclusion of organizational contributors in the metadata, when that metadata is correct. 
Unfortunately, we do see mistakes where organizations are entered as people and people are inadvertently omitted from the metadata record (sometimes omission of people in the contributor list is intentional, but other times it is a mistake). In the XML above, the organization was entered as an organizational contributor - the organization itself is being credited with the work. This is sometimes confused with an author affiliation or even a ROR ID. Our schema library and XML samples are a great place to start, if you’re interested in learning more about organizational contributors versus author affiliations.\nNull no-nos Examples: Too many times we see \u0026ldquo;N/A\u0026rdquo;, “null”, \u0026ldquo;none\u0026rdquo; in various fields (pages, authors, volume/issue numbers, titles, etc.). If you don’t have or know the metadata, it’s better to omit it for optional metadata elements than to include inaccuracies in the metadata record.\nIn the XML registered \u0026lt;journal_volume\u0026gt; \u0026lt;volume\u0026gt;null\u0026lt;/volume\u0026gt; \u0026lt;pages\u0026gt; \u0026lt;first_page\u0026gt;null\u0026lt;/first_page\u0026gt; \u0026lt;last_page\u0026gt;null\u0026lt;/last_page\u0026gt; \u0026lt;/pages\u0026gt; \u0026lt;person_name sequence=\u0026#34;first\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Not Available\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Not Available\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; \u0026lt;person_name sequence=\u0026#34;additional\u0026#34; contributor_role=\u0026#34;author\u0026#34;\u0026gt; \u0026lt;given_name\u0026gt;Not Available\u0026lt;/given_name\u0026gt; \u0026lt;surname\u0026gt;Not Available\u0026lt;/surname\u0026gt; \u0026lt;/person_name\u0026gt; Related REST API queries https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=null https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=none https://0-api-crossref-org.libus.csd.mu.edu/works?query.author=Not%20Available More on the problem Nulls and Not Availables, like many of the examples in this blog, are not simply agnostic when included in the metadata record. Including nulls in your metadata limits our ability to match references and establish connections between research works. These works do not expand and enrich the research nexus; quite the opposite. The incorrect metadata limits our ability to establish relationships between works.\nWhere to go from here? One thing we’ve said throughout this blog that we’ll reiterate here is: accurate metadata is important. It’s important in itself, and the metadata registered with us is heavily used by many systems and services, so think Crossref and beyond. 
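The "Related REST API queries" sprinkled through this post can also be run programmatically. Purely as an illustration (assuming the Python requests library; query.author is a fuzzy relevance query, so treat the counts as a rough screen rather than an exact error report), a metadata user might check how often the placeholder values described above show up:

# Illustrative only: count records matching obvious placeholder author names.
import requests

placeholders = ["null", "none", "Not Available", "doctor", "professor"]

for term in placeholders:
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.author": term, "rows": 0},
        timeout=10,
    )
    response.raise_for_status()
    total = response.json()["message"]["total-results"]
    print(f"query.author={term!r}: {total} matching records")

Seeing those counts is a useful reminder of how far beyond Crossref these records travel.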
In addition to that expanding perspective, there are practical steps members and metadata users can take to help us:\nAs a member registering metadata with us:\nmake sure we have a current metadata quality contact for your account and update us if there’s a change if you receive an email request from us to investigate a potential metadata error, help us if you do not know what to enter into a metadata element or helper tool field, please leave it blank; perhaps some of the examples of errors within this blog were placeholders that the responsible members intended to come back to - to correct in time; that’s also a practice to avoid if you find a record in need of an update, update it - updates to existing records are always free (we do this to encourage updates and the resulting accurate, rich metadata, so take advantage of it). As a metadata user:\nif you spot a metadata record that doesn’t seem right, let us know with an email to support@crossref.org and/or report it to the member responsible for maintaining the metadata record (if you have a good contact there) if you’re eager to confirm the last update of a metadata record, our REST API is a great resource; here’s a handy query to use as a starting point: this one returns records on our Crossref prefix 10.5555 that have been updated in 2022: https://0-api-crossref-org.libus.csd.mu.edu/prefixes/10.5555/works?rows=500\u0026filter=from-update-date:2022-01-01,until-pub-date:2022-12-31\u0026mailto=support@crossref.org Making connections between research objects is critical, and inaccurate metadata complicates that process. We’re continually working to better understand this, too. That’s why we’re currently researching the reach and effects of metadata. Our technical support team is always eager to assist in correcting errors. We’re also keen on avoiding those mistakes altogether, so if you are uncertain about a metadata element or have questions about anything included in this blog post, please do contact us at support@crossref.org. Or, better yet, post your question in the community forum so all members and users can benefit from the exchange. If you have a question, chances are others do as well.\n", "headings": ["Pagination faux pas","First page marked as 1","In the XML registered","Related REST API query","More on the problem","Other pagination errors","In the XML registered","More on the problem","In the XML registered","More on the problem","Author naming lapses","In the XML registered","Related REST API queries","More on the problem","Organizations as authors slip-ups","In the XML registered","Related REST API queries","More on the problem","Null no-nos","In the XML registered","Related REST API queries","More on the problem","Where to go from here?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/register-maintain-records/", "title": "Register and maintain your records", "subtitle":"", "rank": 1, "lastmod": "2022-07-22", "lastmod_ts": 1658448000, "section": "Documentation", "tags": [], "description": "Content Registration allows members to register and update metadata via machine or human interfaces. When you join Crossref as a member you are issued a DOI prefix. You combine this with a suffix of your choice to create a DOI, which becomes active once registered with Crossref. 
Content Registration allows members to register a DOI and deposit or update its associated metadata, via machine or human interfaces.\nBenefits of content registration Academic and professional research travels further if it’s linked to the millions of other published papers.", "content": " Content Registration allows members to register and update metadata via machine or human interfaces. When you join Crossref as a member you are issued a DOI prefix. You combine this with a suffix of your choice to create a DOI, which becomes active once registered with Crossref. Content Registration allows members to register a DOI and deposit or update its associated metadata, via machine or human interfaces.\nBenefits of content registration Academic and professional research travels further if it’s linked to the millions of other published papers. Crossref members register content with us to let the world know it exists, instead of creating thousands of bilateral agreements.\nMembers send information called metadata to us. Metadata includes fields like dates, titles, authors, affiliations, funders, and online location. Each metadata record includes a persistent identifier called a digital object identifier (DOI) that stays with the work even if it moves websites. Though the DOI doesn\u0026rsquo;t change, its associated metadata is kept up-to-date by the owner of the record.\nRicher metadata makes content useful and easier to find. Through Crossref, members are distributing their metadata downstream, making it available to numerous systems and organizations that together help credit and cite the work, report impact of funding, track outcomes and activity, and more.\nMembers maintain and update metadata long-term, telling us if content moves to a new website, and they include more information as time goes on. This means that there is a growing chance that content is found, cited, linked to, included in assessment, and used by other researchers.\nParticipation Reports give a clear picture for anyone to see the metadata Crossref has. See for yourself where the gaps are, and what our members could improve upon. Understand best practice through seeing what others are doing, and learn how to level-up.\nThis is Crossref infrastructure. You can’t see infrastructure, yet research—and researchers all over the world—rely on it.\nShow image × Download the content registration factsheet, and explore factsheets for other Crossref services and in different languages.\nHow content registration works To register content with Crossref, you need to be a member. You’ll use one of our content registration methods to give us metadata about your content. Note that you don’t send us the content itself - you create a metadata record that links persistently (via a persistent identifier) to the content on your site or hosting platform. Learn more about metadata, constructing your DOIs, and ways to register your content.\nYou should assign Crossref DOIs to and register content for anything that is likely to be cited in the scholarly literature.\nNo matter whether you register content using one of our helper tools, or creating your own metadata files, all metadata deposited with Crossref is submitted as XML, and formatted using our metadata deposit schema section. Explore our XML sample files to help you create your own XML.\nWhat types of resources and records can be registered with Crossref? 
We are working to make our input schema more flexible so that almost any type of object can be registered and distributed openly through Crossref. At the moment, members tend to register the following:\nBooks, chapters, and reference works: includes book title and/or chapter-level records. Books can be registered as a monograph, series, or set. Conference proceedings: information about a single conference and records for each conference paper/proceeding. Datasets: includes database records or collections. Dissertations: includes single dissertations and theses, but not collections. Grants: includes both direct funding and other types of support such as the use of equipment and facilities. Journals and articles: at the journal title and article level, and includes supplemental materials as components. Peer reviews: any number of reviews, reports, or comments attached to any other work that has been registered with Crossref. Pending publications: a temporary placeholder record with minimal metadata, often used for embargoed work where a DOI needs to be shared before the full content is made available online. Preprints and posted content: includes preprints, eprints, working papers, reports, and other types of content that has been posted but not formally published. Reports and working papers: this includes content that is published and likely has an ISSN. Standards: includes publications from standards organizations. You can also establish relationships between different research objects (such as preprints, translations, and datasets) in your metadata. Learn more about all the metadata that can be included in these records with our schema library and markup guides.\nObligations and fees for content registration You pay a one-time content registration fee for each content item you register with us. content registration fees are different for different types of content and sometimes include volume discounts for large batches or backfile material. You don’t pay to update an existing metadata record. It’s an obligation of membership that you maintain your metadata for the long term, including updating any URLs that change. In addition, we warmly encourage you to correct and add to your metadata, and there is no charge for redepositing (updating) existing metadata. Learn more about maintaining your metadata, and managing existing DOIs.\nYour content registration fees are billed quarterly in arrears. This means you’ll usually receive a bill at the beginning of each quarter for the content you registered in the previous quarter. The only exception is if you’ve only registered a small number of DOIs.\nTypes of metadata We collect many different types of metadata. 
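For a quick, programmatic view of the record types described above, the public REST API exposes a /types endpoint. This is an illustrative sketch only, assuming the Python requests library and the documented response shape for that endpoint:

# Illustrative sketch: list the work types known to the public Crossref REST API.
import requests

response = requests.get("https://api.crossref.org/types", timeout=10)
response.raise_for_status()
for item in response.json()["message"]["items"]:
    print(f"{item['id']}: {item['label']}")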
You can read more in our documentation.\nGetting Started with Content Registration Read through Getting started as a new Crossref member in our documentation.\n", "headings": ["Content Registration allows members to register and update metadata via machine or human interfaces.","Benefits of content registration ","How content registration works ","What types of resources and records can be registered with Crossref?","Obligations and fees for content registration ","Types of metadata ","Getting Started with Content Registration"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-i-think-about-ror-as-infrastructure/", "title": "How I think about ROR as infrastructure", "subtitle":"", "rank": 1, "lastmod": "2022-07-08", "lastmod_ts": 1657238400, "section": "Blog", "tags": [], "description": "The other day I was out and about and got into a conversation with someone who asked me about my doctoral work in English literature. I\u0026rsquo;ve had the same conversation many times: I tell someone (only if they ask!) that my dissertation was a history of the villanelle, and then they cheerfully admit that they don\u0026rsquo;t know what a villanelle is, and then I ask them if they\u0026rsquo;re familiar with Dylan Thomas\u0026rsquo;s poem \u0026ldquo;Do not go gentle into that good night.", "content": "The other day I was out and about and got into a conversation with someone who asked me about my doctoral work in English literature. I\u0026rsquo;ve had the same conversation many times: I tell someone (only if they ask!) that my dissertation was a history of the villanelle, and then they cheerfully admit that they don\u0026rsquo;t know what a villanelle is, and then I ask them if they\u0026rsquo;re familiar with Dylan Thomas\u0026rsquo;s poem \u0026ldquo;Do not go gentle into that good night.\u0026rdquo; So far, everyone has heard of it \u0026ndash; it\u0026rsquo;s a very well-known poem indeed. I then explain that \u0026ldquo;Do not go gentle into that good night\u0026rdquo; is a villanelle, and that a villanelle is a poetic form something like a sonnet. So far, everyone also knows what a sonnet is, which is why I use that as a comparison, even though a villanelle isn\u0026rsquo;t all that much like a sonnet, in my opinion. They\u0026rsquo;re both poetic forms, however, with a particular standard number of lines and a particular standard rhyme scheme, so in that sense they certainly are alike.\nOddly enough, I think my early background in the study of poetic form is very much of a piece with my new role here at Crossref as Technical Community Manager for ROR, the Research Organization Registry. Both poetic form and metadata are invisible to most people, but both are valuable infrastructure. Both poetic form and metadata involve generally-accepted practices and standards that differ between different groups of people and change over time. Both writing formal poetry and creating rich metadata can seem burdensome and rigid to some people, but to my mind, both are generative. A solid underlying foundation allows for all kinds of creativity to flourish on the surface.\nThat might be part of why as soon as I heard about ROR I understood its tremendous potential. As someone who\u0026rsquo;s worked in digital humanities and scholarly communication for over fifteen years, I\u0026rsquo;ve long appreciated the value of clean, standard, comprehensive metadata in general. 
For instance, I explained the origin and value of the Dublin Core metadata standard to many a history scholar in the Omeka workshops I often taught at THATCamp. Later, while overseeing the institutional repository at Virginia Tech University Libraries, I learned even more about both the importance and the difficulty of creating, acquiring, and providing good metadata. When the pandemic began in 2020, I learned more than I ever wanted to know about messy data as Community Lead for The COVID Tracking Project at The Atlantic.\nData and metadata are, let\u0026rsquo;s admit it, very hard to keep clean and consistent as they travel through multiple systems, and that\u0026rsquo;s why it\u0026rsquo;s important to regularize as much as we can through automatic means such as APIs that use agreed-upon standards. Scholarship is a network of networks, and common identifiers like DOIs and ORCIDs enable the interchange of information in those networks about scholarly outputs and scholars, and thus they enable scholarship itself. What could be more important than that?\nBut the organizations that employ, fund, and publish scholarly researchers have had a hard time keeping track of everything \u0026ldquo;their\u0026rdquo; researchers have given to the world. That\u0026rsquo;s the problem that ROR, \u0026ldquo;a community-led registry of open, sustainable, usable, and unique identifiers for every research organization in the world,\u0026rdquo; can help solve. In an ideal world, universities might use ROR IDs to track the research their faculty have produced, certainly, but they might also discover which universities their faculty\u0026rsquo;s co-authors most often come from. Funders might use ROR IDs to identify the research outputs that have benefited from their funds, certainly, but they might also analyze whether they are funding enough researchers from institutions in rural areas. Publishers might use ROR IDs to offer affiliation searching in their own public interfaces, certainly, but they might also create internal reports on compliance with institution-level transformative Open Access agreements. 
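As a sketch of the kind of lookup those scenarios rely on, the public ROR API can match a free-text affiliation to a ROR ID and a canonical organization name. The endpoint and response fields used here ("items", "chosen", "organization") are assumptions based on the ROR API rather than anything described in this post, so check the ROR documentation before relying on them.

import json
import urllib.parse
import urllib.request

# Minimal sketch: resolve a free-text affiliation string to a ROR ID using
# the ROR API's affiliation-matching endpoint (field names assumed).
def match_affiliation(affiliation_text):
    query = urllib.parse.urlencode({"affiliation": affiliation_text})
    with urllib.request.urlopen("https://api.ror.org/organizations?" + query) as resp:
        payload = json.load(resp)
    for item in payload.get("items", []):
        if item.get("chosen"):  # the match the service itself marks as best
            org = item.get("organization", {})
            return org.get("id"), org.get("name")
    return None  # no confident match

print(match_affiliation("Virginia Tech, Blacksburg, Virginia"))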
Once something like ROR is widely adopted, the vision of the Research Nexus becomes closer to reality: \u0026ldquo;A rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\u0026rdquo; ROR is all about the \u0026ldquo;organizations\u0026rdquo; part of that alluring vision.\nIf you\u0026rsquo;re curious about ROR and want to learn more (hey, that rhymes!), you might want to watch the highly informative presentation from September 2021 \u0026ldquo;Working with ROR as a Crossref Member\u0026rdquo;, in which you\u0026rsquo;ll learn several interesting things, including the following:\nROR itself is not an organization, but an initiative supported jointly by Crossref, DataCite, and the California Digital Library; Crossref members cited institutional affiliation identifiers as one of their top priorities in 2019, second only to abstracts; The specifics of how one recent ROR integrator, the open access journal publisher Hindawi, used the ROR API to create a typeahead widget in its manuscript submission system that replaces user-supplied free text with a standard institution name and a ROR ID behind the scenes, helping them to generate useful internal reports about institutional payments; and Crossref supports the submission of ROR IDs in its XML content registration process and makes ROR IDs available in its API. I\u0026rsquo;m also enthusiastically inviting you to get in touch with me if you\u0026rsquo;d like to learn more about ROR or if you\u0026rsquo;d like to tell me about your previous experience with ROR. And if you don\u0026rsquo;t get in touch with me, please be aware that I might well reach out to you – I\u0026rsquo;m eager to hear what you hope for from ROR, but also what you\u0026rsquo;re skeptical about. For, after all, I learn by going where I have to go – don\u0026rsquo;t we all?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/seeing-your-place-in-the-research-nexus/", "title": "Seeing your place in the Research Nexus", "subtitle":"", "rank": 1, "lastmod": "2022-06-22", "lastmod_ts": 1655856000, "section": "Blog", "tags": [], "description": "Having joined the Crossref team merely a week previously, the mid-year community update on June 14th was a fantastic opportunity to learn about the Research Nexus vision. We explored its building blocks and practical implementation steps within our reach, and within our imagination of the future.\nRead on (or watch the recording) for a whistlestop tour of everything – from what on Earth is Research Nexus, through to how it’s taking shape at Crossref, to how you are involved, and finally – to what concerns the community surrounding the vision and how we’re going to address that.", "content": "Having joined the Crossref team merely a week previously, the mid-year community update on June 14th was a fantastic opportunity to learn about the Research Nexus vision. 
We explored its building blocks and practical implementation steps within our reach, and within our imagination of the future.\nRead on (or watch the recording) for a whistlestop tour of everything – from what on Earth is Research Nexus, through to how it’s taking shape at Crossref, to how you are involved, and finally – to what concerns the community surrounding the vision and how we’re going to address that.\nSummary of presentations Click on image above to access the presentation.\nThe idea is simple in principle: scholarly records ought to be transparent – available to examine and learn from for all. Much of scientific production and communication these days has a heavy digital footprint, so the Nexus is simply a matter of connecting the loose strands, right? Yet, as the scholarly record is a reflection of the continuous progress made by multiple actors within the context of scientific structures and processes, bringing the Nexus to life is a little short of simple.\n“What we think of as metadata is expanding, and the notion of ‘record types’ is changing” – said Ginny Hendricks. A great majority of scholarly ‘objects’, whether they are data sets, research articles, monographs, or others, undergo many processes (including review, publication, licensing, correction, derivation) and influence knowledge and practice over time.\nMaking that progress visible and discoverable will allow for tracing the development of ideas and changes in our thinking over time. Transparency of the complete scholarly records will help to understand the impact of science funding and changing policies. It can support a more robust and comprehensive assessment of research, and contribute to improving integrity within, as well as public trust in, the sciences.\nThe Research Nexus concept was first introduced by Jennifer Lin in 2017 as “Better research through better metadata”. Important adaptations to the model were needed to break it out of the content-specific schema. Ginny also pointed out that the concept is shared among the scholarly infrastructure community, citing a report from 2015 by OCLC Research on conscious coordination for stewardship of the evolving scholarly record.\nPatricia Feeney has given us reasons for optimism in building a robust Nexus. She’s shown areas of greatest growth in metadata reported to Crossref and shared a public roadmap of types of information we’re asked to enable in the future. We’re seeing a true boom in dataset and peer review report registrations, and the relationship metadata for our records is improving too. At the dawn of defaulting to open references, 44% of records we hold have associated references and that is growing. Provision of the newly enabled affiliation information (ROR IDs) is on the rise, as is the funder information. Some conversations and questions followed, highlighting the need for further guidance in these areas.\nTo make a case for enriching metadata records, Martyn Rittman demonstrated examples of traceability of research influence on realities outside academia. He captured recent examples of data citations and other references present not just between scholarly papers, but also in policy documents and popular media. 
These allow for greater discoverability of literature – but also show the public influence and impact of the research and the work’s context in our wider society.\nWhile Martyn shared our blue-skies aspiration to streamline Crossref’s APIs to offer insight into all these relationships with a single service, Joe Wass grounded those ambitions in the reality of technical work underway. His team’s attention is divided between three main areas. They continue to maintain and debug our existing infrastructure. They are developing self-service solutions for members. Finally, they are mapping and planning improved infrastructure, evaluating technology against the Research Nexus vision.\nBringing it back to the source (of metadata), Rachael Lammey offered a very practical guide to key activities enabling Research Nexus that all members can take on now. She highlighted the benefits of collecting and registering data citations, ROR IDs, and grant funding information. She went on to talk about challenges of subject classification (at a journal level) that our research and development efforts are focusing on at the moment.\nSummary of discussions Publishing has changed dramatically and our members recognise increasing opportunities for transparency of the scholarly record. Breaking the distant vision of Research Nexus down into actionable chunks made it more relatable for call participants. Many reflected on seeing their place in it properly for the first time. Yet, challenges remain and many were brought to the fore in the discussions.\nThe reliability and usability of the technology for registering metadata with Crossref needs to improve. We need to do better in supporting multi-language and multi-alphabet information. Not just developing systems anew, but also streamlining the way content is registered and annotated, and continuing to disambiguate the competing identifiers. Different record types, chiefly books, present specific challenges in this regard. Finally, making all that metadata accessible and usable is key to enabling insights from the rich data we collectively make available.\nTechnology is important, but won’t overcome the barriers that exist in the mindsets. Siloed thinking means that publishers may not be sensitive to benefits that improved relationship metadata could have for colleagues working on assessment, even within the same institutions. Greater guidance or best practices for new identifiers, such as ORCID, ROR, and grants, would allow more publishers to get on board with the changes. Researchers often don’t help the cause either – many don’t realise the role and benefits of metadata for their work and are reluctant to provide rich information related to it, perceiving it as a bureaucratic burden.\nIn a nutshell, I learnt that – while the concept of Research Nexus is pretty complex – we’re all already participating in making it a reality. I’m grateful to the call participants for sharing their challenges and ideas so generously. It means we can work to address those. I’ll be sure to follow up on requests for support and clearer guidelines about citing data, recording ROR IDs and grants information in the metadata, and we’ll engage our community on complex topics of record updates (corrections, retractions and versions). Be sure to keep in touch with the conversations on the Community Forum. 
I’ll see you there!\n", "headings": ["Summary of presentations","Summary of discussions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/announcing-our-new-head-of-strategic-initiatives-dominika-tkaczyk/", "title": "Announcing our new Head of Strategic Initiatives: Dominika Tkaczyk", "subtitle":"", "rank": 1, "lastmod": "2022-06-10", "lastmod_ts": 1654819200, "section": "Blog", "tags": [], "description": "TL;DR A year ago, we announced that we were putting the \u0026ldquo;R\u0026rdquo; back in R\u0026amp;D. That was when Rachael Lammey joined the R\u0026amp;D team as the Head of Strategic Initiatives.\nAnd now, with Rachael assuming the role of Product Director, I\u0026rsquo;m delighted to announce that Dominika Tkaczyk has agreed to take over Rachael\u0026rsquo;s role as the Head of Strategic Initiatives. Of course, you might already know her.\nWe will also immediately start recruiting for a new Principal R\u0026amp;D Developer to work with Esha and Dominika on the R\u0026amp;D team.", "content": "TL;DR A year ago, we announced that we were putting the \u0026ldquo;R\u0026rdquo; back in R\u0026amp;D. That was when Rachael Lammey joined the R\u0026amp;D team as the Head of Strategic Initiatives.\nAnd now, with Rachael assuming the role of Product Director, I\u0026rsquo;m delighted to announce that Dominika Tkaczyk has agreed to take over Rachael\u0026rsquo;s role as the Head of Strategic Initiatives. Of course, you might already know her.\nWe will also immediately start recruiting for a new Principal R\u0026amp;D Developer to work with Esha and Dominika on the R\u0026amp;D team.\nWhat does this mean for R\u0026amp;D? Before I talk about what Dominika\u0026rsquo;s move means in practice, I just want to take a moment to thank Rachael for the time she spent working with us. Over the past year, she has injected a massive amount of energy into the group and rebuilt the team\u0026rsquo;s momentum. This is exactly what we asked her to do.\nRachael\u0026rsquo;s first task was to repatriate her two R\u0026amp;D colleagues, who we had loaned to work on other urgent projects. Dominika was the technical lead on the port and relaunching of the REST API. Esha was the technical lead for the ROR initiative. In addition, Rachael has been working with Esha, Dominika, Paul Davis, and me on several shorter-term strategic projects that are shaping our overall development strategy.\nExploring and implementing a new approach to building content registration front ends. This approach is schema-driven and bakes in localization and accessibility support from the start. The new approach is currently the basis for the grant registration tool that our Product \u0026amp; Tech teams are now testing with our new funder members. Exploring and ultimately rejecting a \u0026ldquo;pull-based\u0026rdquo; approach to registering metadata, where Crossref would harvest structured metadata from member landing pages instead of asking members to deposit it with us via XML. You are not really doing R\u0026amp;D unless some of your ideas fail. In this case, we quickly discovered that the logistics of crawling our members’ websites, combined with the sparsity of structured metadata in landing pages, made a pull-based approach fragile and impractical. Exploring the use of ML techniques to fill gaps in the journal classification data that is currently in the REST API. Gaining new data science badges in the process. 
Exploring alternative approaches to building community-extendable reporting tools using standard data science tooling and techniques. Exploring how we can help reduce support toil by using data science tools like notebooks to create new support tools and self-serve UIs for information frequently requested by members that can otherwise prove difficult to get using our existing tools. Looking at extending the matching technology previously developed by labs to try and better match funder grant information to research outputs. And this is just a sample of projects Rachael helped promote and prioritize. It is the nature of many of the larger R\u0026amp;D projects that you don\u0026rsquo;t see the immediate results until long after they\u0026rsquo;ve been conceived and put into motion. This means that Rachael has been working on some things over the past year that are not yet public.\nBut, with any luck, we may see some significant new developments in how Crossref collects and distributes information about significant updates to the scholarly record, including retractions and withdrawals. We are also likely to see more work to promote data citation amongst our members. And finally, we are likely to see an attempt to create a community-managed and open research classification taxonomy. Of course, as is the case with research projects, there is no guarantee that any of these nascent ideas/projects will make it into a production service. Still, if even one of them does, it will become as vital a part of open scholarly infrastructure as DOIs, ORCIDs, or ROR IDs are now.\nAnd we will have Rachael and the hard work of the R\u0026amp;D group, important cameos from others, and community input to thank for giving them the initial push to realization. So that\u0026rsquo;s a pretty good track record for just a year in the R\u0026amp;D group.\nPassing the torch And this is a track record I\u0026rsquo;m confident that Dominika can match as she takes over Rachael\u0026rsquo;s role.\nSoon after Dominika joined the Crossref R\u0026amp;D team, she started to expand her activities to include more production engineering practice, team leadership, and community outreach. She has also worked extensively with support and outreach, providing them with data science consulting and mentoring in software development. Her new role as the Head of Strategic Initiatives will continue this trend. She will spend less time prototyping software and analyzing data and more time liaising with our members and the broader community to understand their needs and design R\u0026amp;D projects to test approaches to meeting those needs. This means a lot more liaising with other Crossref teams, speaking with our members and the wider community, and participating in working groups and conferences.\nIt also probably means a lot less programming and analysis. But programming and building prototypes are critical to the R\u0026amp;D team. And so the first thing we will do is start recruiting for a new Principal R\u0026amp;D Developer to continue working alongside Esha on conducting experiments and developing POCs.\nI’m looking forward to the next year. 
With Rachael taking the role of Product Director and Dominika taking over as the Head of Strategic Initiatives, we are well-positioned to make profound technical and conceptual improvements to Crossref\u0026rsquo;s services while simultaneously working with the community to line up our next strategic priorities.\n", "headings": ["TL;DR","What does this mean for R\u0026amp;D?","Passing the torch"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rethinking-staff-travel-meetings-and-events/", "title": "Rethinking staff travel, meetings, and events", "subtitle":"", "rank": 1, "lastmod": "2022-06-07", "lastmod_ts": 1654560000, "section": "Blog", "tags": [], "description": "As a distributed, global, and community-led organisation, sharing information and listening to our members both online and in person has always been integral to what we do.\nFor many years Crossref has held both in-person and online meetings and events, which involved a fair amount of travel by our staff, board, and community. This changed drastically in March 2020, when we had to stop traveling and stop having in-person meetings and events.", "content": "As a distributed, global, and community-led organisation, sharing information and listening to our members both online and in person has always been integral to what we do.\nFor many years Crossref has held both in-person and online meetings and events, which involved a fair amount of travel by our staff, board, and community. This changed drastically in March 2020, when we had to stop traveling and stop having in-person meetings and events. Due to the hard work and creativity of our team and the support of our Ambassadors and Sponsors, we were able to move to exclusively online meetings and events and maintain connections with colleagues, members, and much of the scholarly research community.\nOnline meetings have benefits compared to in-person ones; they have a much lower carbon footprint, and they can be more inclusive because people don’t have to find the time and money to travel. But there are limitations to online meetings; individual connections made in person do become harder to maintain, and new connections are more difficult to make and grow online. Sometimes just by sitting with someone, meeting their team and drinking their tea, free-flowing conversation leads to real progress.\nBut with over 17,000 members in 150 countries, our small staff can’t be everywhere, and we need to consider the personal as well as the environmental impacts.\nWhen we started work on the 2022 budget last year, our staff and board took the opportunity to think about our approach, with the goal of not going back to ‘normal’. So we asked ourselves, now that we have a better sense of what works and what doesn\u0026rsquo;t, how can we make our travel and in-person meetings have a greater impact on our goals, while also traveling less and reducing our impact on the environment?\nWe decided that in the context of our mission and values, we had to take into account three key areas:\nThe environment and climate change Inclusion Work/life balance. We developed an updated strategy for in-person and online meetings from 2022 onwards along with a set of recommendations and commitments to reduce our carbon footprint. The commitments were approved by the board at its November 2021 meeting.\nOur plan for online and in-person meetings Online events will generally be aimed at broad groups, in multiple timezones, to inform, update, and test general ideas and assumptions at scale. 
In contrast, in-person events will be smaller, focusing on deep learning, co-creation, and collaborating through various formats such as workshops, roundtables, or sprints, ideally working toward a specific outcome. These smaller in-person meetings will be scheduled alongside other community events, so there will be fewer trips overall, with each trip more consolidated.\nEach in-person meeting will have stated goals such as recruiting and onboarding a new Sponsor, bringing our Ambassadors together to build relationships and share best practices, or getting experts together in a room to help decide important policies, improve some code, or plan new initiatives. At the moment, we are not planning \u0026lsquo;hybrid\u0026rsquo; events as we don\u0026rsquo;t believe they will help meet our goals.\nWhile online meetings and webinars provide a breadth of interactions, in-person meetings can provide greater depth and opportunities for more meaningful engagement and purposeful discussion, and it is this depth that we have missed over the last two and a half years. Therefore, we are identifying focus countries where we plan on engaging more with local community groups. Each country-level engagement plan includes outreach and communications activities and some in-person meetings.\nFactors and aims for selecting focus countries Inclusion is important for us and we are committed to supporting the needs of our community members worldwide. We aim to combine meaningful conversations with informational activities. We want to provide time in the day for technical problem solving and/or a more strategically focused session, both of which have worked well in the past. We hope to learn more about trends in our selected focus countries, including the challenges our members face, local publishing norms, barriers to participation in Crossref, and to understand and help to adapt government policies.\nWe consider a number of factors when selecting countries on which to focus our activities:\nWhere we have a relatively large number of members. Where we are seeing an increase in new members joining. Where we have not undertaken engagement activities in at least 3 years. Where we have good contacts to collaborate with, i.e., a national funder, a sponsor or ambassador, a government body, or another organization aligned with our mission. Where we have very few members but where research output is high according to other sources, in order to understand and overcome barriers to participating in Crossref. Where we can consolidate multiple engagement activities in one trip, for example run a LIVE (informational) meeting or workshop, develop relationships with a key Sponsor, or discuss national research policy with government representatives. Where we can coordinate our engagement efforts alongside other local community events. Our environmental commitments In line with rethinking how we engage with our members and making sure we do so in the most sustainable, inclusive, and impactful way, we are making the following commitments:\nCrossref staff will think strategically and consider environmental, inclusion, and work/life balance issues when they plan travel. We will make the most of in-person events by focusing on those that involve interaction, such as listening and learning from our members and users, deepening relationships, co-creating, and forming new alliances.\nWe will travel less and have fewer face-to-face meetings going forward compared with 2019 as a baseline year. 
The 2022 travel and events budget was reduced by 40% and set at 60% of the 2019 budget. Travel and in-person events for the first half of 2022 have been limited, so we will make this same commitment for 2023, still using 2019 as the baseline.\nCrossref will track the carbon footprint of staff travel to meetings and events. We will regularly review the data and find ways to reduce the environmental impact.\nCombine stakeholder visits with event trips and vice versa whenever possible (if you do 1 plane trip to a location 1000 miles away instead of 2 trips, you reduce your impact by 0.5t).\nAs previously planned before the COVID-19 pandemic, the Crossref LIVE Annual Meetings will remain online only and will be held in different time zones. Having them in different time zones will enable global sharing of updates with a lower environmental impact.\nCrossref board meetings will be reduced from three in-person meetings per year to one face-to-face and two online meetings per year.\nFewer staff will attend fewer in-person conferences and will combine them with other travel.\nFor Crossref staff meetings, it is important for our distributed staff to meet face-to-face as a whole organization and as teams. We will plan for one all-staff in-person meeting per year (at which there can also be team meetings). Additional team meetings will be based on the reduced travel and meetings budget. Where possible, team meetings will be combined with other meetings (e.g. conferences or other community events).\nWhile trips that combine meetings may mean longer time away from home, we will still try to avoid staff having to travel or be away on weekends. We will also:\nAvoid short-haul flights (under 2.5 to 3 hours) where trains are available. Book hotels within walking distance of the event locations (if safe) in order to reduce taxi use. Use public transport and trains (if efficient and safe). Select hotels that have good sustainability plans in place, seeking out ‘green’ hotels where available and within budget. Prioritize locations where the fewest staff have to travel or travel the shortest distances. Reporting From now on, we will:\nTrack staff travel, including the number of trips, miles flown, and the carbon impact. Estimate the carbon footprint of our two offices, staff home working, our data center, and our cloud infrastructure. Track all Crossref-hosted events - in-person and online - and review annually (what went well, what can be improved, how to further reduce carbon footprint) as part of the budgeting process. Many organizations are now rethinking how to go about travel, conferences, meetings, and work in general. The pandemic may have been the trigger for a big shift in the ways we work and interact, and not all of it was welcome or should continue; however, sometimes it takes a big event to give us the space to sit back, reflect, and change things for the better going forward. 
As always, we\u0026rsquo;ll evaluate these approaches over time.\nAll of this means we may be declining some in-person meetings (and when we do, please don’t take it personally) but we still look forward to engaging with our community in a purposeful way.\nThis feels like a good time to give a shout-out to all our Ambassadors and Sponsors around the world who are very important for insight and engagement, and we will continue to partner with them for both online and in-person meetings.\n", "headings": ["Our plan for online and in-person meetings","Factors and aims for selecting focus countries","Our environmental commitments","Reporting"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/vanessa-fairhurst/", "title": "Vanessa Fairhurst", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/terms/", "title": "Membership terms", "subtitle":"", "rank": 5, "lastmod": "2022-06-03", "lastmod_ts": 1654214400, "section": "Become a member", "tags": [], "description": "Updated June 2022\nThese Crossref Terms of Membership (these \u0026ldquo;Terms\u0026rdquo;) set forth the terms and conditions of membership in The Publishers International Linking Association, Inc. d/b/a Crossref (\u0026ldquo;Crossref\u0026rdquo;), a nonprofit corporation organized under the laws of New York, USA.\nBackground Crossref is a not-for-profit membership organization that exists to make scholarly communications better. Its mission is to \"make research outputs easy to find, cite, link, assess, and reuse.\" To that end, Crossref: manages and maintains a database of information (\"", "content": "Updated June 2022\nThese Crossref Terms of Membership (these \u0026ldquo;Terms\u0026rdquo;) set forth the terms and conditions of membership in The Publishers International Linking Association, Inc. d/b/a Crossref (\u0026ldquo;Crossref\u0026rdquo;), a nonprofit corporation organized under the laws of New York, USA.\nBackground Crossref is a not-for-profit membership organization that exists to make scholarly communications better. Its mission is to \"make research outputs easy to find, cite, link, assess, and reuse.\" To that end, Crossref: manages and maintains a database of information (\"Metadata\") that describes and identifies professional and scholarly materials and content (collectively, \"Content\") and persistent identifiers such as Digital Object Identifiers (\"Identifiers\") that point to or give context to the Content online; facilitates the deposit and retrieval of Metadata and Identifiers; enables linking among Content online through embedded reference citations; and offers other online information management tools. All of the above functions and offerings, including associated systems, hardware, software, and know-how, are referred to in these Terms as the \"Crossref Infrastructure and Services.\" Membership in Crossref is open to organizations that produce Content and otherwise meet the terms and conditions of membership established from time to time by Crossref, and to such other entities as Crossref may determine from time to time. Together with Crossref's Articles of Incorporation and Bylaws (collectively, the \"Crossref Governing Documents\"), these Terms govern membership in Crossref. 
By submitting a membership application, the applicant agrees to be bound by these Terms and, upon Crossref's approval of that application, and receipt of the first annual membership fee, the applicant becomes a \"Member.\" Terms Member's Rights. Subject to these Terms, the Crossref Governing Documents, and Crossref's policies and procedures as promulgated by Crossref's board and staff and made available on the Website (as defined below) from time to time, the Member shall: be entitled to use the Crossref Infrastructure and Services as set forth herein; and have the governance rights afforded to Members in the Crossref Governing Documents. Member's Obligations. As a condition of its membership, the Member shall comply with the provisions of these Terms, including this Section 2. Metadata Deposits. The Member is responsible for depositing accurate Metadata for each Content item: produced by the Member, and/or for which the Member otherwise has rights to cause such Content to be included in the Crossref Infrastructure and Services. All Content described in the two bullet points above is referred to in these Terms as the Member's Content. Timely Metadata Deposits. Prior to, or as soon as reasonably practicable after, online publication of the Member's Content, the Member shall deposit with Crossref the Metadata corresponding to such Content. All deposits of Metadata shall comply with Crossref's technical documentation and schemas, including fields, parameters and other metadata criteria, set forth from time to time in support and best practice documentation on Crossref's website (the \"Website\") and/or email notices. Rights to Content. The Member will not deposit or register Metadata for any Content for which the Member does not have legal rights to do so. Registering Identifiers. The Member shall assign an Identifier to each of its Content items, for registration within the Crossref Infrastructure and Services. Linking. Promptly upon becoming a Member, the Member shall embed the appropriate Identifier(s) within each reference citation appearing in the Member's Content. Reference Linking. Throughout the Term, the Member shall use best efforts to maximize linking through Identifiers to other Content within the Crossref Infrastructure and Services, known in these Terms as \"Reference Linking\". Display Identifiers. With respect to each Identifier assigned to the Member's Content, the Member shall use commercially reasonable efforts to (i) display each Identifier in a location and format that comply with the Crossref Display Guidelines, as updated on the Website from time to time (the \"Display Guidelines\"), and (ii) ensure each Identifier is hyperlinked so as to be citable. Maintaining and Updating Metadata. The Member shall ensure that each Identifier assigned to the Member's Content continuously resolves to a response page (a \"Response Page\") containing, at a minimum, (i) complete bibliographic information about the corresponding Content (including the Identifier), visible on the initial page, with reasonably sufficient information detailing how the Content can be cited and accessed, and/or (ii) a hyperlink leading to the Content itself, in each case in accordance with the Display Guidelines. The Identifier shall serve as the permanent URL link to the Response Page. The Member shall register the Response Page URL with Crossref, keep it up-to-date and active, and promptly correct any errors or variances communicated to the Member by Crossref. 
The Member shall be exclusively responsible for maintaining the accuracy of data associated with each Identifier relating to the Member's Content, and the validity and operation of the corresponding URL(s) containing the Response Page, and related pages. Some examples of failures to maintain and update Metadata consistent with this Section 2(h) include: 1) publishing or communicating Identifiers without registering them with Crossref; 2) withdrawing content without posting a notification and updating the record's URL/metadata with Crossref; or 3) registering new Identifiers with the Member's own prefix for content that already had Identifiers registered by a prior publisher. Archives. The Member shall use best efforts to contract with a third-party archive or other content host (an \"Archive\") (a list of which can be found here) for such Archive to preserve the Member's Content and, in the event that the Member ceases to host the Member's Content, to make such Content available for persistent linking. The Member hereby authorizes Crossref, solely in the event an Archive becomes the primary location of the Member's Content, to contract directly with such Archive for the purpose of ensuring the persistence of links to such Content. The Member agrees that, in the event that the Content permanently ceases to be maintained by the Member, Crossref is entitled to redirect Identifiers to an Archive or a \"Defunct DOI\" page hosted by Crossref. Content-Specific Obligations. Should the Member choose to register different types of Content and Metadata, such as but not limited to journal articles, book chapters, datasets, conference proceedings, preprints, components, data, peer review reports, versions, or relations, the Member shall be bound by all obligations applicable to each specific record type as set forth on the Website from time to time. Fees. The Member shall pay the Fees described in this Section 3. These Terms refer to Annual Fees and Content Registration Fees collectively as \"Fees.\" Annual Fee. The Member is responsible to pay an annual membership fee (the \"Annual Fee\"). The Annual Fee for a Member's first year of membership is invoiced as a prorated amount for the Member's initial calendar year of membership, to be paid in full for membership. Thereafter, the Annual Fee is invoiced at the beginning of each calendar year. Payment terms are 45 days from the date of invoice. Content Registration Fees. Crossref charges Members a Content Registration fee (collectively, \"Content Registration Fees\") to deposit content with Crossref, as more fully described on the Website from time to time. Content Registration Fees are invoiced on a quarterly basis. Payment terms are 45 days from the date of invoice. Wire Transfer Fees. The Member is responsible for any wire transfer fees and other ancillary costs incurred by Crossref that are associated with the Member's chosen payment methods. Fees for Optional Services. From time to time Crossref charges Members other optional service fees for various optional services offered by Crossref, if and to the extent elected by the Member. These are set forth on the Website and updated from time to time. Intellectual Property Rights. General License. 
Subject to these Terms, the Member hereby grants to Crossref and its agents a fully-paid, non-exclusive, worldwide license for any and all rights necessary to use, reproduce, transmit, distribute, display and sublicense Metadata and Identifiers corresponding to the Member's Content, in the reasonable discretion of Crossref in connection with the Crossref Infrastructure and Services, including all aspects of Reference Linking and Crossref's various other service offerings. Metadata Rights and Limitations. Except as set forth herein and without limiting Section 4(a) above, Crossref shall not use, or acquire or retain any rights in the deposited Metadata of a Member. Nothing in these Terms gives a Member any rights (including copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed) to any Metadata belonging to another Member. Crossref Intellectual Property. The Member acknowledges that, as between itself and Crossref, Crossref has all right, title and interest in and to the Crossref Infrastructure and Services, including all related copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed, with the exception of rights in the deposited Metadata as set forth in Section 4(b) or expressly provided elsewhere in writing. The Member shall not delete or modify any of Crossref's logos or notices of intellectual property rights on documents, online text or interfaces made available by Crossref. Distribution of Metadata by Crossref. Without limiting the provisions of Section 4 above, the Member acknowledges and agrees that all Metadata and Identifiers registered with Crossref are made available for reuse without restriction through (but not limited to) public APIs and search interfaces, which enhances discoverability of Content. Metadata and Identifiers may also be licensed to third party subscribers along with an agreement for Crossref to provide third parties with certain higher levels of support and service. Use of Marks. Crossref may use the Member's name(s) and mark(s) to identify the Member's status as a member of Crossref. The Member may identify itself as a Crossref member by placing the Crossref mark or Crossref badges (without modification) on its website, by referencing the code provided on the Website. The Member may also identify use of Crossref Identifiers and Metadata, for example within reference lists, using the label \"Crossref.\" Maintenance of the Crossref Infrastructure and Services. Crossref shall use commercially reasonable efforts to maintain the Crossref Infrastructure and Services and to make it continually available for use by Members. Term. These Terms shall remain in effect until and unless superseded by updated Crossref Terms of Membership amended as set forth in Section 18 below. Termination of Membership; Effect. Termination of Membership. 
A Member's Crossref membership may be terminated: By the Member for convenience upon written notice to Crossref; By the Member for cause (1) in the event of Crossref's material breach of these Terms, which breach remains uncured following 45 days' notice from the Member to Crossref (or is by its nature incapable of cure) or (2) in the event Crossref provides notice of a material amendment to these Terms pursuant to the provisions of Section 18 hereof, and the Member provides notice to Crossref within 60 days of such notice of the Member's objection to such amendment and its intention to terminate; and By Crossref upon written notice to the Member, in accordance with the Crossref Governing Documents, including for (1) a misrepresentation in the Member's membership application; (2) legal sanctions or judgments against the Member or its home country; (3) fraudulent use of Identifiers or Metadata; (4) failure to pay Fees due, which failure persists for 120 or more days following the initial invoice therefor; or (5) any other basis set forth in the Crossref Governing Documents. Review of Termination of Membership. Except where termination is on account of nonpayment of fees, the Executive Committee of Crossref's board shall review and ratify any Crossref decision to permanently terminate a Member's membership or any significant membership benefit (e.g., blocking access to or removing significant amounts of Metadata for multiple items of Content for an extended period), within 10 days of such decision. As part of this review, the Member will have an opportunity to be heard under such reasonable procedures as the board may determine in its good faith. Crossref or the Member may petition the Executive Committee to review and ratify any Crossref decision temporarily restricting the Member's access to or use of the Crossref Infrastructure and Services for a limited period, and the Executive Committee shall determine in its sole discretion whether to conduct such a review. Effect of Termination of Membership. An outgoing Member shall not be entitled to a refund of any Fees that have been paid or waiver of any Fees that have accrued, except that a Member will be entitled to a refund of any prepaid fees representing the remaining portion of the then-current term of such Member's membership in the event of a termination for cause pursuant to Section 9(a)(ii) above. Termination of Membership shall have no adverse effect on Crossref's intellectual property rights in any Metadata or upon any related licenses then in effect. Following termination of its membership, an outgoing Member shall have no further obligation to deposit Metadata with Crossref or to assign Identifiers to its Content, and Crossref shall have no further obligation to register such Identifiers. With respect to Metadata deposited and Identifiers registered prior to such termination: (i) Crossref shall have the right to keep, maintain and use such Metadata and Identifiers within the Crossref Infrastructure and Services; and (ii) the obligations of the Member set forth in Sections 2(h) (i), and (j) of these Terms will survive. Enforcement. Crossref shall take reasonable steps to enforce these Terms, provided that Crossref shall not be obligated to take any action with respect to any Metadata that is the subject of an intellectual property dispute, but reserves the right, in its sole discretion, to remove or suspend access from, to or through such Metadata and/or its associated Content or to take any other action it deems appropriate. 
Governing Law. These Terms shall be interpreted, governed and enforced under the laws of New York, USA, without regard to its conflict of law rules. All claims, disputes and actions of any kind arising out of or relating to these Terms shall be settled in Boston, Massachusetts, USA. Disputes. Alternative Dispute Resolution. The Member shall promptly notify Crossref of any claim, dispute or action, whether against other Members or Crossref, related to these Terms or any Identifiers or Metadata. Pursuant to the Commercial Arbitration Rules of the American Arbitration Association, a single arbitrator reasonably familiar with the publishing (including online publishing) and internet industries shall settle all claims, disputes or actions of any kind arising from or relating to the subject matter of these Terms between Crossref and the Member. The decision of the arbitrator shall be final and binding on the parties, and may be enforced in any court of competent jurisdiction. Injunctive Relief. Notwithstanding Section 12(a), no party shall be prevented from seeking injunctive or preliminary relief in anticipation, but not in any way in limitation, of arbitration. The Member acknowledges that the unauthorized deposit or use of Metadata would cause irreparable harm to Crossref, the Crossref Infrastructure and Services, and/or other Members, that could not be compensated by monetary damages. The Member therefore agrees that Crossref may seek injunctive relief to remedy any actual or threatened unauthorized deposit or use of Metadata. Indemnification. To the extent authorized by law, the Member agrees to indemnify and hold harmless Crossref, its representatives, and their respective directors, officers and employees, from and against any and all liability, damage, loss, cost or expense, including reasonable attorney fees, costs, and other expenses, to the extent arising from or resulting from such Member's or its agent's or representative's acts or omissions, breach of these Terms, or violation of any third-party intellectual property right. Limitations of Liability. NEITHER PARTY SHALL BE LIABLE TO THE OTHER FOR ANY INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, CONSEQUENTIAL DAMAGES OR LOST PROFITS ARISING FROM OR RELATING TO THESE TERMS OR THE CROSSREF INFRASTRUCTURE AND SERVICES, EVEN IF IT HAS BEEN INFORMED IN ADVANCE OF THE POSSIBILITY OF SUCH DAMAGES. NEITHER PARTY SHALL BE LIABLE TO THE OTHER FOR (I) ANY LOSS, CORRUPTION OR DELAY OF DATA OR (II) ANY LOSS, CORRUPTION OR DELAY OF COMMUNICATIONS WITH OR CONNECTION TO ANY CROSSREF SERVICE OR ANY CONTENT. Taxes. The Member is responsible for all sales and use taxes imposed, if any, with respect to the services rendered or products provided to the Member hereunder, other than taxes based upon or credited against Crossref's income. Other Terms. Independent Contractors. These Terms will not create or be deemed to create any agency, partnership, employment relationship, or joint venture between Crossref and any Member. The Member shall not have any right, power or authority to enter into any agreement for or on behalf of, or incur any obligation or liability of, or to otherwise bind, Crossref. No Third-Party Beneficiaries. Except to the extent expressly set forth herein, neither party intends that these Terms shall benefit, or create any right or cause of action in or on behalf of, any person or entity other than Crossref and the Member. No Assignment. 
A Member may not assign, subcontract or sublicense these Terms without the prior written consent of Crossref, and any attempted assignment in violation of the foregoing shall be void. Notices. Written notice under these Terms shall be given as follows: If to Crossref: by emailing member@crossref.org addressing Mr. Edward Pentz, Executive Director. If to a Member: To the name and email address designated by the Member as the Primary Contact (previously \"Business Contact\") in such Member's membership application. This information may be changed by the Member by giving notice to Crossref by email at member@crossref.org. The Member shall also designate a technical, business, voting, billing, and metadata quality contact, and advise Crossref of any changes to such information. Survival. Sections (and the corresponding subsections, if any) 2(g), (h), and (i), 4, 9, 10, 11, 12, 13, 14, and 16, and any other provisions that by their express terms or nature survive, and any rights to payment, shall survive the expiration or termination of these Terms. Headings. The headings of the sections and subsections used in these Terms are included for convenience only and are not to be used in construing or interpreting these Terms. Severability. If any provision of these Terms (or any portion thereof) is determined to be invalid or unenforceable, the remaining provisions of these Terms will not be affected thereby and will be binding upon the parties and will be enforceable, as though said invalid or unenforceable provision (or portion thereof) were not contained in these Terms. Entire Agreement. These Terms, together with any Addenda of Terms executed between Crossref and a Member, constitute and contain the entire agreement between Crossref and such Member with respect to the subject matter hereof, and supersede any prior or contemporaneous oral or written agreements. The \"Background\" section at the beginning of these Terms forms a part of these Terms and is incorporated by reference herein. Amendment. These Terms may be amended by Crossref, via updated Terms posted on the Website and emailed to each Member no fewer than sixty (60) days prior to effectiveness. By using the Crossref Infrastructure and Services after the effective date of any such amendment hereto, the Member accepts the amended Terms. These Terms may also be amended by mutual agreement of a given Member and Crossref by execution of an Addendum of Terms. Data Privacy. By providing Crossref with personal data which was provided to the Member by a natural person(s), including Member staff (the \"origin party\"), the Member guarantees that: the Member collected and processed the data in accordance with applicable law, including the General Data Protection Regulation; the Member acquired the origin party's informed consent to share the data with Crossref; the Member acquired the origin party's consent for the data to be transferred to the United States for processing. The Member further agrees that it will maintain appropriate mechanisms to ensure that it will provide natural person(s) whose personal data it provides to Crossref with a means to have access to, to correct, and to delete such data and understands that the burden is on the Member to communicate such corrections or deletions to Crossref.\nCrossref's Privacy Policy is located here. Compliance. 
Each of the Member and Crossref shall perform under this Agreement in compliance with all laws, rules, and regulations of any jurisdiction which is or may be applicable to its business and activities, including anti-corruption, copyright, privacy, and data protection laws, rules, and regulations.\nThe Member warrants that neither it nor any of its affiliates, officers, directors, employees, or members is (i) a person whose name appears on the list of Specially Designated Nationals and Blocked Persons published by the Office of Foreign Assets Control, U.S. Department of Treasury (\"OFAC\"), (ii) a department, agency or instrumentality of, or is otherwise controlled by or acting on behalf of, directly or indirectly, any such person; (iii) a department, agency, or instrumentality of the government of a country subject to comprehensive U.S. economic sanctions administered by OFAC; or (iv) is subject to sanctions by the United Nations, the United Kingdom, or the European Union. If you would like to apply to join, please visit our membership page, which describes the obligations and leads to an application form. Please contact our membership specialist with any questions.\n", "headings": ["Background","Terms"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/annual-call-for-board-nominations/", "title": "Annual call for board nominations", "subtitle":"", "rank": 1, "lastmod": "2022-05-31", "lastmod_ts": 1653955200, "section": "Blog", "tags": [], "description": "The Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in March 2023. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September.\nExpressions of interest will be due Friday, June 24th, 2022.\nAbout our board elections The board is elected through the “one member, one vote” policy wherein every member organization of Crossref has a single vote to elect representatives to the Crossref board.", "content": "The Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in March 2023. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September.\nExpressions of interest will be due Friday, June 24th, 2022.\nAbout our board elections The board is elected through the “one member, one vote” policy wherein every member organization of Crossref has a single vote to elect representatives to the Crossref board. Board terms are for three years, and this year there are five seats open for election.\nThe board maintains a balance of seats, with eight seats for smaller members and eight seats for larger members (based on total revenue to Crossref). This is in an effort to ensure that the diversity of experiences and perspectives of the scholarly community is represented in decisions made at Crossref.\nThis year we will elect four of the larger member seats (membership tiers $3,900 and above) and one of the smaller member seats (membership tiers $1,650 and below). You don’t need to specify which seat you are applying for. We will provide that information to the Nominating Committee.\nThe election takes place online and voting will open in September. Election results will be shared at the annual meeting in October. 
New members will commence their term in March 2023.\nAbout the Nominating Committee The Nominating Committee reviews the expressions of interest and selects a slate of candidates for election. The slate put forward will exceed the total number of open seats. The committee considers the statements of interest, organizational size, geography, gender, and experience.\n2022 Nominating Committee:\nAbel Packer, SciELO, Brazil, chair* Patrick Alexander, Penn State University Press, US Nisha Doshi, Cambridge University Press, UK Marc Hurlbert, Melanoma Research Alliance, US* Kihong Kim, Korean Council of Science Editors, South Korea* (*) indicates Crossref board member\nWhat does the committee look for The committee looks for skills and experience that will complement the rest of the board. Candidates from countries and regions that are not currently reflected on the board are strongly encouraged to apply. Successful candidates often demonstrate a commitment to or understanding of our strategic agenda or the Principles of Open Scholarly Infrastructure; hold positions within their organizations that may be underrepresented on the board currently; and/or have experience with governance or community involvement. The Nominating Committee will also review the member organization\u0026rsquo;s participation report.\nWho can apply to join the board? Any active member of Crossref can apply to join the board. Crossref membership is open to organizations that produce content, such as academic presses, commercial publishers, standards organizations, and research funders.\nBoard roles and responsibilities Crossref’s services provide central infrastructure to scholarly communications. Crossref’s board helps shape the future of our services, and by extension, impacts the broader scholarly ecosystem. We are looking for board members to contribute their experience and perspective.\nThe role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. The board is representative of our membership base and guides the staff leadership team on trends affecting scholarly communications. The board sets strategic directions for the organization while also providing oversight of policy changes and implementation. Board members have a fiduciary responsibility to ensure sound operations. Board members do this by attending board meetings, as well as joining more specific board committees.\nWhat is expected of board members? Board members attend three meetings each year that typically take place in March, July, and November. Meetings have taken place in a variety of international locations and travel support is provided when needed. Following travel restrictions as a result of COVID-19, the board adopted a plan to convene at least one of the board meetings virtually each year, and all committee meetings take place virtually. Most board members sit on at least one Crossref committee. Care is taken to accommodate the wide range of timezones in which our board members live.\nWhile individuals apply to join the board, the seat that is elected to the board ultimately belongs to the member organization. The primary board member also names an alternate who may attend meetings in the event that the primary board member is unable to attend. 
There is no personal financial obligation to sit on the board. The member organization must remain in good standing.\nBoard members are expected to be comfortable assuming the responsibilities listed above and to prepare and participate in board meeting discussions.\nHow to apply Please click here to submit your expression of interest. We ask for a brief statement about how your organization could enhance the Crossref board and a brief personal statement about your interest and experience with Crossref.\nPlease contact me with any questions at lofiesh@crossref.org\n", "headings": ["About our board elections","About the Nominating Committee","What does the committee look for","Who can apply to join the board?","Board roles and responsibilities","What is expected of board members?","How to apply"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2022-public-data-file-of-more-than-134-million-metadata-records-now-available/", "title": "2022 public data file of more than 134 million metadata records now available", "subtitle":"", "rank": 1, "lastmod": "2022-05-13", "lastmod_ts": 1652400000, "section": "Blog", "tags": [], "description": "In 2020 we released our first public data file, something we’ve turned into an annual affair supporting our commitment to the Principles of Open Scholarly Infrastructure (POSI). We’ve just posted the 2022 file, which can now be downloaded via torrent like in years past.\nWe aim to publish these in the first quarter of each year, though as you may notice, we’re a little behind our intended schedule. The reason for this delay was that we wanted to make critical new metadata fields available, including resource URLs and titles with markup.", "content": "In 2020 we released our first public data file, something we’ve turned into an annual affair supporting our commitment to the Principles of Open Scholarly Infrastructure (POSI). We’ve just posted the 2022 file, which can now be downloaded via torrent like in years past.\nWe aim to publish these in the first quarter of each year, though as you may notice, we’re a little behind our intended schedule. The reason for this delay was that we wanted to make critical new metadata fields available, including resource URLs and titles with markup.\nCrossref metadata is always openly available via our API. We recommend you use this method to incrementally add new and updated records once you’re up and running with an annual public data file. If you’re interested in more frequent and regular “full-file” downloads, consider subscribing to our Metadata Plus program. Plus subscribers have access to monthly snapshots in JSON and XML formats.\nEvery year our metadata corpus grows. The 2020 file was 65GB and held 112 million records; 2021 came in at 102GB and 120 million records. This year the file weighs in at 160GB and contains metadata for 134 million records, or all Crossref records registered up to and including April 30, 2022.\nTips for using the torrent and retrieving incremental updates Use the torrent if you want all of these records. Everyone is welcome to the metadata, but it will be much faster for you and much easier on our APIs to get so many records in one file. Here are some tips on how to work with the file.\nUse the REST API to incrementally add new and updated records once you’ve got the initial file. Here is how to get started (and avoid getting blocked in your enthusiasm to use all this great metadata!); a brief sketch of the incremental approach follows below.\n‘Limited’ and ‘closed’ references are not included in the file or our open APIs.
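To make the incremental-update tip above concrete, here is a minimal sketch (our own illustration, not official Crossref tooling) that polls the public /works endpoint with a from-index-date filter and cursor-based deep paging. The cut-off date and the mailto address are placeholder values to replace with your own; check the REST API documentation for current etiquette and rate limits.

```python
# Minimal sketch: pick up records indexed after the 2022 data file's cut-off
# (30 April 2022), using the public REST API with cursor-based deep paging.
# The cut-off date and mailto address are placeholders; replace them with your own.
import requests

BASE = "https://api.crossref.org/works"
params = {
    "filter": "from-index-date:2022-05-01",  # records (re)indexed after the file's cut-off
    "rows": 1000,                            # maximum page size per request
    "cursor": "*",                           # start a deep-paging session
    "mailto": "you@example.org",             # identify yourself for the "polite" pool
}

updated_dois = []
while True:
    message = requests.get(BASE, params=params, timeout=60).json()["message"]
    items = message["items"]
    if not items:
        break                                            # no more new or updated records
    updated_dois.extend(item["DOI"] for item in items)   # or merge full records into your copy
    params["cursor"] = message["next-cursor"]            # resume where this page ended

print(f"{len(updated_dois)} records to refresh in the local copy")
```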
And while bibliographic metadata is generally required, lots of metadata is optional, so that records will vary in quality and completeness.\nQuestions, comments, and feedback are welcome at support@crossref.org.\n", "headings": ["Tips for using the torrent and retrieving incremental updates"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/announcing-our-new-director-of-product-rachael-lammey/", "title": "Announcing our new Director of Product: Rachael Lammey", "subtitle":"", "rank": 1, "lastmod": "2022-05-12", "lastmod_ts": 1652313600, "section": "Blog", "tags": [], "description": "Unfortunately, Bryan Vickery has moved onto pastures new. I would like to thank him for his many contributions at Crossref and we all wish him well.\nI’m now pleased to announce that Rachael Lammey will be Crossref’s new Director of Product starting on Monday, May 16th.\nRachael’s skills and experience are perfectly suited for this role. She has been at Crossref since 2012 and has deep knowledge and experience of all things Crossref: our mission; our members; our culture; and our services.", "content": "Unfortunately, Bryan Vickery has moved onto pastures new. I would like to thank him for his many contributions at Crossref and we all wish him well.\nI’m now pleased to announce that Rachael Lammey will be Crossref’s new Director of Product starting on Monday, May 16th.\nRachael’s skills and experience are perfectly suited for this role. She has been at Crossref since 2012 and has deep knowledge and experience of all things Crossref: our mission; our members; our culture; and our services.\nIn all her roles at Crossref Rachael has demonstrated how community-focused product development can be done.\nStarting as a Product Manager for Similarity Check and Crossmark, she then led community discussions on text and data mining and taxonomies, introduced our support of preprints, and led the very successful ORCID Auto-update integration. She initiated our important partnership with the Public Knowledge Project including scoping and overseeing the joint plugin development work over the years. She helped to grow the Sponsors program, establish the LIVE informational events, oversaw the founding of our ambassador program, engaged more research funders and institutions, and became a go-to person for data citation expertise in our community.\nIn her brief time in our Research \u0026amp; Development team, she helped to kick off that group’s reinvigoration and has engaged with numerous new community and technical initiatives. Such relationships—together with her knowledge of our systems and API—have enabled her to be a key driver in the development and adoption of ROR and grants - two of the highest strategic priorities of recent years.\nRachael says:\n\u0026ldquo;Alignment in planning and focusing on delivering outcomes will be my initial priorities. I\u0026rsquo;m conscious that we have a lot in play and I want to support the product team in their existing and ambitious goals while working with the leadership team and our very diverse community to focus and prioritise our development roadmap. I\u0026rsquo;m really grateful for this opportunity and I am looking forward to working with our members, users, and other open infrastructure organisations in this new capacity\u0026rdquo;.\nOur staff and the board are very enthusiastic about Rachael\u0026rsquo;s appointment and we know our community will be too. 
Please join us in congratulating Rachael!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/similarity-check-whats-new-with-ithenticate-v2/", "title": "Similarity Check: what’s new with iThenticate v2?", "subtitle":"", "rank": 1, "lastmod": "2022-05-10", "lastmod_ts": 1652140800, "section": "Blog", "tags": [], "description": "Since we announced last September the launch of a new version of iThenticate, a number of you have upgraded and become familiar with iThenticate v2 and its new and improved features which include:\nA faster, more user-friendly and responsive interface A preprint exclusion filter, giving users the ability to identify content on preprint servers more easily A new “red flag” feature that signals the detection of hidden text such as text/quotation marks in white font, or suspicious character replacement A private repository available for browser users, allowing them to compare against their previous submissions to identify duplicate submissions within your organisation A content portal, helping users check how much of their own research outputs have been successfully indexed, self-diagnose and fix the content that has failed to be indexed in iThenticate.", "content": "Since we announced last September the launch of a new version of iThenticate, a number of you have upgraded and become familiar with iThenticate v2 and its new and improved features which include:\nA faster, more user-friendly and responsive interface A preprint exclusion filter, giving users the ability to identify content on preprint servers more easily A new “red flag” feature that signals the detection of hidden text such as text/quotation marks in white font, or suspicious character replacement A private repository available for browser users, allowing them to compare against their previous submissions to identify duplicate submissions within your organisation A content portal, helping users check how much of their own research outputs have been successfully indexed, self-diagnose and fix the content that has failed to be indexed in iThenticate. We’ve received some great feedback from iThenticate v2 users and user testers:\n“There are a lot of new and helpful features implemented in version 2 of iThenticate.”\n\u0026ndash; Beilstein Institut\n“The updates to the user interface make working with the new version a pleasure. It has a very modern feel and is easy to use, as an app on a phone. We particularly like being able to click on a link and easily exclude a source from view with just a few clicks. The response time and speed of download are also greatly improved which will cut down processing time on our end.”\n\u0026ndash; Frontiers\n“I like the ability to be able to exclude content directly from the report.”\n\u0026ndash; American Chemical Society\nMore information for administrators and users is available on the Turnitin website: iThenticate v2 documentation.\nUpgrading to iThenticate v2 In September, we started inviting new and existing Similarity Check subscribers using iThenticate in the browser to upgrade to this new version. And now some of the manuscript submission systems have completed their integrations with the new version of iThenticate too, so users of these systems can start to migrate. Morressier users are already using iThenticate v2, and in the next few days, we will be emailing all eJournalPress users. 
We know the other major manuscript submission systems are also working on their integrations, and we\u0026rsquo;ll be in touch with members using them as soon as they confirm they are ready.\nManuscript tracking system integrations All Similarity Check subscribers using a manuscript management system will particularly appreciate a closer integration with iThenticate v2 which means that users will be able to view their Similarity Report and investigate sources within their manuscript tracking system.\neJournalPress eJournalPress users will also be able to customise their iThenticate v2 settings via a configuration interface and to decide, for example, to include or exclude bibliographies from their Similarity Reports. The new integration will also show the top five matches returned by iThenticate directly in the eJournalPress interface.\neJournalPress configuration settings in iThenticate v2\nEditorial Manager and ScholarOne Aries (Editorial Manager) and Clarivate (ScholarOne) are planning to release their iThenticate v2 integrations later this year and we will be inviting users to upgrade in the coming months.\nPlease check our community forum for updates on manuscript tracking system integrations.\nMore new and improved features User-friendly PDF report “The report is clean and easy to read.”\n\u0026ndash; The National Academies of Sciences, Engineering, and Medicine\n“The clickable links will save us a considerable amount of time as they make it easy for the author to understand where the overlap is coming from, meaning we do not need to spend time clarifying overlap reports to the authors. The summary page is also very useful as authors and editors are easily able to see which sections have been included and excluded from the report.”\n\u0026ndash; Frontiers\nThe PDF version of the Similarity Report has been completely redesigned and can easily be downloaded, emailed and printed. It contains a summary of the report i.e. word count, character count, number of pages, file size, excluded sections, submission, and report dates as well as the similarity score and a list of the top sources with clickable links.\nFirst page of the Similarity Report in iThenticate v2\nSummary and clickable links in the new Similarity Report in iThenticate v2\nCustom section exclusion filter In iThenticate v2, users can now exclude sections that are standard such as authors, affiliations, ethics statements, acknowledgments, etc. from the Similarity Report which often impacts similarity scores. You can choose from the templates available and/or create your own custom section exclusions from the admin portal.\nCustom section exclusion filter in the iThenticate v2 admin portal\nSummary of excluded custom sections on the iThenticate v2 Similarity Report\n“The user interface is definitely more responsive than v1, especially when I am looking at the full-text viewing mode, scrolling through the text to compare matches, reading through the box of text in the matching source [\u0026hellip;] I also especially like the options around excluding, I was able to see our submitted work was also taken into the database and showed matches against the papers we’d uploaded already. 
Going forward, this is a really interesting thing for us, especially if we are looking at duplicated content in the same journal.”\n\u0026ndash; Taylor \u0026amp; Francis User reporting Details of user activity, including folder names, similarity scores, word count, and file format, are now also available in iThenticate v2 and downloadable as Excel and .csv files.\nUp next Product development Further enhancements to existing features and the interface, such as the view full-text mode, user groups, and custom section exclusions, are planned for this year. Paraphrase detection and citation matching are currently in development.\niThenticate v2 training iThenticate v2 documentation is available from the Turnitin website. Training videos and webinars will be available later in the year.\n✏️ Do get in touch via support@crossref.org if you have any questions about iThenticate v1 or v2, or start a discussion by commenting on this blog post below.\n", "headings": ["Upgrading to iThenticate v2","Manuscript tracking system integrations","eJournalPress","Editorial Manager and ScholarOne","More new and improved features","User-friendly PDF report","Custom section exclusion filter","User reporting","Up next","Product development","iThenticate v2 training"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/ambassadors/", "title": "Ambassadors", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/do-you-want-to-be-a-crossref-ambassador/", "title": "Do you want to be a Crossref Ambassador?", "subtitle":"", "rank": 1, "lastmod": "2022-04-14", "lastmod_ts": 1649894400, "section": "Blog", "tags": [], "description": "A re-cap We kicked off our Ambassador Program in 2018 after consultation with our members, who told us they wanted greater support and representation in their local regions, time zones, and languages.\nWe also recognized that our membership has grown and changed dramatically over recent years and that it is likely to continue to do so. We now have over 16,000 members across 140 countries. As we work to understand what’s to come and ensure that we are meeting the needs of such an expansive community, having trusted local contacts we can work closely with is key to ensuring we are more proactive in engaging with new audiences and supporting existing members.", "content": "A re-cap We kicked off our Ambassador Program in 2018 after consultation with our members, who told us they wanted greater support and representation in their local regions, time zones, and languages.\nWe also recognized that our membership has grown and changed dramatically over recent years and that it is likely to continue to do so. We now have over 16,000 members across 140 countries. As we work to understand what’s to come and ensure that we are meeting the needs of such an expansive community, having trusted local contacts we can work closely with is key to ensuring we are more proactive in engaging with new audiences and supporting existing members.\nWe know that Crossref still remains inaccessible to many around the world, and in line with our strategic goal to engage communities, we want to lower the barriers to participation.
Our Ambassadors are essential to us achieving this goal as we look to develop additional content in languages other than English, identify organizations to work closer with to support local research ecosystems, provide more in-person and online events in local time zones and languages, and do more in terms of open support via our community forum.\nWhat are our ambassadors up to now? We currently have a team of 30 ambassadors, spanning Indonesia, Turkey, Ukraine, India, Bangladesh, Colombia, Mexico, Tanzania, Cameroon, Nigeria, Russia, Brazil, USA, UAE, Australia, China, Malaysia, Mongolia, Singapore, and Taiwan. The program is reviewed annually, welcoming new faces and sometimes sadly saying goodbye to others. This enables us to continue improving how we work together and ensures the Ambassador team remains a diverse group of committed individuals that have the time and support from Crossref to fully participate in the program.\nOver the last 3 years, we’ve had some great successes alongside a few challenges, not least of which has been working across 15 countries during a pandemic. We have all experienced the additional personal and professional strain that COVID-19 brought along, including shifts in the way we work and anxieties in the way we go about our lives. Of course, it has also meant that all our interactions have been restricted to Zoom, which has many benefits but doesn’t compare to face-to-face interactions when it comes to building strong working relationships, particularly across language and cultural barriers.\nDespite this, our ambassador team helped us run 15 multi-lingual webinars last year, including Content Registration in Arabic, Getting Started with Books in Brazilian Portuguese, and an Introduction to Crossref in Chinese. They also helped us translate various materials and content into other languages, provided feedback on our new developments, took part in beta-testing, provided support to members on our community forum, and participated in calls to contribute to the program\u0026rsquo;s future.\nI love helping people get to know Crossref\u0026rsquo;s products and services.\nI was proud to work as Ambassador and give an online Chinese webinar to introduce Crossref and the services in Oct. 2021.\nI am glad to be of help to Spanish speakers who are not able to grasp all the Crossref information correctly because of a language barrier or because they don\u0026rsquo;t have the time to read and explore all the information available.\nMuy contento de poder formar parte como Embajador y con ello poder promover el uso y aprovechamiento de los productos de Crossref.\nI feel so blessed meeting with many diverse friends in Crossref ranging from Europe to Asia continents.\nFeeling happy by giving back knowledge to my regional community.\nThe future is ours to co-create As countries are slowly dropping restrictions and we are taking our first cautious steps into a potential ‘post-pandemic’ world, our Community Engagement and Communication team has been looking at what this means for our activities in 2022 and beyond.\nA big part of this is identifying local communities and groups to engage with to learn what challenges our members are facing, what barriers to participation in Crossref still exist, and how we can overcome these together. 
This practice is also fundamental to our vision of the Research Nexus––a rich and reusable open network of relationships connecting research organizations, people, things, and actions––which can only become a reality if everyone can fully contribute to the scholarly record.\nAs such, we would like to expand our Ambassador Program and particularly encourage applications from those based in the following countries:\nArgentina\nChile\nCanada\nCroatia\nEl Salvador\nGermany\nGhana\nIraq\nKenya\nNicaragua\nNigeria\nPeru\nPoland\nVietnam\nBy being one of our ambassadors, you will become a key part of the Crossref community; our first port of call for updates or to test out new products or services, become well connected to our wide network of members, and work closely with us to make scholarly communications better for all.\nIf you are interested in participating, please read more on our Ambassadors page. You can submit an application letting us know why you are interested, how you work with Crossref currently, and a bit more about yourself. We will then follow up with you to discuss your ideas and the program in more detail.\nThe Ambassador Program is quite flexible, so you can choose how and when you contribute based on your comfort levels and other commitments. However, it does come with some minimum requirements of attending two team calls a year, being responsive and letting us know if anything is preventing you from participating, and completing our annual feedback survey so we can continue to improve the program going forward. A good level of English and a firm understanding of our services and systems at Crossref is also a must to participate fully in the program and provide support to others in your local community. If you have just joined Crossref or want to learn more about how to work with us, then the Ambassador program may be too much for you right now, but our documentation has lots of helpful information and step-by-step guides, and you could also look at attending one of our events or joining our community forum.\nIf you have any questions, you can always contact us at feedback@crossref.org. We look forward to hearing from you!\n", "headings": ["A re-cap","What are our ambassadors up to now?","The future is ours to co-create"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/education/", "title": "Education", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/amendments-to-membership-terms-to-open-reference-distribution-and-include-uk-jurisdiction/", "title": "Amendments to membership terms to open reference distribution and include UK jurisdiction", "subtitle":"", "rank": 1, "lastmod": "2022-04-04", "lastmod_ts": 1649030400, "section": "Blog", "tags": [], "description": "Tl;dr Forthcoming amendments to Crossref\u0026rsquo;s membership terms will include:\nRemoval of \u0026lsquo;reference distribution preference\u0026rsquo; policy: all references in Crossref will be treated as open metadata from 3rd June 2022.\nAn addition to sanctions jurisdictions: the United Kingdom will be added to sanctions jurisdictions that Crossref needs to comply with.\nSponsors and members have been emailed today with the 60-day notice needed for changes in terms. 
Reference distribution preferences In 2017, when we consolidated our metadata services under Metadata Plus, we made it possible for members to set a preference for the distribution of references to Open, Limited, or Closed.", "content": "Tl;dr Forthcoming amendments to Crossref\u0026rsquo;s membership terms will include:\nRemoval of \u0026lsquo;reference distribution preference\u0026rsquo; policy: all references in Crossref will be treated as open metadata from 3rd June 2022.\nAn addition to sanctions jurisdictions: the United Kingdom will be added to sanctions jurisdictions that Crossref needs to comply with.\nSponsors and members have been emailed today with the 60-day notice needed for changes in terms. Reference distribution preferences In 2017, when we consolidated our metadata services under Metadata Plus, we made it possible for members to set a preference for the distribution of references to Open, Limited, or Closed. Prior to the 2017 change, we acted as a broker of 1:1 feeds of parts of metadata for parts of our community - clearly a role that was not scalable.\nWe are well underway to pay back technical debt on our 20-year-old metadata system and effectively rearchitect it. We therefore recently needed to decide whether to rewrite code for a capability that hardly any member was using. Just one member has chosen Closed, and Limited was the default for a while, but the vast majority of our members now prefer Open distribution. Additionally, bringing references in line with other metadata significantly simplifies this work and will speed up the technical development.\nThe Crossref Board discussed the issue in our meeting on 10th March 2022, and voted to remove the reference distribution policy set in 2017. All board motions go on our website, and the wording of this particular motion is:\nResolve that, based on a technical assessment, we will change the reference distribution policy so that all references registered with Crossref are treated the same as other metadata, following a planned transition.\nThis motion means that 60 days from today\u0026mdash;3rd June 2022\u0026mdash;all references in Crossref will be open and after that available through our API. As with all other metadata, if members cannot make references available, or do not want them openly distributed, they can choose not to deposit them. However, depositing references is necessary in order to retrieve citation links from our members-only Cited-by API.\nCheck the documentation for information on how to deposit references and use Cited-by. Also look up your participation dashboard to see if you are already registering references and your current distribution setting.\nSanctions jurisdictions Following the UK departing from the European Union, we needed to add the United Kingdom as a separate jurisdiction that we must comply with, alongside the United Nations, the United States of America, and the European Union.\nWhere there are either relevant financial or governance-based sanctions against individuals, organisations, geographic regions, or whole countries, Crossref is legally bound to comply with these four different jurisdictions. These laws supersede our own governing bylaws.\nWe have launched a new operations and sustainability section of our website, which includes a sanctions page which we will keep updated with any changes and actions we\u0026rsquo;re taking.\nThe specific terms that will change The complete membership terms are online here. 
In the text below, any text to be removed is shown in \u0026lsquo;strike-through\u0026rsquo; text and any additions are in bold. These new terms will be in effect from 3rd June 2022.\n5. Distribution of Metadata by Crossref. Without limiting the provisions of Section 4 above, the Member acknowledges and agrees that, subject to the Member\u0026rsquo;s reference distribution preference,all Metadata and Identifiers registered with Crossref are made available for reuse without restriction through (but not limited to) public APIs and search interfaces, which enhances discoverability of Content. Metadata and Identifiers may also be licensed to third party subscribers along with an agreement for Crossref to provide third parties with certain higher levels of support and service. For the avoidance of doubt, the scope of Crossref\u0026rsquo;s distribution (if any) of a Member\u0026rsquo;s references is based on such Member\u0026rsquo;s reference distribution preference, as established by the Member in accordance with the \u0026ldquo;Reference Distribution\u0026rdquo; page on the Website.\n20. Compliance. Each of the Member and Crossref shall perform under this Agreement in compliance with all laws, rules, and regulations of any jurisdiction which is or may be applicable to its business and activities, including anti-corruption, copyright, privacy, and data protection laws, rules, and regulations.\nThe Member warrants that neither it nor any of its affiliates, officers, directors, employees, or members is (i) a person whose name appears on the list of Specially Designated Nationals and Blocked Persons published by the Office of Foreign Assets Control, U.S. Department of Treasury (“OFAC”), (ii) a department, agency or instrumentality of, or is otherwise controlled by or acting on behalf of, directly or indirectly, any such person; (iii) a department, agency, or instrumentality of the government of a country subject to comprehensive U.S. economic sanctions administered by OFAC; or (iv) is subject to sanctions by the United Nations, the United Kingdom, or the European Union.\nAs always, please get in touch with us via member@crossref.org with any questions.\n", "headings": ["Tl;dr","Reference distribution preferences","Sanctions jurisdictions","The specific terms that will change"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/with-a-little-help-from-your-crossref-friends-better-metadata/", "title": "With a little help from your Crossref friends: Better metadata", "subtitle":"", "rank": 1, "lastmod": "2022-03-31", "lastmod_ts": 1648684800, "section": "Blog", "tags": [], "description": "We talk so much about more and better metadata that a reasonable question might be: what is Crossref doing to help?\nMembers and their service partners do the heavy lifting to provide Crossref with metadata and we don’t change what is supplied to us. One reason we don’t is because members can and often do change their records (important note: updated records do not incur fees!). However, we do a fair amount of behind the scenes work to check and report on the metadata as well as to add context and relationships.", "content": "We talk so much about more and better metadata that a reasonable question might be: what is Crossref doing to help?\nMembers and their service partners do the heavy lifting to provide Crossref with metadata and we don’t change what is supplied to us. One reason we don’t is because members can and often do change their records (important note: updated records do not incur fees!). 
However, we do a fair amount of behind the scenes work to check and report on the metadata as well as to add context and relationships. As a result, some of what you see in the metadata (and some of what you don’t) is facilitated, added or updated by Crossref.\nMuch of the work is automated but some of it still requires manual intervention (sound familiar?). Here’s an overview:\nBefore registration Our open APIs allow for Crossref metadata to be used throughout research and scholarly communications systems and services, before and after records are registered with us. Those who have used a search function in something like a manuscript submission system, rather than having to hand key or copy and paste the information, will appreciate how these integrations reduce time, effort and the likelihood of errors in collecting metadata well before it gets to Crossref.\nFor one example, it’s very common for members to use the metadata to add DOIs to reference lists when preparing deposits. Of course, new members first need a prefix (and a memberID and name, but more on that later) in order to register content. We also provide a suffix generator for help in constructing DOIs. If you’re not sure how best to make use of existing metadata in deposits, we’ve got a few options for you. Questions are welcome.\nWe don’t often put it this way but we should: Crossref members rely on the metadata as much, if not more, than the rest of the community. More and better metadata directly benefits our members.\nUpon registration There are a number of ways we work with the metadata when deposits are received.\nChecking for uniqueness In order to avoid duplicate records, we check to make sure that a title or work hasn\u0026rsquo;t been registered before. Depending on what we find, a conflict report or failed registration may result. Adding DOIs to references When references come to us without DOIs, we’ll try to match and add them. ORCID auto-update We automatically update authors’ ORCID records (with their permission of course) whenever deposits include their ORCID iDs. Preprint to VoR reports We compare title information and provide notifications of matching records to members depositing preprints, to help them fulfill their obligation to link to Versions of Record (VoRs), where they exist. Relationships Like preprint to VoR links, components are another kind of relationship. These might be supplementary material such as figures we can link to the ‘parent’ record. Funding data When members register only a funder name as part of the information on who funded the work, we’ll try to match it to its identifier from the Funder Registry, to support better linking between funders and works. Timestamps We add date-times for first created and last updated to member-supplied timestamps. Count of references That’s right, we count all the references for each record that includes them and add the total to the metadata. After registration Once registered, we check, report on and update metadata in a few ways.\nLink checking We email each member a monthly Resolution Report with details of the number of failed and successful resolutions for their DOIs. If someone in the community reports a DOI that isn’t registered, we email the member a DOI Error Report. Citation counts and matches Citation counts for records of members participating in our Cited-by service are openly available in our REST API. The matching citations themselves are available to members, for their own records only. 
Title transfers Title, prefix and DOI transfers are common and require assistance from our team. MemberID It’s not uncommon for members to have more than one prefix. The memberID means users of the REST API can query for records associated with all of a member’s prefixes. Digital preservation We handle the infrequent but critical update of URLs that are necessary when titles are triggered for digital preservation. We also preserve the metadata itself, with both CLOCKSS and Portico. Of course, since records are often redeposited with updates (note, deposit fees are only charged once per record), some of these processes on our side are repeated as necessary.\nThis list isn’t exhaustive and other needs and opportunities will emerge. For example, we are looking at matching to add ROR IDs, as we do for funderIDs, and doing some research into how we might determine and assert subject classifications at the work-level. If you\u0026rsquo;re interested in more about this kind of work, you\u0026rsquo;ll want to read this recent post by my Labs colleague Dominika on matching grants to outputs.\nGet in touch if you have questions or for more information.\n", "headings": ["Before registration","Upon registration","After registration"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/bruna-erlandsson/", "title": "Bruna Erlandsson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/perspectives-bruna-erlandsson-on-scholarly-communications-in-brazil/", "title": "Perspectives: Bruna Erlandsson on scholarly communications in Brazil", "subtitle":"", "rank": 1, "lastmod": "2022-03-28", "lastmod_ts": 1648425600, "section": "Blog", "tags": [], "description": "\rJoin us for the first in our Perspectives blog series. In this series of blogs, we will be meeting different members of our diverse, global community at Crossref. We learn more about their lives, how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, challenges they face, and plans for the future.\n", "content": "\rJoin us for the first in our Perspectives blog series. In this series of blogs, we will be meeting different members of our diverse, global community at Crossref. We learn more about their lives, how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, challenges they face, and plans for the future.\nIn our first blog, we meet Bruna Erlandsson, Crossref Ambassador in Brazil, co-owner of Linceu Editorial, and client services manager at ABEC Brasil. Bruna has dedicated her career to scholarly publishing and has worked with Crossref for many years. 
We invite you to have a read and a listen below to meet Bruna!\n\u0026lt;a type=\u0026quot;button\u0026quot; style=\u0026quot;cursor:pointer;\u0026quot; class=\u0026quot;video-language-button\u0026quot; data-videoid=\u0026quot;1030565718\u0026quot; data-playerid=\u0026quot;video-player-perspectives-bruna-english\u0026quot;\u0026gt;Portuguese\u0026lt;/a\u0026gt; \u0026lt;a type=\u0026quot;button\u0026quot; style=\u0026quot;cursor:pointer;\u0026quot; class=\u0026quot;video-language-button\u0026quot; data-videoid=\u0026quot;1030565745\u0026quot; data-playerid=\u0026quot;video-player-perspectives-bruna-portuguese\u0026quot;\u0026gt;English\u0026lt;/a\u0026gt; Tell us a bit about your organization, your objectives, and your role\n​​Conte-nos um pouco sobre sua organização, seus objetivos e sua função\nI am a co-founder of the company Linceu Editorial, dedicated to publishing scientific and technological research in ethical, creative, and innovative ways. We strive to provide quality editorial services that meet standard industry requirements and best practices, increase visibility, attract readers and potential authors, and ensure their work is properly cited. My personal goal is to be recognized by the scientific community for providing excellent service to our clients.\nSou sócia proprietária da empresa Linceu Editorial, que se dedica à editoração de artigos científicos de inúmeras revistas, de forma ética, criativa e inovadora. Buscamos atribuir aos periódicos de nosso portfólio os requisitos de qualidade editorial alinhados às melhores práticas editoriais, de forma que aumentem sua visibilidade e atraiam leitores, potenciais autores e, não menos importante, que recebam citações em seus artigos. Meu objetivo pessoal é obter reconhecimento da comunidade científica por meio de uma prestação de serviço em nível de excelência.\nWhat is one thing that others should know about your country and its research activity?\nO que os outros deveriam saber sobre seu país e sua atividade de pesquisa?\nBrazil is the South American leader in publishing scientific articles in Open Access journals. However, it faces challenges due to the absence of a more comprehensive public policy to support scientific editors. As a result, most journals are produced by teaching and/or research institutions or scientific associations with volunteer editorial teams that, although lacking professional journal production skills, produce high-quality journals. Only a tiny percentage of Brazilian journals are published through commercial publishers.\nO Brasil é o líder sul-americano na publicação de artigos científicos, com destaque para as revistas em acesso aberto. No entanto, enfrenta desafios em função da ausência de uma política pública mais abrangente para apoio aos editores científicos. A maior parte dos periódicos é produzida por instituições de ensino/pesquisa ou Sociedades Científicas, tendo uma equipe editorial voluntária e carecendo de profissionalização em sua produção, embora, em muitos casos, apresentem boa qualidade. Apenas uma pequena porcentagem de periódicos brasileiros é publicada por meio de um publisher comercial.\nAre there trends in scholarly communications that are unique to your part of the world?\nExistem tendências nas comunicações acadêmicas que são únicas em sua parte do mundo?\nI wouldn\u0026rsquo;t say unique. However, adherence to Open Science practices, such as preprints and making research data available, is already part of the editorial culture. 
On the other hand, open peer review is not yet well accepted by everyone in the scientific community, and only a few journals adopt it. In addition, in some areas of research, such as Education and Social Science, researchers are very active - on forums, in discussion lists, and attending the same conferences - so there’s this feeling that ‘everyone knows everyone’, which can then lead to potential conflicts of interest and apprehensiveness around open peer review, particularly when it comes to publishing a negative review.\nEu não diria única, mas penso que, no Brasil, a adesão às práticas da ciência aberta, como publicação em preprint e disponibilização de dados de pesquisa, já fazem parte da cultura editorial. Por outro lado, a revisão aberta ainda não é bem aceita por toda comunidade científica, sendo poucos os periódicos que o adotam. Além disso, em algumas áreas de conhecimento com grande produção local, como por exemplo a Ciências Sociais e Educação, a interação entre membros da comunidade é muito grande, visto que são pesquisadores muito ativos em fóruns, listas de discussões e conferências da área, causando a sensação de que \u0026ldquo;todo mundo conhece todo mundo\u0026rdquo;, resultando em um possível conflito de interesse, visto que existe um grande receio em publicar um parecer aberto, especialmente se o caso for um parecer negativo.\nWhat about any political policies, challenges, or mandates that you have to consider in your work?\nE as políticas, desafios ou mandatos políticos que você deve considerar em seu trabalho?\nIn Latin America we have a large indexing database, Redalyc, and a digital library of Open Access journals, which has recently excluded a number of journals for charging APCs (Article Processing Charges), upon the understanding that this would go against their Diamond Open Access requirement.\nHowever, in Brazil - in general - the understanding of Open Access is not so limited. Charging APCs is in fact encouraged by many as a form of self-sustainability of the journal while still being Open Access.\nAs for challenges, one of the biggest is whether or not to publish in English. Although the number of Brazilian journals that publish exclusively in English or in both languages (Portuguese and English) is remarkably high, there is still a belief that local science is only of interest to the local public, and so some question whether there is value in publishing in English (or other languages). For example, if an author writes a research paper about a small riverside community in the countryside of Acre state in Brazil, they might ask why someone outside the country would be interested in reading that.\nAqui na América Latina, temos uma grande base indexadora, Redalyc, e biblioteca digital de periódicos de Acesso Aberto que, recentemente, excluíu da base um número considerável de periódicos que cobrassem qualquer tipo de taxa de publicação, por entender que isso iria contra os requisitos de seu modelo de Acesso Aberto Diamante (periódicos em acesso aberto livre de taxa de publicação).\nNo entanto, no Brasil, em geral, o entendimento é outro, a cobrança de taxas de processamento não descaracteriza o acesso aberto, sendo, na verdade, encorajado por muitos como uma forma de auto-sustentabilidade do periódico.\nJá em relação a desafios, acredito que um dos maiores é a questão de publicar ou não em inglês.
Embora seja notável o número de periódicos brasileiros que publicam exclusivamente em inglês ou ainda nos dois idiomas (português e inglês), existe ainda a crença de que a ciência local só teria interesse do público local, criando assim o questionamento se há ou não o valor em publicar em outro idioma. Por exemplo, se uma pesquisa estuda algo sobre uma comunidade ribeirinha no interior do estado do Acre, aqui no Brasil, é comum existir a dúvida se algo tão específico seria do interesse de alguém de fora do nosso país.\nHow would you describe the value of being part of the Crossref community; what impact has your participation had on your goals?\nComo você descreveria o valor de fazer parte da comunidade Crossref; que impacto teve sua participação em seus objetivos?\nI get immense value from being part of the Crossref community. Being a Crossref Ambassador brings greater recognition and legitimacy to my role working with editors and adds value to my company’s services as well. The title of Ambassador enhances trust in my opinions and presentations, and when providing support and clarification to those asking questions. However, it also comes with a great responsibility to do this well, which motivates me to always keep up to date with developments at Crossref. Through the Ambassador Program I have given several webinars for Crossref and the Associação Brasileira de Editores Científicos (ABEC Brasil), which provide much-needed information and support to Portuguese-speaking Crossref members as well as enhancing the visibility of my professional activities at Linceu Editorial.\nÉ um valor enorme fazer parte da comunidade Crossref! Ser Embaixadora do Crossref traz um reconhecimento entre os editores e agrega valor aos serviços de minha empresa. Esse título assegura confiabilidade em minhas opiniões, apresentações, e esclarecimentos de dúvidas, o que traz junto uma grande responsabilidade que me motiva a me manter sempre atualizada com tudo em relação ao Crossref. Através do Programa de Embaixadores eu ministrei diversos webinários para a Crossref e também para a Associação Brasileira de Editores Científicos (ABEC Brasil), fornecendo muitas informações necessárias para os membros da Crossref que falam português, e também isso tudo acaba por retornar em visibilidade para as minhas atividades profissionais na Linceu Editorial.\nFor you, what would be the most important thing Crossref could change (do more of/do better in)?\nPara você, qual seria a coisa mais importante que o Crossref poderia mudar (fazer mais/fazer melhor)?\nI think there is still a need for more multilingual training both online and face-to-face, which has been particularly lacking during the pandemic, to provide more information on Crossref services beyond Content Registration. For example, Similarity Check is a service that people still have a lot of questions about (such as ‘what is the magic similarity percentage score to identify plagiarism?’ Answer - there isn’t one!). Crossmark is another service where I believe people could benefit from more training on its importance in the publication process, not only in cases of retraction but also in guaranteeing that the article is up-to-date and trustworthy.
In Brazil many people use Open Journal Systems (OJS), and so the development of Crossref service-specific plugins and training on how to use them is really useful!\nAcho que ainda há necessidade de mais treinamentos multilíngues, tanto online quanto presencial – o que tem sido particularmente escasso durante a pandemia – para fornecer mais informações sobre os serviços do Crossref além do Registro de Conteúdo. Por exemplo, o Similarity Check é um serviço sobre o qual as pessoas ainda têm muitas dúvidas (como \u0026lsquo;qual é a porcentagem de similaridade aceitável para identificar plágio?\u0026rsquo; Resposta - não existe!). O Crossmark é outro serviço onde acredito que as pessoas poderiam se beneficiar de mais treinamento sobre sua importância no processo de publicação, não apenas em casos de retratação, mas também para garantir que o artigo esteja sempre atualizado e confiável. No Brasil muitas pessoas usam o Open Journal Systems (OJS) e por isso o desenvolvimento de plugins específicos do serviço Crossref e treinamento sobre como usá-los seriam muito úteis!\nWhich other organizations do you collaborate with or are pivotal to your work in open science?\nCom quais outras organizações você colabora ou é fundamental para o seu trabalho em ciência aberta?\nI contribute to ABEC Brasil in a variety of ways, including speaking on short courses about Crossref, designing content for lectures as part of an online program called ABEC Educação (which will be launched soon), and acting as a volunteer consultant to answer a variety of questions from editors regarding content registration at Crossref.\nContribuo com a ABEC Brasil, participando tanto como ministrante de minicursos sobre ferramentas Crossref quanto como conteudista de um curso no Programa EaD ABEC Educação (que será lançado em breve), além de como consultora voluntária para atender a diversas dúvidas de editores em relação a depósito de conteúdo.\nWhat are the post-pandemic challenges you are facing and how are you adapting to them?\nQuais são os desafios pós-pandemia que você está enfrentando e como você está se adaptando a eles?\nConsidering the current situation in Brazil, I don’t think I would consider us having reached ‘post-pandemic’ just yet. Although vaccination is taking place successfully, there are still many uncertainties and fears. A good example of this is Crossref LIVE Brazil, which was canceled at the start of the pandemic; at the moment we still don’t know when we will be able to reschedule it. It still feels too risky to bring a number of speakers from abroad to Brazil and too soon to hold such a large in-person event.\nHowever, if I had to highlight one challenge I\u0026rsquo;ve been facing, it would be something more personal than work-related. Beyond a shadow of a doubt, it would be the lack of human contact! It has been really hard to get used to not gathering together with family and friends and not being able to travel, meet new people, and experience new cultures. To deal with it, I spend my free time planning the places I will go to and the people I will visit as soon as this whole situation is over!\nPara ser honesta, considerando a realidade atual no Brasil, eu ainda não considero o momento atual \u0026ldquo;pós-pandemia\u0026rdquo;. Embora a vacinação esteja ocorrendo com sucesso, ainda existem muitas incertezas e medos.
Um exemplo bem claro é o Crossref Live in Brazil, que foi cancelado assim que a pandemia foi \u0026ldquo;anunciada\u0026rdquo; e, até hoje, não sabemos quando ocorrerá, pois ainda soa muito arriscado trazer palestrantes de fora para o Brasil e também se encontrar com diversas pessoas em um evento presencial.\nNo entanto, se eu tivesse que destacar um desafio que tenho enfrentado, seria algo mais pessoal e não relacionado ao trabalho. E, sem sombras de dúvidas, seria a falta de contato humano! Está sendo realmente complicado se acostumar em não encontrar amigos e familiares, e também não poder viajar e conhecer novos lugares, pessoas e culturas – o jeito que encontrei para lidar com isso é gastar meu tempo livre planejando todos os lugares que irei e todas as pessoas que visitarei assim que essa situação toda passar.\nWhat are your plans for the future?\nQuais são seus planos para o futuro?\nMy plans for the future include continuously learning more and more about scholarly publishing including the various services that Crossref provides. I want to be able to help publishers implement valuable tools into their workflows such as Similarity Check and Crossmark, and contribute to greater scientific dissemination of Brazilian research so that Brazilian journals can get the global recognition, visibility and value they deserve.\nMeus planos para o futuro incluem aprender cada vez mais e mais sobre publicação científica, incluindo os vários serviços que o Crossref oferece. Quero poder ajudar os editores a implementar ferramentas valiosas em seus fluxos de trabalho, como Similarity Check e Crossmark, e contribuir para uma maior divulgação científica das pesquisas brasileiras para que os periódicos brasileiros possam obter o reconhecimento global, visibilidade e valor que merecem.\nThank you, Bruna!\nObrigado, Bruna!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/data-center/", "title": "Data Center", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/outage-of-march-24-2022/", "title": "Outage of March 24, 2022", "subtitle":"", "rank": 1, "lastmod": "2022-03-24", "lastmod_ts": 1648080000, "section": "Blog", "tags": [], "description": "So here I am, apologizing again. Have I mentioned that I hate computers?\nWe had a large data center outage. It lasted 17 hours. It meant that pretty much all Crossref services were unavailable - our main website, our content registration system, our reports, our APIs. 17 hours was a long time for us - but it was also an inconvenient time for numerous members, service providers, integrators, and users. We apologise for this.", "content": "So here I am, apologizing again. Have I mentioned that I hate computers?\nWe had a large data center outage. It lasted 17 hours. It meant that pretty much all Crossref services were unavailable - our main website, our content registration system, our reports, our APIs. 17 hours was a long time for us - but it was also an inconvenient time for numerous members, service providers, integrators, and users. We apologise for this.\nLike the outage last October, the issue was related to the data center that we are trying to leave. 
However, unlike last time, our single nearby network admin wasn\u0026rsquo;t in surgery at the time. Tim was alerted in the early hours of his morning and was able to get up and immediately investigate.\nDespite having both secondary and tertiary backup connections, neither activated appropriately.\nThe problem was incomplete BGP (Border Gateway Protocol) settings on the network provider’s side of our primary connection. We never noticed this because our backup connection had the correct and complete BGP settings. But our backup circuit went down (we don’t know why yet), and when the router with complete settings went down, only the router with the incomplete settings was available, and so everything went down.\nWe hadn’t yet fully configured the tertiary connection to cut over automatically. This meant cutting over to the tertiary during the outage would have required manual and potentially error-prone reconfiguration. Not something we wanted to do in a hurry with a sleep-deprived network admin.\nIt’s not an excuse at all. But we are currently down two people in our infrastructure group. One of our infrastructure staff recently left for a startup, and we are already hiring for a new, third position. In short, our one long-suffering sysadmin had to field this all by himself. But hey - we are hiring a Head of Infrastructure, and if you are interested you can now see the work you\u0026rsquo;d have cut out for you!\nSo things are back up and we’ve resolved the incident, but we are carefully and cautiously monitoring. We will further analyze what went wrong and post an update when we have a clearer picture.\nI apologize for the downstream pain this outage will have inevitably caused. We realize that many people will now be scrambling to clean things up after this lengthy outage.\nMore when I have it… but for now I\u0026rsquo;ll mostly be curled up in a ball.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/post-mortem/", "title": "Post Mortem", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/announcing-the-ror-sustaining-supporters-program/", "title": "Announcing the ROR Sustaining Supporters program", "subtitle":"", "rank": 1, "lastmod": "2022-03-23", "lastmod_ts": 1647993600, "section": "Blog", "tags": [], "description": "In collaboration with California Digital Library and DataCite, Crossref guides the operations of the Research Organization Registry (ROR). ROR is community-driven and has an independent sustainability plan involving grants, donations, and in-kind support from our staff.\nROR is a vital component of the Research Nexus, our vision of a fully connected open research ecosystem. It helps people identify, connect, and analyze the affiliations of those contributing to, producing, and publishing all kinds of research objects.", "content": "In collaboration with California Digital Library and DataCite, Crossref guides the operations of the Research Organization Registry (ROR). ROR is community-driven and has an independent sustainability plan involving grants, donations, and in-kind support from our staff.\nROR is a vital component of the Research Nexus, our vision of a fully connected open research ecosystem.
It helps people identify, connect, and analyze the affiliations of those contributing to, producing, and publishing all kinds of research objects. Crossref added support for ROR to its schema and REST API in 2021 and we are asking Crossref members to use ROR IDs for author affiliations in the metadata they deposit with Crossref. But this post is about how the Crossref community can support ROR in another way.\nAll three lead organizations\u0026mdash;as well as the ROR initiative\u0026mdash;have publicly committed to the POSI Principles and we know that our diverse and global community is increasingly interested in showing its support for open scholarly infrastructure too. Now there\u0026rsquo;s an opportunity to show that support; the following blog by Maria Gould, cross-posted from the ROR blog, explains how.\nROR begins a new round of community fundraising Since ROR launched in 2019, we have been charting a path to sustainability that leverages our broad community network and diversifies our funding sources. ROR is currently funded through a combination of in-kind support from its three operating organizations, project-based grant funds, and financial contributions from community members.\nWhile ROR aims to minimize overhead and contain costs, it still requires resources to build and maintain the registry\u0026rsquo;s infrastructure, especially as adoption continues to grow. ROR has been working to establish independent revenue streams that complement ROR\u0026rsquo;s in-kind support, avoid dependence on grant funds, and ensure the registry data remains openly available.\nThis year, ROR is initiating a new round of community fundraising. Building on the community fundraising campaign we ran during 2019-2021, we are renewing a call for organizations to commit to supporting ROR financially. We are launching a Sustaining Supporters program that opens up new ways for organizations to participate in the collective funding of ROR.\nROR Sustaining Supporters program With the Sustaining Supporters program, organizations are encouraged to support ROR\u0026rsquo;s operating expenses on a recurring annual basis. Any organization that signs up to support ROR through the end of 2022 will be recognized as a Founding Supporter and receive a supporter badge that can be displayed on their website.\nWe want to make the process of contributing to ROR as easy as possible. To ensure this is the case, organizations can support ROR at any amount that works for their budget and capacity. Also, to simplify the invoicing process, organizations that are already members of Crossref or DataCite can choose to receive an invoice directly from Crossref and DataCite for their ROR contributions. However, if organizations prefer, they can also be invoiced directly from ROR.\nWhy support ROR ROR aims to be an example of the power and potential of community-funded open infrastructure. ROR is committed to providing open, stakeholder-governed infrastructure for research organization identifiers and associated metadata. Implementation of ROR IDs in scholarly infrastructure and metadata enables more efficient discovery and tracking of research outputs across institutions and funding bodies.\nThe Sustaining Supporters program is the next step in ROR\u0026rsquo;s sustainability journey. ROR is continuing to explore future potential paid service tiers designed for those organizations and companies that rely heavily on our infrastructure, which would complement the supporters program. 
However, rest assured that any paid services will not impact the availability of ROR data or our commitment to supporting our community, in line with our commitment to the Principles of Open Scholarly Infrastructure (POSI).\nWe\u0026rsquo;ve all seen key infrastructure components disappear, be enclosed, or get acquired. We are also realistic about how much effort and cost is involved in sustaining key components of open infrastructure that the scholarly community depends on. And we are committed to doing this right. That means not just sustaining core infrastructures, but investing in them so that they can evolve alongside community needs.\nROR is a free resource for the research community. However, this shared infrastructure does require a collective funding approach that can sustain it as a common good.\nJoin us! This is an exciting moment to be part of ROR\u0026rsquo;s growth. Let\u0026rsquo;s fund open infrastructure together!\nIf your organization is interested in supporting ROR and helping to fund open, community-led infrastructure, sign up here.\n", "headings": ["ROR begins a new round of community fundraising","ROR Sustaining Supporters program","Why support ROR","Join us!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/", "title": "Operations & sustainability", "subtitle":"", "rank": 1, "lastmod": "2022-03-23", "lastmod_ts": 1647993600, "section": "Operations & sustainability", "tags": [], "description": "We are a not-for-profit organization, registered in the United States of America as a 501(c)6 (a \u0026ldquo;trade association\u0026rdquo;). We are sustained by annual membership fees that are tiered according to organizational size. We also charge a Content Registration fee for each item of content whose metadata is registered with us. We also provide both free and paid-for metadata retrieval options for access to this metadata.\nMembers can and do participate in many ways and contribute to our global network of linked scholarly content.", "content": "We are a not-for-profit organization, registered in the United States of America as a 501(c)6 (a \u0026ldquo;trade association\u0026rdquo;). We are sustained by annual membership fees that are tiered according to organizational size. We also charge a Content Registration fee for each item of content whose metadata is registered with us. We also provide both free and paid-for metadata retrieval options for access to this metadata.\nMembers can and do participate in many ways and contribute to our global network of linked scholarly content. We are governed by sixteen of our members that form our board serving three-year terms.\nCrossref adopted the Principles of Open Scholarly Infrastructure and this section of our website is part of our commitment to transparent operations. From here you can find information about our financials, our membership operations, our policies, and eventually our staff handbooks and other formerly-internal documents.\nSustainability at Crossref Revenue from Crossref\u0026rsquo;s annual dues and services sustains operations. The relevant POSI goals we strive to meet are:\nTime-limited funds are used only for time-limited activities – day-to-day operations should be supported by day-to-day sustainable revenue sources. Goal to generate surplus – it is not enough to merely survive. Producing a small surplus allows us to respond nimbly to opportunities or weather economic downtimes. 
Goal to create a contingency fund to support operations for 12 months – generating an operating surplus also allows us to create a separate fund that could support operations for a year. Mission-consistent revenue generation – any revenue we generate must be mission-aligned and not run counter to the aims of the organization. Revenue based on services, not data – data related to the running of the research enterprise should be community property. Appropriate revenue sources might include value-added services, consulting, API Service Level Agreements, or membership fees. Transparent operations – achieving trust in the selection of representatives to governance groups will be best achieved through transparent processes and operations in general (within the constraints of privacy laws). Fee principles In July 2019 our board voted to approve the following principles to inform all future fee modeling and decisions.\nCrossref’s fees should:\nEnable us to fulfil our mission to make research outputs easy to find, cite, link, assess, and reuse Encourage best practice and discourage bad practice, as our policies and obligations advise Be non-discriminatory, encouraging broad participation from organizations of all sizes and types Support the long-term persistence of our services and infrastructure, so long as relevant and valuable to the community Deliver value to our members Be transparent and openly available, recommended by the Membership \u0026amp; Fees Committee and approved by the board Be the same for all, not discounted or negotiated individually, to ensure fairness Be independent of our members’ own business models Not always be necessary (e.g., new record types are not usually separate services) Be based on providing services not metadata We are accredited in handling your data Crossref was awarded the SOC 2® accreditation most recently in 2022 after an independent assessment of our controls and procedures by the American Institute of CPA’s (AICPA).\nThe SOC 2® accreditation is awarded to service organizations that have passed standard trust services criteria relating to the security, availability, and processing integrity of systems used to process users’ data and the confidentiality and privacy of the information processed by these systems.\nThe AICPA’s assessment also reviewed our vendor management programs, internal corporate governance and risk management processes, and regulatory oversight.\nFind out more about the SOC accreditation structure\n", "headings": ["Sustainability at Crossref","Fee principles","We are accredited in handling your data"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/membership-operations/revocation/", "title": "Suspending or revoking membership", "subtitle":"", "rank": 4, "lastmod": "2022-03-23", "lastmod_ts": 1647993600, "section": "Operations & sustainability", "tags": [], "description": "Once a member joins Crossref, we expect them to remain a member for the long term. As an organization that’s obsessed with persistence, we really try to avoid suspending or revoking membership. However, there are three key reasons why we may be forced to do this:\nA member leaves invoices unpaid for a long time Legal sanctions or judgments against the Member or its home country A member contravenes the membership terms.", "content": "Once a member joins Crossref, we expect them to remain a member for the long term. As an organization that’s obsessed with persistence, we really try to avoid suspending or revoking membership. 
However, there are three key reasons why we may be forced to do this:\nA member leaves invoices unpaid for a long time Legal sanctions or judgments against the Member or its home country A member contravenes the membership terms. Process for revoking membership due to unpaid invoices Revoking membership due to unpaid fees is an absolute last step for us, but as a not-for-profit membership organization we have a duty to remain sustainable and manage our finances in a responsible way. Financial sustainability means we can keep the organization afloat and keep our dedicated service to scholarly communications running.\nAfter each invoice is sent, we send a series of automated reminders to the member’s billing contact (and secondary billing contact if available) to make sure they’re clear on when their invoice payment is due. If the invoice remains unpaid after the due date, we send an email to the Billing, Secondary Billing, Primary and Technical contacts on the account to let them know that their account is at risk of suspension. This gives the member time to contact us if there are any issues with the invoice or if they didn’t realise that there was an outstanding invoice.\nIf the invoice remains unpaid, we suspend the member\u0026rsquo;s access to register or update content, but they remain a member. Once a year we contact all members who were suspended due to unpaid invoices to let them know that if the invoices remain unpaid for a further two weeks, their membership of Crossref will be revoked.\nOnce an organization’s membership has been revoked, they would need to re-apply if they wanted to become a member again in the future. We reserve the right to decline the application. If accepted, the applicant would need to pay all outstanding invoices before re-joining.\nProcess for revoking membership due to Legal sanctions or judgments against the Member or its home country Very occasionally there may be a legal requirement or government order requiring us to suspend or revoke membership, most commonly as a result of economic sanctions. In this case, we are unable to work with the member from a set date and will suspend or revoke membership as appropriate. We will provide notification to any impacted members unless notification is prohibited by the legal requirement or order.\nYou can find out more about current sanctions impacting Crossref members here.\nProcess for revoking membership due to contravention of the membership terms There are limited occasions where we are forced to revoke membership of Crossref due to a contravention of the membership terms. This may be due to:\nMisrepresentation in the original membership application Fraudulent use of identifiers or metadata Contravening the code of conduct Any other basis set forth in the governing documents. Step One We contact the member to confirm the issue, investigate further and try to work with the member to rectify the problem if possible.\nStep Two If the issue cannot be resolved in a timely manner, we suspend the member\u0026rsquo;s access to register new DOIs or update their existing DOIs. We continue to work with the member to try to rectify the problem.\nStep Three If the issue still cannot be resolved, we move to revoke membership. The Executive Committee of Crossref’s board reviews and ratifies the decision. 
The member is able to appeal this decision, and the Executive Committee of Crossref’s board will review and respond to an appeal.\nStep Four We add the member information to the record of revoked membership.\n", "headings": ["Process for revoking membership due to unpaid invoices","Process for revoking membership due to Legal sanctions or judgments against the Member or its home country","Process for revoking membership due to contravention of the membership terms"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/follow-the-money-or-how-to-link-grants-to-research-outputs/", "title": "Follow the money, or how to link grants to research outputs", "subtitle":"", "rank": 1, "lastmod": "2022-03-22", "lastmod_ts": 1647907200, "section": "Blog", "tags": [], "description": "The ecosystem of scholarly metadata is filled with relationships between items of various types: a person authored a paper, a paper cites a book, a funder funded research. Those relationships are absolutely essential: an item without them is missing the most basic context about its structure, origin, and impact. No wonder that finding and exposing such relationships is considered very important by virtually all parties involved. Probably the most famous instance of this problem is finding citation links between research outputs. Lately, another instance has been drawing more and more attention: linking research outputs with grants used as their funding source. How can this be done and how many such links can we observe?\n", "content": "The ecosystem of scholarly metadata is filled with relationships between items of various types: a person authored a paper, a paper cites a book, a funder funded research. Those relationships are absolutely essential: an item without them is missing the most basic context about its structure, origin, and impact. No wonder that finding and exposing such relationships is considered very important by virtually all parties involved. Probably the most famous instance of this problem is finding citation links between research outputs. Lately, another instance has been drawing more and more attention: linking research outputs with grants used as their funding source. How can this be done and how many such links can we observe?\nTL;DR We looked for links between research outputs and grants registered with Crossref. Grant DOIs alone are not enough for linking research outputs with grants, because the funding information in research outputs typically does not contain grant DOIs (yet). Award numbers alone are also not enough because they are not globally unique. We used either grant DOIs (if available) or the combination of award number and funder information to match grants to research outputs. In total, we found 20,834 links between research outputs and registered grants, involving 17,082 research outputs and 3,858 grants (10% of all registered grants)1. Erroneous and incomplete metadata, especially involving award numbers, is the main factor that prevents linking research outputs to grants. Introduction The ecosystem of scholarly metadata is filled with relationships between items of various types: a person authored a paper, an author works at a university, a paper cites a book, a book contains a chapter, a funder funded research. 
Those relationships are absolutely essential: an item without them is missing the most basic context about its structure, origin, and impact.\nNo wonder that finding and exposing relationships between items in the scientific ecosystem is considered very important by virtually all parties involved. Probably the most famous instance of this problem is finding citation links between research outputs. Another, relatively new, example is linking research outputs with grants used as their funding source.\nAt Crossref, for some time now we have been seeing a steady growth of funder membership and grant registration. We are aware that the possibility of finding relationships between grants and research outputs is a big reason why funders are registering grants with us in the first place. Being able to see which research outputs are being supported by which grants helps reduce the reporting burden on researchers, funders, and institutions alike, especially now with the addition of ROR IDs to help complete the picture. Exposing relationships between research outputs and grants also increases the transparency of funding sources of the research, making it easier to assess and trust scientific findings.\nBut how can we find those relationships and how many of them can we already observe? Thankfully our REST API, recently equipped with the grant metadata, can help us answer these questions.\nThe perfect scenario\nImagine a world where the metadata of any scientific output states all relationships with other items existing in the scientific ecosystem, and those related items are always referred to by their persistent identifiers, allowing all this information to be accessed in a fully machine-readable way\u0026hellip; Lovely, right?\nIn the case of citations, in such a perfect world every bibliographic reference has a DOI of the cited item. And in the case of funding information, a scientific paper contains grant DOIs, stating the funded-by relationships between the paper and the grants.\nBut, as the last two years have painfully taught us all, life is not all rainbows and unicorns.\nThe reality kicks in\nWe know that around 71% of bibliographic references are deposited with Crossref without a DOI of the cited item. This means that if we want to establish citation links between items, we need to match the bibliographic references using the provided metadata, which is not a trivial task and can potentially introduce errors.\nAnd the situation with the funding information and grant DOIs is even worse.\nProblem #1: our schema does not allow the publishers to attach grant DOIs to research outputs\nThis issue is 100% on us. Because grant DOIs are relatively new, our deposit schema does not yet allow publishers to specify the grant DOI in the funding information of a research output, even if they wanted to. We are working on changing this.\nInterestingly, it looks like persistent identifiers always find a way. Among the more than 7.4 million research outputs with funding information, we noticed 6 cases where a grant DOI was provided as an award number. For example, in 10.1093/nar/gkaa994 we have the following:\nfunder: [ { name: \u0026#34;Wellcome Trust\u0026#34;, award: [\u0026#34;10.35802/108758\u0026#34;], doi-asserted-by: \u0026#34;publisher\u0026#34;, DOI: \u0026#34;10.13039/100010269\u0026#34; }, ... ]\nThis may not be 100% correct from the schema perspective, but it is very useful when one is interested in linking grants to research outputs!\nBut those cases are extremely rare outliers. 
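As a purely illustrative aside, spotting such outliers only takes a few lines against the public REST API. The sketch below (Python, assuming the requests library; the DOI-shaped-string check is our own rough heuristic and not part of the analysis code linked in the footnotes) fetches one work and flags award values that look like DOIs; run against 10.1093/nar/gkaa994 it should print the Wellcome Trust entry shown above.

import re
import requests

# Illustrative sketch: fetch one work's metadata from the public Crossref REST API
# and flag award strings that look like DOIs. The regex is a rough heuristic.
WORK_DOI = "10.1093/nar/gkaa994"
resp = requests.get(f"https://api.crossref.org/works/{WORK_DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]
doi_like = re.compile(r"^10\.\d{4,9}/\S+$")
for funder in work.get("funder", []):
    for award in funder.get("award", []):
        if doi_like.match(award):
            print(funder.get("name"), "->", award)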
For the vast majority of the outputs, grant DOIs are not present in the metadata. This means that, just like in the case of bibliographic references, we have to use the metadata to match funding information to grants.\nFunding information is typically given as a pair: award number, funder information. Grants contain similar metadata. One might be tempted to use only the award number for linking, as in some cases it can look like a grant identifier.\nLet\u0026rsquo;s consider an example. We want to find all papers funded by grant 10.37807/gbmf7622. The award number is GBMF7622. A simple approach might be to search for items with this award number in Crossref\u0026rsquo;s REST API, which returns 12 results2. However, one of the resulting items is the grant itself3. So excluding that, it seems like there are 12-1=11 research outputs funded by this grant.\nSimple and easy, right? Well, think again.\nProblem #2: award numbers are not unique\nLet\u0026rsquo;s look at another example grant: 10.25585/60000600. Its award number is 2817 and the funder is the US Department of Energy.\nWhen we search for this award we get 10 results4. Like before, one of them is our grant. After examining the remaining 9 we will see that:\n3 items have been funded by the Joint Genome Institute, which according to the Funder Registry has been incorporated into Basic Energy Sciences, which is a descendant of the US Department of Energy\n2 items have been funded by International Rett Syndrome Foundation from the US\n2 items have been funded by Agencia Nacional de Promoción Científica y Tecnológica from Argentina\n1 item has been funded by Arak University of Medical Sciences from Iran\n1 item has been funded by Shahrekord University, also from Iran\nSo among only 9 items mentioning the same award number we have in fact 5 different grants. Our input grant should probably be linked only to the three items mentioning the Joint Genome Institute. The main problem illustrated here is that award numbers are not globally unique, and thus should not be treated like identifiers.\nIndeed, within the 38,326 grants registered so far, we have 37,608 distinct award numbers, and among those, there are 716 award numbers, each of which appears in multiple grants. This issue comes in two flavours: conflicts between and within funders.\nBetween-funder award number conflicts\nA conflict between funders is when more than one funder uses the same award number for one of their grants. This is expected - award numbers are assigned by funders internally and are not designed to be globally unique identifiers.\nOut of the 716 award numbers that appear in multiple grants, 12 appear in grants of different funders. For example, there are two grants with the award number 105626:\nSystemic MFG-E8 Blockade as Melanoma Therapy, funded by the Melanoma Research Alliance\nInstitutional Strategic Support Fund Phase2 FY2014/16, funded by the Wellcome Trust\nBecause of those conflicts, we cannot simply rely on the award numbers for linking grants to research outputs. Instead, we have to use more information to be sure that the links are correctly established.
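You can see this ambiguity for yourself by grouping the works returned for an award number by the funder they credit. The sketch below is purely illustrative (Python with the requests library, using the same award.number filter as in the example above and a rough case-insensitive comparison of award strings); it is not the analysis code itself.

import requests

# Illustrative sketch: list the distinct funders credited by works that mention a
# given award number (2817, from the example above), to show how ambiguous a bare
# award number can be.
AWARD = "2817"
resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": f"award.number:{AWARD}", "rows": 100},
    timeout=30,
)
resp.raise_for_status()
by_funder = {}
for item in resp.json()["message"]["items"]:
    for funder in item.get("funder", []):
        awards = [a.lower() for a in funder.get("award", [])]
        if AWARD.lower() in awards:
            key = funder.get("DOI") or funder.get("name")
            by_funder.setdefault(key, []).append(item["DOI"])
for key, dois in sorted(by_funder.items(), key=lambda kv: -len(kv[1])):
    print(key, "-", len(dois), "work(s)")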
Within-funder award number conflicts\nTo our great surprise, it turns out that the majority of the award number conflicts happen not between different funders, but within the grants of a single funder. Out of the 716 award numbers that appear in multiple grants, 704 appear in multiple grants of a single funder only. Such situations are not expected and could indicate an error or some other systematic issue with the data.\nInterestingly, out of those 704 award numbers, 700 are associated with the US Department of Energy. We\u0026rsquo;ve followed up with them in order to clarify or resolve this. The US Department of Energy pointed out a fundamental issue with the data model: currently a grant deposited with Crossref has to have at least one funder DOI, and no other way of identifying the associated organisation is allowed. At the same time, some of the facilities that should appear in their grants\u0026rsquo; metadata are not funders at all and thus cannot be identified by a funder DOI. In the future, they plan to identify those facilities in their grant metadata by providing ROR IDs.\nBecause of within-funder award number conflicts, in some cases it might be difficult to distinguish between two grants with the same award number and funder. A solution might be to use additional information or simply not accept any links if a research output cannot be reliably linked to one grant only.\nOur linking approach\nBased on all those observations, we adopted the following approach. We iterated over all registered grants and, for each, performed the following steps:\nWe used the award.number:\u0026lt;grant DOI\u0026gt; filter in the REST API to find all items listing the given grant\u0026rsquo;s DOI as the award number. Because this is based on the grant\u0026rsquo;s persistent identifier, we recorded those links without any further verification.\nWe used the award.number:\u0026lt;grant award number\u0026gt; filter in the REST API to find all items listing the grant\u0026rsquo;s award number in the funding information. Each resulting item was then verified by comparing the funder information in the item to the funder information in the grant. We recorded the link between the grant and the candidate item only if the verification succeeded.\nIn the final step, we examined all recorded links to make sure that each pair (research output, award number) is linked to at most one grant. Links violating this rule were flagged as not reliable.
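Put together, the loop looks roughly like the sketch below. This is purely illustrative and not the actual analysis code (which is linked in the footnotes): it is Python with the requests library, the grant record fields are hypothetical stand-ins, and verify_funder is a placeholder for the funder checks described next.

import requests

API = "https://api.crossref.org/works"

def works_with_award(value):
    # Candidate search: works whose funding information mentions `value` as an
    # award number (the award.number filter described above).
    resp = requests.get(API, params={"filter": f"award.number:{value}", "rows": 1000}, timeout=60)
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def verify_funder(item, grant):
    # Placeholder for the verification rules below (same funder DOI,
    # replaced-by / ancestor-descendant relationships, or matching names).
    raise NotImplementedError

def link_grant(grant):
    # `grant` is a hypothetical dict holding the registered grant's DOI,
    # award number, and funder information.
    links = []
    # Step 1: items citing the grant DOI itself as the award number - accepted as-is.
    for item in works_with_award(grant["doi"]):
        links.append((item["DOI"], grant["doi"], "grant-doi-as-award"))
    # Step 2: items mentioning the grant's award number - kept only if the funder matches.
    for item in works_with_award(grant["award_number"]):
        if item.get("type") != "grant" and verify_funder(item, grant):
            links.append((item["DOI"], grant["doi"], "award-plus-funder"))
    return links

# A final pass (not shown) would flag as unreliable any (research output, award number)
# pair that ends up linked to more than one grant.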
We used different techniques to verify the funder information between the research output (item) and the grant, depending on what information is available. Grants always have the funder DOI. The item, however, can have the funder DOI, the funder name, or both.\nIf the funder DOI was available on both sides, the following rules were used for the funder verification (ordered by decreasing confidence):\nBoth the item and the grant contain the same funder DOI, for example, 10.35802/089928 and 10.1242/jcs.196758\nThe funder in the item replaced or was replaced by the funder in the grant (according to the Funder Registry), for example, 10.35802/104848 and 10.1136/medethics-2020-106821\nThe funder in the item is an ancestor or a descendant of the funder in the grant (according to the Funder Registry), for example, 10.46936/sthm.proj.2010.40084/60004575 and 10.1016/j.heliyon.2018.e00629\nIf the funder DOI was not available in the item, the following rules were used for the funder verification (ordered by decreasing confidence):\nThe funder name in the item is the same (ignoring the case) as the funder name in the grant, for example, 10.35802/110166 and 10.12688/wellcomeopenres.14645.4\nThe funder name in the item is the same (ignoring the case) as the name of the funder that replaced/was replaced by the funder in the grant, for example, 10.35802/206194 and 10.1172/jci.insight.96381\nThe funder name in the item is the same (ignoring the case) as the name of the ancestor/descendant of the funder in the grant, for example, 10.46936/cpbl.proj.2001.2191/60002922 and 10.1109/tkde.2016.2628180\nNote that this is in fact very similar to our reference matching approach. In both cases, first we search for candidate items, and then verify the candidates by comparing the metadata. The actual metadata used for the verification varies, because different information is typically given in the bibliographic reference and the funding information.\nWhat we found\nThis procedure applied to the entire Crossref dataset resulted in 20,846 links between research outputs and grants5. Of those, 12 were flagged as unreliable, because they involved more than one grant linked to the same item and award number. The rest of this section focuses on the remaining 20,834 links.\nWithin the 20,834 links, we have 17,082 research outputs and 3,858 (10.1%) grants.\nHere is the breakdown into the verification approaches used:\nVerification | #links | %links\nThe item contains grant DOI - no verification | 6 | \u0026lt;0.1%\nFunder DOIs are the same | 8,364 | 40.1%\nFunder DOIs are related with a replaced/was replaced by relationship | 3,704 | 17.8%\nFunder DOIs are related with an ancestor/descendant relationship | 7,718 | 37.0%\nFunder names are the same | 591 | 2.8%\nThe name of the funder in the item is the same as the name of the funder that replaced/was replaced by the funder in the grant | 364 | 1.7%\nThe name of the funder in the item is the same as the name of the ancestor or descendant of the funder in the grant | 87 | 0.4%\nIn most cases, just using the funder DOIs for the verification was enough. Verifying by the funder name added 1,042 links, which is 5% of all links.\nAnd here are statistics for individual funders. Only funders with at least 10 deposited grants are listed in the table. The table shows the number of detected links, the number of distinct research outputs linked, the total number of outputs mentioning the given funder DOI, and the number of grants.\nFunder | #links | #linked research outputs | #total outputs with funder DOI | #grants\nJapan Science and Technology Agency | 11,922 | 10,411 | 25,779 | 9,383\nWellcome Trust (including both funder DOIs 10.13039/100004440 and 10.13039/100010269) | 8,001 | 6,246 | 49,492 | 17,534\nJames S. McDonnell Foundation | 463 | 457 | 2,534 | 557\nMelanoma Research Alliance | 152 | 150 | 894 | 392\nAsia-Pacific Network for Global Change Research | 100 | 100 | 838 | 539\nALS Association | 84 | 78 | 909 | 434\nU.S. Department of Energy | 56 | 52 | 97,482 | 8,462\nGordon and Betty Moore Foundation | 51 | 50 | 5,928 | 94\nAmerican Cancer Society | 3 | 3 | 7,276 | 107\nChildren\u0026rsquo;s Tumor Foundation | 1 | 1 | 759 | 630\nAmerican Parkinson Disease Association | 0 | 0 | 181 | 12\nNeurofibromatosis Therapeutic Acceleration Program | 0 | 0 | 101 | 68\nInternational Anesthesia Research Society | 0 | 0 | 94 | 34\nAustralian National Data Service | 0 | 0 | 92 | 67\nNote that the fourth column reports the total number of outputs registered with Crossref and mentioning the given funder DOI, including grants, journal papers and all other record types.\nIt is interesting to compare the number of linked research outputs for a given funder with the total number of research outputs mentioning a given funder DOI. In general, for a funder that registers grants, the more research outputs mentioning this funder, the more links we should be able to find.\nAnd for some funders (Japan Science and Technology Agency, Melanoma Research Alliance, Asia-Pacific Network for Global Change Research, Wellcome Trust, James S. McDonnell Foundation), the number of linked outputs is indeed high, as compared with how many outputs mention the funder in the first place. This suggests our procedure was quite successful in linking outputs funded by these funders, meaning that in general the metadata in their grants and the funding information in the research outputs match.\nOn the other hand, we have a few funders for which we managed to link only a very small fraction of research outputs. There are several potential explanations here. A simple one is that not all relevant grants have been deposited yet. For example, a funder might be registering new grants only, whereas many research outputs mention older, not yet registered grants. It is also possible that there are systematic differences in how the publishers deposit the funding information in articles and other outputs, and how it is given in grants. Such differences might prevent us from establishing links, contributing to the overall low percentage of linked grants.\nThe importance of being precise\nHere are some examples of existing links that should\u0026rsquo;ve been found, but were not.\nThe award number in grant 10.48105/pc.gr.93156 is CTF-2020-01-004. This article: 10.3390/ijms22094716 mentions award number 2020‐01‐004 and the same funder (Children\u0026rsquo;s Tumor Foundation). It is very probable that this is the same grant, but our procedure expects exactly the same award number, and so the two were not linked.\nPaper 10.1128/genomea.00159-18 contains award number 1931 and U.S. Department of Energy as the funder. There are two grants with the same award number and funder: 10.46936/10.25585/60001053 and 10.46936/genr.proj.2000.1931/60002530. 
It is difficult to choose between them, and these links were marked as unreliable.\nThese examples could be signs of systematic errors and/or discrepancies that effectively prevent linking of those funders\u0026rsquo; grants.\nWhat\u0026rsquo;s next In problems such as linking grants to research outputs, there are typically two key ingredients of the success, which at the same time are the main areas of improvement: the quality of the metadata, and the strength of the linking approach.\nThe metadata could be improved greatly by addressing existing discrepancies between grants and research outputs and allowing (and encouraging!) the publishers to provide grant DOIs in the funding information. Thankfully, we are not alone in those efforts. Both this recent Upstream blog from Alexis-Michel Mugabushaka, and this Scholarly Kitchen post from Robert Harrington call for the development and adoption of grant DOIs in scholarly metadata.\nIn terms of the linking approach, there are some ideas that could be used to further improve the linking accuracy and completeness:\nThe verification by funder name could be fuzzy and allow for minor variations like typos or additional words. Apart from replaced/replaced by and ancestor/descendant, there are other relationships between funders in the Funder Registry: continuation of, incorporates/incorporated into, merged with, renamed as, split into/split from. We could also consider those relationships during the funder validation. Apart from the funder information, there is other information that could be potentially used for verification, for example, the names of the authors and the investigators, the domain, or keywords. If you have any questions, do get in touch!\nAll numbers are as of March 8, 2022\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://api.crossref.org/works?filter=award.number:gbmf7622\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://api.crossref.org/works?filter=award.number:gbmf7622,type:grant\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://api.crossref.org/works?filter=award.number:2817\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe code and data available here: https://gitlab.com/crossref/labs_data_analyses/-/tree/master/analyses/22-01-26-grants-matching\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n", "headings": ["TL;DR","Introduction","The perfect scenario","The reality kicks in","Problem #1: our schema does not allow the publishers to attach grant DOIs to research outputs","Problem #2: award numbers are not unique","Between-funder award number conflicts","Within-funder award number conflicts","Our linking approach","What we found","The importance of being precise","What\u0026rsquo;s next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-registry-of-editorial-boards-a-new-trust-signal-for-scholarly-communications/", "title": "A Registry of Editorial Boards - a new trust signal for scholarly communications?", "subtitle":"", "rank": 1, "lastmod": "2022-03-09", "lastmod_ts": 1646784000, "section": "Blog", "tags": [], "description": "Background Perhaps, like us, you\u0026rsquo;ve noticed that it is not always easy to find information on who is on a journal\u0026rsquo;s editorial board and, when you do, it is often unclear when it was last updated. 
The editorial board details might be displayed in multiple places (such as the publisher\u0026rsquo;s website and the platform where the content is hosted) which may or may not be in sync and retrieving this information for any kind of analysis always requires manually checking and exporting the data from a website (as illustrated by the Open Editors research and its dataset).", "content": "Background Perhaps, like us, you\u0026rsquo;ve noticed that it is not always easy to find information on who is on a journal\u0026rsquo;s editorial board and, when you do, it is often unclear when it was last updated. The editorial board details might be displayed in multiple places (such as the publisher\u0026rsquo;s website and the platform where the content is hosted) which may or may not be in sync and retrieving this information for any kind of analysis always requires manually checking and exporting the data from a website (as illustrated by the Open Editors research and its dataset).\nFor well-established as well as early career researchers, membership of an editorial board demonstrates their contribution to their community, brings prestige, improves (or maintains) their professional profile and often increases their chances of being published.\nWhilst most journal websites only give the names of the editors, others possibly add a country, some include affiliations, very few link to a professional profile, an ORCID ID. Even when it\u0026rsquo;s clear when the editorial board details were updated, it\u0026rsquo;s hardly ever possible to find past editorial boards information and almost none lists declarations of competing interest.\nWe hear of instances where a researcher\u0026rsquo;s name has been listed on the board of a journal without their knowledge or agreement, potentially to deceive other researchers into submitting their manuscripts. Regular reports of impersonation, nepotism, collusion and conflicts of interest have become a cause for concern.\nSimilarly, recent studies on gender representation and gender and geographical disparity on editorial boards have highlighted the need to do better in this area and provide trusted, reliable and coherent information on editorial board members in order to add transparency, prevent unethical behaviour, maintain trust, promote and support research integrity.\nRegistry of Editorial Boards We are proposing the creation of some form of Registry of Editorial Boards to encourage best practice around editorial boards\u0026rsquo; information and governance that can easily be accessed and used by the community.\nWhat we have in mind A Registry of Editorial Boards could be a new trust-signal for Crossref members and details would be included on a member\u0026rsquo;s Participation Report.\nCrossref members would register and maintain this information for their journal titles in a similar way as they currently manage their metadata. Only the owner of the title, or their trusted service provider, would be able to update it. Editors would be linked by ORCID iD and ROR and Crossref would use \u0026lsquo;autoupdate\u0026rsquo; to push editorship information to ORCID profiles, saving researchers time. The information would be made available via Crossref\u0026rsquo;s API.\nThis new service would introduce more transparency and automation to the editorial process and connect content platforms (i.e. peer review management systems, publishers\u0026rsquo; websites, ORCID and other author register systems, ROR, bibliographic databases, etc.) 
and make available current and historical information on editorial boards including metadata on the editorial boards\u0026rsquo; full affiliations.\nThe benefits for the community The benefits would be wide-ranging for the different stakeholders in the scholarly communications community, from publishers, researchers, institutions, funders, bibliometricians to librarians including:\nproviding those involved in the peer review process and research ethics a single, authoritative and up-to-date resource on editorial boards\nreducing fraudulent claims to be or to have been on an editorial board of a publication in order to be published or publish others\nconnecting and automating editorship role updates with e.g. ORCID, ROR, etc.\ngenerating a detailed analysis of the publication practices of editorial board members and their close contacts assessing any relationships between authors, reviewers and editorial board members for conflict of interest, etc.\nsupporting researchers responding to a request to join an editorial board, making proactive approaches to a journal or wanting to ensure that an editorial board is representative of its community and assess its levels of diversity and inclusivity\nproviding increased visibility to researchers, particularly to early career researchers\nYour feedback Before we progress further, we would like to fully understand what the needs of the community are and whether members would be willing and have the capacity to participate and contribute regularly in registering and maintaining details of their editorial boards.\n✏️ Please let us know what your thoughts and experience are with editorial boards by completing this brief survey by 31 March 2022.\n", "headings": ["Background","Registry of Editorial Boards","What we have in mind","The benefits for the community","Your feedback"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/enrich-services/", "title": "Enrich Services", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/posi-fan-tutte/", "title": "POSI fan tutte", "subtitle":"", "rank": 1, "lastmod": "2022-03-08", "lastmod_ts": 1646697600, "section": "Blog", "tags": [], "description": "Just over a year ago, Crossref announced that our board had adopted the Principles of Open Scholarly Infrastructure (POSI).\nIt was a well-timed announcement, as 2021 yet again showed just how dangerous it is for us to assume that the infrastructure systems we depend on for scholarly research will not disappear altogether or adopt a radically different focus. We adopted POSI to ensure that Crossref would not meet the same fate.", "content": "Just over a year ago, Crossref announced that our board had adopted the Principles of Open Scholarly Infrastructure (POSI).\nIt was a well-timed announcement, as 2021 yet again showed just how dangerous it is for us to assume that the infrastructure systems we depend on for scholarly research will not disappear altogether or adopt a radically different focus. 
We adopted POSI to ensure that Crossref would not meet the same fate.\nPOSI proposes three areas that an Open Infrastructure organization can address to garner the trust of the broader scholarly community: accountability (governance), funding (sustainability), and protection of community interests (insurance). POSI also proposes a set of concrete commitments that an organization can make to build community trust in each area. There are 16 such commitments.\nIn our announcement of Crossref’s adoption of POSI, we made two critical points:\nOne doesn’t have to meet all the commitments of POSI already to adopt it. For one thing, this would make it impossible for new organizations to adopt POSI. So instead, we should view the adoption of the POSI principles as a “statement of intent” against which stakeholders can measure an organization\u0026rsquo;s progress. That, conversely, meeting all of the POSI principles doesn’t mean an organization can relax. It is always possible for an organization to regress on a particular commitment. For example, an emergency expenditure might mean that the organization no longer maintains a 12-month contingency fund and therefore has to replenish it. With these two points made, we ended our announcement with a candid self-audit against the principles. We concluded that Crossref was already entirely or partially meeting the requirements of 15 of the 16 POSI commitments. And adopting the 16th commitment would just formalize a direction Crossref had already been heading toward for several years. We also said that we would update our self-audit regularly.\nBut before we continue with the Crossref POSI audit update, we should talk about the immediate aftermath of our adopting the principles.\nSince Crossref adopted POSI, nine other organizations have made the same commitment and conducted similar self-audits. We affectionately call them the “POSI Posse”.\nDryad ROR JOSS OurResearch OpenCitations DataCite OA Switchboard Sciety Europe PMC These organizations represent a critical part of the hidden infrastructure that scholarly research depends on every day. By committing to POSI, they are helping ensure their accountability to the research community. They are also emphasizing that stakeholders must participate in the governance and stewardship of organizations running that infrastructure.\nBut perhaps most importantly- these ten organizations that have publicly committed to adopting POSI will not suddenly disappear or change priorities without giving the community time to react and, if need be, intervene.\nThere are also more quotidian advantages to these organizations adopting POSI. Adopting the principles makes it easier for the respective organizations to collaborate to make research infrastructure more effective and efficient. The foundation of effective collaboration is trust. And, so by agreeing that we share basic principles of operation, we virtually eliminate a whole slew of negotiations that typically need to occur before two organizations trust each other enough to collaborate closely on projects.\nOne of Crossref’s strategic priorities is to “collaborate and partner” with other organizations on improving our open scholarly infrastructure. And the easiest way to collaborate with us is to adhere to the same principles. 
So we look forward to more scholarly infrastructure organizations adopting POSI in 2022 so that, together, we can make research infrastructure work better.\nEstablishing this level of trust has already paid significant dividends with the Research Organization Registry (ROR) - a relatively new infrastructure project founded jointly by DataCite, CDL, and Crossref.\nHaving nine organizations adopt POSI so soon after our announcement was a wonderful feeling. It is hard for us to convey how happy we are about this without gushing.\nHere is a picture of me gushing.\nBut now we have some outstanding business to update our self-audit.\nThis post is the first of our regular updates on our progress (or regress) on meeting the POSI principles.\nTL;DR We didn’t regress on any commitment. We’ve improved a little bit where we were not meeting the POSI principles, but we have still not met all our POSI commitments.\nArea Commitment 2020 2021 Governance Coverage across the research enterprise Non-discriminatory membership Transparent operations Cannot lobby Living will Formal incentives to fulfill mission \u0026amp; wind-down Stakeholder-governed Sustainability Time-limited funds are used only for time-limited activities Goal to generate surplus Goal to create a contingency fund to support operations for 12 months Mission-consistent revenue generation Revenue based on services, not data Insurance Available data (within constraints of privacy laws) Patent non-assertion Open source Open data (within constraints of privacy laws) Details Stakeholder governance moves from red to yellow Our only red mark in our POSI self-audit was against the principle of stakeholder governance. Our board did not yet reflect our members\u0026rsquo; diversity or the broader stakeholder community. In particular, as funders have become more central to shaping the scholarly communications landscape, it seemed important that Crossref have funder representation in our governance.\nSo this year, the Crossref nominations committee was charged with proposing a board slate that addressed some of our representational gaps. They did this, and as a direct result, two of the members elected to next year\u0026rsquo;s board were a funder (Melanoma Research Alliance) and a significant preprint platform (Center for Open Science).\nThese new additions to our board mark a significant improvement in stakeholder governance, but we can do more. Researchers and research institutions are also substantial Crossref stakeholders. We need to have a better representation of their concerns.\nAlso, there are still members of the scholarly communications community who depend on Crossref but cannot afford to join it because our fees are too high for them. Since membership is a prerequisite to participation in Crossref governance, we are also placing emphasis on figuring out how to further extend Crossref membership to those who still cannot afford it, through programs like Sponsorship, country-level journal gap analyses work, and a forthcoming fee review. So this is a source of stakeholder governance inequity that may be best handled by our membership \u0026amp; fees committee rather than our nominations committee.\nIn short, we’ve made progress on our stakeholder governance commitment. 
Still, we need to do more- so we are updating our adherence to the POSI stakeholder governance principle from red to yellow.\nAnother place where we have improved things is under the banner of “transparency.” But here, we see one of the shortcomings of the ‘traffic light” representation used in the self-audit. The degree that one meets a commitment falls along a gradient. And this gradient cannot be represented accurately in the ternary classification of red/yellow/green. So while last year we marked ourselves as “green” under the commitment to transparency, over the past year we have become greener. We did this by creating sections on our website that provide further detail on our governance and finances- even including the 990 forms that are required by US tax authorities for non-profits when they submit their taxes. So what do we do here? Make it neon-green? Make it blink?\nSustainability moves from yellow to chartreuse stays yellow In our first self-audit, we had several yellow marks- places where we were doing OK, but where we needed to make improvements.\nThe first yellow mark involved one of the principles of “sustainability,” which stipulates that an organization should have a goal to create a contingency fund to support operations for 12 months. At the time, we had a contingency fund of 9 months. The board instructed the finance committee to develop a plan for meeting the new 12-month goal. To do this, the board decided to create three funds. The first is fairly flexible and holds operating expenses for three months. Staff leadership can use this fund at their discretion to manage cash flow issues and support budgeted expenses. The second fund is the fund that holds operating expenses for 12 months. This fund is board-restricted and is only meant to be used in emergencies to help with substantial changes in our financial position or to, in extremis, fund an orderly wind-down of Crossref’s operations. Furthermore, the board’s investment committee established guidelines for investing our operating and investment surpluses. Any surpluses are first applied to supporting the 3-month fund. Once that funding goal is met, any surpluses are applied to the 12-month fund. And once both the 3-month and 12-month funding goals are met, any further surpluses will be put into another board-restricted fund that can be used to fund new investments or new Crossref initiatives.\nBut again, the simple yellow mark against this item does not capture this level of detail. We only get to turn it green once we have the 12-month fund in place.\nIt looks like we will meet the goal in 2022, but it is hard to say exactly when. If we did shades of color- we might make it chartreuse. But nobody wants to see chartreuse. So while we have made significant progress here, our commitment to maintaining a 12-month contingency fund remains yellow until we have reached our goal.\nPatent non-assertion stays yellow The second yellow mark was against our publishing a patent-non-assertion statement. This feels like a missed opportunity because it will be straightforward for us to do, but we have not yet done it. We have never applied for patents, and we don’t intend to start. In short, nothing is blocking us from doing this other than our natural reluctance to have to draft anything that involves lawyers. Our lawyers are very nice people, but everything we have to draft with them makes our eyes glaze over. 
We need to get this done ASAP in 2022.\nOpen source remains yellow\nThe third yellow mark makes me cringe because, as technical director, it is firmly in my bailiwick. We have committed to open-sourcing all of our code. In last year’s self-audit, I predicted that we should be able to open all of our code within 12 to 18 months. I was wrong. That means this commitment remains yellow. And what’s more- it is likely to remain yellow for a year or two. Let me try and explain why.\nFirst, I should note that all new services that we’ve written since 2007 have been released as open-source (under an MIT license). These include our REST API, Crossmark, Metadata Search, and Event Data. You can find all our open-source code on Gitlab.\nThis leaves us with our “content system” with its legacy code, which handles content registration, OAI-PMH, OpenURL, and XML APIs. This code was originally developed for Crossref by a third party (who I won’t name because they are in no way to blame for our predicament). Crossref only took over the development of the code base internally in ~2010. But the system has accumulated over twenty years of technical debt and includes many once-common engineering practices that are deprecated (to put it delicately). Additionally, the code is a labyrinth of dependencies on very old libraries under very old licenses.\nAnd although we have spent much of the past two years replacing critical parts of the system’s authentication and authorization code, I am certain that there remain swathes of code that, under scrutiny, would prove a security nightmare.\nNow we know that so-called “security through obscurity” is bad practice. Our legacy code base illustrates the point. We had credentials embedded in the code. We had backdoors and application-level root access. We had countless places where we didn’t sanitize input. But the code was private- and so it gave developers a false sense of confidence when they occasionally made these shortcuts in the interest of developing new features more quickly. And in those early days of hyper-growth, we often had to develop things very, very quickly. Technical debt, like any debt, is a tradeoff.\nAs I said- we’ve cleaned a ton of this stuff up. For example, we’ve replaced our primary authentication system. But this experience has made us better appreciate just how difficult it would be to harden a system this old.\nAnd besides, we are already replacing it - albeit incrementally. We have been extracting and rewriting key components of the old system, and we plan to continue to extract and rewrite until there is nothing left of the old code. All this new code is, naturally, open-source. And it follows modern security practices.\nAnd so we face a difficult choice- do we try and fix code that is hard to fix and that we are replacing anyway- or do we just focus on replacing the code and making sure the new, open-source code follows modern security best practices? We’ve chosen to take the latter route. But it does mean this entry will have a yellow circle next to it for a few more years as we replace things.\nOpen data moves from yellow to green\nAnd this brings us to our final yellow mark- which was next to the principle of open data. The root of the problem is that what we colloquially call “Crossref metadata” is a mix of elements, some of which come from our members, some from third parties, and some from Crossref itself. 
These elements, in turn, each have different copyright implications.\nOn top of this, Crossref has terms and conditions for its members and terms and conditions for specific services. These terms and conditions grant Crossref the right to do things with some classes of metadata and not do things with other classes of metadata - regardless of copyright.\nThe net result is that users can freely use and redistribute any metadata they retrieve via our APIs or in our periodic public data files. But it also means we cannot just slap a CC0 waiver on all the data. Instead, we have to specify exactly what copyright and terms apply to each class of data. We’d never done this in a clear and accessible way, so some of our users were understandably concerned that maybe we were hedging or perhaps the reuse rights were unclear. But we are not hedging; they are clear. They just weren\u0026rsquo;t documented. And now they are. In human-readable form. And soon-to-be in machine-readable form. So we can move this from yellow to green.\nReflections on the year since our adoption of POSI When the Crossref board adopted POSI last year, frankly, a few of us were surprised. We never doubted Crossref’s direction as an open infrastructure organization, but we were not sure that others would see the value in making a public commitment to the principles. We’d heard some people say that they thought adopting them would be seen as “Virtue Signaling.” Which, to be fair, it is. This shouldn’t be surprising or contentious. Our entire scholarly communication system is based on virtue signaling. But, of course, the term “virtue signaling” (with scare quotes) is also sometimes used to insinuate that such signaling is disingenuous and designed primarily for marketing purposes. And that would be a real danger. But the principles were drafted with a built-in safeguard against disingenuous use. The commitments POSI lists are practical things that can be verified by anyone. Is our data open? Does the diversity of our board reflect the diversity of our stakeholders?\nSo from the start, we knew that the community would be able to hold us to our commitments. And knowing that made it imperative that we develop a mechanism and process for tracking whether we were meeting them. Thus was born the self-audit.\nAnd the self-audit, in turn, has served as a forcing function to ensure that we didn’t just launch a proclamation and then forget about it. We needed to integrate our POSI commitments into all aspects of our day-to-day work. As such, “Live up to POSI” is now a prominent part of Crossref’s Strategic Agenda. POSI has become a fundamental part of our planning and our public product roadmap. POSI has even become a part of our internal staff annual development plans.\nAdopting POSI has changed the way we work. It has changed the way the board works. It has changed the way staff works.\nAnd we hope that it is having a similar effect on our fellow POSI Posse.\nBut how about changing the way POSI works? Now that Crossref and the nine other members of the POSI Posse have had a year of considering and/or living up to the POSI standards, what would we change? What would we add?\nA few themes have started to emerge as we’ve fielded questions from the current POSI Posse and others who have expressed an interest in adopting POSI.\nHow does POSI apply to non-membership organizations? Can POSI apply to commercial organizations? How could POSI be extended to apply to open infrastructure organizations outside of scholarly communication? 
How in the hell do you pronounce “POSI?” We’ve tried to answer some of these questions in the POSI FAQ, but can we update POSI so that we don’t need the FAQ? Or at least so that we can start a new FAQ?\nAnd, critically, if we change POSI, how do we ensure we make it stronger and not weaker? Because, to be candid, some of the questions that we’ve fielded have come from parties concerned that POSI is too restrictive. That, for example, the stipulation that revenue should be based on services and not on data makes for inflexible business models. Yes. It does. Deliberately.\nBecause one of the biggest barriers to a community being able to fork digital infrastructure is closed (incl. fee-based) data. And one of the fundamental positions of POSI is one the authors learned from open-source communities. This is that these efforts can fail no matter how much care you take to ensure financial sustainability and how much care you take to ensure community-based governance. The ultimate power the open-source community has is to take the code and fork it. This is the insurance policy that helps keep open source projects honest. And we have tried our best to bake this lesson into the POSI principles. We don’t want to weaken POSI. They are, after all, principles.\nSo in 2022, we look forward to more organizations endorsing POSI. And the current POSI Posse has started a conversation about how we can strengthen the principles and also extend them so that they can more easily be applied to different kinds of organizations and perhaps even in different sectors. A summary of these discussions will be published in the coming weeks.\nBut how will we open these conversations to the broader community? How will we engage those who have yet to adopt the principles but are interested in doing so? What about those interested but perhaps only if they are adapted in some way?\nWe already have a mechanism for soliciting feedback, questions, and suggestions concerning POSI. However, it is a relatively primitive system, based on either sending email to one of the POSI Posse or raising a GitLab ticket. It was the best we could do in the short time we had to put together the POSI site. An MVP, if you will. The feedback mechanism served us well over the past year; we engaged with many interested parties and even managed to help nine of them adopt the principles.\nBut as with all things POSI - there is room for improvement. And so, we hope to have a more user-friendly way to solicit public feedback and hold discussions. This feedback and our own experiences with adopting POSI over the past year will, in turn, inform our efforts at revising POSI to take into account the things we’ve learned since POSI was originally written.\nSo look out for announcements on the POSI site. And we look forward to another year of expanding the list of POSI adopters and continuing our own POSI progress. 
If you’re POSI-curious, get in touch with any of the ten POSI adopters to start a conversation about your own path towards truly open infrastructure.\n", "headings": ["TL;DR","Details","Stakeholder governance moves from red to yellow","Sustainability moves from yellow to chartreuse stays yellow","Patent non-assertion stays yellow","Open source remains yellow","Open data moves from yellow to green","Reflections on the year since our adoption of POSI","But how about changing the way POSI works?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/image-integrity-help-us-figure-out-the-scale-of-the-problem/", "title": "Image integrity: Help us figure out the scale of the problem", "subtitle":"", "rank": 1, "lastmod": "2022-02-07", "lastmod_ts": 1644192000, "section": "Blog", "tags": [], "description": "Some context The Similarity Check Advisory Group met a number of times last year to discuss current and emerging originality issues with text-based content. During those meetings, the topic of image integrity was highlighted as an area of growing concern in scholarly communications, particularly in the life sciences.\nOver the last few months, we have also read with interest the recommendations for handling image integrity issues by the STM Working Group on Image Alteration and Duplication Detection, followed closely image integrity sleuths such as Elizabeth Bik and have, like many of you, noticed that image manipulation is increasingly given as the reason for retractions.", "content": "Some context The Similarity Check Advisory Group met a number of times last year to discuss current and emerging originality issues with text-based content. During those meetings, the topic of image integrity was highlighted as an area of growing concern in scholarly communications, particularly in the life sciences.\nOver the last few months, we have also read with interest the recommendations for handling image integrity issues by the STM Working Group on Image Alteration and Duplication Detection, followed closely image integrity sleuths such as Elizabeth Bik and have, like many of you, noticed that image manipulation is increasingly given as the reason for retractions.\nImage integrity issues are often associated with paper mill activity but can also originate from an individual’s intentional or unintentional unethical behaviour. Currently, such issues with figures and images are being identified manually or by using an image integrity tool, comparing images within the same article and/or the publisher’s past publications only - and we know that this is a source of frustration for the Crossref members we have spoken to.\nWhat next ? As reported in Nature last December, we believe Crossref is in a unique position to spearhead a cross-publisher solution, similar to what we do for text-based originality checking, as part of our Similarity Check service.\nBefore we start exploring potential software options, we need your help to understand:\nthe scale of the issues and whether these are focused on specific disciplines the type of issues we should prioritise e.g. duplication, beautification, rotation, plagiarism, GAN-generated images/deep-fakes, etc. 
what software (if any) members are using or trialling whether a cross-publisher service with the collective benefit of shared images would be of sufficient interest to the community ✏️ Let us know what your experience and thoughts are on image integrity by completing this survey.\nWe’re planning to complete our research and share with you the results along with our proposed next steps soon.\n", "headings": ["Some context","What next ?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/admin-tool/", "title": "Admin Tool", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/hiccups-with-credentials-in-the-test-admin-tool/", "title": "Hiccups with credentials in the Test Admin Tool", "subtitle":"", "rank": 1, "lastmod": "2022-01-26", "lastmod_ts": 1643155200, "section": "Blog", "tags": [], "description": "TL;DR We inadvertently deleted data in our authentication sandbox that stored member credentials for our Test Admin Tool - test.crossref.org. We’re restoring credentials using our production data, but this will mean that some members have credentials that are out-of-sync. Please contact support@crossref.org if you have issues accessing test.crossref.org.\nThe details Earlier today the credentials in our authentication sandbox were inadvertently deleted. This was a mistake on our end that has resulted in those credentials no longer being stored for our members using our Test Admin Tool - test.", "content": "TL;DR We inadvertently deleted data in our authentication sandbox that stored member credentials for our Test Admin Tool - test.crossref.org. We’re restoring credentials using our production data, but this will mean that some members have credentials that are out-of-sync. Please contact support@crossref.org if you have issues accessing test.crossref.org.\nThe details Earlier today the credentials in our authentication sandbox were inadvertently deleted. This was a mistake on our end that has resulted in those credentials no longer being stored for our members using our Test Admin Tool - test.crossref.org.\nTo be clear, this error has had no impact on the production Admin Tool - doi.crossref.org - or any member’s access to registering content therein. If you’re a member who registers content with us using our helper tools (e.g., the web deposit form) or OJS, you’re likely unfamiliar with the Test Admin Tool, and this issue will not affect you or your registration of content.\nWe don’t configure all member accounts for the Test Admin Tool, so, fortunately, this is an issue for the minority of our members. That said, for those members who do use the Test Admin Tool, this is not a trivial problem. And, we’re going to dedicate additional resources across the organization to ensure it is fixed.\nNext steps We’ve repopulated the credentials in the Test Admin Tool based on our production accounts. It was our best option. While we don’t know your current credentials, our support and membership teams do know that the majority of our members using the Test Admin Tool have historically shared credentials between the Test Admin Tool and our production Admin Tool - doi.crossref.org. 
That means that many of you will be able to access the Test Admin Tool using those shared credentials; but some of you - who have used different credentials between the two systems - will not.\nWe also know that for many of you testing submissions is an integral step in your workflow, so we’ve determined this is an all-hands-on-deck situation and our staff, across the organization, will be assisting members who have issues with access to test.crossref.org. Starting today, we’re actively monitoring submissions to the Test Admin Tool for access errors through Friday, 11 February. We’ll be proactively contacting affected members to reset their passwords. If you encounter problems before we reach out to you, please do contact us at support@crossref.org and include ‘Accessing Test Admin Tool’ in your subject line.\n", "headings": ["TL;DR","The details","Next steps"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-ror-some-update-to-our-api/", "title": "A ROR-some update to our API", "subtitle":"", "rank": 1, "lastmod": "2022-01-19", "lastmod_ts": 1642550400, "section": "Blog", "tags": [], "description": "Earlier this year, Ginny posted an exciting update on Crossref’s progress with adopting ROR, the Research Organization Registry for affiliations, announcing that we\u0026rsquo;d started the collection of ROR identifiers in our metadata input schema. 🦁\nThe capacity to accept ROR IDs to help reliably identify institutions is really important but the real value comes from their open availability alongside the other metadata registered with us, such as for publications like journal articles, book chapters, preprints, and for other objects such as grants.", "content": "Earlier this year, Ginny posted an exciting update on Crossref’s progress with adopting ROR, the Research Organization Registry for affiliations, announcing that we\u0026rsquo;d started the collection of ROR identifiers in our metadata input schema. 🦁\nThe capacity to accept ROR IDs to help reliably identify institutions is really important but the real value comes from their open availability alongside the other metadata registered with us, such as for publications like journal articles, book chapters, preprints, and for other objects such as grants. So today\u0026rsquo;s news is that ROR IDs are now connected in Crossref metadata and openly available via our APIs. 🎉\nThis means ROR can be used by and within all the tools and services that integrate with Crossref APIs to analyse, search, recommend, or evaluate research. It’s an important element of the Research Nexus, our vision of a fully connected open research ecosystem, and helps identify, share, and link the affiliations of those producing and publishing different types of research or receiving grants.\nNow that this metadata is available, it helps confer the downstream benefits of ROR for different (and interconnected) groups:\nIt makes it easier for institutions to find and measure their research output by the articles their researchers have published, or perhaps make it easier to track the grants they’ve received. Funders need to be able to discover and track the research and researchers they have supported. Academic librarians need to easily find all of the publications associated with their campus. Journals need to know where authors are affiliated so they can determine eligibility for institutionally sponsored publishing agreements. 
Editors can use more accurate information on author and reviewer institutions during the peer review process, which can help avoid potential conflicts of interest. Those are just a handful of use cases, which is why disseminating ROR affiliation identifiers via our APIs is so important; it lets others choose to do what they need to with the information, without restriction.\nThe story so far A growing number of our members have started to include ROR in the metadata they register with us, so we’re excited to be able to see this via simple API queries.\nAt the time of writing we can see nearly 4,000 RORs being registered by these 21 members (we\u0026rsquo;ve removed test accounts). Note that many of these are being baked into metadata being registered for grant records, also recently released and now findable through the REST API:\n\u0026#34;Wellcome\u0026#34;: 2821, \u0026#34;Natural Resources Canada/CMSS/Information Management\u0026#34;: 277, \u0026#34;University of Szeged\u0026#34;: 139, \u0026#34;RTI Press\u0026#34;: 104, \u0026#34;American Cancer Society\u0026#34;: 103, \u0026#34;University of Missouri Libraries\u0026#34;: 77, \u0026#34;Keldysh Institute of Applied Mathematics\u0026#34;: 52, \u0026#34;Boise State University, Albertsons Library\u0026#34;: 52, \u0026#34;Australian Research Data Commons (ARDC)\u0026#34;: 52, \u0026#34;The Neurofibromatosis Therapeutic Acceleration Program\u0026#34;: 49, \u0026#34;Boise State University\u0026#34;: 12, \u0026#34;The ALS Association\u0026#34;: 11, \u0026#34;Children\u0026#39;s Tumor Foundation\u0026#34;: 9, \u0026#34;Episteme Health Inc\u0026#34;: 3, \u0026#34;The University of the Witwatersrand\u0026#34;: 2, \u0026#34;Office of Scientific and Technical Information\u0026#34;: 2, \u0026#34;AGH University of Science and Technology Press\u0026#34;: 2, \u0026#34;York University Libraries\u0026#34;: 1, \u0026#34;SZTEPress\u0026#34;: 1, \u0026#34;Masaryk University Press\u0026#34;: 1, \u0026#34;Institut für Germanistik der Universität Szeged\u0026#34;: 1, Our grants schema accommodated ROR first, so it\u0026rsquo;s the funder members and grant records that dominate the adoption of ROR\u0026hellip; so far! But there are a few articles and reports there too already. These record types include ROR in their records:\n\u0026#34;Grant\u0026#34;: 3047, \u0026#34;Report\u0026#34;: 382, \u0026#34;Dissertation\u0026#34;: 164, \u0026#34;Journal Article\u0026#34;: 140, \u0026#34;Conference Paper\u0026#34;: 22, \u0026#34;Posted Content\u0026#34;: 12, \u0026#34;Dataset\u0026#34;: 7, \u0026#34;Monograph\u0026#34;: 6, \u0026#34;Book\u0026#34;: 3, \u0026#34;Chapter\u0026#34;: 2, \u0026#34;Proceedings Series\u0026#34;: 1, \u0026#34;Peer Review\u0026#34;: 1, \u0026#34;Journal Issue\u0026#34;: 1, \u0026#34;Book Set\u0026#34;: 1, \u0026#34;Book Series\u0026#34;: 1 We can currently see 205 different ROR IDs in Crossref metadata, with the most frequently provided ROR ID being: https://ror.org/02jx3x895, or University College London as it’s also known as.\nIf you’re a Crossref member keen to assert affiliation identification in your content, our recent webinar, Working with ROR as a Crossref member: what you need to know, covers all the detail.\nInterested in using the information? Dig into our REST API documentation and into the API itself, use the polite pool if you can (i.e. identify yourself). 
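For example, here is a rough sketch in Python (using the requests library) of a polite-pool lookup that pulls any ROR IDs out of a record’s author affiliations. The DOI and email address are placeholders, and the exact affiliation field layout is an assumption to verify against the REST API documentation:

import requests

DOI = "10.5555/12345678"      # placeholder - substitute a DOI whose record carries ROR IDs
MAILTO = "you@example.org"    # identifying yourself routes requests to the polite pool

resp = requests.get(
    f"https://api.crossref.org/works/{DOI}",
    params={"mailto": MAILTO},
    timeout=30,
)
resp.raise_for_status()
work = resp.json()["message"]

for author in work.get("author", []):
    for affiliation in author.get("affiliation", []):
        # Affiliation identifiers, where present, are assumed to appear as a list of
        # {"id": ..., "id-type": "ROR", ...} objects alongside the free-text name.
        for ident in affiliation.get("id", []):
            if ident.get("id-type") == "ROR":
                print(author.get("family"), "|", affiliation.get("name"), "|", ident.get("id"))

Including a mailto parameter (or contact details in your User-Agent header) is what identifies you and routes your requests to the polite pool.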
There’s also a wealth of information on the ROR support site or being shared among integrators in the growing ROR community.\nJoin us in doing more with ROR!\n", "headings": ["The story so far"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/code-of-conduct/", "title": "Code of conduct", "subtitle":"", "rank": 1, "lastmod": "2022-01-19", "lastmod_ts": 1642550400, "section": "Code of conduct", "tags": [], "description": "At Crossref, we assume that most people are intelligent and well-intentioned, and we’re not inclined to tell people what to do. However, we want all engagement with Crossref to be safe and productive for everyone involved. To that end, this Code of Conduct frames our expectations for behavior when interacting with Crossref staff and our community—in person and online—including when enquiring about membership, at our events, during meetings and webinars, through social media, email, support tickets, and forum discussions.", "content": "At Crossref, we assume that most people are intelligent and well-intentioned, and we’re not inclined to tell people what to do. However, we want all engagement with Crossref to be safe and productive for everyone involved. To that end, this Code of Conduct frames our expectations for behavior when interacting with Crossref staff and our community—in person and online—including when enquiring about membership, at our events, during meetings and webinars, through social media, email, support tickets, and forum discussions.\nCrossref is dedicated to providing a harassment-free experience for everyone. We welcome your questions, concerns, and feedback relating to any aspect of our work and offer many opportunities to do so.\nExpected behavior While interacting within a Crossref environment in any way, we expect you to respect all staff and community members, regardless of any diversity characteristics which include but are not limited to:\nage citizenship status disability ethnicity family and other caring responsibilities gender geographic location military/veteran status national origin physical appearance political beliefs pregnancy/parental status professional career level race religion/value system sex sexual orientation socio-economic background/social class Our global community consists of publishers, editors, funders, developers, librarians, researchers, and more, across a wide variety of disciplines. We value the participation of every member and want everyone to have a fulfilling and enjoyable experience in their interactions with Crossref and the wider community.\nIn short: We expect all community members to abide by these guidelines in all their interactions in the Crossref community, both online and in-person. We do not accept harassment or offensive behavior anywhere. It’s counter to Crossref’s values and is counter to our values as human beings.\nAs such, we expect the following:\nAll communication should be appropriate for a professional audience of people of many different backgrounds. Sexual language and imagery are never appropriate in such a context, nor is language referring to personal qualities or characteristics or group membership. Empathize and be respectful of others. Be agreeable even when you disagree. Be polite even if there is cause to complain. Do not use insulting language, harass anyone, impersonate people, or expose their private information. Be encouraging. Include others in the conversation where appropriate and value their contributions. 
Additionally, avoid jargon, slang, and cultural references that can exclude others from engaging. Participate constructively. Aim to improve the discussion and ensure that your contribution is on topic and helpful for others. Never post spam or attempt to mislead others. Pay attention to non-verbal communication. Ensure that your behavior is considerate and any physical contact is consensual. Reporting If someone makes you or anyone else feel unsafe or uncomfortable or otherwise violates the Code of Conduct, please bring this to the attention of Crossref staff or email the report to conduct@crossref.org. All reports will be seen by Lucy Ofiesh and Ginny Hendricks and will be treated privately.\nSteps Crossref will take Participants will be asked to stop any harassing behavior and are expected to comply immediately. Anyone violating this Code of Conduct will be blocked—without warning—from the space where the incident occurred, such as our social media channels, the Crossref community forum, or expulsion from the meeting/event. They may also be banned from all interaction on all platforms for a period of time at our discretion. Crossref may publish a statement publicly about the incident, but never without the permission of those who have been harmed by the incident, and always within the law. Reinstatement Requests to be reinstated after being blocked or banned may be sent to conduct@crossref.org and will be considered.\nReviewing this policy The Crossref Code of Conduct is adapted from several others, including O’Reilly Media Conferences and FORCE11. We review this Code of Conduct regularly and learn from other organizations. Contributions have been made by: Ginny Hendricks, Geoffrey Bilder, Shayn Smulyan, Laura J. Wilkinson, Vanessa Fairhurst, Rosa Morais Clark, Isaac Farley, Amanda Bartell, Rachael Lammey, Lucy Ofiesh.\nPlease contact conduct@crossref.org with any suggestions.\nWe thank our community for your help in keeping Crossref a welcoming, respectful, and friendly community for all participants.\n", "headings": ["Expected behavior","Reporting","Steps Crossref will take","Reinstatement","Reviewing this policy"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2021/", "title": "2021", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/event-data/", "title": "Event Data", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-now-with-added-references/", "title": "Event Data now with added references", "subtitle":"", "rank": 1, "lastmod": "2021-11-10", "lastmod_ts": 1636502400, "section": "Blog", "tags": [], "description": "Event Data is our service to capture online mentions of Crossref records. We monitor data archives, Wikipedia, social media, blogs, news, and other sources. Our main focus has been on gathering data from external sources, however we know that there is a great deal of Crossref metadata that can be made available as events. 
Earlier this year we started adding relationship metadata, and over the last few months we have been working on bringing in citations between records.", "content": "Event Data is our service to capture online mentions of Crossref records. We monitor data archives, Wikipedia, social media, blogs, news, and other sources. Our main focus has been on gathering data from external sources, however we know that there is a great deal of Crossref metadata that can be made available as events. Earlier this year we started adding relationship metadata, and over the last few months we have been working on bringing in citations between records.\nOur members deposit references alongside other metadata, and we have a lot of them. In fact, we have over 1.2 billion, with hundreds of thousands of new references added each day. While our metadata APIs make it easy to see which works are cited, it is much more difficult to find a list of citations to a specific work. We can make this easier by presenting citations as events in Event Data. Now that the huge majority of our members have responded positively to the Initiative for Open Citations (I4OC) campaign and Crossref’s open-by-default reference policy, the move to make this data available via Event Data is a natural step.\nA bumpy ride, but we got there Adding such a large amount of data means a significant increase in the data coming into Event Data, which has presented some challenges. We’ve known for some time that Event Data is not very stable, but we expected it to cope with the new data coming in. We have mitigated by initially only looking at new data, not trying to immediately back-fill with old references. Unfortunately, even with this limitation it hasn’t been a smooth ride, and our first effort to put references into Event Data uncovered bugs we didn’t know about and we had to walk back the changes. We tried again and found that we were hitting rate limits for our own APIs. This is a sure sign of technical debt: we shouldn’t need to be shifting large amounts of our own data from one place to another, and not at rates that could be putting stress on APIs used by others in the community.\nWe have managed to work around these problems and I’m pleased to say that we are now adding metadata from reference lists to Event Data. They can be accessed via the Event Data API: https://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?rows=10\u0026source=crossref\u0026relation-type=references\u0026from-collected-date=2021-10-01\nWhere to next? There remains work to be done. We would like to backfill references, and there is also further work to include relationships to objects that have identifiers other than Crossref records (genes, proteins, ArXiv identifiers, and so on). Our work on investigating sources is proceeding and we will be looking to add more next year. While possible, these steps will be costly and time-consuming if we proceed without significant changes to the infrastructure supporting Event Data.\nWhen we started Event Data the volumes of data were much smaller and our infrastructure coped well, but as we’ve said here before, it’s in need of an overhaul. In fact, our recent experience and some other considerations are making us look at some very fundamental changes in how we record events.\nWe are therefore working on a new data model that will allow events to be stored alongside the rest of our metadata. This work is still in the early stages, but if we are successful it will mean that we won’t need to move data between databases. 
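In the meantime, the reference events that are already flowing in can be pulled from the Event Data API endpoint mentioned above (the public host is api.eventdata.crossref.org, which sits behind the proxied URL in this post). Here is a rough sketch in Python using the requests library and the same query parameters; the response field names are assumptions to double-check against the Event Data documentation:

import requests

resp = requests.get(
    "https://api.eventdata.crossref.org/v1/events",
    params={
        "rows": 10,
        "source": "crossref",
        "relation-type": "references",
        "from-collected-date": "2021-10-01",
    },
    timeout=30,
)
resp.raise_for_status()
message = resp.json().get("message", {})

print("events matching this query:", message.get("total-results"))
for event in message.get("events", []):
    # Each event is assumed to expose the citing work, the relation type, and the cited work.
    print(event.get("subj_id"), event.get("relation_type_id"), event.get("obj_id"))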
This new data model will also make it easier to provide access to all of our reference metadata along with other relationships that we’re not currently able to provide, and give us the capacity to add new data sources.\nOpen references [EDIT 6th June 2022 - all references are now open by default with the March 2022 board vote to remove any restrictions on reference distribution].\nIt is worth noting that only open references will be available via Event Data. This covers 88% of works with references at present. Members have the option to deposit references with limited visibility, meaning only Metadata Plus users can access them; or closed visibility, meaning that only the member who owns the cited work can retrieve the citation, via Cited-by.\nWe encourage our members to make their references open and deposit them as metadata. It makes them usable downstream by thousands of tools that researchers use. Including open references also improves the quality of metadata, and there are reciprocal benefits for the large number of members who openly share their reference data: they contribute to a large, openly available pool of data with many applications that advance research, and drive usage of the content published by our members.\nIf you are a Crossref member and unsure whether your reference metadata is open or not, check your participation report. This will tell you the percentage of your records with deposited references, and the percentage of those that are open. You can change the reference visibility preference for each DOI prefix that you own by contacting our support team. For guidance on how to deposit references, see our user documentation.\n", "headings": ["A bumpy ride, but we got there","Where to next?","Open references"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/come-and-get-your-grant-metadata/", "title": "Come and get your grant metadata!", "subtitle":"", "rank": 1, "lastmod": "2021-11-08", "lastmod_ts": 1636329600, "section": "Blog", "tags": [], "description": "Tl;dr: Metadata for the (currently 26,000) grants that have been registered by our funder members is now available via the REST API. This is quite a milestone in our program to include funding in Crossref infrastructure and a step forward in our mission to connect all.the.things. This post gives you all the queries you might need to satisfy your curiosity and start to see what\u0026rsquo;s possible with deeper analysis. So have a look and see what useful things you can discover.", "content": "Tl;dr: Metadata for the (currently 26,000) grants that have been registered by our funder members is now available via the REST API. This is quite a milestone in our program to include funding in Crossref infrastructure and a step forward in our mission to connect all.the.things. This post gives you all the queries you might need to satisfy your curiosity and start to see what\u0026rsquo;s possible with deeper analysis. So have a look and see what useful things you can discover.\nHow it started Back in 2017 we posted the outcomes of some discussions with a newly-reformed Funder Advisory Group, plotting Crossref\u0026rsquo;s path. In 2018, Wellcome described their rationale for supporting the grants effort with the help of Europe PMC, and in 2019 the sub-groups of the Advisory Board put out a call for feedback on the metadata plan as the fee model they created was also approved by our board.\nSince late 2019, research funders have been registering metadata and identifiers for their grants with us. 
We currently have a healthy 26k grants registered with us, via 13 funding organisations. I’d specifically highlight Wellcome for volume (registering via Europe PMC), and the Australian Research Data Commons (ARDC) who was the first funder that included ROR IDs in their grant metadata, really getting the value of connecting all related entities and contributors.\nThe reasons for registering grants with Crossref? Let\u0026rsquo;s recap:\nSupport of open data and information about grants Streamlined discovery of funded content Improved analytics and data quality More complete picture of outputs and impact Better value from investments in reporting services Improved timeliness, completeness and accuracy of reporting: save time for researchers More complete information to support analysis and evaluation without relying on manual data entry How it\u0026rsquo;s going For grant information to be used, it’s key that it is is openly available and disseminated as widely as possible. That work starts with funders registering their grants, and continues with us. Now that we’ve completed the REST API\u0026rsquo;s Elasticsearch migration, we’re happy to announce that all our grant information is now available via our REST API.\nHere’s a snippet of the kind of metadata you can see related to the grants registered with us. This is information related to grant record https://0-doi-org.libus.csd.mu.edu/10.35802/218300, found using this request (https://0-api-crossref-org.libus.csd.mu.edu/works/10.35802/218300) which you can use to see the full metadata record:\n\u0026#34;publisher\u0026#34;: \u0026#34;Wellcome\u0026#34;, \u0026#34;award\u0026#34;: \u0026#34;107769\u0026#34;, \u0026#34;DOI\u0026#34;: \u0026#34;10.35802/107769\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;grant\u0026#34;, \u0026#34;created\u0026#34;: { \u0026#34;date-parts\u0026#34;: [ [ 2019, 9, 25 ] ], \u0026#34;date-time\u0026#34;: \u0026#34;2019-09-25T07:17:20Z\u0026#34;, \u0026#34;timestamp\u0026#34;: 1569395840000 }, \u0026#34;source\u0026#34;: \u0026#34;Crossref\u0026#34;, \u0026#34;prefix\u0026#34;: \u0026#34;10.35802\u0026#34;, \u0026#34;member\u0026#34;: \u0026#34;13928\u0026#34;, \u0026#34;project\u0026#34;: [ { \u0026#34;project-title\u0026#34;: [ { \u0026#34;title\u0026#34;: \u0026#34;Initiative to Develop African Research Leaders (IDeAL)\u0026#34; } ], \u0026#34;project-description\u0026#34;: [ { \u0026#34;description\u0026#34;: \u0026#34;Research is key in tackling the heath challenges that Africa faces. In KWTRP we have been committed to building sustainable capacity alongside an active and diverse research programme covering social science, health services research, epidemiology, laboratory science including molecular biology and bioinformatics. Our strategy has been successful in delivering high quality PhD training, leveraging individual funding and programme funding in order to place students in productive groups and provide high quality supervision and mentorship. Here we plan to consolidate and build on these outputs to address long-term sustainability. We will emphasise the full career path needed to generate research leaders. KWTRP aims to address capacity building for research through an initiative that employs a progressive and long term outlook in the development of local research leadership. 
The overall aim of the \\\u0026#34;Initiative to Develop African Research Leaders\\\u0026#34; (IDeAL) is to build a critical mass of African researchers who are technically proficient as scientists and well-equipped to independently lead science at international level, able to engage with funders, policy makers and governments, and to act as supervisors and mentors for the next generation of researchers.\u0026#34;, \u0026#34;language\u0026#34;: \u0026#34;en\u0026#34; }, If you dig in, you can see information about the project, investigators (including their ORCID iDs), the funder, award type, amount, description of the grant, and a link to the public page showing information about the grant. More information on the required and optional fields is available in our grants markup guide.\nHere are some examples of the kind of things you can now ask:\nShow me who is registering grants: https://0-api-crossref-org.libus.csd.mu.edu/types/grant/works?rows=0\u0026amp;facet=funder-name:*\nShow me all of the grants registered by Wellcome: https://0-api-crossref-org.libus.csd.mu.edu/works?query.funder-name=Wellcome\u0026filter=type:grant\nShow me all of the grants associated with the investigator name Caldas: https://0-api-crossref-org.libus.csd.mu.edu/works?query.contributor=Caldas\u0026filter=type:grant\nAnd bibliographic queries finding entries in\u0026hellip;\nAward number: https://0-api-crossref-org.libus.csd.mu.edu/works?query.bibliographic=7196\u0026filter=type:grant\nProject title: https://0-api-crossref-org.libus.csd.mu.edu/works?query.bibliographic=RIZ1\u0026filter=type:grant\nMore to do This is a milestone but it\u0026rsquo;s not the end of the story. We have more to do: add relationships, encourage the use of this metadata amongst publishers and their platforms, and add grant records to our tools such as Participation Reports and Metadata Search. But in the meantime, feel free to get in touch if you have queries about registering grants with us or about using the related metadata in your tools and services.\nThis information will grow over time as more funders join Crossref and add their grant metadata and as more analysis is possible. We\u0026rsquo;re looking forward to the next steps!\n", "headings": ["How it started","How it\u0026rsquo;s going","Show me who is registering grants:","Show me all of the grants registered by Wellcome:","Show me all of the grants associated with the investigator name Caldas:","Award number:","Project title:","More to do"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/simcheck-new-thank-you/", "title": "Thank you for your application", "subtitle":"", "rank": 1, "lastmod": "2021-11-03", "lastmod_ts": 1635897600, "section": "Become a member", "tags": [], "description": "Thanks for applying for the Similarity Check service. Once we\u0026rsquo;ve confirmed that you meet the obligations of having full-text URLs supplied in over 90% of your registered content, we\u0026rsquo;ll work with the team at Turnitin to check that they can index your content. If they have any problems, they\u0026rsquo;ll contact the technical contact you have just supplied.\nOnce your content has been indexed, Turnitin will provide you with your login details for the iThenticate system, and we send you more details to help you get underway.", "content": "Thanks for applying for the Similarity Check service. 
Once we\u0026rsquo;ve confirmed that you meet the obligations of having full-text URLs supplied in over 90% of your registered content, we\u0026rsquo;ll work with the team at Turnitin to check that they can index your content. If they have any problems, they\u0026rsquo;ll contact the technical contact you have just supplied.\nOnce your content has been indexed, Turnitin will provide you with your login details for the iThenticate system, and we send you more details to help you get underway.\n", "headings": ["Thanks for applying for the Similarity Check service."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/simcheck-transition-new-thank-you/", "title": "Thank you for your application", "subtitle":"", "rank": 1, "lastmod": "2021-11-03", "lastmod_ts": 1635897600, "section": "Become a member", "tags": [], "description": "Thanks for applying for Similarity Check Once we\u0026rsquo;ve confirmed that you meet the obligations, we\u0026rsquo;ll work with the team at Turnitin to check that they can index your content. If they have any problems, they\u0026rsquo;ll contact the technical contact you have just supplied. Once you\u0026rsquo;re all set up, they\u0026rsquo;ll provide you with your login details for the iThenticate system, and we\u0026rsquo;ll help you get yourself set up and underway.", "content": "Thanks for applying for Similarity Check Once we\u0026rsquo;ve confirmed that you meet the obligations, we\u0026rsquo;ll work with the team at Turnitin to check that they can index your content. If they have any problems, they\u0026rsquo;ll contact the technical contact you have just supplied. Once you\u0026rsquo;re all set up, they\u0026rsquo;ll provide you with your login details for the iThenticate system, and we\u0026rsquo;ll help you get yourself set up and underway.\n", "headings": ["Thanks for applying for Similarity Check"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/update-on-the-outage-of-october-6-2021/", "title": "Update on the outage of October 6, 2021", "subtitle":"", "rank": 1, "lastmod": "2021-10-27", "lastmod_ts": 1635292800, "section": "Blog", "tags": [], "description": "In my blog post on October 6th, I promised an update on what caused the outage and what we are doing to avoid it happening again. This is that update.\nCrossref hosts its services in a hybrid environment. Our original services are all hosted in a data center in Massachusetts, but we host new services with a cloud provider. We also have a few R\u0026amp;D systems hosted with Hetzner.\nWe know an organization our size has no business running its own data center, and we have been slowly moving services out of the data center and into the cloud.", "content": "In my blog post on October 6th, I promised an update on what caused the outage and what we are doing to avoid it happening again. This is that update.\nCrossref hosts its services in a hybrid environment. Our original services are all hosted in a data center in Massachusetts, but we host new services with a cloud provider. We also have a few R\u0026amp;D systems hosted with Hetzner.\nWe know an organization our size has no business running its own data center, and we have been slowly moving services out of the data center and into the cloud.\nFor example, over the past nine months, we have moved our authentication service and our REST APIs to the cloud.\nAnd, we are working on moving the other existing services too. 
For example, we are in the midst of moving Event Data and, our next target, after Event Data, is the content registration system.\nAll new services are deployed to the cloud by default.\nWhile moving services out of the data center, we have also been trying to shore up the data center to ensure it continues to function during the transition. One of the weaknesses we identified in the data center was that the same provider managed both our primary network connection and our backup connection (albeit- on entirely different physical networks). We understood that we really needed a separate provider to ensure adequate redundancy, and we had already had a third network drop installed from a different provider. But, unfortunately, it had not yet been activated and connected.\nMeanwhile, our original network provider for the first two connections informed us months ago that they would be doing some major work on our backup connection. However, they assured us that it would not affect the primary connection- something we confirmed with them repeatedly since we knew our replacement backup connection was not yet active.\nBut, the change our provider made did affect both the backup (as intended) and the primary (not intended). They were as surprised as we were, which kind of underscores why we want two separate providers as well as two separate network connections.\nSo both our primary and secondary networks went down while we had not yet activated our replacement secondary network.\nAlso, our only local infrastructure team member was in surgery at the time (He is fine. It was routine. Thanks for asking).\nThis meant we had to send a local developer to the data center, but the data center’s authentication process had changed since the last time said developer had visited (pre-pandemic). So, yeah, it took us a long time to even get into the data center.\nBy then, our infrastructure team member was out of surgery and on the phone with our network provider, who realized their mistake and reverted everything. This whole process (getting network connectivity restored, not the surgery) took almost two hours.\nUnfortunately, the outage didn’t just affect services hosted in the data center. It also affected our cloud-hosted systems. This is because all of our requests were still routed to the data center first, after which those destined for the cloud were split out and redirected. This routing made sense when the bulk of our requests were for services hosted in the data center. But, within the past month, that calculus had shifted. Most of our requests now are for cloud-based services. We were scheduled to switch to routing traffic through our cloud provider first, and had this been in place, many of our services would have continued running during the data center outage.\nIt is very tempting to stop this explanation here and leave people with the impression that:\nThe root cause of the outage was the unpredicted interaction between the maintenance on our backup line and the functionality of our primary line; Our slowness to respond was exclusively down to one of the two members of our infrastructure staff being (cough) indisposed at the time. 
But the whole event uncovered several other issues as well.\nNamely:\nEven if one of our three lines had stayed active, the routers in the data center would not have cut over to the redundant working system because we had misconfigured them and we had not tested them; We did not keep current documentation on the changing security processes for accessing the data center; Our alerting system does not support the kind of escalation logic, and coverage-scheduling that would have allowed us to automatically detect when our primary data center administrator didn’t respond (being in surgery and all) and redirect alerts and warnings to secondary responders; and We need to accelerate our move out of the data center. What are we doing to address these issues?\nCompleting the installation of the backup connection with a second provider; Scheduling a test of our router’s cutover processes where we will actually pull the plug on our primary connection to ensure that failover is working as intended. We will give users ample warning before conducting this test; Revising our emergency contact procedures and updating our documentation for navigating our data center’s security process; Replacing our alerting system with one that gives us better control over escalation rules; and Adding a third FTE to the infrastructure team to help us accelerate our move to the cloud and to implement infrastructure management best practices. October 6th, 2021, was a bad day. But we’ve learned from it. So if we have a bad day in the future, it will at least be different.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/principles-practices/", "title": "Metadata principles and practices", "subtitle":"", "rank": 1, "lastmod": "2021-10-22", "lastmod_ts": 1634860800, "section": "Documentation", "tags": [], "description": "When you register your content with us, you create a metadata record for a digital object. The metadata within that record becomes an enduring, widely distributed connection to the research nexus.\nOur requirements are minimal, beyond basic bibliographic metadata. We’d like to require everything, but don’t because:\nNot all metadata fields are relevant. For example, not all journals have volumes and issues, and not all articles have funding. Our members are not always able to send us everything, and having some metadata is better than having no metadata.", "content": "When you register your content with us, you create a metadata record for a digital object. The metadata within that record becomes an enduring, widely distributed connection to the research nexus.\nOur requirements are minimal, beyond basic bibliographic metadata. We’d like to require everything, but don’t because:\nNot all metadata fields are relevant. For example, not all journals have volumes and issues, and not all articles have funding. Our members are not always able to send us everything, and having some metadata is better than having no metadata. For example, it’s better to have an identifier attached to basic bibliographic information than for there to be no identifier at all. Some metadata are hard to come by. For example, digitized back issues may not have good reference lists available. However, we hope all members will follow our metadata best practices rather than just meeting the basic requirements. 
This will ensure that the records and identifiers you register with us are discoverable and connected.\nPrinciples (modeled on Metadata 20/20 principles) Metadata 20/20 has a set of basic principles that can be applied to our metadata to ensure that it is Compatible, Complete, Credible and Curated.\nPrinciples are aspirational - they help us define what we hope to accomplish with our metadata. So while we don’t meet all of the principles completely, they can still guide us as we move forward. Let\u0026rsquo;s take a look at the Metadata 20/20 principles one-by-one.\nCOMPATIBLE: provide a guide to content for machines and people So, metadata must be as open, interoperable, parsable, machine actionable, and human readable as possible.\nHow are we compatible? The metadata provided to Crossref is made freely and openly available through our APIs. Crossref metadata is provided in both JSON and XML formats. Our JSON and ‘UNIXSD’ XML formats are comprehensive and contain all metadata registered with us. We also provide limited metadata tailored for specific purposes via content negotiation (BibTeX, RIS, RDF) - see the short sketch at the end of this page. We try to make use of vocabularies and identifiers as much as possible, and allow free text only when there is no other option. What more can we do? Provide a JSON schema to make REST API outputs easier to ingest. Adopt and support existing and new standards that define the metadata we collect. COMPLETE: reflect the content, components and relationships as published So, metadata must be as complete and comprehensive as possible.\nHow are we complete? We aim to collect all metadata that is relevant to describing and using the scholarly content registered with us, and work to make it possible for members to send this metadata to us.\nWhat more can we do? A lot, this is our biggest challenge - we need to:\nMake it easy for members to send metadata to us. Make it easy for members to assess the metadata they have sent to us. Evolve our schema (or evolve beyond an XML schema) quickly to support new types of content and metadata segments. CREDIBLE: enable content discoverability and longevity So, metadata must be of clear provenance, trustworthy and accurate.\nHow are we credible? Our metadata is provided to us by our members, and we don’t curate or clean up the metadata in any way. We do insert metadata into outputs such as DOI matches for citations, recursive relationships, and clearly flag those pieces as being inserted by Crossref in our metadata outputs.\nThis means, good or bad, metadata accuracy depends on the quality of metadata provided by our members.\nWhat more can we do? We can:\nFacilitate reporting and correction of metadata errors identified by metadata users. Create tools to help members assess their metadata quality. CURATED: reflect updates and new elements So, metadata must be maintained over time.\nHow are we curated? An important obligation for our members is to keep metadata up to date - for some this may mean periodically updating registered URLs, for others this may mean ensuring license and Crossmark data is current. (Find out more about maintaining metadata.)\nWhat more can we do? Assess and report URLs that are broken. Provide tools to allow members to assess their license metadata. Make sure that DOIs that move from member to member are maintained. 
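As a small illustration of the COMPATIBLE principle above, metadata for a registered record can be fetched in several formats via content negotiation on doi.org. Here is a rough sketch in Python using the requests library; the DOI is a placeholder, and the list of media types should be checked against our content negotiation documentation:

import requests

DOI = "10.5555/12345678"  # placeholder - substitute any registered Crossref DOI

# Request different representations of the same record by varying the Accept header;
# the DOI resolver redirects the request to a metadata service that honours it.
for accept in ("application/vnd.citationstyles.csl+json", "application/x-bibtex"):
    resp = requests.get(
        f"https://doi.org/{DOI}",
        headers={"Accept": accept},
        timeout=30,
    )
    resp.raise_for_status()
    print("---", accept, "---")
    print(resp.text[:300])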
", "headings": ["Principles (modeled on Metadata 20/20 principles)","How are we compatible?","What more can we do?","How are we complete?","What more can we do?","How are we credible?","What more can we do?","How are we curated?","What more can we do?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/lindsay-russell/", "title": "Lindsay Russell", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/more-new-faces-at-crossref/", "title": "More new faces at Crossref", "subtitle":"", "rank": 1, "lastmod": "2021-10-21", "lastmod_ts": 1634774400, "section": "Blog", "tags": [], "description": "Looking at the road ahead, we’ve set some ambitious goals for ourselves and continue to see new members join from around the world, now numbering 16,000. To help achieve all that we plan in the years to come, we’ve grown our teams quite a bit over the last couple of years, and we are happy to welcome Carlos, Evans, Fabienne, Mike, Panos, and Patrick.\n", "content": "Looking at the road ahead, we’ve set some ambitious goals for ourselves and continue to see new members join from around the world, now numbering 16,000. To help achieve all that we plan in the years to come, we’ve grown our teams quite a bit over the last couple of years, and we are happy to welcome Carlos, Evans, Fabienne, Mike, Panos, and Patrick.\nOur Software Development team has seen the most growth with the addition of Carlos, Mike, Panos, and Patrick; collectively, they bring specialist skills that are helping us to pay down technical debt, modernize our underlying infrastructure, and prepare for a consistent front-end experience. As a member of the Product team, Fabienne has a fresh take on our Similarity Check service, steering the upgrade to iThenticate v2. And Evans brings a scientific researcher perspective to our Member Experience team along with experience as a member who’s worked with our tools.\nAnd now some words from each of them.\nCarlos Del Ojo Elias I am a computer scientist with a master’s degree in Bioinformatics. Previously I used to work as a security auditor. I\u0026rsquo;ve got experience in research and software development both in academia and industry. It\u0026rsquo;s very exciting for me to join Crossref as a Senior software developer on the technology team. My current project involves working on the authentication and authorization subsystems, exploring state-of-the-art technologies in order to improve our services. I have always enjoyed contributing to the open-source community, so it is a pleasure for me to work in an organization that promotes the principles of openness and transparency of software and data. Evans Atoni I am a member of the Technical Support team having joined Crossref just a few weeks ago. I’m passionate about advancing open access and POSI. Helping our members sort through knotty technical queries and building relations with them to service their very diverse needs is what I’m most excited about in my role. In my spare time, I enjoy anything outdoors, family time, and traveling. I work remotely from Nairobi, Kenya. 
Fabienne Michaud I joined Crossref in April 2021 as a Product Manager for scholarly stewardship which includes the content comparison tool Similarity Check and I am thrilled to be a member of such a lovely, supportive and international team. I have a background in teaching and have worked in academic, research, and not-for-profit libraries in the UK for over 20 years in academic liaison, customer services, and management roles. These experiences have given me a user-centered approach and a drive to find collaborative, reliable, and pertinent technological solutions to support the research and scholarly community. Since starting at Crossref and, through my work with the Similarity Check Advisory Group, I have developed a good understanding of the current ethical issues facing the publishing sector (such as paper mills and other manipulations of the publication process) and a particular interest in how AI and automation tools can play a part in addressing these challenges. Mike Gill I’ve been a software developer for twenty years, having studied software engineering at university. During my career, I have worked mostly in the banking and engineering industries so this is my first time working in scholarly publishing. I confess that before joining Crossref I wasn’t aware that the community existed so I was excited to see how I could ply my trade in this new (to me!) field. The role also appealed as, having primarily been a team leader/line manager in my recent career, this was an opportunity to be hands-on again and work with modern languages such as Kotlin. In the end, though, what really sealed it for me was reading on the Crossref website that ‘we take the work seriously but not necessarily ourselves’ which basically sums me up. So I knew I’d be in good company and that has proven to be the case!\nPanos Pandis I joined Crossref as a Senior Software Developer in 2020, in the middle of the coronavirus pandemic. Moving to Crossref has been a much-needed breath of fresh air. I\u0026rsquo;m a big fan of open-source, and at Crossref, it just feels like home. Even more so after our recent commitment to the Principles of Open Scholarly Infrastructure (POSI). My main focus at the moment is Crossref\u0026rsquo;s Event Data service. I\u0026rsquo;m fascinated by the potential of Event Data and the broad audience I get to support and communicate with through the project. So if you spot me in a room, feel free to ask me anything about Clojure/Kotlin, Event Data, obscure technology, or kombucha recipes.\nPatrick Vale I\u0026rsquo;m delighted to have joined Crossref as the first Frontend Developer. My role covers the inauguration of a scalable framework in which we can build future User Interfaces, and generally making people\u0026rsquo;s lives easier as they interact with our products and services - if a human uses it, I\u0026rsquo;m interested! It\u0026rsquo;s my intention to provide a platform on which we can quickly iterate to build and adapt our interfaces to suit the rapidly changing needs of our community. It\u0026rsquo;s been a pleasure to learn about the impact Crossref has across the scholarly spectrum; and to work with a team of open, practical, and downright friendly colleagues is a privilege. Outside of work, I enjoy cycling, growing things, and most recently, avoiding two small cats while moving from anywhere to anywhere around the house. 
Your contributions have been impactful and it will be fun to see all that you will surely contribute to our road ahead!\n", "headings": ["Carlos Del Ojo Elias","Evans Atoni","Fabienne Michaud","Mike Gill","Panos Pandis","Patrick Vale"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reports/", "title": "Reports", "subtitle":"", "rank": 1, "lastmod": "2021-10-17", "lastmod_ts": 1634428800, "section": "Documentation", "tags": [], "description": "Our reports are tools to help you evaluate and improve your metadata. The dashboard gives an overview of our ever-growing corpus of metadata. How good is your metadata? Find out using Participation Reports and other reports to evaluate your metadata records, check for any issues, and learn how to resolve them. Some reports are sent to you by email, and some are available on our website.\nReports by email We send reports by email from reports@crossref.", "content": "Our reports are tools to help you evaluate and improve your metadata. The dashboard gives an overview of our ever-growing corpus of metadata. How good is your metadata? Find out using Participation Reports and other reports to evaluate your metadata records, check for any issues, and learn how to resolve them. Some reports are sent to you by email, and some are available on our website.\nReports by email We send reports by email from reports@crossref.org to specific contacts on your account. Do add this address to your email contacts list or safe senders list to ensure that you receive them. These reports are intended to help you keep your metadata records up-to-date, and include:\nConflict report - this report shows where two (or more) DOIs have been submitted with the same metadata, indicating that you may have duplicate DOIs. You’ll start receiving conflict reports if you have at least one conflict. These reports are sent out on a monthly basis, or more frequently if your number of conflicts increases by over 500, and they are sent to the main Technical contact on your account. DOI error report - a DOI error report is sent immediately when a user informs us that they’ve seen a DOI somewhere which doesn\u0026rsquo;t resolve to a website, and it is sent to the main Technical contact on your account. Resolution report - this monthly report shows the number of successful and failed DOI resolutions for the previous month, and it is sent to the Primary contact on your account (please note - this contact used to be known as the Business contact). Schematron report - the main Technical contact on your account may also receive periodic Schematron reports if there’s a metadata quality issue with your records. If you aren’t receiving reports, please check the emails aren’t being caught by your spam filter. It might also be because our contact information for your organization is not current, or you aren’t the designated reports person in our database. Please contact us and we’ll sort it out for you.\nIf you are not the appropriate person to receive reports, we can send reports to a different email address.
If you don’t find our reports useful, please contact us and we’ll see what we can do.\nIf you need information about something that’s not covered by your reports, please explore all the information you can access through our REST API, or contact us for help.\nReports available on our website These reports are available on our website:\nBrowsable title list Conflict report Depositor report DOI crawler report Field or missing metadata report Missed conflict report Schematron report XML journal list ", "headings": ["Reports by email ","Reports available on our website "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/metadata-user/", "title": "Metadata User Working Group", "subtitle":"", "rank": 2, "lastmod": "2021-10-13", "lastmod_ts": 1634083200, "section": "Working groups", "tags": [], "description": "Overview Despite a large, international and growing community of users, the particulars of metadata user workflows are still too often unclear or undocumented. Questions of efficiencies, thresholds, how records and elements are evaluated for usefulness and how multiple integrations are managed will be explored as a group, through a series of calls and asynchronous work. The group is expected to run through 2022.\nGoals Document workflows Highlight the efforts of metadata users in enabling discovery/discoverability Determine directions for improved engagement Inform approaches to product planning Participants The group is a mix of service subscribers using different interfaces:", "content": "Overview Despite a large, international and growing community of users, the particulars of metadata user workflows are still too often unclear or undocumented. Questions of efficiencies, thresholds, how records and elements are evaluated for usefulness and how multiple integrations are managed will be explored as a group, through a series of calls and asynchronous work. The group is expected to run through 2022.\nGoals Document workflows Highlight the efforts of metadata users in enabling discovery/discoverability Determine directions for improved engagement Inform approaches to product planning Participants The group is a mix of service subscribers using different interfaces:\nAchraf Azhar, Centre pour la Communication Scientifique Directe (CCSD) Satam Choudhury, HighWire Press Nees van Eck, CWTS-Leiden University Bethany Harris, Jisc Ajay Kumar, Nova Techset David Levy, Pubmill Bruno Ohana, biologit Michael Parkin, European Bioinformatics Institute (EMBL-EBI) Axton Pitt, Litmaps Dave Schott, Copyright Clearance Center (CCC) Stephan Stahlschmidt, German Centre for Higher Education Research and Science Studies (DZHW) Once concluded, results of the discussions will be shared with the community.\n", "headings": ["Overview","Goals","Participants"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/schema-library/", "title": "Schema library", "subtitle":"", "rank": 1, "lastmod": "2021-10-11", "lastmod_ts": 1633910400, "section": "Documentation", "tags": [], "description": "The metadata that our members register with us is stored as XML, and our XML schema provides a structure and set of rules to keep everything consistent and interoperable.\nIf you\u0026rsquo;re using one of our helper tools, you don\u0026rsquo;t need to worry too much about the schema, as the tool you use will transform your information into XML for you. 
This means that if you are using one of our helper tools, you don\u0026rsquo;t need to read this section of the documentation.", "content": "The metadata that our members register with us is stored as XML, and our XML schema provides a structure and set of rules to keep everything consistent and interoperable.\nIf you\u0026rsquo;re using one of our helper tools, you don\u0026rsquo;t need to worry too much about the schema, as the tool you use will transform your information into XML for you. This means that if you are using one of our helper tools, you don\u0026rsquo;t need to read this section of the documentation.\nHowever, if you\u0026rsquo;re sending us XML directly, it\u0026rsquo;s important that you understand the schema, so this section of the documentation is for you.\nAs a registration agency of the International DOI Foundation, we follow the ISO/IEC 11179 Metadata Registry (MDR) standard, which specifies a schema for recording both the meaning and technical structure of the data for unambiguous usage by humans and computers.\nWe have a single deposit schema, which supports a range of different record types (see full list of record types).\nWe support several versions of our schema but we recommend using the latest version (v5.3.1). For certain types of updates, we also offer the resource-only section of the schema. (Here\u0026rsquo;s more information on what you can update with a resource-only deposit).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/outage-of-october-6-2021/", "title": "Outage of October 6, 2021", "subtitle":"", "rank": 1, "lastmod": "2021-10-06", "lastmod_ts": 1633478400, "section": "Blog", "tags": [], "description": "On October 6 at ~14:00 UTC, our data centre outside of Boston, MA went down. This affected most of our network services- even ones not hosted in the data centre. The problem was that both of our primary and backup network connections went down at the same time. We\u0026rsquo;re not sure why yet. We are consulting with our network provider. It took us 2 hours to get our systems back online.", "content": "On October 6 at ~14:00 UTC, our data centre outside of Boston, MA went down. This affected most of our network services- even ones not hosted in the data centre. The problem was that both of our primary and backup network connections went down at the same time. We\u0026rsquo;re not sure why yet. We are consulting with our network provider. It took us 2 hours to get our systems back online.\nWe are going to reprocess content that was in the process of being registered at the time of the outage in order to make sure everything gets registered correctly. This may take a few days to complete.\nWhy did we have such a complete outage and why did it take us so long to fix it? We still run a significant amount of our infrastructure in a data centre outside of Boston that we manage ourselves. Even though we\u0026rsquo;ve been moving many of our services to the cloud, all our traffic was still routed through the data centre - so when it went down, most of our cloud services were unavailable as well.\nIt took us a long time to fix this because our infrastructure team only has two people in it. Only one of them is located near the data centre and was at the doctor’s when the outage occurred. 
Although we were alerted to the problem immediately, we had to send one of our development team members to the data centre to diagnose and fix the problem.\nWe have been aware of these weaknesses in our system since I took the role of director of technology in 2019, and we have been putting most of our efforts over the past two years into fixing them.\nWe know that an organisation of our size has no business trying to run and maintain a physical data centre ourselves. One of the strengths of cloud-based systems is that they can be administered from anywhere and don\u0026rsquo;t require anyone to physically go to a data centre to replace failed hardware or check that network connections are, in fact, live. We\u0026rsquo;ve been trying to move to the cloud as fast as we can. All new services that we build are cloud-based. At the same time we\u0026rsquo;ve been moving systems out of the data centre - starting with those that put the biggest load on our systems. To further aid this process we have budgeted to add an FTE to the infrastructure team in 2022.\nWhat is really painful about this event is that we had just completed the last bit of work we needed to do before changing our traffic routing so that it would hit the cloud first instead of the data centre first. This would not have avoided the outage we just experienced, but it would have made it a bit less severe.\nWhat is even more painful is that we had recently installed a third network connection with an entirely different provider because we were worried about just this kind of situation. But this third connection wasn’t yet active.\nWe already have a long list of tickets that we’ve created to address problems we faced in recovering from this outage. The list will undoubtedly grow as we complete a postmortem over the next few days. I will report back when we have more detail of what happened and have a solid plan for how to avoid anything similar in the future.\nWe know that an outage of this severity and duration has caused a lot of people who depend on our services extra work and anxiety. 
For this, we apologise profusely.\nBut at least we didn’t need to use an angle grinder.\n", "headings": ["Why did we have such a complete outage and why did it take us so long to fix it?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/events/", "title": "Webinars and events", "subtitle":"", "rank": 1, "lastmod": "2021-10-01", "lastmod_ts": 1633046400, "section": "Webinars and events", "tags": [], "description": "We attend, speak at, and participate in a number of community events held by organizations in the scholarly community each year––held online and in person––such as conferences, workshops, hack days, and seminars. You can also attend our regular online events which are a great way to find out more about the various Crossref services.\nWe also host our events under a program known as \u0026ldquo;Crossref LIVE.\u0026rdquo; We\u0026rsquo;ve recently changed our emphasis from broadcast/informational to learning/listening to our membership community.", "content": "We attend, speak at, and participate in a number of community events held by organizations in the scholarly community each year––held online and in person––such as conferences, workshops, hack days, and seminars. You can also attend our regular online events which are a great way to find out more about the various Crossref services.\nWe also host our events under a program known as \u0026ldquo;Crossref LIVE.\u0026rdquo; We\u0026rsquo;ve recently changed our emphasis from broadcast/informational to learning/listening to our membership community. Crossref LIVEs are free to attend and open to all who want to learn more about our services, with a program tailored to each country or segment of the community. At these events –– held online and in-person –– we also build time to have conversations with attendees and get feedback on what we\u0026rsquo;re doing right and what could be better.\nUpcoming events (in person/online) Events with a ⭐️ are wholly or partly hosted by Crossref (listed in order of event date).\nTable view Got a suggestion for an event (online/in-person)? We\u0026rsquo;d like to hear from you! You are welcome to suggest topics we should explain/collaborate on/demonstrate that you think would be helpful as a Crossref member or to the community.\nThe annual meeting archive Find out more about our annual meeting or browse our archive of annual meetings from 2001 through to last year. 
It’s a real trip down memory lane!\nPast recordings and events +- Events in Arabic\rMetadata webinar series with Knowledge E - Recordings playlist\n#1) Slides - September 21, 2022 #2) Slides - October 5, 2022 #3) Slides - October 19, 2022\nندوه عن كيفية استخدام كروس مارك باللغة العربية | Crossmark How-To Arabic webinar - September 15, 2020 Slides, Recording Crossref LIVE Arabic - May 11, 2020 - Recording - Slides Reference Linking \u0026amp; Cited-By - November 8, 2018 - Slides, Recording Getting Started with Content Registration - September 17, 2018 Slides, Recording Introduction to Crossref - August 6, 2018 Recording +- Events in Bangladesh\rCrossref Live Bangladesh - An Introduction to Crossref Bangladesh - May 2, 2023 - Recording +- Events in Chinese\rCrossref Live Chinese网络研讨会——Crossref简介 - October 14, 2021 - Slides, Recording +- Events in Indonesian\rCrossref Jakarta 2024 - August 27, 2024\nCrossref LIVE Indonesia webinar series: Introduction to Crossref, Content Registration, The Value of Crossref metadata - July 13 - 15 - Online Recordings\n+- Events in Korean\rCrossref LIVE Korea - June 17 2020 - Online (in Korean) - Recording - Slides\n+- Events in Nepal\rCrossref Live Nepal - An Introduction to Crossref - May 2, 2023, Recording +- Events in Portuguese\rMantenha seus DOIs atualizados: A importância dos metadados (em Português) - March 17, 2022, Recording and Slides Introdução ao registro de livros no Crossref (Introduction to Registering Book Content Webinar, Brazilian Portuguese) - November 17, 2021 - Recording and Slides Introduction to Crossmark/Crossmark: O que é e como usar - October 19, 2020 Recording Melhores Práticas para Registro de Conteúdo/Crossref Content Registration - October 7, 2020 Slides, Recording\nCrossref LIVE Brazil - June 25 2020 - Online (in Portuguese) - Recording - Slides Introduction to Participation Reports webinar - October 30, 2019 - Slides, Recording Reference Linking \u0026amp; Cited By webinar - April 16, 2019 Recording Registering content and adding to your Crossref metadata - November 26, 2018 Slides, Recording Introduction to Similarity Check webinar - April 10, 2018 Slides, Q\u0026amp;A, Recording Funder Registry - in Portuguese - September 26, 2016 Slides Getting started with Content Registration: Portuguese Webinar - September 5, 2017 - Slides, Recording +- Events in Russian\rUnderstanding Crossref reports - in Russian - April 18, 2019 Slides, Recording Introduction to Crossmark - in Russian - December 6, 2018 Slides, Recording Crossref and OJS - in Russian - November 22, 2018 Recording +- Events in Spanish\rMantener sus DOI actualizados: la importancia de los metadatos (en español) - April 6, 2022 - Recording and Slides Crossmark (en español) - September 30, 2021 - Recording Registro y actualización de contenido en Crossref / Register and update content in Crossref - October 1, 2020 Slides, Recording Crossref LIVE Spanish - May 19, 2020 - Recording, Slides Reference linking and Cited-by - in Spanish - November 7, 2018 Slides, Recording Introduction to Crossref and Content Registration - in Spanish - October 24, 2018 Slides, Recording\n+- Events in Turkish\rCrossref LIVE Turkey - February 18, 2020 - Online (in Turkish) - Recording Introduction to Crossref: Turkish Seminar November 2, 2017 Recording +- Events in Ukrainian\rParticipation Reports webinar in Ukrainian Learn more +- Getting started\rGetting started as a new Crossref member Get started with Reference Linking - May 23, 2018 Slides, Recording Getting started with looking up metadata - 
March 8, 2018 Slides, Recording OpenCon Oxford 2020 Recording Get started with Content Registration - October 17, 2017 Slides, Recording +- How-to\rParticipation Reports - October 7, 2020 - Slides, Recording The ins and outs of our Admin Tool - March 5, 2020 - Recording Crossmark how-to - May 15, 2018 Slides, Recording Crossref Cited-by how-to - June 13, 2017 - Slides, Recording +- About our services\rParticipation Reports and Reference Metadata (Indonesian time zone) - October 12 Recording Similarity Check iThenticate v2 - October 6, 2022 Recording Introduction to Similarity Check - April 30, 2020 Slides, Recording Research infrastructure with and for funders - September 6, 2019 Recording Crossref Cited-by - June 8, 2017 Slides, Recording Introduction to Crossmark - November 21, 2017 latest version: January 18, 2018 Slides, Recording Content Registration maintaining metadata - May 17, 2017 Slides, Recording Funding data and the Funder Registry - April 4, 2017 Slides, Recording Similarity Check Members update - March 02, 2017 Recording (must register first to view) Crossmark update - February 23, 2017 Slides, Recording Similarity Check update - September 20, 2016 Slides +- Other\rMaking the world a PIDder place: it’s up to all of us! (Co-hosted by DataCite, Crossref, ORCID \u0026amp; ROR) - Sep 22, 2021 Recording, Slides How to manage your metadata with Crossref - November 18, 2020 Slides, Recording Participation Reports webinar - October 7, 2020 Slides, Recording Finding your way with Crossref: getting started \u0026amp; additional services - September 2, 2020 Slides (/pdfs/finding-your-way-with-crossref-getting-started-and-additional-services-sep2-2020.pdf), Recording Not just identifiers: why Crossref DOIs are important - September 2, 2020 Slides, Recording Getting started with books at Crossref - July 22, 2020 Slides, Recording Introduction to ROR - April 29, 2020 Recording Proposed schema changes - have your say - January 2, 2020 Slides, Recording Using ORCID in publishing workflows - September 16, 2019 Recording Crossref and DataCite joint data citation webinar - February 4, 2019 Slides, Recording Where does publisher metadata go and how is it used? 
- September 11, 2018 Webinar Slides: Laura Wilinson, Webinar Slides: Anna Tolwinska, Webinar Slides: Stephanie Dawson, Webinar Slides: Pierre Mounier, Recording Maintaining your metadata - April 24, 2018 Slides, Vimeo Recording, Recording Preprints and scholarly infrastructure - January 30, 2017 Slides, Recording Beyond OpenURL: Making the most of Crossref metadata - July 12, 2017 Slides, Recording +- Past LIVEs\rCrossref Annual Meeting \u0026amp; Board Election #CRLIVE22 - October 26, 2022 - Recording, Vimeo Recording, Slides, Slides PDF, Recording transcript, Posters from community guest speakers, Zoom Q\u0026amp;A transcript\nCrossref community call: the Research Nexus, Google slides, pdf slides - June 14, 2022 Crossref LIVE21 Annual Meeting \u0026amp; Board Election #CRLIVE21 - November 9 - Online Recording, Slides\nThe Benefits of Open Infrastructure (APAC time zones) - October 29, 2021 Recording, Slides\nSE Asia webinar series - September 2 - 5, 2020 Recording\nCrossref LIVE UK/EU - October 8 2020 - Online - Recording - Slides Crossref LIVE US/Canada - October 6 2020 - Online - Recording - Slides\nCrossref LIVE Brazil - June 25 2020 - Online (in Portuguese) - Recording - Slides\nCrossref LIVE Korea - June 17 2020 - Online (in Korean) - Recording - Slides Crossref LIVE Arabic - May 11 2020 - Online (in Arabic) - Recording - Slides\nCrossref LIVE Spanish - May 19 2020 - Online (in Spanish) - Recording - Slides\nCrossref LIVE CA - September 2019 - Oakland, CA Crossref LIVE Kuala Lumpur- July 8 2019 - Kuala Lumpur, Malaysia Crossref LIVE Bangkok- July 10 2019 - Bangkok, Thailand Crossref LIVE Bogota - June 4 2019 - Bogota, Colombia Crossref LIVE Kyiv - March 26 2019 - Kyiv, Ukraine Publisher workshop: metadata, Open Access and more in collaboration with the British Library - February 5 2019 - London, UK Crossref LIVE Mumbai - December 5 2018 - Mumbai, India OpenCon satellite event Oxford in collaboration with the Bodleian Library and Centre for Digital Scholarship - November 30 2018 - Oxford, UK Crossref LIVE Goiania - September 18 2018 - Goiania, Brazil Crossref LIVE Fortaleza - September 20 2018 - Fortaleza, Brazil Crossref LIVE Hannover - June 27 2018 - Hannover, Germany Crossref LIVE Ulyanovsk - June 18 2018 - Ulyanovsk, Russia Crossref LIVE Cape Town - April 19 2018 - Cape Town, South Africa Crossref LIVE Pretoria - April 17 2018 - Pretoria, South Africa Crossref LIVE Tokyo - February 14 2018 - Tokyo, Japan OpenCon satellite event Oxford in collaboration with the Bodleian Library and Centre for Digital Scholarship - December 1 2017 - Oxford, UK Crossref LIVE Yogyakarta - November 20 2017 - Yogyakarta, Indonesia Crossref LIVE Turkey - October 26 2017 - Online event Crossref LIVE London - September 26 2017 - London, UK Joint Global Infrastructure Conference (ORCID/Crossref/DataCite) - June 15 2017 - Seoul, South Korea Crossref LIVE Seoul - June 12 2017 - Seoul, South Korea Crossref LIVE Boston - May 30 2017 - Boston, MA, USA Crossref/THOR Outreach Meeting - April 24 2017 - Warsaw, Poland Crossref LIVE Beijing - March 30 2017 - Beijing, China Crossref LIVE Brazil - December 16 2016 - São Paulo, Brazil Asia Pacific community webinar - December 14, 2016 Slides Crossref LIVE Brazil - December 13 2016 - Campinas, Brazil Crossref LIVE DC - June 16 2016 - Washington DC, USA Crossref South Africa Workshop on Good Practice Publishing - September 1 2015 - Pretoria, South Africa Crossref International Workshop - July 7 2015 - Amsterdam, The Netherlands Crossref International Workshop - June 11 
2015 - Vilnius, Lithuania Crossref International Workshop - April 29 2015 - Shanghai, China Crossref International Workshop - March 3 2014 - Barcelona, Spain\nPast community events +- 2025 events\rHow good is your metadata? A tour of the Crossref Participation Reports tool - February 6, 2025\n+- 2024 events\rCrossref 2024 Annual Meeting and Board Election - October 29, 2024\nServicios de Crossref para revistas y libros y los beneficios del DOI: CRECS – 2024 (Arequipa-Perú) - May 10, 2024\nCrossref community call: The shape of things to come - May 08, 2024\nConexiónCrossref Bogotá24 - February 29, 2024\n+- 2023 events\rNational Science Library Chinese Academy of Sciences - December 14 - 15 - Zhuhai, China Crossref Services for Librarians and Journal Editors - November 29\nAfricArXiv Open Science Webinar Series\nFeria Internacional del Libro de Guadalajara (Guadalajara International Book Fair) - November 25 - December 3\nCrossref for Research Scholars and Librarians, India - November 18\nFORM Community Development Series: Metadata Makes Open Research Better - November 15 2023 Charleston Conference - In Person: November 6 – 10; Online: November 27 – December 1\nBetter together webinar series together with DataCite and ORCID - November 1 Crossref Annual Meeting and Board Election #CRLIVE23 - October 31\nOpen Science and Innovation in Ukraine - October 26 - 27 Lithuanian Periodical Press conference - October 20\nCUUL Annual Research Dissemination Conference - October 19 - 20\nBetter together webinar series together with DataCite and ORCID - September 28, November 1\nOpen Access Days 2023 - September 27 - 29 - Berlin, Germany\nSciELO Network Meeting - September 25 - 26\nIntroduction to Crossref-Tanzania - September 26\nCrossref and Retraction Watch - September 27\nWorkshop on Open Citations and Open Scholarly Metadata 2023 - September 26 - 27\nBetter together webinar series together with DataCite and ORCID - September 29 Presentation\nAltum Funder Forum - September 20 - Alexandria, VA\nKenya National Open Science Dialogue - September 20 - Kenya\nEIFL Lithuania - September 20\nOASPA Online Conference on Open Access Scholarly Publishing 2023 - September 19 - 21 COPE: Navigating Corrections, Retractions, and Expressions of Concern: workshop - September 19\n2023 EIFL General Assembly - September 18 - 20\nALPSP - September 13 - 15 - Manchester, UK\nMake Data Count summit - September 12 - 13\nPKP Software Sprint 2023 - September 11 - Hannover, Germany Janeway Conference - September 8\nSheffield\u0026rsquo;s OpenFest - September 7\nDigital Preservation Coalition\u0026rsquo;s PID - September 6\nBetter Together: Improving Access to the Global Scholarly Record: Recording, Presentation - August 30 SciELO 25 Years Conference - July 28 ISMTE Global Event - July 18 - 20\nISSI 2023 - July 2 - 5 - Bloomington, IN\nJoint Conference on Digital Libraries (JCDL) - June 26 - 30\nSSP Virtual 5K Run, Walk, and Roll Returns for Second Year #SSP5K - June 10\nOpen Repositories in South Africa - June 12 - Stellenbosch, South Africa EASE Conference 2023 - June 1 - 3 - Istanbul\nSociety for Scholarly Publishing (SSP) 2023 Conference - May 31 - June 2 - Portland, OR\nAfLIA/SPARC Africa Post-conference Workshop on Library Publishing - May 
28\nGetting started at Crossref - an introduction for new members including a review of PKP\u0026rsquo;s OJS Plug-in - May 26\nRegistering content with Crossref - Bangladesh - May 11 Metascience Conference 2023 - May 10 - Washington, DC Metadata connects the global community: a mid-year update - May 10\nIntroduction to Crossref - Bangladesh - May 9 An Introduction to Crossref - Nepal - May 9 Crossref Community update: metadata connects the global community Recordings and presentation - May 3\ncsv,conf,v7 2023 Conference - April 19-20\nFORCE2023 - April 18 - 20\nUKSG - April 13 - 15 - Glasgow, UK\nRDA 20th Plenary Meeting - Gothenburg (Hybrid) - March 21 - 23\nNISO Plus 2023 - February 14 - 16\nAPE 2023 - January 10 - 11 - Berlin\n+- 2022 events\rMunin Conference Panel on Open Identifiers\nBetter Together: Complete Metadata as Robust Infrastructure (APAC region) - Recording - November 28\nFeria Internacional del Libro de Guadalajara ( Guadalajara International Book Fair) - November 28-30\nISMTE 2022 Global Event - November 1\n2022 Charleston Conference - November 2 - 5\n2022 SSP Regional Meetup - October 27 - Oxford, UK Crossref Annual Meeting \u0026amp; Board Election #CRLIVE22 (CRLIVE22), Online - October 26\nHow can additional Crossref services help your organization? - October 24 Frankfurt Book Fair 2022, Stand M5, Hall 4.2 - October 17 - 21\nCrossref and Metadata: Advanced Insights - October 12 Better Together: Facilitating FAIR Research Output Sharing (APAC time zones) (with ORCID, Crossref, and DataCite) Recording, Slide deck ABEC (Associação Brasileira de Editores Científicos) Annual Meeting - Oct 4 Crossref and Metadata: An Introduction - September 27 Pubmet - Sept 14 - 16 - Online\nALPSP Annual Conference and Awards 2022 - September 14 - 16 - In person\nOASPA Online Conference on Open Access Scholarly Publishing - September 20 - 22 - Online\nDataCite Annual Member Meeting - September 22 - Online\nEurope PMC AGM - September 22 - London, UK\nPlagiarism detection in the evolving publishing landscape: Best practices for journals - September 22 - Online\nBetter Together: Open new possibilities with Open Infrastructure (APAC time zones)(with ORCID, Crossref, and DataCite) - June 27, 2022 Data Policy IG: Exploring features and improving standards for data availability statements - June 21, 2022 Crossref community call: the Research Nexus, June 14, 2022 - Recording: Western time zone, Recording: Eastern time zone, Recording transcript, Zoom Q\u0026amp;A transcript, Google slides, PDF, Posters from community guest speakers\nWorking with Scholarly APIs: A NISO Training Series - May 12 - Online\nALPSP Climate change: Practical steps to take action - March 16 - Online\nNISO Plus 2022: Global Conversations: Global Connections - February 15 - 17 - Online\nParis Open Science European Conference - February 4 - 6 - Online\n+- 2021 events\rFORCE2021 - December 7 - 9 - Online\nIntroduction to Registering Book Content Webinar, Brazilian Portuguese - November 22 Crossref Annual Meeting LIVE21 presentation - November 17 Crossref LIVE21 - November 9 - Online\nKnowledgeE OA week - Towards a more Knowledgeable world: Open Access research in MENA - October 28 - Online\nISMTE 2021 Global Virtual Event - October 11 - 14, 2021 - Online SSP New Directions Seminar - October 6 - 7, 2021 - Online\nARMA Annual Conference 2021 - October 6 - 7, 2021 - Online\nBeilstein Open Science Symposium 2021 - October 5 - 7, 2021 - Rüdesheim, Germany RORing-at-Crossref community webinar - September 29, 2021 - Online\nPeer 
Review Week 2021 - September 20 - 24, 2021 - Online\nOASPA conference 2021 - September 21, 2021 - Online The 25th International Conference on Science, Technology and Innovation Indicators, STI 2021 - September 15, 2021 - Online STI 2021 Conference - September 13 - 17, 2021 - Online Korean Council of Science Editors 10th anniversary conference - September 8, 2021 - Online OAI12 - September 6 - 10, 2021 - Online The Geneva Workshop on Innovations in Scholarly Communication - September 6 - 10, 2021 - Online\nASAPBio #feedbackASAP - July 21, 2021 - Online\nCrossref LIVE Indonesia webinar series - July 13 - 15, 2021 - Online - Recordings PKP Annual Meeting 2021 - June 18, 2021 - Online Japan Open Science Summit 2021 - June 15, 2021 - Online EOSC Symposium 2021 Programme - June 15, 2021 - Online\nCrossref update: The Road Ahead (Eastern timezones) - June 9, 2021 - Online Recording, Slides Crossref update: The Road Ahead (Western timezones timezones) - June 8, 2021- Online Recording, Slides\nSSP 2021 Virtual Meeting - May 24 - 27, 2021 - Online\nUNAM webinar: Infraestructura Académica Abierta: uso y explotación de metadatos - May 13, 2021 (Online)\nLibrary Publishing Forum - May 10 - 14, 2021 - Online\nEARMA Digital Event - Global grant identifiers: building a richer picture of research support - May 6, 2021 - Online\nLIS-Bibliometrics 2021 - May 5, 2021 - Online\nLos Metadatos Para la Comunidad de Investigacion - May 4, 2021 - Online - Recording - Slides\nJATS-con 2021 - April 27 – 29, 2021 - Online\nRDA 17th Plenary Meeting - April 20 - 23, 2021 - Online\nEARMA 2021 - April 14 - 20, 2021 - Online\nUKSG - April 12 - 14, 2021 - Online\nNORF Open Research in Ireland webinar: Infrastructures for Open Research - March 30, 2021 - Online STM Research Data - March 22, 2021 - Online\nMozfest 2021 - March 16, 2021 - Online\nEstablishing Open \u0026amp; FAIR Research Data: Initiatives and Coordination - March 22, 2021 - Online\nNAS Journal Summit - March 22, 2021\nPIDapalooza - January 28, 2021 - Online\nAcademic Publishing in Europe Nr. 
16 (APE) (https://www.ape2021.eu/) - January 12 - 13, 2021 - Online\n+- 2020 events\rOpenCon Oxford 2020 in collaboration with the Bodleian Library - December 4, 2020 - Online Crossref LIVE20 - November 10, 2020 - Online\nScholix Working Group: stakeholder uptake and next steps for article/data linking - November 12, 2020 - Online\nFrankfurt Book Fair 2020 - October 14 - 18, 2020 - Online\nOASPA 2020 Conference - September 21-24, 2020 - Online\nPlatform Strategies 2020 - September 23 - 24, 2020 - New York, NY\nABEC Annual Meeting 2020 - September 22-25, 2020 - Online\nCrossref Live (US timezones) - October 6, 2020 - Online\nWorkshop on Open Citations and Open Scholarly Metadata 2020 - September 9, 2020 - Online\nPUBMET2020 - September 16-18, 2020 - Online\nPIDapalooza - January 29-30, 2020 - Lisbon, Portugal\nROR Community Meeting - January 28, 2020 - Lisbon, Portugal ASAPbio January 2020 workshop: A Roadmap for Transparent and FAIR Preprints in Biology and Medicine - January 20, 2020 - Hinxton, UK Academic Publishing in Europe (APE) - January 13-14, 2020 - Berlin, Germany NISO Plus - February 23-25, 2020 - Baltimore, USA SocietyStreet (virtual) - March 26, 2020 - Washington, DC\nER\u0026amp;L - March 8 - 11, 2020 - Austin, TX CASE (webinar) - August 21, 2020 - Online\n+- 2019 events\rOpenCon Oxford Satellite - December 6, 2019 - Oxford, UK SPARC Africa Open Access Symposium 2019 - December 4-7, 2019 - Cape Town, South Africa STM Week - December 3-4, 2019 - London, UK\nMunin Conference on Scholarly Publishing - November 27-28, 2019 - Tromsø, Norway SpotOn London - November 21, 2019 - London, UK euroCRIS Strategic Membership Meeting - November 18-20, 2019 - Münster, Germany 7th Annual PKP Conference - November 18-20, 2019 - Barcelona, Spain Crossref Annual Community Meeting #CRLIVE19 - November 13-14, 2019 - Amsterdam, Netherlands 2LATmetrics conference - November 4-6, 2019 - Cusco, Perú Charleston Library Conference - November 4-8, 2019 - Charleston, SC RDA - October 23-25, 2019 Helsinki, Finland FORCE2019 - October 16-17, 2019 - Edinburgh, UK Frankfurt Book Fair 2019 - October 16-20, 2019 - Frankfurt, Germany 6:AM Altmetrics Conference - October 8-11, 2019 - Stirling, UK ISMTE Europe - October 3, 2019 - Oxford, UK Transforming Research - September 26-27, 2019 - Washington, DC Silverchair Platform Strategies - September 25-26, 2019 - New York, NY European Research Innovation Days - September 24-26, 2019 - Brussels, Belgium PubMet - September 18-20, 2019 - Zadar, Croatia ARMS 2019 - September 17-20, 2019 - Adelaide, South Australia iPres - September 16-20, 2019 - Amsterdam, Netherlands ALPSP 2019 - September 11-13, 2019 - Berkshire, UK 17th International Conference on Scientometrics \u0026amp; Informetrics (ISSI2019) - September 2-5, 2019 - Rome, Italy RDA UK Workshop - July 16, 2019 - London, UK OAI11 - June 19-21, 2019 - Geneva, Switzerland ARMA conference, 2019 - June 17-18, 2019 - Belfast, Northern Ireland 4th Regional Meeting of Academic Journal Editors - June 5-7, 2019 - Medellín, Colombia International Conference on Electronic Publishing (ElPub) 2019 - June 2-4 - Marseille, France CALJ Annual Conference 2019 - June 1-2, 2019 - Vancouver, BC SSP 41st Annual Meeting - May 29-31, 2019 - San Diego, CA iAnnotate 2019 - May 22-23, 2019 - Washington, DC JATS-Con 2019 - May 21, 2019 - Cambridge, UK Library Publishing Forum - May 8-10, 2019 - Vancouver, BC 8th International Scientific and Practical Conference - April 23-26, 2019 - Moscow, Russia STM US Annual Conference 2019 - April 
11-12, 2019 - Washington, DC UKSG 42nd Annual Conference and Exhibition - April 8-10, 2019 - Telford, UK Metrics in Transition Workshop 2019 - March 27-28, 2019 - Göttingen, Germany\nThe London Book Fair 2019 - March 12-14, 2019 - London, UK IFLA SIG on Library Publishing 2019 Midterm Meeting - February 28 - March 1, 2019 - Dublin, Ireland AAP/PSP 2019 Annual Conference - February 6-8, 2019 - Washington, DC Publisher workshop: metadata, Open Access and more - February 5, 2019 - London, UK PIDapalooza 2019 - January 23-24, 2019 - Dublin, Ireland APE 2019 - January 16, 2019 - Berlin, Germany SSP Pre-Conference - January 15, 2019 - Berlin, Germany\n+- 2018 events\rCoko London - December 5 - London, UK\nSTM Week 2018 Tools and Standards - December 4 - London, UK\nOpenCon Oxford - November 30 - Oxford, UK\nMunin Conference on Scholarly Publishing - November 28 - 29 - Tromsø, Norway\nCrossref LIVE18 #CRLIVE18 - November 13 - 14 - Toronto, Canada\nSpotOn London - November 3 - London, UK\nWorkshop on Research Objects - October 29 - Amsterdam, Netherlands\nRDA 12th Plenary Meeting - October 26 - Gaborone, Botswana\nSciDataCon 2018 - October 22 - 26 - Gaborone, Botswana\nFrankfurt Book Fair - October 10 - 12 - Frankfurt, Germany\nFORCE2018 - October 10 - 12 - Montreal, Canada\nTransforming Research - October 3 - 4 - Providence, RI\nSciELO 20 Years - September 26 - 28 - São Paulo, Brazil\n5:AM London 2018 - September 25-28 - London, UK\nCOASP 2018 Conference - September 19 - Vienna, Austria\nDublin Core Metadata Initiative - September 10 - 13 - Porto, Portugal\nOpenCitations workshop - September 3 - 5 - Bologna, Italy\nEuropean Association of Science Editors (EASE) - June 7 - 10 - Bucharest, Romania\nINORMS Conference - Jun 6 - 8 - Edinburgh, UK\nSSP 40th Annual Meeting, Booth #212A - May 30 - June 1 - Chicago, IL, USA\nLibrary Publishing Forum - May 21 - 23 - Minneapolis, MN, USA\n3er Congreso Internacional de Editores Redalyc - May 16 - 18 - Trujillo, Peru\nCSE 2018 Conference - May 5 - 8 - New Orleans, LA, USA\nMIT Better Science Ideathon - April 23 - Cambridge, MA, USA\nComputers in Libraries 2018 - April 17 - 19 - Arlington, VA, USA\nEARMA Conference 2018 - April 16 - 18 - Brussels, Belgium\nInternational Publishing Symposium - April 12 - 13 - Oxford, UK\nUKSG 2018 - April 9 - 11 - Glasgow, Scotland\nISMTE 2018 Asia Pacific Conference - March 27 - 28 - Singapore\nNFAIS 2018 Annual Conference - February 28 - March 2 - Alexandria, VA, USA\nResearcher to Reader - February 26 - 27 - London, UK\nASAPBio Meeting - February 7 - 9 - Chevy Chase, Maryland, USA\nLIS Bibliometrics - January 30 - London, UK\nPeer Review Transparency Workshop - January 24 - Cambridge, MA, USA\nPIDapalooza - January 23 - 24 - Girona, Catalonia, Spain\n+- 2017 events\rAGU - December 14 - 15 - New Orleans, LA, USA\nSTM Innovations Seminar 2017 - December 6 - London, UK\nCrossref #LIVE17 - November 14 - 15 - Singapore\nXUG eXtyles User Group Meeting - November 2-3 - Cambridge, MA, USA\nDublin Core 2017 - October 26-29 - Washington, DC, USA\nFORCE 2017 - October 25-27 - Berlin, Germany\nFrankfurt Book Fair, #FBM17 - October 11-15 - Frankfurt, Germany\nAltmetrics Conference 4:AM - September 26 - 29 - Toronto, Canada\nCOASP 2017 - September 20 - 21 - Lisbon, Portugal\nALPSP Conference - September 13 - 15 - Netherlands\nISMTE North American Conference - August 10-11 - Denver, CO, USA\nLIBER 2017 Conference - July 5-7 - Patras, Greece\nALA Annual Conference (ALCTS CRS) - June 25 - Chicago, IL, USA\nEMUG 2017 - June 22-23 - Boston, MA, 
USA\nORCID Identifiers and Intellectual Property Workshop - June 22 - Paris, France\n32nd Annual NASIG Conference 2017 - June 8-11 - Indianapolis, IN, USA\nSSP 39th Annual Meeting - Booth 500A - May 31-June 2 - Boston, MA, USA\n5th World Conference on Research Integrity - May 28 - 31 - Amsterdam, The Netherlands\nWikiCite 2017 - May 23-25 - Vienna, Austria\nCSE 2017 Annual Meeting - May 20-23 - San Diego, CA, USA\n2017 ScholarONE User Conference - May 3-4 - Madrid, Spain\nUKSG 2017 Annual Conference - April 10-12 - Harrogate, UK\nHighwire Spring Publishers Meeting - April 4-6 - Stanford, CA, USA\nCNI Spring 2017 Membership Meeting - April 3-4 - Albuquerque, NM\nISMTE 2017 Asian-Pacific Conference - March 27-28 - Beijing, China\nCOPE China Seminar 2017 - March 26 - Beijing, China\nACRL 2017 Conference - March 22-25 - Baltimore, MD, USA\n#FuturePub 10 - New Developments in Scientific Collaboration Tech - March 13 - London, UK\nResearch Libraries UK (RLUK) - March 8-10 - London, UK\nPSP 2017 Annual Conference - February 1-3 - Washington, DC, USA\n+- 2016 events\rSTM Digital Publishing Conference – December 6-8 – London, UK\nOpenCon 2016 – November 12-14 – Washington DC, USA\nPIDapalooza – November 9-10 – Reykjavik, Iceland\nFrankfurt Book Fair – October 19-23 – Frankfurt, Germany (Hall 4.2, Stand 4.2 M 85)\nORCID Outreach Conference – October 5-6 – Washington DC, USA\n3:AM Conference – September 26 – 28 – Bucharest, Romania\nOASPA – September 21-22 – Arlington VA, USA\nALPSP – September 14-16 – London, UK\nSciDataCon – September 11-17 – Denver CO, USA\nVivo 2016 Conference – August 17-19 – Denver CO, USA\nACSE Annual Meeting 2016 – August 10-11 - Dubai, UAE\nCASE 2016 Conference – July 20-22 - Seoul, South Korea\nSHARE Community Meeting - July 11-14 - Charlottesville, VA, USA\nSee also the archive of all our annual meetings and if you’re interested in viewing recordings about our services and more, check out our YouTube channel.\nContact our outreach team if you\u0026rsquo;d like us to participate in an event or meet us at one of the above.\n", "headings": ["Upcoming events (in person/online)","Got a suggestion for an event (online/in-person)?","The annual meeting archive","Past recordings and events","Learn more","Past community events"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2021-board-election/", "title": "2021 Board Election", "subtitle":"", "rank": 1, "lastmod": "2021-09-28", "lastmod_ts": 1632787200, "section": "Blog", "tags": [], "description": "We are pleased to share the 2021 board election slate. Crossref’s Nominating Committee received over 60 submissions from members worldwide to fill five open board seats. 
It was a fantastic group of applicants and showed the strength of our membership community.\nThere are five seats open for election (three small, two large), and the Nominating Committee presents the following slate.\nThe 2021 slate Candidate organizations, in alphabetical order, for the Small category (three seats available):\nCalifornia Digital Library, University of California, Lisa Schiff Center for Open Science, Nici Pfeiffer Melanoma Research Alliance, Kristen Mueller Morressier, Sebastian Rose NISC, Mike Schramm Candidate organizations, in alphabetical order, for the Large category (two seats available):\nAIP Publishing (AIP), Penelope Lewis American Psychological Association (APA), Jasper Simons Association for Computing Machinery (ACM), Scott Delman Here are the candidates\u0026rsquo; organizational and personal statements You can be part of this important process by voting in the election If your organization is a voting member in good standing of Crossref as of September 20, 2021, you are eligible to vote when voting opens on September 29, 2021.\nHow can you vote? On September 29, 2021, your organization\u0026rsquo;s designated voting contact will receive an email with the Formal Notice of Meeting and Proxy Form with concise instructions on how to vote. You will also receive a user name and password with a link to our voting platform.\nThe election results will be announced at the LIVE21 online meeting on November 9, 2021. Save the date!\n", "headings": ["The 2021 slate","Here are the candidates\u0026rsquo; organizational and personal statements","You can be part of this important process by voting in the election","How can you vote?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/similarity-check-news-ithenticate-v2.0-ready-for-launch/", "title": "Similarity Check news: iThenticate v2.0 ready for launch", "subtitle":"", "rank": 1, "lastmod": "2021-09-20", "lastmod_ts": 1632096000, "section": "Blog", "tags": [], "description": "Crossref Similarity Check news: iThenticate v2.0 ready for launch\nLast year, we announced the upcoming launch of a new version of iThenticate, the product from Turnitin that powers Crossref Similarity Check. We know some of you have been waiting a long time for this upgrade and we are very happy to share with you that we are now ready to release it.\nWe will be rolling out this new version in stages, so not everyone will be able to upgrade to the new version immediately.", "content": "Crossref Similarity Check news: iThenticate v2.0 ready for launch\nLast year, we announced the upcoming launch of a new version of iThenticate, the product from Turnitin that powers Crossref Similarity Check. We know some of you have been waiting a long time for this upgrade and we are very happy to share with you that we are now ready to release it.\nWe will be rolling out this new version in stages, so not everyone will be able to upgrade to the new version immediately. We\u0026rsquo;ll start with new Crossref Similarity Check subscribers who use iThenticate in the browser, and one member who uses iThenticate via the eJournalPress API integration.\nNext month, we will reach out to existing Crossref Similarity Check subscribers who use iThenticate in the browser (rather than through a manuscript tracking system), and further eJournalPress users. 
From then on, we\u0026rsquo;ll be contacting those of you who use Similarity Check through your manuscript tracking system, as and when your providers are ready to work with the new version.\nCrossref Similarity Check - first things first Crossref Similarity Check is a content comparison tool, powered by iThenticate and produced by Turnitin, to check the originality of scholarly works and detect potential cases of plagiarism. Crossref members are eligible for this service, which offers them a reduced rate for document checking (plus enhanced functionality) in exchange for making their own published content available to be indexed into the iThenticate database.\nThe Crossref Similarity Check service continues to grow in membership (1,531 members in 2020; 1,964 members in 2021, to date) and in the number of documents checked (1,922,621 manuscripts checked between January and July 2020 and 2,419,612 over the same period this year).\nJust as with the current version of iThenticate, Crossref Similarity Check subscribers will be able to compare documents against a vast database of internet sources and over 78 million full-text documents contributed by the Crossref members that use the service:\nCrossref - research articles, books, and conference proceedings provided by publishers of scholarly content all over the world Crossref posted content - preprints, eprints, working papers, reports, dissertations, and many other types of content that has not been formally published but has been registered with Crossref Internet - a database of archived and live publicly-available web pages, including billions of pages of existing content, and with tens of thousands of new pages added each day Publications - third-party periodical, journal, and publication content including many major professional journals, periodicals, and business publications from sources other than Crossref Similarity Check members Your Indexed Documents - other documents you have uploaded for checking (within your Crossref Similarity Check user account only, and not added to iThenticate\u0026rsquo;s main indexes) What\u0026rsquo;s new We are delighted to introduce the following new features and enhancements with iThenticate v2.0:\nIncreased document upload capacity Suspicious and hidden character detection Preprint exclusion filter Refreshed and responsive interface Similarity reports - save and share Annotations Content portal Improved API Increased document upload capacity This new version of iThenticate has an increased document upload capacity of up to 800 pages/200 MB and a Google Drive document upload functionality. Please note that per-document fees allow for a maximum of 25,000 words (EDIT 21/11/4: previously stated as characters) as one billable unit (25,001-50,000 words is two billing units, and so on).\nSuspicious or hidden character detection A new \u0026lsquo;Red flag\u0026rsquo; feature, highlighted at the top right hand side of the Similarity report and with in-line markers, signals the detection of hidden text such as text/quotation marks in white font or suspicious character replacement e.g., the substitution of a Latin e for a Cyrillic е or a Latin o for a Greek ο, which may have been deliberately added to avoid text-matching detection.\nRed flag feature: Hidden characters in the iThenticate v2.0 Similarity report\nPreprint exclusion filter Increasingly, authors are making available a preprint of their article, either before or at the same time as submitting it to a journal. 
With Turnitin, we have therefore developed a new exclusion filter for \u0026lsquo;Preprint Sources\u0026rsquo;, which can be applied directly from your Similarity report.\nRefreshed and responsive interface The new iThenticate has a cleaner, more intuitive and accessible interface, with responsive design for ease of use on different screen sizes. The Similarity report is no longer a static image but a text that can be searched, copied and pasted. The display of matches has been improved and simplified with two views only: \u0026lsquo;Sources overview\u0026rsquo; and \u0026lsquo;All sources\u0026rsquo;.\nSimilarity report in iThenticate v2.0\nSimilarity reports - save and share You can now save Similarity reports as a PDF file and share them via email through the iThenticate interface with authors. Please note: this is still work in progress and enhancements to this feature will be released in the coming months.\nAnnotations Annotations in Similarity reports is a brand new feature available in private mode only (in shared folders) in this initial release. Annotations will display the date, time and comments and can be edited or deleted as required. These private annotations will not be included in the \u0026lsquo;save and share\u0026rsquo; features mentioned above. Public, shareable, annotations will be included in a future release.\nPrivate annotations in the new Similarity report\nContent portal The new \u0026lsquo;Content portal\u0026rsquo; is a useful tool to check how much of your own published content has been successfully indexed into the iThenticate database and is now searchable. It will also help you self-diagnose and fix the content that has failed to be indexed.\nImproved API for subscribers who integrate Similarity Check with their manuscript tracking system API users will benefit from a new integration with manuscript tracking systems which will allow the display of the largest matching word count and the top 5 source matches alongside the Similarity score.\nWhat\u0026rsquo;s next We\u0026rsquo;re expecting a number of new features and enhancements to iThenticate version 2.0 as well as further manuscript tracking system API integrations in the coming months:\nUser/usage reporting functionality Editorial Manager API integration Further enhancements to the Similarity report user interface Parent/child account management reporting, to assist Crossref Sponsors Public vs. private annotations Document resubmission flow Customisable welcome email We\u0026rsquo;ll keep you posted We will post updates here as soon as new features, enhancements and API integrations are available and/or we are ready to upgrade the next group of members.\nWe\u0026rsquo;ll be contacting subscribers in stages to upgrade you to the new version, so keep your eyes open for an email from us. As you know, you have to supply full-text Similarity Check URLs in your Crossref metadata for over 90% of your own published content in order to be eligible for the service. We\u0026rsquo;ll be checking that anyone who wants to upgrade to v2.0 is still at 90% or above. You can check this yourself in advance on our eligibility checker - if you\u0026rsquo;ve fallen below 90%, the tool will give you instructions for adding your missing full-text Similarity Check URLs.\nIn the meantime, you will find the Similarity Check service documentation for the current version of iThenticate on our website. 
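A quick aside for members who like to script this check themselves: coverage figures are also visible in member records in our public REST API, so a rough self-check is possible before we contact you. The sketch below is hypothetical; the member ID is a placeholder and the coverage field name is an assumption about the record shape, not a documented contract.
import json
import sys
import urllib.request

member_id = sys.argv[1] if len(sys.argv) > 1 else "1234"  # placeholder member ID
url = f"https://api.crossref.org/members/{member_id}"
with urllib.request.urlopen(url) as response:
    record = json.load(response)["message"]

# "similarity-checking-current" is an assumed coverage key, not a documented contract
coverage = record.get("coverage", {}).get("similarity-checking-current", 0.0)
print(f"{record.get('primary-name')}: {coverage:.0%} of current content has full-text URLs")
print("Looks eligible" if coverage >= 0.9 else "Below the 90% threshold")
The eligibility checker remains the authoritative source; treat this only as a quick approximation.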
The documentation for the new version can be found on the Crossref Similarity Check site provided by Turnitin.\n✏️ Do get in touch via support@crossref.org if you have any questions or suggestions or start a discussion on our Community Forum\n", "headings": ["Crossref Similarity Check - first things first","What\u0026rsquo;s new","Increased document upload capacity","Suspicious or hidden character detection","Preprint exclusion filter","Refreshed and responsive interface","Similarity reports - save and share","Annotations","Content portal","Improved API for subscribers who integrate Similarity Check with their manuscript tracking system","What\u0026rsquo;s next","We\u0026rsquo;ll keep you posted"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/lesson-learned-the-hard-way-lets-not-do-that-again/", "title": "Lesson learned, the hard way: Let’s not do that again!", "subtitle":"", "rank": 1, "lastmod": "2021-09-08", "lastmod_ts": 1631059200, "section": "Blog", "tags": [], "description": "TL;DR We missed an error that led to the resource resolution URLs of some 500,000+ records being incorrectly updated. We have reverted the incorrect resolution URLs affected by this problem. And, we’re putting in place checks and changes in our processes to ensure this does not happen again.\nHow we got here Our technical support team was contacted in late June by Wiley about updating resolution URLs for their content. It\u0026rsquo;s a common request of our technical support team, one meant to make the URL update process more efficient, but this was a particularly large request.", "content": "TL;DR We missed an error that led to the resource resolution URLs of some 500,000+ records being incorrectly updated. We have reverted the incorrect resolution URLs affected by this problem. And, we’re putting in place checks and changes in our processes to ensure this does not happen again.\nHow we got here Our technical support team was contacted in late June by Wiley about updating resolution URLs for their content. It\u0026rsquo;s a common request of our technical support team, one meant to make the URL update process more efficient, but this was a particularly large request. Shortly thereafter, we were provided with nearly 1,200 separate files by Atypon on behalf of Wiley in order to update the resolution URLs of ~9 million records. We manually spot-checked over 50 of these files, because, prior to this issue, our technical support team did not have a mechanism to automatically check for errors. That labor-intensive review did not turn up any problems. That is, those 50 samples had none of the header errors that were found later.\nAmong the files we didn’t check, some included headers in which the owning member’s DOI prefix (fromPrefix) and the acquiring member’s DOI prefix (toPrefix) differed. In a URL update request, the prefixes should always be the same.\nAnd still other files included requests to update records with DOIs that had never even been registered. Here are some examples:\nH:email=support@crossref.org;fromPrefix=10.5555;toPrefix=10.5555\n10.5555/doi1 http://www.newurl.com/whatever\n10.5555/doi2 http://www.newurl.com/whatever2\nIn the example above, these fictional DOIs are both under prefix 10.5555. 
Thus, the result of this request will ONLY be that the resolution URLs of DOI 10.5555/doi1 and 10.5555/doi2 are updated in the metadata.\nH:email=support@crossref.org;fromPrefix=10.5555;toPrefix=10.9876\n10.5555/doi1 http://www.newurl.com/whatever\n10.5555/doi2 http://www.newurl.com/whatever2\nIn this second example, these fictional DOIs are both under prefix 10.5555, but because the toPrefix in the header differs from the fromPrefix, the result of this request will be that the resolution URLs of 10.5555/doi1 and 10.5555/doi2 are updated in the metadata AND the owning prefix of both records will be transferred from prefix 10.5555 to prefix 10.9876.\nWe kicked off the URL update request on 30 June and all legitimate DOIs whose files were free of errors were updated by 7 July (yes, it takes about a week to update the resolution URLs for ~9 million records).\nOn 9 July, Peter Strickland of the International Union of Crystallography, one of 22 members affected by this mistake, contacted us to enquire how/why much of their content was resolving to incorrect URLs and why ownership of their content appeared within our search interface to be Wiley. Peter was rightly concerned. We were, too. Our technical support team quickly elevated this issue, because, frankly, this is not the first time our finicky URL update process has caused unwanted metadata updates, albeit not quite at this volume.\nHow we investigated the problem We rallied our internal team. We investigated and initially believed that some ~600,000 DOIs had been erroneously included and updated via the requested 1,200 files. We later extended that estimate to over 1 million DOIs in order to cover other conditions and be as cautious as we could. In the end, we determined that the incorrect files attempted updates of 1,228,041 DOIs. Due to the errors in the files (i.e., erroneous headers and non-registered DOIs), we only actually updated and then reverted 520,512 DOIs. The other 700,000+ DOIs were never updated (because of errors in the original files provided to us) or simply had never been registered with us.\nPrior to this mistake, Crossref had never reverted a member’s metadata update. To be clear, and as I said above, we have had other URL update mistakes over the years, like this one; they were just smaller in scale. We knew there were holes in our process that needed to be plugged. And we knew we needed a better solution for members to manage these updates themselves without our manual intervention. So, while there were mistakes made in the files supplied to us, this was our error and we’re fixing it; more on that below.\nFor this situation, we quickly realized that reversion of the metadata update was the best option for us, although we did not have an existing process in place to execute that reversion. That’s because we only keep the current version of each metadata record. We couldn’t back out of the change; we couldn’t simply restore these records to the metadata registered with us as of late June, because we no longer had an easily accessible, central record of those previous resolution URLs. What we did have was a record of all the previous submissions made against each DOI, so our technical team focused their efforts there. 
We reverted all of the ownership transfers on 9 July and then double- and triple-checked that ownership during the week of 12 July to ensure we didn’t miss anything.\nThe resolution reversion was more complicated. We invested in creating a patch to identify the records that had been updated by our team, and then extract the last legitimate resolution URL registered with us by the owning member in order to revert the metadata for each record. In order to provide confidence that this mistake was contained, we also built a check into the patch to ensure that those DOIs that did have their ownership temporarily transferred were not updated during the few days that ownership was incorrect. That check helped us determine that none of the 520,512 DOIs were incorrectly updated beyond this mistaken URL update request.\nThe technical team built and tested this patch. The tests turned up gaps in the patch, so we refined it during the week of 2021 July 12. We kicked off the reversion of these records on Monday, 19 July at 20:05 UTC and the patch completed all reversions at 20:14 UTC, Thursday, 22 July.\nIn the end, we successfully reverted all of the resolution URLs for those 520,512 DOIs we identified; provided daily updates and apologies to the 22 affected members; together we worked some longer hours; and persevered.\nEd updates everyone internally on the situation and thanks all the people who worked together to resolve the issue\nNext up We don\u0026rsquo;t want this to ever happen again. Like, never. We clearly need to make changes to our internal processes to prevent this in the future.\nHere’s what’s ahead:\nWe are building a checker that we can run URL update files through to automate our checks (see the sketch below). This means we will be able to check every single file in a large batch, rather than relying on manual, labor-intensive spot-checking;\nAs said above, one compounding issue in this mistake was the mismatched from/to prefixes in the file headers. Our technical support team uses the same file headers to transfer ownership/stewardship of a record or set of records between members AND to update resolution URLs. These two tasks are almost never legitimately completed in the same file. That is, there is usually a lag between ownership transfers and resolution URL updates (most members will request an ownership transfer and then a month or two later update their URLs). Because of this, simply decoupling these two tasks (feel free to follow our work at this link) would help eliminate a glaring risk, so we’re working on that too;\nLastly, we’re researching ways we can streamline resource resolution URL updates. You can also monitor our progress on this one. No promises or specifics yet, but we’re eager to reduce toil on our technical support team, avoid problems like this one, and provide members safe and straightforward ways to better update their metadata.\nThanks for the support of the whole Crossref team and our community - and for reading this far!
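To make the automated check described above concrete, here is a minimal sketch of what a URL-update file checker could look like, based only on the example file format shown earlier in this post. It is illustrative, not Crossref's actual tooling; the function name and the specific rules are assumptions.

```python
# Minimal sketch of a URL-update file checker, based on the example format above.
# Illustrative only: check_update_file and its rules are assumptions, not Crossref's real tooling.

def check_update_file(path):
    problems = []
    header = {}
    with open(path) as f:
        for lineno, raw in enumerate(f, start=1):
            line = raw.strip()
            if not line:
                continue
            if line.startswith("H:"):
                # Header lines look like: H:email=...;fromPrefix=10.5555;toPrefix=10.5555
                header = dict(part.split("=", 1) for part in line[2:].split(";"))
                if header.get("fromPrefix") != header.get("toPrefix"):
                    problems.append(
                        f"line {lineno}: fromPrefix and toPrefix differ; "
                        "this would transfer ownership, not just update URLs"
                    )
                continue
            # Data lines look like: 10.5555/doi1 http://www.newurl.com/whatever
            doi, _, url = line.partition(" ")
            if not header:
                problems.append(f"line {lineno}: data line appears before any header")
                continue
            if doi.split("/", 1)[0] != header.get("fromPrefix"):
                problems.append(f"line {lineno}: DOI {doi} is not under fromPrefix {header.get('fromPrefix')}")
            if not url.startswith("http"):
                problems.append(f"line {lineno}: missing or malformed resolution URL for {doi}")
    return problems
```

A real checker would also need to confirm that each DOI in a file is actually registered (for example, by querying the REST API) before any update is attempted, since unregistered DOIs were one of the problems in this incident.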
Never a dull moment\u0026hellip;\n", "headings": ["TL;DR","How we got here","How we investigated the problem","How we fixed all those records","Next up"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/url-updates/", "title": "URL Updates", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/anna-tolwinska/", "title": "Anna Tolwinska", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-conversations-audio-blog-about-helping-open-science/", "title": "Crossref Conversations: audio blog about helping open science", "subtitle":"", "rank": 1, "lastmod": "2021-08-20", "lastmod_ts": 1629417600, "section": "Blog", "tags": [], "description": "Crossref Conversations is an audio blog we\u0026rsquo;re trying out that will cover various topics important to our community. This conversation is between colleagues Anna Tolwinska and Rosa Morais Clark, discussing how we can make research happen faster, with fewer hurdles, and how Crossref can help. Our members have been asking us how Crossref can support open science, and we have a few insights to share. So we invite you to have a listen.", "content": "Crossref Conversations is an audio blog we\u0026rsquo;re trying out that will cover various topics important to our community. This conversation is between colleagues Anna Tolwinska and Rosa Morais Clark, discussing how we can make research happen faster, with fewer hurdles, and how Crossref can help. Our members have been asking us how Crossref can support open science, and we have a few insights to share. 
So we invite you to have a listen.\n[UPDATE: Since this recording ROR IDs are now part of the Crossref schema.]\nHelpful links Here are links to all the sources mentioned in the recording.\nRecording transcript Lots of great information on our blog Send questions to: feedback@crossref.org Let\u0026rsquo;s continue the conversation on our Community Forum Metadata 20/20 - great information about how richer more open metadata can make research happen faster Crossref’s Board votes to adopt the Principles of Open Scholarly Infrastructure (POSI) Helping researchers identify content they can text mine Thanks for listening!\n", "headings": ["Helpful links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/affiliations/", "title": "Affiliations", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/schema/", "title": "Schema", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/some-rip-roring-news-for-affiliation-metadata/", "title": "Some rip-RORing news for affiliation metadata", "subtitle":"", "rank": 1, "lastmod": "2021-07-26", "lastmod_ts": 1627257600, "section": "Blog", "tags": [], "description": "We’ve just added to our input schema the ability to include affiliation information using ROR identifiers. Members who register content using XML can now include ROR IDs, and we’ll add the capability to our manual content registration form, participation reports, and metadata retrieval APIs in the near future. And we are inviting members to a Crossref/ROR webinar on 29th September at 3pm UTC.\nThe background We’ve been working on the Research Organization Registry (ROR) as a community initiative for the last few years.", "content": "We’ve just added to our input schema the ability to include affiliation information using ROR identifiers. Members who register content using XML can now include ROR IDs, and we’ll add the capability to our manual content registration form, participation reports, and metadata retrieval APIs in the near future. And we are inviting members to a Crossref/ROR webinar on 29th September at 3pm UTC.\nThe background We’ve been working on the Research Organization Registry (ROR) as a community initiative for the last few years. Along with the California Digital Library and DataCite, our staff has been involved in setting the strategy, planning governance and sustainability, developing technical infrastructure, hiring/loaning staff, and engaging with people in person and online. In our view, it’s the best current model of a collaborative initiative between like-minded open scholarly infrastructure (OSI) organizations.\nLast year, Project Manager Maria Gould described the case for publishers adopting ROR and ROR was ranked the number one priority at our last in-person annual meeting. 
Now it’s time that Crossref’s services themselves took up the baton to meet the growing demand.\nThe inclusion of ROR in the Crossref metadata will help everyone in the scholarly ecosystem make critical connections more easily. For example, research institutions need to monitor and measure their output by the articles and other resources their researchers have produced. Journals need to know with which institutions authors are affiliated to determine eligibility for institutionally sponsored publishing agreements. Funders need to be able to discover and track the research and researchers they have supported. Academic librarians need to easily find all of the publications associated with their campus.\nEarlier this month, GRID and ROR announced that after working together to seed the community-run Research Organization Registry, GRID would be retiring from public service and handing the proverbial torch over to ROR as the scholarly community’s reliable universal open identifier for affiliations. That means that our members who have been using GRID now need to consider their move to ROR and think about how they can add ROR IDs into the metadata that they manage and share through Crossref.\nThe plan We’ve been able to include ROR IDs for our grant metadata schema as affiliation information for two years, since July 2019. And the Australia Research Data Commons (ARDC) was the first member to add ROR IDs to the Crossref system in 2020. In early July, we completed the work to accept ROR IDs for affiliation assertions for all other types of records with an affiliation or institution element, such as journal articles, book chapters, preprints, datasets, dissertations, and many more.\nNext, we will commence the plans to support ROR in our other tools and services, such as Participation Reports. We’ll work on alignment with the Open Funder Registry and share our plans to collect the information via the new user interface we’re developing for registering and managing metadata. Open Journal Systems (OJS) already has a ROR Plugin, developed by the German National Library of Science and Technology (TIB). This supports the collection of ROR IDs and future releases of this plugin and the OJS DOI plugin will allow including ROR IDs in the metadata sent to Crossref, to support thousands of our members to share ROR IDs via their Crossref metadata. We also aim to add ROR to our metadata retrieval options, including the REST API, which recently saw the start of an unblocking with our move to a more robust technical foundation.\nThe call for participation Many Crossref publishers, funders, and service providers are already planning to integrate ROR with their systems, map their affiliation data to ROR, and include ROR in Crossref metadata. In addition to publishers and funders, libraries, repositories, and other stakeholders are developing support for ROR. For example, the Plan S Journal Checker tool uses ROR IDs to let people check whether a particular journal is compliant with an author\u0026rsquo;s funder and institutional open access policies. In addition, the ROR website shows a growing list of active and in-progress ROR integrations.\nCrossref members registering research grants via Altum’s ProposalCentral system can already add ROR IDs. Now those registering articles, books, preprints, datasets, dissertations, and other research objects, can start including much clearer and all-important affiliation metadata as part of their content registration going forward. 
As with all newly-introduced metadata elements, we recommend adding ROR IDs from now and ongoing, but planning a distinct project to backfill older records. We know that more than 80% of records have been updated and enriched at least once with additional and cleaner metadata, so as members do this routinely, they can include ROR IDs alongside updating URLs, license or funding information, and other metadata.\nFor information on how ROR will be supported in the Crossref metadata, take a look at our latest schema release (version 5.3.0) or in this journal article example XML.\nJoin the discussion in our forum below and register for the Crossref/ROR webinar on September 29th at 3pm UTC to learn all you need to know about incorporating ROR into your Crossref metadata.\n", "headings": ["The background","The plan","The call for participation"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rfp-help-evaluate-the-reach-and-effects-of-metadata/", "title": "RFP: Help evaluate the reach and effects of metadata", "subtitle":"", "rank": 1, "lastmod": "2021-07-21", "lastmod_ts": 1626825600, "section": "Blog", "tags": [], "description": "UPDATE, 14 October 2021:\nWe received several excellent proposals in response to this RFP and we’d like to thank everyone involved for their time and enthusiasm.\nWe are excited to announce the two projects that have been selected, to run through early 2023. Stay tuned!\nWith or Without: Measuring Impacts of Books Metadata\nThis project will test the premise that academic books metadata improves discoverability and usage by assessing the impact of book chapter records with DOIs (unique from metadata associated with the entire book) with associated chapter and book attributes.", "content": "UPDATE, 14 October 2021:\nWe received several excellent proposals in response to this RFP and we’d like to thank everyone involved for their time and enthusiasm.\nWe are excited to announce the two projects that have been selected, to run through early 2023. Stay tuned!\nWith or Without: Measuring Impacts of Books Metadata\nThis project will test the premise that academic books metadata improves discoverability and usage by assessing the impact of book chapter records with DOIs (unique from metadata associated with the entire book) with associated chapter and book attributes. The study aims to prove or disprove its hypothesis and rank metadata attributes by their association with successful content discovery and access. The findings will be considered alongside similar metadata research in order to develop a metadata efficacy framework, which can be used to determine the return on metadata investments by publishers and service providers.\nLettie Y. Conrad and Michelle Urberg, Independent consultants\nMetadata For Everyone\nThis project will explore the metadata quality, consistency and completeness from various individual journals and communities. 
The project will pay special attention to elements that are most likely to vary across cultures, such as names, and to those that are potentially multi-lingual, with the understanding that metadata issues do not affect all communities in the same way.\nJuan Pablo Alperin, Associate Director of Research, Public Knowledge Project \u0026amp; Co-Director, Scholarly Communications Lab\nMike Nason, Scholarly Communications \u0026amp; Publishing Librarian, University of New Brunswick Libraries\nMarco Tullney, Head of Publishing Services \u0026amp; Coordination Open Access at TIB – Leibniz Information Centre for Science and Technology\nWe’re excited (and a little nervous) to launch a new research project designed to assess the effects of metadata on research communications. We’re expecting this effort to be a significant contribution to the existing research on the topic and we’re really looking forward to getting started. We’re also a little nervous because of course we don’t know what the conclusions will be (after all, if we did, we wouldn’t be starting this project).\nAssume nothing It seems logical and very widely accepted that more and better metadata leads to good things. Does it? If so, how, and how do we know that? What does the ‘before and after’ look like when metadata is corrected or enhanced? There are so many questions, so many stakeholders and enough variation around record types (books come to mind) and disciplines (hello citation styles) that the topic warrants all the attention it gets and more. This project is designed to be very broad in scope, sampling from various criteria, and is expected to last about a year.\nInterested in getting involved? If you’re a researcher involved in scientometrics or bibliometrics, or if you’re a consultant with experience in original research, please have a read of the RFP and get in touch with a statement of interest by 1st September or with questions in the meantime. We’re looking for an individual, research group or organization that will work with us over the course of the project to define terms, finalize the approach, analyze the data and communicate the results, whatever they may be.\nRFP responses are requested by 1st September so don’t hesitate to get in touch with questions.\nIf you’re interested in the project but not in responding to the RFP, you may still be able to help. We would appreciate wide circulation of this announcement to help us find qualified respondents to the RFP so please do share this with your network. And, of course, we hope you stay tuned for the outcome of the work. Check back with us on that in about a year\u0026hellip;\n", "headings": ["Assume nothing","Interested in getting involved?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/behind-the-scenes-improvements-to-the-rest-api/", "title": "Behind the scenes improvements to the REST API", "subtitle":"", "rank": 1, "lastmod": "2021-07-06", "lastmod_ts": 1625529600, "section": "Blog", "tags": [], "description": "UPDATE, 24 August 2021: All pools have been migrated to the new Elasticsearch-backed API, which already appears to be more stable and performant than the outgoing Solr API. Please report any issues via our Crossref issue repository in Gitlab.\nUPDATE, 9 August 2021: The cutovers for the polite and Plus pools are delayed again.
We\u0026rsquo;re still working to ensure acceptable performance and stability before serving responses from the new application and infrastructure.", "content": "UPDATE, 24 August 2021: All pools have been migrated to the new Elasticsearch-backed API, which already appears to be more stable and performant than the outgoing Solr API. Please report any issues via our Crossref issue repository in Gitlab.\nUPDATE, 9 August 2021: The cutovers for the polite and Plus pools are delayed again. We\u0026rsquo;re still working to ensure acceptable performance and stability before serving responses from the new application and infrastructure. Each cutover is currently delayed by one more week\u0026ndash;the polite pool is scheduled for 2021 August 17 and the Plus pool is scheduled for 2021 August 24.\nUPDATE, 2 August 2021: The cutovers for the polite and Plus pools are delayed. We\u0026rsquo;ve been mirroring traffic to the new polite pool and want to ensure acceptable performance and stability before serving responses from the new application and infrastructure. Each cutover is currently delayed by one week\u0026ndash;the polite pool is scheduled for 2021 August 10 and the Plus pool is scheduled for 2021 August 17.\nUPDATE, 13 July 2021: The first stage of the cutover is complete, so requests to the public pool are now being served by the new REST API. We took a slightly different approach to performing the cutover, so the \u0026ldquo;Documentation\u0026rdquo; and \u0026ldquo;Temporary domain\u0026rdquo; sections below have been updated.\nOur REST API is the primary interface for anybody to fetch the metadata of content registered with us, and we\u0026rsquo;ve been working hard on a more robust REST API service that\u0026rsquo;s about to go live.\nThe REST API is free to use and it gets around 300 million requests each month (we encourage users to adhere to our etiquette guidelines to keep things running smoothly). It is used for bibliometric studies, by platforms like Dimensions, by organizations like the National Library of Sweden, and to support countless other efforts.\nWe also offer enhanced access to our APIs and other services with Metadata Plus, and we recommend it for production services and others that benefit from guaranteed up-time, a higher rate limit, and priority support from our helpful staff.\nFor a while now, we\u0026rsquo;ve been working to migrate the REST API from Solr to Elasticsearch and from our datacenter to a cloud platform in order to address issues of scalability and extensibility.\nWe\u0026rsquo;re pleased to announce that we\u0026rsquo;ll be cutting over to the Elasticsearch-backed version of the REST API over the next few weeks, beginning July 13. This cutover will occur one pool at a time\u0026ndash;the public pool will be migrated first, followed by the polite pool on August 3, and the Plus pool on August 10 (see \u0026rsquo;etiquette\u0026rsquo; link above if you\u0026rsquo;re unfamiliar with our different pools). Please note updates at the top of this post for changes to the original schedule.\nWe\u0026rsquo;ve thoroughly tested the functionality and performance of the new REST API, and we\u0026rsquo;d like to invite you to test it out before we move production traffic to the new service. Try out your favorite API queries at https://0-api-production-crossref-org.libus.csd.mu.edu/. 
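If you would like to compare the two services side by side while the temporary domain is available, something along the following lines is enough. This is a hedged sketch only: the query, rows value, and mailto address are placeholders rather than anything we prescribe.

```python
# Rough comparison of the current API and the new Elasticsearch-backed API.
# The query parameters below are placeholders.
import requests

params = {"query.bibliographic": "open metadata", "rows": 5, "mailto": "you@example.org"}

for host in ("https://api.crossref.org", "https://api.production.crossref.org"):
    resp = requests.get(f"{host}/works", params=params, timeout=30)
    resp.raise_for_status()
    message = resp.json()["message"]
    print(host, "->", message["total-results"], "total results")
    for item in message["items"]:
        print("   ", item["DOI"], (item.get("title") or [""])[0])
```

As described in the 'Differences in query results' section below, scores and result ordering may differ slightly between the two backends, so exact result-for-result matches are not guaranteed.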
Feature parity, but note a few differences One of our primary objectives was to maintain feature parity between the old and new services, avoiding any breaking changes that might cause problems for existing services integrating with the REST API. We implemented a regression test suite which has given us the confidence to make such a foundational change. During the course of this project, we found it necessary and a good opportunity to make a few modifications. In each case, we analyzed usage and aimed to avoid making any breaking changes. We hope these represent improvements to the behavior and consistency of the REST API.\nThe group-title filter uses exact matching. This filter previously worked but was undocumented and unsupported.\nThe directory filter is deprecated. This was meant to be an experimental, unsupported filter, and the data has not met the standard we require.\nThe affiliation facet returns counts of affiliation strings rather than counts of terms within affiliation fields (thus resolving this Github issue).\nCursors may be used to page through results from the /members, /funders, and /journals routes, in addition to /works.\nWhile we suggest that everyone use cursors for pagination, we still support the offset functionality. We have introduced a limit of 80000 for offset values for the /members /funders and /journals routes\noffset behavior is slightly changed, now applying to the sum of rows and offsets rather than just offsets.\nThe published field is now present in API responses.\nThe /licenses route returns paged results.\nSorting by submitted is no longer supported. This was never officially supported or documented.\nThe /quality route has been removed. This was an undocumented, experimental feature.\nFunder name in /works metadata is the name provided by the publisher.\nEmpty relation fields correctly return an empty object.\nOnly ISBN and isbn-type for a record will be returned. ISBNs for associated volumes will be omitted.\nThe institution field is a list.\nquery uses different stop word defaults, though we expect querying to remain roughly the same.\nAPI responses may feature slightly different scores, as they come from different backends.\nSome technical notes on the cutover Documentation The above changes are documented in our new REST API documentation, which is now automatically generated via Swagger, resulting in more comprehensive coverage and more efficient feature development. During the cutover, the right documentation for you will depend on which pool you are using. The documentation for the new API can be found by visiting the API in a browser, or by navigating to https://0-api-crossref-org.libus.csd.mu.edu/help; and the docs for the old API remain here: https://github.com/CrossRef/rest-api-doc. The Github-hosted documentation will be deprecated once the cutover is complete.\nThis may not come as news, but bears repeating as we mentioned GitHub. We have moved our source code repositories from GitHub to GitLab, including all of our issue tracking.\nTemporary domain UPDATE: We ended up performing the public pool cutover via reverse proxies instead of redirects\u0026ndash;please disregard the note about temporary domains below. The api.crossref.org domain will remain the domain regardless of which pool you\u0026rsquo;re using or where we are in the cutover process.\nPlease note that the api.production.crossref.org domain is a temporary domain we are using during this cutover period. 
Traffic will be redirected to the new service one pool at a time via a 307 http redirect. Once the cutover is complete, we will go back to using the api.crossref.org domain. Do not update any software, scripts, libraries, tools, etc. to use the temporary domain.\nDifferences in query results Due to inherent differences in how Solr and Elasticsearch perform queries and rank results, you may see slightly different results when comparing the old and new services. If for whatever reason your workflow involves using multiple API pools (which we don\u0026rsquo;t recommend), you may see inconsistent results. Cursor behavior Cursors may break if your script is paging through results at the exact moment the cutover is performed, and you should retry your request once the release is complete. We will post the precise maintenance window to https://0-status-crossref-org.libus.csd.mu.edu/.\nFiling issues Feature requests and bug reports should be filed into the Crossref issue repository in Gitlab during this testing phase and once the new Elasticsearch-backed API is live in production.\nComing next While we hope the benefits of improved stability and extensibility are as exciting to you as they are to us, \u0026ldquo;feature parity\u0026rdquo; may not be the most thrilling message for our API users. In truth, one of the more exciting aspects of completing this migration is the end of the code freeze we instituted at the start of this effort. Now, we can work on new feature development and a continuous stream of bug fixes. We also improved the automatic test coverage as part of the work, meaning we can deliver features with greater confidence.\nThe first new feature we\u0026rsquo;ll be delivering via the REST API will be support for the \u0026ldquo;grants\u0026rdquo; record type, allowing for the retrieval of metadata for grants that have been registered with us, now numbering over 20,000 from 8 different funder members. This work is well underway and will be released once we are confident that the new REST API is stable in production. 
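Returning to the pagination notes under 'Feature parity' above: for anyone still relying on large offsets, here is a minimal sketch of the cursor-based deep paging we recommend, which now also works on the /members, /funders, and /journals routes. The route, rows value, and mailto address are illustrative placeholders; the new documentation is the authoritative reference.

```python
# Minimal sketch of cursor-based deep paging, recommended instead of large offsets.
# The route and parameters are placeholders.
import requests

url = "https://api.crossref.org/journals"
params = {"rows": 100, "cursor": "*", "mailto": "you@example.org"}

while True:
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    message = resp.json()["message"]
    if not message["items"]:
        break
    for item in message["items"]:
        pass  # process each record here
    # Each page of results includes the cursor to send with the next request.
    params["cursor"] = message["next-cursor"]
```

If a cursor breaks mid-run (for example, during the cutover maintenance window mentioned above), restart the request once the release is complete.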
From there, we\u0026rsquo;ll continue to select the highest priority issues from our REST API backlog.\nAs always, should you have any questions about our REST API, check out the metadata retrieval section of our website, start a discussion on our community forum, file a Gitlab issue as mentioned above, or you can contact us via support@crossref.org.\n", "headings": ["Feature parity, but note a few differences","Some technical notes on the cutover","Documentation","Temporary domain ","Differences in query results","Cursor behavior","Filing issues","Coming next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/doaj/", "title": "DOAJ", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/doaj-and-crossref-sign-agreement-to-remove-barriers-to-scholarly-publishing-for-all/", "title": "DOAJ and Crossref sign agreement to remove barriers to scholarly publishing for all", "subtitle":"", "rank": 1, "lastmod": "2021-06-21", "lastmod_ts": 1624233600, "section": "Blog", "tags": [], "description": "22 June 2021, London, UK and Boston, MA, USA — The future of global open access publishing received a boost today with the signing of a Memorandum of Understanding between the Directory of Open Access Journals (DOAJ) and Crossref. The MOU formalizes an already strong partnership between the two organisations and furthers their shared pursuit of an open scholarly communications ecosystem that is inclusive of emerging publishing communities.\nBoth organisations aim to encourage the dissemination and use of scholarly research using open infrastructure, online technologies, regional and international networks, and community partners - all supporting local institutional capacity and sustainability around the world.", "content": "22 June 2021, London, UK and Boston, MA, USA — The future of global open access publishing received a boost today with the signing of a Memorandum of Understanding between the Directory of Open Access Journals (DOAJ) and Crossref. The MOU formalizes an already strong partnership between the two organisations and furthers their shared pursuit of an open scholarly communications ecosystem that is inclusive of emerging publishing communities.\nBoth organisations aim to encourage the dissemination and use of scholarly research using open infrastructure, online technologies, regional and international networks, and community partners - all supporting local institutional capacity and sustainability around the world.\n“DOAJ is delighted to be formalizing today’s agreement with Crossref, an organization we are already closely aligned with. Together we stand a greater chance of encouraging an open, fair, and fully inclusive future for scholarly publishing,” said Lars Bjørnshauge, DOAJ Founder and Managing Director.\nThe agreement will enable content from journals indexed on DOAJ to be more easily identified through the use of Crossref metadata. The MOU also covers the exchange of a variety of services and information and greater coordination of technical and strategic requirements between DOAJ and Crossref. 
Included too is the development of outreach and training materials, coordination of service and feature development, as well as research studies to explore the overlaps and gaps in the journals and metadata covered by each organisation.\n“As academic-led journals continue to grow in number and geographic reach, it’s important we support this community more effectively. Our partnership with DOAJ means we can share strategies, data, and resources in order to lower barriers for emerging publishers around the world,” said Ginny Hendricks, Crossref’s Director of Member \u0026amp; Community Outreach.\nAbout DOAJ DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer reviewed journals. DOAJ deploys more than one hundred carefully selected volunteers from among the community of library and other academic disciplines to assist in the curation of open access journals. This independent database contains over 15,000 peer-reviewed open access journals covering all areas of science, technology, medicine, social sciences, arts and humanities. DOAJ is financially supported worldwide by libraries, publishers and other like-minded organisations. DOAJ services (including the evaluation of journals) are free for all, and all data provided by DOAJ are harvestable via OAI/PMH and the API. See doaj.org for more information.\nAbout Crossref Crossref makes research objects easy to find, cite, link, assess, and reuse. We’re a not-for-profit membership organisation that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context. Visit crossref.org for further information.\nPlease contact louise@doaj.org or feedback@crossref.org with any questions.\n", "headings": ["About DOAJ","About Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-help-us-fill-in-the-gaps/", "title": "Event Data: Help us fill in the gaps", "subtitle":"", "rank": 1, "lastmod": "2021-06-11", "lastmod_ts": 1623369600, "section": "Blog", "tags": [], "description": "UPDATE August 2, 2021: This work was awarded to Laura Paglione of the Spherical Cow Group.\nTo date, we have collected around 740 million events from 12 different sources since we launched our Event Data service in 2017. Each event is an online mention of the research associated with a DOI, either via the DOI directly or using the associated URL. However, we know that there is much more out there.", "content": "UPDATE August 2, 2021: This work was awarded to Laura Paglione of the Spherical Cow Group.\nTo date, we have collected around 740 million events from 12 different sources since we launched our Event Data service in 2017. Each event is an online mention of the research associated with a DOI, either via the DOI directly or using the associated URL. However, we know that there is much more out there. Because of this, we would like to explore where we could expand.\nWe invite proposals to conduct a gap analysis for Event Data sources, looking at what we currently collect and seeing what more could be added. For the most relevant new sources, we are seeking an estimate of the effort to include them and to establish whether it is possible: we know that there are sources that are paywalled or with restrictive licensing not compatible with Event Data.\nThe aim of the project is to identify a list of potential new sources.
With community input, we will look to add a number of these to Event Data in the future based on needs and priorities.\nFor full details of the requirements and how to make a proposal, see here. The deadline for proposals is 11 July 2021 and we anticipate that the work will be completed by the end of October 2021.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/request-for-proposal/", "title": "Request for Proposal", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/an-advisory-group-for-preprints/", "title": "An Advisory Group for Preprints", "subtitle":"", "rank": 1, "lastmod": "2021-06-09", "lastmod_ts": 1623196800, "section": "Blog", "tags": [], "description": "We are delighted to announce the formation of a new Advisory Group to support us in improving preprint metadata. Preprints have grown in popularity over the last few years, with increasing focus brought by the need to rapidly disseminate knowledge in the midst of a global pandemic. We have supported metadata deposits for preprints under the record type ‘posted content’ since 2016, and members currently register a total of around 17,000 new preprints metadata records each month.", "content": "We are delighted to announce the formation of a new Advisory Group to support us in improving preprint metadata. Preprints have grown in popularity over the last few years, with increasing focus brought by the need to rapidly disseminate knowledge in the midst of a global pandemic. We have supported metadata deposits for preprints under the record type ‘posted content’ since 2016, and members currently register a total of around 17,000 new preprints metadata records each month.\nAs preprints develop and different practices arise, we are keen to re-examine the metadata schema: to do this properly we need community input. We want to ensure that the schema is fit for purpose and supports the diversity of ways in which preprints are posted, linked with other objects, and used. Metadata schema need regular review, and this is just one example of a number of areas we are looking to update. Several topics we see as a high priority for preprints are better notification for when a preprint has been withdrawn or removed, accurate recording of versioning, and better indication of preprint server names.\nWe have invited a number of organizations we know to be active in this area, and are looking forward to some very positive discussions. Participants span five continents and include members who post preprints, indexing services, and others with significant experience in the area of preprints. 
The first meeting took place earlier this week and brought up a diverse range of themes that will be tackled in future meetings.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/relationships/", "title": "Relationships", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/time-to-put-the-r-back-in-rd/", "title": "Time to put the “R” back in “R&D”", "subtitle":"", "rank": 1, "lastmod": "2021-06-07", "lastmod_ts": 1623024000, "section": "Blog", "tags": [], "description": "It is time to put the \u0026lsquo;R\u0026rsquo; back into R\u0026amp;D.\nThe Crossref R\u0026amp;D team was originally created to focus on the kinds of research projects that have allowed Crossref to make transformational technology changes, launch innovative new services, and engage with entirely new constituencies. Some Illustrious projects that had their origins in the R\u0026amp;D group include:\nDOI Content Negotiation Similarity Check (originally CrossCheck) ORCID (originally Author DOIs) Crossmark The Open Funder Registry The Crossref REST API Linked Clinical Trials Event Data Grant registration ROR And for each project that has graduated, there have been several that have not.", "content": "It is time to put the \u0026lsquo;R\u0026rsquo; back into R\u0026amp;D.\nThe Crossref R\u0026amp;D team was originally created to focus on the kinds of research projects that have allowed Crossref to make transformational technology changes, launch innovative new services, and engage with entirely new constituencies. Some Illustrious projects that had their origins in the R\u0026amp;D group include:\nDOI Content Negotiation Similarity Check (originally CrossCheck) ORCID (originally Author DOIs) Crossmark The Open Funder Registry The Crossref REST API Linked Clinical Trials Event Data Grant registration ROR And for each project that has graduated, there have been several that have not. Some projects were simply designed to gather data. Others just didn’t generate enough interest. You are not truly experimenting if you don’t fail occasionally too.\nRecently we’ve been doing very little experimenting of any kind. Instead, the R\u0026amp;D team has mostly been seconded to the software development team to help them through a period of organizational and process change. We would not have made it through the past two years without their help.\nBut now we’re ready to focus on more ‘R’ and less ‘D’. And to that end, we are increasing the size of the team as well. Rachael Lammey will be joining the team as Head of Strategic Initiatives. She will work alongside our Principal R\u0026amp;D Developers, Esha Datta and Dominika Tkaczyk. Together they will be able to engage with new communities and immediately start experimenting with ways in which Crossref might be able to address their needs and use-cases.\nWe hope to soon add to our list of distinguished R\u0026amp;D project alumni.\nRationale \u0026amp; details The Crossref R\u0026amp;D group (AKA \u0026ldquo;Labs\u0026rdquo;) has been the incubator of many services that are now in production and which form a fundamental part of Crossref\u0026rsquo;s identity and value. 
Similarity Check, ORCID, Crossmark, Open Funder Registry, The REST API, Linked Clinical Trials, and Event Data all started as R\u0026amp;D projects. More recently the enhancement of our reference matching infrastructure and the development and launch of ROR were also R\u0026amp;D projects.\nAnd prior to the formation of the outreach group in 2015, the R\u0026amp;D group also led a critical function engaging with communities that, at the time, Crossref only had tangential connections with: PKP; DOAJ; funders; and the data and altmetrics communities.\nBut since the R\u0026amp;D group merged with the technology team back in 2019, we have done very little \u0026ldquo;R.\u0026rdquo; and very little community engagement of our own. Instead, the R\u0026amp;D team has supported the development team through a period of major cross-cutting projects and organisational change. Dominika has led the REST API rewrite and Esha\u0026mdash;when she is not acting as technical lead on ROR\u0026mdash;has also worked on the API rewrite and has kept Crossref metadata search on its feet. We would not have been able to make it through the past few years without their help.\nThroughout this period, Rachael Lammey has continued the vital work of identifying, engaging with, and advocating for members of our community who we previously didn\u0026rsquo;t even know were members of our community.\nThe strength of the R\u0026amp;D group was that it combined outreach, product, and development functions. It was not only able to engage with new constituencies, but to quickly experiment with ways in which Crossref might be able to serve them. Previously, members of the R\u0026amp;D team would return from a conference or workshop that no Crossref member had ever attended before with a set of new contacts and ideas for new services and tools. They\u0026rsquo;d form interest groups and develop prototypes. Sometimes the interest groups would lead nowhere and sometimes the prototypes would be discarded. But critically, some of them would turn into the major services and organisations that now form a foundational part of open scholarly infrastructure.\nAnd this is why it makes so much sense for Rachael to join the R\u0026amp;D team. The group is most effective when it is able to engage with new communities and immediately start experimenting with ways in which Crossref might be able to address their needs and use-cases. Rachael\u0026rsquo;s extensive experience in both product management and outreach\u0026mdash;combined with Esha and Dominika\u0026rsquo;s experience leading development projects\u0026mdash;is exactly what we need to reinvigorate the group and put the R back into R\u0026amp;D.\nTo kick off, we are going to be working on some small-ish, discrete projects. These include:\nBetter matching and linking of preprints to published articles; Extending our journal title classification to cover all journal and conference proceedings titles; and Tools to allow us to community-source structured metadata correction information and feed it back to our members. We will consult with and update the community on the kinds of projects we are working on through regular tech updates and a revitalised Labs area of our website.\nOh- and we will certainly be designing some new Labs creatures. 
\u0026ndash;G\n", "headings": ["Rationale \u0026amp; details"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-road-ahead-our-strategy-through-2025/", "title": "The road ahead: our strategy through 2025", "subtitle":"", "rank": 1, "lastmod": "2021-06-03", "lastmod_ts": 1622678400, "section": "Blog", "tags": [], "description": "This announcement has been in the works for some time, but everything seems to take longer when there is a pandemic going on, including finding time and headspace to plan out our strategy for the next few years.\nOver the last year or so we have had our heads down addressing how to scale our 20-yr-old system and operation \u0026ndash; and adapting to new ways of working. But we\u0026rsquo;ve also spent time talking to people, forging alliances, looking ahead, and making plans.", "content": "This announcement has been in the works for some time, but everything seems to take longer when there is a pandemic going on, including finding time and headspace to plan out our strategy for the next few years.\nOver the last year or so we have had our heads down addressing how to scale our 20-yr-old system and operation \u0026ndash; and adapting to new ways of working. But we\u0026rsquo;ve also spent time talking to people, forging alliances, looking ahead, and making plans. So we\u0026rsquo;re happy to now let everyone know exactly what we\u0026rsquo;ve been up to lately, what we are heading towards in 2025, and what projects and programs are prioritised on our near-term agenda.\nTl;dr Introducing the new Crossref strategy through 2025, extending the one we published in 2018 There are now two additional strategic goals, to make six: bolstering our team; living up to POSI Good progress has been made in reducing operational and technical debt - a lot of learning too We\u0026rsquo;re unblocking stuff to get more done, including expanding R\u0026amp;D (more on that next week) We have a new public roadmap 🎉 Come to next week\u0026rsquo;s mid-year update webinar to hear what\u0026rsquo;s happening and up next. The emergence of a strategic agenda 2018 seems like a decade ago, doesn\u0026rsquo;t it? Back then we set out a 2018-2021 strategic direction\u0026mdash;now archived\u0026mdash;that described four goals: adapt to expanding constituencies; simplify and enrich services; selectively collaborate and partner with others; and improve our metadata quality and comprehensiveness. These themes were formed from the output of a planning exercise with our board in mid-2017 which tackled scenarios that remain true today, including: the increasing diversity in scholarly publishing (library-publishing, academic-led journals, shifting geographic dominance, etc.); the growth in preprints and other content formats; the sustainability of scholarly publishing (who is funding it and whether that is an expanding or shrinking pool); and the increase in policy and regulation in this space.\nThat meeting was the catalyst for embracing openness and a broader set of constituents. It was also decisive about Crossref’s role in this evolving community to focus on our core competencies, defined as:\nA reputation as a trusted, neutral one-stop source of metadata and services Managing scholarly infrastructure with technical knowledge and innovation Convening and facilitating scholarly community collaboration. 
So you can see how we got to focusing on metadata, services, infrastructure, and broad community collaboration.\nAhh, 2019, such an innocent time When we wrote our post at the end of 2019 A turning point is a time for reflection we highlighted\u0026mdash;with data\u0026mdash;how different the Crossref community is nowadays. The post also linked to the results of our \u0026lsquo;value\u0026rsquo; research project and a fact file which had even more hard data and posed the question Which Crossref initiatives should be top or bottom priorities?. To answer that, the LIVE19 annual meeting group voted (using betting chips) on priority initiatives, with the following results:\nSupport and implement ROR Metadata best practices and principles Support for multiple languages Address technical and operational debt Schema updates such as JATS and CRediT Engagement with funders We all know what happened next: the collective health and social trauma of the COVID-19 pandemic. All of us struggled. You all did too. Homeschooling, homeworking, homestaying. Caring for\u0026mdash;and even saying goodbye to\u0026mdash;sick friends and family. Also beloved colleagues. Alongside these unfamiliar new stresses, members were joining in growing numbers, funders kept joining to register grants, conferences went online and we loved them (before then hating them), the number of records we hosted kept going up, and publishing (especially preprints) skyrocketed.\nThe plan hasn\u0026rsquo;t actually changed much. Those charts in the 2019 fact file still make for remarkable reading as those same trends continue. We simply haven\u0026rsquo;t had time to update people on where we are with plans. So it\u0026rsquo;s high time we give an update on these priorities as well as contextualise them in longer-term goals.\nBut first, some framing The chart below shows the approach we took to organise our thinking. A lot of it isn\u0026rsquo;t new; we have had the current mission statement, key messages (rally, tag, run, play, make), and truths since the rebranding work in 2015/2016. More recently, we have added POSI to our values, describing the principles and rules by which we operate as a committed open scholarly infrastructure organization.\nWe already have a lot of 'words'. So why do we also need a vision statement and where do the goals fit in? In order to prioritize the things we will work on first, we need to be able to track everything to a higher vision, ensuring that everything we do is working toward an agreed destination. When we have organization-wide goals, it means that everyone is clear on the direction, is able to prioritize individual and team work, and can see how their contribution fits in. This, in turn, instills confidence, and motivation - amongst staff as well as members and users. Our working vision statement (feedback needed!) is:\nWe envision a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nA vision is, of course, shared. It isn\u0026rsquo;t Crossref-specific but describes the world in which we all want to work together in future.\nNow for those contextual six goals Full details are on the new strategy page but here\u0026rsquo;s a summary below.\nThis goal is all about people, support, culture, and resilience. 
Not just because we\u0026rsquo;re coming through a pandemic, but also because we\u0026rsquo;re growing and we need to be able to scale and manage growth more purposefully, with appropriate policies, fees, and resources.\nWe published a POSI self-assessment earlier this year and like-minded initiatives are following suit. This is a stated goal because we want to be held publicly accountable to the Principles of Open Scholarly Infrastructure standards of governance, insurance, and sustainability.\nThis goal centres on growth, strengthening relationships, community facilitation, and content. Working with a growing number of Sponsors helps us lower barriers to participation around the world, including in languages other than English. Expanding the support we offer to research funders and institutions is a priority.\nThis goal involves researching and communicating the value of richer, connected, and reusable open metadata, and incentivising people to meet best practices, while also making it possible (and easier) to do so.\nWe\u0026rsquo;ve always collaborated but we want to work even more closely with like-minded organisations to solve problems together. Perhaps in future we could also partner with others to find operating efficiencies for our overlapping stakeholders.\nThis goal is all about focus. And about delivering easy-to-use tools that are critically important for our community. A lot of invisible work has been happening behind the scenes; we\u0026rsquo;ve been strengthening (and will continue to strengthen) our code-base (while opening up all code) in order to unblock some of the initiatives we know people have been waiting for.\nRead more about what projects are included in the above goals in our full 2025 strategic agenda.\nYou\u0026rsquo;re invited to a mid-year update webinar Rather than saving everything for our annual\u0026mdash;usually November\u0026mdash;meeting, we\u0026rsquo;ll also do a mid-year update and plan to do so in May or June every year from now on, in addition to the November updates which include the board election and governance and budget information.\nThis year, we\u0026rsquo;re covering some of the main product development work we have completed, underway, and planned for the next quarter. We\u0026rsquo;ll run it live twice - once for those near the Americas timezones (June 8th 3pm UTC) and once for those near Asia Pacific timezones (June 9th 6am UTC). We have a lot to cover in 90 minutes\u0026mdash;including unveiling [our public roadmap](http://bit.ly/crossref-roadmap)\u0026mdash;but we\u0026rsquo;re going to try really hard to have a few minutes to discuss questions too.\nIn the meantime, or indeed anytime, join the discussion over on our community forum - see the thread below and join in.\nWe want to be held accountable to these goals so we’re reliant on you, as a community, to let us know what you think of our 2025 strategic agenda.
As always; we’re grateful for your support and advice.\n", "headings": ["Tl;dr","The emergence of a strategic agenda","Ahh, 2019, such an innocent time","But first, some framing","Now for those contextual six goals","You\u0026rsquo;re invited to a mid-year update webinar"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/election/", "title": "Election", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/our-annual-open-call-for-board-nominations/", "title": "Our annual open call for board nominations", "subtitle":"", "rank": 1, "lastmod": "2021-05-27", "lastmod_ts": 1622073600, "section": "Blog", "tags": [], "description": "Crossref\u0026rsquo;s Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in 2022. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September. Expressions of interest will be due Friday, June 25th, 2021.\nBoard roles and responsibilities The role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:", "content": "Crossref\u0026rsquo;s Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in 2022. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September. Expressions of interest will be due Friday, June 25th, 2021.\nBoard roles and responsibilities The role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. The board is representative of our membership base and guides the staff leadership team on trends affecting scholarly communications. The board sets strategic directions for the organization while also providing oversight into policy changes and implementation. Board members have a fiduciary responsibility to ensure sound operations. Board members do this by attending board meetings, as well as joining more specific board committees.\nCrossref’s services provide central infrastructure to scholarly communications. Crossref’s board helps shape the future of our services, and by extension, impacts the broader scholarly ecosystem. We are looking for board members to contribute their experience and perspective.\nWho can apply to join the board? Any active member of Crossref can apply to join the board. Crossref membership is open to organizations that produce content, such as academic presses, commercial publishers, standards organizations, and research funders. 
In fact, this year the board has specifically included in the committee’s remit to “propose at least one name from a funder member for the current round of elections.”\nThere is a link at the bottom of this post to submit your expression of interest.\nWhat is expected of board members? Board members attend three meetings each year that typically take place in March, July, and November. Meetings have taken place in a variety of international locations and travel support is provided when needed. Following travel restrictions as a result of COVID-19, the board adopted a plan to convene at least one of the board meetings virtually each year and all committee meetings take place virtually. Most board members sit on at least one Crossref committee. Care is taken to accommodate the wide range of timezones in which our board members live.\nWhile the expressions of interest are specific to an individual, the seat that is elected to the board belongs to the member organization. The primary board member also names an alternate who may attend meetings in the event that the primary board member is unable to. There is no personal financial obligation to sit on the board. The member organization must remain in good standing.\nBoard members are expected to be comfortable assuming the responsibilities listed above and to prepare and participate in board meeting discussions.\nAbout the election The board is elected through the “one member, one vote” policy wherein every member organization of Crossref has a single vote to elect representatives to the Crossref board. Board terms are for three years, and this year there are five seats open for election.\nThe board maintains a balance of seats, with eight seats for smaller members and eight seats for larger members (based on total revenue to Crossref). This is in an effort to ensure that the diversity of experiences and perspectives of the scholarly community are represented in decisions made at Crossref.\nThis year we will elect two of the large member seats (membership tiers $3,900 and above) and three of the small member seats (membership tiers $1,650 and below). You don’t need to specify which seat you are applying for. We will provide that information to the nominating committee.\nThe election takes place online and voting will open in September. Election results will be shared at the November board meeting and new members will commence their term in 2022.\nAbout the nominating committee The nominating committee will review the expressions of interest and select a slate of candidates for election. The slate put forward will exceed the total number of open seats. The committee considers the statements of interest, organizational size, geography, gender, and experience.\n2021 Nominating Committee:\nLiz Allen, F1000/Taylor \u0026amp; Francis, London, UK, committee chair Melissa Harrison, eLife, Cambridge, UK Andrew Joseph, Wits University Press, Johannesburg, South Africa Abel Packer, SciELO, São Paulo, Brazil Lisa Scott, New England Journal of Medicine, Boston, USA How do you apply to join the board? 
Please click here to submit your expression of interest or contact me.\n", "headings": ["Board roles and responsibilities","Who can apply to join the board?","What is expected of board members?","About the election","About the nominating committee","How do you apply to join the board?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/service-provider-perspectives-a-few-minutes-with-our-publisher-hosting-platforms/", "title": "Service Provider perspectives: A few minutes with our publisher hosting platforms", "subtitle":"", "rank": 1, "lastmod": "2021-05-24", "lastmod_ts": 1621814400, "section": "Blog", "tags": [], "description": "Service Providers work on behalf of our members by creating, registering, querying and/or displaying metadata. We rely on this group to support our schema as it evolves, to roll out new and updated services to members and to work closely with us on a variety of matters of mutual interest. Many of our Service Providers have been with us since the early days of Crossref. Others have joined as scholarly communications has grown and services have evolved.", "content": "Service Providers work on behalf of our members by creating, registering, querying and/or displaying metadata. We rely on this group to support our schema as it evolves, to roll out new and updated services to members and to work closely with us on a variety of matters of mutual interest. Many of our Service Providers have been with us since the early days of Crossref. Others have joined as scholarly communications has grown and services have evolved. Though fewer than 20 in number, their impact far outweighs the size of the group.\nThey, like us, work with a great variety of members and have a broad view into publishing trends. In this post, we focus on views from some of the publishing hosting platform Service Providers, who\u0026rsquo;ve taken the time to share their thoughts on a few questions:\nWhat is the biggest change you\u0026rsquo;ve experienced working with publisher metadata over the last few years and how have you adapted to it? It has become more and more important that not only the DOIs are registered with the minimum of necessary metadata to get the DOIs registered, but that a most complete set of metadata is being sent along \u0026ndash; including author identifiers, funding information, abstracts, licenses, to support other Crossref services and improve discoverability.\n\u0026ndash; de Gruyter\nOur clients are increasingly aware of the key role metadata plays in the effective dissemination of research. With an increasing number of published articles and a clear domination of \u0026ldquo;search engines\u0026rdquo; and aggregation of content, metadata is the primary means of making sure that publications reach the right audience. Publishers\u0026rsquo; value-add includes not just copy editing, formatting, and packaging, but also now creating journal articles for the digital age that are discoverable and well linked to the research corpus. Furthermore, we sense a clear move toward standardization, which goes beyond the structure to introduce standardized semantics: adopting common taxonomies for classifying content in different dimensions. 
Our response is to introduce effective, automated and consistent services that capture, and surface metadata throughout the value chain from authoring to publication and search.\n\u0026ndash; Atypon\nHighwire\u0026rsquo;s publishers are always looking to use the latest DTD (Document Type Definition) for the content to stay up to current standards. Currently this would be JATS 1.2. They are choosing to remain current so that they can stay on top of all or new metadata that can enrich their deposits. We have handled this well and offer support for the latest version of DTD when they are released, but some publishers are not always familiar with what can/should be deposited with their content and this can be a learning process for them.\n\u0026ndash; MPS Limited\nHow do you explain to clients (and others!) why correct, quality metadata is important? In the digital age, metadata is the key to enabling effective content consumption. Publications that cannot be effectively discovered are of little value. We can only increase the impact of research with \u0026ldquo;discoverable\u0026rdquo; and \u0026ldquo;machine readable\u0026rdquo; publications. So ensuring correct and quality metadata is the key to optimizing not only the processing (finding the right journal, editor, reviewers) but also to positioning each publication properly. As the volume of published scientific research increases, article metadata is the way forward \u0026mdash; it brings \u0026ldquo;order\u0026rdquo; and enables our community to manage this volume.\n\u0026ndash; Atypon\nHighwire always positions itself as \u0026ldquo;good content in\u0026rdquo; means \u0026ldquo;good content out\u0026rdquo;. This is true for our own content stores. Strong and valid metadata will result in valid and strong deposits. We explain this to all new clients on-boarded with Highwire and the use of current standards and for current client projects where content should/can be enriched through re-load.\n\u0026ndash; MPS Limited\nGetting our journals to care about metadata is a two step process: First, make sure they understand how metadata will help their journal succeed (i.e. why it matters to them). Second, make it easy for them to produce metadata while minimizing the cost, time, or complexity of their workflow. The first step – making a case for why metadata matters – is often easier than you\u0026rsquo;d think. At the very least, most journal editors understand that metadata, e.g., JATS or DOI registration, is an important signifier of professionalism / prestige. In other words, they see that top journals publish metadata and want the same for their journal. From a more technical standpoint, metadata is important because that\u0026rsquo;s the format computers understand and, like it or not, the publishing ecosystem relies on computers to deliver all sorts of critical services – such as indexing, archiving, and discoverability. So, if you\u0026rsquo;re not publishing metadata, you\u0026rsquo;re likely missing the benefit of these services. The second step – making it easy to produce metadata – is more difficult. Journal editors generally understand metadata matters but often lack the technical skills or resources necessary to create metadata. This is where a platform, such as Scholastica, can be very helpful. Because platforms work with many journals, they can invest in tools to automate the creation of metadata, reducing costs for all their clients. For example, most platforms offer integrations to support automatic DOI registration. 
At Scholastica, we\u0026rsquo;re pushing this idea even further with automatic integration to more complicated services such as PubMed Central. By reducing cost and complexity, we can help new or small-budget journals have the same quality metadata normally reserved for large, established journals.\n\u0026ndash; Scholastica\nWe are sending other publishers\u0026rsquo; metadata to academic libraries and distribution channels. Erroneous metadata will have a direct impact on how discoverable a title may be. The more uniform and correct the metadata, the better it will be indexed in other places.\n\u0026ndash; de Gruyter\nWhat is the one industry development or trend you’re most excited about for the near future and why? Open Science and the ability to deliver research with the tools for reproducing it is the most exciting and game changing trend. Technology has enabled the output of science to transition from two-dimensional printed text delivery into globally accessible and responsive web-based delivery. We are now taking the next steps to further leverage web technology to enhance research output with rich assets ranging from audio and video, datasets, executable code, high-resolution imagery, interactive applications and more. As more assets accompany research publications, viewing these assets as modular, individually citable, and reusable becomes a requirement. We are reviewing the whole research output flow from authoring to publishing, and most importantly to its dissemination through the myriad of discovery tools now available.\n\u0026ndash; Atypon\nThe move of everything to the cloud \u0026ndash; this is changing and improving our infrastructure, our possibility to scale and to stay on top of technological development.\n\u0026ndash; de Gruyter\nThanks very much to the interviewees for their time and thoughts. We look forward to working with our entire Service Provider group on questions like these and many more. If you\u0026rsquo;d like more details, you can read about our Service Provider program or contact me for more information.\n", "headings": ["What is the biggest change you\u0026rsquo;ve experienced working with publisher metadata over the last few years and how have you adapted to it?","How do you explain to clients (and others!) why correct, quality metadata is important?","What is the one industry development or trend you’re most excited about for the near future and why?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/service-providers/", "title": "Service Providers", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/next-steps-for-content-registration/", "title": "Next steps for Content Registration", "subtitle":"", "rank": 1, "lastmod": "2021-05-17", "lastmod_ts": 1621209600, "section": "Blog", "tags": [], "description": "UPDATE, 20 December 2021\nWe are delaying the Metadata Manager sunset until 6 months after release of our new content registration tool. You can expect to see the new tool in production in the first half of 2022. For more information, see this post in the Community Forum.\nHi, I’m Sara, one of the Product Managers here at Crossref. 
I joined the team in April 2020, primarily tasked with looking after Content Registration mechanisms.", "content": "UPDATE, 20 December 2021\nWe are delaying the Metadata Manager sunset until 6 months after release of our new content registration tool. You can expect to see the new tool in production in the first half of 2022. For more information, see this post in the Community Forum.\nHi, I’m Sara, one of the Product Managers here at Crossref. I joined the team in April 2020, primarily tasked with looking after Content Registration mechanisms. Prior to Crossref, I worked on open source software to support scientific research. I’ve learned a lot in the last year about how our community works with us, and I’m looking forward to working more closely with you in the coming year to improve Content Registration tools.\nJust over a year ago, we updated you on the status of Metadata Manager. TL;DR: We learned that our approach with the tool wasn’t flexible enough to easily and quickly add other record types or update the input schema, and paused new development. We’re back with another update on Metadata Manager and our strategy for Content Registration user interfaces (UIs) going forward.\nOur helper tools for Content Registration The bulk of content registered with us is done so programmatically; that is, our members’ (or their service providers’) machines talking to our machines using our APIs. But, there are plenty of our members that don’t have the technical expertise to work with us this way. For those members, we provide various helper tools to assist with manual content registration.\nWe offer a variety of interfaces for registering many different types of content, including Web Deposit form for most record types, Metadata Manager for journal content, and Simple Text Query to register references. Each of these has its own use cases and limitations, leading to a confusing and inconsistent experience for members who are manually depositing metadata. From our perspective, maintaining this many interfaces in different codebases is inefficient, in part because an update to the schema likely leads to separate updates in each of them. A unified user interface to register content would both improve and simplify the user experience for you, our community, and make updates quicker and more efficient. The original goal of Metadata Manager was to be this unified interface. But we’ve learned that the approach we took was flawed: there have been problems reported by users, and the tool itself isn’t flexible enough to easily and quickly add new record types or support new fields when our input schema changes.\nA new approach to helper tools So we’ve decided to build something new and retire the old. We’ll be focusing on creating a brand new Content Registration user interface that will eventually replace Metadata Manager, the Web Deposit form, and Simple Text Query. And what we’ve learned from our experiences with Metadata Manager and Web Deposit has greatly influenced our strategy going forward. The new tool will:\nHave a Community focus Design for small - Our membership demographic is evolving. A large (and growing) number of our members are very small, often with a single publication and no technical resources. Creating XML can be a barrier to participating in Crossref, and our helper tools are designed to lower that barrier. 
Accessibility and localization support - All of our UIs should support major international accessibility guidelines and translation into local languages, to meet the needs of our global membership. Open source code - Build in the open, so that others can contribute. This could mean an entire UI that we haven’t prioritized, or adding a new translation file, or tweaking some CSS. Follow user-centered design processes Unified user interface - Improve user experience and simplify tools and services by providing members with one place to go to register content via a UI. Rapid iteration - Focus on a technical solution that allows for rapid development of UIs to support new record types and updates to our schema. Building the right features for the right users - The needs of our large members and smaller members are different. Experience has shown us that the core audience for a helper tool is smaller members; we’ll tailor the features to solve the challenges of our smaller members. Allow us to build content for the future Tactical approach to record types - Quickly build UIs in a strategic order. We can’t build support for every record type at once, so we want to identify and build in the areas of highest impact/lowest effort first. Deliberate approach to supported fields - Not all members will supply metadata for all fields in our schema. Building a UI to support all fields for a specific record type before moving on to another slows progress on that next record type. We’ll identify the most-used and most-useful fields to support first, and add more in a future iteration if needed. Deprecating Metadata Manager In order to free up the resources to develop the new Content Registration UIs, we need to stop doing other things - that means not adding to, supporting, or bug-fixing other Content Registration tools. We’re setting an aggressive goal of sunsetting Metadata Manager by the end of 2021, with a commitment to a smooth transition to our new tool. This means that new members should not start using Metadata Manager. New members who need a helper tool have a few choices:\nthose who use the OJS platform from PKP to host their journals (OJS V3 and above) should use the third party Crossref OJS plugin to register their content. other new members should use the Web Deposit form current members who are using Metadata Manager may continue to do so, but are advised that we won’t be doing bug fixes or further development on the tool, and that support will be scaled back. If possible, you should transition over to using the Web Deposit form. This wasn’t a decision made lightly, but one made after considering multiple options and all the data available to us about member usage and internal resources.\nTo highlight some of the data that led to this decision: the Support team tracks the types of support tickets they handle. In 2020, the 3rd most common ticket type was Metadata Manager-related. But less than 4% of metadata records registered with us are registered using Metadata Manager. Supporting Metadata Manager requires resources disproportionate to the amount of use the tool gets. For comparison, twice as many records are registered using the Web Deposit Form, but it generates far fewer Support tickets. To fix the bugs and issues reported about Metadata Manager requires an equally disproportionate amount of developer resources. So far, we have been unable to free up resources we would need to fix them all. 
Continuing to maintain this tool is effectively preventing us from building something new that will better meet the needs of our smaller members.\nWe know this will surprise and concern some of you, especially heavy users of Metadata Manager. We’re committed to making this a smooth transition, and over the coming months, we’ll provide more guidance to help current members migrate to our other tools.\nInvolving the community Building a tool that allows us to create and adapt content registration forms based on example input files is an exciting new approach - one that will allow us to better serve the needs of our smaller members across multiple record types and support those who want to adapt our tools to their own needs. We’ve already begun work on a proof-of-concept tool aligned with this new strategy and I’m excited to drive it to production. As this project develops, we’ll keep in close contact with members, conducting user interviews, feedback sessions, and using usage data to help guide our decision-making on features and design. As we’ll be building in the open, we’ll have prototypes to share along the way as we iterate to produce a tool that will stand the test of time as well as scale to support even more content and members in future. We welcome your feedback over on our Community Forum, where we’ve set up a dedicated category to discuss this topic.\n", "headings": ["Our helper tools for Content Registration","A new approach to helper tools","Have a Community focus","Follow user-centered design processes","Allow us to build content for the future","Deprecating Metadata Manager","Involving the community"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/sara-bowman/", "title": "Sara Bowman", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/doing-more-with-relationships-via-event-data/", "title": "Doing more with relationships - via Event Data", "subtitle":"", "rank": 1, "lastmod": "2021-05-14", "lastmod_ts": 1620950400, "section": "Blog", "tags": [], "description": "Crossref aims to link research together, making related items more findable, increasing transparency, and showing how ideas spread and develop. There are a number of moving parts in this effort: some related to capturing and storing linking information, others to making it available.\nBy including relationship metadata in Event Data, we are taking a big step to improve the visibility of a large number of links between metadata. We know this is long-promised and we’re pleased that making this valuable metadata available supports a number of important initiatives.", "content": "Crossref aims to link research together, making related items more findable, increasing transparency, and showing how ideas spread and develop. There are a number of moving parts in this effort: some related to capturing and storing linking information, others to making it available.\nBy including relationship metadata in Event Data, we are taking a big step to improve the visibility of a large number of links between metadata. We know this is long-promised and we’re pleased that making this valuable metadata available supports a number of important initiatives. 
We will also be backfilling, so all previously deposited relationships will eventually become available as events. The first step will be to add relationships between items that have DOIs, such as between a research article and a related review report or dataset.\nWhat are relationships? When members register metadata with us, they have the possibility to identify other works, items, and websites that they know are related. This might be supplementary material or previous versions of a work (especially for preprints and working papers). Equally, identifiers for a protein, gene, or organism used in the research can be included. These are recorded as ‘relationships’ and can be accessed in the same way as the rest of the metadata we hold about registered content.\nSome examples Relationships in the metadata show links to the published article from this bioRxiv preprint. In the Crossref Rest API: \u0026#34;relation\u0026#34;: { \u0026#34;is-preprint-of\u0026#34;: [ { \u0026#34;id-type\u0026#34;: \u0026#34;doi\u0026#34;, \u0026#34;id\u0026#34;: \u0026#34;10.1038/s41467-020-17892-0\u0026#34;, \u0026#34;asserted-by\u0026#34;: \u0026#34;subject\u0026#34; } ], \u0026#34;cites\u0026#34;: [] }, And now in Event Data: \u0026#34;subj\u0026#34;: { \u0026#34;pid\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1101/2020.05.21.109546\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1101/2020.05.21.109546\u0026#34;, \u0026#34;work_type_id\u0026#34;: \u0026#34;posted-content\u0026#34; }, \u0026#34;obj\u0026#34;: { \u0026#34;pid\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1038/s41467-020-17892-0\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.1038/s41467-020-17892-0\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;doi-literal\u0026#34;, \u0026#34;verification\u0026#34;: \u0026#34;literal\u0026#34;, \u0026#34;work-type-id\u0026#34;: \u0026#34;journal-article\u0026#34; }, Linking to a dataset in the Dryad Digital Repository by a recent eLife article. In the Crossref metadata: \u0026#34;relation\u0026#34;: { \u0026#34;is-supplemented-by\u0026#34;: [ { \u0026#34;id-type\u0026#34;: \u0026#34;doi\u0026#34;, \u0026#34;id\u0026#34;: \u0026#34;10.5061/dryad.s58qh\u0026#34;, \u0026#34;asserted-by\u0026#34;: \u0026#34;subject\u0026#34; } ], \u0026#34;references\u0026#34;: [ { \u0026#34;id-type\u0026#34;: \u0026#34;doi\u0026#34;, \u0026#34;id\u0026#34;: \u0026#34;10.5061/dryad.s58qh\u0026#34;, \u0026#34;asserted-by\u0026#34;: \u0026#34;subject\u0026#34; } ], \u0026#34;cites\u0026#34;: [] }, And now in Event Data: \u0026#34;subj\u0026#34;: { \u0026#34;pid\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.7554/elife.19920\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.7554/elife.19920\u0026#34;, \u0026#34;work_type_id\u0026#34;: \u0026#34;journal-article\u0026#34; }, \u0026#34;obj\u0026#34;: { \u0026#34;pid\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.5061/dryad.s58qh\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://0-doi-org.libus.csd.mu.edu/10.5061/dryad.s58qh\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;doi-literal\u0026#34;, \u0026#34;verification\u0026#34;: \u0026#34;literal\u0026#34;, \u0026#34;work-type-id\u0026#34;: \u0026#34;Dataset\u0026#34; }, If you are interested in relationships for a single DOI, we still recommend checking the metadata of that record, however Event Data is a great option for looking across multiple records. 
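As a rough sketch of that kind of multi-record lookup, a query against the Event Data query API could look like the snippet below. The base endpoint is the public Event Data service; the filter parameter names used here (obj-id.prefix, relation-type, mailto) are assumptions drawn from the Event Data documentation and should be verified there, as should the shape of the response envelope.

```python
# Sketch: pull relationship events from Crossref Event Data for one DOI prefix.
# Endpoint, filter names, and response shape are assumptions to check against
# the Event Data documentation before relying on them.
import requests

EVENT_DATA_API = "https://api.eventdata.crossref.org/v1/events"

def relationship_events(prefix, relation_type, mailto, rows=100):
    """Fetch events whose object DOI falls under a given prefix."""
    params = {
        "source": "crossref",            # events derived from Crossref relationship metadata
        "obj-id.prefix": prefix,         # assumed filter name: object DOI prefix
        "relation-type": relation_type,  # assumed filter name, e.g. "is-supplemented-by"
        "mailto": mailto,                # contact address for the polite pool
        "rows": rows,
    }
    response = requests.get(EVENT_DATA_API, params=params, timeout=30)
    response.raise_for_status()
    # Assumed envelope: {"message": {"events": [...]}}
    return response.json()["message"]["events"]

# Hypothetical example: article-to-dataset links under the Dryad prefix
for event in relationship_events("10.5061", "is-supplemented-by", "you@example.org"):
    print(event["subj_id"], event["relation_type_id"], event["obj_id"])
```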
For example, to check for relationships across a prefix, in a given time period, or for a specific type of relationship.\nData citation Data citations can be included in data deposits in relationship metadata, usually using the ‘is-supplemented-by’ relationship. By creating an event from each relationship, the links between journal articles and books, and the data they rely on are more visible. This makes the data much easier to locate.\nMany datasets have DOIs which are usually recorded with DataCite, meaning you are unlikely to find them via searches of Crossref metadata. Making data citation relationship metadata available in Event Data means it will be available in the same format as citations from datasets to articles (which DataCite sends to Event Data) and citations from articles to datasets from Crossref reference metadata (more to come on this later this year). It also means we will convert this information into Scholix format so that it can be harvested and combined with other sets of Scholix-compliant article/data links. Data citations will therefore be available for the community to identify, share, link and recognise research data. We’re working with initiatives like Make Data Count and STM’s research data program to support the growing uptake of good data citation practices. This is a big step forward in making data citation happen for the community; we have more to do, but Crossref is committed to completing this work as a strategic priority.\nWhat’s next? In this first stage we are adding relationships that link two objects with a DOI, and later this year we will bring in relationships using other identifiers such as accession numbers and URIs. That will make it more straightforward to ask questions of Event Data such as which organisms have relationships to which works with a DOI.\nMore info and staying in touch Find out more about Event Data in our support documentation or check out tickets in the GitLab repo. Keep informed and ask us anything via our community forum for Event Data discussion ", "headings": ["What are relationships?","Some examples","Relationships in the metadata show links to the published article from this bioRxiv preprint. In the Crossref Rest API:","And now in Event Data:","Linking to a dataset in the Dryad Digital Repository by a recent eLife article. In the Crossref metadata:","And now in Event Data:","Data citation","What’s next?","More info and staying in touch"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/sponsors/", "title": "Sponsors program", "subtitle":"", "rank": 4, "lastmod": "2021-05-11", "lastmod_ts": 1620691200, "section": "Get involved", "tags": [], "description": "Some small organizations who want to register metadata for their research and participate in Crossref but are not able to join directly due to financial, administrative, or technical barriers. Our Sponsors program has grown over the last decade and has now become the primary route to membership for emerging markets and small or academic-adjacent publishing operations.\nMore members now join Crossref via a Sponsor than directly. So Sponsors are important partner organizations in helping to scale to meet the growing demand of new members.", "content": "Some small organizations who want to register metadata for their research and participate in Crossref but are not able to join directly due to financial, administrative, or technical barriers. 
Our Sponsors program has grown over the last decade and has now become the primary route to membership for emerging markets and small or academic-adjacent publishing operations.\nMore members now join Crossref via a Sponsor than directly. So Sponsors are important partner organizations in helping to scale to meet the growing demand of new members.\nSponsors work closely with us in order to provide administrative, billing, technical, and\u0026mdash;if applicable\u0026mdash;language support, to the members they work with. They each have an agreement with us as well as with every member they represent, and they submit an annual report to us detailing their outreach and training activities. Sponsors are not members themselves\u0026mdash;and therefore cannot participate in our community governance\u0026mdash;but the organizations that work with them are full members and may vote in (or stand as a candidate for) our board elections.\nIn turn, we’re keen to support our Sponsors too, by running events and other outreach activities with them and listening to their feedback on what they need from us. You can find a list of our current Sponsors on this page.\nPlease note, we are pausing the acceptance of new Sponsors from regions where Sponsor numbers are already very high or not based in a GEM program country. Additionally, we are focusing on regions where growth is high and no Sponsor is present. By doing so we can focus on growing the program in areas where there is the greatest need.\nSponsor criteria Sponsors are key partners in making Crossref membership benefits available to all and keeping them to their membership obligations. There is quite a high bar to meet to be a Sponsor as we need to be sure you would represent Crossref accurately and successfully, and add value in the community.\nPlease make sure that your organization would be able to meet the criteria below.\nAbout you as a Sponsor You are a recognized organization in good standing with the scholarly community with a clear online presence describing your services. Your services are a good fit with Crossref services. You exhibit a clear knowledge of Crossref and our services, and of what is achieved by registering content with Crossref. You work with a particular segment or region of the research community who wouldn’t otherwise be able to work with Crossref due to barriers such as: Lack of resources either technically, financially, or operationally (or all) Need for support in languages other than English You have the technical know-how and resources to facilitate Content Registration with Crossref on behalf of members and you understand the importance of complete and accurate metadata, not solely registering a DOI. You are aware of the criteria that need to be met on an ongoing basis to participate in optional Crossref services e.g. Similarity Check, Cited-by. You understand that services such as these can only be offered to the Crossref members you work with, and is based on their eligibility for these services. You are aware that Crossref membership is not open to organizations or individuals who are subject to sanctions in the US, UK or EU - more information here. You have a financial/funding model and are capable of covering the membership fees for the members you represent and the content registration fees they incur. You can handle billing on behalf of members and will pay invoices within agreed payment terms. You can demonstrate the above in a documented plan that you would share with us and update and report on annually. 
You have the ability to work at scale and support a large number of members Your role as a Sponsor You work with organizations to enable them to be members of Crossref. This includes: Managing the membership set-up and joining process in collaboration with the members and our staff. Ensuring that organizations are clear about what they are agreeing to when becoming a Crossref member and joining Crossref through you as their Sponsor. Ensuring relevant contracts are completed by the member, that accurate information about members is sent to Crossref and kept up-to-date, Sharing and explaining member-specific communications and changes, and helping members adapt to new processes or obligations related to Crossref services. You perform checks to make sure the members you are working with are not prohibited from joining Crossref due to OFAC sanctions. You provide support for and promotion of Crossref activities and services. This includes being the first line of technical support for the members you work with, potentially providing training. You communicate with our staff in a timely manner. You positively contribute to the reputation of Crossref and its inclusive mission, adhering to the code of conduct and best practice guidelines that protect and enhance other members of the Crossref community collectively. When Sponsors stop being Sponsors Sometimes, things don\u0026rsquo;t work out, or plans change. We try to make sure that every Sponsor we accept will be able to commit to helping our members long-term. But if they change paths or stop being able to fulfill the role described above, we sometimes need to give notice of termination and to work with the members to find an alternative Sponsor. It is rare, but in such cases we will endeavour to follow these steps:\nLetter sent to Sponsor setting out concerns and asking for a plan to improve within a reasonable time period. We offer to help with this improvement plan, if relevant. Review performance and if no improvement, we inform the Sponsor that they will no longer be able to sign up new members for a short period. Assess progress and if no improvements we formally notify the Sponsor that we\u0026rsquo;re terminating our agreement, which includes 30 days\u0026rsquo; notice. We contact the relevant members to inform them of their options and the timeline. Depending on how many members the Sponsor represents, we may work with them for longer than 30 days to transition the members to new Sponsors or to direct membership. Members who don\u0026rsquo;t move to a new Sponsor will be moved to direct membership and will be responsible for content registration, fees, and administration going forward. Apply to become a Sponsor If you are located in one of the regions where we are seeking new Sponsors and if you are able to fulfil the above criteria, and are ready to apply to become a Sponsor, please contact us with your organization name and website, plus your plans for how you could support Crossref members as a sponsor. After that, we\u0026rsquo;ll ask you to provide additional information so that we can find out more about your motivations, who you would help, your planned activities, and how you would fulfill the role. We normally make a decision about your application within a month after this review.\nWe look forward to hearing from you and working together to lower barriers to participation around the world.\nLooking for a Sponsor? 
If you have content you want to register with us but are not able to join directly, find a Sponsor to contact about representation, rather than applying to join Crossref directly.\nFor more information, please contact our membership specialist.\n", "headings": ["Sponsor criteria","About you as a Sponsor","Your role as a Sponsor","When Sponsors stop being Sponsors","Apply to become a Sponsor","Looking for a Sponsor?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/service-providers/", "title": "Service providers", "subtitle":"", "rank": 1, "lastmod": "2021-05-04", "lastmod_ts": 1620086400, "section": "Get involved", "tags": [], "description": "Crossref Service Providers are a small group of organizations that work collaboratively with us to help support member obligations and best practices to the benefit of the wider scholarly community.\nService Providers are defined as Third party organizations such as hosting platforms, manuscript submission systems, XML/metadata providers and general publisher services organizations that are selected by Crossref members to retrieve Cited-by matches, create, register, and/or display metadata on their behalf.", "content": "Crossref Service Providers are a small group of organizations that work collaboratively with us to help support member obligations and best practices to the benefit of the wider scholarly community.\nService Providers are defined as Third party organizations such as hosting platforms, manuscript submission systems, XML/metadata providers and general publisher services organizations that are selected by Crossref members to retrieve Cited-by matches, create, register, and/or display metadata on their behalf.\nThe Service Providers/Crossref collaboration Communication is the focus of the Service Provider program. We work together to support the availability of new services and updates for members, identify trends, encourage best practices for everyone involved and prevent and minimize problems in the metadata supply chain.\nService Providers commit to: Support best practice metadata represented in Participation Reports Sharing their relevant workflows with Crossref Make reasonable efforts to accommodate changes to Crossref schema and services Provide Crossref with client services information, including metadata delivery options Staying up to date with Crossref services and policies, including participation in ongoing meetings and communications Providing feedback to Crossref on our services Crossref commits to: Providing advance communication of changes and new services requiring Service Provider awareness and development Provide Service Providers with member services information Helping Service Providers stay up to date with Crossref services and policies, including providing regular meeting opportunities Recognizing registered Service Providers by keeping this page up to date Developing Service Providers participation reports Current Service Providers +- Allen Press\rAllen Press aims to inspire your audience with an innovative blend of printing, marketing, publishing and distribution services crafted with the latest technology. We also serve a unique role in assisting the STEM community share ideas that improve the world. Combined with our state-of-the-art equipment and a team of seasoned pros, we provide rapid production and unmatched quality at competitive prices. 
Our commitment to excellence and attention to detail are more than just goals—they define how we help our customers.\nWebsite: https://0-www-allenpress-com.libus.csd.mu.edu/\n+- Aptara Inc.\rAs the STM publishing landscape evolves, so do Aptara’s service offerings. Our OA and publishing fee collection service, SciPris, continues to grow in clientele and functionality, keeping pace with evolving publishing agreement types. With PE/PM, copyediting, typesetting, and conversion services still at our core, the expansion of our peer review management, accessibility compliance/tagging, AI/ML, AR/VR, data mining, and content enrichment service lines allow publishers to source end-to-end workflows with Aptara. See our revamped web site for more.\nWebsite: https://www.aptaracorp.com/\n+- Aries Systems\rAries Systems transforms the way scholarly publishers bring high-value content to the world. Our innovative workflow solutions manage the complexities of modern print and electronic publishing—from submission to editorial management and peer review, to production tracking and publishing channel distribution. As the publishing environment evolves, Aries Systems is committed to delivering solutions that help publishers and scholars enhance the discovery and dissemination of human knowledge on a global scale. Publish faster, publish smarter, with Aries Systems.\nWebsite: https://www.ariessys.com/\n+- Atypon\rAtypon was founded in 1996, driven by the desire to democratize scientific research by expanding its availability, which meant giving scholarly publishers the software that they needed to excel in a new—and often intimidating—digital environment.\nAtypon’s initial development efforts resulted in Literatum, our online publishing and website development platform, first released in 1999. What was a five-person technology startup in Silicon Valley, is now an influential global organization, with a team of more than 480 in nine offices across the United States, the UK, Jordan, the Czech Republic, and Georgios’s native Greece.\nWebsite: https://www.atypon.com/\n+- Cadmore Media Inc.\rOur vision for founding Cadmore Media was born out of a broad ambition to transform the dissemination of video and audio content in the scholarly and professional world through expert technology. In addition to creating our own products, we aim to facilitate industry-wide innovation by leading industry efforts to shape and promote best practices and standards, thus benefiting any organization or service whose purpose is to publish or structure research and professional information.\nWebsite: https://cadmore.media/\n+- eJournalPress\reJournalPress is focused on providing web-based technology solutions for the scholarly publishing community. The company was initially founded as a software consulting service assisting companies in designing, programming, and deploying software and mission critical systems. In 1999 eJournalPress utilized this skill set to work with journals and publishers creating a new generation of web-based software tools to support manuscript submission, tracking, and peer review. The result of this engineering effort is EJPress.\nWebsite: https://www.ejournalpress.com/\n+- Ex Libris Ltd.\rAt Ex Libris, we believe in the value of education and research. Our mission is to allow academic institutions to create, manage, and share knowledge. 
With better tools, our customers achieve their goals and further academic initiatives.\nWebsite:https://exlibrisgroup.com/\n+- Ingenta\rIngenta was formed in 1998 and floated on the AIM market of the London Stock Exchange in April 2000. After a number of smaller acquisitions, the Group expanded through a merger with Vista.\nIngenta headquarters are in Oxford, UK with another office in New Jersey, USA. With industry experience going back nearly 40 years and more than 150 employees, Ingenta serves over 400 trade and scholarly publishers.\nWebsite: https://0-www-ingenta-com.libus.csd.mu.edu/\n+- JSTOR\rJSTOR is a digital library for the intellectually curious. We help everyone discover, share, and connect valuable ideas.\nWebsite: https://0-www-jstor-org.libus.csd.mu.edu/\n+- Knowledge Works Global Ltd.\rKnowledgeWorks Global Ltd. (KGL) combines the content and technology expertise of Cenveo Publisher Services and Cenveo Learning with Sheridan Journal Services and Sheridan PubFactory under the umbrella of the CJK Group.\nWebsite: https://www.kwglobal.com/\n+- MPS Limited\rMPS, a leading global provider of platforms, and content, and learning solutions for the digital world, was established as an Indian subsidiary of Macmillan (Holdings) Limited in 1970. The long service history as a captive business allowed MPS to build unique capabilities and talents through strategic partner programs. MPS is now a global partner to the world’s leading enterprises, learning companies, publishers, libraries, and content aggregators.\nWebsite: https://www.mpslimited.com/\n+- Nova Techset Ltd.\rNova Techset is a leading supplier of prepress services to the STM and academic publishing world. We provide pre-editing, copyediting, composition and ePub solutions as well as the full range of project management services for books and journals. With delivery centers in Bengaluru and Chennai, and offices in the UK and the US, we employ a staff of over 800 skilled and experienced personnel and produce over 1,000,000 book and journal pages a year.\nWebsite: https://www.novatechset.com/\n+- PubPub\rPubPub gives research communities of all stripes and sizes a simple, affordable, and nonprofit alternative to existing publishing models and tools.\nWebsite: https://www.pubpub.org/\n+- Scholastica\rScholastica was founded in 2012 in response to a growing need in academia for an easier, more modern way to peer review research articles and publish high-quality open access journals online.\nWebsite: https://scholasticahq.com/\n+- Sheridan\rAt Sheridan, we have deep roots in the book, journal, magazine, and catalog markets we serve, and our seasoned experts are not only knowledgeable about the conventions of these industries, but also their exciting new frontiers. As part of The CJK Group, under the brand of Sheridan, we’re organized by five locations, each providing specific services to our key markets.\nWebsite: https://www.sheridan.com/\n+- Silverchair\rSilverchair believes that innovation can fulfill the greatest promises of scholarship. As the leading independent platform partner for scholarly and professional publishers, we serve our growing community through flexible technology and unparalleled services. We build and host websites, online products, and digital libraries for our clients’ content, enabling researchers and professionals to maximize their contributions to our world. 
Our vision is to help publishers thrive, evolve, and fulfill their missions.\nWebsite: https://0-www-silverchair-com.libus.csd.mu.edu/\n+- Walter de Gruyter GmbH\rDe Gruyter publishes first-class scholarship and has done so for more than 270 years. An international, independent publisher headquartered in Berlin - and with further offices in Boston, Beijing, Basel, Vienna, Warsaw and Munich - it publishes over 1,300 new book titles each year and more than 900 journals in the humanities, social sciences, medicine, mathematics, engineering, computer sciences, natural sciences, and law. The publishing house also offers a wide range of digital media, including open access journals and books.\nWebsite: https://0-www-degruyter-com.libus.csd.mu.edu/\nYou can find more information on working with a service provider here or you may contact our community team.\n", "headings": ["Service Providers are defined as","The Service Providers/Crossref collaboration","Service Providers commit to:","Crossref commits to:","Current Service Providers"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/joel-schuweiler/", "title": "Joel Schuweiler", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/open-source-code-giving-back/", "title": "Open-source code: giving back", "subtitle":"", "rank": 1, "lastmod": "2021-04-30", "lastmod_ts": 1619740800, "section": "Blog", "tags": [], "description": "TL:DR; Hi, I\u0026rsquo;m Joel GitLab UI unsatisfactory Wrote a UI to use the API Wrote a missing API Open company contributes changes back to another open company Now have a method for getting work done much easier Hurrah! I\u0026rsquo;m Joel, a Senior Site Reliability Engineer here at Crossref. I have a long background in open source, software development, and solving unique problems. One of my earliest computer influences was my father.", "content": "TL:DR; Hi, I\u0026rsquo;m Joel GitLab UI unsatisfactory Wrote a UI to use the API Wrote a missing API Open company contributes changes back to another open company Now have a method for getting work done much easier Hurrah! I\u0026rsquo;m Joel, a Senior Site Reliability Engineer here at Crossref. I have a long background in open source, software development, and solving unique problems. One of my earliest computer influences was my father. He wrote software to support scientists in search of things like the top quark, the most massive of all observed elementary particles.\nOne day my father came home with over 40 floppy disks, excited to have this cool, free operating system called Linux. Together we installed Linux and ended up with a fully functional computer. Learning and using Linux opened up an entirely new world to me of amazing open-source software that I could use freely. As I enjoyed all this new software now available to me, I tried to fix any bugs or problems I\u0026rsquo;d encounter and report solutions for them to the software developers. 
It felt great to be able to contribute back so others could benefit.\nSoftware teams tend to manage their workflow by writing issues, reviewing them to make sure they make sense and have an achievable goal, estimating how much time each will take to complete, and finally––the crucial step––putting the issues in the order in which they should be completed. To manage my work, I’ve always used Jira––a product designed to help teams of all types prioritize work––and for the first time in over a decade, I find myself not using it in my work.\nProduct development tracking with GitLab The Crossref team took the decision a few years ago to move all our development and product tracking work to GitLab––a commercial open-source product anyone can use to keep track of software throughout the development life cycle––with an open-by-default policy. Work is tracked using the issues feature of GitLab. GitLab will host it, so you don\u0026rsquo;t have to worry about maintenance and backups. One major drawback I discovered with GitLab is a lack of maturity when it comes to doing light project management work.\nThis is where the trouble begins with GitLab.\nIn the board view of your issues, you can transition your issues from waiting to in progress, and from in progress to done. The problem with this view is that it\u0026rsquo;s width-restricted, and things like tags on issues, which are used to help categorize, take up valuable vertical space. With enough tags and a long enough subject line, you can only see five issues at a time on a MacBook Pro monitor, for example.\nIn the list view of your issues, you get a clean, compact view; the perfect view to order issues. However, there\u0026rsquo;s one major flaw: it\u0026rsquo;s paginated. (You know when you\u0026rsquo;re shopping and they make you click to see another page of goods? Yes, like that.) The problem with GitLab\u0026rsquo;s implementation is that you can drag and drop issues on a given page, but there is no way to move the issues to another page in the list of results. Additionally, all newly-created issues are added to the end of the list.\nThe solution I went about finding a solution by visiting GitLab\u0026rsquo;s own public issue page and found that requests requiring user interface (UI) changes would languish; in some cases, they would go years without getting approval. Instead of putting in all the work to open an issue with them, only to have it be discarded or ignored, I decided to look for another way.\nGitLab has an API, what more could I need? I discovered I could log in and get a list of all the issues, by project, and by group. \u0026ldquo;This is perfect!\u0026rdquo;, I thought. I can write my own UI around it. It took three evenings to write a UI that was satisfactory to me. When I started writing JavaScript to interact with the UI, I learned that the \u0026rsquo;re-ordering of issues\u0026rsquo; didn\u0026rsquo;t actually have an API. Further investigation led me to the issue tracker where I found an issue by a GitLab employee asking for the same functionality––the ability to re-order issues.\nWhile in a chatroom for GitLab development, I was genuinely surprised by my experience. There was quick, attentive help locating the file I would need to change; they set up a development environment and even helped submit tests for my code while I worked on updating documentation and writing a changelog entry. It felt like GitLab must’ve designated an employee to work with the community on submitting improvements. 
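For context, the read-only side described above, pulling every issue for a project via the API, needs nothing more than a token and a paginated GET. This is a minimal sketch: the GET /projects/:id/issues route and per_page/page pagination are standard GitLab REST API, while the project ID, token handling, and filters shown are illustrative.

```python
# Illustrative sketch: list every issue in a GitLab project via the REST API,
# walking the page-based pagination that the post's custom UI was built around.
import requests

GITLAB_API = "https://gitlab.com/api/v4"

def all_project_issues(project_id, token):
    """Return all open issues for a project, following pagination."""
    issues, page = [], 1
    while True:
        response = requests.get(
            f"{GITLAB_API}/projects/{project_id}/issues",
            headers={"PRIVATE-TOKEN": token},      # personal access token
            params={"state": "opened", "per_page": 100, "page": page},
            timeout=30,
        )
        response.raise_for_status()
        batch = response.json()
        if not batch:          # an empty page means we've read everything
            break
        issues.extend(batch)
        page += 1
    return issues
```

Reading issues was never the problem; re-ordering them was the missing write operation, and that is the gap the contribution described next filled.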
In no time, the API for re-ordering was implemented. After the scheduled monthly release of GitLab rolled out with my new API, I was able to easily re-order issues.\nGitLab\u0026rsquo;s response when help was needed along the way was impressive. Now there is a much easier method for getting work done that everyone can use. It’s rewarding when you can contribute back to the community for all to benefit.\nIs GitLab as polished as Jira? No. Did they embrace me making changes by being open from the start and providing help along the way? Yes. Do I see Jira shifting its culture to match? Unlikely.\nBy emulating GitLab, an open organization like Crossref has a shot at encouraging community development.\n", "headings": ["TL:DR;","Product development tracking with GitLab","The solution"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/", "title": "Our people", "subtitle":"", "rank": 1, "lastmod": "2021-04-20", "lastmod_ts": 1618876800, "section": "Our people", "tags": [], "description": "We are a distributed team of dedicated people who mostly like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. Our fascination with expired mints has been described as obsessive by some but we prefer to think of it as a passionate hobby. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide, which we take a bit more seriously than ourselves.", "content": "We are a distributed team of dedicated people who mostly like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. Our fascination with expired mints has been described as obsessive by some but we prefer to think of it as a passionate hobby. We enthusiastically support the Oxford comma but waver between use of American or British English. Occasionally we do some work to improve knowledge sharing worldwide, which we take a bit more seriously than ourselves.\nClick through to read more about each of us (most of us hate having photos taken so please do be kind). And take a look at our organization chart too, if you like; you can tell we are very hierarchical.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/similarity-check/", "title": "Similarity Check advisory group", "subtitle":"", "rank": 1, "lastmod": "2021-04-12", "lastmod_ts": 1618185600, "section": "Working groups", "tags": [], "description": "The purpose of the Similarity Check Advisory Group is to provide Crossref with policy and technical advice on changes and improvements to the Crossref Similarity Check service. The group is comprised of Crossref members, all of whom are active users of Similarity Check. The Similarity Check Advisory Group is led by a Chair and Crossref Facilitators, who together help to develop meeting agendas, lead discussions and outline group actions in an effort to help drive service improvements.", "content": "The purpose of the Similarity Check Advisory Group is to provide Crossref with policy and technical advice on changes and improvements to the Crossref Similarity Check service. The group is comprised of Crossref members, all of whom are active users of Similarity Check. 
The Similarity Check Advisory Group is led by a Chair and Crossref Facilitators, who together help to develop meeting agendas, lead discussions and outline group actions in an effort to help drive service improvements. Colleagues from Turnitin will be invited to attend meetings at the discretion of the Chair and Facilitators.\nGroup members Chair: Lauren Flintoft, IOP Publishing Facilitators: Lena Stoll, Crossref; Madhura Amdekar, Crossref\nAdya Misra, Sage Barbara Ryan, Association for Computing Machinery Corrie Petterson, Institute of Electrical and Electronics Engineers Shelly Shochat, Karger Helen Beynon, BMJ Jack Patterson, Wiley John Dufour, American Chemical Society John Sivo, Institute of Electrical and Electronics Engineers Jyoti Bajaj, Taylor \u0026amp; Francis Lois Jones, American Psychological Association Mihail Grecea, Elsevier Paulina Miazgowska, Frontiers Sam Parsons, IOP Publishing Tamara Welschot, Springer Nature (co-Chair) How the group works (and the guidelines) With the exception of Crossref staff, the group will be limited to one representative from each participating publisher, unless particular agenda items or topics call for additional expertise from additional colleagues or departments from within a single organization. Members are, however, free to discuss the information shared during meetings with colleagues or any external party. Members can choose to leave the Advisory Group at any time but are asked to send their resignation in writing to the Chair and Facilitators.\nAdvisory Group members commit to attend all meetings by conference call, and may choose to send a named proxy if they are not available. The schedule of meetings is at the discretion of the Chair and Facilitators and may vary depending on whether there are relevant topics for discussion but are usually held three - four times per year. Notes are circulated by the Facilitator after each call, and any members who were unable to attend a call are asked to ensure they read these and take note of any action items.\nMembers are asked not to invite colleagues or any external party to join Advisory Group meetings unless they have discussed this with the Chair and Facilitators prior to the call. This ensures a consistency in development approach and a level of fluency during meetings.\nPlease contact Lena Stoll with any questions or to apply to join the advisory group.\n", "headings": ["Group members","How the group works (and the guidelines)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/", "title": "Find a service", "subtitle":"", "rank": 4, "lastmod": "2021-04-09", "lastmod_ts": 1617926400, "section": "Find a service", "tags": [], "description": "Metadata enables connections We offer a wide array of services to ensure that scholarly research metadata is registered, linked, and distributed. When members register their content with us, we collect both bibliographic and non-bibliographic metadata. We process it so that connections can be made between publications, people, organizations, and other associated outputs. We preserve the metadata we receive for the scholarly record. We also make it available across a range of interfaces and formats so that the community can use it and build tools with it.", "content": "Metadata enables connections We offer a wide array of services to ensure that scholarly research metadata is registered, linked, and distributed. When members register their content with us, we collect both bibliographic and non-bibliographic metadata. 
We process it so that connections can be made between publications, people, organizations, and other associated outputs. We preserve the metadata we receive for the scholarly record. We also make it available across a range of interfaces and formats so that the community can use it and build tools with it.\nSome of our services are only available to members - most of these services are included in your membership but some involve an extra fee (marked *). We also offer a range of services free of charge to the scholarly community, with the option of premium service versions at an extra charge (marked ~).\nMetadata retrieval (including Metadata Plus~)* Content Registration Reference linking Cited-by Funder Registry Similarity Check* Crossmark Event Data Overview of our services Crossref service Do you need to be a member? About the service and how it benefits you Metadata retrieval No The collective power of our members’ metadata is available to use through a variety of tools and APIs—allowing anyone to search and reuse the metadata in sophisticated ways.\nRead the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Metadata Plus No Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and includes additional features such as snapshots and priority service/rate limits. Content Registration Yes Content Registration allows members to register and update metadata via machine or human interfaces. Read the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Reference linking No Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things. Read the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Cited-by Yes Cited-by shows how work has been received by the wider community, displaying the number of times it has been cited and linking to the citing content. Read the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Funder Registry No The Funder Registry and associated funding metadata allow everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world.\nRead the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Similarity Check Yes A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism.\nRead the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Crossmark Yes The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record.\nRead the factsheet in your language: English, español, العربية, bahasa Indonesia, português do Brasil Event Data No When someone links their data online, or mentions research on a social media site, we capture that Event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it. 
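To illustrate the kind of metadata retrieval described in the overview above, here is a minimal sketch (Python; the search terms, the mailto address, and the result handling are illustrative assumptions rather than anything prescribed by Crossref) of querying the public REST API for works metadata:

import requests  # third-party HTTP client (pip install requests)

# Illustrative query against the public Crossref REST API works endpoint.
# The search terms and the mailto address are placeholders.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": "metadata citation linking", "rows": 5},
    headers={"User-Agent": "example-script (mailto:you@example.org)"},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["message"]["items"]:
    # Each item is one registered work; print its DOI and first title, if any.
    print(item.get("DOI"), (item.get("title") or ["(no title)"])[0])

The same endpoint also supports filters and facets, which is broadly how tools and services built on Crossref metadata retrieve what they need.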
Register Rich metadata leads to greater discoverability\nThe more complete the metadata registered, the more accurate the view of the scholarly record and the more discoverable the content is to the scholarly community.\nThrough our content registration service, members register and maintain metadata for their content. We are interested in the full range of metadata for each publication, including information on:\ncontributors (such as authors, editors, reviewers) funding (such as funding body, grant number) publication history (such as versions, updates, revisions, corrections, retractions, dates) peer review (such as status, type, reviews) access indicators (such as publication license, text \u0026amp; data mining URLs, machine mining URLs) related resources \u0026amp; associated research artifacts (such as preprints, figures \u0026amp; tables, datasets, software, protocols, research resource IDs). Unique identifiers for authors, organizations, and associated scholarly outputs enhance precision and quality. For example, members can deposit accurate funder acknowledgment metadata when they apply the unique funder identifier in Crossref’s Funder Registry service, a regularly updated, industry-standard taxonomy of grant-giving organizations.\nLink Linking improves the scholarly enterprise\nThe complete set of scholarly links spans time, geography, and disciplinary boundaries.\nWe connect all the metadata elements we can accurately identify, from all phases of publication, across content records in our vast corpus. We link literature to people, literature to resources and associated research artifacts, and soon, literature to the activity surrounding the publication. Amongst the vast web of links, we connect research output content to external tools such as Turnitin’s iThenticate in the Similarity Check service, assisting members in plagiarism detection. With the references deposited by members, Crossref offers a Cited-by service so that participating members can discover all the publications that have cited their content. The upcoming Event Data service will offer links between literature and various platforms where it is shared, discussed, mentioned, referenced, reviewed, and considered.\nRetrieve Metadata is meant to be used\nCrossref delivers metadata to systems throughout scholarly communications, making content easy to find, cite, link, assess, and reuse.\nOur metadata retrieval service supports a diverse range of systems by offering a wide range of formats and interfaces. We do this because the range of organizations who use it \u0026ndash; from publishers to libraries, funders to startups \u0026ndash; and how they use it, are diverse. Using metadata facilitates content discoverability \u0026ndash; if it’s rich metadata, all the better. Crossmark is a powerful example: metadata is displayed on publication landing pages through a widget that gives readers quick and easy access to the current status of an item of content. 
With one click, readers can see if content has been updated, corrected or retracted and access additional metadata provided by the member.\n", "headings": ["Metadata enables connections","Overview of our services ","Register","Link","Retrieve"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/committees/membership-and-fees/", "title": "Membership \u0026amp; Fees committee", "subtitle":"", "rank": 3, "lastmod": "2021-04-09", "lastmod_ts": 1617926400, "section": "Committees", "tags": [], "description": "The Membership \u0026amp; Fees Committee (M\u0026amp;F committee) was established in 2001 and plays an important role in our governance. It is made up of organizations that include members, sponsored members, sponsors, and metadata users - some of whom are also board members.\nThe group makes recommendations to the board about fees and policies for all of our services and procedures in relation to fees and community work.\nThey review existing fees to discuss if any changes are needed.", "content": "The Membership \u0026amp; Fees Committee (M\u0026amp;F committee) was established in 2001 and plays an important role in our governance. It is made up of organizations that include members, sponsored members, sponsors, and metadata users - some of whom are also board members.\nThe group makes recommendations to the board about fees and policies for all of our services and procedures in relation to fees and community work.\nThey review existing fees to discuss if any changes are needed. They also review new services while they are being developed, to assess if fees should be charged and if so, what those fees should be, in line with our fee principles, and the Principles of Open Scholarly Infrastructure (POSI). In addition, the board can also delegate specific issues about policies and services to the M\u0026amp;F committee.\nMost recently, in 2021, the committee reviewed a proposal to evolve the \u0026ldquo;fee assistance\u0026rdquo; program into a more expansive Global Equitable Membership (GEM) program. In 2019 the committee undertook a regular fee review which led to three specific cases being recommended to the board, all of which passed:\nRemoving the Crossmark fee Removing the fees for versions and translations (registered by the same member) Updating a set of principles for guiding fee-setting 2024 Remit At the 2023 November Board meeting, the following scope of work was agreed to support the \u0026ldquo;Resourcing Crossref Long-term\u0026rdquo; project:\nRecruit committee members as needed to represent Crossref stakeholders Review and provide feedback on project outputs, including SWOT analysis, modelling of new fees, and impact/effort assessments of fee changes Support staff in getting feedback from the community on fee models and possible changes to current fees. 
Committee work might also include advising on how we engage the community in the process, such as reviewing RFPs for a community engagement consultant, refining the questions we ask, and reviewing the input Make recommendations to the board for any proposed fee changes Share findings publicly with the community M\u0026amp;F committee members Chair: Vincas Grigas, Vilnius University\nCrossref facilitator: Amanda Bartell\nCommittee member Representative Country Size (records) Org/Member type Academicus Journal Arta Musaraj Albania 300 Publisher ACM Scott Delman USA 600,000 Society, Publisher Beilstein Institut* Wendy Patterson Germany 11,000 Publisher, Research Institute Center for Open Science (COS)* Nici Pfeiffer USA 150,000 Researcher Service/Tool Clarivate Analytics* Francesca Buckland USA 11,000 Publisher, Metadata User DOAJ Joanna Ball USA N/A Metadata User eLife Damian Pattinson UK 40,000 Publisher Elsevier* Alok Pendse Netherlands 20,000,000 Publisher Frontiers Marie Souliere Switzerland 400,000 Publisher Institute of Research and Community Services, Diponegoro University Eko Didik Widianto Indonesia 20,000 University, Publisher Japan Association for Language Teaching (JALT) Theron Muller Japan 1000 Society Kampala International University Ademola Olaniyan Uganda 380 University, Publisher Liverpool University Press Clare Hooper UK 46,000 University, Publisher l\u0026rsquo;Université de Parakou Honoré Biaou Benin 60 University, Publisher Noyam Publisher Naa Kai Amanor-Mfoafo Ghana 500 Publisher Open Library of Humanities Rose Harris-Birtill UK 11,000 Scholar-led Pakistan Journal of Botany Rehan Saleem Pakistan 1,500 Publisher TU Delft Open Publishing Frédérique Belliard Netherlands 2,400 University, Publisher Universidad de Guadalajara Ramón Willman Mexico 6,400 University, Publisher University of Lagos Yetunde Zaid Nigeria 130 University, Publisher University of Namibia Anna Leonard Namibia 170 University, Publisher Vilnius University* Vincas Grigas Lithuania 22,587 Society, University, Publisher, Sponsor Wits University Press Andrew Joseph South Africa 1,800 University, Publisher (*) indicates current Crossref board member\nAbout committee participation The M\u0026amp;F Committee meets via one-hour conference calls about four times a year, although this can vary depending on what issues the committee is considering and what topics the board has delegated to them. Often proposals are developed by staff and then reviewed and discussed by the committee – so there is reading to do in preparation for the calls. 
The committee Chair is a board member and acts as a link between the two groups, presenting M\u0026amp;F recommendations to the board for them to vote.\nThis is very important work and it\u0026rsquo;s essential the committee is broadly representative of Crossref’s diverse membership of over 19,000 organizations in 152 countries.\nAll members agree to abide by Crossref\u0026rsquo;s code of conduct.\nPlease contact the community team with any questions, or Amanda Bartell directly.\n", "headings": ["2024 Remit","M\u0026amp;F committee members","About committee participation"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/ambassadors/", "title": "Ambassadors", "subtitle":"", "rank": 1, "lastmod": "2021-03-10", "lastmod_ts": 1615334400, "section": "Get involved", "tags": [], "description": "Crossref Ambassadors are our trusted contacts who work within our communities (as librarians, researchers, publishers, societies, technologists, innovators) around the world and who share a great enthusiasm and belief in our work. Alongside their varied professional roles, they volunteer to support the scholarly community in their locales with ongoing communication, interactive workshops and training from Crossref. They are Crossref’s eyes and ears in the community, and a special part of our team.", "content": "Crossref Ambassadors are our trusted contacts who work within our communities (as librarians, researchers, publishers, societies, technologists, innovators) around the world and who share a great enthusiasm and belief in our work. Alongside their varied professional roles, they volunteer to support the scholarly community in their locales with ongoing communication, interactive workshops and training from Crossref. They are Crossref’s eyes and ears in the community, and a special part of our team. Meet our Ambassadors and find out more about them.\nWhat is it like to be an Ambassador? By being one of our ambassadors you will become a key part of the Crossref community; our first port of call for updates or to test out new products or services, well connected to our wide network of members and you will work closely with us to make scholarly communications better for all.\nThe benefits to ambassadors are:\nThe opportunity to expand your network and forge new relationships with the scholarly research community. Training opportunities in a range of skills and Crossref services. The inside scoop on all things Crossref and happenings in research communications. A sneak preview of new Crossref services and initiatives. Official endorsement and recognition of your role. How we support them We want to make sure our ambassadors have the help they need to succeed in their roles. We’ll contact you with all the information to bring you on-board and we’ll schedule regular catch-ups to give any updates, answer any questions and generally check-in with you. We’ll also give you access to resource packs to assist your outreach activities (including slide decks and handouts), special edition ambassador digital badges and other goodies. We will also help support your attendance at relevant Crossref meetings so that you can see what we’re working on and what’s coming next!\nHow they support our communities We’re setting up the ambassador program as we’ve heard from members and users they would like\u0026hellip;\nA local expert on-hand to provide support as and when required. Increased training events both online and in person, in your local region, timezone and language. 
Representatives from Crossref at events in your region. A conduit for other members and stakeholders in the region. A liaison to the Crossref teams in the US and Europe. Apply to become an Ambassador If you are interested in working with us please fill out the form below to give us a little information about yourself. We’ll then get back to you to follow-up and discuss your plans and ideas. The program is very flexible so you can pick and choose what you’d like to be involved in based on your other commitments - we know that people are busy!\nIf you\u0026rsquo;re a brand new member and still getting to grips with everything Crossref, then this role may be a little too advanced for you. However, our services page is full of helpful information and you could look into attending one of our webinars or LIVE local events to find out more.\nLoading... If you are unable to access the form above, please download this pdf and send it back to us.\nPlease contact our outreach team with any questions.\n", "headings": ["What is it like to be an Ambassador?","How we support them","How they support our communities","Apply to become an Ambassador"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/our-ambassadors/", "title": "Meet our ambassadors", "subtitle":"", "rank": 1, "lastmod": "2021-03-10", "lastmod_ts": 1615334400, "section": "Get involved", "tags": [], "description": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.", "content": "The Crossref Ambassador Program is an exciting and important program initiated in early 2018, and one which fully embraces a key strategic focus\u0026mdash;to adapt to expanding constituencies.\nOur Ambassadors are enthusiastic volunteers who work within the global academic community in a variety of ways\u0026mdash;as librarians, researchers, publishers, and societies,\u0026mdash;and all of whom share a strong belief in the mission-driven work we do to improve scholarly research communication. They support us by using their industry expertise, local knowledge, and translation skills to represent Crossref at regional events\u0026mdash;providing training to our members in different languages, locations and time zones.\nSee who is based in your region:\nAfrica Asia Americas Europe Oceania Country groupings are based on the geographic regions defined under the Standard Country or Area Codes for Statistical Use (known as M49) of the United Nations Statistics Division. The assignment of countries or areas to specific groupings does not imply any assumption regarding political or any other affiliation of countries or territories.\nApply to become an ambassador If you are interested in finding out more about the Ambassador Program and working with us please fill out the application form to give us a little information about yourself. 
We\u0026rsquo;ll then get back to you to follow-up and discuss your plans and ideas.\nBack to top\nPlease contact our outreach team with any questions.\n", "headings": ["Africa","Asia","Americas","Europe","Oceania","Apply to become an ambassador"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/mike-yalter/", "title": "Mike Yalter", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/stepping-up-our-deposit-processing-game/", "title": "Stepping up our deposit processing game", "subtitle":"", "rank": 1, "lastmod": "2021-03-08", "lastmod_ts": 1615161600, "section": "Blog", "tags": [], "description": "Some of you who have submitted content to us during the first two months of 2021 may have experienced content registration delays. We noticed; you did, too.\nThe time between us receiving XML from members, to the content being registered with us and the DOI resolving to the correct resolution URL, is usually a matter of minutes. Some submissions take longer - for example, book registrations with large reference lists, or very large files from larger publishers can take up to 24 to 48 hours to process.", "content": "Some of you who have submitted content to us during the first two months of 2021 may have experienced content registration delays. We noticed; you did, too.\nThe time between us receiving XML from members, to the content being registered with us and the DOI resolving to the correct resolution URL, is usually a matter of minutes. Some submissions take longer - for example, book registrations with large reference lists, or very large files from larger publishers can take up to 24 to 48 hours to process.\nHowever, in January and February 2021 we saw content registration delays of several days for all record types and all file sizes.\nTell me more Januaries and Februaries are usually busy at Crossref. Journal ownership changes hands. Members migrate from one platform to another (and can need to update tens of thousands of their resolution URLs). And, many of you are registering your first issues, books, or conferences of the year. Others of you have heard the calls of The Initiative for Open Citations (I4OC) and The Initiative for Open Abstracts (I4OA) and are enriching your metadata accordingly (thank you!). Tickets into our support and membership colleagues peak for the year. But did we see significantly more submissions this year?\nAs you can see, we did see larger-than-normal numbers of submissions in the first two months of the year. For the entire month of January 2021, we received nearly 1 million more submissions into our admin tool deposit queue than we did in January 2020 (2,757,781 in 2021 versus 1,848,261 in 2020). Under normal circumstances, this would lead to an increase in our processing times, so there’s that to consider. But there was also something else at play this year. We desperately needed to upgrade our load balancer, and so we did. Unfortunately, unforeseen at the time, these upgrades caused hiccups in our deposit processing and slowed down submissions even further, building up the number of unprocessed submissions in the queue.\nWhen we saw the impact this was having we suspended the load balancer work until things were stable again. 
We also increased the resources serving our queue to bring it back down to normal. To make sure we don\u0026rsquo;t face the same problem again, we have put in better tools to detect trends in queue usage - tools which, in turn, will allow us to anticipate problems in the queue instead of reacting to them after they\u0026rsquo;ve already occurred. And as a longer-term project, we are addressing two decades of technical debt and rearchitecting our system so that it is much more efficient.\nGory technical details As part of our effort to resolve our technical debt, we\u0026rsquo;re looking to transition more of our services to the cloud. To accomplish this, we first needed to upgrade our internal traffic handling capabilities to route things to their new locations better. This upgrade caused some unforeseen and hard-to-notice problems, like the queue being stalled. Since the queue still showed things in process, it wasn\u0026rsquo;t immediately apparent that things were not processing (normally the processing on the queue will clear a thread if a significant problem occurs).\nWe initially noticed a problem on 5 February and thought we had a fix in place on the 10th. But we realized again on 16 February that the underlying problem had recurred, and we needed a closer investigation.\nFor many reasons, it took us too long to make the connection - until people started complaining.\nWhile our technical team worked on those load balancer upgrades, some of your submissions lingered for days in the deposit queue. In a few examples, larger submissions took over a week to complete processing. Total pending submissions began to push nearly 100,000, an unusually large backlog. We called an emergency meeting, paused all related work, and dedicated additional time and resources to processing all pending submissions. On 22 February, we completed working through the backlog of pending submissions, and new submissions were being processed at normal levels. As we finish up this blog on 2 March, there are fewer than 3,000 pending submissions in the queue, the oldest of which has been there for less than three hours.\nThis brings us back to the entire rationale for what we are doing with the load balancer - which, ironically, was to move some services out of the data centre so that we could free up resources and scale things more dynamically to match the ebbs and flows of your content registration.\nBut before we proceed, we\u0026rsquo;ll be looking at what happened. The bumps associated with upgrading ancient software were expected, so we were looking for side effects. We just didn\u0026rsquo;t look in the right place. And we should have detected that the queues had stalled well before people started to report it to us. A lot of our queue management is still manual. This means we are not adjusting it 24x7. So if something does come in when we are not around, it can exacerbate problems quickly.\nWhat are we going to do about it? In a word: much. We know that timely deposit processing is critical. We can and will do better.\nFirst off, we have increased the number of concurrently processing threads dedicated to metadata uploads in our deposit queue from 20 to 25. That’s a permanent increase. A million more submissions in a month necessitates additional resources, but that’s only a short-term patch. 
And we were only able to make this change recently due to some index optimizations we implemented late last year.\nOne of the other things that we\u0026rsquo;ve immediately put into place is a better system for measuring trends in our queue usage so that we can, in turn, anticipate rather than react to surges in the queue. And, of course, the next step will be to automate this queue management.\nAll this is part of an overall, multi-year effort to address a boat-load of technical debt that we\u0026rsquo;ve accumulated over two decades. Our system was designed to handle a few million DOIs. It has been incrementally poked and prodded to deal with well over a hundred million. But it is suffering.\nAnybody who is even semi-technically-aware might be wondering what all the fuss is about? Why can\u0026rsquo;t we fix this relatively easily? After all, 130 million records\u0026mdash;though a significant milestone for Crossref\u0026mdash;does not in any way qualify as \u0026ldquo;big data.\u0026rdquo; All our DOI records fit onto an average sized micro-SD card. There are open source toolchains that can manage data many, many times this size. We\u0026rsquo;ve occasionally used these tools to load and analyse all our DOI records on a desktop computer. And it has taken in just a few minutes (admittedly using a beefier-than-usual desktop computer). So how can a queue with just 100,000 items in it take so long to process?\nOur scale problem isn\u0026rsquo;t so much about the number of records we process. It is about the 20 years of accumulated processing rules and services that we have in place. Much of it undocumented and the rationale for which has been lost over the decades. It is this complexity that slows us down.\nAnd one of the challenges we face as we move to a new architecture is deciding which of these rules and services are \u0026ldquo;essential complexity\u0026rdquo; and which are not. For example, we have very complex rules for verifying that submissions contain a correct journal title. These rules involve a lot of text matching and, until they are successfully completed, they block the rest of the registration process.\nBut the workflow these rules are designed for is one that was developed before ISSNs were widely deposited and before we had our own, internal title identifiers for items that do not have an ISSN. And so a lot of this process is probably anachronistic. It is not clear which (if any) parts of it are still essential.\nWe have layers upon layers of these kinds of processing rules, many of which are mutually dependent and which are therefore not easily amenable to the kind of horizontal scaling that is the basis for modern, scalable data processing toolchains. All this means that, as part of moving to a new architecture, we also have to understand which rules and services we need to move over and which ones have outlived their usefulness. And we need to understand which remaining rules can be decoupled so that they can be run in parallel instead of in sequence.\nPro tip: Due to the current checks performed in our admin tool, for those of you submitting XML, the most efficient way to do so is by packaging the equivalent of a journal issue\u0026rsquo;s worth of content in each submission (i.e., ten to twelve content items - a 1 MB submission is our suggested file size when striving for efficient processing)\nWhich brings us conveniently back to queues. We did not react soon enough to the queue backing up. 
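As a purely illustrative sketch of the trend detection described above - the class name, sampling interval, window size, and threshold are all assumptions, not a description of Crossref's actual tooling - a simple monitor could compare recent queue depth against an earlier baseline and raise an alert when the queue keeps growing instead of draining:

from collections import deque

# Hypothetical sketch: flag a deposit queue that keeps growing instead of draining.
# The window size and growth threshold are assumptions for illustration only.
class QueueTrendMonitor:
    def __init__(self, window=12, growth_threshold=1.5):
        self.samples = deque(maxlen=window)   # recent queue-depth samples
        self.growth_threshold = growth_threshold

    def record(self, pending_submissions):
        """Record the current number of pending submissions; return True if we should alert."""
        self.samples.append(pending_submissions)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        half = len(self.samples) // 2
        values = list(self.samples)
        baseline = sum(values[:half]) / half                      # earlier average depth
        recent = sum(values[half:]) / (len(values) - half)        # latest average depth
        # Alert when the recent average is well above the baseline, i.e. the queue is trending up.
        return baseline > 0 and recent / baseline >= self.growth_threshold

# Example: depth sampled every few minutes; a sustained rise triggers a notification.
monitor = QueueTrendMonitor()
for depth in [3000, 3200, 3100, 3400, 3900, 4500, 5600, 7000, 9000, 12000, 16000, 21000]:
    if monitor.record(depth):
        print("Queue depth trending up - investigate before the backlog grows further:", depth)

A check like this only helps if it runs continuously and notifies someone, which is what automating the queue management (as mentioned above) is about.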
We can do much better at monitoring and managing our existing registration pipeline infrastructure. But we are not fooling ourselves into thinking this will deal with the systemic issue.\nWe recognize that, with current technology and tools, it is absurd that a queue of 100,000 items should take so long to process. It is also important that people know we are addressing the root of the issues. And that we\u0026rsquo;re not succumbing to the now-legendary anti-pattern of trying to rewrite our system from scratch. Instead we are building a framework that will allow us to incrementally extract the essential complexity of our existing system and discard some of the anachronistic jetsam that has accumulated over the years.\nContent Registration should typically take seconds. We wanted to let you know, that we know, and we are working on it.\n", "headings": ["Tell me more","Gory technical details","What are we going to do about it?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/about/", "title": "About us", "subtitle":"", "rank": 1, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "About us", "tags": [], "description": "We envision a rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nWe\u0026rsquo;ve been progressing towards this vision for two decades so far. And \u0026ldquo;we\u0026rdquo; means 17,000+ members from 146 countries, 130+ million records, 600+ million monthly metadata queries from thousands of tools across the research ecosystem. An ecosystem that includes several other foundational infrastructure organisations we collaborate with.", "content": " We envision a rich and reusable open network of relationships connecting research organisations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society.\nWe\u0026rsquo;ve been progressing towards this vision for two decades so far. And \u0026ldquo;we\u0026rdquo; means 17,000+ members from 146 countries, 130+ million records, 600+ million monthly metadata queries from thousands of tools across the research ecosystem. An ecosystem that includes several other foundational infrastructure organisations we collaborate with.\nTake a look at our strategic agenda to see the planned work that aims to achieve the vision. The sustainability area aims to make transparent all the processes and procedures we follow to run the operation for the long term, including our financials and our ongoing commitment to the Principles of Open Scholarly Infrastructure (POSI). The governance area describes our board and its role in community oversight.\nIt also takes a strong team. We are a distributed group of 45 dedicated people who like to play quizzes, talk about celery (sometimes cucumber), measure coffee intake, and create 100s of custom slack emojis. Our fascination with expired mints has been described as obsessive by some but we prefer to think of it as a passionate hobby. We enthusiastically support the Oxford comma but waver between use of American or British English. 
Occasionally we do some work to improve knowledge sharing worldwide\u0026mdash;which we take a bit more seriously than ourselves.\nTake a look at our organisation chart.\nOur mission Crossref makes research objects easy to find, cite, link, assess, and reuse.\nWe’re a not-for-profit membership organisation that exists to make scholarly communications better. We rally the community; tag and share metadata; run an open infrastructure; play with technology; and make tools and services—all to help put research in context.\nIt’s as simple—and as complicated—as that.\nRally\nGetting the community working together to make scholarly communications better\nTag\nStructuring, processing, and sharing metadata to reveal relationships between research outputs\nRun\nOperating a shared, open infrastructure that is community-governed and evolves with changing needs\nPlay\nEngaging in debate and experimenting with technology to solve our members’ problems\nMake\nCreating tools and services to enable connections and give context\nHow do we work? These are the Crossref \u0026lsquo;truths\u0026rsquo;, the principles that guide everything we do. Read our truths page, with full descriptions for each.\nCome one, come all Smart alone, brilliant together One member, one vote Love metadata, love technology What you see, what you get Here today, here tomorrow How we started We started in January 2000 with one employee, Ed Pentz, as Executive Director. Read about our history in Ed\u0026rsquo;s words. Since we started we\u0026rsquo;ve grown to forty-five people (as of 2022) and over 17,000 members from 146 countries.\nRead our background story, with an overview of our current services, in this document:\nRead or download this background story as a PDF in English, Spanish or Brazilian Portuguese.\nPlease contact our outreach team with any questions.\n", "headings": ["Our mission","How do we work?","How we started"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/crossmark/", "title": "Crossmark advisory group", "subtitle":"", "rank": 2, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "Working groups", "tags": [], "description": "The Crossmark Advisory Group’s role is to provide Crossref with policy and technical advice on changes and developments to Crossmark. The group is comprised of Crossref members who have implemented or are planning to implement Crossmark on their publications, and is led by a Chair and Crossref staff facilitator. It is currently not active.\nGroup members Chair: TBC Facilitator: Martyn Rittman, Crossref\nChristopher McMahon, AIP Emily-Sue Sloane, AIP Theo Bloom, BMJ Keith Waters, CUP Egbert van Wezenbeek, Elsevier Omer Gazit, F1000 Research Michael Evans, F1000 Research Peter Strickland, IUCr Joseph Brown, PLOS Rob O’Donnell, Rockefeller University Press Michael Waters, Springer David Burgoyne, T\u0026amp;F Nicholas Everitt, T\u0026amp;F Edward Wates, Wiley How the group works (and the guidelines) Members commit to attend all meetings by conference call, and may choose to send a named proxy if they are not available.", "content": "The Crossmark Advisory Group’s role is to provide Crossref with policy and technical advice on changes and developments to Crossmark. The group is comprised of Crossref members who have implemented or are planning to implement Crossmark on their publications, and is led by a Chair and Crossref staff facilitator. 
It is currently not active.\nGroup members Chair: TBC Facilitator: Martyn Rittman, Crossref\nChristopher McMahon, AIP Emily-Sue Sloane, AIP Theo Bloom, BMJ Keith Waters, CUP Egbert van Wezenbeek, Elsevier Omer Gazit, F1000 Research Michael Evans, F1000 Research Peter Strickland, IUCr Joseph Brown, PLOS Rob O’Donnell, Rockefeller University Press Michael Waters, Springer David Burgoyne, T\u0026amp;F Nicholas Everitt, T\u0026amp;F Edward Wates, Wiley How the group works (and the guidelines) Members commit to attend all meetings by conference call, and may choose to send a named proxy if they are not available. Meeting notes will be circulated to all by the facilitator. The schedule of meetings is at the discretion of the chair and facilitator and may vary depending on whether there are relevant topics for discussion, but will not be more than one per quarter.\nWith the exception of Crossref staff, the group will be limited to one representative from each participating organization, unless particular agenda items or topics call for domain expertise from specific colleagues or departments. Members are, however, free to discuss the information shared during meetings with colleagues or any external party.\nPlease contact Martyn Rittman with any questions or to apply to join the advisory group.\n", "headings": ["Group members","How the group works (and the guidelines)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/event-data/", "title": "Event Data advisory group", "subtitle":"", "rank": 1, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "Working groups", "tags": [], "description": "The purpose of the Event Data Advisory Group is to provide Crossref with policy and technical advice regarding developments, changes and improvements to the Crossref Event Data service. The group is comprised of both our members as well as non-members (third party platforms and organizations) who are interested in consuming our Event Data records for their own use case.\nGroup Members Chair: John Chodacki, California Digital Library Facilitator: Martyn Rittman, Crossref", "content": "The purpose of the Event Data Advisory Group is to provide Crossref with policy and technical advice regarding developments, changes and improvements to the Crossref Event Data service. 
The group is comprised of both our members as well as non-members (third party platforms and organizations) who are interested in consuming our Event Data records for their own use case.\nGroup Members Chair: John Chodacki, California Digital Library Facilitator: Martyn Rittman, Crossref\nEuan Adie, Altmetric Tim Stevenson, BioMed Central Theo Bloom, BMJ Martin Fenner, DataCite Mike Taylor, Digital Science Mark Patterson, eLife Craig Jurney, Highwire Sebastian Pöhlmann, Mendeley Maciej Rymarz, Mendeley Juan Pablo Alperin, Public Knowledge Project Katie Hickling, PLOS Lorraine Estelle, COUNTER Damian Pattinson, Research Square Martijn Roelandse, Springer Nature Christian Hauschke, TIB Leibniz Universität Hannover Mike Thelwell, University of Wolverhampton Liz Ferguson, Wiley Christina Lohr, Elsevier Kaveh Bazargan, River Valley Technologies How the group works (and the guidelines) The Event Data Advisory Group is led by a Chair and a Crossref Facilitator, who together help to develop meeting agendas, lead discussions, outline group actions and rally the community outside of the Advisory Group for support with the service where appropriate.\nThe Working Group is currently not active, however we continue to maintain the Event Data API.\nPlease contact Martyn Rittman with any questions.\n", "headings": ["Group Members","How the group works (and the guidelines)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/truths/", "title": "Our truths", "subtitle":"", "rank": 1, "lastmod": "2021-02-26", "lastmod_ts": 1614297600, "section": "Our truths", "tags": [], "description": "These are the Crossref truths. These are the principles that guide us in everything we do. They were tested with members and adopted by our board in 2016. We tweaked and added \u0026ldquo;What you see, what you get\u0026rdquo; in 2017.\nCome one, come all We define publishing broadly. If you communicate research and care about preserving the scholarly record, join us. We are a global community of members with content in all disciplines, in many formats, and with all kinds of business models.", "content": "These are the Crossref truths. These are the principles that guide us in everything we do. They were tested with members and adopted by our board in 2016. We tweaked and added \u0026ldquo;What you see, what you get\u0026rdquo; in 2017.\nCome one, come all We define publishing broadly. If you communicate research and care about preserving the scholarly record, join us. We are a global community of members with content in all disciplines, in many formats, and with all kinds of business models.\nOne member, one vote Help us set the agenda. It doesn’t matter how big or small you are, every member gets a single vote to create a board that represents all types of members.\nSmart alone, brilliant together Collaboration is at the core of everything we do. We involve the community through active working groups and committees. Our focus is on things that are best achieved by working together.\nLove metadata, love technology We do R\u0026amp;D to support and expand the shared infrastructure we run for the scholarly community. We create open tools and APIs to help enrich and exchange metadata with thousands of third parties, to drive discoverability of our members’ content.\nWhat you see, what you get Ask us anything. We’ll tell you what we know. Openness and transparency are principles that guide everything we do.\nHere today, here tomorrow We’re here for the long haul. 
Our obsession with persistence applies to all things\u0026mdash;metadata, links, technology, and the organization. But “persistent” doesn’t mean “static”; as research communications continues to evolve, so do we.\nPlease let us know what you think by contacting our outreach team.\n", "headings": ["Come one, come all","One member, one vote","Smart alone, brilliant together","Love metadata, love technology","What you see, what you get","Here today, here tomorrow"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/discuss-all-things-metadata-in-our-new-community-forum/", "title": "Discuss all things metadata in our new community forum", "subtitle":"", "rank": 1, "lastmod": "2021-02-11", "lastmod_ts": 1613001600, "section": "Blog", "tags": [], "description": "TL;DR: We have a Community Forum (yay!), you can come and join it here: community.crossref.org.\nCommunity is fundamental to us at Crossref, we wouldn’t be where we are or achieve the great things we do without the involvement of you, our diverse and engaged members and users. Crossref was founded as a collaboration of publishers with the shared goal of making links between research outputs easier, building a foundational infrastructure making research easier to find, cite, link, assess, and re-use.", "content": "TL;DR: We have a Community Forum (yay!), you can come and join it here: community.crossref.org.\nCommunity is fundamental to us at Crossref, we wouldn’t be where we are or achieve the great things we do without the involvement of you, our diverse and engaged members and users. Crossref was founded as a collaboration of publishers with the shared goal of making links between research outputs easier, building a foundational infrastructure making research easier to find, cite, link, assess, and re-use. It is at the very core of what we do and who we are. Our global community now includes publishers, libraries, government agencies, funders, researchers, universities, ambassadors, and more from over 140 countries. We are also actively part of the larger scholarly research community, which includes other open scholarly infrastructure organizations, metadata users and aggregators, open science initiatives, and others with shared aims and values.\nWhat do we mean by \u0026lsquo;community\u0026rsquo;? ‘Community’ is often one of those words which gets bandied around without much thought given to its meaning. At Crossref, we are aware that expertise lies within our broad, global community and we engage with them (you!) in a variety of ways to ensure that decisions we make are community-led and that what we do, as well as what we don’t do, are in line with the views of our members and developed with your insights and input. We do this via our working groups, committees, ambassador program, beta-testing groups, in-person and online events, webinars, and on-going dialogues and feedback via our support channels and even social media. We are also involved in a number of collaborative projects with other organizations such as ROR, Metadata 2020, Make Data Count, PIDapooloza, and the FREYA project to name but a few.\nCommunity is more than just signing up to be a Crossref member. It’s more than just attending an event or a webinar, or levelling up to include the use of a service like Crossmark or Similarity Check –– it’s really engaging with us and creating something together of shared value for the scholarly community. 
As an organization, we’ve been so thrilled that there is a new group dedicated to highlighting community managers and our work. We are working with –– and learning a lot from –– the Centre for Scientific Collaboration \u0026amp; Community Engagement to improve the way we interact and involve people in Crossref. The model below shows a trajectory towards true collaboration that we aim to follow in the coming months and years.\nCite as: Center for Scientific Collaboration and Community Engagement. (2020) The CSCCE Community Participation Model – A framework for member engagement and information flow in STEM communities. Woodley and Pratt doi: 10.5281/zenodo.3997802\nIn the current climate, there are additional challenges and limitations on how we interact with all the various communities that we as individuals are a part of, both professionally and personally. I wrote in my last blog about how we have moved our events online and thought about new ways to better connect and engage with our community virtually. One of those ways is our Community Forum.\nThe purpose of our community forum Hosted on the open-source discussion platform Discourse, you can find our forum at community.crossref.org. The goal of the community forum is to create an inclusive, open space where Crossref members, ambassadors, sponsors, service providers, and others who share a passion for scholarly infrastructure, can connect. This enables collaborative problem-solving, the sharing of expertise and experiences across time zones and languages, and allows members to post questions to be answered by other community members or even our staff. Members of the community engage via creating posts, commenting on existing content in the forum, volunteering for working groups or beta-testing projects, helping to co-create materials that include translations and shared FAQs, giving feedback on new developments, and joining online events and webinars. Throughout these interactions, we expect that those who use the community forum will form relationships –– a collective working together to advance their work with Crossref and shape the future of scholarly infrastructure.\nWhen I joined Crossref as Community Manager over three years ago, the idea of a forum had already begun to take shape, but it wasn’t quite there just yet. There was additional research and consultation with the community to be done to check this was the approach we wanted to take.\nThis involved speaking to others working in scholarly communications about forums they were involved in running or were an active participant of –– check out the PKP forum for instance if you haven’t already –– and having numerous valuable conversations about successes, potential downfalls, and realistic expectations. The most important –– and commonly cited –– takeaway is that building an online community takes time. We are still at the start of this journey. It will only work if it is a place of value for all and a place where people feel a sense of belonging and co-ownership.\nPreparing to rollout the forum We tested the platform with a small group of beta-testers and also sent out a survey to over 1,700 of our members, taking a sample with a geographical and organizational spread. The responses thankfully held no major surprises and reinforced our belief that this is something of use to people.\nKey research findings 77% of respondents had previously contacted our Support team for help resolving an issue. 
90% stated either ‘yes’ or ‘maybe’ to whether they would use a community forum to post their questions, though over half have never used a forum before. Most common reasons of importance for joining are \u0026lsquo;Community support in solving issues or answering questions\u0026rsquo;, \u0026lsquo;To locate FAQs and quickly find answers to common issues\u0026rsquo;, and \u0026lsquo;To connect with others working in a similar role and/or with similar interests\u0026rsquo; Most commonly-stated things that would discourage or limit member’s participation would be how time-consuming and complex the forum is to use, and any potential language barriers. Things you can do on the forum We hope this will provide a much more open level of support for the community, enabling us to bring out all those great questions and thoughtful conversations we receive via our Support channels into the public sphere, where we can all benefit from these rich exchanges. Ultimately our goal for the future is that this space is owned by you, the Crossref community. This is a platform for you to connect and build relationships with others working in scholarly communications: metadata fanatics, identifier aficionados, developer gurus, and open research enthusiasts - we welcome you all!\nShare what activities or projects you are working on and get input from others. Share issues that you need some help resolving, post a question to the forum in your native language and get help from another community member. Give us feedback on our plans and help us shape future developments at Crossref. Test out new tools and services. Find out about upcoming events and webinars, and share any you think are of interest to the community. Help us identify better ways of working together through Crossref and co-create new materials and projects. How to get started So, how do I sign up you ask? Simply head over to community.crossref.org and set up an account. There\u0026rsquo;s a useful How-To guide available on our welcome post, as well as some Community Guidelines all our members should follow.\nDo you have a question about registering or updating your metadata? Then head over to the Content Registration category and post your query to the group. Want to find out about getting started with Similarity Check service? Then take a look at our Similarity Check topic in our services category. Or maybe you want to know more about upcoming multilingual webinars at Crossref, or perhaps you have one of your own you’d like to share? Then check out the Community Calendar.\nWe’re also looking for talented linguists out there to help us translate our welcome email template into multiple languages so that anyone joining the community can get a welcome in their native language. To join in, visit my post in our ‘Questions from Crossref’ category.\nWe look forward to seeing you in the community soon!\n", "headings": ["What do we mean by \u0026lsquo;community\u0026rsquo;?","The purpose of our community forum","Preparing to rollout the forum","Key research findings","Things you can do on the forum","How to get started"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/cited-by/", "title": "Cited-by", "subtitle":"", "rank": 1, "lastmod": "2021-02-09", "lastmod_ts": 1612828800, "section": "Documentation", "tags": [], "description": "Cited-by supports members who display citations, enabling the community to discover connections between research outputs. 
Scholars use citations to critique and build on existing research, acknowledging the contributions of others. Members can include references in their metadata deposits, which Crossref uses to create links between works that cite each other. The number of citations each work receives is visible to anyone through our public APIs. Through our Cited-by service, members who deposit reference metadata can retrieve everything they need to display citations on their website.", "content": " Cited-by supports members who display citations, enabling the community to discover connections between research outputs. Scholars use citations to critique and build on existing research, acknowledging the contributions of others. Members can include references in their metadata deposits, which Crossref uses to create links between works that cite each other. The number of citations each work receives is visible to anyone through our public APIs. Through our Cited-by service, members who deposit reference metadata can retrieve everything they need to display citations on their website.\nMembers who use this service are helping readers to:\neasily navigate to related research, see how the work has been received by the wider community, explore how ideas evolve over time by highlighting connections between works. Watch the introductory Cited-by animation in your language:\nEnglish 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil How Cited-by works Cited-by begins with references deposited as part of the metadata records for your content. Learn more about depositing references.\nA member registers content for a work, the citing paper. This metadata deposit includes the reference list. Crossref automatically checks these references for matches to other registered content. If this is successful, a relationship between the two works is created. Crossref logs these relationships and updates the citation counts for each work. You can retrieve citation counts through our public APIs.\nMembers can use the Cited-by service to retrieve the full list of citing works, along with all the bibliographic details needed to display them on their website.\nNote that citations from Crossref may differ from those provided by other services because we only look for links between Crossref-registered works and don\u0026rsquo;t share the same method to find matches.\nObligations and fees for Cited-by Participation in Cited-by is optional, but encouraged. There is no charge for Cited-by. Depositing references is not a requirement, but strongly encouraged if you are using Cited-by. Best practice for Cited-by Because citations can happen at any time, Cited-by links must be kept up-to-date. Members should either check regularly for new citations or (if performing XML queries) set the alert attribute to true. This means the search will be saved in the system and you’ll get an alert when there is a new match.\nDepositing your own references is strongly encouraged if you use Cited-by. If you don\u0026rsquo;t, the citations you retrieve will not include those from your own works. This is likely to lead to under-reporting of citation counts by at least 20%, and you miss the opportunity to point readers to other similar works on your platform.\nDownload the Cited-by factsheet, and explore factsheets for other Crossref services and in different languages.\nHow to access citation matches Members who deposit references allow them to be retrieved by anyone using our public APIs. 
For example:\nhttp://api.crossref.org/works?filter=has-references:true Also in the metadata is the number of citations a work has received, under the tag \u0026quot;is-referenced-by-count\u0026quot;.\nTo retrieve the full list of citations you need to be a member using Cited-by. While anyone can use an API query to see the number of citations a work has received, members can retrieve a full list of citing DOIs and receive callback notifications informing them when one of their works has been cited. Details of the citing works can be displayed on your website alongside the article.\nIn 2023 we are developing a new API endpoint that will make citations more accessible to the community; for a preview, see this announcement.\n", "headings": ["Cited-by supports members who display citations, enabling the community to discover connections between research outputs.","How Cited-by works ","Obligations and fees for Cited-by ","Best practice for Cited-by ","How to access citation matches "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/cited-by/", "title": "Cited-by", "subtitle":"", "rank": 5, "lastmod": "2021-02-09", "lastmod_ts": 1612828800, "section": "Find a service", "tags": [], "description": "Cited-by supports members who display citations, enabling the community to discover connections between research outputs. Scholars use citations to critique and build on existing research, acknowledging the contributions of others. Members can include references in their metadata deposits, which Crossref uses to create links between works that cite each other. The number of citations each work receives is visible to anyone through our public APIs. Through our Cited-by service, members who deposit reference metadata can retrieve everything they need to display citations on their website.", "content": " Cited-by supports members who display citations, enabling the community to discover connections between research outputs. Scholars use citations to critique and build on existing research, acknowledging the contributions of others. Members can include references in their metadata deposits, which Crossref uses to create links between works that cite each other. The number of citations each work receives is visible to anyone through our public APIs. Through our Cited-by service, members who deposit reference metadata can retrieve everything they need to display citations on their website.\nMembers who use this service are helping readers to:\neasily navigate to related research, see how the work has been received by the wider community, explore how ideas evolve over time by highlighting connections between works. Watch the introductory Cited-by animation in your language:\nEnglish 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil How Cited-by works Cited-by begins with references deposited as part of the metadata records for your content. Learn more about depositing references.\nA member registers content for a work, the citing paper. This metadata deposit includes the reference list. Crossref automatically checks these references for matches to other registered content. If this is successful, a relationship between the two works is created. Crossref logs these relationships and updates the citation counts for each work. 
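As a small, hedged illustration of retrieving these counts (Python; the DOI shown is a placeholder and error handling is minimal), anyone can query the public REST API for the is-referenced-by-count field, and use the has-references filter from the example above:

import requests  # third-party HTTP client (pip install requests)

# Illustrative lookup of a single work's citation count; the DOI is a placeholder.
doi = "10.5555/12345678"
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]
# "is-referenced-by-count" is the field named above: how many registered works cite this one.
print(doi, "has", work.get("is-referenced-by-count", 0), "citations")

# The filter from the example URL above lists works that have deposited references.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": "has-references:true", "rows": 3},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["message"]["items"]:
    print(item.get("DOI"), "->", item.get("is-referenced-by-count", 0), "citations")

Retrieving the full list of citing works, as described next, is only available to members participating in Cited-by.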
You can retrieve citation counts through our public APIs.\nMembers can use the Cited-by service to retrieve the full list of citing works, along with all the bibliographic details needed to display them on their website.\nNote that citations from Crossref may differ from those provided by other services because we only look for links between Crossref-registered works and don\u0026rsquo;t share the same method to find matches.\nObligations and fees for Cited-by Participation in Cited-by is optional, but encouraged. There is no charge for Cited-by. Depositing references is not a requirement, but strongly encouraged if you are using Cited-by. Best practice for Cited-by Because citations can happen at any time, Cited-by links must be kept up-to-date. Members should either check regularly for new citations or (if performing XML queries) set the alert attribute to true. This means the search will be saved in the system and you’ll get an alert when there is a new match.\nDepositing your own references is strongly encouraged if you use Cited-by. If you don\u0026rsquo;t, the citations you retrieve will not include those from your own works. This is likely to lead to under-reporting of citation counts by at least 20%, and you will miss the opportunity to point readers to other similar works on your platform.\nDownload the Cited-by factsheet, and explore factsheets for other Crossref services and in different languages.\nHow to access citation matches Members who deposit references allow them to be retrieved by anyone using our public APIs. For example:\nhttp://api.crossref.org/works?filter=has-references:true Also in the metadata is the number of citations a work has received, under the tag \u0026quot;is-referenced-by-count\u0026quot;.\nTo retrieve the full list of citations you need to be a member using Cited-by. While anyone can use an API query to see the number of citations a work has received, members can retrieve a full list of citing DOIs and callback notifications informing them when one of their works has been cited. Details of the citing works can be displayed on your website alongside the article.\nIn 2023 we are developing a new API endpoint that will make citations more accessible to the community; for a preview, see this announcement.\nGetting started with Cited-by Learn more about Cited-by in our documentation.\n", "headings": ["Cited-by supports members who display citations, enabling the community to discover connections between research outputs.","How Cited-by works ","Obligations and fees for Cited-by ","Best practice for Cited-by ","How to access citation matches ","Getting started with Cited-by "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/funders/", "title": "Funder advisory group", "subtitle":"", "rank": 3, "lastmod": "2021-02-07", "lastmod_ts": 1612656000, "section": "Working groups", "tags": [], "description": "The Funder Advisory Group was originally formed to help with the development of our funding data and Funder Registry capabilities. As those services matured, the group re-convened in 2017 to discuss ways in which funders can take advantage of Crossref\u0026rsquo;s infrastructure to register grant metadata and engage more officially by becoming members.
As the result of work completed in 2018, research funders are now able to join, register their grants with metadata and DOIs and reap the full benefits of membership.", "content": "The Funder Advisory Group was originally formed to help with the development of our funding data and Funder Registry capabilities. As those services matured, the group re-convened in 2017 to discuss ways in which funders can take advantage of Crossref\u0026rsquo;s infrastructure to register grant metadata and engage more officially by becoming members. As the result of work completed in 2018, research funders are now able to join, register their grants with metadata and DOIs and reap the full benefits of membership. Funders, publishers and the entire community can contribute to and benefit from linking grants to outputs.\nThe current Advisory Group is a mix of members, prospective members and representatives of related initiatives, and meets as needed to discuss Crossref developments as well as changes to the broader landscape, such as the recent OSTP memo.\nSome use cases for registering grants Multi-country funding e.g. the Australian Research Council wants to know which other countries their awardees get additional funding from. Government vs. private funding relationships e.g. which private funders work with which governments to support what kind of research? Co-funding e.g. which other funders do my grantees tend to receive support from as well as us? Portfolio analysis e.g. a funder invests in centers and individual scientists; which effort generates more products? More on use cases and example queries is provided in this blog post overview.\nWhat the group is working on In 2022, the group:\nReviewed the new grant registration form Discussed the recent OSTP memo Completed a survey of needs Convened a sub-group tasked with making recommendations to Crossref based on the survey results: Reduce barriers to volume registration of older grants Better matching of grants to outputs Improve evidence and awareness of the value of registering grants As Crossref works toward those recommendations, the group will meet again in 2023 to determine its next priorities, in addition to ongoing engagement work.\nChair and facilitator: Kornelia Korzec, Crossref Diego-Valerio Chialva, European Research Council Kevin Dolby, Medical Research Council, UK Maisie England, UKRI, UK Patricia Feeney, Crossref Steve Fitzmier, John Templeton Foundation Nina Frentrop, Wellcome, UK Melissa Harrison, EMBL-EBI, UK Brian Haugen, NIH, USA Ginny Hendricks, Crossref Ashley Moore, UKRI, UK Adam Jones, The Gordon and Betty Moore Foundation, USA Natasha Simons, ARC, Australia Justin Withers, ARC, Australia Ashley Farley, The Gates Foundation, USA Kiley Goldstein, The ALS Association, USA Cátia Laranjeira, Foundation for Science and Technology, Portual Cindy Danielson, National Institutes of Health, USA Carly Strasser, CZI, USA David Vestergaard Eriksen, Ministry of Higher Education and Science, Denmark Mogens Sandfaer, Ministry of Higher Education and Science, Denmark M. Brent Dolezalek, James S. 
McDonnell Foundation, USA Josh Greenberg, Sloan Foundation, USA Katharina Rieck, Austrian Science Fund, Austria Kristen Ratan, Strategies for Open Science (Stratos), USA Linda Kee, American Cancer Society, USA Maryrose Franko, Health Research Alliance, USA Patti Biggs, The Francis Crick Institute, UK Rita Barata, CAPES, Brazil Sheila Rabun, LYRASIS, USA Alexis-Michel Mugabushaka, European Research Council, France Lisa Murphy,\tScience Foundation Ireland, Ireland Alicia Smyth, Science Foundation Ireland, Ireland Ritsuko\tNakajima, JST, Japan Masashi Hara, JST, Japan Yoshiro Hirao, JST, Japan Lucy Ofiesh, Crossref Dominika Tkaczyk, Crossref Erin McKiernan, Open Research Funders Group Michael Parkin, EMBL-EBI, UK Steve Pinchotti, Altum Falk Rekling, FWF, Austria Michaela Strinzel, Swiss National Science Foundation Carly\tRobinson,\tDOE, USA Elizabeth Agee, DOE, USA Natasha Simons, ARDC, Australia Nidhi Wahi,\tNASA, USA Joanne Calhoun, NASA, USA Ginger Strader Minkiewicz, Smithsonian Institution, USA Ann Fust, Swedish Research Council, Sweden Beata Moore, USDA, USA Cynthia Parr, USDA, USA Scott Hanscom, USDA, USA Vicky Crone, USDA, USA Benjamin Missbach, Vienna Science and Technology Fund, Austria Clifford Tatum, SURF, Netherlands Guntram Bauer, Human Frontier Science Program, USA Hans de Jonge, NWO (Dutch Research Council), Netherlands Maria Cruz, NWO (Dutch Research Council), Netherlands Angela Holzer, DFG (German Research Foundation), Germany Richard Heidler, DFG (German Research Foundation), Germany Tobias Grimm, DFG (German Research Foundation), Germany Martin Halbert, National Science Foundation (NSF), USA Shawna Sadler, ORCID Original working group participants A big thanks to the group of volunteers who at various stages jumped in to help make the registration of grants a reality.\nLinks \u0026amp; more information Crossref for funders Registering research grants Metadata schema for grants Please contact Ginny Hendricks with any questions.\n", "headings": ["Some use cases for registering grants","What the group is working on","Original working group participants","Links \u0026amp; more information"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-a-plan-of-action/", "title": "Event Data: A Plan of Action", "subtitle":"", "rank": 1, "lastmod": "2021-02-01", "lastmod_ts": 1612137600, "section": "Blog", "tags": [], "description": "Event Data uncovers links between Crossref-registered DOIs and diverse places where they are mentioned across the internet. Whereas a citation links one research article to another, events are a way to create links to locations such as news articles, data sets, Wikipedia entries, and social media mentions. We\u0026rsquo;ve collected events for several years and make them openly available via an API for anyone to access, as well as creating open logs of how we found each event.", "content": "Event Data uncovers links between Crossref-registered DOIs and diverse places where they are mentioned across the internet. Whereas a citation links one research article to another, events are a way to create links to locations such as news articles, data sets, Wikipedia entries, and social media mentions. We\u0026rsquo;ve collected events for several years and make them openly available via an API for anyone to access, as well as creating open logs of how we found each event. 
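As a rough illustration of that open access, here is a minimal sketch of querying the Event Data API for events that mention a given DOI, assuming Python with the requests library and the publicly documented Event Data Query API; the DOI and email address are placeholders:

```python
import requests

EVENTS_API = "https://api.eventdata.crossref.org/v1/events"

params = {
    "obj-id": "10.5555/12345678",   # placeholder: DOI you want events for
    "mailto": "you@example.org",    # identify yourself, as the service asks
    "rows": 20,
}
resp = requests.get(EVENTS_API, params=params)
resp.raise_for_status()
for event in resp.json()["message"]["events"]:
    # each event records where it came from and how it relates to the DOI
    print(event.get("source_id"), event.get("relation_type_id"), event.get("subj_id"))
```

The parameters shown here (obj-id, rows, mailto) are only the basic ones; the Event Data documentation describes further filters, such as restricting results by source or by date.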
Some organisations are already using Event Data and we are keen for more to come on board.\nLast year we gave an update on Event Data with apologies for being so quiet and a promise of more information at a later date. It\u0026rsquo;s been some time, so here goes\u0026hellip;\nI joined Crossref in the middle of last year as a Product Manager and was tasked with looking into Event Data. The first thing I found was a large amount of enthusiasm for Event Data, both within Crossref and further afield. The idea of gathering information beyond the metadata deposited by our members is popular, and creates valuable connections between DOIs and a range of other sources. Interest spans the spectrum of academic research, publishing, bibliometrics, and beyond.\nAt the same time, I found a project with a very solid, well-built code base but unstable performance. After it was put into production in 2018, we didn\u0026rsquo;t provide sufficient support. Coupled with staff changes and other competing priorities, Event Data hasn\u0026rsquo;t had the opportunity to live up to early expectations.\nTo address these issues, we have embarked on a plan to make the server infrastructure more robust, improve monitoring, and make sure that the future of Event Data makes the best use of the resources we have without over-stretching. It means working with the community to determine the most essential aspects of Event Data, and providing support where it\u0026rsquo;s needed.\nThe steps below are not necessarily sequential and some depend on the completion of work in other parts of Crossref, but they outline the priorities we have for Event Data in 2021.\nThe Plan Stability Since we put in place our original Event Data infrastructure, the amount of incoming data has grown, and at an ever-increasing rate. In 2017 we were creating 2 million new events per month; that number is now over 20 million. We have known for some time that we need to refresh the infrastructure, but didn\u0026rsquo;t have the resources to move forward: now we do.\nIn the first part of the plan we will renew the server infrastructure that underpins Event Data. Maybe not a headline-grabbing move, but the aim is to reduce downtime and pull in missing data. Through improving our monitoring and shortening the response time when things go wrong, we will be able to ensure that events are added on a regular basis and the API can reliably handle requests.\nWe\u0026rsquo;ve made the first steps in this direction by upgrading our API infrastructure and making some other tweaks to improve performance. There is still work to do, but we\u0026rsquo;ve already seen a significant improvement in performance, with over 99.99% uptime in December.\nConsolidation The second component of the plan is to review performance and data quality. We will evaluate the event sources, update artefacts (such as the lists of publisher landing pages and news websites), and review performance reporting. This will help us to have a better understanding of Event Data in its current form: if the stability component is about improving what comes in and goes out, this part will give us increased confidence in what Event Data already contains.\nFuture roadmap While the two steps above are being carried out, we will revisit the applications of Event Data and talk to organizations that currently use it or have expressed an interest.
These conversations will feed into future development in which we will evaluate new sources and other ways to optimize the service.\nCentral to the roadmap will be continued support of the data citation endpoint in Scholix format, which we run in close collaboration with DataCite. Additionally, we will add new data from relationships between Crossref works, for example where a preprint is matched to a journal article, or where there are corrections, retractions, or translations of works.\nWe expect to continue supporting the current sources of events and where there are organizations with either a strong interest in a particular source or a database of events that they can send directly, we are keen to build collaborations. Event Data, like everything that Crossref does, is a community-based effort.\nStaying in touch To join the conversation about Event Data and keep informed, head over to our Community pages. You can also check out our Gitlab pages. At the end of last year we updated the Education pages where you can learn more about Event Data.\n", "headings": ["The Plan","Stability","Consolidation","Future roadmap","Staying in touch"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/new-public-data-file-120-million-metadata-records/", "title": "New public data file: 120+ million metadata records", "subtitle":"", "rank": 1, "lastmod": "2021-01-19", "lastmod_ts": 1611014400, "section": "Blog", "tags": [], "description": "2020 wasn\u0026rsquo;t all bad. In April of last year, we released our first public data file. Though Crossref metadata is always openly available––and our board recently cemented this by voting to adopt the Principles of Open Scholarly Infrastructure (POSI)––we\u0026rsquo;ve decided to release an updated file. This will provide a more efficient way to get such a large volume of records. The file (JSON records, 102.6GB) is now available, with thanks once again to Academic Torrents.", "content": "2020 wasn\u0026rsquo;t all bad. In April of last year, we released our first public data file. Though Crossref metadata is always openly available––and our board recently cemented this by voting to adopt the Principles of Open Scholarly Infrastructure (POSI)––we\u0026rsquo;ve decided to release an updated file. This will provide a more efficient way to get such a large volume of records. The file (JSON records, 102.6GB) is now available, with thanks once again to Academic Torrents.\nUse of our open APIs continues to grow, as does the metadata. Last year\u0026rsquo;s file was 112 million records and 65GB. Just nine months later (though it feels longer than that!), the new file is over 120 million records and over 102GB. That\u0026rsquo;s all of the Crossref records ever registered up to and including January 7, 2021. We continue to see around 10% growth in records each year––and while journal articles account for most of the volume, preprints and book chapters are two of our fast-growing record types. In addition to the growth in the number of records, many of the records are getting bigger and better as members look at their participation report and understand the value of enriching metadata records for distribution throughout the scholarly ecosystem. Elsevier recently opened its references, enriching over 12 million records.
A number of members, including Royal Society, Sage, Emerald, OUP, World Scientific and more have started adding abstracts, which now number over 9 million.\nHelp us help you––using the torrent and other important notes We decided to release these public data files largely to help support COVID-19 research efforts but of course use cases for Crossref metadata vary widely and a few pointers should help all users:\nUse the torrent if you want all of these records. Everyone is welcome to the metadata but it will be much faster for you and much easier on our APIs to get so many records in one file. Use the REST API to incrementally add new and updated records once you\u0026rsquo;ve got the initial file. Here is how to get started (and avoid getting blocked in your enthusiasm to use all this great metadata!). \u0026lsquo;Limited\u0026rsquo; and \u0026lsquo;closed\u0026rsquo; references are not included in the file or our open APIs. And, while bibliographic metadata is generally required, lots of metadata is optional, so records will vary in quality and completeness. Questions, comments and feedback are welcome at support@crossref.org.\nHere\u0026rsquo;s hoping 2021 is a better year for us all! Stay well.\n", "headings": ["Help us help you––using the torrent and other important notes"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/data-citation/", "title": "Data citation", "subtitle":"", "rank": 1, "lastmod": "2021-01-07", "lastmod_ts": 1609977600, "section": "Get involved", "tags": [], "description": "Why data citation is important Data sharing and citation are important for scientific progress. The three key reasons for this are:\nTransparency and reproducibility: Most scientific results that are shared today are just a summary of what researchers did and found. The underlying data are not available, making it difficult to verify and replicate results. If data were always made available with publications, the transparency of research would be greatly improved.", "content": "Why data citation is important Data sharing and citation are important for scientific progress. The three key reasons for this are:\nTransparency and reproducibility: Most scientific results that are shared today are just a summary of what researchers did and found. The underlying data are not available, making it difficult to verify and replicate results. If data were always made available with publications, the transparency of research would be greatly improved. Reuse: The availability of raw data allows other researchers to reuse the data. Not just for replication purposes, but to answer new research questions. Credit: When researchers cite the data they used, this forms the basis for a data credit system. Right now researchers are not really incentivized to share their data, because nobody is looking at data metrics and measuring their impact. Data citation is a first step towards changing that. How to cite data in your Crossref metadata Crossref members deposit data \u0026amp; software links by adding them directly into the standard metadata deposit. This is part of the existing Content Registration process.
You can add these links to your metadata in one of two ways, via the reference metadata you register with Crossref or via the relationships section of the schema.\nReferences The main mechanism for depositing data and software citations is to insert them into an article\u0026rsquo;s reference metadata. To do so, publishers follow the general process for depositing references.\nPublishers can deposit the full data or software citation as a unstructured reference, or they can employ any number of reference tags currently accepted by Crossref. It’s always best to include the DOI (either DataCite or Crossref) for the dataset if possible.\nYou’ll see additional support for data citations in reference lists in the next version of our schema.\nRelationships We maintain a set of relationship types to support the various content items that a research object, like a journal article, might link to. For data and software, we ask members to provide the following information:\nidentifier of the dataset/software identifier type: “DOI”, “Accession”, “PURL”, “ARK”, “URI”, “Other”. Additional identifier types beyond those used for data or software are also accepted, including ARXIV, ECLI, Handle, ISSN, ISBN, PMID, PMCID, and UUID. relationship type: “isSupplementedBy” or “references” (use the former if it was generated as part of the research results). description of dataset or software. Both Crossref and DataCite employ this method of linking. Data repositories who register their content with DataCite follow the same process and apply the same metadata tags. This means that we achieve direct data interoperability with links in the reverse direction (data and software repositories to journal articles).\nYou can see illustrations and examples of this schema in our Data \u0026amp; Software Citation guide.\nHow to access data \u0026amp; software citations Crossref and DataCite make the data \u0026amp; software citations deposited by Crossref members and DataCite data repositories openly available for use for anyone within the research ecosystem (funders, research organisations, technology and service providers, research data frameworks such as Scholix, etc.).\nData \u0026amp; software citations from references can be accessed via the Crossref Event Data API. Citations included directly into the metadata by relation type can be accessed via Crossref’s APIs. We\u0026rsquo;re working to include these relation type citations in the Event Data API as well, so that all data citations will be available via one source.\nScholix Participation The goal of the Scholix (SCHOlarly LInk eXchange) initiative is to establish a high-level interoperability framework for exchanging information about the links between scholarly literature and data. Crossref members can participate by sharing article-data links by including them in their deposited metadata as references and/or relation type as described above. You don\u0026rsquo;t need to sign up or let us know you\u0026rsquo;re going to start providing this information, just start to send it to us in your reference lists or in the relationship metadata.\nIf the reference metadata you are registering with Crossref uses Crossref or DataCite DOIs, the linkage between the publications/data is handled by Crossref - nothing more is needed.\nIf the data (or other research objects) uses DOIs from another source, or a different type of persistent identifier, then you need to create a relationship type record instead. 
This method also allows for the linkage of other research objects.\nScholix API Endpoint The Event Data service implements a Scholix endpoint in the API. A subset of relevant Events (from the \u0026lsquo;crossref\u0026rsquo; and \u0026lsquo;datacite\u0026rsquo; sources) is available at this endpoint. The filter parameters are the same as specified in the Query API. The response format uses the Scholix schema.\nMake Data Count Crossref participates in the Make Data Count initiative. Make Data Count\u0026rsquo;s focus is on the widespread adoption of standardized data usage and data citation practices, the building blocks for open research data metrics.\nMake Data Count\u0026rsquo;s goals are three-fold:\nIncreased adoption of standardized data usage across repositories through enhanced processing and reporting services Increased implementations of proper data citation practices at publishers by working in conjunction with publisher advocacy groups and societies Promotion of bibliometrics qualitative and quantitative studies around data usage and citation behaviors We\u0026rsquo;re participating to help support and inform data citation work at publishers in conjunction with existing data citation initiatives, so that we can embed data citation into standard publication workflows and give researchers credit for sharing their data.\nIf you have questions about registering data citations with us, you can consult other users on our forum community.crossref.org or open a ticket with our technical support specialists.\n", "headings": ["Why data citation is important","How to cite data in your Crossref metadata","References","Relationships","How to access data \u0026amp; software citations","Scholix Participation","Scholix API Endpoint","Make Data Count"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/initiatives/", "title": "Initiatives", "subtitle":"", "rank": 1, "lastmod": "2021-01-07", "lastmod_ts": 1609977600, "section": "Get involved", "tags": [], "description": "Our communities\u0026rsquo; work to solve problems and support research doesn\u0026rsquo;t stand still. Many people come to Crossref for help either to lead, advise, host, or support new strategic ideas and initiatives.\nSome of them are key to our mission and stand to provide value to our members and the wider research ecosystem. We’ve identified some as special programs so that we can devote more time and attention to them. This support can take a variety of formats: promoting, prototyping, chairing or board/working group participation, adopting, co-developing, resourcing, financing, or advising - all while listening and learning too.", "content": "Our communities\u0026rsquo; work to solve problems and support research doesn\u0026rsquo;t stand still. Many people come to Crossref for help either to lead, advise, host, or support new strategic ideas and initiatives.\nSome of them are key to our mission and stand to provide value to our members and the wider research ecosystem. We’ve identified some as special programs so that we can devote more time and attention to them. This support can take a variety of formats: promoting, prototyping, chairing or board/working group participation, adopting, co-developing, resourcing, financing, or advising - all while listening and learning too.\nStrategic programs we’ve taken on in the past include ORCID, preprints, and CHORUS. 
And of course we still support them, but that support has become part of our regular practices and processes and those initiatives have graduated to become fully-embedded services or separate organizations.\nWe’re currently focusing our time and effort on three special programs: Research Organization Registry (ROR); the Crossref Grant Linking System; and data citation (including Scholix and Make Data Count). You can read more about all of these by following the links at the side of the page, and let us know if you have any questions about how you or your organization can get involved.\nIt can be hard to keep track of everything, so if we’ve missed something you or your new idea is keen to develop with Crossref support, then do get in touch.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2020/", "title": "2020", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-tribute-to-our-kirsty/", "title": "A tribute to our Kirsty", "subtitle":"", "rank": 1, "lastmod": "2020-12-16", "lastmod_ts": 1608076800, "section": "Blog", "tags": [], "description": "Our colleague and friend, Kirsty Meddings, passed away peacefully on 10th December at home with her family, after a sudden and aggressive cancer. She was a huge part of Crossref, our culture, and our lives for the last twelve years.\nKirsty Meddings is a name that almost everyone in scholarly publishing knows; she was part of a generation of Oxford women in publishing technology who have progressed through the industry, adapted to its changes, spotted new opportunities, and supported each other throughout.", "content": "Our colleague and friend, Kirsty Meddings, passed away peacefully on 10th December at home with her family, after a sudden and aggressive cancer. She was a huge part of Crossref, our culture, and our lives for the last twelve years.\nKirsty Meddings is a name that almost everyone in scholarly publishing knows; she was part of a generation of Oxford women in publishing technology who have progressed through the industry, adapted to its changes, spotted new opportunities, and supported each other throughout. We hope this post will do justice to her memory in our profession.\nKirsty\u0026rsquo;s early career After completing her degree in English and Spanish American Literature at Warwick University, Kirsty started her career in scholarly communications and publishing at Blackwell’s Information Services. She was there for a year before joining CatchWord, an online journal start-up, in 1998, as Electronic Publisher and Account Manager and in 1999 was promoted to the new role of Library Relations Manager.\nCatchWord was acquired by Ingenta and Kirsty moved into product management working on integrating the CatchWord and Ingenta platforms and launching IngentaConnect in 2004. Ingenta became Publishing Technology in 2005 and Kirsty was Product Development Manager working with engineering, business development, and users on developing online products and services. 
She was also involved in a range of community initiatives including COUNTER, KBART, and ICEDIS.\nJoining Crossref Kirsty\u0026rsquo;s professional headshot\nShe was an early pioneer in electronic and online publishing - an innovator who understood scholarly publishing, technology, libraries, and people - a powerful combination. And Crossref was quick to offer her a role.\nIn Kirsty’s introduction to Crossref she was described by the recruiter as:\nAn experienced and highly capable individual with a solid background in product development, marketing and customer service issues related to the supply of scholarly electronic content from publishers to library and end user audiences. A good communicator and team worker with sound technical understanding and an excellent grasp of publishing industry issues.\nThis adequately captures Kirsty’s impressive professional achievements, but not her personality. Kirsty was a Product Manager at Crossref for 12 years and was a valued and loved friend and colleague. Committed to Crossref\u0026mdash;its values and people\u0026mdash;she was funny, human, and always asked tough questions.\nShe joined us on October 27th, 2008 as our first Product Manager and the third UK employee. In her time at Crossref, Kirsty made a major impact, working on a range of important projects and services - particularly new, innovative services. Not long after she started at Crossref, she wrote a “day in the life” profile for the journal Serials that perfectly captures what it was like in 2009 at Crossref Oxford (there were three of us in Oxford and only ten total staff at Crossref): Meddings, K., 2009. Mini-profile: a day in the life of a product manager. Serials, 22(1), pp.5–6. DOI: http://0-doi-org.libus.csd.mu.edu/10.1629/225\nHer own biography, on her staff page, states:\nKirsty Meddings has been involved in a diverse set of initiatives that have kept her busy since 2008. She has spent most of her career in scholarly communications, in a variety of marketing and product development roles for intermediaries and technology suppliers. She speaks conversational geek and competent publishing, and is working towards fluency in both.\nSee? Funny!\nProfessional achievements Kirsty started out working on CrossCheck, now Similarity Check, the plagiarism screening service that launched in 2008. The service was in need of some attention and better organization - Kirsty got stuck in, whipped it into shape and it has gone on to be one of Crossref’s most widely-adopted services. This article that Kirsty wrote for ISMTE’s publication, EON, remains useful nearly 10 years after it was written! Kirsty successfully managed the partnership with Turnitin (starting as iParadigms), the technical provider for Similarity Check, for many years. Colleagues there are mourning her loss too.\nKirsty was instrumental in launching Crossmark, which became a production service in 2012. After a few changes of hands, she resumed work on the service in recent years, and announced the removal of Crossmark fees to better support uptake in 2020.\nThe addition of clinical trial information to the Crossref metadata was a community-driven initiative, developed from the concept of threaded publications. There were/are lots of moving parts in this initiative, and in many ways it was one of the precursors to the idea of the Research Nexus: linking via metadata and relationships to provide a clearer picture of the ecosystem that exists around a research object.\nWhat was once FundRef (ahh, those logos!) 
has matured into the Open Funder Registry under Kirsty’s stewardship. In collaboration with Elsevier, the registry has grown from an initial 4,000 funders, to over 25,000 and we can see over 5 million works registered with Crossref that are linked to at least one funder. More recently, Kirsty was the Product Manager for the registration of research grants with Crossref, working with our Funder Advisory Group, and she was starting to work with CDL and DataCite to absorb the Funder Registry into ROR.\nIn 2018, Kirsty launched our first ever dashboard for member best practice. She led the development and design of Participation Reports and the decision of which checks would be most important for the scholarly community to assess. This has quickly become one of Crossref’s most valuable and used tools.\nPublic speaking Kirsty always spoke with authority across a range of topics, appearing totally calm even if she was nervous. Among many talks, she spoke at the STM seminar on Publication Ethics and Research Integrity, ISMTE, UKSG, ALPSP seminars, the COPE Forum, ran numerous CrossCheck, CrossMark, FundRef and TDM webinars, and a recent online LIVE event.\nShe was a frequent presenter at many of Crossref annual meetings, and enjoyed the opportunity to meet and catch up with our members, the board, and the community (many of whom always ask after her). Checking in after conferences on who said what, who’s moving where, what feedback we had, and picking up on opportunities for further collaboration were all things that we looked forward to sharing.\nTo use UKSG’s own words, Kirsty was always a staunch supporter of the organisation - attending, exhibiting, and speaking at many UKSG conferences and events over her whole career. She was also a legend at the dinners, on the dance floor, and in the bar. At the 2019 conference she tallied the votes at the quiz night - Kirsty loved a quiz! We had an all-staff end-of-year quiz via zoom last week and it was just not the same without her.\nHere are Kirsty\u0026rsquo;s slides on SlideShare, some videos of Kirsty\u0026rsquo;s talks on YouTube, and her ORCID record which lists her published works.\nStrong friendships One of the most rewarding experiences of working at Crossref is meeting up with the whole team and with our members. Jetlag, hunting out coeliac-friendly food, staying up far too late chatting, trying to fit in exploring bits of cities around board and other meetings, presenting, organizing, thinking, laughing (I’m sure to the annoyance of other plane passengers)\u0026mdash;these experiences were all part and parcel of working with Kirsty, and where many of us cemented connections with her.\nWe started a message board and within days it was populated with numerous stories, poems, and photos from so many friends and colleagues on whom Kirsty made such a lasting and loving impression.\nKirsty\u0026rsquo;s message board\nIt’s impossible to capture someone’s character in a blog, but some of the words that carry across the messages that people have shared are empathy, compassion, honesty, intelligence, brilliance, sincerity, laughter, human, passion, openness, and fun. We’ll miss her immensely.\nKirsty was somewhat of an expert in grief. She lost her first husband, James Culling, to leukemia in December 2012, leaving her a widow with two sons, Dan, 7 at the time, and Luke, just 6-months old. A few years later, through the charity Widowed And Young (WAY), she met Martin Eggleston. 
Martin and his daughter Amy joined Kirsty, Dan, and Luke, and they created a very happy blended family. Some of us went to their wedding and it was an incredible event full of love and laughter - and of course music. Always music.\nKirsty represented us, along with Rachael, at the funeral of another colleague last year, Christine Hone, in Amsterdam. Kirsty helped all of us get through the grief then. And because she made it okay to grieve and to talk about grief, it is heartbreaking and also comforting that she is indirectly helping us all now to be better able to handle her own death.\nHow we can honour Kirsty’s memory We heard that Kirsty’s last words were “I’m listening”. Which is just so fitting. She was always ready with an ear, a shoulder to support us all, and indeed she demanded that we express ourselves honestly.\nIf you want to share memories of Kirsty, you can join others who have done so on the message board or just take a few minutes to read through.\nAnd there is a justgiving page in memory of Kirsty for Maggie\u0026rsquo;s Oxford, a branch of a cancer support charity who helped her and her family through James\u0026rsquo;s death and is now helping her family again.\nProfessionally, Kirsty made major contributions at Crossref and in scholarly communications in general. More importantly, she had a profound impact on a personal level with many people. Our thoughts are with Martin, Dan, Amy, and Luke, and also with Kirsty’s mum Val, her brother Colin, her in-laws, her close friends, and all the people who\u0026mdash;like the rest of us\u0026mdash;are better for knowing her, and will never forget her.\n", "headings": ["Kirsty\u0026rsquo;s early career","Joining Crossref","Professional achievements","Public speaking","Strong friendships","How we can honour Kirsty’s memory"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/fast-citable-feedback-peer-reviews-for-preprints-and-other-record-types/", "title": "Fast, citable feedback: Peer reviews for preprints and other record types", "subtitle":"", "rank": 1, "lastmod": "2020-12-09", "lastmod_ts": 1607472000, "section": "Blog", "tags": [], "description": "Crossref has supported depositing metadata for preprints since 2016 and peer reviews since 2018. Now we are putting the two together, in fact we will permit peer reviews to be registered for any record type.\n", "content": "Crossref has supported depositing metadata for preprints since 2016 and peer reviews since 2018. Now we are putting the two together, in fact we will permit peer reviews to be registered for any record type.\nCurrently, peer reviews can be registered for journal articles, but that means that they can only be related to some of the content our members deposit. Preprints, books, chapters, working papers, dissertations, and a host of other works can also be registered with Crossref. A number of these frequently undergo some form of review and many of our members and voices in the community have called for us to widen the net on peer reviews, including journal publishers, book publishers, review platforms, and preprint servers. We\u0026rsquo;ve listened and taken action, and from now on Crossref members can add relationship metadata that links peer reviews to any record type. The metadata will also contain the type of review, stating whether it is a referee report, author response, or community comment, etc. 
This allows accurate reporting on whether the peer review is happening within a traditional editorial process or elsewhere.\nReviews for preprints In the last decade there has been an increase in the number of disciplines using preprints. Since enabling registration of preprint metadata, it has become our fastest-growing record type. Preprints, working papers, and other forms of early publication help to accelerate dissemination of the latest research and discovery. They can also promote discussion on important topics, and help authors to improve papers before an editorial decision for journal publication. During the COVID-19 pandemic, preprints have become invaluable for speeding the publication of vital research and case studies.\nOn the other hand, preprints do not undergo formal review and editorial approval, leading to concerns about the dissemination of false information. While the issue of misinformation in preprints has been discussed for some time, the COVID-19 pandemic has brought it more sharply into focus. Organizations that post preprints need to balance the benefits of rapid dissemination with promoting their responsible use.\nTo support the feedback process, preprint servers along with a growing number of other platforms and services offer scholars the opportunity to post public comments on preprints. By doing this, they give extra context for readers, provide suggestions for authors, and raise awareness of work that could be flawed or too preliminary.\nAnother growing trend is journal publishers adopting editorial processes that involve preprint-first options and open peer review. As Dr. Stephanie Dawson from ScienceOpen says:\n\u0026ldquo;We have long believed in rewarding reviewers by assigning Crossref DOIs to their open reviews to make them citable objects and we were one of the first users of Crossref\u0026rsquo;s peer review schema. However, a large percentage of the articles reviewed on ScienceOpen are publicly available preprints. The UCL Open: Environment journal hosted on the platform, for example, is based on a workflow of open peer review of preprints. Our customers, editors, reviewers and authors are therefore extremely happy that these reviews can now also be assigned a Crossref peer review DOI for more accountability and transparency in scholarly publishing.\u0026rdquo;\nAt Crossref, we\u0026rsquo;re continually looking to support more record types and relations between them to build trust, support reproducibility and increase discoverability of content. This is another small step in building the research nexus and we look forward to working with members depositing peer reviews of preprints.\n", "headings": ["Reviews for preprints"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/404-support-team-down-for-essential-maintenance/", "title": "404: Support team down for essential maintenance", "subtitle":"", "rank": 1, "lastmod": "2020-12-04", "lastmod_ts": 1607040000, "section": "Blog", "tags": [], "description": "2020 has been a very challenging year, and we can all agree that everyone needs a break. Crossref will be providing very limited technical and membership support from 21st December to 3rd January to allow our staff to rest and recharge. We’ll be back on January 4th raring to answer your questions. Amanda explains more about why we made this decision.\n", "content": "2020 has been a very challenging year, and we can all agree that everyone needs a break. 
Crossref will be providing very limited technical and membership support from 21st December to 3rd January to allow our staff to rest and recharge. We’ll be back on January 4th raring to answer your questions. Amanda explains more about why we made this decision.\nAs we all know, 2020 has been an unprecedented year, with the COVID-19 pandemic affecting lives across the globe.\nIt’s been amazing to watch our members pivot their working practices and continue to publish content and register it with Crossref to keep the wheels of research and scholarly communications moving.\nSince January, we’ve seen 9,079,082 items registered with Crossref, up 13% on 2019. 2628 new members have also joined during that time and we now have almost 13.5k members from 139 countries. We’ve seen over 337 million requests to our REST API on average per month in 2020, a 9% increase over 2019 (and over 600 million total metadata queries on average per month across all our APIs and services).\nOf course, all this activity brings an increasing number of requests for help and support. Since the start of 2020, we have answered almost 24,000 support tickets from the community. Sometimes these just need a quick answer or a link to our documentation. Sometimes it\u0026rsquo;s a straightforward new member application or a routine query. But sometimes a prospective member needs a lots of advice, sometimes a long-standing member or user needs in-depth investigations and consultancy. Sometimes the request highlights a problem in one of our systems that needs input from our product and development colleagues. But either way, it’s keeping our small team of five full-time employees very busy.\nVanessa wrote earlier in the year about how our Community Outreach team has changed its working practices this year. As Head of Member Experience I’ve been incredibly impressed by the way our membership, support and billing staff have done the same - remaining really focused on the needs of the Crossref community while (at the same time) balancing this with the demands of working from home, childcare, home-schooling, and supporting those affected by the pandemic in their own community. Isaac’s thoughtful post on our forum about his first week working at home because of the pandemic really highlighted some of these challenges.\nWe take work/life balance seriously at Crossref. We want to make sure that we’re are able to continue to help the Crossref community effectively in 2021, but are also able to continue to look after ourselves, our families, and our own communities in this difficult time. We all hope that 2021 will be a very different year, but there’s still likely to be disruption ahead for all of us, and one thing is sure: there will continue to be plenty more requests coming in for our small team to stay on top of in the meantime.\nWith this in mind, we want to make sure that our support staff are able to properly rest and recharge during what is a holiday period for many of us coming up. We’ll be operating with just one person each on the technical support and membership support side between 23rd December and 3rd January. This means that while we’ll be able to answer urgent queries, non-urgent questions will be left unanswered until 4th January. And we’ll not take on any new members between 21st December and 3rd January too.\nWe know many of you will be continuing to work during this period. 
If you have a non-urgent question, do take a look at our support documentation in the meantime, or see if other members (or our amazing Ambassadors) are able to help on our forum. If you can’t find what you’re looking for and it\u0026rsquo;s urgent, we hope that the limited staff who are on call will still be able to help you out.\nColleagues in the US have recently celebrated their Thanksgiving, and I remain enormously thankful for our team here at Crossref, and for you all in the scholarly community for your enthusiasm for working together collectively to help the world find, cite, link, assess, and reuse scholarly content. We all really appreciate your patience while we reset ready for 2021. Happy Holidays!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/support/", "title": "Support", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossrefs-board-votes-to-adopt-the-principles-of-open-scholarly-infrastructure/", "title": "Crossref’s Board votes to adopt the Principles of Open Scholarly Infrastructure", "subtitle":"", "rank": 1, "lastmod": "2020-12-02", "lastmod_ts": 1606867200, "section": "Blog", "tags": [], "description": "TL;DR On November 11th 2020, the Crossref Board voted to adopt the “Principles of Open Scholarly Infrastructure” (POSI). POSI is a list of sixteen commitments that will now guide the board, staff, and Crossref’s development as an organisation into the future. It is an important public statement to make in Crossref’s twentieth anniversary year. Crossref has followed principles since its founding, and meets most of the POSI, but publicly committing to a codified and measurable set of principles is a big step. If 2019 was a reflective turning point, and mid-2020 was about Crossref committing to open scholarly infrastructure and collaboration, this is now announcing a very deliberate path. And we’re just a little bit giddy about it.\n", "content": "TL;DR On November 11th 2020, the Crossref Board voted to adopt the “Principles of Open Scholarly Infrastructure” (POSI). POSI is a list of sixteen commitments that will now guide the board, staff, and Crossref’s development as an organisation into the future. It is an important public statement to make in Crossref’s twentieth anniversary year. Crossref has followed principles since its founding, and meets most of the POSI, but publicly committing to a codified and measurable set of principles is a big step. If 2019 was a reflective turning point, and mid-2020 was about Crossref committing to open scholarly infrastructure and collaboration, this is now announcing a very deliberate path. And we’re just a little bit giddy about it.\nHere is a picture of me being “giddy.”\nIf you just want to see the principles that the board has endorsed, you can see them here:\nhttps://doi.org/10.24343/C34W2H\nBut if you also want some background and want to understand some of the implications of Crossref adopting the principles, read on…\nWarning - this is a long post.\nBackground and Origins Some of you may be surprised that we’ve done this - simply because you always assumed we operated under these principles anyway. And we have. 
Mostly.\nThe “Principles of Open Scholarly Infrastructure” were largely inspired by a set of uncodified rules and norms that Crossref had been operating under for years. So how did we get to this circular situation where we are making a big announcement about adopting something we have largely been doing anyway?\nSix years ago I met with Cameron Neylon and Jennifer Lin when they were still at PLOS and we decided that we wanted to write a blog post about\u0026hellip;\nWell, it doesn’t really matter.\nWe never finished writing that blog post because we got distracted by an issue that we kept seeing which was that services that the scholarly community depended on were increasingly taking directions that seemed antithetical to the community’s interests.\nWe were concerned because the scholarly community was becoming increasingly distrustful of infrastructure services. We wondered if there were any practices that we could point to that might mitigate the risk of infrastructure being co-opted and that would help build trust. Fortunately, we had two great models to look at:\nCrossref, which had a set of informal rules and norms that it had followed since its founding (e.g., transparency of operations, being business-model neutral, one member one vote). ORCID, an organisation that was spun-out of Crossref and which had adopted a written set of principles, based largely on codifying practices that they had seen at Crossref. And so we wrote these practices up and added a few that we thought were missing. And we posted a different blog post to the one we had originally planned. It was titled “The Principles of Open Scholarly Infrastructures.” And the blog post became popular. And we did a bunch of talks about the Principles. And, much to our surprise, POSI has influenced the directions and policies of a number of organisations and initiatives since, including SPARC, Invest in Open Infrastructure, Open Data Institute, OA Switchboard, and others.\nElsewhere, community organizations and likeminded community members helped further develop the implementation of POSI through discussions at FORCE11 and through additional blog posts and books. Some, like Dryad and ROR, started to work to align their organizational structure to embrace POSI.\nAnd this left Crossref in a strange position. Although we were largely the inspiration for these Principles - we ourselves had never codified and adopted them.\nMotivations. Why Now? Because it is the right thing to do for those that currently depend on Crossref It is a healthy thing for the organisation to do. Adopting these principles strengthens Crossref’s governance. After twenty years, Crossref infrastructure has become critical to a broad segment of the community. As our membership profile changes, and as our broader stakeholder community expands, we need to explicitly evolve our governance to reflect stakeholders. And it would be irresponsible to continue to have our governance guided by a set of informal conventions. Particularly in the context of a global political period where we’ve seen the informal operating conventions and policy understandings of at least two major democracies ignored or discarded.\nBecause it could help make the creation of new, sustainable, open scholarly infrastructure easier and less expensive There is a lot of new interest in open scholarly infrastructure. New infrastructure services and systems are being proposed almost every month. Many of them seek extensive advice and consulting from Crossref. 
A subset of these are incubated through Crossref. And a subset of these become Crossref services. Others are spun out as separate organisations (e.g., ORCID) or were specifically initiated as collaborations (e.g., ROR).\nOur experience has been that the vast majority of work involved in these infrastructure projects was in establishing trust amongst the stakeholder community. We think that Crossref adopting the principles will help to address fundamental questions about accountability and sustainability that are inevitably raised when a new constituency approaches Crossref with an idea for collaborating on a new or existing infrastructure service. In short, adopting the principles will make future collaboration easier.\nAdopting the Principles: Plus ça change The Principles of Open Scholarly Infrastructure (POSI) proposes three areas that an Open Infrastructure organisation can address in order to garner the trust of the broader scholarly community: accountability (governance), funding (sustainability), and protection of community interests (insurance).\nPOSI proposes a set of concrete commitments that an organisation can make to build trust in each of these areas. There are 16 such commitments. Of these 16 commitments, Crossref is already completely or partially meeting the requirements of 15. And adopting the 16th commitment just formalises a direction Crossref has been heading toward for several years.\nCritically, “adopting” POSI does not mean that we have to instantly meet all of the criteria. After all, when ORCID adopted its principles, it didn’t meet any of them. They were adopted to make a statement of intent. And they were publicly adopted so that the community could measure the organisation\u0026rsquo;s progress as well as to allow the community to detect if ORCID started to stray from its stated intentions.\nAdopting the principles is akin to adopting a mission statement or a vision statement. It is an aspirational guide, not a description of the status quo.\nHaving said that, the principles are more concrete than a mission or vision statement, and this makes them easier to measure.\nIt is also important to note that the criteria are designed to balance each other. So, for example, one would not want to change the governance or business model to better support the mission if doing so would also threaten the sustainability of the organisation.\nAnd finally, meeting a commitment is an ongoing process - it is not a one-off event. The organisation needs to keep measuring their performance against the principles in order to make sure that they have not inadvertently regressed.\nImplications Before adopting the principles, we did a candid self-audit to see which ones we thought we currently met and which ones we still needed to work on.\nThe three areas and sixteen commitments that are proposed in POSI are all designed to ensure that an infrastructure can not be co-opted by a particular party or interest group.\nAnd the last area, “Insurance,” is the backstop that makes sure that, if some in the community feel that the infrastructure organisation has gone in a radically wrong direction, they can recreate the infrastructure as it was when they were comfortable with it, and they will not be hindered by practices or policies that lock them into the existing organisation.\nThis “insurance” is very much inspired by Crossref. Crossref itself was built, in part, to make sure that publishers were not locked into platforms and that journals and societies were not locked into publishers. 
Using the indirect Crossref DOI linking mechanism ensures that content can move between platforms and publishers without breaking vital citation links. Moving between platforms or publishers is never easy. And it isn’t cheap. But using Crossref DOIs for citation links at least makes it possible.\nCrossref has an extra insurance level as well. It is built on the DOI and Handle infrastructure. If Crossref were to take a direction that some of its members found unacceptable, those members could join another DOI Registry agency more amenable to them. It wouldn’t be easy. It wouldn’t be cheap. But it would be possible.\nAnd this knowledge helps keep Crossref grounded and attuned to the needs and concerns of its members. We know that our members are not “trapped” with us. We don’t take lightly the trust placed in us. And we know that there is trust still to build with various corners of our community. And it is this knowledge that helps keep us from developing the disdainful, take-it-or-leave-it, attitude that can be the cliché characteristic of infrastructure organisations.\nSo the fundamental, overarching goal of POSI is to set out principles that ensure that the stakeholders of an infrastructure organisation have a clear say in setting its agenda and priorities and that, in extremis, the stakeholders can leave and create an alternative infrastructure if the original organisation becomes unresponsive, hostile, or disappears.\nAs we look at how Crossref currently maps to the principles, please keep in mind three things:\nIf we have marked something as green, that doesn’t mean we think we do this perfectly. It simply means that we already have internal processes that focus on this commitment and we have evidence that these processes have thus far been working. The fact that something is green and has “thus-far been working” does not mean that we should rest easy. We could regress. Our processes need to be able to detect and address regressions. The commitments are supposed to be balanced. So we don’t want to do something to turn something green if it has an irreversible impact on another commitment. So, for example, we should not address a shortfall in the contingency fund by generating revenue in a way that ultimately hurts Crossref’s mission. The implication of #3 above is that it may take us some time to meet all of the commitments. But again, the community can measure our progress against meeting the commitments. So how does Crossref currently meet POSI? Governance 🟢 Coverage across the research enterprise. 🟢 Non-discriminatory membership 🟢 Transparent operations 🟢 Cannot lobby 🟢 Living will 🟢 Formal incentives to fulfil mission \u0026amp; wind-down 🔴 Stakeholder Governed Sustainability 🟢 Time-limited funds are used only for time-limited activities. 🟢 Goal to generate surplus 🟡 Goal to create contingency fund to support operations for 12 months 🟢 Mission-consistent revenue generation 🟢 Revenue based on services, not data Insurance 🟢 Available data (within constraints of privacy laws) 🟡 Patent non-assertion 🟡 Open source 🟡 Open data (within constraints of privacy laws) Governance If an infrastructure is successful and becomes critical to the community, we need to ensure it is not co-opted by particular interest groups. Similarly, we need to ensure that any organisation does not confuse serving itself with serving its stakeholders. 
How do we ensure that the system is run “humbly”, that it recognises it doesn’t have a right to exist beyond the support it provides for the community and that it plans accordingly? How do we ensure that the system remains responsive to the changing needs of the community?\n\u0026ndash; POSI\nIn the area of governance, Crossref clearly meets six of the seven criteria listed. We will discuss these first.\n🟢 Coverage across the research enterprise it is increasingly clear that research transcends disciplines, geography, institutions and stakeholders. The infrastructure that supports it needs to do the same.\n\u0026ndash; POSI\nCrossref includes members who publish in the STM, HSS and Professional spheres. There are still some gaps in our coverage (e.g., monographs, law), but this is not through policy or lack of trying.\nCrossref has members in 139 countries and has agreements with people in 150 countries. However note that geographic diversity is not the same as language diversity. Although we have members in many countries, the vast majority of our registered content is still in English. This does not reflect the trends in research outputs. We still need to do a lot of work to support non-English publications and non-English speaking members. But we have already identified this as a priority and are working on a number of initiatives to better support research communication in languages other than English.\n🟢 Non-discriminatory membership we see the best option as an “opt-in” approach with a principle of non-discrimination where any stakeholder group may express an interest and should be welcome. The process of representation in day to day governance must also be inclusive with governance that reflects the demographics of the membership\n\u0026ndash; POSI\nIt is first worth noting that “non-discriminatory” does not mean that we cannot have standards, obligations, and rules that all members of Crossref have to adhere to. It simply means that said rules are clear and that we apply them uniformly.\nCrossref has always had catholic membership criteria. Although we have until now historically defined ourselves as a primarily “publisher” organisation, we define “publisher” loosely as anybody who produces content that commonly references or is referenced by scholarly literature. Historically, this has included NGOs, IGO’s, standards bodies, institutional archives, and professional publishers. More recently it has expanded to include preprint archives and funders.\nThe requirements for joining Crossref are few. We admit any applicant who:\nAgrees to the obligations of membership. Can pay the fees. In practice we have historically had a policy of rejecting individuals as members. But even this is probably a pointless distinction as many of our members are “organisations” consisting of one person.\nAnd fundamental to Crossref’s governance is that a member’s influence in the governance of Crossref is not tied to the level of financial investment they make in the organisation. All members have the same single vote. All board members have one vote.\nRecently, we have also made changes to our governance and election process. The first to introduce contested elections for the board. The second to ensure that board membership was proportionally balanced amongst the membership tiers. Even as recently as 2017, when the Board established a Governance Committee, the idea of weighting votes to membership tiers was roundly rejected - on principle.\nThis is not to say that we can relax on this point. 
For example, as more funders and institutions join Crossref, we will need to make sure that our governance reflects that. We talk about this more in the section on governance.\nSome will also point out that our fees are themselves a form of discrimination as they can still be an insurmountable barrier to some in the community. We understand this and, without trying to make light of or dismiss the situation, we are also confident that we are constantly looking at ways to lower the barrier-to-entry for joining Crossref. Our fees have gone steadily down since we were founded and we are constantly reviewing them to try and make them more equitable. We have created a category of sponsoring organisations to defray the costs of membership. We collaborate closely with organisations like PKP to try and build tools and services that make participation in Crossref easier and less expensive.\n🟢 Transparent operations achieving trust in the selection of representatives to governance groups will be best achieved through transparent processes and operations in general (within the constraints of privacy laws).\n\u0026ndash; POSI\nCrossref has transparent finances and a transparent governance process. Much of this is simply a byproduct of the regulations governing non-profits with tax exempt status in the US and our specific registration as a non-profit membership association in New York State.\nUntil fairly recently, the obvious exception to this was Crossref’s use of pre-picked slates in board elections, but we have since improved this with an open election process.\n🟢 Cannot lobby the community, not infrastructure organisations, should collectively drive regulatory change. An infrastructure organisation’s role is to provide a base for others to work on and should depend on its community to support the creation of a legislative environment that affects it\n\u0026ndash; POSI\nCrossref has never lobbied. Partly this is a byproduct of our commitment to be business-model neutral as most lobbying efforts in the industry seem to center around promoting the views held by members who share a business model.\nBut also, Crossref has never lobbied on its own behalf. We have always relied on our members and the community to point out and promote Crossref if there is any area of legislative policy that the Crossref infrastructure could help with.\n🟢 Living will a powerful way to create trust is to publicly describe a plan addressing the condition under which an organisation would be wound down, how this would happen, and how any ongoing assets could be archived and preserved when passed to a successor organisation. Any such organisation would need to honour this same set of principles\n\u0026ndash; POSI\nCrossref has two relationships that require us to set out plans for an orderly wind-down.\nThe first is a condition of our incorporation as a non-profit in the state of New York. This explicitly includes a provision that requires us to hand over our operations and responsibilities to a successor non profit organisation that has a similar constituency and mission. The NY State Attorney General reviews and approves any major changes to ensure this requirement is met.\nThe second is a condition of our being members of the DOI Foundation, which includes provisions for us to hand over management of DOIs to another registration agency should Crossref ever wind-down. 
It is worth noting that we have already seen this clause invoked for other registration agencies that have wound down and who have, as part of the DOI Foundation provisions, handed responsibility for their DOIs to Crossref.\nThis is not to say that we are perfect on this score. We do not, for example, have any single place that outlines the steps that would need to be taken in order to execute the requirements laid out by our obligations to the state of New York and the IDF.\n🟢 Formal incentives to fulfil mission \u0026amp; wind-down infrastructures exist for a specific purpose and that purpose can be radically simplified or even rendered unnecessary by technological or social change. If it is possible the organisation (and staff) should have direct incentives to deliver on the mission and wind down.”\n\u0026ndash; POSI\nCrossref has a track record of periodically reviewing our services and decommissioning those that are no longer needed - either because they have fulfilled their specific mission or because there is simply waning interest in them (arguably, the same thing).\nAgain, this is not to say we are perfect on this score. We also have, by our last count, about 30 specialised, overlapping APIs- many of which are used by just a handful of users. These have escaped our normal scrutiny because they never had the status of a formal service and had not been through our product management process.\nBut still, Crossref has long made it a habit to question its own existence. At virtually every board annual strategy meeting we ask the question “will technology X make Crossref unnecessary?” We need to continue with the attitude that the best thing we could do for our members is to make ourselves unnecessary.\n🔴 Stakeholder Governed a board-governed organisation drawn from the stakeholder community builds more confidence that the organisation will take decisions driven by community consensus and consideration of different interests.\n\u0026ndash; POSI\nOverall, Crossref meets most of the Governance requirements with the notable exception of broader stakeholder involvement.\nOf course, the key to this is how you define “stakeholder.”\nSome may dispute this and argue that Crossref “stakeholders” are “publishers” because they are the parties that invested in creating Crossref.\nBut this narrow definition of “stakeholder” - focusing solely on those who have “invested”- is not widely held. In fact, common phrases like \u0026ldquo;stakeholder economy\u0026rdquo; and \u0026ldquo;stakeholder capitalism\u0026rdquo; describe the exact opposite- systems that don\u0026rsquo;t just focus on the “investor”, but which instead balance benefits to the investor with benefits to employees, the broader community, society, and the environment.\nIt is this latter, broader definition of “stakeholder” that is used in POSI.\nAnd just in case anybody still thinks that people other than publishers don’t consider themselves “stakeholders’ in the Crossref infrastructure, we simply point to this, recently tweeted by Brea Manuel, a researcher, in celebration of their publication in Nature Reviews Chemistry (read it, and learn how to recruit and retain a diverse workforce):\nSustainability Financial sustainability is a key element of creating trust. “Trust” often elides multiple elements: intentions, resources, and checks and balances. An organisation that is both well meaning and has the right expertise will still not be trusted if it does not have sustainable resources to execute its mission. 
How do we ensure that an organisation has the resources to meet its obligations?\n\u0026ndash; POSI\nIn the area of sustainability, Crossref clearly meets four of the five criteria listed and is most of the way to meeting the fifth.\n🟢 Time-limited funds are used only for time-limited activities day to day operations should be supported by day to day sustainable revenue sources. Grant dependency for funding operations makes them fragile and more easily distracted from building core infrastructure.\n\u0026ndash; POSI\nCrossref has never supported production activities based on grants. Indeed Crossref’s delivery on this point is what inspired the approach taken in this principle. This distinguishes Crossref from many grant-funded infrastructure initiatives which either barely stay afloat or disappear altogether. Even those that survive often do so by pursuing solutions that align with their funders’ interests over their users’ needs.\n🟢 Goal to generate surplus organisations which define sustainability based merely on recovering costs are brittle and stagnant. It is not enough to merely survive, it has to be able to adapt and change. To weather economic, social and technological volatility, they need financial resources beyond immediate operating costs.\n\u0026ndash; POSI\nCrossref has always attempted to generate a surplus. Crossref has generated surpluses since 2002 - so for 18 years of its 20 year existence.\n🟡 Goal to create contingency fund to support operations for 12 months a high priority should be generating a contingency fund that can support a complete, orderly wind down (12 months in most cases). This fund should be separate from those allocated to covering operating risk and investment in development.\n\u0026ndash; POSI\nCrossref currently has a contingency fund that would support operations for 9 months. Although this may be standard for industry, it seems prudent to extend this in the case of infrastructure organisations, particularly when they are membership organisations. First, the very fact that something is infrastructure implies that the systemic effects of its failing ungracefully could have industry-wide repercussions. Second, the decision-making process of a membership organisation whose governance is voluntary is inherently slower. It has taken the Crossref Board 9 months, for example, just to discuss the ramifications of adopting POSI.\nGiven our recent financial performance, we expect Crossref could comfortably increase the contingency fund to support 12 months of operations within the next 2-3 years.\n🟢 Mission-consistent revenue generation potential revenue sources should be considered for consistency with the organisational mission and not run counter to the aims of the organisation.\n\u0026ndash; POSI\nCrossref has a good track record of periodically reviewing our services and fees and adjusting them to better support Crossref’s mission. The role of the Membership \u0026amp; Fees Committee in advising the Board has been critical. The very first example of this was in the early days of Crossref when we dropped matching fees because they were disincentivising members from linking their references. Crossref was also quick to recognise that, in order to support global research and reach smaller publishers in lower income countries, we had to develop a sponsoring mechanism to help defray the costs and ameliorate the technical complexity of participating in Crossref. 
Most recently we have taken the decision to drop fees for Crossmark as it was clear they had become a barrier to our members distributing retraction and correction notifications in a machine actionable format.\n🟢 Revenue based on services, not data data related to the running of the research enterprise should be a community property. Appropriate revenue sources might include value-added services, consulting, API Service Level Agreements or membership fees\n\u0026ndash; POSI\nCrossref does not charge for or resell its members’ data. Doing so would restrict dissemination and reduce the discoverability of our members’ content. Instead our revenue comes from a combination of membership fees and service fees. The DOI registration is a member service that generates the bulk of our revenue. But our SLA-backed APIs are becoming increasingly popular as members and others seek to integrate Crossref metadata into their production workflows and services.\nInsurance Even with the best possible governance structures, critical infrastructure can still be co-opted by a subset of stakeholders or simply drift away from the needs of the community. Long term trust requires the community to believe it retains control. Here we can learn from Open Source practices. To ensure that the community can take control if necessary, the infrastructure must be “forkable.” The community could replicate the entire system if the organisation loses the support of stakeholders, despite all established checks and balances. Each crucial part then must be legally and technically capable of replication, including software systems and data. Forking carries a high cost, and in practice this would always remain challenging. But the ability of the community to recreate the infrastructure will create confidence in the system. The possibility of forking prompts all players to work well together, spurring a virtuous cycle. Acts that reduce the feasibility of forking then are strong signals that concerns should be raised. The following principles should ensure that, as a whole, the organisation in extremis is forkable.\n\u0026ndash; POSI\nCrossref clearly meets two of the four Insurance requirements. And the remaining two can be met easily with some clarification and time.\nThe “governance” section of POSI is designed to ensure that an infrastructure organisation is beholden to the broader stakeholder community and that it can not be co-opted by a particular party or special interest. And the “sustainability” section of POSI is designed to ensure that the infrastructure organisation takes the financial steps to ensure it can weather sudden changes in the financial or technical environment. But the last section, “insurance”, is designed to protect stakeholder interests in case either “governance” or “sustainability” fail.\nThe term “forkable” comes from the Open Source software community where it is used to indicate when a software community’s interests diverge and they decide to split a project into several projects, with each new project focusing on a particular sub-community\u0026rsquo;s interests.\nOne of the immediate worries that people have when they first hear of the concept of “forkability” is that it will encourage the creation of many variations of a project based on frivolous criteria. But this simply does not happen. Forking a project is never easy and takes a lot of effort. 
It is only done successfully when a critical mass of the community becomes unhappy with the direction a project is taking and is willing to take on the substantial burden of running an entirely separate project. Without such a critical mass, the fork just withers and has virtually no effect on the original project.\nAnd the reason for this is simple: the mere knowledge that a project is “forkable” forces project maintainers to balance the interests of the community so that no sizable subgroup grows dissatisfied enough to fork the project.\nForkability encourages responsiveness to the community by making sure that the community is not “locked-in.”\nCrossref itself was founded, in part, to prevent lock-in. Use of the DOI in linking citations makes it easier for publishers to move platforms, and for journals and societies to move between publishers.\nAnd Crossref itself is architected in part to ensure that lock-in is not possible. Crossref is just one of several DOI registration agencies. Members unhappy with Crossref can move to another DOI registration agency and their citation links will continue to work. But there are things we could do to make this even easier.\n🟢 Available data (within constraints of privacy laws) It is not enough that the data be made “open” if there is not a practical way to actually obtain it. Underlying data should be made easily available via periodic data dumps.\n\u0026ndash; POSI\nCrossref provides public APIs that allow users to access Crossref metadata. We are planning to eventually release yearly public data files. We already did this once when we released a public data file in support of COVID-19 research. This in no way prevents the provision of data through paid Service Level Agreement tiers that provide guarantees of regularity, availability or reliability for those that need it. Existing Metadata Plus customers primarily use data that is available through the open API or existing dumps, but value additional services that support their use-cases.\n🟡 Patent non-assertion The organisation should commit to a patent non-assertion covenant. The organisation may obtain patents to protect its own operations, but not use them to prevent the community from replicating the infrastructure.\n\u0026ndash; POSI\nCrossref has never registered a patent. But the DOI Foundation, with significant support from Crossref, had to respond to (and then monitored) a set of patent applications that, if successful, the DOI System would infringe on. The applications were filed more than 15 years ago and haven’t been successful, so these applications aren’t a current concern. As a result of this, the DOI Foundation adopted a patent policy in 2005 that covers all Registration Agencies and protects the DOI System. We may want to register protective patents in the future in order to enable us to defend ourselves against patent trolls.\nThe problem with patents is that they could be used by an organisation to prevent the infrastructure forking. One technique that has been used by major companies to assure communities that they will not be affected by patents is to make a patent non-assertion covenant. 
For example, IBM, Microsoft and Google have made non-assertion statements in order to assure the open source and standards communities that they participate in that they will not co-opt an open source project or open standard by asserting patents on code or processes they contribute.\nThough Crossref has never registered a patent, issuing a patent non-assertion covenant would help assure stakeholders that we would not use patents in the future to prevent the community from forking the system.\n🟡 Open source All software required to run the infrastructure should be available under an open source license. This does not include other software that may be involved with running the organisation.\n\u0026ndash; POSI\nAll code for new initiatives since 2007 has been released under an open source MIT license. The legacy Content System code could be open sourced within 12-18 months with no extra effort.\nIf some Crossref stakeholders wanted to “fork” Crossref or leave for another DOI registration agency, their biggest hurdle would be trying to recreate the twenty years worth of rules and algorithms we use for processing and matching metadata. Without access to the source code of the system, it would be almost impossible for these to be reverse engineered.\nSimilarly, without access to the source code of our system - it is difficult to ensure that Crossref is, indeed, non-discriminatory in the way it works with member content. It would be possible, for example, for Crossref to modify its matching algorithms to deliberately favour or deprecate some members’ content.\nIf we want to assure the community that we are managing our member metadata fairly and if we want to provide even better insurance to our members and the broader stakeholders, we should make all of our code open source.\nThe legacy so-called “CS” (content system) is in the process of being refactored. The only reason we cannot open source this immediately is that we still need to make some security changes to it. These security changes are being done as part of a current refactoring project and should be completed without any extra effort within 12-18 months. After that, we can open source the code.\n🟡 Open data (within constraints of privacy laws)\nFor an infrastructure to be forked it will be necessary to replicate all relevant data. The CC0 waiver is best practice in making data legally available. Privacy and data protection laws will limit the extent to which this is possible.\n\u0026ndash; POSI\nAchieving this simply requires us clarifying copyright and license information and that this will not have any effect on the metadata registered in Crossref by our members.\nFirst we should outline the current copyright status of a Crossref metadata record.\nThe fundamental issue is that what we colloquially call “Crossref metadata” is actually a mix of elements, some of which come from our members, and some of which come from third parties and some of which comes from Crossref itself. These elements, in turn, each have different copyright implications.\nOn top of this, Crossref has terms and conditions for its members and terms and conditions for specific services. These grant Crossref the right to do things with some classes of metadata and not do things with other classes of metadata - regardless of copyright.\nLet’s start with the easiest case. 
Crossref already has two services with CC0 metadata:\nThe Open Funder Registry Event Data Obviously, the POSI open data provision would not change anything for either service.\nThe next easiest case is private data. Crossref collects PII (usernames, passwords IP addresses, etc.). This would remain private. And we will continue to manage it in conformance with GDPR. It would not be affected by the open data provision of POSI.\nNext let’s look at what most people probably think of as “Crossref metadata”- that is, the basic bibliographic metadata that Crossref has collected from its members since its founding (titles, authors, volumes, issues, etc). For the record- this does not include abstracts.\nSince 2000 Crossref has stated that it considers this basic bibliographic metadata to be “facts.” And under US law (Crossref is registered in the US) these facts are not subject to copyright at all. If this data is not subject to copyright at all, there is no way Crossref can “waive the copyright” under CC0. This metadata would not be affected at all under the open data provision of POSI.\nMore recently, some of our members have been submitting abstracts to Crossref. These are copyrighted. In the case of subscription publishers, the copyright usually belongs to the publisher. In the case of open access publishers, the copyright most often belongs to the authors. In both cases, Crossref cannot waive copyright under CC0 because the copyright is not ours to waive. However, we are allowed to redistribute the abstracts with our metadata because that is part of the terms and conditions we have with our members. We already have language that notes the distinct copyright status of the abstracts in our metadata, but, ideally, we should extend our schema to make that information available in a machine actionable form as well. In short, the copyright status of abstracts would not be affected at all by the open data provision of POSI.\nCrossref also has its Reference Distribution Policy that the board adopted in 2017 - limited and closed references are not distributed by Crossref and this won’t change. [EDIT 6th June 2022 - all references are now open by default with the March 2022 board vote to remove any restrictions on reference distribution].\nAnd this leaves us with the one thing that would be affected by the open data provision of POSI- data that is created by Crossref itself as a byproduct of our services. By law, this data is under Crossref’s copyright unless we explicitly waive it. This data includes things like, participation reports, conflict reports, member IDs and Cited-by counts (just the counts, not the references) and any aggregations of our otherwise uncopyrighted data that might, by aggregating it, be subject to sui generis database rights. At the moment, although we distribute this data freely and without restriction, we have no explicit copyright attached to it. All we would be seeking to do is explicitly say that data generated by Crossref will be distributed CC0. Again, at first it would be enough to just specify this in human readable form, along with our other copyright information. 
But, eventually, we would want to include this information in machine actionable form in the metadata itself.\nTo summarise:\nMetadata type | Example | Current Copyright | Change under POSI\nAlready CC0 | Open Funder Registry, Event Data | CC0 | None\nPrivate | Log files, user IDs | Private | None\nBibliographic | Title, authors, volume, issue | Facts | None\nClosed references | | Facts - but no distribution under the reference distribution board policy from 2017 | None\nLimited references | | Facts - but no public distribution under the reference distribution board policy from 2017 | None\nOpen references | | Facts | None\nCrossref-generated data | Participation data, reports, extracts | Copyright Crossref | CC0\n[EDIT 6th June 2022 - all references are now open by default with the March 2022 board vote to remove any restrictions on reference distribution].\nNo member metadata will be affected by our adopting the open data provision of POSI. The only data that would be affected is data generated by Crossref itself.\nHowever, the adoption of this principle would likely have an effect on our decisions about future services. For example, under this principle we would not launch any new services where the data was not freely reusable or the copyright of the data was not CC0.\nConclusion and Next steps So again we face the paradox - we are announcing something that is simultaneously insignificant and important. It is insignificant in that we are simply saying that we will continue to do what we have largely been doing since Crossref was founded. But it is important because, in codifying what we have been doing, we are also confirming that these principles actually worked. That they were essential to building the trust that allowed us to function over the past twenty years, and they will continue to be essential in the future - as we look to work with existing organisations to strengthen current infrastructures, and work with new stakeholders to develop new infrastructures.\nSo much of the work in building scholarly infrastructure is about building trust. We would love to see other organisations and services adopt POSI as well. Doing so would help us to collaborate more efficiently by allowing us to confirm from the outset that our fundamental values align. And having a set of verifiable commitments that we can point to will also help build the community\u0026rsquo;s trust in our respective organisations and services.\nAnd this brings us to an important point. Although POSI might have been inspired by Crossref, POSI is not a “Crossref thang” and it never has been. The movement to create open scholarly infrastructures and to define and clarify the ground rules within which they operate has become a much broader community concern.\nTo this end, we’ve worked with some sibling infrastructure organisations—such as Dryad and ROR—as well as the original authors of POSI to create a website where we could host the list of principles independent of the original blog post and independent of any single organisation:\nopenscholarlyinfrastructure.org Minimally, this provides a place for anybody who wants to link to or cite POSI - either because they are endorsing them, or because they are simply discussing them. 
If we see enough activity of this type, then the site could evolve to become a register of those organisations and services who have formally adopted POSI and a place where they can link to their self-assessments against the principles.\nThe community promoting, discussing and applying POSI has long since grown beyond the original authors of the POSI blog post. And it is also much larger than any single organisation. Our hope is that this website encourages that growth.\nAnd, of course, in addition to the external outreach and coordination, Crossref still has internal work to do in addressing the outstanding issues that were raised in our own self-assessment above. We need to increase our contingency funds. We need to publish a patent non-assertion covenant. We need to open source our core software. And we need to clarify our metadata license information and make it explicit that Crossref waives copyright (using CC-0) for any metadata generated by Crossref. And, finally, as Crossref expands and starts working with different stakeholders, we will need to adjust our governance and the composition of our board accordingly. We will, of course, post updates here as we make progress on addressing these areas.\n2020 marked Crossref’s 20th birthday. What a grim year to have an anniversary. But we are, at least, ending it on a little bit of a high. We are delighted that the issue of open scholarly infrastructure has become so prominent in the community. And we are eager to help strengthen and extend this infrastructure. The decision by Crossref’s board to adopt POSI is the equivalent of Crossref finally adopting a written constitution. And it is a fitting launch to our next twenty years.\n", "headings": ["TL;DR","Background and Origins","Motivations. Why Now?","Because it is the right thing to do for those that currently depend on Crossref","Because it could help make the creation of new, sustainable, open scholarly infrastructure easier and less expensive","Adopting the Principles: Plus ça change","Implications","So how does Crossref currently meet POSI?","Governance","Sustainability","Insurance","Governance","🟢 Coverage across the research enterprise","🟢 Non-discriminatory membership","🟢 Transparent operations","🟢 Cannot lobby","🟢 Living will","🟢 Formal incentives to fulfil mission \u0026amp; wind-down","🔴 Stakeholder Governed","Sustainability","🟢 Time-limited funds are used only for time-limited activities","🟢 Goal to generate surplus","🟡 Goal to create contingency fund to support operations for 12 months","🟢 Mission-consistent revenue generation","🟢 Revenue based on services, not data","Insurance","🟢 Available data (within constraints of privacy laws)","🟡 Patent non-assertion","🟡 Open source","Conclusion and Next steps"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/", "title": "Working groups", "subtitle":"", "rank": 1, "lastmod": "2020-11-24", "lastmod_ts": 1606176000, "section": "Working groups", "tags": [], "description": "Advisory groups and working groups help us to stay focused and inclusive. We also have more formal committees that have a role specified in the by-laws or have been set up by the board with a particular remit. 
We\u0026rsquo;ve also listed a few \u0026lsquo;interest groups\u0026rsquo; and these are the least formal, like community calls, where participants can be involved ad hoc and participate sporadically.\nThese groups are a good way for people across the community to get involved in Crossref\u0026rsquo;s work to support and improve our scholarly infrastructure.", "content": "Advisory groups and working groups help us to stay focused and inclusive. We also have more formal committees that have a role specified in the by-laws or have been set up by the board with a particular remit. We\u0026rsquo;ve also listed a few \u0026lsquo;interest groups\u0026rsquo; and these are the least formal, like community calls, where participants can be involved ad hoc and participate sporadically.\nThese groups are a good way for people across the community to get involved in Crossref\u0026rsquo;s work to support and improve our scholarly infrastructure. They are slightly different as described below but both are open to non-members and members alike.\nAdvisory groups We have advisory groups for established services or ongoing themes to get input and advice from our members and other stakeholders. Each advisory group has a statement of purpose and should represent our broad membership. Each group has a chair and staff facilitator who together set agendas, organize calls, and ensure that the group fulfills its purpose. Each group has an email list and meets regularly via conference call, although the frequency varies by group. These groups tend to be permanent and long-term.\nAdvisory group | Facilitator | Chair | Status\nSimilarity Check | Lena Stoll, Madhura Amdekar | Lauren Flintoft, IOP Publishing | Active\nCrossmark | Martyn Rittman | TBC | Inactive\nFunder Community | Kora Korzec | TBC | Active\nPreprints community | Martyn Rittman | Oya Rieger, Ithaka | Active\nEvent Data | Martyn Rittman | John Chodacki, California Digital Library | Inactive\nWorking groups Working groups are more short-term than advisory groups and are set up for a specific task or ad-hoc purpose. They are usually set up to discuss and scope a specific idea, or oversee prototypes and pilots that could develop into new features. Working groups can also be set up jointly with other organizations to enable us to collaborate on projects.\nWorking groups don\u0026rsquo;t always have a Chair but they bring stakeholders together. A working group either disbands when its work is finished or becomes an advisory group if and when the board approves the idea or prototype as a production service, feature, or record type.\nWorking group | Facilitator | Chair | Status\nConferences and projects | Patricia Feeney | Aliaksandr Birukou, Springer Nature | Inactive\nDistributed usage logging | Martyn Rittman | Esther Heuver, Elsevier | Retired\nLinked clinical trials | N/A | Daniel Shanahan, BioMed Central | Inactive\nStandards | Patricia Feeney | | Retired\nTaxonomies | Rachael Lammey | | Retired\nInterest groups Interest groups are more like community discussion forums with fairly low commitment needed from participants, where a large group of people meet to discuss a range of issues and can bring any topic under the theme to Crossref. 
They are the least formal of all our groups and vary in call frequency and scope.\nInterest group | Facilitator | Chair\nBooks | Kora Korzec | David Woodworth, OCLC\nMetadata Practitioners | Patricia Feeney | n/a ", "headings": ["Advisory groups","Working groups","Interest groups"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/community/working-for-you/", "title": "Working for you", "subtitle":"", "rank": 1, "lastmod": "2020-11-08", "lastmod_ts": 1604793600, "section": "Get involved", "tags": [], "description": "Innovation never slows, and the ways research is communicated are no different. Crossref has evolved to work with and for many new and emerging organizations and innovations. Find out how our services can work for you, take a look at our current members’ metadata coverage and get in touch if you\u0026rsquo;d like to see us develop something new.\nFor Publishers Increase discoverability; put your content on the map so that it’s easy to find, cite, link, assess, and reuse.", "content": "Innovation never slows, and the ways research is communicated are no different. Crossref has evolved to work with and for many new and emerging organizations and innovations. Find out how our services can work for you, take a look at our current members’ metadata coverage and get in touch if you\u0026rsquo;d like to see us develop something new.\nFor Publishers Increase discoverability; put your content on the map so that it’s easy to find, cite, link, assess, and reuse. We help you create persistent links between research outputs such as articles, books, references, data, components, versions, and more. Read more about our services for Publishers.\nFor Editors Your decisions influence what research is communicated and how. Demonstrate your editorial integrity with tools that help you assess a paper’s originality, and properly label and connect updates, corrections, and retractions. Read more about what we offer Editors.\nFor Researchers Find other researchers’ work and let them find yours. Through registering DOIs, we collect and share comprehensive information about research such as citations, mentions, and other relationships. Thousands of tools and services then harness this information\u0026mdash;for search, discovery, and measurement\u0026mdash;through our open APIs. Read more about how Researchers benefit.\nFor Developers If you develop tools and software to find, cite, link, and/or assess research outputs, you can integrate our metadata about scholarly content into your project, through our open APIs. Read more about our tools for Developers.\nFor Research Funders Connect your grants and grantees with their published outputs. We collect and share metadata about published research\u0026mdash;such as Funder IDs, ORCID iDs, licenses and clinical trials\u0026mdash;all of which helps funders measure reach and return. Read more about our services for Funders. Crucially, funders are joining Crossref to register their grants so that they can more easily and accurately track the outputs connected to the research they support.\nFor Librarians Enhance your metadata and connect your discovery and linking services with our metadata records\u0026mdash;they’re all available in XML and JSON through open APIs and search. Read on for our services for Libraries.\nFor Preprint Servers Add your preprints, working papers, and more to the corpus of works that are easy to find, cite, link, assess, and reuse. 
With a persistent identifier and related metadata, you can create links between different versions of the same document, including those published in journals and elsewhere. Authors can easily cite their preprints in articles, in grant applications, and for research assessment. Read more about our support for preprints and related metadata.\nFor Research Institutions Your institution can use Crossref to help with the management, analyses, and reporting of its research activities. Through our open metadata, you can support researchers to bring their ideas to life, identify groups to work with, demonstrate compliance, and share and report on outcomes. Read more about how we help Research Institutions.\n", "headings": ["For Publishers","For Editors","For Researchers","For Developers","For Research Funders","For Librarians","For Preprint Servers","For Research Institutions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/calling-all-24-hour-pid-party-people/", "title": "Calling all 24-hour (PID) party people!", "subtitle":"", "rank": 1, "lastmod": "2020-10-13", "lastmod_ts": 1602547200, "section": "Blog", "tags": [], "description": "While we wish we could be together in person to celebrate the fifth PIDapalooza, there\u0026rsquo;s an upside to moving it online: now everyone can participate in the universe\u0026rsquo;s best PID party! With 24 hours of non-stop PID programming, you\u0026rsquo;ll be able to come to the party no matter where you happen to be.\nSend us your ideas for #PIDapalooza21 Now is your chance to share your work in the #PIDapalooza21 spotlight!", "content": "While we wish we could be together in person to celebrate the fifth PIDapalooza, there\u0026rsquo;s an upside to moving it online: now everyone can participate in the universe\u0026rsquo;s best PID party! With 24 hours of non-stop PID programming, you\u0026rsquo;ll be able to come to the party no matter where you happen to be.\nSend us your ideas for #PIDapalooza21 Now is your chance to share your work in the #PIDapalooza21 spotlight! We\u0026rsquo;re seeking proposals for short, interactive sessions about what you are doing––or want to do––with persistent identifiers and the communities that love and use them. #PIDapalooza21 will feature sessions around the broad theme of PIDs and Open Research Infrastructure, focusing on the following areas:\nTheme 1. PIDs 101 For PID beginners! You\u0026rsquo;ve got just 30 minutes to get attendees up to speed on a PID or PIDs. Make it fast! Make it fact-filled! Make it fun!\nTheme 2. PID Communities International Have you always wanted to host a Spanish-language PID session, or bring together PID people in the humanities? Tell us how you\u0026rsquo;d connect with PID peers around the world!\nTheme 3. PID Success Stories There\u0026rsquo;s nothing better than hearing about what\u0026rsquo;s working in the PID world––and why! Share your success stories so we can all benefit from them.\nTheme 4. PID Party! It wouldn\u0026rsquo;t be PIDapalooza without the party sessions, so be creative! Help us make this the best PID party ever!\nPropose a session now! The call for proposals will be open until October 30. Submit your PIDea now!\n*Note: The PIDapalooza submission form uses Google. If you are unable to access Google Forms, email your session idea.\nGet the full low-down on #PIDapalooza21 at the PIDapalooza website.\n", "headings": ["Send us your ideas for #PIDapalooza21","Theme 1. PIDs 101","Theme 2. PID Communities International","Theme 3. PID Success Stories","Theme 4. 
PID Party!","Propose a session now!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/pidapalooza/", "title": "PIDapalooza", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ease-council-post-rachael-lammey-on-the-research-nexus/", "title": "EASE Council Post: Rachael Lammey on the Research Nexus", "subtitle":"", "rank": 1, "lastmod": "2020-10-12", "lastmod_ts": 1602460800, "section": "Blog", "tags": [], "description": "This blog was initially posted on the European Association of Science Editors (EASE) blog: \u0026ldquo;EASE Council Post: Rachael Lammey on the Research Nexus\u0026rdquo;. EASE President Duncan Nicholas accurately introduces it as a whole lot of information and insights about metadata and communication standards into one post\u0026hellip;\nI was given a wide brief to decide on the topic of my EASE blog, so I thought I\u0026rsquo;d write one that tries to encompass everything - I\u0026rsquo;ll explain what I mean by that.", "content": "This blog was initially posted on the European Association of Science Editors (EASE) blog: \u0026ldquo;EASE Council Post: Rachael Lammey on the Research Nexus\u0026rdquo;. EASE President Duncan Nicholas accurately introduces it as a whole lot of information and insights about metadata and communication standards into one post\u0026hellip;\nI was given a wide brief to decide on the topic of my EASE blog, so I thought I\u0026rsquo;d write one that tries to encompass everything - I\u0026rsquo;ll explain what I mean by that.\nIn the past, Crossref has had the opportunity to talk to EASE members about the importance of registering content whose metadata contains important information related to the article. Richer metadata helps to connect the content to other key information such as who wrote it, who it was funded by, the relevant license, the research it cites, any updates to the work such as corrections and retractions, and the data that underpin the research. The use of open persistent identifiers like DOIs, funder IDs, ORCID iDs and ROR IDs are always recommended.\nSuch rich and connected metadata also helps discoverability of the published research in a different way than just direct access; if you can find something based on looking at the publications related to a particular funder, author, or institution, then there are more ways to come across what you\u0026rsquo;re looking for. Making links between objects underpinning the research also helps put the research in context and can help further research by making connections to other valuable information that may have been more difficult to make otherwise.\nI\u0026rsquo;ve mentioned the Research Nexus in the title of this post. It\u0026rsquo;s achieved by declaring relationships between publications and other associated research objects, and from those objects to related publications. The metadata that reveals relationships between research objects can be as informative as the objects themselves. These relationships can assert certain facts that may not be otherwise obvious: this is our goal with the Research Nexus. 
These relationships and assertions need to exist not just on the web pages of the outputs, but also reflected in a standard way in the metadata so that the information is computer-readable and can be used at scale. As Jennifer Lin, who coined the term, explains:\n\u0026ldquo;Researchers are adopting new tools that create consistency and shareability in their experimental methods. Increasingly, these are viewed as key components in driving reproducibility and replicability. They provide transparency in reporting key methodological and analytical information. They are also used for sharing the artefacts which make up a processing trail for the results: data, material, analytical code, and related software on which the conclusions of the paper rely. Where expert feedback was also shared, such reviews further enrich this record.\u0026rdquo;\nIn her Crossref blog, Jennifer goes on to give some examples, including:\nLinking to an entire collection of methods and video protocols via Protocols.io Linking to software and peer reviews in JOSS Linking to preprint, data, code, source code, peer reviews in Gigascience I\u0026rsquo;d include an additional example of linking research to the grant using the grant identifier and associated metadata from the funding section of this PLOS paper (read more about the example from EuroPMC who register grants with Crossref for Wellcome).\nThese links can be established by adding them into the Crossref relationship metadata schema. The information is then made available to anyone via our open APIs, so that they can easily see and use the information.\nIn all of these, publishers and other parties are linking to associated research outputs to support the reproducibility and discoverability of content.\nThe reproducibility point is worth reiterating; EASE has always supported projects to maintain high standards around the review of research, publication standards and ethics, and the reduction of research waste. And connecting articles to data, preprints, protocols, and peer reviews, and making the relationships open for analysis will help achieve this.\nWe also know that there are work and cost involved in establishing these links, and we\u0026rsquo;re working on ways to lower the barriers in doing so by:\nRevisiting what we charge to encourage best practice. Starting in 2020, we have removed fees for registering vital information on corrections, retractions and other Crossmark metadata. This is timely in light of the updates to the EASE Standardised Retraction form. We\u0026rsquo;re also working to remove fees for translations and versions that are linked together by the appropriate relationship metadata so that publishers posting translations or different versions of an article don\u0026rsquo;t have to pay multiple times for these. Our Membership \u0026amp; Fees Committee is currently reviewing other ways we can support publishers keen to make these connections. Finding ways to make it easier for publishers to collect this information from authors e.g. submission systems integrations with data repositories to collect robust information on article/data links. Allowing the registration of peer review metadata for content other than journal articles e.g. books, preprints (coming soon). Making it easier for publishers to register this information with us at Crossref via the provision of simple to use tools, interfaces and reporting. 
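To make the point about open APIs concrete, here is a minimal, illustrative sketch (not Crossref tooling) of how anyone could look up the relationship metadata deposited for a single registered work via the public Crossref REST API works endpoint. The DOI and contact address below are placeholders, and the presence and exact shape of the relation field vary by record, so treat the field handling as an assumption to check against the live API.
# Minimal sketch: fetch one work's record from the public Crossref REST API
# and print any declared relationships (preprint, data, peer review links, etc.).
# Placeholder DOI; the 'relation' field only appears when relationship
# metadata has been deposited for that record.
import requests

doi = '10.5555/12345678'  # placeholder DOI
response = requests.get('https://api.crossref.org/works/' + doi,
                        params={'mailto': 'you@example.org'},  # polite-pool etiquette
                        timeout=30)
response.raise_for_status()
work = response.json()['message']

for relation_type, targets in work.get('relation', {}).items():
    for target in targets:
        # each target usually carries an identifier and its type (often a DOI)
        print(relation_type, target.get('id-type'), target.get('id'))
These same records are what thousands of downstream tools consume, which is why depositing a relationship as metadata, rather than only mentioning it on the article page, matters.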
The outputs of the research process, such as journal articles, don\u0026rsquo;t exist in isolation - you only have to look at the interest in the corpus of COVID-19 publications, preprints and associated data to see this. This thinking is also supported by campaigns like Metadata 2020 advocating for \u0026ldquo;richer, connected, and reusable, open metadata will advance scholarly pursuits for the benefit of society.\u0026rdquo; The relationships revealed by the Research Nexus may one day help progress research to realise benefits that help us all, providing we all make efforts to effectively support them. More to come\u0026hellip;\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/event-data/", "title": "Event Data", "subtitle":"", "rank": 1, "lastmod": "2020-10-06", "lastmod_ts": 1601942400, "section": "Documentation", "tags": [], "description": "When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it. Before the expansion of the Internet, most discussion about scholarly content stayed within scholarly content, with articles citing each other. With the growth of online platforms for discussion, publication and social media, we have seen discussions extend into new, non-traditional venues.", "content": " When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it. Before the expansion of the Internet, most discussion about scholarly content stayed within scholarly content, with articles citing each other. With the growth of online platforms for discussion, publication and social media, we have seen discussions extend into new, non-traditional venues. Crossref Event Data captures this activity and acts as a hub for the storage and distribution of this data. An event may be a citation in a dataset or patent, a mention in a news article, Wikipedia page or on a blog, or discussion and comment on social media.\nHow Event Data works Event Data monitors a range of sources, chosen for their importance in scholarly discussion. We make events available via an API for users to access and interpret. Our aim is to provide context to published works and connect diverse parts of the dialogue around research. Learn more about the sources from which we capture events.\nThe Event Data API provides raw data about events alongside context: how and where each event was collected. Users can process this data to suit their requirements.\nWhat is Event Data for? Event Data can be used for a number of different purposes:\nAuthors can find out where their work has been reused and commented on. Readers can access more context around published research, including links to supporting documents and commentary that aren’t in a journal article. Publishers and funders can assess the impact of published research beyond citations. Service providers can enrich, analyze, interpret and report via their own tools Data intelligence and analysis organisations can access a broad range of sources with commentary relevant to research articles. Event Data is available via our API - get started with some queries.\nAnyone can contribute to Event Data by mentioning the DOI or URL of a Crossref-registered work in one of the monitored sources. 
We also welcome third parties who wish to send events or contribute to code that covers new sources. Learn more about contributing to or using Crossref Event Data.\nAgreement and fees for Event Data Event Data is a public API, giving access to raw data, and there are no fees. In the future we will introduce a service-based offering with additional features and benefits. Learn more about the Event Data terms.\nWhat is an event? In the broadest sense, an event is any time someone refers to a research article with a registered DOI anywhere online. Ideally we would capture all events, but there are limitations:\nWe can’t monitor the entire Internet, and instead check sites that are most likely to discuss academic content. There are still venues that could be relevant and that we do not cover yet. Users online refer to academic content in different ways, sometimes using the DOI but more often using the URL or just the article name. We try to decode mentions of DOIs or a publisher website to get a match to an article but it isn’t always possible. This means we may miss mentions of an article even from sources we are tracking. At present we are not able to track events where no link is included and only the title or other part of the metadata is mentioned.\nFor Crossref Event Data, an event consists of three parts:\nA subject: where was the research mentioned? (such as Wikipedia) An object: which research was mentioned? (a Crossref or DataCite DOI) A relationship: how was the research mentioned? (such as cites or discusses)\nWe determine the relationship from the source of the event; it is an indication of how the subject and object are linked, based on broad categories.\nSoftware called agents collect events from various data sources. Most agents are written and operated by Crossref with some code written by our partners. Possible events are passed to the percolator software, which tries to match the event with an object DOI. This process is fully automated.\nWe perform periodic automated checks on the integrity of the data and update event types. Deduplication is also part of the process performed by the percolator.\nTo provide transparency, we keep an evidence record about how we matched the object to the subject. Learn more about transparency in Event Data, including links to the open source code and data.\nThe following agents currently collect data (agent/data source and event type):\nCrossref metadata: relationships and references to datasets and DOI registration agencies other than Crossref (e.g. DataCite)\nDataCite metadata: links to Crossref registered content\nFaculty Opinions: recommendations of research publications\nHypothes.is: annotations in Hypothes.is\nNewsfeed: discussed in blogs and media\nReddit: discussed on Reddit\nReddit Links: discussed on sites linked to in subreddits\nStack Exchange Network: discussed on StackExchange sites\nWikipedia: references on Wikipedia pages\nWordpress.com: discussed on Wordpress.com sites\nPatent Event Data was historically collected from The Lens. Events from Twitter were collected until February 2023. Note that all Twitter events have been removed from search results in accordance with our contract with Twitter; see the Community Forum for more information.\nWhat Event Data is not By providing Event Data, Crossref provides an open, transparent information source for the scholarly community and beyond. It is important to understand, however, that it may not be suitable for all potential users.
Here are some of the limitations:\nIt is not a service that provides metrics, collated reports, or offers data analysis. Crossref does not build applications or website plugins for Event Data, for example for displaying results on publisher websites. We do, however, welcome third parties who wish to develop such platforms. Event Data collection is fully automated and therefore may contain errors or be incomplete, we cannot provide any guarantees in this regard and users must assess the quality of the data required for their particular use case. There may also be delays between an event occurring and it appearing in Event Data. Events might be missed due to the limitations of the collection algorithms we use. There is also a small possibility that we link an event to the wrong object. Event Data does not cover every source of academic discussion. In some cases this is because there is no public access to the data; in others it is because we have not had the capacity to build an agent. While we hope the data is useful for many purposes, we encourage users to be responsible and exercise caution when making use of Event Data.\n", "headings": ["When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it.","How Event Data works ","What is Event Data for? ","Agreement and fees for Event Data ","What is an event? ","What Event Data is not "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2020-board-election/", "title": "2020 Board Election", "subtitle":"", "rank": 1, "lastmod": "2020-09-28", "lastmod_ts": 1601251200, "section": "Blog", "tags": [], "description": "This year, Crossref’s Nominating Committee assumed the task of developing a slate of candidates to fill six open board seats. We are grateful that in the midst of a challenging year, we received over 70 expressions of interest from all around the world, a 40% increase from last year’s response. It was an extraordinary pool of applicants and a testament to the strength of our membership community.\nThere are six seats open for election (two large, four small), and the Nominating Committee is pleased to present the following slate.", "content": "This year, Crossref’s Nominating Committee assumed the task of developing a slate of candidates to fill six open board seats. We are grateful that in the midst of a challenging year, we received over 70 expressions of interest from all around the world, a 40% increase from last year’s response. 
It was an extraordinary pool of applicants and a testament to the strength of our membership community.\nThere are six seats open for election (two large, four small), and the Nominating Committee is pleased to present the following slate.\nThe 2020 slate Candidate organizations, in alphabetical order, for the Small category (four seats available):\nBeilstein-Institut, Wendy Patterson Korean Council of Science Editors, Kihong Kim OpenEdition, Marin Dacos Scientific Electronic Library Online (SciELO), Abel Packer, The University of Hong Kong, Jesse Xiao Candidate organizations, in alphabetical order, for the Large category (two seats available):\nAIP Publishing, Jason Wilde, Oxford University Press, James Phillpotts, Taylor \u0026amp; Francis, Liz Allen Here are the candidates\u0026rsquo; organizational and personal statements You can be part of this important process, by voting in the election If your organization is a voting member in good standing of Crossref as of September 14, 2020, you are eligible to vote when voting opens on September 30, 2020.\nHow can you vote? On September 30, 2020, your organization\u0026rsquo;s designated voting contact will receive an email with the Formal Notice of Meeting and Proxy Form with concise instructions on how to vote. You will also receive a user name and password with a link to our voting platform.\nThe election results will be announced at LIVE20 virtual meeting on November 10, 2020.\n", "headings": ["The 2020 slate","Here are the candidates\u0026rsquo; organizational and personal statements","You can be part of this important process, by voting in the election","How can you vote?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/bryan-vickery/", "title": "Bryan Vickery", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/ludo-waltman/", "title": "Ludo Waltman", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/open-abstracts-where-are-we/", "title": "Open Abstracts: Where are we?", "subtitle":"", "rank": 1, "lastmod": "2020-09-25", "lastmod_ts": 1600992000, "section": "Blog", "tags": [], "description": "The Initiative for Open Abstracts (I4OA) launched this week. The initiative calls on scholarly publishers to make the abstracts of their publications openly available. More specifically, publishers that work with Crossref to register DOIs for their publications are requested to include abstracts in the metadata they deposit in Crossref. These abstracts will then be made openly available by Crossref. 39 publishers have already agreed to join I4OA and to open their abstracts.\n", "content": "The Initiative for Open Abstracts (I4OA) launched this week. The initiative calls on scholarly publishers to make the abstracts of their publications openly available. More specifically, publishers that work with Crossref to register DOIs for their publications are requested to include abstracts in the metadata they deposit in Crossref. 
These abstracts will then be made openly available by Crossref. 39 publishers have already agreed to join I4OA and to open their abstracts.\nWhere are we at the moment in terms of openness of abstracts? For an individual publisher working with Crossref, the percentage of the publisher’s content for which an abstract is available in Crossref can be found in Crossref’s Participation Reports. The chart presented below gives the overall picture (as of September 1, 2020) for medium-sized and large publishers working with Crossref. The vertical axis shows the number of journal articles of a publisher in the period 2018-2020. Because of the large differences between publishers in the number of articles they publish, this axis has a logarithmic scale. The horizontal axis shows the percentage of the articles of a publisher for which an abstract is available in Crossref. The orange dots represent publishers that have agreed to join I4OA. The publishers colored in blue have not yet agreed to join the initiative.\nA similar chart was published a few months ago in this blog post on the importance of open abstracts. Comparing the above chart with the one published a few months ago, the first effects of I4OA are already visible. While for most publishers the percentage of abstracts available in Crossref has hardly changed, it has increased from 11% to 95% for the Royal Society, one of the founding publishers of I4OA. This reflects the efforts the Royal Society has made over the past months to improve the availability of abstracts in Crossref for its content, not only for new content but also for existing content. For SAGE, another founding publisher of I4OA, the percentage of abstracts available in Crossref has increased from 38% to 50%. A further increase can be expected to take place in the coming months. The third founding publisher of I4OA, Hindawi, has remained at a stable level, with abstracts being available for 97% of its content.\nThe above chart shows that many publishers supporting I4OA are already making abstracts available in Crossref. Other publishers do not yet make abstracts available in Crossref but have nevertheless decided to join I4OA. This is the case for Frontiers, PLOS, and Karger, and also for several smaller publishers not visible in the above chart, such as EMBO and Ubiquity Press. These publishers are currently adjusting their workflows and will start submitting abstracts to Crossref soon.\nOf the publishers that have not yet joined I4OA, some may not yet be aware of I4OA, while others may need more time to decide whether they will join the initiative. As can be seen in the above chart, most publishers that have not yet joined I4OA do not make abstracts available in Crossref at the moment. However, some publishers have not yet joined I4OA even though they do make abstracts available in Crossref. We hope these publishers will join I4OA soon. By joining the initiative, these publishers would formalize their commitment to openness of abstracts.\nNone of the publishers in the above chart makes abstracts available in Crossref for 100% of its journal content. Some publishers, such as Copernicus and Hindawi, are close to 100%, but even these publishers have some content for which no abstract is available. Importantly, this does not necessarily mean that publishers have failed to submit abstracts to Crossref for some of their content. Instead, it may simply mean that some of their journal content does not have an abstract. 
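One way to check a single member's figure outside of Participation Reports is to count works with and without abstracts via the REST API. Here is a rough sketch in Python (using the requests library), assuming the api.crossref.org has-abstract filter; the member ID and contact address are placeholders.

```python
import requests

MEMBER_ID = 9999  # placeholder; look up a real ID via https://api.crossref.org/members?query=...
BASE = f"https://api.crossref.org/members/{MEMBER_ID}/works"

def count(filters: str) -> int:
    # rows=0 returns only the total-results count, not the records themselves.
    params = {"filter": filters, "rows": 0, "mailto": "you@example.org"}
    resp = requests.get(BASE, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]["total-results"]

window = "type:journal-article,from-pub-date:2018-01-01,until-pub-date:2020-12-31"
total = count(window)
with_abstract = count(window + ",has-abstract:1")
share = 100 * with_abstract / total if total else 0
print(f"{with_abstract} of {total} articles ({share:.1f}%) have an abstract in Crossref")
```

Keep in mind, as explained next, that a figure below 100% is not necessarily a sign of missing deposits.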
Research articles usually have an abstract, but many other types of content published in journals, such as book reviews, letters, editorials, and corrections, often do not have an abstract. For most publishers, it is therefore impossible to make abstracts available for 100% of their content. Moreover, since Crossref does not distinguish between different types of content published in journals, we cannot provide separate statistics on the availability of abstracts for different types of journal content.\nAs an example, let’s consider Brill, a publisher that has joined I4OA and that mainly focuses on the humanities and social sciences. Abstracts are available in Crossref for 57% of Brill’s content in the period 2018-2020. This may suggest that Brill has failed to submit abstracts to Crossref for a significant share of its content. However, when we look up journal publications of Brill in 2018 and 2019 in the Web of Science database, abstracts turn out to be available for only 68% of these publications. Assuming that Web of Science has more or less complete coverage of abstracts, this seems to indicate that Brill has already submitted most of its abstracts to Crossref. In fact, Web of Science shows that about a quarter of the publications of Brill are book reviews and that hardly any of these book reviews has an abstract. This illustrates why some publishers, for instance those that publish many book reviews, cannot be expected to get close to 100% availability of abstracts.\nDespite the above caveats, it is clear that there is still a long way to go in improving the availability of abstracts in Crossref. As of September 1, 2020, abstracts were available for 21% of all journal articles in Crossref in the period 2018-2020. In Web of Science (Science Citation Index Expanded, Social Sciences Citation Index, and Arts \u0026amp; Humanities Citation Index), 86% of all journal publications in 2018 and 2019 that have a DOI also have an abstract.\nPublishers who wish to distribute their abstracts openly through Crossref can include them in the normal content registration process. They can send XML to Crossref (using Crossref’s metadata deposit schema), either directly via HTTPS POST or via the Crossref admin system. For back-content, a resubmission of the full XML is required. In addition, various tools can be used to deposit abstracts. Open Journal Systems (OJS) has a plugin that supports the depositing of abstracts. Metadata Manager also facilitates this, but only for journal articles. Crossref’s web deposit form does not yet support abstracts, but Crossref is working on this.\nTo keep track of the progress publishers are making in depositing abstracts in Crossref, we plan to publish regular updates of the chart presented above on the I4OA website. We look forward to witnessing the impact of I4OA in the coming months!\nThank you to guest authors Bianca Kramer and Ludo Waltman, as well as the other founding members of I4OA.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/get-involved-with-peer-review-week-2020-and-register-your-peer-reviews-with-crossref/", "title": "Get involved with Peer Review Week 2020 and register your peer reviews with Crossref", "subtitle":"", "rank": 1, "lastmod": "2020-09-21", "lastmod_ts": 1600646400, "section": "Blog", "tags": [], "description": "Just when you thought 2020 couldn’t go any faster, it’s Peer Review week again! 
Peer Review is such an important part of the research process and highlighting the role it plays is key to retaining and reinforcing trust in the publishing process. ", "content": "Just when you thought 2020 couldn’t go any faster, it’s Peer Review week again! Peer Review is such an important part of the research process and highlighting the role it plays is key to retaining and reinforcing trust in the publishing process. As the Peer Review Week team states:\n“Maintaining trust in the peer review decision-making process is paramount if we are to solve the world’s most pressing problems. This includes ensuring that the peer review process is transparent (easily discoverable, accessible, and understandable by anyone writing, reviewing, or reading peer-reviewed content) and that everyone involved in the process receives the training and education needed to play their part in making it reliable and trustworthy.”\nA key way that publishers can make peer reviews easily discoverable and accessible is by registering them with Crossref - creating a persistent identifier for each review, linking them to the relevant article, and providing rich metadata to show what part this item played in the evolution of the content. It also gives a way to acknowledge the incredible work done by academics in this area. For Peer Review week last year, Rosa and Rachael from Crossref created this short video to explain more.\nFast forward to 2020 and over 75k peer reviews have now been registered with us by a range of members including Wiley, Peer J, eLife, Stichting SciPost, Emerald, IOP Publishing, Publons, The Royal Society and Copernicus. We encourage all members to register peer reviews with us - and you can keep up to date with everyone who is using this API query. (We recommend installing a JSON viewer for your browser to view these results if you haven’t done so already).\nRegister peer reviews and contribute to the Research Nexus At Crossref, we talk a lot about the research nexus, and it’s a theme that you’re going to hear a lot more about from us in the coming months and years. The published article no longer has the supremacy it once did, and other outputs - and inputs - have increasing importance. Linked data and protocols are key for reproducibility, peer reviews increase trust and show the evolution of knowledge, and other research objects help increase the discoverability of content. Registering these objects and stating the relationships between them support the research nexus.\nPeer reviews in particular are key to demonstrating that the scholarly record is not fixed - it’s a living entity that moves and changes over time. Registering peer reviews formally integrates these objects into the scholarly record and makes sure the links between the reviews and the article both exist and persist over time. It allows analysis or research on peer reviews and highlights richer discussions than those provided by the article alone, showing how discussion and conversation help to evolve knowledge. In particular, post-publication reviews highlight how the article is no longer the endpoint - after publication, research is further validated (or not!) and new ideas emerge and build on each other. 
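For those who prefer a script to a browser JSON viewer, here is a rough sketch in Python (using the requests library) of the same kind of query, assuming the public REST API's peer-review work type; the contact address is a placeholder.

```python
import requests

# Registered peer reviews are exposed as their own work type in the REST API.
params = {"rows": 5, "mailto": "you@example.org"}
resp = requests.get("https://api.crossref.org/types/peer-review/works", params=params, timeout=30)
resp.raise_for_status()
message = resp.json()["message"]

print("Peer review records registered so far:", message["total-results"])
for review in message["items"]:
    # Each review has its own DOI; the article it evaluates is typically linked
    # via an "is-review-of" relationship in the record.
    reviewed = review.get("relation", {}).get("is-review-of", [])
    targets = ", ".join(rel.get("id", "?") for rel in reviewed)
    print(review["DOI"], "->", targets or "no linked item in this record")
```

Because each review record carries its own DOI and points back at the work it evaluates, including post-publication reviews, the evolution of an article can be traced from the metadata alone.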
You can see a real-life example of this from F1000 in a blog post written by Jennifer Lin a few years ago.\nAs we’ve said before:\nArticle metadata + peer review metadata = a fuller picture of the evolution of knowledge Registering peer reviews also provides publishing transparency and reviewer accountability, and enables contributors to get credit for their work. If peer review metadata includes ORCID IDs, our ORCID auto-update service means that we can automatically update the author’s ORCID record (with their permission), while our forthcoming schema update will take this even further, making CRediT roles available in our schema.\nHow to register peer reviews with Crossref You need to be a member of Crossref in order to register your peer reviews with us and you can currently register peer reviews by sending us your XML files. Unfortunately, you can’t currently register peer reviews using our helper tools like the OJS plugin, Metadata Manager, or the web deposit form. You can find out more about registering peer reviews on our website - we even have a range of markup examples. We know that there’s a range of outputs from the peer review process, and our schema allows you to identify many of them, including referee reports, decision letters, and author responses. You can include outputs from the initial submission only, or cover all subsequent rounds of revisions, giving a really clear picture of the evolution of the article. Members can even register content for discussions after the article was published, such as post-publication reviews.\nGet involved with Peer Review Week 2020 We’re looking forward to seeing the debate sparked by Peer Review Week and hearing from our members about this important area. You can get involved by checking out the Peer Review Week 2020 website or following @PeerRevWeek and the hashtags #PeerRevWk20 #trustinpeerreview on Twitter.\nWe’re excited to see what examples of the evolution of knowledge will be discoverable in registered and linked peer reviews this time next year!\n", "headings": ["Register peer reviews and contribute to the Research Nexus","How to register peer reviews with Crossref","Get involved with Peer Review Week 2020"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/peer-review/", "title": "Peer Review", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/record-types/", "title": "Record Types", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-at-the-frankfurt-digital-book-fair/", "title": "Crossref at the Frankfurt Digital Book Fair", "subtitle":"", "rank": 1, "lastmod": "2020-09-17", "lastmod_ts": 1600300800, "section": "Blog", "tags": [], "description": "Frankfurt Book Fair (#FBM20) will be online this year since people are really not traveling right now. This special edition of #FBM20 will have an extensive digital program in which we will be participating. So you can hang out with us from anywhere in the world! 
", "content": "Frankfurt Book Fair (#FBM20) will be online this year since people are really not traveling right now. This special edition of #FBM20 will have an extensive digital program in which we will be participating. So you can hang out with us from anywhere in the world! Similar to the in-person event of years past, members of our technical support, membership, and outreach teams will be on hand at our online Crossref Cafe. Here are our Crossref Cafe hours: Support Membership Community outreach Product Wed 14 Oct 8:00 - 9:00 UTC Paul Sally Vanessa Bryan Wed 14 Oct 14:00 - 15:00 UTC Shayn Anna Susan Sara Thu 15 Oct 8:00 - 9:00 UTC Paul Laura Vanessa Martyn Thu 15 Oct 14:00 - 15:00 UTC Isaac, Shayn Anna, Kathleen Susan Kirsty Fri 16 Oct 8:00 - 9:00 UTC Paul Amanda Vanessa, Rachael Rakesh Fri 16 Oct 14:00 - 15:00 UTC Isaac, Shayn Anna, Kathleen Susan Who will be online:\nSusan, Vanessa, and Rachael can talk to you about our upcoming events. Kirsty can talk to you about Crossmark. Kathleen can explain Similarity Check. Laura can show you how to use Metadata Manager for Content Registration. Isaac, Shayn, and Paul can help troubleshoot any metadata, DOI, or reporting needs. Sara can talk to you about content registration. Anna will give you a \u0026lsquo;metadata health check\u0026rsquo; including a tour of your Participation Report. Rakesh can talk to you about product design. Sally and Amanda can answer your questions about membership. Martyn can talk to you about Cited-by. Bryan can talk to you about recent updates to our products and services. We are happy to schedule one-on-one virtual meetings as well. Please do drop-in to say \u0026ldquo;Guten Tag\u0026rdquo;. We\u0026rsquo;re looking forward to seeing you online! ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/event-data/", "title": "Event Data", "subtitle":"", "rank": 5, "lastmod": "2020-09-08", "lastmod_ts": 1599523200, "section": "Find a service", "tags": [], "description": "When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it. Before the expansion of the Internet, most discussion about scholarly content stayed within scholarly content, with articles citing each other. With the growth of online platforms for discussion, publication and social media, we have seen discussions extend into new, non-traditional venues.", "content": " When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it. Before the expansion of the Internet, most discussion about scholarly content stayed within scholarly content, with articles citing each other. With the growth of online platforms for discussion, publication and social media, we have seen discussions extend into new, non-traditional venues. Crossref Event Data captures this activity and acts as a hub for the storage and distribution of this data. An event may be a citation in a dataset or patent, a mention in a news article, Wikipedia page or on a blog, or discussion and comment on social media.\nHow Event Data works Event Data monitors a range of sources, chosen for their importance in scholarly discussion. We make events available via an API for users to access and interpret. 
Our aim is to provide context to published works and connect diverse parts of the dialogue around research. Learn more about the sources from which we capture events.\nThe Event Data API provides raw data about events alongside context: how and where each event was collected. Users can process this data to suit their requirements.\nWhat is Event Data for? Event Data can be used for a number of different purposes:\nAuthors can find out where their work has been reused and commented on. Readers can access more context around published research, including links to supporting documents and commentary that aren’t in a journal article. Publishers and funders can assess the impact of published research beyond citations. Service providers can enrich, analyze, interpret and report via their own tools. Data intelligence and analysis organisations can access a broad range of sources with commentary relevant to research articles. Event Data is available via our API - get started with some queries.\nAnyone can contribute to Event Data by mentioning the DOI or URL of a Crossref-registered work in one of the monitored sources. We also welcome third parties who wish to send events or contribute to code that covers new sources. Learn more about contributing to or using Crossref Event Data.\nAgreement and fees for Event Data Event Data is a public API, giving access to raw data, and there are no fees. In the future we will introduce a service-based offering with additional features and benefits. Learn more about the Event Data terms.\nWhat is an event? In the broadest sense, an event is any time someone refers to a research article with a registered DOI anywhere online. Ideally we would capture all events, but there are limitations:\nWe can’t monitor the entire Internet, and instead check sites that are most likely to discuss academic content. There are still venues that could be relevant and that we do not cover yet. Users online refer to academic content in different ways, sometimes using the DOI but more often using the URL or just the article name. We try to decode mentions of DOIs or a publisher website to get a match to an article but it isn’t always possible. This means we may miss mentions of an article even from sources we are tracking. At present we are not able to track events where no link is included and only the title or other part of the metadata is mentioned.\nFor Crossref Event Data, an event consists of three parts:\nA subject: where was the research mentioned? (such as Wikipedia) An object: which research was mentioned? (a Crossref or DataCite DOI) A relationship: how was the research mentioned? (such as cites or discusses)\nWe determine the relationship from the source of the event; it is an indication of how the subject and object are linked, based on broad categories.\nSoftware called agents collect events from various data sources. Most agents are written and operated by Crossref with some code written by our partners. Possible events are passed to the percolator software, which tries to match the event with an object DOI. This process is fully automated.\nWe perform periodic automated checks on the integrity of the data and update event types. Deduplication is also part of the process performed by the percolator.\nTo provide transparency, we keep an evidence record about how we matched the object to the subject.
Learn more about transparency in Event Data, including links to the open source code and data.\nThe following agents currently collect data (agent/data source and event type):\nCrossref metadata: relationships and references to datasets and DOI registration agencies other than Crossref (e.g. DataCite)\nDataCite metadata: links to Crossref registered content\nFaculty Opinions: recommendations of research publications\nHypothes.is: annotations in Hypothes.is\nNewsfeed: discussed in blogs and media\nReddit: discussed on Reddit\nReddit Links: discussed on sites linked to in subreddits\nStack Exchange Network: discussed on StackExchange sites\nWikipedia: references on Wikipedia pages\nWordpress.com: discussed on Wordpress.com sites\nPatent Event Data was historically collected from The Lens. Events from Twitter were collected until February 2023. Note that all Twitter events have been removed from search results in accordance with our contract with Twitter; see the Community Forum for more information.\nWhat Event Data is not By providing Event Data, Crossref provides an open, transparent information source for the scholarly community and beyond. It is important to understand, however, that it may not be suitable for all potential users. Here are some of the limitations:\nIt is not a service that provides metrics, collated reports, or data analysis. Crossref does not build applications or website plugins for Event Data, for example for displaying results on publisher websites. We do, however, welcome third parties who wish to develop such platforms. Event Data collection is fully automated and therefore may contain errors or be incomplete; we cannot provide any guarantees in this regard, and users must assess the quality of the data required for their particular use case. There may also be delays between an event occurring and it appearing in Event Data. Events might be missed due to the limitations of the collection algorithms we use. There is also a small possibility that we link an event to the wrong object. Event Data does not cover every source of academic discussion. In some cases this is because there is no public access to the data; in others it is because we have not had the capacity to build an agent. While we hope the data is useful for many purposes, we encourage users to be responsible and exercise caution when making use of Event Data.\nGetting started with Event Data Learn more about Event Data in our comprehensive documentation.\n", "headings": ["When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it.","How Event Data works ","What is Event Data for? ","Agreement and fees for Event Data ","What is an event? 
","What Event Data is not ","Getting started with Event Data "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/organization-identifier/", "title": "Organization Identifier", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/publishers-are-you-ready-to-ror/", "title": "Publishers, are you ready to ROR?", "subtitle":"", "rank": 1, "lastmod": "2020-08-25", "lastmod_ts": 1598313600, "section": "Blog", "tags": [], "description": "If you manage a publishing system or workflow, you know how crucial—and how challenging!—it is to have clean, consistent, and comprehensive affiliation metadata. Author affiliations, and the ability to link them to publications and other scholarly outputs, are vital for numerous stakeholders across the research landscape. Institutions need to monitor and measure their research output by the articles their researchers have published. Funders need to be able to discover and track the research and researchers they have supported.", "content": "If you manage a publishing system or workflow, you know how crucial—and how challenging!—it is to have clean, consistent, and comprehensive affiliation metadata. Author affiliations, and the ability to link them to publications and other scholarly outputs, are vital for numerous stakeholders across the research landscape. Institutions need to monitor and measure their research output by the articles their researchers have published. Funders need to be able to discover and track the research and researchers they have supported. Academic librarians need to easily find all of the publications associated with their campus. Journals need to know where authors are affiliated so they can determine eligibility for institutionally sponsored publishing agreements.\nUntil recently, an open, unambiguous, and persistent identifier for research organization affiliations has been a missing layer of the scholarly ecosystem. DOIs could identify articles and datasets and other research outputs, and ORCID IDs could identify researchers, but no equivalent solution was available to identify institutions. With the launch of the Research Organization Registry (ROR) in 2019 (which Crossref has helped to develop), the landscape is changing. ROR IDs are an opportunity to make affiliation details easier for publishers to use and easier for those who rely on this data.\nAffiliations are a key piece of Crossref metadata that has been missing, but will soon be supported in the Crossref metadata schema. This means that content registered with Crossref can be associated with a ROR IDs to enable better tracking and discovery of research and other publication outputs by institution.\nWhat is ROR? ROR is the Research Organization Registry––open, noncommercial, community-led infrastructure for research organization identifiers. 
The registry currently includes globally unique persistent identifiers and associated metadata for more than 98,000 research organizations (as of August 2020).\nROR IDs are specifically designed to be implemented in any system that captures institutional affiliations and to enable connections (via persistent identifiers and networked research infrastructure) between research organizations, research outputs, and researchers.\nROR IDs are interoperable with those in other identifier registries, including GRID (which provided the seed data that ROR launched with), Crossref Funder Registry, ISNI, and Wikidata. ROR data is available under a CC0 waiver and can be accessed via a public API and data dump.\nROR is not the first organization identifier to exist. But ROR is distinct because it is completely open, specifically focused on identifying affiliations, and collaboratively developed by, with, and for key stakeholders in scholarly communications. ROR is operated as a joint initiative by Crossref, DataCite, and California Digital Library, and was launched with seed data from GRID in collaboration with Digital Science. These organizations have invested resources into building an open registry of research organization identifiers that can be embedded in scholarly infrastructure to effectively link research to organizations.\nWhy care about ROR IDs in Crossref metadata? Ed Pentz, Crossref’s Executive Director, explains the key role ROR can play in enriching Crossref metadata:\n“Over the years Crossref has expanded the metadata it collects (for example, ORCID IDs and license URLs) based on the changing needs of our members and the scholarly research community. A key type of metadata that is missing from Crossref is affiliations. We’ve had a lot of feedback from members that adding affiliations should be a priority. At Crossref LIVE19 in Amsterdam, ROR was ranked joint first place for Crossref by the 100 plus attendees at the meeting. For the last few years we’ve been diligently working on the initiative and are very happy that ROR is now coming to fruition.”\nCrossref metadata does include some affiliations already. But this data is not comprehensive or consistent, and appears as free-text strings only (even if originally sourced from a list of institutions). A search for UC Berkeley, for instance, returns multiple variants of the university’s name:\nUniversity of California, Berkeley University of California-Berkeley University of California Berkeley UC Berkeley And likely more\u0026hellip; While it isn\u0026rsquo;t too difficult for a human to guess that \u0026ldquo;UC Berkeley,\u0026rdquo; \u0026ldquo;University of California, Berkeley,\u0026rdquo; and \u0026ldquo;University of California at Berkeley\u0026rdquo; are all referring to the same university, a machine interpreting this information wouldn\u0026rsquo;t necessarily make the same connections. If you are trying to easily find all of the publications associated with UC Berkeley, you would need to run and reconcile multiple searches at best, or miss data completely at worst. This is where an affiliation identifier comes in: a single, unambiguous, standardized identifier that will always stay the same (for UC Berkeley, that would be https://ror.org/01an7q238).\nROR IDs for affiliations can transform the usability of Crossref metadata. While it\u0026rsquo;s crucial to have IDs for affiliations, it\u0026rsquo;s equally important that the affiliation data can be easily used. 
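To illustrate what that looks like in practice, here is a rough sketch in Python (using the requests library) that resolves free-text affiliation strings like the variants above to a single identifier, assuming the public ROR API's affiliation-matching endpoint at https://api.ror.org/organizations.

```python
import requests

# Free-text affiliation strings, as they might appear in submitted manuscripts.
variants = [
    "UC Berkeley",
    "University of California, Berkeley",
    "University of California at Berkeley",
]

for text in variants:
    resp = requests.get("https://api.ror.org/organizations",
                        params={"affiliation": text}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if items:
        best = items[0]["organization"]  # highest-scoring candidate match
        print(f"{text!r} -> {best['id']} ({best['name']})")
    else:
        print(f"{text!r} -> no match found")
```

Each variant should come back pointing at the same record (https://ror.org/01an7q238), which is exactly the disambiguation a machine cannot do reliably from free-text strings alone.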
The ROR dataset is CC0, so ROR IDs and associated affiliation data can be freely and openly used and reused without any restrictions.\nWhat does this mean for publishers? As the Crossref schema update is being cleared for takeoff, this is a good time for publishers and publishing service providers to be thinking about adopting ROR.\nROR IDs can be useful in publishing workflows in a variety of ways. They can easily be implemented into manuscript tracking systems to identify the affiliations of submitting authors and co-authors. This can be done via a simple institution lookup that connects to the ROR API. Authors choose their affiliation from a dropdown list populated from ROR; they do not have to provide a ROR ID or even know that a ROR ID is being collected.\nUpon publication, ROR affiliation data can be included when content is registered with Crossref. ROR IDs are also supported in the JATS XML format that many publishers use. Crossref metadata can be searched and crawled, and the Crossref API will make ROR IDs available so affiliation data can be captured by tools and services and fed into downstream reporting and tracking systems.\nGet ready to ROR! ROR is already working with a number of publishers and service providers that are planning to integrate ROR in their systems, map their affiliation data to ROR IDs, and/or include ROR IDs in publication metadata.\nFor example: Rockefeller University Press has already added the collection of ROR IDs to their publication workflow. Upon submission, the author selects an institutional affiliation from a dropdown list of options that comes from ROR. Rockefeller University Press also relies on this affiliation data for billing and licensing purposes to coordinate Gold Open Access publishing agreements.\nIn addition to publishers, libraries and repositories and other stakeholders are building in support for ROR. You can also see the list of active and in-progress ROR integrations here.\nWe know decisions about identifier adoption aren\u0026rsquo;t easy or immediate, so get in touch with ROR if you have questions or want to be more involved in the project. ROR holds regular community meetings and webinars and supports several community working groups for those interested in implementing ROR IDs and working with ROR data. This is a community-driven effort so we want to hear from you!\n", "headings": ["What is ROR?","Why care about ROR IDs in Crossref metadata?","What does this mean for publishers?","Get ready to ROR!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/evolving-our-support-for-text-and-data-mining/", "title": "Evolving our support for text-and-data mining", "subtitle":"", "rank": 1, "lastmod": "2020-08-21", "lastmod_ts": 1597968000, "section": "Blog", "tags": [], "description": "Many researchers want to carry out analysis and extraction of information from large sets of data, such as journal articles and other scholarly content. Methods such as screen-scraping are error-prone, place too much strain on content sites and may be unrepeatable or break if site layouts change. Providing researchers with automated access to the full-text content via DOIs and Crossref metadata reduces these problems, allowing for easy deduplication and reproducibility. 
Supporting text and data mining echoes our mission to make research outputs easy to find, cite, link, assess, and reuse.\n", "content": "Many researchers want to carry out analysis and extraction of information from large sets of data, such as journal articles and other scholarly content. Methods such as screen-scraping are error-prone, place too much strain on content sites and may be unrepeatable or break if site layouts change. Providing researchers with automated access to the full-text content via DOIs and Crossref metadata reduces these problems, allowing for easy deduplication and reproducibility. Supporting text and data mining echoes our mission to make research outputs easy to find, cite, link, assess, and reuse.\nIn 2013 Crossref embarked on a project to better support Crossref members and researchers with Text and Data Mining requests and access. There were two main parts to the project:\nTo collect and make available full-text links and publisher TDM license links in the metadata.\nTo provide a service (TDM click-through service) for Crossref members to post their additional TDM terms and conditions and for researchers to access, review and accept these terms.\nThe TDM click-through was launched in May 2014.\nTo date, 37.5 million works registered with Crossref have both full-text links and TDM license information. We continue to encourage all members to include full-text links and license information in the metadata they register to assist researchers with TDM. You can see how each member is doing via its Participation Report (e.g. Wiley\u0026rsquo;s).\nMembers are also making subscription content available for text mining (temporarily or otherwise) for specific purposes, such as to help the research community with its response to COVID-19. Back in April we highlighted how this can be achieved by including:\nA \u0026ldquo;free to read\u0026rdquo; element in the access indicators section of publisher metadata indicating that the content is being made available free-of-charge (gratis)\nAn assertion element indicating that the content being made available is available free-of-charge.\nTo access Crossref\u0026rsquo;s click-through tool for text and data mining, users could log in via their ORCID iD. They could then review TDM license agreements posted by Crossref members and accept, reject or postpone their decisions until later. Having agreed to a publisher\u0026rsquo;s terms and conditions this action was logged against the user\u0026rsquo;s API token which they could use when requesting full-text from the publisher.\nSince the pilot in 2014, only 2 publishers have continued with the tool and fewer than 300 API tokens have been issued.\nPublishers have since developed their own mechanisms for managing TDM requests. The introduction of UK (2014) / EU (2019) copyright exceptions for TDM has significantly reduced the number of requests and at the same time, more and more content is published under an open access license.\nGiven the low take-up of the click-through by both publishers and researchers, its goals are no longer being met. Therefore we will retire the TDM click-through in December 2020. 
Until that date, it will still operate for the two publishers and various researchers who use it while they finish implementing their alternative plans.\nCrossref will continue to collect member-supplied TDM licensing information in metadata for individual works, and researchers can continue to find this via the Crossref APIs.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/text-and-data-mining/", "title": "Text and Data Mining", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/kirsty-meddings/", "title": "Kirsty Meddings", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/similarity-check-news-introducing-the-next-generation-ithenticate./", "title": "Similarity Check news: introducing the next generation iThenticate.", "subtitle":"", "rank": 1, "lastmod": "2020-07-28", "lastmod_ts": 1595894400, "section": "Blog", "tags": [], "description": "Crossref’s Similarity Check service is used by our members to detect text overlap with previously published work that may indicate plagiarism of scholarly or professional works. Manuscripts can be checked against millions of publications from other participating Crossref members and general web content using the iThenticate text comparison software from Turnitin.\n", "content": "Crossref’s Similarity Check service is used by our members to detect text overlap with previously published work that may indicate plagiarism of scholarly or professional works. Manuscripts can be checked against millions of publications from other participating Crossref members and general web content using the iThenticate text comparison software from Turnitin.\nThe 2000 members who already make use of Similarity Check upload almost 2,000,000 documents each month to look for matching text in other publications.\nWe have some great news for those 2000 members –– a completely new version of iThenticate is on its way, and will start to roll out to users in the coming months.\nNew functionality has been developed based on your feedback over the past few years and includes:\nAn improved Document Viewer that makes PDFs searchable and accessible, with responsive design for ease of use on different screen sizes. All of the functionality of the Viewer and the Text-only reports in the previous version have been streamlined into just two views: Sources Overview and All Sources. Improved exclusion options to make refining matches even easier. Smarter citation detection now identifies probable citations both inline and in reference sections. A new “Content Portal” where you can see what percentage of your own content has been successfully indexed for the iThenticate comparison database, and download reports of indexing errors that need to be fixed. A new API for integration with manuscript submission systems allows display of the largest matching word count and the top 5 source matches alongside the Similarity Score. 
The maximum number of pages and file size per document has been doubled to 800 pages/200 MB. The new document viewer in iThenticate v2.0 Improved reference exclusion Crossref members can use Similarity Check directly by logging in, or via an integration with a submission/peer review system. We are working with many system providers to bring v2.0 to you as soon as possible. In the meantime, we are looking for members to help us test the new system directly in the iThenticate user interface. If you are interested and can spare a few hours some time in the next month please let me know.\nAnd if your organization is not yet using Similarity Check to assess the originality of the manuscripts you receive do take a look at the many benefits the service has to offer.\n", "headings": ["The new document viewer in iThenticate v2.0","Improved reference exclusion"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/meet-the-new-crossref-executive-director/", "title": "Meet the new Crossref Executive Director", "subtitle":"", "rank": 1, "lastmod": "2020-07-23", "lastmod_ts": 1595462400, "section": "Blog", "tags": [], "description": "It’s me! Back in January I wrote, The one constant in Crossref’s 20 years has been change. This continues to be true, and the latest change is that I’m happy to say that I will be staying on as Executive Director of Crossref. At the recent Crossref board meeting, I rescinded my resignation and the board happily accepted this.\n", "content": "It’s me! Back in January I wrote, The one constant in Crossref’s 20 years has been change. This continues to be true, and the latest change is that I’m happy to say that I will be staying on as Executive Director of Crossref. At the recent Crossref board meeting, I rescinded my resignation and the board happily accepted this.\nWhat happened? Well, a lot has changed since I announced that I was leaving back in February. The pandemic has upended “business as usual” and everyone is rethinking pretty much everything. It’s clear that as a result of the crisis, there will be greater economic pressure on our community. These are difficult times and they are going to continue for the foreseeable future.\nThe people at Crossref are amazing and I’ve been impressed and inspired by everyone’s resilience and creativity in responding to these unusual challenges. Crossref has a very special organizational culture and I want to remain a part of it and continue to develop it.\nI’ve also been inspired by the board. In particular, at its July meeting they passed a progressive motion based on a proposal from the leadership team:\nRESOLVED: Crossref should proactively lead an effort to explore, with other infrastructure organizations and initiatives, how we can improve the scholarly research ecosystem. Crossref is committed to the collaborative development of open scholarly infrastructure for the benefit of our members and the wider research community.\nThis is the result of a process that started back in 2019. In the A turning point is a time for reflection blog post, we took a step back as we approached Crossref’s 20th anniversary. We conducted research into the perceived value of Crossref, reflected on what we had achieved, and what the future holds for Crossref. 
At our annual meeting, \u0026ldquo;the strategy one\u0026rdquo; and in our annual report fact file, we reminded people of the organization’s original founding purpose:\nTo promote the development and cooperative use of new and innovative technologies to speed and facilitate scientific and other scholarly research.\nFollowing on from 2019, as the pandemic hit, we held virtual strategic sessions with the board in March, May and June. These culminated in the motion above, which allows Crossref to fully embrace this simple, but ambitious, vision. This was a game changer for me, and I realized there was nothing else I wanted to do or that better suited my skills and experience than to continue to lead Crossref and work with the community through the next phase of transformation.\nThis is not the time for “business as usual”. We live in an interconnected, interdependent world and open infrastructure organizations have to collaborate more deeply and look at doing things differently in order to improve the scholarly research ecosystem. So - more to come!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/new-faces-at-crossref/", "title": "New faces at Crossref", "subtitle":"", "rank": 1, "lastmod": "2020-06-30", "lastmod_ts": 1593475200, "section": "Blog", "tags": [], "description": "Please help us welcome new faces at Crossref! Martyn, Sara, Laura, and Mark joined us very recently and we are happy they\u0026rsquo;re with us. Both Martyn and Sara have joined the Product team and this has given us the chance to reorganize the team into the following groups: content registration, scholarly stewardship, scholarly impact, metadata retrieval, and UX/UI leadership. Laura joined the Finance and Operations team to help make the billing process simple for our members. Mark joins the Technology team and one of his projects will be improving the Event Data service.\nIt is exciting to already see the impact of your contributions and look forward to what’s to come!\n", "content": "Please help us welcome new faces at Crossref! Martyn, Sara, Laura, and Mark joined us very recently and we are happy they\u0026rsquo;re with us. Both Martyn and Sara have joined the Product team and this has given us the chance to reorganize the team into the following groups: content registration, scholarly stewardship, scholarly impact, metadata retrieval, and UX/UI leadership. Laura joined the Finance and Operations team to help make the billing process simple for our members. Mark joins the Technology team and one of his projects will be improving the Event Data service.\nIt is exciting to already see the impact of your contributions and look forward to what’s to come!\nAnd now a few words from each of them. Martyn Rittman I am a former university researcher who worked on interdisciplinary projects around life sciences and analytical chemistry, with positions in the UK and Germany. I spent seven years at open access publisher MDPI doing everything from running journals to handling production, developing services for authors and publishers, and supporting preprints. I’m very excited to be joining Crossref as a Product Manager and developing some great products and services that focus on how Crossref-indexed research creates impact. This includes supporting the use of preprint metadata. I’m also looking forward to getting my teeth into event data, which looks at how those in the research community and beyond reference, use, and reuse research. 
If you are interested in making use of event data or have examples of event data applications, I would like to hear from you. Sara Bowman I’m thrilled to have joined Crossref at this exciting time in the organization. As a member of the Product team, my primary area of focus is content registration, building, and improving tools for our members to deposit rich metadata. I’m particularly interested in how we can create a unified user experience for content registration while supporting the needs of our diverse membership. A scientist by training, I’ve spent the last 6 years working on open source technologies to support scholarly communication, most recently in the role of Product Manager at the Center for Open Science. I’m passionate about open tools and using data to drive product development, building innovative solutions to improve research and scholarly communication.\nLaura Cuniff I joined Crossref two months ago as a part-time Billing Support Specialist on the Finance and Operations team. With the help of my supportive and knowledgeable colleagues, I took on learning the various systems. My goal is to make the billing process as simple as possible for our members by researching, retrieving, and relaying billing information. This allows our members to focus on the reason for their engagement with Crossref. With several part-time jobs cobbled together at different times of the day, I have the flexibility to volunteer with a few organizations in my hometown of Ipswich, MA. If you find yourself at the Ipswich Visitor Center, I may greet you, recommend the most beautiful spots in town, give you a tour of the Ipswich Museum, or send you off with a wonderful Ipswich Humane Group cat or dog! I’m very excited to be here!\nMark Woodhall I am an open-source enthusiast who has worked in a range of technology roles at a variety of companies as a polyglot programmer with experience in Clojure(Script), Java, C#, and JavaScript. It’s really exciting to be working at Crossref as a Senior Software Developer on the Technology team and I’m proud to be part of a team with open source at its heart. I’m really looking forward to getting more involved with event data and building a scalable solution to support its future uses.\nWelcome to the Crossref community Martyn, Laura, Sara, and Mark.\n", "headings": ["And now a few words from each of them.","Martyn Rittman","Sara Bowman","Laura Cuniff","Mark Woodhall"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/community-outreach-in-2020/", "title": "Community Outreach in 2020", "subtitle":"", "rank": 1, "lastmod": "2020-06-29", "lastmod_ts": 1593388800, "section": "Blog", "tags": [], "description": "2020 hasn’t been quite what any of us had imagined. The pandemic has meant big adjustments in terms of working; challenges for parents balancing childcare and professional lives; anxieties and tensions we never had before; the strain of potentially being away from co-workers, friends, and family for a prolonged period of time. Many have suffered job losses and around the world, many have sadly lost their lives to the virus.\n", "content": "2020 hasn’t been quite what any of us had imagined. The pandemic has meant big adjustments in terms of working; challenges for parents balancing childcare and professional lives; anxieties and tensions we never had before; the strain of potentially being away from co-workers, friends, and family for a prolonged period of time. 
Many have suffered job losses and around the world, many have sadly lost their lives to the virus.\nI’ve been very fortunate that my family and friends remain in good health and very grateful to work for a supportive and caring organization such as Crossref. I don’t usually work from home every day, so adjusting to the ‘new normal’ these last few months has been difficult at times. I certainly miss seeing my colleagues in the Oxford office day-to-day, and now have a new appreciation for the challenges our remote working members of staff face, particularly when it comes to feeling quite isolated at times. I’ve also learnt about the importance of good communication and building in greater flexibility to projects, especially when you are not able to see people face-to-face.\nMy role as Outreach Manager is all about people; it often involves organising and attending industry events as well as running our own educational days, which we call our Crossref LIVE events. The global health crisis brought the majority of international travel to an abrupt halt, something the environment may thank us for, but that also requires a dramatic reimagining of how we can effectively and empathetically engage with our members and the wider community.\nAs our planned in-person events have been postponed, for now, we converted our LIVE events into an online format, which we have so far run in Arabic, Spanish, Korean and Brazilian Portuguese with help from our Ambassadors and technical support team. We have had better attendance and engagement than we ever dreamed, with lots of thoughtful questions and positive feedback. While an online format has its limitations it also brings new opportunities, particularly by enabling us to reach many members who would not be able to attend a physical event. We have more in the works for the rest of the year, so keep a lookout on our webinar and events pages.\nWe have all had to adapt to new ways of living and working this year, but vital research continues to be done and new content continues to be published. We embrace new ways of engaging with our international membership so we can continue to support them in their roles and in working with our systems, despite the uncertain circumstances we find ourselves in.\nLessons learned: Online events need to be much shorter than physical ones. Zoom fatigue is real, no one can stay focused for long periods of time at the screen.\nFlexibility is key, running events in multiple languages and time-zones make them more accessible for a geographically diverse audience, but also ensuring recordings and other materials are readily available means people can engage with the content in their own time. And they do. Our Spanish LIVE on May 19 saw 335 people attend, and a further 304 (so far) watch the recording in their own time.\nDon’t forget to build in time for breaks.\nAlthough it’s impossible to replicate the natural human interaction that occurs at a physical event, an online format can still bring hearts as well as minds together. Break-out rooms, polls, and clever use of chat functionality all help to build engagement and turn a passive audience into active participants.\nPeople love an online quiz.\nPartner with others –– an interesting guest speaker can bring a whole new dynamic to your planned content.\nTake the opportunity to be a little more experimental. 
We can’t do business as usual right now, so embrace new ideas and see what works!\nHoping you all stay safe and healthy, and that we can meet again in person in 2021.\n", "headings": ["Lessons learned:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/calling-all-prospective-board-members/", "title": "Calling all prospective board members", "subtitle":"", "rank": 1, "lastmod": "2020-05-21", "lastmod_ts": 1590019200, "section": "Blog", "tags": [], "description": "English version –– Información en español –– Version Française\nThe Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in 2021. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September. Expressions of interest will be due Friday, June 19, 2020.\nThe role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:", "content": "English version –– Información en español –– Version Française\nThe Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in 2021. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September. Expressions of interest will be due Friday, June 19, 2020.\nThe role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization;\nProviding financial oversight; and\nApproving new policies and services.\nThe board is representative of our membership base and guides the staff leadership team on trends affecting scholarly communications. The board sets strategic directions for the organization while also providing oversight into policy changes and implementation. Board members have a fiduciary responsibility to ensure sound operations. Board members do this by attending board meetings, as well as joining more specific board committees.\nAs an example, in 2019 the board decided to remove fees for the Crossmark service. This involved a strategic review of the service and its alignment with the mission by the Membership \u0026amp; Fees committee; followed by a review of the financial implications of removing the fee; and ultimately, a vote by the full board to remove the fee starting in 2020.\nCrossref\u0026rsquo;s services provide central infrastructure to scholarly communications. Crossref\u0026rsquo;s board helps shape the future of our services, and by extension, impacts the broader scholarly ecosystem. We are looking for board members to contribute their experience and perspective. I\u0026rsquo;m interested but busy! What is expected of board members? Board members attend three meetings each year that typically take place in March, July, and November. Meetings have taken place in a variety of international locations and travel support is provided when needed.\nStarting in 2020, following travel restrictions as a result of COVID-19, the board introduced a plan to convene at least one of the board meetings virtually each year and all committee meetings take place virtually. 
Most board members sit on at least one Crossref committee. Care is taken to accommodate the wide range of timezones in which our board members live.\nWhile the expressions of interest are specific to an individual, the seat that is elected to the board belongs to the member organization. The primary board member also names an alternate who may attend meetings in the event that the primary board member is unable.\nBoard members are expected to be comfortable assuming the responsibilities listed above and to prepare and participate in board meeting discussions.\nAbout the election The board is elected through the \u0026ldquo;one member, one vote\u0026rdquo; policy wherein every member of Crossref has a single vote to elect representatives to the Crossref board. Board terms are for three years, and this year there are six seats open for election.\nThe board maintains a balance of seats, with eight seats for smaller publishers and eight seats for larger publishers, in an effort to ensure that the diversity of experiences and perspectives of the publishing community is represented in decisions made at Crossref. This year we will elect two of the larger publisher seats and four of the smaller publisher seats.\nThe election takes place online and voting will open in September. Election results will be shared at the November board meeting and new members will commence their term in 2021.\nAbout the nominating committee The nominating committee will review the expressions of interest and select a slate of candidates for election. The slate put forward will exceed the total number of open seats. The committee considers the statements of interest, organizational size, geography, gender, and experience.\n2020 Nominating Committee:\nMelissa Harrison, eLife, Cambridge, UK, committee chair\nScott Delman, ACM, New York, NY\nSusan Murray, AJOL, Grahamstown, South Africa\nTanja Niemann, Erudit, Montreal, Canada\nArley Soto, Biteca, Bogotá, Colombia\nHow to submit an expression of interest Please click here to submit your expression of interest or contact me with any questions at lofiesh [at] crossref.org.\nVersión en español\nEl Comité de Nominación de Crossref está invitando a expresiones de interés a unirse a la Junta Directiva de Crossref para el período que comienza en 2021. El comité recopilará las respuestas de los interesados ​​y creará la lista de candidatos que nuestra membresía votará en una elección en septiembre. Las expresiones de interés vencen el viernes 19 de junio de 2020.\nLa función de la junta directiva de Crossref es proporcionar supervisión estratégica y financiera de la organización, así como orientación para el Director Ejecutivo y el equipo de liderazgo del personal, con responsabilidades importantes como:\nEstablecer la dirección estratégica para la organización;\nProporcionar supervisión financiera; y\nAprobar nuevas políticas y servicios.\nLa junta es representativa de nuestra base de miembros y guía al equipo de liderazgo del personal sobre las tendencias que afectan las comunicaciones académicas. La junta establece direcciones estratégicas para la organización mientras supervisa los cambios e implementación de políticas. Los miembros de la junta tienen la responsabilidad fiduciaria de garantizar operaciones sólidas. Los miembros de la junta hacen esto asistiendo a las reuniones de la junta, además de unirse a comités de la junta más específicos.\nComo ejemplo, en 2019 la junta decidió eliminar las tarifas de servicio de Crossmark. 
Esto implicó una revisión estratégica del servicio y su alineación con la misión del comité de Membresía y Tarifas; seguido de una revisión de las implicaciones financieras de eliminar la tarifa; y, en última instancia, un voto de la junta completa para retirar la tarifa a partir de 2020.\nLos servicios Crossref proporcionan infraestructura central para las comunicaciones académicas. La junta directiva de Crossref ayuda a dar forma al futuro de nuestros servicios y, por extensión, impacta el ecosistema académico más amplio. Estamos buscando miembros de la junta para contribuir con su experiencia y perspectiva.\n¡Estoy interesado pero ocupado! ¿Qué se espera de los miembros de la junta? Los miembros de la junta asisten a tres reuniones cada año que generalmente tienen lugar en marzo, julio y noviembre. Las reuniones se han llevado en una variedad de ubicaciones internacionales y se brinda apoyo para viajes cuando es necesario.\nA partir de 2020, después de las restricciones de viaje como resultado de COVID-19, la junta introdujo un plan para convocar al menos una de las reuniones de la junta virtualmente todos los años, y todas las reuniones del comité tienen lugar virtualmente. La mayoría de los miembros de la junta formen parte del menos un comité Crossref. Se tiene cuidado de acomodar la amplia gama de zonas horarias en las que viven los miembros de nuestra junta.\nAunque las expresiones de interés son específicas de un individuo, el asiento elegido para la junta pertenece a la organización miembro. El miembro primario de la junta también nombra a un suplente que puede asistir a las reuniones en caso de que el miembro de la junta principal no pueda.\nSe espera que los miembros de la junta se sientan cómodos asumiendo las responsabilidades anteriores y que se preparen y participen en las discusiones de la reunión de la junta.\nLas reuniones de la junta se llevarán a cabo en inglés, por lo que los posibles miembros de la junta deben sentirse cómodos leyendo material en inglés y en inglés conversacional.\nSobre las elecciones La junta se elige mediante la política de \u0026ldquo;un miembro, un voto\u0026rdquo; en la que cada miembro de Crossref tiene un voto para elegir representantes en la junta de Crossref. Los términos de la junta son de tres años, y este año hay seis asientos abiertos para la elección\nLa junta mantiene un equilibrio de asientos, con ocho asientos para editoriales más pequeñas y ocho asientos para editoriales más grandes, en un esfuerzo por garantizar que la diversidad de experiencias y perspectivas de la comunidad editorial esté representada en las decisiones tomadas en Crossref. Este año elegiremos dos de los asientos de editor más grandes y cuatro de los asientos de editor más pequeños.\nLa elección se realiza en línea y la votación se abrirá en septiembre. Los resultados de las elecciones se compartirán en la reunión de la junta de noviembre y los nuevos miembros comenzarán su mandato en 2021.\nSobre el comité de nominaciones El comité de nominaciones revisará las expresiones de interés y seleccionará una lista de candidatos para la elección. Esta lista presentada excederá el número total de asientos disponibles. 
El comité considera declaraciones de interés, tamaño organizacional, geografía, género y experiencia.\nComité de nominaciones 2020:\nMelissa Harrison, eLife, Cambridge, UK, committee chair\nScott Delman, ACM, New York, NY\nSusan Murray, AJOL, Grahamstown, South Africa\nTanja Niemann, Erudit, Montreal, Canada\nArley Soto, Biteca, Bogotá, Colombia\nCómo presentar una expresión de interés Por favor haga clic aquí para enviar su expresión de interés o contáctame si tiene alguna pregunta lofiesh [at] crossref.org.\nVersion Française\nAppel à tous les membres potentiels du conseil d\u0026rsquo;administration Le comité de nomination de Crossref invite les personnes qui seraient intéressées à se porter candidates pour l\u0026rsquo;élection au conseil d\u0026rsquo;administration de Crossref, pour le mandat commençant en 2021. Le comité de nomination rassemblera les réponses des personnes candidates et élaborera une liste des candidats, pour lesquels nos membres pourront voter lors des élections au conseil d\u0026rsquo;administration, en septembre. Les candidatures doivent être déposées au plus tard le vendredi 19 juin 2020.\nLe rôle du conseil d\u0026rsquo;administration de Crossref est d\u0026rsquo;opérer une supervision stratégique et financière de l\u0026rsquo;organisation, et de conseiller le directeur exécutif ainsi que l\u0026rsquo;équipe de direction du personnel. Les principales responsabilités du conseil d’administration sont les suivantes :\nFixer l\u0026rsquo;orientation stratégique de l\u0026rsquo;organisation Assurer la surveillance financière Approuver de nouvelles politiques et de nouveaux services Le conseil d\u0026rsquo;administration est représentatif de nos adhérents et guide l\u0026rsquo;équipe de direction du personnel en ce qui concerne les tendances affectant les communications savantes. Le conseil d\u0026rsquo;administration établit des orientations stratégiques pour l\u0026rsquo;organisation, tout en assurant le contrôle des changements et de la mise en œuvre des politiques. Les membres du conseil ont la responsabilité fiduciaire d\u0026rsquo;assurer son bon fonctionnement. Les membres du conseil d\u0026rsquo;administration s’acquittent de cette responsabilité en assistant aux réunions du conseil d\u0026rsquo;administration et en participant à des comités, plus spécifiques, du conseil d\u0026rsquo;administration.\nA titre d’exemple, en 2019, le conseil d\u0026rsquo;administration a décidé de supprimer les frais liés au service Crossmark. Ceci a impliqué un examen stratégique du service et de son alignement avec la mission de Crossref, par le comité des adhésions et frais, puis un examen des implications financières de la suppression des frais, et, finalement, un vote par l\u0026rsquo;ensemble du conseil d\u0026rsquo;administration pour supprimer les frais à partir de 2020.\nLes services de Crossref fournissent une infrastructure centralisée pour les communications savantes. Le conseil d\u0026rsquo;administration de Crossref aide à façonner l\u0026rsquo;avenir de nos services et, par extension, a un impact sur l\u0026rsquo;écosystème universitaire plus large. Les futurs membres du conseil d\u0026rsquo;administration sont recherchés particulièrement pour leur expérience et leur point de vue.\nJe suis intéressé mais très occupé! Qu\u0026rsquo;attend-on des administrateurs? Les membres du conseil d\u0026rsquo;administration assistent à trois réunions par an qui ont généralement lieu en mars, juillet et novembre. 
Les réunions se déroulent dans des lieux divers, à l\u0026rsquo;échelle internationale, et une assistance financière est octroyée, en cas de besoin, pour le voyage.\nÀ partir de 2020, à la suite des restrictions de voyage causées par la COVID-19, le conseil a présenté un plan pour convoquer au moins une des réunions du conseil en téléconférence chaque année, et toutes les réunions des comités auront lieu en téléconférence. La plupart des membres du conseil d\u0026rsquo;administration siègent à au moins un comité de Crossref. Nous souhaitons préciser que nous prenons soin de prendre en compte le large éventail de fuseaux horaires dans lesquels vivent les membres de notre conseil d\u0026rsquo;administration.\nBien que les manifestations d\u0026rsquo;intérêt émanent d’une personne, le siège pourvu au conseil appartient à l\u0026rsquo;organisation membre dans son ensemble. Le membre titulaire du conseil d\u0026rsquo;administration nomme également un suppléant, qui pourra assister aux réunions en cas d\u0026rsquo;empêchement du membre titulaire du siège au conseil d\u0026rsquo;administration.\nIl est attendu que les membres du conseil d’administration puissent dédier aux responsabilités présentées ci-dessus le temps qui leur est raisonnablement dû, ainsi qu\u0026rsquo;à la préparation et à la participation aux discussions des réunions du conseil.\nÀ propos de l\u0026rsquo;élection Le conseil d\u0026rsquo;administration est élu selon une politique de «un membre, une voix» dans laquelle chaque membre de Crossref dispose d\u0026rsquo;une seule voix pour élire les représentants au conseil d\u0026rsquo;administration de Crossref. Le mandat du conseil d\u0026rsquo;administration est de trois ans et, cette année, six sièges sont à pourvoir lors de des élections de septembre prochain.\nLe conseil d\u0026rsquo;administration maintient un équilibre des sièges, avec huit sièges pour les petits éditeurs et huit sièges pour les grands éditeurs, afin de garantir que la diversité des expériences et des perspectives de la communauté de l\u0026rsquo;édition soit représentées dans les décisions prises à Crossref. Cette année, sont à pourvoir deux sièges de grands éditeurs et quatre sièges de petits éditeurs.\nLe vote aura lieu en ligne et s\u0026rsquo;ouvrira en septembre. Les résultats de ce scrutin seront communiqués lors de la réunion du conseil d\u0026rsquo;administration de novembre et les nouveaux membres commenceront leur mandat en 2021.\nÀ propos du comité de nomination Le comité des candidatures examinera les candidatures et sélectionnera une liste de candidats aux élections. Le nombre de candidats proposés dépassera le nombre total de sièges à pourvoir. Le comité prend en compte les déclarations d\u0026rsquo;intérêt, la taille de l\u0026rsquo;organisation, la géographie, le sexe et l\u0026rsquo;expérience des personnes pour sa sélection.\nComment exprimer une manifestation d\u0026rsquo;intérêt Veuillez cliquer ici pour envoyer votre candidature ou contactez-moi pour toute question à lofiesh [at] crossref.org.\n", "headings": ["I\u0026rsquo;m interested but busy! What is expected of board members?","About the election","About the nominating committee","How to submit an expression of interest","¡Estoy interesado pero ocupado! ¿Qué se espera de los miembros de la junta?","Sobre las elecciones","Sobre el comité de nominaciones","Cómo presentar una expresión de interés","Appel à tous les membres potentiels du conseil d\u0026rsquo;administration","Je suis intéressé mais très occupé! 
Qu\u0026rsquo;attend-on des administrateurs?","À propos de l\u0026rsquo;élection","À propos du comité de nomination","Comment exprimer une manifestation d\u0026rsquo;intérêt"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/come-for-a-swim-in-our-new-pool-of-education-materials/", "title": "Come for a swim in our new pool of Education materials", "subtitle":"", "rank": 1, "lastmod": "2020-04-29", "lastmod_ts": 1588118400, "section": "Blog", "tags": [], "description": "After 20 years in operation, and as our system matures from experimental to foundational infrastructure, it’s time to review our documentation.\nHaving a solid core of education materials about the why and the how of Crossref is essential in making participation possible, easy, and equitable.\nAs our system has evolved, our membership has grown and diversified, and so have our tools - both for depositing metadata with Crossref, and for retrieving and making use of it.", "content": "After 20 years in operation, and as our system matures from experimental to foundational infrastructure, it’s time to review our documentation.\nHaving a solid core of education materials about the why and the how of Crossref is essential in making participation possible, easy, and equitable.\nAs our system has evolved, our membership has grown and diversified, and so have our tools - both for depositing metadata with Crossref, and for retrieving and making use of it.\nOur new documentation gives the full picture, with each chapter explaining an aspect of Crossref and why it matters, followed by instructions on how to participate. As far as possible, these instructions are given for each of our deposit and retrieval methods.\nThe revised documentation has been edited for use of simple English, and consistent terminology. Specialist vocabulary is explained as it is introduced. Understanding what’s involved across the full range of Crossref services can often seem complicated. This makes the documentation easier for readers, and provides a good basis for human and machine translations.\nThe chapters and sections are modular, so you can approach and combine them in different ways according to your existing knowledge and what you wish to learn. This Choose Your Own Adventure style means that sections don\u0026rsquo;t overlap, avoiding problems of repetition and versioning, and helping us to keep the information current.\nThe revised documentation includes several new topics, including: The importance of metadata, explaining why you might register metadata for different purposes (discoverability, research integrity, reproducibility, and reporting and assessment)\nPersistent identifiers (PIDs), explaining the structure of a DOI, and how you might use DOIs at different levels\nChoosing which way to register your content, including suggested DOI registration workflow and suffix generator to make life easier\nIntroduction to types of metadata, including descriptive (bibliographic), administrative, and structural Version control, corrections, and retractions, including publication stages and DOIs\nMetadata stewardship, including maintaining your metadata, reports, understanding your member obligations, and maintaining your Crossref membership.\nThis new documentation is part of our efforts to make Crossref participation possible, easy, and rewarding for our members large and small, all over the world. It provides a concrete basis on which to build further education and outreach projects in the future. 
New members will start to see our paced member onboarding program, introducing them to parts of the documentation as and when it\u0026rsquo;s useful to them. And like the rest of the Crossref website, it\u0026rsquo;s all licensed for reuse under CC-BY.\nI would like to say a big thank you to the members of the Education Task Force, who helped guide the development of the new documentation, representing a diverse range of Crossref members large and small from around the world:\nAnjum Sherasiya - India, Editor-in-Chief of Veterinary World, Crossref Ambassador\nBudi Setiawan - Indonesia, Poltekkes Kemenkes Yogyakarta\nCaroline Breul - USA, BioOne\nIsabel Recavarren - Peru, Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica (CONCYTEC), Crossref Ambassador\nMike Nason - Canada, Public Knowledge Project (PKP) and University of New Brunswick\nNadine van der Merwe - South Africa, Academy of Science of South Africa (ASSAf)\nRoberto Camargo - Brazil, Associação Brasileira de Editores Científicos (ABEC)\nSioux Cumming - UK, INASP\nTaeil Kim - South Korea, Korean Association of Medical Journal Editors (KAMJE)\nand from Crossref: Amanda, Esha, Geoffrey, Ginny, Isaac, Kirsty, Patricia, and Susan. Please explore the new documentation, give us your feedback using the yellow \u0026ldquo;Docs feedback\u0026rdquo; button at the bottom of each page, and share this update to spread the word!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossing-the-rubicon-the-case-for-making-chapters-visible/", "title": "Crossing the Rubicon - The case for making chapters visible", "subtitle":"", "rank": 1, "lastmod": "2020-04-29", "lastmod_ts": 1588118400, "section": "Blog", "tags": [], "description": "To help better support the discovery, sale and analysis of books, Jennifer Kemp from Crossref and Mike Taylor from Digital Science present seven reasons why publishers should collect chapter-level metadata.\nBook publishers should have been in the best possible position to take advantage of the movement of scholarly publishing to the internet. After all, they have behind them an extraordinary legacy of creating and distributing data about books: the metadata that supports discovery, sales and analysis.", "content": "To help better support the discovery, sale and analysis of books, Jennifer Kemp from Crossref and Mike Taylor from Digital Science present seven reasons why publishers should collect chapter-level metadata.\nBook publishers should have been in the best possible position to take advantage of the movement of scholarly publishing to the internet. After all, they have behind them an extraordinary legacy of creating and distributing data about books: the metadata that supports discovery, sales and analysis.\nLibrarianship and the management of book catalogs at scale took off in the nineteenth century. The Dewey Decimal Classification, the various initiatives of the Library of Congress and the British Library followed. Innovations from the 1960s gave us MARC records and ISBNs. The late 90s produced ONIX, which gave the book industry a tremendous start in migrating online. However, progress in the decades after appears to have been less dramatic. Some might even argue that this tremendous legacy and wealth of metadata experience has acted as a weight, and has slowed progress.
Nowhere is this lack of progress clearer than in the discovery and analysis of book chapters: approximately one-quarter of books published per year have chapter-level metadata, and about two-thirds of books don\u0026rsquo;t have a persistent and open identifier, ratios that have not significantly changed over the last ten years.\nOnly one-quarter of scholarly books make chapter level metadata available\nThe proportion of edited books and monographs with chapter-level data is approximately one-quarter of all books published in the last ten years. Calculating this figure is necessarily approximate, using numbers published in Grimme et al (2019), and based on data and observed trends in both Dimensions and Crossref.\nSo why the lack of progress? For many publishers and their vendor partners, with systems geared up to the efficient delivery of title-level information, the case for moving towards chapter-level metadata can seem daunting (and potentially expensive!).\nMetadata is necessarily detailed and it\u0026rsquo;s not the kind of thing most people will dabble in. Practitioners, as in other technical fields, have expertise that others may find difficult to leverage if they don\u0026rsquo;t know what questions to ask. Organizations often find themselves entrenched in outdated approaches to metadata. Crossref and Metadata 2020 are collaborating to produce arguments for why publishers should move from book-level to chapter-level metadata. They\u0026rsquo;ve been working with representatives from the scholarly community, including both small and large presses, not-for-profits and university presses. Here we present 7 reasons why publishers should collect chapter-level metadata: 1. Increased discoverability\nIncreasingly, we\u0026rsquo;re seeing students and researchers move away from traditional book catalogs and onto more general-purpose tools that are often optimized for journal content, and which may - inadvertently - exclude books and chapters from search results. Making chapter level data and DOIs available places book content into these new channels at no additional cost, and starts to reduce the dependency on specialist vendors. Discovery is simplified, requiring less familiarity or expertise to find relevant book content. 2. Increased usage\nExposing the contents of books at a more granular level drives more users towards the book content, increasing usage numbers and (depending on platform and business model) revenue.\n3. Matching author expectations\nNew generations of authors expect their content to be easily discoverable in the platforms they use. Without chapter level data, this content won\u0026rsquo;t easily be found in Google Scholar, Mendeley or ResearchGate. For younger researchers, and for those in certain disciplines or using resources well-suited to it, if the chapter metadata - which in many cases requires either an introductory paragraph or an abstract - is missing, the book may as well not exist.\n4. Author exposure\nAbout half of scholarly book publishing is thought to be in the form of collected works: books where two or three editors get credit at the top level, but dozens of authors contribute to the chapters. Without chapter level metadata, these authors \u0026ndash; the book authors of tomorrow \u0026ndash; get no credit for their efforts.\n5. Usage and citations reporting\nHaving chapters readily available in the modern platforms means that they start to accumulate evidence of sharing and citations from the moment of being published.
Where chapter content is available on its own, the lack of associated metadata inhibits this evidence. After all, the DOI is a citation identifier. Evidence of impact is now critical for research evaluation, funding, tenure and promotion, and without this data, an author\u0026rsquo;s chapter may as well remain unread.\n6. Supporting your authors with funding compliance and reporting\nAuthors are increasingly being mandated by their funders to report back on the status of their books and chapters. And, in the case of Open Books and Open Chapters, the funders and authors are frequently the ultimate clients, who are looking to record and report evidence of both academic and social impact. Making chapter level information and identifiers available will facilitate this evidence gathering, especially for open chapters within otherwise non-open books, an increasingly common phenomenon.\n7. Understanding the hot topics in your books\nWhether you use Altmetric, or one of the other data sources that capture book activity, being able to access the social and media metrics of the chapters in your book gives you an immediate insight into the topics that capture interest at a broader level. Vital information when it comes to planning more books in the space, especially if you\u0026rsquo;re on the lookout for books with trade crossover potential.\nWith chapter-level data, publishers can summarize their programs and compare how many authors they work with, how many book titles they have and where there might be gaps in subject and authors omitted from the metadata. Does the scholarly record fully reflect each book? If not, there may be a good deal of information that is simply unavailable to the machines that read the metadata and use it in systems throughout scholarly communications. Fortunately, it\u0026rsquo;s becoming easier to manage this data. Although traditional book metadata systems don\u0026rsquo;t always support chapter-level data, they do often permit publishers to register title-level DOIs, and with Crossref encouraging ISBN information alongside the generation of chapter level DOIs, some of the significant challenges have been reduced.\nBoth Crossref and Metadata 2020 offer best practices that make clear the need for richer metadata. It\u0026rsquo;s also important to acknowledge the very real barriers to providing robust metadata, whether for book chapters or anything else, which is why having the conversations and being aware of available resources is important. Because, though it may be difficult, the hurdles are often up-front: making the decision to invest in better metadata, factoring in associated costs, setting up workflows, etc.\nBut as we have seen from the previous decades, book publishers and their suppliers are experts in managing substantial amounts of metadata. Just as no-one would argue for rolling back all those advantages, we believe that - once deployed - industry-wide creation and distribution of chapter data would be an advance from which there is no retreat.\nREFERENCES https://riojournal.com/article/38698/\nThe State of Open Monographs Report\nhttps://longleafservices.org/blog/the-sustainable-history-monograph-pilot/\nhttps://scholarlykitchen.sspnet.org/2017/12/07/enriching-metadata-is-marketing/\nhttps://www.ingenta.com/blog-article/five-reasons-chapter-level-metadata-increases-value-academic-books/\n", "headings": ["So why the lack of progress? 
","Here we present 7 reasons why publishers should collect chapter-level metadata:","REFERENCES"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/laura-j-wilkinson/", "title": "Laura J Wilkinson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/mike-taylor/", "title": "Mike Taylor", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/memoirs-of-a-doi-detective...its-error-mentary-dear-members/", "title": "Memoirs of a DOI detective…it’s error-mentary dear members", "subtitle":"", "rank": 1, "lastmod": "2020-04-27", "lastmod_ts": 1587945600, "section": "Blog", "tags": [], "description": "Hello, I’m Paul Davis and I’ve been part of the Crossref support team since May 2017. In that time I’ve become more adept as a DOI detective, helping our members work out whodunnit when it comes to submission errors.\nIf you have ever received one of our error messages after you have submitted metadata to us, you may know that some are helpful and others are, well, difficult to decode. I\u0026rsquo;m here to help you to become your own DOI detective.\n", "content": "Hello, I’m Paul Davis and I’ve been part of the Crossref support team since May 2017. In that time I’ve become more adept as a DOI detective, helping our members work out whodunnit when it comes to submission errors.\nIf you have ever received one of our error messages after you have submitted metadata to us, you may know that some are helpful and others are, well, difficult to decode. I\u0026rsquo;m here to help you to become your own DOI detective.\nMotive: ridding the world of bad metadata When depositing xml files to us, there can be a plethora of error messages returned to you in the submission logs. Wait, what are submission logs? If that is the first thing that came to mind, then you’re in the right place; do keep reading.\nMeans: XML deposits After each content registration or update is received into our deposit admin system, it is initially placed in the submission queue and later, once its time comes, is processed. Whether that deposit comes from the web deposit form, Metadata Manager, or a good old fashioned XML deposit, a submission log is created in our system. This log contains important information about the deposit and its success or failures.\nI will go through how you will find and receive this log later on. At the bottom of the submission log you will see a status message that looks like this:\n\u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;***\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;***\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;***\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;***\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; To some, this might look a bit like a crime scene. 
If the status report displays the same number in the \u0026lt;record_count\u0026gt; and the \u0026lt;success_count\u0026gt;, then no crime (against deposits) has been committed. Everything you have tried to register or update has been successful and we are all free as DOI detectives to knock off early.\nAt some point you will probably come across an error or failure in the submission logs, where the failure count is 1.\n\u0026lt;batch_data\u0026gt; \u0026lt;record_count\u0026gt;1\u0026lt;/record_count\u0026gt; \u0026lt;success_count\u0026gt;0\u0026lt;/success_count\u0026gt; \u0026lt;warning_count\u0026gt;0\u0026lt;/warning_count\u0026gt; \u0026lt;failure_count\u0026gt;1\u0026lt;/failure_count\u0026gt; \u0026lt;/batch_data\u0026gt; For the purposes of this blog, this type of message means a “crime” has been committed. The worst kind of crime - a metadata crime. In the real world, outside of this blog, it just means that your deposit has failed and you need to take some action to fix it. You will also receive accompanying error messages (an evidence log) with details about what went wrong with your submission. We’ll deliver these submission details to you as well in the following ways:\nFor those submitting via the web deposit form, to the email address used to register your submission\nOn screen and within the admin tool using the submission ID for those submitting via Metadata Manager\nFor those submitting XML, to the email included in the \u0026lt;email_address\u0026gt; element of your deposit XML\nYou can also find the submission log in the admin system at any point\nMore information on viewing past deposits in the admin system can be found on our support site.\nThe usual suspects Those serial offenders, when it comes to failed deposits, are:\nTimestamps Misdemeanor - Every deposit has a \u0026lt;timestamp\u0026gt; value, and that value needs to be incremented each time the DOI is updated. This is done automatically for you in Metadata Manager, the Web Deposit Form and the OJS plugin. But if you’re updating an existing DOI by sending us the whole XML file again, you need to make sure that you update the timestamp as well as the field you’re trying to update. Error: \u0026lt;msg\u0026gt;Record not processed because submitted version: 201907242206 is less or equal to previously submitted version 201907242206\u0026lt;/msg\u0026gt; Rehabilitation - simply resubmit your XML file, but make sure that you increment the timestamp value to be larger than the current timestamp value. Titles Misdemeanor - These need to match exactly between what we have on the system against the ISSN/ISBN and what is in the deposit file. Error: \u0026lt;msg\u0026gt;Deposit contains title error: the deposited publication title is different than the already assigned title\u0026lt;/msg\u0026gt; or\nError: \u0026lt;msg\u0026gt;ISSN \u0026#34;123454678\u0026#34; has already been assigned, issn (123454678) is assigned to another title (Journal of Metadata)\u0026lt;/msg\u0026gt; Rehabilitation - you can check the title we have on the system against the ISSN/ISBN on the title list and make the necessary changes, or contact support for us to check the title in our system and make changes to match the title in the deposit to the one in the system, if known. 
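Because the timestamp misdemeanor above is such a frequent offender, here is a small sketch of one way to bump the timestamp value before resubmitting a deposit. This is not Crossref-supplied code: it simply rewrites the numeric timestamp element with the current UTC time in the same YYYYMMDDHHMM style shown in the error message, so the resubmitted value is always larger than the previous one. The file names are hypothetical, and a proper XML library would work just as well as the regular expression used here.

```python
# A minimal sketch, not Crossref-supplied code: rewrite the numeric timestamp
# element in a deposit file with the current UTC time (YYYYMMDDHHMM), so the
# resubmitted version is always larger than the previously submitted one.
# "deposit.xml" and "deposit-resubmit.xml" are hypothetical file names.
import re
from datetime import datetime, timezone

def bump_timestamp(deposit_xml):
    """Return the deposit XML with its timestamp element set to the current time."""
    new_value = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
    updated, replacements = re.subn(
        r"(<timestamp>)\d+(</timestamp>)",
        r"\g<1>" + new_value + r"\g<2>",
        deposit_xml,
        count=1,
    )
    if replacements == 0:
        raise ValueError("No numeric timestamp element found in the deposit")
    return updated

if __name__ == "__main__":
    with open("deposit.xml", encoding="utf-8") as source:
        original = source.read()
    with open("deposit-resubmit.xml", "w", encoding="utf-8") as target:
        target.write(bump_timestamp(original))
```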
Title level DOIs Misdemeanor - These also need to match up exactly in both system and deposit. Error: \u0026lt;msg\u0026gt;Deposit contains title error: The journal has a different DOI assigned; If you want to change the journal\u0026#39;s DOI please contact Crossref support: title=Journal of Metadata; current-doi=10.14393/JoM; deposited-doi=10.14393/JoM.1.1\u0026lt;/msg\u0026gt; Rehabilitation - contact us to change the journal level DOI in the system or change the DOI in the deposit yourself to match the one already registered for the title. Errors in the xml Misdemeanor - Poor formatting, self closing tags, invalid values. Error: \u0026lt;msg\u0026gt;Deposited XML is not well-formed or does not validate: Error on line 538\u0026lt;/msg\u0026gt; Rehabilitation - update the xml file that was deposited, as it was not well formed against our schema or as an xml file in general. Check that you have saved the file correctly (as an .xml file) and edited it in an xml editor, not a word processor; if that fails, then contact support and we will try to assist. We also have a collection of new xml examples you may use as a template. Forensics There are a few tools we offer to help with the deciphering of the error messages –– we think of these as our magnifying glass(es).\nThe Title list: A list of all of the titles in our database; you can check against the ISSN/ISBN to see what the title on our system is and whether it matches the title you have in your deposit.\nThe Depositor Report: Shows all journals, books, and conference proceedings against each member. The report includes all DOIs for each journal, book, conference; the most recently used timestamps; and citation counts for each DOI.\nThe Reports tab in the admin system: You can find out the history behind a DOI by searching against this in the admin console.\nOur common error messages are documented within our support documentation. You can always find out more about most of the error messages our system displays at the link above.\nYou can find the current xml metadata against a DOI by adding the DOI to the end of this link http://0-doi-crossref-org.libus.csd.mu.edu/search/doi?pid=support@crossref.org\u0026format=unixsd\u0026doi= (you might need an xml viewer browser extension to view the xml in a more readable format).\nCalling for backup We’ll also soon be adding more leads to our submission logs and error messages for the best of our detectives. These improvements will point our DOI detectives to better documentation about interpreting error messages and taking the appropriate action to resolve those errors.\nBut there are a lot more error messages out there. If you have trouble deciphering any error message you encounter, then please do send the case number (submission ID) over to CSI (Crossref Support Investigations) at support@crossref.org.\nYou can also find lots of great information in the pages of our new documentation.\n", "headings": ["Motive: ridding the world of bad metadata","Means: XML deposits","The usual suspects","Timestamps","Titles","Title level DOIs","Errors in the xml","Forensics","Calling for backup"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/bylaws/", "title": "Bylaws", "subtitle":"", "rank": 1, "lastmod": "2020-04-20", "lastmod_ts": 1587340800, "section": "Board & governance", "tags": [], "description": "Article I\nMembership\nSection 1. Qualification. Membership in Publishers International Linking Association, Inc.
(the “Corporation”) shall be open to any organization that publishes professional and scholarly materials and content and otherwise meets the terms and conditions of membership established from time to time by the Board of Directors of the Corporation (the “Board”), and to such other entities as the Board shall determine from time to time. Section 2. Acceptance of members.", "content": "Article I\nMembership\nSection 1. Qualification. Membership in Publishers International Linking Association, Inc. (the “Corporation”) shall be open to any organization that publishes professional and scholarly materials and content and otherwise meets the terms and conditions of membership established from time to time by the Board of Directors of the Corporation (the “Board”), and to such other entities as the Board shall determine from time to time. Section 2. Acceptance of members. Applications for membership shall be approved by the Board, which may delegate the authority to approve applications to the Executive Director. An applicant shall become a member of the Corporation (a “member”) upon the Corporation’s approval of its membership application and receipt of its first annual membership fee. The record date of membership for the member shall be the date of the Corporation’s receipt of its first annual membership fee following the Corporation’s approval of its membership application. Section 3. Obligations of members. A member shall pay the dues and fees specified in the membership application, and shall have the rights and obligations specified by the Board from time to time including, but not limited to, executing and complying with an agreement among the Corporation and its various members in the form adopted by the Board from time to time. Each member shall provide the Corporation with written notification designating the person who shall be deemed to be its representative to the Corporation for all purposes, including voting, which designation can be changed from time to time by written notification as set forth in the membership agreement.\nSection 4. Resignation. Any member may withdraw from the Corporation after fulfilling all obligations to it by giving written notice of such intention to the Secretary, which notice shall be presented to the Board or Executive Committee by the Secretary at the first meeting after its receipt. Dues and service fees paid shall not be refundable. Section 5. Suspension and expulsion. A member may be suspended for a period or expelled for cause, such as violation of these By-Laws or any rules of the Corporation, or for conduct prejudicial to the best interests of the Corporation. Suspension or expulsion shall be by a vote of the Board (or by action of the Executive Committee, to take effect at the time specified in such Executive Committee action and to be reviewed and ratified by a vote of the Board at the next subsequent Board meeting), except where the suspension or expulsion is the result of the non-payment of dues and fees or required by applicable international sanctions compliance, in which event the Board may delegate such authority to the Executive Director. The member will be notified of its suspension or expulsion by the method specified in the then current version of the membership agreement. 
The Executive Committee of the Board shall be empowered to temporarily or permanently ratify, modify or rescind the previous action, and may, within its complete discretion, permit the member to seek reinstatement by presenting a defense to its suspension or expulsion.\nArticle II Fiscal Year\nThe fiscal year of the Corporation shall begin on the first day of January and end on the last day of December in each year.\nArticle III\nDues and Service Fees\nSection 1. Annual dues and service fees. The Board may determine from time to time the amount of all dues and service fees payable to the Corporation by members. Section 2. Payment of dues and service fees. Dues and service fees shall be payable on such terms and at such times specified by the Board from time to time. Dues and service fees of a new member shall be prorated from the first day of the month in which such new member is accepted for membership, for the remainder of the fiscal year of the Corporation. Section 3. Default and termination of membership. When any member shall be in default in the payment of dues and service fees for a period of three (3) consecutive months, its membership may thereupon be terminated in the manner provided in Article I, Section 5, of these By-Laws.\nArticle IV Meetings of Members\nSection 1. Annual meetings. There shall be an annual meeting of members of the Corporation during the second week of November in each year, or at such other time as the Board may determine from time to time, for election of Directors and for receiving the annual reports of officers, Directors, and committees, and the transaction of other business. If the day designated falls upon a legal or religious holiday, the meeting shall be held on the next succeeding secular day not a holiday.\nSection 2. Special meetings. Special meetings of the members may be called by the Board in its discretion. Upon the written request of members entitled to cast forty percent (40%) of the total number of votes entitled to be cast at any such meeting, the Board shall call a special meeting to consider a specific subject. No business other than that specified in the notice of meeting shall be transacted at any special meeting of the members.\nSection 3. Notice and waiver of notice. Notice of any meeting of the members, stating the place, date, and time of the meeting and, in the case of a special meeting, the purpose or purposes for which it is called, shall be given by the Secretary by delivering a copy thereof personally, by first class mail, by facsimile telecommunication (fax) or by electronic mail, not less than ten (10) days nor more than fifty (50) days before the meeting to each member at the address in the records of the Corporation. Notwithstanding the provisions of any of the foregoing sections, a meeting of the members may be held at any time and at any place designated by the Board, and any action may be taken thereat, if notice is waived in writing by every member having the right to vote at the meeting. Any member may waive notice of any meeting by submitting a waiver in person or by proxy either before or after the meeting. Waiver of notice may be written or electronic. If written, the waiver must be signed by the member’s authorized representative (including by facsimile signature). If electronic, the waiver must be sent by electronic mail, and must set forth or be submitted with information from which it can reasonably be determined that the transmission was authorized by the member. 
The attendance of any member at a meeting, in person or by proxy, without protesting the lack of notice of such meeting prior to the conclusion of the meeting shall constitute a waiver of notice by such member.\nSection 4. Record date. For the purpose of determining the members entitled to vote at any meeting of members or any adjournment thereof, or to express consent to or dissent from any proposal without a meeting, or for the purpose of any other action by the members, the Board may fix, in advance, a date as the record date for any such determination by members. Such record date shall not be more than fifty (50) nor less than ten (10) days before the date of such meeting.\nSection 5. Quorum. The presence in person or by proxy of the lesser of one-tenth of the members entitled to vote, or one hundred (100) members entitled to vote, or such other number as may be set by the laws of the State of New York as the minimum number necessary to constitute a quorum for meetings of members, shall be necessary to constitute a quorum for the transaction of business.\nSection 6. Inspectors of election. One (1) Inspector of Election shall be chosen by vote of the members at the annual meeting. He or she shall act as Inspector of Election at the meeting and at all special meetings until the next annual meeting.\nSection 7. Voting. Any member may be represented at any meeting by any member of its staff delegated by it for that purpose, but each member in good standing shall be entitled to only one vote. If the manner of deciding any question has not otherwise been prescribed, it shall be decided by majority vote of the members present in person or by proxy.\nSection 8. Proxies. Every member entitled to vote at any meeting of the members may vote by proxy. A member may authorize another person to act for the member as proxy by (i) executing a writing providing such authorization, signed (including facsimile signature) by the member’s authorized representative, or (ii) providing such authorization by electronic mail to the person who will be the holder of the proxy or to a proxy solicitation firm, proxy support service organization or like agent, provided that such authorization must set forth information from which it can be reasonably determined that the authorization was given by the member. A proxy shall be revocable at the pleasure of the member executing it, to the extent permitted by law. Unless the duration of the proxy is specified, it shall be invalid after eleven (11) months from the date of its execution.\nSection 9. Order of business. The order of business at all the meetings of the members, Board, and Executive Committee shall be as determined by the Board or the Executive Committee, as the case may be, from time to time.\nAny question as to priority of business shall be decided by the Chairman without debate.\nThis order of business may be altered or suspended at any meeting by a majority vote of the members, Directors, or Executive Committee members present, as appropriate.\nSection 10. Membership action without meeting. Whenever members are required or permitted to take any action by vote, such action may be taken without a meeting upon the consent of all the members entitled to vote thereon, setting forth the action so taken. Such consent may be written or electronic. 
If written, the consent must be executed by the member’s authorized representative by signing or causing his or her signature to be affixed to the consent by any reasonable means, including but not limited to facsimile signature. If electronic, the transmission of the consent must be sent by electronic mail and set forth, or be submitted with, information for which it can be reasonably determined that the transmission was authorized by the member.\nArticle V Directors\nSection 1. Number. The property, affairs, activities, and concerns of the Corporation shall be vested in the Board, which shall consist of not fewer than three (3) nor more than sixteen (16) Directors or such other number determined by the Board and as permitted or required by the Not-for-Profit Corporation Law and the Certificate of Incorporation. Directors shall, upon election, enter into the performance of their duties immediately upon the expiration or termination of the term then extant and shall continue in office until their successors shall be duly elected and qualified.\nSection 2. Election and term of Directors. Election of Directors and terms of service shall be as specified in the Certificate of Incorporation, as amended from time to time. Each candidate for Director shall be an employee or officer of a member and no member may designate more than one candidate for election to the Board in any election. Any member whose candidate is elected to the Board may designate an alternate for such Director. Each alternate so designated may attend meetings of the Board and shall be deemed to be a member of the Board for all purposes but only for the duration of such designation. No such designation shall operate to increase the representation on the Board of the member designating the alternate and in the event that both the alternate and the Director are present at any Board meeting only the Director shall have the right to vote at the meeting.\nSection 3. Duties of Directors. The Board may without limitation: (1) hold meetings at such times and places as it thinks proper; (2) admit members and suspend or expel them by ballot; (3) appoint committees on particular subjects from the Directors, or from the members; (4) audit bills and disburse the funds of the Corporation; (5) print and circulate documents and publish articles; (6) carry on correspondence and communicate with other associations interested in scholarly or scientific publishing; (7) employ agents; and (8) devise and carry into execution such other measures as it deems proper and expedient to promote the objects of the Corporation and to best protect the interests and welfare of the members.\nSection 4. Meetings of Board. Regular meetings of the Board shall be held during the next calendar quarter immediately following the annual meeting of the members and on such other days as the Board may determine commensurate with good corporate practice. The Chairman may, when he or she deems necessary, or the Secretary shall, at the request in writing of five (5) Directors, issue a call for a special meeting of the Board. The Chairman shall preside at all meetings of the Board.\nSection 5. Notice and waiver of notice. Notice of each regular meeting, signed by the Secretary or another officer, shall be delivered personally, by first class mail, by facsimile telecommunication (fax) or by electronic mail to the last recorded address of each Director at least five (5) days before the time appointed for the meeting. 
Notice of each special meeting, signed by the Secretary or another officer, shall be delivered personally, by first class mail, by facsimile telecommunication (fax) or by electronic mail to the last recorded address of each Director at least three (3) days before the time appointed for the meeting. Notice of a meeting need not be given to any Director who submits a signed waiver of notice whether before or after the meeting, or who attends such meeting without protesting, prior thereto or at its commencement, the lack of notice. Waiver of notice may be written or may be given via electronic mail. If written, the waiver must be executed by the Director signing such waiver or causing his or her signature to be affixed to such waiver by any reasonable means, including facsimile signature. If sent by electronic mail, the waiver must include information from which it can be reasonably determined that the transmission was authorized by the Director.\nSection 6. Quorum. A majority of the entire Board shall constitute a quorum for the transaction of business. Any one or more Directors or any committee thereof may participate in a meeting of such Board or committee by means of a conference telephone, videoconference, or similar communications equipment, as long as all persons participating in the meeting can hear each other at the same time and each Director can participate in all matters before the Board. Participation by such means shall constitute presence in person at the meeting. If a quorum is not present, a lesser number by majority may adjourn the meeting to a later date, not more than ten (10) days later. The Secretary shall give written notice of the adjourned date to all Directors in the manner described for regular meetings in Article V, Section 5 above.\nSection 7. Absence. The Board shall have the right, power and authority to set minimum requirements for Board attendance and participation including without limitation rules and criteria for alternate Board members. Should any Director or his or her alternate violate such requirements without sending a communication to the Chairman or Secretary stating his or her reason for so doing, or if his or her excuse should not be accepted by the Board, or if any Director or his or her alternate fails to be present at two (2) consecutive Board meetings, such Director or alternate will be deemed to have resigned. The seat shall be filled as set forth in Section 9.\nSection 8. Resignation of Directors. In addition to the procedures for the resignation of Directors set forth in Article Sixth of the Certificate of Incorporation, if the Board determines that two (2) or more Directors are affiliated with the same member, within ten (10) days following notice thereof by the Board, such member may designate in writing to the Board one Director affiliated with it to remain on the Board, and all other Directors affiliated with such member shall cease to be Directors, provided, that if such member does not make such designation within such time, it shall be made, with the same effect, by a majority of the Board without participation in such decision by the Directors so affiliated.\nSection 9. Vacancies. Except as set forth in the Certificate of Incorporation, any vacancy in the Board shall be filled without undue delay by a majority vote by ballot of the remaining members of the Board at the next regular meeting or at a special meeting which shall be called for that purpose. 
The election shall be held within sixty (60) days after the occurrence of the vacancy. The person so chosen shall hold office until the end of the term which the director was elected or appointed to fill, or for a term to be determined by the Board which ends at an annual meeting (but in no event longer than three (3) years), or until his or her successor shall have been chosen at a special meeting of the members.\nSection 10. Removal of Directors. Any one or more of the Directors may be removed either with or without cause, at any time, by a vote of two-thirds (2/3) of the members present at a regular meeting or at any special meeting called for that purpose.\nSection 11. Directors’ action without meeting. Any action required or permitted to be taken by the Board or by any committee thereof may be taken without a meeting if all the members of the Board or of such committee consent in writing to the adoption of a resolution authorizing the action, which consent may be sent by electronic mail, including information from which it can reasonably be determined that the transmission was authorized by the applicable Director. In the event of any such action without a meeting, the resolution and the written consent thereto shall be filed with the minutes of the proceedings of the Board or of the relevant committee, as the case may be.\nArticle VI Officers\nSection 1. Number. This Corporation shall, at a minimum, have the following officers: a Chairman, a Secretary, and a Treasurer. The Board shall have the right, power and authority to specify additional offices and elect and/or appoint officers to fill such offices, from time to time.\nSection 2. Method of election. The Board shall elect all officers for a term of one (1) year, the Chairman and Treasurer being elected from the Board. A majority vote of a quorum present shall be necessary to constitute an election. Officers shall serve until their respective successors are elected and have qualified.\nSection 3. Duties of officers. The duties and powers of the following officers of the Corporation shall be as set forth below:\nChairman\nThe Chairman shall preside over operations of the Corporation and shall be a member ex officio, without right to vote (unless such right may be conferred on the Chairman by other or dual status) of the Board. 
He or she shall also, at the annual meeting of the Corporation and such other times as he or she deems proper, communicate to the Corporation or to the Board such matters and make such suggestions as may in his or her opinion tend to promote the prosperity and welfare and increase the usefulness of the Corporation and shall perform such other duties as are necessarily incident to the office of the Chairman.\nSecretary\nIt shall be the duty of the Secretary to give notice of and attend all meetings of the members and the Board and keep a record of their doings; to conduct all correspondence and to carry into execution all orders, votes, and resolutions not otherwise committed; to keep a list of the members of the Corporation; to collect the fees, annual dues and service fees, and subscriptions and pay them over to the Treasurer; to notify the officers and members of the Corporation of their election; to notify members of their appointment on committees; to furnish the Chairman of each committee with a copy of the vote under which the committee is appointed, and at his request give notice of the meetings of the committee; to prepare, under the direction of the Board, an annual report of the transactions and condition of the Corporation, and generally to devote his or her best efforts to forwarding the business and advancing the interests of the Corporation. In case of absence or disability of the Secretary, the Board may appoint a Secretary pro tem. The Secretary shall be the keeper of the Corporation’s seal. The offices of Secretary and Chairman may not be held by the same person.\nTreasurer\nThe Treasurer shall keep an account of all moneys received and expended for the use of the Corporation, and shall make disbursements only upon vouchers approved in writing by any member of the Executive Committee. He or she shall deposit all sums received in a bank, or banks, or trust company approved by the Board, and make a report at the annual meeting or when called upon by the Chairman. Funds may be drawn only upon the signature of the Chairman, the Treasurer or the Executive Director, if any.\nThe funds, books, and vouchers in his or her hands shall at all times be under the supervision of the Board and subject to its inspection and control. At the expiration of his or her term of office, the Treasurer shall deliver over to his or her successor all books, moneys, and other property, or, in the absence of a treasurer-elect, to the Chairman. In case of the absence or disability of the Treasurer, the Board may appoint a Treasurer pro tem.\nIn case of the death or absence of the Chairman, or of his or her inability from any cause to act, the Treasurer shall perform the duties of the Chairman.\nExecutive Director\nThe Executive Director shall have day-to-day responsibility for the operations of the Corporation and shall report to the Corporation’s senior officers and the Board.\nSection 4. Vacancies. All vacancies in any office shall be filled by the Board without undue delay, at its regular meeting, or at a meeting specially called for that purpose.\nSection 5. Compensation of officers. The officers shall receive no salary or compensation unless the Board otherwise determines, so long as such compensation does not violate the Not-for-Profit Corporation Law.\nSection 6. Reimbursement. 
The Corporation may reimburse its officers and Directors for their reasonable and documented expenditures which conform to the reimbursement criteria established by the Board from time to time, provided that such expenditures are incurred in furtherance of the Corporation’s purposes.\nArticle VII Committees\nSection 1. Executive Committee. There shall be appointed annually by the Board an Executive Committee to be comprised of the Chairman, the Treasurer and three (3) other Directors at least one of whom shall be the employee or representative of a not-for-profit publishing entity which is a member. The Executive Committee may act on behalf of the Corporation in any matter when the Board is not in session, reporting to the Board on the Executive Committee’s actions at each regular meeting or any special meeting called for that purpose. Three (3) members of the Executive Committee shall constitute a quorum for the transaction of business. Meetings may be called by the Chairman or by two (2) members of the Executive Committee.\nSection 2. Nominating Committee. The Board shall appoint a Nominating Committee of five (5) members, each of whom shall be either a Director or the designated representative of a member that is not represented on the Board, whose duty it shall be to nominate candidates for Directors to be elected at the next annual election. The Nominating Committee shall designate a slate of candidates for each election that is at least equal in number to the number of Directors to be elected at such election. Each such slate will be comprised such that, as nearly as practicable, one-half of the resulting Board shall be comprised of Directors designated by Members then representing Revenue Tier 1; and one-half of the resulting Board shall be comprised of Directors designated by Members then representing Revenue Tier 2. “Revenue Tier 1” means all consecutive membership dues categories, starting with the lowest dues category, that, when taken together, aggregate, as nearly as possible, to fifty percent (50%) of Crossref’s annual revenue. “Revenue Tier 2” means all membership dues categories above Revenue Tier 1. The Nominating Committee shall notify the Secretary in writing, at least twenty (20) days before the date of the annual meeting, of the names of such candidates, and the Secretary, except as herein otherwise provided, shall transmit a copy thereof to the last recorded address of each member of record simultaneously with the notice of the meeting.\nSection 3. [Reserved.]\nSection 4. Audit Committee. The Board shall appoint an Audit Committee comprised of three independent Directors (as defined in the Not-for-Profit Corporation Law) who are not officers of the Corporation or members of the Executive Committee. The Audit Committee shall oversee the accounting and financial reporting processes of the Corporation and the audit of its financial statements, annually retain or renew the retention of an independent auditor, review with the independent auditor the results of the audit, including the management letter, and oversee the adoption and implementation of, and compliance with, any conflict of interest or whistleblower policies. The Audit Committee shall report to the Board at regular meetings, or at special meetings called for that purpose, as requested by the Board but not less often than once per year. 
The Audit Committee has the authority to engage independent legal, accounting and other advisors as it determines necessary to carry out its duties, and to approve each such advisor’s fees and other retention terms.\nSection 5. Other committees. The Board may, at any time, appoint other committees on any other subject. The Board will appoint the Chair and the members of each such committee, to serve on the committee for the term specified by the Board. Unless specifically provided otherwise in the resolution forming such a committee, such committee shall remain in existence for one year from the date of its formation unless reauthorized by the Board for additional one- year terms. Unless specifically provided otherwise in the Certificate of Incorporation or these By- Laws, members of such committees are not required to be Directors, provided that any committee that is not composed solely of Directors shall not have authority to bind the Board.\nSection 6. Committee quorum. Unless specifically provided otherwise in these By- Laws, a majority of the members of any committee shall constitute a quorum for the transaction of business, unless any committee shall by a majority vote of its entire membership decide otherwise.\nSection 7. Committee vacancies. The various committees shall have the power to fill vacancies in their membership.\nArticle VIII Liability and Indemnification\nSection 1. Liability. The personal liability of the Directors and officers of the Corporation is hereby eliminated to the fullest extent permitted by Sections 719, 720 and 720-a of the Not-for-Profit Corporation Law, as the same may be amended and supplemented, from time to time.\nSection 2. Indemnification. The Corporation shall, to the fullest extent permitted by Sections 721 et seq. of the Not-for-Profit Corporation Law, as the same may be amended and supplemented from time to time, indemnify any and all persons whom it shall have power to indemnify under said sections from and against any and all of the expenses, liabilities or other matters referred to in, or covered by, said sections, and the indemnification provided for herein shall not be deemed exclusive of any other rights to which those indemnified may be entitled under any By-Law, agreement, vote of members or disinterested Directors or otherwise, both as to action in his or her official capacity and as to action in another capacity while holding such office, and shall continue as to a person who has ceased to be a Director, officer, employee or agent of the Corporation and shall inure to the benefit of the heirs, executors and administrators of any such person.\nSection 3. Insurance. The Corporation shall procure and maintain errors and omissions insurance coverage against Director and officer liability in such amounts, upon such terms and from such insurer(s) as the Board may from time to time deem advisable.\nArticle IX\nAudit of Books and Records\nThe Board shall cause the Corporation’s books and records to be audited at least once each year by a certified public accountant and shall report thereon to the Corporation’s membership in the form of a written annual report meeting the requirements of Section 519 of the Not-for-Profit Corporation Law to be distributed to the members as soon after completion of the audit as is practicable.\nArticle X\nAmendments\nThese By-Laws may be amended, repealed, or altered in whole or in part by a majority vote of the entire Board. 
The proposed change or changes shall be transmitted to the last recorded address of each member of the Board at least ten (10) days before the time of the meeting which is to consider such change or changes.\nEffective July 1, 2014; adopted March 5, 2014; revised July 13, 2016; revised July 12, 2018; revised November 15, 2018; revised March 7, 2019, revised July 11, 2019\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/helping-researchers-identify-content-they-can-text-mine/", "title": "Helping researchers identify content they can text mine", "subtitle":"", "rank": 1, "lastmod": "2020-04-16", "lastmod_ts": 1586995200, "section": "Blog", "tags": [], "description": "TL;DR Many organizations are doing what they can to aid in the response to the COVID-19 pandemic. Crossref members can make it easier for researchers to identify, locate, and access content for text mining. In order to do this, members must include elements in their metadata that:\nPoint to the full text of the content. Indicate that the content is available under an open access license or that it is being made available for free (gratis).", "content": "TL;DR Many organizations are doing what they can to aid in the response to the COVID-19 pandemic. Crossref members can make it easier for researchers to identify, locate, and access content for text mining. In order to do this, members must include elements in their metadata that:\nPoint to the full text of the content. Indicate that the content is available under an open access license or that it is being made available for free (gratis). How to do it. If your content is open access Make sure the Crossref metadata for all of your open access content includes:\nThe URL of the open access license the content is under. A URL that points to the full text of the content on your site (PDF, XML or HTML). Instructions for including license and full text URLs in your metadata.\nIf you are making subscription content available for text mining (temporarily or otherwise). Make sure the Crossref metadata for the content you are making freely available for text mining includes:\nThe URL of the publisher license the content is under. A URL that points to the full text of the content where it is being made freely available (PDF, XML or HTML). This might not be on your site. Instructions for including license and full text URLs in your metadata.\nIn addition, you need to flag the content that you are making freely available.\nA “free to read” element in the access indicators section of your metadata indicating that the content is being made available free-of-charge (gratis). An assertion element indicating that the content being made available is available free-of-charge. Instructions for flagging your content as “free”\nNote that step #4 is required in order for users to be able to find content marked as “gratis” in Crossref’s REST API.\nAnd if you decide to revoke the free access in the future, you will need to update the data to reflect that restrictions have been reimposed.\nSounds great. Has anybody else actually done this? Yes.\nOver 43 million metadata records already have a license and a full text link. 
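As a rough illustration of how the counts mentioned in this post can be checked programmatically (the corresponding filter URLs appear just below), here is a minimal sketch against the public Crossref REST API. It is not part of the original post and assumes Python with the third-party requests package installed.

```python
import requests

API = "https://api.crossref.org/works"
FILTERS = {
    "license + full-text link": "has-license:true,has-full-text:true",
    "flagged free + full-text link": "assertion:free,has-full-text:true",
}

for label, value in FILTERS.items():
    # rows=0 returns only the count (message/total-results), not the records
    resp = requests.get(API, params={"filter": value, "rows": 0}, timeout=30)
    resp.raise_for_status()
    print(label, "->", resp.json()["message"]["total-results"])
```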
https://0-api-crossref-org.libus.csd.mu.edu/works?filter=has-license:true,has-full-text:true\u0026rows=0\nMillions of the above items have one of the Creative Commons licenses or a dedicated text and data mining license provided by the publisher.\nAnd in the past three weeks (as of the writing of this blog post) over 23,000 articles have been flagged as “free” so they are available for text mining.\nhttps://api.crossref.org/v1/works?filter=assertion:free,has-full-text:true\n", "headings": ["TL;DR","How to do it.","If your content is open access","If you are making subscription content available for text mining (temporarily or otherwise).","Sounds great. Has anybody else actually done this?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/changes-to-resolution-reports/", "title": "Changes to resolution reports", "subtitle":"", "rank": 1, "lastmod": "2020-04-10", "lastmod_ts": 1586476800, "section": "Blog", "tags": [], "description": "This blog is long overdue. My apologies for the delay. I promised you an update in February as a follow up to the resolution reports blog originally published in December by my colleague Jon Stark and me. Clearly we (I) missed that February projection, but I’m here today to provide said update. We received many great suggestions from our members as a result of the call for comments. For those of you who took time to write: thank you! We took extra time to review and evaluate all of your comments and recommendations. We have reached a decision about the major proposed change - removal of all filters from monthly resolution reports - as well as a couple of suggested improvements from that feedback.\n", "content": "This blog is long overdue. My apologies for the delay. I promised you an update in February as a follow up to the resolution reports blog originally published in December by my colleague Jon Stark and me. Clearly we (I) missed that February projection, but I’m here today to provide said update. We received many great suggestions from our members as a result of the call for comments. For those of you who took time to write: thank you! We took extra time to review and evaluate all of your comments and recommendations. We have reached a decision about the major proposed change - removal of all filters from monthly resolution reports - as well as a couple of suggested improvements from that feedback.\nQuick recap of our original blog Jon wrote the original version of the resolution report in late 2009 in an effort to provide you, our members, with information about the usage of registered Crossref DOIs. At that time, Jon and others at Crossref thought it important to segment human-driven traffic from resolutions by machines (bots). Thus, we decided to filter out well-known machine activity in an attempt to only present you with resolutions by individual humans.\nIn the last ten-plus years things changed. We live in a time where most of our work requires both human and machine interaction. Therefore, we have hypothesized that some, or most, of those resolutions from machines today represent legitimate activity and should be reported to you each month. 
Since we don’t have a reliable method to segment those resolutions, and don’t think we should be making judgments about which resolutions should and should not be included in the reports, we proposed removing all filters and presenting you with all the numbers.\nWhat we heard from you In addition to soliciting comments in the blog, I also reached out to all of our members who had written into our support desk in the last year about anything related to resolution reports. We received dozens of responses from the blog and my outreach via email. The most common response was from members expressing their appreciation for and highlighting the utility of the reports. Most everyone told us how they were using the reports - from monitoring failure rates to mitigate issues to identifying trends over time. And a great number of respondents expressed concern that removing the filters might alter how or what we present to you in the reports (more on that soon). And, finally, several of you shared suggestions for improvement.\nWhere we go from here Our existing filters have been removing between 100 and 150 million resolutions from the monthly numbers we report to all members, collectively. Based on those figures, when we remove the filters, all resolution numbers will increase by about 25%. Those increased resolutions will vary from member to member because the numbers are based on actual bots crawling specific content, so some members may see more of an increase than others. We are mindful of how our members might adjust to that new baseline, since these changes will mean a noticeable (and significant) increase in resolution totals for the majority of our members.\nOutside of the suggested tweaks from members below and that 25% increase I mentioned (due to the retirement of the filters), the reports will remain unchanged. You’ll continue to receive successful resolutions, the report of top 10 DOIs, and the csv file containing failed resolutions. Our most important consideration throughout this process is that these reports continue to serve you.\nThe changes We liked some of your suggestions, so we’re set to adopt a few of the more straightforward improvements. Those that are more complicated we’re considering for the Member Center (working title, subject to change) project, where we will start to bring together all business and technical information for our members, service providers and metadata users.\nAs I said, we’re removing the filters. Starting in June, we’ll present all of the resolutions to you. No filters. On average, monthly resolution numbers will therefore increase by about 25%. We currently link to the failed DOI.csv near the bottom of the resolution report. For many members with large volumes of content, the resolution report can take some time to load and sift through, so we’re moving the link to the failed DOI.csv file up the page (Note: we know there are other changes we can make to the report itself that will make it easier to work with for members with large volumes of data; we’re exploring those improvements). We learned during this process that some members were not receiving resolution reports when they only had failed resolutions. One of the aims of the reports is to help members identify content registration problems, so this was a bug we are keen to repair. We are fixing it. Once it is fixed, all members who have at least one resolution - successful or failed - during the previous month will receive the report. 
What we can\u0026rsquo;t change Many members who responded to the call and who also enquire throughout the year (outside of this call) express interest in receiving more information from the resolution reports. You want resolution numbers for all your DOIs. You want referral information about where the resolutions are coming from (e.g., IP addresses) and breakdowns by machine/human. You want more information about how and why the failure rate is growing over time. We understand.\nIn the past, we did try to process more information for IP addresses and user agents but it turns out that generating that volume of extra data and processing monthly is simply impractical. The other issue is one of privacy. IP addresses are considered personally identifiable information (PII), or data that could potentially be used to identify particular people. We are committed to maintaining the privacy of our members and users and therefore cannot provide this level of granularity in our reports.\nNext up Look for these changes starting in June. If you read this far, you may not need it, but we’ll also include a reminder atop the report itself about the increase in resolution totals as a result of our changes.\n", "headings": ["Quick recap of our original blog","What we heard from you","Where we go from here","The changes","What we can\u0026rsquo;t change","Next up"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/committees/", "title": "Committees", "subtitle":"", "rank": 4, "lastmod": "2020-04-10", "lastmod_ts": 1586476800, "section": "Committees", "tags": [], "description": "We have a number of committees that provide oversight of different aspects of our activities. They ensure that Crossref is governed and run efficiently and fulfills its mission. Committees are established in our by-laws or they can be established by our board for a specific purpose.\nOur board appoints the Chair of each committee each year at its November meeting. The Crossref guiding principles highlight that effective and representative governance is important to persistence and enabling us to achieve our mission.", "content": "We have a number of committees that provide oversight of different aspects of our activities. They ensure that Crossref is governed and run efficiently and fulfills its mission. Committees are established in our by-laws or they can be established by our board for a specific purpose.\nOur board appoints the Chair of each committee each year at its November meeting. 
The Crossref guiding principles highlight that effective and representative governance is important to persistence and enabling us to achieve our mission.\nCommittees Committee Staff facilitator Chair Audit committee Lucy Ofiesh Ashley Towne, University of Chicago Press Executive committee Ed Pentz Lisa Schiff, California Digital Library Membership and Fees committee Amanda Bartell Vincas Grigas, Vilnius University Nominating committee Lucy Ofiesh James Phillpotts, Oxford University Press ", "headings": ["Committees"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/doi-resolution/", "title": "DOI Resolution", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/committees/executive/", "title": "Executive committee", "subtitle":"", "rank": 3, "lastmod": "2020-04-10", "lastmod_ts": 1586476800, "section": "Committees", "tags": [], "description": "The Executive Committee is made up of the Chair, Treasurer and three other board members, one who has to be a representative of a non-profit member. The Executive committee has three major functions, to:\nSteer: create and review agendas for discussion and decision by the Board. Oversee: evaluate key performance indicators and suggest corrective actions between Board meetings. Expedite: take any decisions delegated to it by the board. This usually happens after full board discussion reaches a consensus on a major initiative and wants open details resolved before the next board meeting.", "content": "The Executive Committee is made up of the Chair, Treasurer and three other board members, one who has to be a representative of a non-profit member. The Executive committee has three major functions, to:\nSteer: create and review agendas for discussion and decision by the Board. Oversee: evaluate key performance indicators and suggest corrective actions between Board meetings. Expedite: take any decisions delegated to it by the board. This usually happens after full board discussion reaches a consensus on a major initiative and wants open details resolved before the next board meeting. Executive Committee members Lisa Schiff, California Digital Library (Chair) Rose L\u0026rsquo;Huillier, Elsevier (Treasurer) Oscar Donde, Pan Africa Science Journal Nick Lindsay, MIT Press James Phillpotts, Oxford University Press ", "headings": ["Executive Committee members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/free-public-data-file-of-112-million-crossref-records/", "title": "Free public data file of 112+ million Crossref records", "subtitle":"", "rank": 1, "lastmod": "2020-04-09", "lastmod_ts": 1586390400, "section": "Blog", "tags": [], "description": "A lot of people have been using our public, open APIs to collect data that might be related to COVID-19. This is great and we encourage it. We also want to make it easier. 
To that end we have made a free data file of the public elements from Crossref’s 112.5 million metadata records.\nThe file (65GB, in JSON format) is available via Academic Torrents here: https://0-doi-org.libus.csd.mu.edu/10.13003/83B2GP\nIt is important to note that Crossref metadata is always openly available.", "content": "A lot of people have been using our public, open APIs to collect data that might be related to COVID-19. This is great and we encourage it. We also want to make it easier. To that end we have made a free data file of the public elements from Crossref’s 112.5 million metadata records.\nThe file (65GB, in JSON format) is available via Academic Torrents here: https://0-doi-org.libus.csd.mu.edu/10.13003/83B2GP\nIt is important to note that Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through March 2020 into one file for download.\nThe sheer number of records means that, though anyone can use these records anytime, downloading them all via our APIs can be quite time-consuming. We hope this saves the research community valuable time during this crisis.\nA few important notes All records are included. In other words, the data file has every DOI ever registered with Crossref through March 31st, 2020. This means it’s a large file, 65GB.\nMetadata is supplied by our members and, as such, not all records have the same completeness (or quality) of metadata. Bibliographic metadata is generally required. All other metadata, e.g. license and funding information, ORCIDs, etc. is optional (though very much encouraged). References (i.e. authors’ cited sources) are also optional metadata. Nearly 50 million records include references and, of those, nearly 30 million have open references that are included in the data file. “Limited” and “Closed” references are not included in the data file. [EDIT 6th June 2022 - all references are now open by default with the March 2022 board vote to remove any restrictions on reference distribution]. If an error in the metadata is found, please report it directly to the publisher to correct. The records are in JSON.\nNew and updated records can be added incrementally using our REST API, which includes a number of date filter options, e.g. index-date.\nNo registration is required to use our REST API but we do strongly encourage being a ‘polite’ (i.e. identified) user. It makes troubleshooting much easier and reduces the chance of negatively impacting other users.\nQuestions, comments and feedback are welcome at support@crossref.org.\nWe thank AcademicTorrents.com for helping us make this data available. And we are grateful for the incredible efforts of everyone working to support research everywhere\u0026ndash;stay safe and well.\n", "headings": ["A few important notes"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/content-registration/", "title": "Content Registration", "subtitle":"", "rank": 5, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Find a service", "tags": [], "description": "Content Registration allows members to register and update metadata via machine or human interfaces. When you join Crossref as a member you are issued a DOI prefix. You combine this with a suffix of your choice to create a DOI, which becomes active once registered with Crossref. 
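As a supplement to the public data file post above, here is a minimal sketch of the incremental retrieval it describes: pulling records added or updated since a given date with the REST API’s from-index-date filter and cursor-based paging, while identifying yourself via the mailto parameter as a “polite” user. This is an illustration only; it assumes Python with the third-party requests package, and the date and email address are placeholders.

```python
import requests

API = "https://api.crossref.org/works"
params = {
    "filter": "from-index-date:2020-04-01",  # placeholder date: records indexed since then
    "rows": 200,
    "cursor": "*",                            # start a deep-paging cursor
    "mailto": "you@example.org",              # placeholder address; marks you as a polite user
}

total = 0
while True:
    resp = requests.get(API, params=params, timeout=60)
    resp.raise_for_status()
    message = resp.json()["message"]
    items = message["items"]
    if not items:
        break
    total += len(items)
    # ... process or store `items` here ...
    params["cursor"] = message["next-cursor"]  # resume from the cursor the API returns

print("new or updated records retrieved:", total)
```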
Content Registration allows members to register a DOI and deposit or update its associated metadata, via machine or human interfaces.\nBenefits of content registration Academic and professional research travels further if it’s linked to the millions of other published papers.", "content": " Content Registration allows members to register and update metadata via machine or human interfaces. When you join Crossref as a member you are issued a DOI prefix. You combine this with a suffix of your choice to create a DOI, which becomes active once registered with Crossref. Content Registration allows members to register a DOI and deposit or update its associated metadata, via machine or human interfaces.\nBenefits of content registration Academic and professional research travels further if it’s linked to the millions of other published papers. Crossref members register content with us to let the world know it exists, instead of creating thousands of bilateral agreements.\nMembers send information called metadata to us. Metadata includes fields like dates, titles, authors, affiliations, funders, and online location. Each metadata record includes a persistent identifier called a digital object identifier (DOI) that stays with the work even if it moves websites. Though the DOI doesn\u0026rsquo;t change, its associated metadata is kept up-to-date by the owner of the record.\nRicher metadata makes content useful and easier to find. Through Crossref, members are distributing their metadata downstream, making it available to numerous systems and organizations that together help credit and cite the work, report impact of funding, track outcomes and activity, and more.\nMembers maintain and update metadata long-term, telling us if content moves to a new website, and they include more information as time goes on. This means that there is a growing chance that content is found, cited, linked to, included in assessment, and used by other researchers.\nParticipation Reports give a clear picture for anyone to see the metadata Crossref has. See for yourself where the gaps are, and what our members could improve upon. Understand best practice through seeing what others are doing, and learn how to level-up.\nThis is Crossref infrastructure. You can’t see infrastructure, yet research—and researchers all over the world—rely on it.\nDownload the content registration factsheet, and explore factsheets for other Crossref services and in different languages.\nHow content registration works To register content with Crossref, you need to be a member. You’ll use one of our content registration methods to give us metadata about your content. Note that you don’t send us the content itself - you create a metadata record that links persistently (via a persistent identifier) to the content on your site or hosting platform. Learn more about metadata, constructing your DOIs, and ways to register your content.\nYou should assign Crossref DOIs to and register content for anything that is likely to be cited in the scholarly literature.\nNo matter whether you register content using one of our helper tools or create your own metadata files, all metadata deposited with Crossref is submitted as XML, and formatted using our metadata deposit schema section. Explore our XML sample files to help you create your own XML.\nWhat types of resources and records can be registered with Crossref? 
We are working to make our input schema more flexible so that almost any type of object can be registered and distributed openly through Crossref. At the moment, members tend to register the following:\nBooks, chapters, and reference works: includes book title and/or chapter-level records. Books can be registered as a monograph, series, or set. Conference proceedings: information about a single conference and records for each conference paper/proceeding. Datasets: includes database records or collections. Dissertations: includes single dissertations and theses, but not collections. Grants: includes both direct funding and other types of support such as the use of equipment and facilities. Journals and articles: at the journal title and article level, and includes supplemental materials as components. Peer reviews: any number of reviews, reports, or comments attached to any other work that has been registered with Crossref. Pending publications: a temporary placeholder record with minimal metadata, often used for embargoed work where a DOI needs to be shared before the full content is made available online. Preprints and posted content: includes preprints, eprints, working papers, reports, and other types of content that has been posted but not formally published. Reports and working papers: this includes content that is published and likely has an ISSN. Standards: includes publications from standards organizations. You can also establish relationships between different research objects (such as preprints, translations, and datasets) in your metadata. Learn more about all the metadata that can be included in these records with our schema library and markup guides.\nObligations and fees for content registration You pay a one-time content registration fee for each content item you register with us. Content registration fees are different for different types of content and sometimes include volume discounts for large batches or backfile material. You don’t pay to update an existing metadata record. It’s an obligation of membership that you maintain your metadata for the long term, including updating any URLs that change. In addition, we warmly encourage you to correct and add to your metadata, and there is no charge for redepositing (updating) existing metadata. Learn more about maintaining your metadata, and managing existing DOIs.\nYour content registration fees are billed quarterly in arrears. This means you’ll usually receive a bill at the beginning of each quarter for the content you registered in the previous quarter. The only exception is if you’ve only registered a small number of DOIs.\nGetting started with content registration Learn more about content registration in our documentation.\n", "headings": ["Content Registration allows members to register and update metadata via machine or human interfaces.","Benefits of content registration ","How content registration works ","What types of resources and records can be registered with Crossref?","Obligations and fees for content registration ","Getting started with content registration "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/crossmark/", "title": "Crossmark", "subtitle":"", "rank": 1, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record. 
Crossmark provides a cross-platform way for readers to quickly discover the status of a research output along with additional metadata related to the editorial process. Crucially, the Crossmark button can also be embedded in PDFs, which means that members have a way of alerting readers to changes months or even years after it’s been downloaded.", "content": " The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record. Crossmark provides a cross-platform way for readers to quickly discover the status of a research output along with additional metadata related to the editorial process. Crucially, the Crossmark button can also be embedded in PDFs, which means that members have a way of alerting readers to changes months or even years after it’s been downloaded.\nResearch doesn’t stand still: even after publication, articles can be updated with supplementary data or corrections. It’s important to know if the content being cited has been updated, corrected, or retracted. Crossmark makes this information more visible to readers. With one click, you can see if content has changed, and access valuable additional metadata provided by the member, such as key publication dates (submission, revision, acceptance), plagiarism screening status, and information about licenses, handling editors, and peer review.\nCrossmark lets readers know when a substantial change affecting the citation or interpretation has occurred, and that the member has updated the metadata record to reflect the new status.\nWatch the introductory Crossmark animation in your language:\nEnglish 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil Benefits of Crossmark Members can report updates to readers and showcase additional metadata. Researchers and librarians can easily see the changes to the content they are reading, which licenses apply to the content, see linked clinical trials, and more. Anyone can access metadata associated with Crossmark through our REST API, providing a myriad of opportunities for integration with other systems and analysis of changes to the scholarly record. How Crossmark works Members place the Crossmark button close to the title of an item on their web pages and in PDFs. They commit to informing us if there is an update such as a correction or retraction, as well as optionally providing additional metadata about editorial procedures and practices.\nWhile members who implement Crossmark provide links to update policies and commit themselves to accurately reporting updates, the presence of Crossmark itself is not a guarantee. However, it allows the community to more easily verify how members are updating their content.\nIf you use Crossmark, the Crossmark button must be applied to all of your new content, not just content that is updated. Selective implementation means that a reader, such as a researcher or librarian, who downloaded a PDF version before the update would have no way to know that it has been updated. We also encourage you to implement Crossmark for backfile content, although doing so is optional. At a minimum, we encourage you to do so for backfile content that has been updated.\nObligations for Crossmark Any member can provide update metadata and register an update policy. If you are a member who implements the Crossmark button, you must:\nMaintain your content and promptly register any updates. 
Include the Crossmark button on all digital formats (HTML, PDF, ePub). Implement Crossmark using the script provided by us. Not alter the Crossmark button in any way other than adjusting its size. Implementing the Crossmark button involves technical changes to your website and production processes. Check that you have the necessary expertise to implement these before you start. If not, you can start to deliver update metadata and implement the Crossmark button at a later point.\nAny organisation can also implement the Crossmark button on pages where they display content. If you do so, you must follow the guidelines above, except for the first point if you are not responsible for the content.\nThere are no additional fees to participate in Crossmark.\nHow to participate in Crossmark There are several steps for members to fully implement Crossmark:\nDevise an update policy, assign it a DOI, and register it with us. Add the update policy and, optionally, other relevant metadata to your metadata records. Publish corrections, retractions, and other updates for works where necessary, and register their metadata. See our guidance on registering updates. Implement the Crossmark button online and in PDFs. Learn more about participating in Crossmark.\nTo see which Crossref members are registering Crossmark information, visit Participation Reports. These reports give a clear picture for anyone to see the metadata Crossref has, including Crossmark data.\nLearn more about version control, corrections, and retractions.\nDownload the Crossmark factsheet, and explore factsheets for other Crossref services and in different languages.\n", "headings": ["The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record.","Benefits of Crossmark ","How Crossmark works ","Obligations for Crossmark ","How to participate in Crossmark "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/crossmark/", "title": "Crossmark", "subtitle":"", "rank": 5, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Find a service", "tags": [], "description": "The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record. Crossmark provides a cross-platform way for readers to quickly discover the status of a research output along with additional metadata related to the editorial process. Crucially, the Crossmark button can also be embedded in PDFs, which means that members have a way of alerting readers to changes months or even years after it’s been downloaded.", "content": " The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record. Crossmark provides a cross-platform way for readers to quickly discover the status of a research output along with additional metadata related to the editorial process. Crucially, the Crossmark button can also be embedded in PDFs, which means that members have a way of alerting readers to changes months or even years after it’s been downloaded.\nResearch doesn’t stand still: even after publication, articles can be updated with supplementary data or corrections. It’s important to know if the content being cited has been updated, corrected, or retracted. Crossmark makes this information more visible to readers. 
With one click, you can see if content has changed, and access valuable additional metadata provided by the member, such as key publication dates (submission, revision, acceptance), plagiarism screening status, and information about licenses, handling editors, and peer review.\nCrossmark lets readers know when a substantial change affecting the citation or interpretation has occurred, and that the member has updated the metadata record to reflect the new status.\nWatch the introductory Crossmark animation in your language:\nEnglish 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil Benefits of Crossmark Members can report updates to readers and showcase additional metadata. Researchers and librarians can easily see the changes to the content they are reading, which licenses apply to the content, see linked clinical trials, and more. Anyone can access metadata associated with Crossmark through our REST API, providing a myriad of opportunities for integration with other systems and analysis of changes to the scholarly record. How Crossmark works Members place the Crossmark button close to the title of an item on their web pages and in PDFs. They commit to informing us if there is an update such as a correction or retraction, as well as optionally providing additional metadata about editorial procedures and practices.\nWhile members who implement Crossmark provide links to update policies and commit themselves to accurately reporting updates, the presence of Crossmark itself is not a guarantee. However, it allows the community to more easily verify how members are updating their content.\nIf you use Crossmark, the Crossmark button must be applied to all of your new content, not just content that is updated. Selective implementation means that a reader, such as a researcher or librarian, who downloaded a PDF version before the update would have no way to know that it has been updated. We also encourage you to implement Crossmark for backfile content, although doing so is optional. At a minimum, we encourage you to do so for backfile content that has been updated.\nObligations for Crossmark Any member can provide update metadata and register an update policy. If you are a member who implements the Crossmark button, you must:\nMaintain your content and promptly register any updates. Include the Crossmark button on all digital formats (HTML, PDF, ePub). Implement Crossmark using the script provided by us. Not alter the Crossmark button in any way other than adjusting its size. Implementing the Crossmark button involves technical changes to your website and production processes. Check that you have the necessary expertise to implement these before you start. If not, you can start to deliver update metadata and implement the Crossmark button at a later point.\nAny organisation can also implement the Crossmark button on pages where they display content. If you do so, you must follow the guidelines above, except for the first point if you are not responsible for the content.\nThere are no additional fees to participate in Crossmark.\nHow to participate in Crossmark There are several steps for members to fully implement Crossmark:\nDevise an update policy, assign it a DOI, and register it with us. Add the update policy and, optionally, other relevant metadata to your metadata records. Publish corrections, retractions, and other updates for works where necessary, and register their metadata. See our guidance on registering updates. 
Implement the Crossmark button online and in PDFs. Learn more about participating in Crossmark.\nTo see which Crossref members are registering Crossmark information, visit Participation Reports. These reports give a clear picture for anyone to see the metadata Crossref has, including Crossmark data.\nLearn more about version control, corrections, and retractions.\nDownload the Crossmark factsheet, and explore factsheets for other Crossref services and in different languages.\nGetting started with Crossmark Learn more about Crossmark in our documentation.\n", "headings": ["The Crossmark button gives readers quick and easy access to the current status of an item of content, including any corrections, retractions, or updates to that record.","Benefits of Crossmark ","How Crossmark works ","Obligations for Crossmark ","How to participate in Crossmark ","Getting started with Crossmark"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/metadata-plus/", "title": "Metadata Plus", "subtitle":"", "rank": 1, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and additional features such as snapshots and priority service/rate limits. To use Metadata Plus, an optional paid-for service, you do not need to be a member. In addition to enhanced access to all our supported APIs (OAI-PMH and REST APIs) and metadata in XML and JSON, Metadata Plus provides you with:\na service level agreement guaranteeing you extra service and support, giving you a consistent and predictable experience additional features such as snapshots and priority service/rate limits.", "content": " Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and additional features such as snapshots and priority service/rate limits. To use Metadata Plus, an optional paid-for service, you do not need to be a member. In addition to enhanced access to all our supported APIs (OAI-PMH and REST APIs) and metadata in XML and JSON, Metadata Plus provides you with:\na service level agreement guaranteeing you extra service and support, giving you a consistent and predictable experience additional features such as snapshots and priority service/rate limits. Metadata Plus users may use either or both of the included API interfaces:\nOAI-PMH: this API retrieves metadata in XML using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) version 2 repository framework REST API: this API searches and filters metadata, is generally RESTful, and returns results in JSON. Learn more in our REST API documentation. Description of Service Feature Defined as Notes Snapshots User-requested full-file downloads.\nData may be requested in JSON and XML via an interface separate from each API.\nPre-generated reports available via machine request and human user interface. Filters based on commonly used data are *under consideration*:\n+ ISSN\n+ Month and year\n+ Publisher\n+ Journal title Priority Service/Rate Limits Plus users will be prioritized via isolated resources.\nMinimum number of queries per second per IP address. Rate limiting of the API is primarily on a per IP basis. If a method allows, for example, for 150 requests per rate limit window, then it allows 150 requests per IP. This number can depend on the system state and may need to change. 
If it does, Crossref will publish it in the response headers. In the exceptional event that a release is not backwards-compatible, Crossref will provide extensive lead time to communicate the changes, as part of our proactive support for Plus users.\nHow Metadata Plus works Start by contacting us about Metadata Plus access. Plus subscribers can create their own API keys to use either or both the OAI-PMH and the REST API. Learn more in our extensive documentation for both the REST API and OAI-PMH interfaces.\nService level agreement (SLA) Crossref will maintain an aggregated, average uptime for all of the interfaces that together comprise the Crossref Metadata Service of 99.5%, reported on a monthly basis. Crossref will provide technical support to Subscriber through Crossref’s existing support channels as requested by Subscriber, and will provide a response within one (1) business day to support requests received during normal working hours in the United States and the United Kingdom. \u0026ldquo;Business days\u0026rdquo; do not include weekends or legal holidays in the United States and the United Kingdom. \u0026ldquo;Response\u0026rdquo; means that support requests will be acknowledged. The time required for resolution will depend upon the nature of the request.\nAgreement and fees for Metadata Plus Learn more about the Metadata Plus service terms, and fees and pricing tiers for Metadata Plus.\n", "headings": ["Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and additional features such as snapshots and priority service/rate limits.","Description of Service ","How Metadata Plus works ","Service level agreement (SLA) ","Agreement and fees for Metadata Plus "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/", "title": "Metadata Plus", "subtitle":"", "rank": 5, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Find a service", "tags": [], "description": "Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and additional features such as snapshots and priority service/rate limits. To use Metadata Plus, an optional paid-for service, you do not need to be a member. In addition to enhanced access to all our supported APIs (OAI-PMH and REST APIs) and metadata in XML and JSON, Metadata Plus provides you with:\na service level agreement guaranteeing you extra service and support, giving you a consistent and predictable experience additional features such as snapshots and priority service/rate limits.", "content": " Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and additional features such as snapshots and priority service/rate limits. To use Metadata Plus, an optional paid-for service, you do not need to be a member. In addition to enhanced access to all our supported APIs (OAI-PMH and REST APIs) and metadata in XML and JSON, Metadata Plus provides you with:\na service level agreement guaranteeing you extra service and support, giving you a consistent and predictable experience additional features such as snapshots and priority service/rate limits. 
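As an aside on the rate limits mentioned above: when the REST API advertises its current limit, it has historically done so in the X-Rate-Limit-Limit and X-Rate-Limit-Interval response headers. The sketch below is an illustration, not part of the service description; it assumes Python with the third-party requests package and uses a placeholder contact address, and it simply reads those headers if they are present.

```python
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={"rows": 0, "mailto": "you@example.org"},  # placeholder contact address
    timeout=30,
)
resp.raise_for_status()

# Both headers are optional; fall back to a note if they are not advertised.
limit = resp.headers.get("X-Rate-Limit-Limit", "not advertised")
interval = resp.headers.get("X-Rate-Limit-Interval", "not advertised")
print(f"allowed requests: {limit} per {interval}")
```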
Metadata Plus users may use either or both of the included API interfaces:\nOAI-PMH: this API retrieves metadata in XML using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) version 2 repository framework REST API: this API searches and filters metadata, is generally RESTFUL, and returns results in JSON. Learn more in our REST API documentation. Description of Service Feature Defined as Notes Snapshots User-requested full-file downloads.\nData may be requested in JSON and XML via an interface separate from each API.\nPre-generated reports available via machine request and human user interface. Filters based on commonly used data are in *under consideration*:\n+ ISSN\n+ Month and year\n+ Publisher\n+ Journal title Priority Service/Rate Limits Plus users will be prioritized via isolated resources.\nMinimum number of queries per second per IP address. Rate limiting of the API is primarily on a per IP basis. If a method allows, for example, for 150 requests per rate limit window, then it allows 150 requests per IP. This number can depend on the system state and may need to change. If it does, Crossref will publish it in the response headers. In the exceptional event that a release is not backwards-compatible, Crossref will provide extensive lead time to communicate the changes, as part of our proactive support for Plus users.\nHow Metadata Plus works Start by contacting us about Metadata Plus access. Plus subscribers can create their own API keys to use either or both the OAI-PMH and the REST API. Learn more in our extensive documentation for both the REST API and OAI-PMH interfaces.\nService level agreement (SLA) Crossref will maintain an aggregated, average uptime for all of the interfaces that together comprise the Crossref Metadata Service of 99.5%, reported on a monthly basis. Crossref will provide technical support to Subscriber through Crossref’s existing support channels as requested by Subscriber, and will provide a response within one (1) business day to support requests received during normal working hours in the United States and the United Kingdom. \u0026ldquo;Business days\u0026rdquo; do not include weekends or legal holidays in the United States and the United Kingdom. \u0026ldquo;Response\u0026rdquo; means that support requests will be acknowledged. 
The time required for resolution will depend upon the nature of the request.\nAgreement and fees for Metadata Plus Learn more about the Metadata Plus service terms, and fees and pricing tiers for Metadata Plus.\nGetting started with Metadata Plus Learn more about Metadata Plus in our documentation.\n", "headings": ["Metadata Plus gives you enhanced access to all our supported APIs, guarantees service levels and support, and additional features such as snapshots and priority service/rate limits.","Description of Service ","How Metadata Plus works ","Service level agreement (SLA) ","Agreement and fees for Metadata Plus ","Getting started with Metadata Plus "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/retrieve-metadata/", "title": "Metadata Retrieval", "subtitle":"", "rank": 1, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Analyse Crossref metadata to inform and understand research\nCrossref is the sustainable source of community-owned scholarly metadata and is relied upon by thousands of systems across the research ecosystem and the globe.\nSome of the typical users (outer) and uses (inner) of Crossref metadata\nShow image\n× People using Crossref metadata need it for all sorts of reasons including metaresearch (researchers studying research itself such as through bibliometric analyses), publishing trends (such as finding works from an individual author or reviewer), or incorporation into specific databases (such as for discovery and search or in subject-specific repositories), and many more detailed use cases.", "content": " Analyse Crossref metadata to inform and understand research\nCrossref is the sustainable source of community-owned scholarly metadata and is relied upon by thousands of systems across the research ecosystem and the globe.\nSome of the typical users (outer) and uses (inner) of Crossref metadata\nShow image\n× People using Crossref metadata need it for all sorts of reasons including metaresearch (researchers studying research itself such as through bibliometric analyses), publishing trends (such as finding works from an individual author or reviewer), or incorporation into specific databases (such as for discovery and search or in subject-specific repositories), and many more detailed use cases.\nAll Crossref metadata is open and available for reuse without restriction. Our 160,104,382 records include information about research objects like articles, grants and awards, preprints, conference papers, book chapters, datasets, and more. The information covers elements like titles, contributors, descriptions, dates, references, connecting identifiers such as Crossref DOIs, ROR IDs and ORCID iDs, together with all sorts of metadata that helps to determine provenance, trust, and reusability\u0026mdash;such as funding, clinical trial, and license information.\nTake a look at a list of some of the organizations who rely on our REST API and read some of the case studies from a selection of users. Download the metadata retrieval fact sheet or read more about the types of metadata and records we have.\nInterfaces for retrieving metadata There are public data files published annually containing the entirety of our metadata corpus. The first public data file was published in 2020, and the most recent public data file is available at Academic Torrents or directly from AWS for a small fee.\nHere is a comparison of the metadata retrieval options. 
Please note that all interfaces include Crossref test prefixes: 10.13003, 10.13039, 10.18810, 10.32013, 10.50505, 10.5555, 10.88888.\nFeature / option Metadata Search Simple Text Query REST API XML API OAI-PMH OpenURL Public data files Metadata Plus (OAI-PMH + REST API) Interface for people or machines? People People People (low volume and occasional use) and machines Machines Machines Machines Machines Machines Output format Text, JSON Text JSON XML XML XML json.tar.gz JSON, XML Suitable for citation matching? Yes (low volume) Yes Yes Yes No No Yes, locally Yes Supports volume downloads? No No Yes No Yes No Yes, exclusively Yes Suitable for usage type Frequent and occasional Frequent and occasional Frequent and occasional Frequent Frequent Frequent Occasional Frequent and occasional Free or cost? Free Free Free and cost options Free and cost options Cost for full service, more options available Free Free Cost Includes all available metadata? In JSON only DOIs only Yes Yes Yes Bibliographic only Yes Yes Documentation Metadata Search Simple Text Query REST API XML API OAI-PMH OpenURL Tips for working with Crossref public data files and Plus snapshots Metadata Plus (OAI-PMH + REST API) If you’d like to share a case study for how you use Crossref metadata, and be featured on our blog, please contact us.\nUsing content negotiation The APIs listed here provide metadata in a variety of representations (also known as output formats). If you want to access our metadata in a particular representation (for example, RDF, BibTex, XML, CSL), you can use content negotiation to retrieve the metadata for a DOI in the representation you want. Content negotiation is supported by a number of DOI registration agencies including Crossref, DataCite, and mEDRA.\nObligations and fees for metadata retrieval It is important that members understand that metadata is used by other software and services in the Crossref community. We encourage members to submit as much metadata as possible so that our APIs can include and deliver rich contextual information about their content.\nIf you’re using the public REST API, it is optional but encouraged to include your email address in header requests as this puts your query into the \u0026ldquo;polite\u0026rdquo; pool which has priority processing. Learn more about our REST API etiquette.\nAll of our metadata is freely available, but there is a fee for our premium Metadata Plus service.\nCrossref generally provides metadata without restriction; however, some abstracts contained in the metadata may be subject to copyright by publishers or authors.\nHow to participate - interfaces for people Crossref provides a number of user interfaces to access Crossref metadata. Some are general-purpose, and others are more specialized.\nService name Description Metadata Search Metadata Search is our primary user interface for searching and filtering of our metadata. It can be used to look up the DOI for a reference or a partial reference or a set of references, to look up metadata for a content item, submit a query on an author’s name, or find retractions registered with us. It can also be used to search and filter a number of elements, including funding data, ISSN, ORCID iDs, and more. Simple Text Query Simple Text Query is a tool designed to allow anyone to look up DOIs for multiple references. As such it’s particularly useful for members who want to link their references. Members can even use this tool to add linked references to their metadata. 
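As noted under "Using content negotiation" above, any HTTP client can request a specific representation of a DOI's metadata via the Accept header. A minimal sketch, assuming the Python requests library; the media types shown (CSL JSON and BibTeX) are common examples rather than an exhaustive list, and the DOI is the one used in the reference linking example later in this document.

# Minimal sketch of DOI content negotiation: ask doi.org for a specific
# representation of a record via the Accept header. See the content negotiation
# documentation for the media types each registration agency supports.
import requests

doi = "10.7774/cevr.2016.5.1.19"  # example DOI from the reference linking pages below

for accept in ("application/vnd.citationstyles.csl+json", "application/x-bibtex"):
    r = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": accept},
        timeout=30,
        allow_redirects=True,  # negotiation redirects to the agency's metadata service
    )
    r.raise_for_status()
    print(accept, "->", r.text[:80], "...")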
How to participate - APIs for machines We have a number of APIs for accessing metadata. There is one general-purpose API and several specialized ones. The specialized APIs are designed for our members so that they can manage their metadata or they are APIs based on standards that are popular in the community.\nAPI name Description REST API The REST API outputs in JSON and enables sophisticated, flexible machine and programmatic access to search and filter our metadata. It can be used, for example, to look up the metadata for a content item or submit a query on an author’s name or find retractions registered with us. It also allows users to search and filter on a number of elements, including a funder, or all content items with ORCID iDs. The REST API is open to all and it is included in the Metadata Plus service. OpenURL This API lets you look up a Crossref DOI for a reference, using a standard that is popular in the library community, and particularly with link resolver services. OAI-PMH This API outputs in XML and uses a standard popular in the library community to harvest metadata. The OAI-PMH API is optimized to return a list of results matching the query parameters (such as publication year). The OAI-PMH API is included in the Metadata Plus service. XML API The XML API supports XML-formatted querying. The XML API is optimized to return the best fit DOI based on the metadata supplied in the query. Public data files While the public data files are not an API, they are freely available bulk downloads of the full Crossref metadata corpus, published annually. It can be downloaded via Academic Torrents, or directly from AWS for a small fee. Show image × Download the metadata retrieval factsheet, and explore factsheets for other Crossref services and in different languages.\nLooking up metadata and identifiers We support a range of tools and APIs to help you get metadata (and identifiers) out of our system. Some query interfaces will return only one match, and only if fairly strict requirements are met. These interfaces may be used to populate citations with persistent identifiers. Other interfaces will return a range of results and may be used to retrieve a variety of metadata records or match metadata when metadata, DOIs, or other identifiers (such as ORCID iD, ISSN, ISBN, funder identifier) are provided.\nUser interfaces Metadata Search - any results containing the entered search terms will be returned. Search by full citation, title (or fragments of a title), authors, ISSN, ORCID, DOI (to retrieve metadata) and more. Simple Text Query - cut-and-paste your reference list into the form and retrieve exact DOI matches. APIs REST API - a RESTful API that supports a wide range of facets and filters. By default, results are returned in JSON, and returning results in XML is an option. This API is currently publicly available (no account or token required), but there is a paid Metadata Plus service available on a token for those who require guaranteed service levels XML API - the XML API will return a DOI that best fits the metadata supplied in the query. This API is suitable for automated population of citations with DOIs as the results are accurate and do not need evaluation. This API is available to members, or by supplying an email address. OpenURL - used mostly by libraries but also available to members, or by providing an email address. Learn more about OpenURL access. 
OAI-PMH - as well as a free public list option, we provide a subscription-only OAI-PMH interface that may be used to retrieve sets of metadata records (subscribers only) GetResolvedRefs - retrieve DOIs matched with deposited references (members only) Deposit harvester - retrieve DOIs and metadata for a given member (members only). ", "headings": ["Interfaces for retrieving metadata ","Using content negotiation ","Obligations and fees for metadata retrieval ","How to participate - interfaces for people ","How to participate - APIs for machines ","Looking up metadata and identifiers ","User interfaces ","APIs "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/funder-registry/", "title": "Open Funder Registry", "subtitle":"", "rank": 1, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "The Open Funder Registry (OFR, formerly FundRef) and associated funding metadata allows everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world. It is good practice for authors to acknowledge support for and contributions to their research in their published articles. This support may be financial, such as a grant or salary award; or practical, such as the use or loan of specialist facilities and equipment.", "content": " The Open Funder Registry (OFR, formerly FundRef) and associated funding metadata allows everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world. It is good practice for authors to acknowledge support for and contributions to their research in their published articles. This support may be financial, such as a grant or salary award; or practical, such as the use or loan of specialist facilities and equipment. They do this by listing the funding agency and the grant number somewhere in their article - usually the first or last page, or in the acknowledgments or footnotes section. Members contribute by depositing the funding acknowledgements from their publications as part of their standard metadata, together with the unique funder IDs listed in the OFR. The deposit should include funder names, funder IDs, and associated grant numbers.\nThis means that anyone can make connections, for example, to identify which funders invest in certain fields of research. Funding data is also used by funders to track the publications that result from their grants.\nShow image × The Crossref OFR is an open registry of grant-giving organization names and identifiers, which you use to find funder IDs and include them as part of your metadata deposits. It is a freely-downloadable RDF file. It is CC0-licensed and available to integrate with your own systems. Funder names from acknowledgements should be matched with the corresponding unique funder ID from the registry.\nYou can search funding metadata manually using our funding data search, or programmatically via our REST API. 
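For the programmatic route mentioned above, here is a minimal sketch using the public REST API's /funders route and the funder filter, assuming the Python requests library; the query text and row counts are illustrative only, and candidate funder matches should be checked by a person.

# Minimal sketch: look up candidate funder IDs in the registry, then list works
# whose Crossref metadata acknowledges that funder.
import requests

BASE = "https://api.crossref.org"
polite = {"mailto": "you@example.org"}  # identifies the caller for the polite pool

# 1. Find candidate funder IDs by name.
funders = requests.get(f"{BASE}/funders",
                       params={"query": "national science foundation", **polite},
                       timeout=30).json()["message"]["items"]
funder_id = funders[0]["id"]  # verify this is the intended funder before using it

# 2. List works registered with that funder in their funding metadata.
works = requests.get(f"{BASE}/works",
                     params={"filter": f"funder:{funder_id}", "rows": 5, **polite},
                     timeout=30).json()["message"]["items"]
for w in works:
    print(w.get("DOI"), w.get("title", [""])[0])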
This data not only clarifies the scholarly record, but makes life easier for researchers who may need to comply with requirements to make their published results publicly available.\nWatch the introductory Open Funder Registry animation in your language: English 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil Benefits of the Open Funder Registry There are many benefits of clear, transparent, and measurable information on who funded research, and where it has been published. The OFR facilitates accurate funding metadata, which in turn enables multiple parties to better understand the research funding landscape:\nReaders and researchers can read and assess literature in the context of knowing who funded it; Research institutions can monitor the published outputs of their researchers; Publishers can track who is funding their authors, and check if they’re meeting funding mandates; Service providers can offer integrated time-saving features to their users; and Funders can easily track the reach and return of the work they have supported. How the Open Funder Registry works Authors acknowledge the funding sources for their research in their publications. Using the registry, members can find the unique IDs for these funders, standardize this metadata and send it to us.\nThe registry is donated by Elsevier, and is updated around every 4-6 weeks with new and updated funder records. Existing entries are also reviewed to make sure that they are accurate and up-to-date. We can then make it openly available through our funding data search and our API. If you spot anything that doesn’t look right, please let us know. You can also download a .csv file of the latest registry. Using the OFR, members can find the unique IDs for these funders, standardize this metadata to send it to us.\nObligations and fees for the Open Funder Registry The OFR is open to everyone. There are no fees for members depositing funding data. Open Funder Registry search and our API are also freely available.\nMembers must include the OFR ID for each funder if it is present in the Registry. If a funder is not in the Registry and does not have an ID, include the name of the funder.\nHow to participate in the Open Funder Registry To access the OFR, you do not need to be a member, but you need to be a member to include OFR iDs in your Crossref metadata. Anyone who’s interested can simply enter an organization’s name into the Open Funder Registry search to view content connected to funding sources. The metadata in the registry is also openly available via our API, and as a downloadable RDF file. Learn more about accessing the OFR.\nDepositing metadata (members): collect funder names and grant numbers from your authors through your manuscript tracking system (or extract them from acknowledgements sections) and match them with the corresponding Funder IDs from the registry. Once this is done, it’s easy to add these three additional pieces of metadata - funder name, funder id, and grant number - as additional metadata in the regular Crossref content registration service. Learn more about how to collect and register funding data.\nWhenever you register content with us, make sure you include funder names and grant numbers in the submission:\nIf you are using a content registration helper tool - Crossref XML plugin for OJS or, the web deposit form - simply enter funder names and grant numbers in the relevant fields. For OJS, you must be running at least OJS 3.1.2 and have the Crossref funding plugin enabled. 
If you’re depositing XML with Crossref, include your funding data in your XML. Retrieving metadata: you can view the content that has cited a particular funding source by entering the organization’s name into the Open Funder Registry search. If you prefer a machine-readable query, use our REST API. If you have questions about how your organization appears in the registry then please get in touch. Learn more about the OFR and our other services on our funder community page.\nShow image × Download the Open Funder Registry factsheet, and explore factsheets for other Crossref services and in different languages.\n", "headings": ["The Open Funder Registry (OFR, formerly FundRef) and associated funding metadata allows everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world.","Benefits of the Open Funder Registry ","How the Open Funder Registry works ","Obligations and fees for the Open Funder Registry ","How to participate in the Open Funder Registry "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/funder-registry/", "title": "Open Funder Registry (OFR)", "subtitle":"", "rank": 5, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Find a service", "tags": [], "description": "The Open Funder Registry (OFR, formerly FundRef) and associated funding metadata allows everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world. It is good practice for authors to acknowledge support for and contributions to their research in their published articles. This support may be financial, such as a grant or salary award; or practical, such as the use or loan of specialist facilities and equipment.", "content": " The Open Funder Registry (OFR, formerly FundRef) and associated funding metadata allows everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world. It is good practice for authors to acknowledge support for and contributions to their research in their published articles. This support may be financial, such as a grant or salary award; or practical, such as the use or loan of specialist facilities and equipment. They do this by listing the funding agency and the grant number somewhere in their article - usually the first or last page, or in the acknowledgments or footnotes section. Members contribute by depositing the funding acknowledgements from their publications as part of their standard metadata, together with the unique funder IDs listed in the OFR. The deposit should include funder names, funder IDs, and associated grant numbers.\nThis means that anyone can make connections, for example, to identify which funders invest in certain fields of research. Funding data is also used by funders to track the publications that result from their grants.\nShow image × The Crossref OFR is an open registry of grant-giving organization names and identifiers, which you use to find funder IDs and include them as part of your metadata deposits. It is a freely-downloadable RDF file. It is CC0-licensed and available to integrate with your own systems. 
Funder names from acknowledgements should be matched with the corresponding unique funder ID from the registry.\nYou can search funding metadata manually using our funding data search, or programmatically via our REST API. This data not only clarifies the scholarly record, but makes life easier for researchers who may need to comply with requirements to make their published results publicly available.\nWatch the introductory Open Funder Registry animation in your language: English 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil Benefits of the Open Funder Registry There are many benefits of clear, transparent, and measurable information on who funded research, and where it has been published. The OFR facilitates accurate funding metadata, which in turn enables multiple parties to better understand the research funding landscape:\nReaders and researchers can read and assess literature in the context of knowing who funded it; Research institutions can monitor the published outputs of their researchers; Publishers can track who is funding their authors, and check if they’re meeting funding mandates; Service providers can offer integrated time-saving features to their users; and Funders can easily track the reach and return of the work they have supported. How the Open Funder Registry works Authors acknowledge the funding sources for their research in their publications. Using the registry, members can find the unique IDs for these funders, standardize this metadata and send it to us.\nThe registry is donated by Elsevier, and is updated around every 4-6 weeks with new and updated funder records. Existing entries are also reviewed to make sure that they are accurate and up-to-date. We can then make it openly available through our funding data search and our API. If you spot anything that doesn’t look right, please let us know. You can also download a .csv file of the latest registry. Using the OFR, members can find the unique IDs for these funders, standardize this metadata to send it to us.\nObligations and fees for the Open Funder Registry The OFR is open to everyone. There are no fees for members depositing funding data. Open Funder Registry search and our API are also freely available.\nMembers must include the OFR ID for each funder if it is present in the Registry. If a funder is not in the Registry and does not have an ID, include the name of the funder.\nHow to participate in the Open Funder Registry To access the OFR, you do not need to be a member, but you need to be a member to include OFR iDs in your Crossref metadata. Anyone who’s interested can simply enter an organization’s name into the Open Funder Registry search to view content connected to funding sources. The metadata in the registry is also openly available via our API, and as a downloadable RDF file. Learn more about accessing the OFR.\nDepositing metadata (members): collect funder names and grant numbers from your authors through your manuscript tracking system (or extract them from acknowledgements sections) and match them with the corresponding Funder IDs from the registry. Once this is done, it’s easy to add these three additional pieces of metadata - funder name, funder id, and grant number - as additional metadata in the regular Crossref content registration service. 
Learn more about how to collect and register funding data.\nWhenever you register content with us, make sure you include funder names and grant numbers in the submission:\nIf you are using a content registration helper tool - Crossref XML plugin for OJS, or the web deposit form - simply enter funder names and grant numbers in the relevant fields. For OJS, you must be running at least OJS 3.1.2 and have the Crossref funding plugin enabled. If you’re depositing XML with Crossref, include your funding data in your XML. Retrieving metadata: you can view the content that has cited a particular funding source by entering the organization’s name into the Open Funder Registry search. If you prefer a machine-readable query, use our REST API. If you have questions about how your organization appears in the registry then please get in touch. Learn more about the OFR and our other services on our funder community page.\nShow image × Download the Open Funder Registry factsheet, and explore factsheets for other Crossref services and in different languages.\nGetting started with Open Funder Registry (OFR) Learn more about the Open Funder Registry (OFR) in our documentation.\n", "headings": ["The Open Funder Registry (OFR, formerly FundRef) and associated funding metadata allows everyone to have transparency into research funding and its outcomes. It’s an open and unique registry of persistent identifiers for grant-giving organizations around the world.","Benefits of the Open Funder Registry ","How the Open Funder Registry works ","Obligations and fees for the Open Funder Registry ","How to participate in the Open Funder Registry ","Getting started with Open Funder Registry (OFR) "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/reference-linking/", "title": "Reference Linking", "subtitle":"", "rank": 1, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things. To link references, you don’t need to be a Crossref member. Reference linking means including Crossref DOIs (displayed as URLs) in the reference lists that you provide in your own published work. This enables researchers to follow a link from a reference list to the current landing page for that referenced work.", "content": " Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things. To link references, you don’t need to be a Crossref member. Reference linking means including Crossref DOIs (displayed as URLs) in the reference lists that you provide in your own published work. This enables researchers to follow a link from a reference list to the current landing page for that referenced work. And because it’s a DOI rather than just a link, it will remain persistent.\nSo, instead of just including the reference\u0026hellip;\nSoleimani N, Mohabati Mobarez A, Farhangi B. Cloning, expression and purification flagellar sheath adhesion of Helicobacter pylori in Escherichia coli host as a vaccination target. Clin Exp Vaccine Res. 2016 Jan;5(1):19-25.\n\u0026hellip;you should also display the DOI link:\nSoleimani N, Mohabati Mobarez A, Farhangi B. Cloning, expression and purification flagellar sheath adhesion of Helicobacter pylori in Escherichia coli host as a vaccination target. 
Clin Exp Vaccine Res. 2016 Jan;5(1):19-25. https://0-doi-org.libus.csd.mu.edu/10.7774/cevr.2016.5.1.19\nBecause Crossref is all about rallying the scholarly community to work together, reference linking is an obligation for all Crossref members and for all \u0026lsquo;current\u0026rsquo; resources (published during this and the two previous years). It is also encouraged for backfile resources (published longer ago than current resources).\nWatch the introductory reference linking animation in your language:\nBenefits of reference linking Persistent links enhance scholarly communications. Reference linking offers important benefits:\nReciprocity: members’ records are linked together and more discoverable because all members link their references. As a member organization, we can obligate all our members to link their references, so that individual members can avoid the inconvenience of signing bilateral agreements to link to persistent resources on other platforms. The result is a scholarly communications infrastructure that enables the exchange of ideas and knowledge. Discoverability: research travels further when everyone links their references. Because DOIs don’t break if implemented correctly, they will always lead readers to the resource they’re looking for, including yours. When the DOIs are displayed, anyone can copy and share them. This will also enable better tracking of where and when people are talking about and sharing scholarly objects, including in social media. Obligations and fees for reference linking There’s no charge for reference linking but it is an obligation of membership. Reference linking is required for all Crossref members and for all current resources. We’d encourage you to add reference linking for backfile records too.\nShow image × Download the reference linking factsheet, and explore factsheets for other Crossref services and in different languages.\nHow to participate in reference linking Note that reference linking is not the same as registering references - learn more about the differences.\nTo link references, you do not need to be a member, but reference linking is an obligation for Crossref members. When your organization becomes a Crossref member, look up the DOIs for your references, and add the DOI (as a URL) to reference lists for your records.\nBest practice for reference linking Start reference linking within 18 months of joining Crossref Link references for backfile as well as current resources Link references in all relevant resource types such as preprints, books, data, conference proceedings, etc. Make sure the links in your references and other platforms conform to our DOI display guidelines ", "headings": ["Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things.","Benefits of reference linking ","Obligations and fees for reference linking ","How to participate in reference linking ","Best practice for reference linking "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/reference-linking/", "title": "Reference Linking", "subtitle":"", "rank": 5, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Find a service", "tags": [], "description": "Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things. To link references, you don’t need to be a Crossref member. 
Reference linking means including Crossref DOIs (displayed as URLs) in the reference lists that you provide in your own published work. This enables researchers to follow a link from a reference list to the current landing page for that referenced work.", "content": " Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things. To link references, you don’t need to be a Crossref member. Reference linking means including Crossref DOIs (displayed as URLs) in the reference lists that you provide in your own published work. This enables researchers to follow a link from a reference list to the current landing page for that referenced work. And because it’s a DOI rather than just a link, it will remain persistent.\nSo, instead of just including the reference\u0026hellip;\nSoleimani N, Mohabati Mobarez A, Farhangi B. Cloning, expression and purification flagellar sheath adhesion of Helicobacter pylori in Escherichia coli host as a vaccination target. Clin Exp Vaccine Res. 2016 Jan;5(1):19-25.\n\u0026hellip;you should also display the DOI link:\nSoleimani N, Mohabati Mobarez A, Farhangi B. Cloning, expression and purification flagellar sheath adhesion of Helicobacter pylori in Escherichia coli host as a vaccination target. Clin Exp Vaccine Res. 2016 Jan;5(1):19-25. https://0-doi-org.libus.csd.mu.edu/10.7774/cevr.2016.5.1.19\nBecause Crossref is all about rallying the scholarly community to work together, reference linking is an obligation for all Crossref members and for all \u0026lsquo;current\u0026rsquo; resources (published during this and the two previous years). It is also encouraged for for backfile resources (published longer ago than current resources).\nWatch the introductory reference linking animation in your language:\nBenefits of reference linking Persistent links enhance scholarly communications. Reference linking offers important benefits:\nReciprocity: members’ records are linked together and more discoverable because all members link their references. As a member organization, we can obligate all our members to link their references, so that individual members can avoid the inconvenience of signing bilateral agreements to link to persistent resources on other platforms. The result is a scholarly communications infrastructure that enables the exchange of ideas and knowledge. Discoverability: research travels further when everyone links their references. Because DOIs don’t break if implemented correctly, they will always lead readers to the resource they’re looking for, including yours. When the DOIs are displayed, anyone can copy and share them. This will also enable better tracking of where and when people are talking about and sharing scholarly objects, including in social media. Obligations and fees for reference linking There’s no charge for reference linking but it is an obligation of membership. Reference linking is required for all Crossref members and for all current resources. We’d encourage you to also add reference linking for backfile records too.\nShow image × Download the reference linking factsheet, and explore factsheets for other Crossref services and in different languages.\nHow to participate in reference linking Note that reference linking is not the same as registering references - learn more about the differences.\nTo link references, you do not need to be a member, but reference linking is an obligation for Crossref members. 
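Members look up DOIs for their references and add them as URLs, as the participation steps here describe. One lightweight way to suggest candidate DOIs programmatically is the public REST API's bibliographic query; the following is a rough sketch only, assuming the Python requests library, and any candidate match should be checked before it is added to a reference list (the Simple Text Query form and the XML API remain the usual member routes).

# Illustrative sketch: suggest a candidate DOI for a plain-text reference using
# the public REST API's bibliographic query. Matches are scored suggestions,
# not authoritative links, and need human verification.
import requests

reference = ("Soleimani N, Mohabati Mobarez A, Farhangi B. Cloning, expression and "
             "purification flagellar sheath adhesion of Helicobacter pylori in "
             "Escherichia coli host as a vaccination target. "
             "Clin Exp Vaccine Res. 2016 Jan;5(1):19-25.")

r = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": reference, "rows": 1, "mailto": "you@example.org"},
    timeout=30,
)
r.raise_for_status()
top = r.json()["message"]["items"][0]
print(top["DOI"], top.get("score"))  # verify, then display as https://doi.org/<DOI>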
When your organization becomes a Crossref member, look up the DOIs for your references, and add the DOI (as a URL) to reference lists for your records.\nBest practice for reference linking Start reference linking within 18 months of joining Crossref Link references for backfile as well as current resources Link references in all relevant resource types such as preprints, books, data, conference proceedings, etc. Make sure the links in your references and other platforms conform to our DOI display guidelines Getting started with reference linking See how you can find other members DOIs for your reference list in our documentation.\nCrossref members can look up the DOIs for their references, and add the links to their articles\u0026rsquo; reference lists. Our website provides a simple text tool for manual, low volume querying, and a form for uploading a small number of reference lists as .txt files to find their DOIs (if available). However, the preferred method for most members is via XML API for individual or batch query requests.\n", "headings": ["Reference linking enables researchers to follow a link from the reference list to other full-text documents, helping them to make connections and discover new things.","Benefits of reference linking ","Obligations and fees for reference linking ","How to participate in reference linking ","Best practice for reference linking ","Getting started with reference linking "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/documentation/similarity-check/", "title": "Similarity Check", "subtitle":"", "rank": 1, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Documentation", "tags": [], "description": "A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism. Our Similarity Check service helps Crossref members prevent scholarly and professional plagiarism by providing immediate feedback regarding a manuscript’s similarity to other published academic and general web content, through reduced-rate access to the iThenticate text comparison software from Turnitin.\nOnly Similarity Check members benefit from this tailored iThenticate experience that includes read-only access to the full text of articles in the Similarity Check database for comparison purposes, discounted checking fees, and unlimited user accounts per organization.", "content": " A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism. Our Similarity Check service helps Crossref members prevent scholarly and professional plagiarism by providing immediate feedback regarding a manuscript’s similarity to other published academic and general web content, through reduced-rate access to the iThenticate text comparison software from Turnitin.\nOnly Similarity Check members benefit from this tailored iThenticate experience that includes read-only access to the full text of articles in the Similarity Check database for comparison purposes, discounted checking fees, and unlimited user accounts per organization.\nWatch the introductory Similarity Check animation in your language:\nEnglish 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil With editors under increased pressure to assess higher volumes of manuscript submissions each year, it’s important to find a fast, cost-effective solution that can be embedded into your publishing workflows. 
Similarity Check allows editors to upload a paper, and instantly produces a report highlighting potential matches and indicating if and how the paper overlaps with other work. This report enables editors to assess the originality of the work before they publish it, providing confidence for publishers and authors, and evidence of trust for readers. And as the iThenticate database contains over 78 million full-text scholarly content items, editors can be confident that Similarity Check will provide a comprehensive and reliable addition to their workflow.\nMaking sure only original research is published provides:\npeace of mind for publishers and authors that their content is identified and protected, a way for editors to educate their authors and ensure the reputation of their publication, and clarity for readers around who produced the work. Benefits of Similarity Check Similarity Check participants enjoy use of iThenticate at reduced cost because they contribute their own published content into Turnitin’s database of full-text literature. This means that as the number of participants grows, so too does the size of the database powering the service. More content in the database means greater peace of mind for editors looking to determine a manuscript’s originality.\nIf you participate in Similarity Check, not only do you get reduced rate access to iThenticate, but you also have the peace of mind of knowing that any similarity between your published content and manuscripts checked by other publishers will be flagged as a potential issue too.\nAs a Similarity Check user, you also see extra features in iThenticate, such as enhanced text-matches within the Document Viewer.\nHow the Similarity Check service works To participate in Similarity Check, you need to be a member. Similarity Check subscribers allow Turnitin to index their full catalogue of current and archival published content into the iThenticate database. This means that the service is only available to members who are actively publishing DOI-assigned content and including in their metadata full-text URLs specifically for Similarity Check.\nTurnitin indexes members’ content directly via its Content Intake System (CIS). Its CIS accesses our metadata daily to collect the full-text content links provided by our members within their metadata. Turnitin follows these URLs and indexes the content found at each location.\nWhen you apply for the Similarity Check service, Turnitin will check that they can access your existing content via the full-text URLs in your Crossref metadata. Once confirmed, you’ll be provided with access to the iThenticate tool where you will be able to submit manuscripts to compare against the corpus of published academic and general web content in Turnitin’s database. You can do this in the iThenticate tool, or through your manuscript submission system using an API. iThenticate provides a Similarity Report containing a Similarity Score and a highlighted set of matches to similar text. Editors can then further review matches in order to make their own decision regarding a manuscript’s originality.\nShow image × Download the Similarity Check factsheet, and explore factsheets for other Crossref services and in different languages.\nFees for Similarity Check Similarity Check fees are in two parts: an annual service fee, and a per-document checking fee.\nThe annual service fee is 20% of your Crossref annual membership fee and is included in the renewal invoices you receive each January. 
When you first join Similarity Check, you’ll receive a prorated invoice for the remainder of that calendar year.\nPer-document checking fees are also paid annually in January. Volume discounts apply, and your first 100 documents are free of charge.\nSimilarity Service status page Check the Turnitin service status page for real-time updates on system performance and any ongoing issues that may impact your Similarity Check service.\nUpdate 2024: We are no longer able to offer the Similarity Check service to members based in Russia. Find out more.\n", "headings": ["A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism.","Benefits of Similarity Check ","How the Similarity Check service works ","Fees for Similarity Check ","Similarity Service status page"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/similarity-check/", "title": "Similarity Check", "subtitle":"", "rank": 9, "lastmod": "2020-04-08", "lastmod_ts": 1586304000, "section": "Find a service", "tags": [], "description": "A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism. Our Similarity Check service helps Crossref members prevent scholarly and professional plagiarism by providing immediate feedback regarding a manuscript’s similarity to other published academic and general web content, through reduced-rate access to the iThenticate text comparison software from Turnitin.\nOnly Similarity Check members benefit from this tailored iThenticate experience that includes read-only access to the full text of articles in the Similarity Check database for comparison purposes, discounted checking fees, and unlimited user accounts per organization.", "content": " A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism. Our Similarity Check service helps Crossref members prevent scholarly and professional plagiarism by providing immediate feedback regarding a manuscript’s similarity to other published academic and general web content, through reduced-rate access to the iThenticate text comparison software from Turnitin.\nOnly Similarity Check members benefit from this tailored iThenticate experience that includes read-only access to the full text of articles in the Similarity Check database for comparison purposes, discounted checking fees, and unlimited user accounts per organization.\nWatch the introductory Similarity Check animation in your language:\nEnglish 한국어 Japanese Chinese Español Français Bahasa Indonesia العربية Português do Brasil With editors under increased pressure to assess higher volumes of manuscript submissions each year, it’s important to find a fast, cost-effective solution that can be embedded into your publishing workflows. Similarity Check allows editors to upload a paper, and instantly produces a report highlighting potential matches and indicating if and how the paper overlaps with other work. This report enables editors to assess the originality of the work before they publish it, providing confidence for publishers and authors, and evidence of trust for readers. 
And as the iThenticate database contains over 78 million full-text scholarly content items, editors can be confident that Similarity Check will provide a comprehensive and reliable addition to their workflow.\nMaking sure only original research is published provides:\npeace of mind for publishers and authors that their content is identified and protected, a way for editors to educate their authors and ensure the reputation of their publication, and clarity for readers around who produced the work. Benefits of Similarity Check Similarity Check participants enjoy use of iThenticate at reduced cost because they contribute their own published content into Turnitin’s database of full-text literature. This means that as the number of participants grows, so too does the size of the database powering the service. More content in the database means greater peace of mind for editors looking to determine a manuscript’s originality.\nIf you participate in Similarity Check, not only do you get reduced rate access to iThenticate, but you also have the peace of mind of knowing that any similarity between your published content and manuscripts checked by other publishers will be flagged as a potential issue too.\nAs a Similarity Check user, you also see extra features in iThenticate, such as enhanced text-matches within the Document Viewer.\nHow the Similarity Check service works To participate in Similarity Check, you need to be a member. Similarity Check subscribers allow Turnitin to index their full catalogue of current and archival published content into the iThenticate database. This means that the service is only available to members who are actively publishing DOI-assigned content and including in their metadata full-text URLs specifically for Similarity Check.\nTurnitin indexes members’ content directly via its Content Intake System (CIS). Its CIS accesses our metadata daily to collect the full-text content links provided by our members within their metadata. Turnitin follows these URLs and indexes the content found at each location.\nWhen you apply for the Similarity Check service, Turnitin will check that they can access your existing content via the full-text URLs in your Crossref metadata. Once confirmed, you’ll be provided with access to the iThenticate tool where you will be able to submit manuscripts to compare against the corpus of published academic and general web content in Turnitin’s database. You can do this in the iThenticate tool, or through your manuscript submission system using an API. iThenticate provides a Similarity Report containing a Similarity Score and a highlighted set of matches to similar text. Editors can then further review matches in order to make their own decision regarding a manuscript’s originality.\nShow image × Download the Similarity Check factsheet, and explore factsheets for other Crossref services and in different languages.\nFees for Similarity Check Similarity Check fees are in two parts: an annual service fee, and a per-document checking fee.\nThe annual service fee is 20% of your Crossref annual membership fee and is included in the renewal invoices you receive each January. When you first join Similarity Check, you’ll receive a prorated invoice for the remainder of that calendar year.\nPer-document checking fees are also paid annually in January. 
Volume discounts apply, and your first 100 documents are free of charge.\nSimilarity Service status page Check the Turnitin service status page for real-time updates on system performance and any ongoing issues that may impact your Similarity Check service.\nGetting started with Similarity Check Learn more about Similarity Check in our documentation.\nUpdate 2024: We are no longer able to offer the Similarity Check service to members based in Russia. Find out more.\n", "headings": ["A service provided by Crossref and powered by iThenticate—Similarity Check provides editors with a user-friendly tool to help detect plagiarism.","Benefits of Similarity Check ","How the Similarity Check service works ","Fees for Similarity Check ","Similarity Service status page","Getting started with Similarity Check "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/youve-had-your-say-now-what-next-steps-for-schema-changes/", "title": "You’ve had your say, now what? Next steps for schema changes", "subtitle":"", "rank": 1, "lastmod": "2020-04-02", "lastmod_ts": 1585785600, "section": "Blog", "tags": [], "description": "It seems like ages ago, particularly given recent events, but we had our first public request for feedback on proposed schema updates in December and January. The feedback we received indicated two big things: we’re on the right track, and you want us to go further. This update has some significant but important changes to contributors, but is otherwise a fairly moderate update. The feedback was mostly supportive, with a fair number of helpful suggestions about details.", "content": "It seems like ages ago, particularly given recent events, but we had our first public request for feedback on proposed schema updates in December and January. The feedback we received indicated two big things: we’re on the right track, and you want us to go further. This update has some significant but important changes to contributors, but is otherwise a fairly moderate update. The feedback was mostly supportive, with a fair number of helpful suggestions about details.\nFeedback and changes Many of you are excited about CRediT, and a number of members have indicated that they are ready and waiting to send us CRediT roles. To support this, as in my initial proposal, we’re adding a new role element and role_type attribute that supports existing Crossref-defined roles and CRediT roles, as well as a required vocab attribute to specify which vocabulary is being supplied.\n\u0026lt;role role_type=\u0026quot;author\u0026quot; vocab=\u0026quot;crossref\u0026quot;\u0026gt;author\u0026lt;/role\u0026gt; \u0026lt;role role_type=\u0026quot;writing-original_draft\u0026quot; vocab=\u0026quot;credit\u0026quot;/\u0026gt;\nCRediT as it exists now is an informal standard coordinated by CASRAI, but a formal standard is in the works via NISO. CRediT is currently a list of well considered and defined roles that are not particularly machine-readable. I’ve created a list for implementation that eliminates spaces and ampersands. CRediT also lacks reliable PIDs or persistent URLs for the role definitions, so that has been omitted from our implementation. 
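A rough illustration of the kind of normalization described above (removing spaces and ampersands from CRediT role labels) follows; the actual machine-readable vocabulary is defined in the draft 5.0 schema in GitLab, so this Python mapping is only a sketch matching the single example value shown above.

# Sketch of normalizing a CRediT label into a machine-readable role value,
# in the style of the example "writing-original_draft". Illustrative only.
def normalize_credit_role(label: str) -> str:
    label = label.strip().lower()
    label = label.replace("&", "and")          # eliminate ampersands
    label = label.replace(" – ", "-").replace(" - ", "-")  # keep the dash as a hyphen
    return label.replace(" ", "_")             # eliminate remaining spaces

print(normalize_credit_role("Writing – original draft"))  # -> writing-original_draft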
We’ll adopt any changes resulting from the NISO standard, but have decided to go forward with it as-is, as many of our members are eager to implement.\nBeyond CRediT, we’ll also be expanding and refining our contributor support in a number of ways:\nWe’ll be expanding our affiliation metadata beyond a simple string to include organization identifiers like ROR, and allow markup of organization names and locations. We’re expanding the contributor identifiers as well - in addition to ORCID iDs, members can send us Wikidata, ISNI, and other identifiers. We’re adding support for multiple names to support contributors whose names can be expressed in multiple alphabets, or who have aliases or nicknames. We’re changing surname to family_name and will be relaxing the requirement that all person names have a “surname” - a given name may be supplied on its own to support contributors who do not have family names. The current element for corporate/group authors, organization, will be replaced by collab as the term “organization” was widely confusing (we have a lot of affiliation info registered as group authors!), and the collab section will also allow organization identifiers. Many of these updates align with how JATS supports contributors - I hope these changes will allow our members to supply robust contributor metadata without the burden of complicated conversions.\nI’m also including the proposed changes to support data citation and typing of citations. Additionally, we’ll be adding support for members who want to:\nsupply Grant IDs in their metadata records register identifiers for conferences. A draft 5.0 xsd file is available in a branch of our GitLab schema repository with the details of the planned updates, and more robust documentation and examples are forthcoming.\nImplementation plans My house was built in 1890 and there are always surprises whenever we need to fix or renovate anything. Our system is just as old in technology years - it’s been chugging along since the aughts. This means while we don’t think it’s powered by knob-and-tube wiring, we can’t be sure until we open up the walls. We want to implement our plans (in fact we want to do more!) but if we run into any big blockers or crucial issues, we may roll out the changes over several iterations. These updates are fairly conservative and I remain optimistic we’ll be able to implement them as-is. Our update will help us build a foundation for future updates, allowing us to continuously evolve our schema as we move forward.\nSome of you are understandably worried about our implementation schedule and backwards incompatibility. We’re aware that changes are expensive and inconvenient, and making them on our schedule doesn’t always work for your schedule. That’s why we’ve sustained 12+ versions of our schema over the past 12 years. We won’t be mandating a change any time soon, and definitely won’t do so without sufficient warning and community involvement. In the future we’ll need to make a sustained effort to retire older schema, but now isn’t the time for that.\nWe intend to commence work in Q2 but won’t have a firm timeline for a few more weeks. I will be providing regular updates as we progress, and will be asking for volunteers to test the updates when we’re ready. I’ll also be sharing more documentation and information about how the changes will be represented in our metadata outputs.\nHave more to say? 
Our feedback period has finished and we do plan to implement the changes as described, but if you have opinions, please share them.\n", "headings": ["Feedback and changes","Implementation plans","Have more to say?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/encouraging-even-greater-reporting-of-corrections-and-retractions/", "title": "Encouraging even greater reporting of corrections and retractions", "subtitle":"", "rank": 1, "lastmod": "2020-03-30", "lastmod_ts": 1585526400, "section": "Blog", "tags": [], "description": "TL;DR: We no longer charge fees for members to participate in Crossmark, and we encourage all our members to register metadata about corrections and retractions - even if you can’t yet add the Crossmark button and pop-up box to your landing pages or PDFs.\n\u0026ndash;\n", "content": "TL;DR: We no longer charge fees for members to participate in Crossmark, and we encourage all our members to register metadata about corrections and retractions - even if you can’t yet add the Crossmark button and pop-up box to your landing pages or PDFs.\n\u0026ndash;\nResearch doesn’t stand still; even after publication, articles can be updated with supplementary data or corrections. When research outputs are changed in this way, the publisher should report and link the change, so that those accessing and citing the content know if it’s been updated, corrected or even retracted. This also emphasizes the member\u0026rsquo;s commitment to the ongoing stewardship of research outputs.\nMany people find and store articles to read later, either as PDFs on their laptop or on one of any number of reference management systems - when they come back to read and cite these articles, possibly many months later, they want to know if the version they have is current or not.\nRemoving Crossmark fees To encourage even wider adoption of Crossmark, and to promote best practice around better reporting of corrections and retractions, we will no longer be charging additional fees for our Crossmark service. This change applies to all Crossmark metadata registered from 1 January 2020. All members are now encouraged to add Crossmark metadata and add the Crossmark button and pop-up box to their publications - and you can do so as part of your regular content registration.\nRicher metadata gives important context We know that there are many more corrections and retractions that are not yet being registered, and to address this, we are now asking all of our members to start registering metadata for significant updates to your publications, even if you don\u0026rsquo;t implement the Crossmark button and pop-up box on your content. Remember, anyone can access the Crossmark metadata through our public REST API, and start using it straight away - even if you\u0026rsquo;re not ready to implement the Crossmark button.\nCheck out how to get started; if you only want to deposit metadata, follow steps one through four. 
If you also want to add the Crossmark button and pop-up box to your web pages/PDFs so that readers can easily see when content has changed, then also follow the rest of the steps.\nCrossmark We launched Crossmark in 2012 to raise awareness of these critical changes, by asking Crossref members to:\nrecord such updates in your metadata, either as part of your regular Crossref metadata deposit, or deposited as stand-alone data for backfiles help readers find out about the changes by placing a Crossmark button and pop-up box (which is consistent across all members making it recognizable to readers) on your landing pages and in PDFs Members can also use Crossmark to register additional metadata about content, giving further context and background for the reader. These metadata appear in the “More Information” section of the Crossmark box. 7 million DOIs have some additional metadata, the most common being copyright statements, publication history, and peer review methods.\nAnyone can access the Crossmark metadata through our public REST API, providing a myriad of opportunities for integration with other systems, and analysis of changes to the scholarly record.\nWho has implemented Crossmark? 440 Crossref members have implemented Crossmark to date. 11.4 million DOIs have some Crossmark metadata.\nTotal DOIs DOIs with Crossmark metadata % Journal articles 80,862,460 10,155,340 12.56% Book chapters 14,040,646 792,953 5.65% Conference Papers 6,175,733 457,237 7.40% Datasets 1,862,852 19,206 1.03% Books 753,298 239 0.03% Monographs 469,333 23 0.00% Of those, about 130,000 contain an update:\nYou can see which members or journals have implemented Crossmark by viewing the relevant Crossref Participation Report.\n", "headings": ["Removing Crossmark fees","Richer metadata gives important context","Crossmark","Who has implemented Crossmark?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/events-got-the-better-of-us/", "title": "Events got the better of us", "subtitle":"", "rank": 1, "lastmod": "2020-03-27", "lastmod_ts": 1585267200, "section": "Blog", "tags": [], "description": "Publisher metadata is one side of the story surrounding research outputs, but conversations, connections and activities that build further around scholarly research, takes place all over the web. We built Event Data to capture, record and make available these \u0026lsquo;Events\u0026rsquo; –– providing open, transparent, and traceable information about the provenance and context of every Event. Events are comments, links, shares, bookmarks, references, etc.\n", "content": "Publisher metadata is one side of the story surrounding research outputs, but conversations, connections and activities that build further around scholarly research, takes place all over the web. We built Event Data to capture, record and make available these \u0026lsquo;Events\u0026rsquo; –– providing open, transparent, and traceable information about the provenance and context of every Event. Events are comments, links, shares, bookmarks, references, etc.\nIn September 2018 we said Event Data was \u0026lsquo;production ready.\u0026rsquo; What we meant was development of the service had reached a point where we expected no further major changes to the code, and we encouraged you to use it. 
What normally would have followed was a detailed handover to our operations team, for monitoring and performance management, and for Product Management to expand Event Data by adding new Crossref member domains and evaluating additional event sources.\nWhy so quiet? But many things changed on the staff front, meaning 2019 was a year of reinvention for the Technical and Product teams and of critical knowledge sharing and learning –– Event Data had to take a back seat as we focused resources on other key projects (more on that later). From a technical perspective, we\u0026rsquo;ve found the Elasticsearch index is not performing well and the approach taken to specifically support data citations through Scholix has not really scaled.\nWhen things go wrong, whether in ways you can or can\u0026rsquo;t anticipate, the most important thing is communication –– in dealing with the challenges we forgot to do that. We understand how frustrating that can be and we\u0026rsquo;re extremely sorry to have gone so quiet.\nSo, where are we today? Event Data is important to us and clearly important to you too as you\u0026rsquo;ve contacted us about your use-cases and the reliability of the service. Event Data remains available and you\u0026rsquo;re welcome to use it, but you should expect instability to continue and be aware that it does not find events for DOIs/domains of our newer members (who joined Crossref since 2019) –– so we\u0026rsquo;re conscious it might be hard to say whether it\u0026rsquo;s a good fit for your project at this point.\nWhat are we doing? We have brought in additional expert Elasticsearch resources to assist with a separate project to migrate our REST API from SOLR to Elasticsearch. We\u0026rsquo;re making fantastic progress on this. As soon as we\u0026rsquo;re confident we can make this switch, we will move those same Elasticsearch resources to shoring up Event Data. The REST API takes priority over Event Data because we need to add support for important new record types (like research grants) that aren\u0026rsquo;t yet available via the API.\nWe\u0026rsquo;re also concluding the process of hiring two new Product Managers which means we\u0026rsquo;ll be in a position to assign someone to head up the product management of Event Data. When we do return to Event Data in the coming months, our initial priority will be increased support for data citation and Scholix. If that means radical changes to the rest of the service, we\u0026rsquo;ll let you know. Opening up the discussion We will have more news on Event Data in mid-2020. We\u0026rsquo;d love you to join the Crossref Community Forum; we\u0026rsquo;ve created a new Category for Event Data where you can post details of how you are using, or plan to use Event Data; post questions to the group; suggestions for future development and provide general feedback on the Event Data service.\n", "headings": ["Why so quiet?","So, where are we today?","What are we doing?","Opening up the discussion"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-manager-update/", "title": "Metadata Manager Update", "subtitle":"", "rank": 1, "lastmod": "2020-03-24", "lastmod_ts": 1585008000, "section": "Blog", "tags": [], "description": "At Crossref, we\u0026rsquo;re committed to providing a simple, usable, efficient and scalable web-based tool for registering content by manually making deposits of, and updates to, metadata records. Last year we launched Metadata Manager in beta for journal deposits to help us explore this further. 
Since then, many members have used the tool and helped us better understand their needs.\n", "content": "At Crossref, we\u0026rsquo;re committed to providing a simple, usable, efficient and scalable web-based tool for registering content by manually making deposits of, and updates to, metadata records. Last year we launched Metadata Manager in beta for journal deposits to help us explore this further. Since then, many members have used the tool and helped us better understand their needs.\nWhat we\u0026rsquo;ve learned has made us realize how useful such a tool can be to both large and small publishers, but also that the approach we took with Metadata Manager needs to be changed - it\u0026rsquo;s not flexible enough to easily add other record types, like books/book chapters, or to include any changes we may make to our input schema.\nWith that in mind, we\u0026rsquo;re pausing development on Metadata Manager to allow us to properly evaluate what we\u0026rsquo;ve learned. If you\u0026rsquo;re currently using Metadata Manager for journal deposits without any problems, please do continue - you\u0026rsquo;re helping us learn a lot! But if you haven\u0026rsquo;t used Metadata Manager before, or are having problems, please:\nuse our existing Web Deposit Form instead, or upload XML directly through the deposit system admin interface We won\u0026rsquo;t be fixing bugs in Metadata Manager, except for providing any essential security updates. Of course, if you still need help please read our Content Registration help pages, or contact the Support team.\nMetadata Manager\u0026rsquo;s features will be reimagined as part of our planned Member Center (working title, subject to change) project, where we will start to bring together all business and technical information for our members, service providers and metadata users. The Member Center will be the heart of our strategy to make it easier for you to work with Crossref to:\nregister and update metadata view, update and transfer titles visualize your activity/participation and act on problems with metadata understand your bills and invoices manage your users and service providers and their access and entitlements and more We\u0026rsquo;re in the early stages of planning for the Member Center and will be seeking feedback from members, service providers and metadata users in the coming months.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/flagging-free-to-read/", "title": "Flagging content that is “free” for text mining", "subtitle":"", "rank": 1, "lastmod": "2020-03-13", "lastmod_ts": 1584057600, "section": "", "tags": [], "description": "Flagging content that is \u0026ldquo;free\u0026rdquo; for text mining. The Crossref API can be used for locating the full text of published articles and preprints for the purpose of text mining.\nCrossref members who have have subscription-access content and who want to make some of their content available for text mining need to take the following steps.\nThe Crossref schema supports the NISO Access and License Indicators ALI section, and, normally, the free_to_read functionality of ALI would be the recommended mechanism for indicating that content is available for free (e.", "content": "Flagging content that is \u0026ldquo;free\u0026rdquo; for text mining. 
The Crossref API can be used for locating the full text of published articles and preprints for the purpose of text mining.\nCrossref members who have subscription-access content and who want to make some of their content available for text mining need to take the following steps.\nThe Crossref schema supports the NISO Access and License Indicators ALI section, and, normally, the free_to_read functionality of ALI would be the recommended mechanism for indicating that content is available for free (e.g. \u0026ldquo;gratis\u0026rdquo;, not \u0026ldquo;open\u0026rdquo;). However, the ALI free_to_read element is not currently exposed through our REST API filters.\nBut we have defined a workaround that allows members to both register the ALI free_to_read element and an equivalent assertion that will work with the REST API and which will allow researchers to locate content that has been flagged as \u0026ldquo;free.\u0026rdquo;\nSteps TL;DR 1. Ensure that you have recorded links to full text in your Crossref metadata. Crossref's participation reports can be used to tell if you are already doing this. See the section marked \u0026ldquo;Text mining URLs\u0026rdquo; and/or \u0026ldquo;Similarity Check URLs\u0026rdquo; to see what percentage of your registered content has some sort of full text link.\n2. Remove your platform\u0026rsquo;s access control restrictions from the URLs for the DOIs you would like to make available for free. This will vary from publisher to publisher and platform to platform. But please note that Crossref does not have any control over access to our members\u0026rsquo; content.\n3. Flag the DOIs that you are making available \u0026ldquo;free.\u0026rdquo; You can do this by submitting the relevant ALI free_to_read element as well as a Crossmark assertion for each relevant DOI. See details below.\n4. Test the DOIs via the Crossref API to ensure that everything is working. Details Flagging your DOIs as \u0026ldquo;free\u0026rdquo;. 
To flag your DOIs as \u0026ldquo;free\u0026rdquo;, you can submit a single CrossMark assertion and deposit the XML using our \u0026lsquo;resource-only deposit\u0026rsquo; mechanism (Note that as of January 2020 there is no longer a charge for participating in Crossmark and so this can be done without any additional fees)\nThe following XML shows an example \u0026ldquo;resource-only deposit\u0026rdquo; that shows how you can add the ALI free_to_read and a Crossmark free assertion to an existing Crossref metadata record.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;doi_batch version=\u0026#34;4.4.2\u0026#34; xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/doi_resources_schema/4.4.2\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/schemas/4.4.2 http://0-www-crossref-org.libus.csd.mu.edu/schemas/doi_resources4.4.2.xsd\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;!-- Replace below with a unique ID --\u0026gt; \u0026lt;doi_batch_id\u0026gt;arg_123_954\u0026lt;/doi_batch_id\u0026gt; \u0026lt;depositor\u0026gt; \u0026lt;!-- Replace below with member name --\u0026gt; \u0026lt;depositor_name\u0026gt;Member Name\u0026lt;/depositor_name\u0026gt; \u0026lt;!-- Replace below with the email address where errors should be reported --\u0026gt; \u0026lt;email_address\u0026gt;name@example.com\u0026lt;/email_address\u0026gt; \u0026lt;/depositor\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;crossmark_data\u0026gt; \u0026lt;!--the DOI being updated with CrossMark metadata --\u0026gt; \u0026lt;doi\u0026gt;DOI\u0026lt;/doi\u0026gt; \u0026lt;!--CrossMark metadata --\u0026gt; \u0026lt;crossmark\u0026gt; \u0026lt;crossmark_version\u0026gt;1\u0026lt;/crossmark_version\u0026gt; \u0026lt;!-- If you already have a Crossmark policy DOI, replace it below. If you do not have a Crossmark policy, then repeat the DOI being updated --\u0026gt; \u0026lt;crossmark_policy\u0026gt;DOI\u0026lt;/crossmark_policy\u0026gt; \u0026lt;custom_metadata\u0026gt; \u0026lt;assertion name=\u0026#34;free\u0026#34; label=\u0026#34;Free to read\u0026#34;\u0026gt;This content has been made available to all.\u0026lt;/assertion\u0026gt; \u0026lt;/custom_metadata\u0026gt; \u0026lt;/crossmark\u0026gt; \u0026lt;/crossmark_data\u0026gt; \u0026lt;lic_ref_data\u0026gt; \u0026lt;!--the DOI being updated with CrossMark metadata--\u0026gt; \u0026lt;doi\u0026gt;DOI\u0026lt;/doi\u0026gt; \u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/AccessIndicators.xsd\u0026#34;\u0026gt; \u0026lt;free_to_read/\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;/lic_ref_data\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/doi_batch\u0026gt; Assuming this record was named free_to_read.xml, then you can deposit the record via our XML API using curl as follows:\ncurl -F \u0026#39;operation=doDOICitUpload\u0026#39; -F \u0026#39;login_id=USERNAME\u0026#39; -F \u0026#39;login_passwd=PASSWORD\u0026#39; -F \u0026#39;fname=@FILENAME.XML\u0026#39; \u0026#39;https://0-doi-crossref-org.libus.csd.mu.edu/servlet/deposit\u0026#39; Note that it can take up to an hour before an update is reflected in the REST API. 
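You may also want to spot-check a single DOI once the deposit has been processed, before moving on to the filter queries described below. For example (replacing the placeholder 10.5555/12345678 with one of your own registered DOIs):\ncurl -s 'https://0-api-crossref-org.libus.csd.mu.edu/v1/works/10.5555/12345678'\nThe returned JSON should include the Crossmark assertion deposited above (name free, label Free to read) in the assertion section of the record; if it is not there yet, allow time for the update to propagate before investigating further. 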
Querying articles flagged as free in the Crossref REST API You may want to acquaint yourself with the documentation for the Crossref REST API.\nBut here are some example queries using a filter to identify content that has been asserted to be \u0026lsquo;free\u0026rsquo; using the above technique.\nQuerying all works that have a free assertion associated with them: https://0-api-crossref-org.libus.csd.mu.edu/v1/works?filter=assertion:free Querying all works that have a free assertion associated with them and which include links to full text: https://0-api-crossref-org.libus.csd.mu.edu/v1/works?filter=assertion:free,has-full-text:t Querying all works that have a free assertion, include links to full text, and include the term \u0026ldquo;Covid 19\u0026rdquo; in the bibliographic metadata: https://0-api-crossref-org.libus.csd.mu.edu/v1/works?filter=assertion:free,has-full-text:t\u0026amp;query.bibliographic=\u0026#34;Covid 19\u0026#34; (note that as of 2020-03-12 this returns zero results)\n", "headings": ["Flagging content that is \u0026ldquo;free\u0026rdquo; for text mining.","Steps TL;DR","1. Ensure that you have recorded links to full text in your Crossref metadata.","2. Remove your platform\u0026rsquo;s access control restrictions from the URLs for the DOIs you would like to make available for free.","3. Flag the DOIs that you are making available \u0026ldquo;free.\u0026rdquo;","4. Test the DOIs via the Crossref API to ensure that everything is working.","Details","Flagging your DOIs as \u0026ldquo;free\u0026rdquo;.","Querying articles flagged as free in the Crossref REST API","Querying all works that have a free assertion associated with them:","Querying all works that have a free assertion associated with them and which include links to full text:","Querying all works that have a free assertion, include links to full text, and include the term \u0026ldquo;Covid 19\u0026rdquo; in the bibliographic metadata:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/double-trouble-with-dois/", "title": "Double trouble with DOIs", "subtitle":"", "rank": 1, "lastmod": "2020-03-10", "lastmod_ts": 1583798400, "section": "Blog", "tags": [], "description": "Detective Matcher stopped abruptly behind the corner of a short building, praying that his loud heartbeat doesn\u0026rsquo;t give up his presence. This missing DOI case was unlike any other before, keeping him awake for many seconds already. It took a great effort and a good amount of help from his clever assistant Fuzzy Comparison to make sense of the sparse clues provided by Miss Unstructured Reference, an elegant young lady with a shy smile, who begged him to take up this case at any cost.\n", "content": "Detective Matcher stopped abruptly behind the corner of a short building, praying that his loud heartbeat doesn\u0026rsquo;t give up his presence. This missing DOI case was unlike any other before, keeping him awake for many seconds already. It took a great effort and a good amount of help from his clever assistant Fuzzy Comparison to make sense of the sparse clues provided by Miss Unstructured Reference, an elegant young lady with a shy smile, who begged him to take up this case at any cost.\nThe final confrontation was about to happen, the detective could feel it, and his intuition rarely misled him in the past. He was observing DOI 10.2307/257306, which matched Miss Reference\u0026rsquo;s description very well. So far, there was no indication that DOI had any idea he was being observed. 
He was leaning on a wall across the street in a seemingly nonchalant way, just about to put out his cigarette. Empty dark streets and slowly falling snow together created an excellent opportunity to capture the fugitive.\nSuddenly, Matcher heard a faint rustling sound. Out of nowhere, another shady figure, looking very much like 10.5465/amr.1982.4285592, appeared in front of the detective, crossed the street and started running away. Matcher couldn\u0026rsquo;t believe his eyes. These two DOIs had identical authors, year and title. They were even wearing identical volume and issue! He quickly noticed minor differences: slight alteration in the journal title and lack of the second page number in one of the DOIs, but this was likely just a random mutation. How could he have missed the other DOI? And more importantly, which of them was the one worried Miss Reference simply couldn\u0026rsquo;t live without?\nTL;DR Crossref metadata contains duplicates, i.e. items with different DOIs and identical (or almost identical) bibliographic metadata. This often happens when there is more than one DOI pointing to the same object. In some cases, but not all of them, one of the DOIs is explicitly marked as an alias of the other DOI. In this blog post, I analyze those duplicates that are not marked with an alias relation. The analysis shows that the problem exists, but is not big. Among 524,496 DOIs tested in the analysis, 4,240 (0.8%) were flagged as having non-aliased duplicates. I divided those duplicates into two categories: Self-duplicate is a duplicate deposited by the same member as the other DOI, there were 3,603 (85%) of them. Other-duplicate is a duplicate deposited by a different member than the other DOI\u0026rsquo;s depositor, there were only 637 (15%) of them. I used three member-level metrics to estimate the volume of duplicates deposited by a given member: Self-duplicate index is the fraction of self-duplicates in a member\u0026rsquo;s DOIs: on average 0.67%. Other-duplicate index is the fraction of other-duplicates in a member\u0026rsquo;s DOIs: on average 0.13%. Global other-duplicate index is the fraction of globally detected other-duplicates involving a given member: on average 0.34%. Introduction In an ideal world, the relationship between research outputs and DOIs is one-to-one: every research output has exactly one DOI assigned and each DOI points to exactly one research output.\nAs we all know too well, we do not live in a perfect world, and this one-to-one relationship is also sometimes violated. One way to violate it is to assign more than one DOI to the same object. This can cause problems.\nFirst of all, if there are two DOIs referring to the same object, eventually they both might end up in different systems and datasets. As a result, merging data between data sources becomes an issue, because we no longer can rely on comparing the DOI strings only.\nReference matching algorithms will also be confused when they encounter more than one DOI matching the input reference. They might end up assigning one DOI from the matching ones at random, or not assigning any DOI at all.\nAnd finally, more than one DOI assigned to one object is hugely problematic for document-level metrics such as citation counts, and eventually affects h-indexes and impact factors. 
In practice, metrics are typically calculated per DOI, so when there are two DOIs pointing to one document, the citation count might be split between them, effectively lowering the count, and making every academic author\u0026rsquo;s biggest nightmare come true.\nIt seems we shouldn\u0026rsquo;t simply cover our eyes and pretend this problem does not exist. So what are we doing at Crossref to make the situation better?\nIt is possible for our members to explicitly mark a DOI as an alias of another DOI, if it was deposited by mistake. This does not remove the problem, but at least allows metadata consumers to access and use this information. Whenever a DOI is registered or updated in Crossref, we automatically compare its metadata to the metadata of existing DOIs. If the metadata is too similar to the metadata of another DOI, this information is sent to the member and they have a chance to modify the metadata as they see fit. Despite these efforts, we still see duplicates that are not explained by anything in the metadata. In this blog post, I will try to understand this problem better and assess how big it is. I also define three member-level metrics that can show how much a given member contributes to duplicates in the system and can flag members with unusually high fractions of duplicates.\nGathering the data The data for this analysis was collected in the following way: Only journal articles were considered in the analysis. Only members with at least 5,000 journal article DOIs were considered in the analysis. For each member, a random sample of 1,000 journal article DOIs was selected. DOIs with no title, title shorter than 20 characters or shorter than 3 words were removed from each sample. This was done because items with short titles typically result in incorrectly flagged duplicates (false positives). For each remaining DOI in the sample, a simple string representation was generated. This representation is a concatenation of the following fields: authors, title, container-title, volume, issue, page, published date. This string representation was used as query.bibliographic in Crossref\u0026rsquo;s REST API and the resulting item list was examined. If the original DOI came back as the first or the second hit, the relevance score difference between the first two hits is less than 1, they are both journal articles, and there is no relation (alias or otherwise) between them, the other one of the two is considered a duplicate of the original DOI. The score difference threshold was chosen through a manual examination of a number of cases. Most detected duplicates came back scored identically. Overall results In total, I tested 590 members and 524,496 DOIs. Among them, 4,240 DOIs (0.8%) were flagged as duplicates of other DOIs. This shows the problem exists, but is not huge.\nI also analyzed separately two categories of duplicates:\nself-duplicates are two DOIs with (almost) identical metadata, deposited by the same member, other-duplicates are two DOIs with (almost) identical metadata, deposited by two different members. Self-duplicates are more common: 3,603 (85%) of all detected duplicates are self-duplicates, and only 637 (15%) are other-duplicates. This is also good news: self-duplicates involve one member only, so they are easier to handle.\nSelf-duplicates To explore the levels of self-duplicates among members, I used a custom member-level metric called self-duplicate index. 
Self-duplicate index is the fraction of self-duplicates among the member\u0026rsquo;s DOIs, in this case calculated over a sample.\nOn average, members have a very small self-duplicate index of 0.67%. In addition, in the samples of 44% of analyzed members no self-duplicates were found. The histogram shows the skewness of the distribution:\nAs we can see in the distribution, there are only a few members with high self-duplicate index. The table shows all members with the self-duplicate higher than 10%:\nName Total DOIs Sample size Self-duplicate index University of California Press 129,741 798 36% Inderscience Publishers 127,729 998 29% American Society of Hematology 137,124 990 24% Pro Reitoria de Pesquisa, Pos Graduacao e Inovacao - UFF 7,756 919 19% American Diabetes Association 49,536 946 18% Other-duplicates Other-duplicate index is the fraction of other duplicates among the member\u0026rsquo;s DOIs, in this case calculated from a sample.\nOn average, members have a very low other-duplicate index of only 0.13%. What is more, 89% members have no other-duplicates in the sample, and the distribution is even more skewed than in the case of self-duplicates:\nHere is the list of all members with more than 2% of other-duplicates in the sample:\nName Total DOIs Sample size Other-duplicate index American Bryological and Lichenological Society 5,593 844 41% Maney Publishing 15,342 832 6% JSTOR 1,612,174 864 4% American Mathematical Society (AMS) 83,015 844 4% American Bryological and Lichenological Society is a clear outlier with 41% of their sample flagged as duplicates. Interestingly, all those duplicates come from one other member only (JSTOR) and JSTOR was the first to deposit them.\nSimilarly, all other-duplicates detected in the American Mathematical Society\u0026rsquo;s sample are shared with JSTOR, and JSTOR was the first to deposit them.\nManey Publishing\u0026rsquo;s 51 other-duplicates are all shared with a member not listed in this table: Informa UK Limited.\nJSTOR is the only member in this table, whose 36 other-duplicates are shared with multiple (8) members.\nAnother interesting observation is that the members in this table (apart from JSTOR) are rather small or medium, in terms of total DOIs registered by them. It is also worrying that Informa UK Limited, a member that shares 51 other-duplicates flagged in Maney Publishing\u0026rsquo;s sample, was not flagged by this index. The reason might be differences in the overall number of registered DOIs: two members that deposited the same number of other-duplicates, but have different overall numbers of registered DOIs, will have different other-duplicate indexes.\nTo address this issue, I looked at a third index called global other-duplicate index. 
Global other-duplicate index is the fraction of globally detected other-duplicates involving a given member.\nGlobal other-duplicate index has a useful interpretation: it tells us how much the overall number of other-duplicates would drop, if the given member resolved all its other-duplicates (for example by setting appropriate relations or correcting the metadata so that it is no longer so similar).\nHere is the list of members with global-duplicate index higher than 2%:\nName Total DOIs Global other-duplicate index JSTOR 1,612,174 69% American Bryological and Lichenological Society 5,593 54% Informa UK Limited 4,275,507 15% Maney Publishing 15,342 8% American Mathematical Society (AMS) 83,015 6% Project Muse 326,300 5% Wiley 8,003,815 3% Elsevier BV 16,268,943 3% Liverpool University Press 31,870 3% Cambridge University Press (CUP) 1,621,713 2% Ovid Technologies (Wolters Kluwer Health) 2,152,723 2% University of Toronto Press Inc. (UTPress) 46,778 2% Note that the values add up to more than 100%. This is because in every other-duplicate there are two members involved, so the involvement adds up to 200%.\nAs we can see, all the members from the previous table are in this one as well. Apart from them, however, this index flagged several large members. Among them, Informa UK Limited, that was missing from the previous table.\nAll the indexes defined here are useful in identifying members that contribute a lot of duplicates to the Crossref metadata. They can be used to help to clean up the metadata, and also to monitor the situation in the future.\nLimitations It is important to remember that index values presented here were calculated on a single sample of DOIs drawn for a given member. The values would be different if a different sample was used, and so they shouldn\u0026rsquo;t be treated as exact numbers.\nThe tables include members with the index exceeding a certain threshold, chosen arbitrarily, for illustrative purposes. Different runs with different samples could result in different members being included in the tables, especially in their lower parts.\nTo obtain more stable values of indexes, multiple samples could be used. 
Alternatively, in the case of smaller members, exact values could be calculated from all their DOIs.\n", "headings": ["TL;DR","Introduction","Gathering the data","Overall results","Self-duplicates","Other-duplicates","Limitations"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/metadata-quality/", "title": "Metadata Quality", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/can-you-help-us-to-launch-distributed-usage-logging/", "title": "Can you help us to launch Distributed Usage Logging?", "subtitle":"", "rank": 1, "lastmod": "2020-03-02", "lastmod_ts": 1583107200, "section": "Blog", "tags": [], "description": "Update: Deadline extended to 23:59 (UTC) 13th March 2020.\nDistributed Usage Logging (DUL) allows publishers to capture traditional usage activity related to their content that happens on sites other than their own so they can provide reports of “total usage”, for example to subscribing institutions, regardless of where that usage happens.\n", "content": "Update: Deadline extended to 23:59 (UTC) 13th March 2020.\nDistributed Usage Logging (DUL) allows publishers to capture traditional usage activity related to their content that happens on sites other than their own so they can provide reports of “total usage”, for example to subscribing institutions, regardless of where that usage happens.\nWe are looking for a consultant to take the lead with DUL outreach, promoting the service and its benefits in order to solicit participation from publishers (receivers) and content-hosting platforms/scholarly collaboration networks (senders).\nCrossref provides the infrastructure for DUL. 
The call for participation is being led by COUNTER and the selected consultant will be representing COUNTER, with additional support from Crossref\nIf you are interested in this opportunity, please download the request for information (RFI).\nThe RFI response deadline is 23:59 (UTC) 13 March 2020.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/members/", "title": "Members", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/standards/", "title": "Standards", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/bibliometrics/", "title": "Bibliometrics", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/citation-data/", "title": "Citation Data", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-metadata-for-bibliometrics/", "title": "Crossref metadata for bibliometrics", "subtitle":"", "rank": 1, "lastmod": "2020-02-21", "lastmod_ts": 1582243200, "section": "Blog", "tags": [], "description": "Our paper, Crossref: the sustainable source of community-owned scholarly metadata, was recently published in Quantitative Science Studies (MIT Press). The paper describes the scholarly metadata collected and made available by Crossref, as well as its importance in the scholarly research ecosystem.\n", "content": "Our paper, Crossref: the sustainable source of community-owned scholarly metadata, was recently published in Quantitative Science Studies (MIT Press). The paper describes the scholarly metadata collected and made available by Crossref, as well as its importance in the scholarly research ecosystem.\nContaining over 106 million records and expanding at an average rate of 11% a year, Crossref\u0026rsquo;s metadata has become one of the major sources of scholarly data for publishers, authors, librarians, funders, and researchers. The metadata set consists of 13 record types, including not only traditional types, such as journals and conference papers, but also data sets, reports, preprints, peer reviews, and grants. The metadata is not limited to basic publication metadata, but can also include abstracts and links to full text, funding and license information, citation links, and the information about corrections, updates, retractions, etc. 
This scale and breadth make Crossref a valuable source for research in scientometrics, including measuring the growth and impact of science and understanding new trends in scholarly communications. The metadata is available through a number of APIs, including REST API and OAI-PMH.\nIn the paper, we describe the kind of metadata that Crossref provides and how it is collected and curated. We also look at Crossref\u0026rsquo;s role in the research ecosystem and trends in metadata curation over the years, including the evolution of its citation data provision. We summarize the research that used Crossref\u0026rsquo;s metadata and describe plans that will improve metadata quality and retrieval in the future.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/leaving-crossref/", "title": "Leaving Crossref", "subtitle":"", "rank": 1, "lastmod": "2020-02-14", "lastmod_ts": 1581638400, "section": "Blog", "tags": [], "description": "Where does the time go\u0026hellip; In my blog post on January 14th about Crossref’s 20th anniversary I said, “The one constant in Crossref’s 20 years has been change”. It’s true that there has been constant change, but there has been another constant at Crossref –– me (and DOIs, to be fair). I started as Crossref’s first employee and Executive Director on February 1st, 2000, so I just marked my 20th anniversary with the organization.\n", "content": "Where does the time go\u0026hellip; In my blog post on January 14th about Crossref’s 20th anniversary I said, “The one constant in Crossref’s 20 years has been change”. It’s true that there has been constant change, but there has been another constant at Crossref –– me (and DOIs, to be fair). I started as Crossref’s first employee and Executive Director on February 1st, 2000, so I just marked my 20th anniversary with the organization.\nThis milestone prompted me to reflect on where I am and where I’m heading. After 20 years leading the organization, I’ve decided to leave Crossref. It’s time for a new challenge. I’m still very committed to the mission and very proud of my time at Crossref, the culture we’ve created and what the organization has achieved. It’s been an honor serving as Executive Director and a pleasure working with so many great people over the years. And to be clear –– I’m not ill, being pushed or having a midlife crisis (yet).\nIt’s a difficult and emotional decision but I think the transition can be positive for me, the staff, the board, and the organization. I’ll be working with the Crossref board, Chair, Treasurer and staff on the transition –– the plan is for me to be around through September or October to enable the recruitment and handover to a new Executive Director. There will be more information about the transition and recruitment process after the Crossref board meeting March 11-12 in London.\nCrossref has a bright future and many opportunities to do new things. Crossref provides essential, open scholarly infrastructure and services that benefit its members and the wider scholarly research ecosystem –– and we’ve got a lot of interesting things in development and ambitious plans. To anyone who might be interested in being Crossref’s next Executive Director, I can honestly say it is fantastic, challenging, fun, and very fulfilling –– that’s why I’ve done it for 20 years.\nWhat’s next for me? I don’t know but it’s something I’ll be thinking about over the coming months. 
I do know that working for a mission driven organization and staying involved with scholarly communications and research –– a fascinating and worthy field –– will be top of my list.\nAnyway - it’s back to work and full steam ahead for Crossref!\n", "headings": ["Where does the time go\u0026hellip;"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api-with-open-ukrainian-citation-index/", "title": "Using the Crossref REST API (with Open Ukrainian Citation Index)", "subtitle":"", "rank": 1, "lastmod": "2020-02-05", "lastmod_ts": 1580860800, "section": "Blog", "tags": [], "description": "Over the past few years, I\u0026rsquo;ve been really interested in seeing the breadth of uses that the research community is finding for the Crossref REST API. When we ran Crossref LIVE Kyiv in March 2019, Serhii Nazarovets joined us to present his plans for the Open Ukrainian Citation Index, an initiative he explains below.\nBut first an introduction to Serhii and his colleague Tetiana Borysova.\nSerhii Nazarovets is a Deputy Director for Research at the State Scientific and Technical Library of Ukraine.", "content": "Over the past few years, I\u0026rsquo;ve been really interested in seeing the breadth of uses that the research community is finding for the Crossref REST API. When we ran Crossref LIVE Kyiv in March 2019, Serhii Nazarovets joined us to present his plans for the Open Ukrainian Citation Index, an initiative he explains below.\nBut first an introduction to Serhii and his colleague Tetiana Borysova.\nSerhii Nazarovets is a Deputy Director for Research at the State Scientific and Technical Library of Ukraine. Serhii has a Ph.D. in Social Communication Science. His research interests lie in the area of scientometrics and library science. Serhii is the Associate Editor for DOAJ (www.doaj.org) and the Regional Editor for E-LIS (Eprints in Library and Information Science). Serhii has worked in different scientific libraries of Ukraine for more than 10 years. Tetiana Borysova is a Senior Researcher at the State Scientific and Technical Library of Ukraine. Her research interests are focused on topics such as research data management, journal management and scientometrics.\nIntroducing OUCI OUCI (Open Ukrainian Citation Index) is a new search engine and a citation database based on publication metadata from Crossref members.\nOUCI is intended to simplify the search of scientific publications, to attract the editors\u0026rsquo; attention to the problem of completeness and quality of the metadata of Ukrainian scholarly publications, and will allow bibliometricians to freely study the relations between authors and documents from various disciplines, in particular in the field of social sciences and humanities. OUCI is open for every user in the world without any restrictions.\nOUCI launched in November 2019. The project is being implemented by the State Scientific and Technical Library of Ukraine with the support of the Ministry of Education and Science of Ukraine.\nIn Ukraine, we do not have a national citation database, and this significantly impedes the search and analysis of information about Ukrainian publications. According to preliminary estimates, more than 3,000 titles of scientific journals are currently published in Ukraine. At the same time, only around 100 Ukrainian journal titles are indexed in authoritative citation databases, such as Scopus and Web of Science Core Collection. 
Thus, researchers and managers lack the citation data needed to understand the impact of Ukrainian journals and the demand for them in the scientific communication system. Our approach is that the OUCI database contains metadata from all publishers that use Crossref\u0026rsquo;s Cited-by service and who support the Initiative for Open Citations by making the reference metadata they publish with Crossref openly available.\nHow is Crossref metadata used in OUCI? A publication can only be indexed in OUCI if there is a DOI. At first glance, the idea of creating an index of national publications based on this condition may seem too optimistic. However, in January 2018, a new requirement was adopted by the List of scientific publications of Ukraine (a list of Ukrainian journals recognized by experts as qualitative for publishing their research results for a scientific degree), which listed a DOI as one of the requirements for inclusion. After that, the number of publishers who received a DOI prefix from Crossref tripled, to 352 in November 2019.\nAnother important feature of OUCI is that publishers have to use Crossref\u0026rsquo;s Cited-by service and support the Initiative for Open Citations. We are working to build a new fair infrastructure where everyone who is interested in the dissemination of scientific knowledge can present their publications to the community, develop expert judgment skills and access citations to explore the links between documents. The philosophy of the index is to use only open resources to fill it.\nIn addition to standard filters from Crossref metadata (such as publisher, publication, type, year), OUCI allows users to refine search results by:\nindexation in Web of Science and/or Scopus, journal category (A or B according to the List of scientific publications of Ukraine), the field of knowledge and scientific specialties (according to the Ukrainian legislation) and other characteristics important to Ukrainian users. Figure 1: OUCI search and filter options\nBeyond the ability to search articles, OUCI displays profiles for Ukrainian journals (the titles of these journals will include hyperlinks in the search results). Administrators can manage them, add and edit information about their journals: website, aims and scope, scientific fields of the journal according to the Ukrainian classification. Also, you can see some quantitative characteristics of journals: number of publications, number of citations, h-index, i10-index, etc.\nFigure 2: Display of journal information in OUCI\nIn addition, we have implemented an analytics module. Using the data about the number of articles and citations from Crossref, it allows users to analyze Ukrainian journals by field.\nFigure 3: Publication and citation information\nWhat are the future plans for OUCI? In the near future, we plan to add:\nthe ability to export search results for further analysis; integration with Unpaywall; alternative metrics from Crossref Event Data. In the ideal future for our index, every Ukrainian article will be registered with Crossref and have open references. We plan to promote the importance of rich and quality metadata in Crossref among Ukrainian publishers. We also encourage all publishers to support the Initiative for Open Citations.\nWhat else would OUCI like to see in Crossref metadata? One of the main problems we encountered when creating OUCI was the metadata about the authors. Very few publications contain data about the author\u0026rsquo;s ORCID iD. 
Focusing publishers on the need to transmit full metadata to Crossref, as well as monitoring its quality, is a must for resources like this. We also look forward to the growing usage of ROR (Research Organization Registry) - identifiers for research organizations, similar to the way that ORCID offers identifiers for researchers. We believe that ROR will help to obtain reliable data for analyzing the scientific activity of Ukrainian institutions.\nAnother issue we\u0026rsquo;ve identified in some Ukrainian journals is that some of the small publishers that register content via Crossref Sponsors did not take care of getting their own prefix, so it can be difficult to see their publications - this is something that showing the metadata via an index can help them see and therefore fix.\nQuestions? We\u0026rsquo;ve had lots of questions about OUCI in the run up to the launch and now that it\u0026rsquo;s live. Here is a selection of our FAQs, all available on our website. You can also get in touch directly if you have another question we haven\u0026rsquo;t answered yet.\n", "headings": ["Introducing OUCI","How is Crossref metadata used in OUCI?","What are the future plans for OUCI?","What else would OUCI like to see in Crossref metadata?","Questions?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-is-20/", "title": "Crossref is 20", "subtitle":"", "rank": 1, "lastmod": "2020-01-14", "lastmod_ts": 1578960000, "section": "Blog", "tags": [], "description": "It seems like only yesterday\u0026hellip; On January 19th, 2000 a new not-for-profit organization was registered in New York State. It was called Publishers International Linking Association, Inc but was more commonly referred to as \u0026ldquo;CrossRef\u0026rdquo;. This means that Crossref will be 20 years old on January 19th, 2020 so I wanted to mark the occasion with a short post. We are planning more ways to mark our 20th anniversary later this year so keep a lookout.\n", "content": "It seems like only yesterday\u0026hellip; On January 19th, 2000 a new not-for-profit organization was registered in New York State. It was called Publishers International Linking Association, Inc but was more commonly referred to as \u0026ldquo;CrossRef\u0026rdquo;. This means that Crossref will be 20 years old on January 19th, 2020 so I wanted to mark the occasion with a short post. We are planning more ways to mark our 20th anniversary later this year so keep a lookout.\n", "headings": ["It seems like only yesterday\u0026hellip;"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/citations/", "title": "Citations", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-corrections-updates-and-additions-in-metadata-manager/", "title": "Metadata Corrections, Updates, and Additions in Metadata Manager", "subtitle":"", "rank": 1, "lastmod": "2020-01-13", "lastmod_ts": 1578873600, "section": "Blog", "tags": [], "description": "It\u0026rsquo;s been a year since Metadata Manager was first launched in Beta. 
We\u0026rsquo;ve received a lot of helpful feedback from many Crossref members who made the switch from Web Deposit Form to Metadata Manager for their journal article registrations.\nThe most common use for Metadata Manager is to register new DOIs for newly published articles. For the most part, this is a one-time process. You enter the metadata, register your DOI, and success!\n", "content": "It\u0026rsquo;s been a year since Metadata Manager was first launched in Beta. We\u0026rsquo;ve received a lot of helpful feedback from many Crossref members who made the switch from Web Deposit Form to Metadata Manager for their journal article registrations.\nThe most common use for Metadata Manager is to register new DOIs for newly published articles. For the most part, this is a one-time process. You enter the metadata, register your DOI, and success!\nBut everything doesn\u0026rsquo;t always go quite as expected. Humans make mistakes, and typos in metadata are bound to happen on occasion, even for the most careful users.\nWe always want to make it as easy as possible for our members to find and correct metadata errors, and to add additional metadata when it becomes available. Our Schematron, Conflict, and Resolution reports can help you identify existing metadata errors. We never charge content registration fees for metadata updates, additions, or corrections, so cost won\u0026rsquo;t be a barrier to getting the most accurate and thorough metadata possible. And, now, Metadata Manager can make those corrections easier to do.\nCorrecting Errors Because accurate and comprehensive metadata is so important for the linking and discoverability of your publications, it\u0026rsquo;s important to catch these occasional errors and correct them.\nWe send out reports that automatically screen for particular types of metadata errors, and we pass along comments from users who contact us with concerns about metadata quality to our contacts at the relevant publisher. The \u0026ldquo;Review all\u0026rdquo; feature in Metadata Manager also allows you to do a final check of all the metadata you entered right before you\u0026rsquo;re about to submit your deposits. So, we also rely on you to evaluate your own accuracy there as well.\nOnce you’ve identified an error, you’ll need to correct it. To do that, you must resubmit a whole new metadata deposit for the affected item. The newly deposited metadata will entirely overwrite the previously deposited metadata.\nIf you’re used to using the Web Deposit Form, you know that the redeposit can be a little tedious. For example, if you find that you misspelled an author’s last name, you’d have to manually type in or copy-paste not just the corrected last name, but all of the journal-level, issue-level, and article-level metadata that applies to the article.\nUsing Metadata Manager, the process is much simpler. The full metadata record is retained or imported and you only need to correct the error itself.\nFor articles originally registered using Metadata Manager If you find a metadata error in an article which you initially registered in Metadata Manager itself, you can locate the article in one of two ways:\nNavigate through the list of Accepted articles within a given journal\nOr, search by article title in the Deposit History\nOnce you’ve located the relevant article, click on the article title to open the article’s metadata record. From there, you can make the necessary corrections. 
With the corrections complete, click “Continue” and then “Add to deposit.” After that, the process is exactly the same as depositing a new article.\nFor articles registered using the Web Deposit Form or any other deposit method If you registered an article using the Web Deposit Form, an XML deposit, or the OJS plugin, you can still use Metadata Manager to quickly correct an error. But, first you have to import the article’s metadata into Metadata Manager.\nTo do this, click into the relevant journal from your Metadata Manager home page. Then, search for the article title using the “Add existing article” search box. Select “Add” next to the article title in the search results, which will import the article’s metadata record into Metadata Manager.\nFrom here, make any necessary corrections and click “Continue” and then “Add to deposit.” Navigate to the “To deposit” tab and “Review all” to ensure that your metadata record is accurate. Then select “Deposit” to finalize your submission. You’ll receive immediate feedback as to whether your metadata deposit was successful or not.\nAdding additional metadata Perhaps there are no problems with your metadata, and everything is completely accurate. That\u0026rsquo;s great! But, we encourage our members to submit metadata that is not just accurate, but also as thorough as possible. Check your Participation Report to see if there are any types of metadata that you haven\u0026rsquo;t been submitting yet, or that you haven\u0026rsquo;t been submitting for certain journals.\nMetadata Manager allows you to deposit references, licenses, and relationships between your articles and other DOIs, which weren’t possible to add using the Web Deposit Form. The same process described above for corrections will allow you to import previously registered articles and add in these new metadata elements.\nWe also know that many of our members register DOIs for their articles when they’re first published online, but aren’t yet included in an issue. When the articles are published in their final versions, there is important metadata added which wasn’t yet available when the DOI was first registered. This includes things like volume number, issue number, page numbers, and full publication date, all of which are extremely important for linking and discoverability. Sometimes the resolution URL changes when the article is moved from its pre-publication status to its final version.\nSo, when each issue is published, you can use Metadata Manager to pull up all the already-registered articles included in that issue and add in the newly relevant metadata like page numbers, issue number, URL, etc. Then add them to a new deposit, review, and submit.\nPlease check out the full Metadata Manager help documentation for more details, or join us on an upcoming workshop to test out Metadata Manager in real-time with us. 
And, as always, feel free to email us at support@crossref.org with any questions.\n", "headings": ["Correcting Errors","For articles originally registered using Metadata Manager","For articles registered using the Web Deposit Form or any other deposit method","Adding additional metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/metadata-manager/", "title": "Metadata Manager", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2019/", "title": "2019", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/jon-stark/", "title": "Jon Stark", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/resolution-reports-a-look-inside-and-ahead/", "title": "Resolution reports: a look inside and ahead", "subtitle":"", "rank": 1, "lastmod": "2019-12-17", "lastmod_ts": 1576540800, "section": "Blog", "tags": [], "description": "Isaac Farley, technical support manager, and Jon Stark, software developer, provide a glimpse into the history and current state of our popular monthly resolution reports. They invite you, our members, to help us understand how you use these reports. This will help us determine the best next steps for further improvement of these reports, and particularly what we do and don’t filter out of them.\n", "content": "Isaac Farley, technical support manager, and Jon Stark, software developer, provide a glimpse into the history and current state of our popular monthly resolution reports. They invite you, our members, to help us understand how you use these reports. This will help us determine the best next steps for further improvement of these reports, and particularly what we do and don’t filter out of them.\nIsaac joined Crossref in April 2018. Before that, he was with one of our members, a geoscience society in Oklahoma (USA). As a Crossref member, like all of our members, he received the resolution reports to his inbox during the first week of each month. And like many of you, he had questions.\nWhat exactly is this report? What are all these numbers? Now, what about those 10 top DOIs is making them so popular? Why are some of these DOIs failing? And, what’s with this filtering of “known search engine crawlers?” Now that Isaac is the Crossref Technical Support Manager, instead of asking these questions, he answers many of them.\nWhoa\u0026hellip;too fast\u0026hellip;what exactly are resolution reports? The resolution report provides an overview of DOI resolution traffic, and can identify problems with your DOI links. The failed DOI.csv linked to your resolution report email contains a list of all DOIs with failed resolution attempts (more on this later). 
If a user clicks on a DOI with your DOI prefix and the DOI is not registered, it won’t resolve to a web page, and thus will appear on your report.\nWhat are those numbers? This is always a good starting point for wrangling statistical information. Resolution statistics are based on the number of DOI resolutions made through the DOI proxy server on a month-by-month basis. These statistics give an indication of the traffic generated by users - both human and machine - clicking (or, resolving) DOIs. CNRI (the organization that manages the DOI proxy server) sends us resolution logs at the end of every month and we pass the data on to you.\nResolution reports are sent by default to the business contact on your account, and we can always add or change the recipient(s) as needed. We send a separate report for each DOI prefix you’re responsible for.\nHistorically we have done our best to filter out obvious crawlers and machine activity - thus valuing human-driven traffic over traffic generated by machines. That sentence about those obvious crawlers is the real reason we are blogging today.\nWhy are some of those DOIs failing? The ideal failure rate is 0%. A failure rate of 0% would mean that every DOI you owned that was clicked in the previous month successfully resolved to the resolution URL you registered with us. But, in reality, a 0% failure rate is rare, because any string of characters combined with your prefix (e.g., 10.5555/ThisIsNOTARealDOI) will go through the resolver when someone attempts to resolve it, and each failed attempt adds another count toward your monthly resolution report. If you are new to Crossref, or have only deposited metadata for a small number of content items, you may have a high failure percentage (for example, 2 failures and 8 successes = 20% failure rate).\nBefore 2019, the overall resolution failure rate across all publishers held fairly steady each month between 2 and 4%. You may have noticed that that number has been climbing this year. And, as a result, we think a new normal is closer to 10%.\nGiven this new norm, if your overall resolution failure rate is higher than 8 to 12%, we advise you to look closely at the failed DOI.csv file that we include in the monthly report we email you. The first step in your analysis of this portion of the report is to make sure the DOIs listed have been registered. Very often failures of legitimate DOIs are the result of content registration errors or workflow inefficiencies (i.e., DOIs are shared with the editorial team and/or contributors before being registered with us, leading to premature clicks). If during your investigation, you find invalid DOIs (like the example above: 10.5555/ThisIsNOTARealDOI) - and you will find invalid DOIs because we all make mistakes when resolving DOIs - you may simply ignore those DOIs within the report.
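If your failed DOI list is long, checking each entry by hand gets tedious. The sketch below is one hedged way to automate that first registration check; it assumes the failed DOI csv has one DOI per row in its first column (adjust to your file), and it uses our public REST API, where a 404 response means the DOI is not registered.

```python
# Hedged, illustrative sketch: flag DOIs from the failed-resolution CSV that are
# not registered. Assumes one DOI per row in the first column; the filename and
# mailto address are placeholders.
import csv
import requests

def is_registered(doi):
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "failed-doi-check/0.1 (mailto:you@example.org)"},
    )
    return resp.status_code == 200

with open("failed-dois.csv", newline="") as f:
    for row in csv.reader(f):
        doi = row[0].strip()
        if doi.startswith("10.") and not is_registered(doi):
            # Either an invalid string/typo (safe to ignore) or a missed deposit.
            print("Not registered:", doi)
```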
What’s with this filtering of “known search engine crawlers?” You may have recently noticed that we made a few changes to the resolution reports. We merged, rearranged, and in some cases completely rewrote the report you receive in your inboxes, because, well, it needed it. It was confusing. Parts of the report still are. Most specifically, those “known search engine crawlers.” To that point, you may have also noticed that the reports that arrived in your inboxes in early November 2019 were scrubbed of nearly 150 million resolutions across all members.\nBased on Jon’s analysis of these 150 million filtered resolutions, they were from bots. In the past, it was important to filter out bots, as we found our community was most focused on human readers. But should we be filtering out resolutions from bots anymore? We live in a time where most of our work (at least in the Crossref community) requires both human and machine interaction; thus, aren’t at least some of these resolutions from machines legitimate?\nOur internal analysis shows that we cannot reliably determine which usage is from:\nIndividual humans; Machines acting as intermediaries between researchers and DOIs; Internet service providers with real human users behind them; or, Bots that do not result in actual human usage. As a result, it is our thinking that we may serve you better by not filtering any traffic, as we cannot guarantee that we’re removing the right things. We feel that it may be better for us to just give you everything we know. And invite you to make your own judgments.\nHow\u0026rsquo;d we get here? Jon joined Crossref in 2004. He wrote the original version of the resolution report in late 2009 in an effort to provide you, our members, with information about the usage of registered Crossref DOIs. At that time, most members were creating DOIs, but then had no real feedback about the traffic that was getting to their content (via the DOI proxy server lookups of their DOIs). These reports filled that gap. The other benefit of the report was the information it provided about failed resolutions. As suggested above, the list of failed resolutions helped members identify potential problems with the content registration process.\nA DOI that appeared on the report as a failed resolution could be cause for concern for the member. But, then again, humans and machines make mistakes when attempting to resolve DOIs (e.g., typos). Thus, not much has changed in the last ten years - the DOIs that appear in the failed resolution reports must be evaluated. Care should especially be taken when a DOI that should have been registered has not been, and so appears as a failed resolution within this report (e.g., a data problem, or an agent behind on deposits).\nLike we said, mistakes happen. Users may enter a DOI incorrectly when looking it up. Or, it could be a bot throwing randomly generated traffic that looks like a DOI, but is not. And, sometimes bots are scraping through PDFs for DOIs and simply extract them incorrectly. These are all user errors, and not necessarily a concern for our members. That’s why we provide that list of what failed.\nAt the start, there were a few well-known crawlers that were resolving large numbers of DOIs regularly. It was our opinion at the time that it would be helpful to filter that usage since we assumed members only cared about human-driven traffic. As the next decade passed, it became clear that the internet had changed and would continue to change. With bots popping up every day and IP addresses moving or spanning broad address ranges (and IPs we had already filtered with the potential of being repurposed), it was obvious that we would always miss as much as we caught.\nBetween the constantly changing landscape and the fact that real usage can be hidden behind IP addresses that appear like bot traffic, we no longer have confidence in our filtering process. It may be best for our users to just get the data as it exists and know that our metadata world covers a vast range of usages - many as valid and valuable today as that human-driven traffic we prioritized ten years ago. 
Perhaps there is some other metric we can provide that might be useful for understanding the traffic in better ways, but filtering some of this traffic no longer seems useful.\nYour help with next steps There you have it. Our thinking: we’ve been filtering these resolution reports the best we can for ten years. Today, our confidence in the filtering process has waned. We’re proposing a change: we want to give you the raw resolution numbers, for machines and humans alike. We want to make this change soon, but we also want to hear from you.\nHow are you using the resolution reports? What do you think of this proposed change? Will our removal of all filters from monthly resolution reports affect how you use the information within? We want to hear from you, and we’re inviting you to help us determine our next steps. We are going to give you until Friday, 31 January to tell us what you think of this proposed change. Then, Isaac and Jon will be back in early February to share with you what you have helped us decide. Thanks in advance!\n", "headings": ["Whoa\u0026hellip;too fast\u0026hellip;what exactly are resolution reports?","What are those numbers?","Why are some of those DOIs failing?","What’s with this filtering of “known search engine crawlers?”","How\u0026rsquo;d we get here?","Your help with next steps"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-journey-of-a-crossref-ambassador-in-latin-america/", "title": "A Journey of a Crossref Ambassador in Latin America", "subtitle":"", "rank": 1, "lastmod": "2019-12-11", "lastmod_ts": 1576022400, "section": "Blog", "tags": [], "description": "English version –– Información en español\nIn this post, Arley Soto shares some experiences about his work as a Crossref ambassador in Latin America.\nWhen I joined as a volunteer Crossref ambassador in 2018, I never imagined that in less than two years, I would have the opportunity to travel to three Latin American cities, visit Toronto, organize the first Crossref LIVE in Spanish and hold webinars in Spanish about Crossref\u0026rsquo;s services.", "content": "English version –– Información en español\nIn this post, Arley Soto shares some experiences about his work as a Crossref ambassador in Latin America.\nWhen I joined as a volunteer Crossref ambassador in 2018, I never imagined that in less than two years, I would have the opportunity to travel to three Latin American cities, visit Toronto, organize the first Crossref LIVE in Spanish and hold webinars in Spanish about Crossref\u0026rsquo;s services. After almost two years of continuous learning, I think it is worth sharing my experience with the Crossref community for a better understanding of the ambassadors\u0026rsquo; role in Latin America and to inspire ambassadors from other parts of the world to write and post their experiences.\nBefore becoming a Crossref ambassador, I had already been working with Crossref since 2011, when we started to coordinate DOI registration for the Biomédica Journal of the National Health Institute, one of the first journals to implement the DOI in Colombia. During these first years of relations with Crossref, I acquired basic knowledge of membership and the technical aspects of the services the agency offers, including Reference Linking, Content Registration and Crossmark. 
This close relationship with Crossref enabled us to hold the PKP-Crossref workshop in 2018 with Juan Pablo Alperín and Susan Collins at the Third International Congress of Redalyc Editors at Universidad César Vallejo, in the city of Trujillo.\nIn the same year, thanks to the invitation by the State University System (SUE, for the Spanish original) (Bogotá chapter), I had the opportunity to give a presentation on Crossref during the 2018 International Open Access Week held at Universidad Militar Nueva Granada. Around 50 people participated, including members and non-members of Crossref. There, I emphasized the nature of Crossref as a membership-based, non-profit organization and the importance of new members participating in the annual elections organized by Crossref and running to be representatives on the Crossref Board of Directors.\nIn November 2018, I had the pleasure of participating in the Crossref Meeting in Toronto, thanks to an invitation from the organizers. There, I talked to the representatives of other organizations who are members of Crossref around the world and I also met some of the members of the Crossref team in person. This event was essential for me as an ambassador, because I learned about Crossref\u0026rsquo;s vision and different projects firsthand, which increased my capacity to explain Crossref\u0026rsquo;s scope and role in the area of scientific communications. I remember that the booth Crossref provided to answer technical questions was particularly useful. There, Isaac, Shayn and other members of the technical team were always available to resolve specific queries that I had not been able to resolve myself.\nIn my second year as an ambassador, I represented Crossref at the Universidad Central del Ecuador (Quito, Ecuador), in a talk attended by around 40 people from different parts of Ecuador. There, I emphasized the technical aspects of the DOI and good practices for its use in academic publications. This talk was held on April 21, 2019, in collaboration with Crossref and BITECA S.A.S., a sponsoring member of Crossref.\nIn May 2019, with Susan Collins and Vanessa Fairhurst, we organized Crossref LIVE Bogotá, which was not only successful because of the number of attendees from different parts of Colombia and other countries in the region, but also due to the meeting of Latin American ambassadors, where we worked the full morning discussing the priorities and issues of the region with ambassadors from Brazil, Mexico, Chile and Peru. Apart from other issues, at this meeting the need for better resources and support in Spanish for Spanish-speaking members became clear.\nAdditionally, we helped to review the Spanish translation of the \u0026ldquo;You are Crossref\u0026rdquo; booklet, which we printed and distributed at Crossref LIVE Bogotá.\nDuring 2019, I participated in the Introduction to Crossref and Content Registration and Introduction to Reference Linking and Cited-by webinars and held the first webinar in Spanish about the new Metadata Manager tool, always with the ongoing support and assistance of the Crossref team.\nAnd to end the year with a bang, together with Rachael Lammey, we organized the presentation: Open infrastructure and open data for the global metrics community: what can you build? 
I presented this at the 2Latmetrics: Altmetrics and Open Science in Latin America colloquium on November 4 in the city of Cusco (Peru).\nThis account of activities is a demonstration of the commitment of Crossref\u0026rsquo;s ambassadors to transmit the message of the importance of ethically and responsibly sharing, citing and making science visible on the web.\nSpanish version\nCuando me vinculé como embajador voluntario de Crossref en 2018, no imaginaba que en menos de dos años tendría la oportunidad de viajar a 3 ciudades en Latinoamérica, conocer Toronto, organizar el primer Crossref LIVE en español y realizar webinars en español sobre los servicios de Crossref. Después de casi dos años de continuo aprendizaje, creo que vale la pena compartir mi experiencia a la comunidad de Crossref para entender mejor el rol de los embajadores en Latinoamérica y para inspirar embajadores de otras regiones del mundo a que escriban y publiquen sus experiencias.\nAntes de convertirme en embajador de Crossref ya había trabajado con Crossref desde el año 2011, año en el que empezamos a gestionar DOI para la revista Biomédica del Instituto Nacional de Salud, una de las primeras revistas en implementar DOI en Colombia. Durante esos primeros años de relaciones con Crossref, adquirí un conocimiento básico sobre las membresías y los aspectos técnicos de los servicios que la agencia ofrece, incluyendo el Reference Linking, Content Registration y Crossmark, entre otros. Esta relación estrecha con Crossref favoreció para que en 2018 realizáramos el taller de PKP - Crossref entre Juan Pablo Alperín y Susan Collins en el 3er Congreso Internacional de Editores Redalyc, en la Universidad César Vallejo, ciudad de Trujillo En ese mismo año, gracias a la invitación realizada por el Sistema Universitario Estatal, SUE (capítulo Bogotá) tuve la oportunidad de hacer una presentación de Crossref en la Semana Internacional de Acceso Abierto 2018, realizado en Universidad Militar Nueva Granada 2018, allí participaron alrededor de 50 personas entre miembros y no miembros de Crossref, aquí hice énfasis en la naturaleza de Crossref como organización sin ánimo de lucro, basada en afiliaciones y la importancia de que los nuevos miembros participen en las votaciones anuales que organiza Crossref y que se postulen para ser representantes en la junta directiva de Crossref.\nEn noviembre de 2018 tuve el placer de participar en el Crossref Meeting en la ciudad de Toronto, gracias a una invitación de los organizadores. Allí conversé con representantes de otras organizaciones afiliadas a Crossref alrededor del mundo y también conocí en persona a algunos de los integrantes del equipo de Crossref. Este evento fue de vital importancia para mí como embajador ya que conocí de primera mano la visión y los diferentes proyectos que realiza Crossref, lo que aumentó mi capacidad para explicar en mi contexto el alcance y el papel de Crossref en el entorno de la comunicación científica. 
Recuerdo que fue particularmente útil el kiosco que dispuso Crossref para atender inquietudes técnicas en donde Isaac, Shane y otros miembros del equipo técnico siempre estuvieron dispuestos a solucionar dudas específicas que no había podido resolver antes por mi mismo.\nEn el segundo año como embajador representé a Crossref en la Universidad Central del Ecuador (Quito, Ecuador), charla a la que asistieron en promedio 40 personas de diversos lugares del Ecuador, allí hice énfasis en los aspectos técnicos del DOI y buenas prácticas de su utilización en publicaciones académicas.. Esta charla tuvo lugar el 21 de abril de 2019 y la realizamos en colaboración con Crossref y BITECA SAS miembro patrocinador en Crossref.\nEn mayo de 2019 organizamos junto con Susan Collins y Vanessa Fairshuit el Crossref LIVE Bogotá, que no solamente fue exitoso por la cantidad de asistentes de diferentes partes de Colombia y de otros países de la región, sino por la reunión de embajadores de Latinoamérica, donde trabajamos una mañana completa para discutir acerca de las prioridades y temáticas propias de la región con embajadores de Brasil, México, Chile y Perú. Entre otros asuntos, en esta reunión se hizo evidente la necesidad de tener mayores recursos y soporte en Español para los miembros hispanohablantes.\nAsí mismo contribuimos con la revisión de la traducción al español de la cartilla \u0026ldquo;Usted es Crossref\u0026rdquo; que imprimimos y repartimos durante el Crossref LIVE Bogotá.\nDurante 2019 participé en los webinars Introduction to Crossref and Content Registration y Introduction to Reference Linking and Cited-by webinar y llevé a cabo el primer Webinar en español sobre la nueva herramienta Metadata Manager, siempre con el acompañamiento y el soporte permanente del equipo de Crossref.\nY para terminar el año de la mejor manera, preparamos junto con Rachael Lammey la ponencia Open infrastructure and open data for the global metrics community: what can you build? Que presenté en el congreso 2Latmetrics: métricas alternativas y ciencia abierta en américa latina el 04 de noviembre en la ciudad de Cusco (Perú).\nEste recuento de actividades es una muestra del compromiso de los embajadores de Crossref en transmitir el mensaje de la importancia de compartir, citar y hacer visible la ciencia en la web, de una manera ética y responsable.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/introducing-our-new-director-of-finance-operations/", "title": "Introducing our new Director of Finance & Operations", "subtitle":"", "rank": 1, "lastmod": "2019-12-09", "lastmod_ts": 1575849600, "section": "Blog", "tags": [], "description": "I\u0026rsquo;m happy to announce that Lucy Ofiesh has joined Crossref as our new Director of Finance and Operations. Lucy has experience supporting the sustainability and governance of not-for-profit organizations having held roles such as Executive Vice President of the Brooklyn Children\u0026rsquo;s Museum and for the last few years as Chief Operating Officer at Center for Open Science, a Crossref member.\n", "content": "I\u0026rsquo;m happy to announce that Lucy Ofiesh has joined Crossref as our new Director of Finance and Operations. 
Lucy has experience supporting the sustainability and governance of not-for-profit organizations having held roles such as Executive Vice President of the Brooklyn Children\u0026rsquo;s Museum and for the last few years as Chief Operating Officer at Center for Open Science, a Crossref member.\nAt Center for Open Science, Lucy built her knowledge of the research communications community; she is knowledgeable about how diverse this community has become and the challenges of planning and scale that this comes with. She knows how to manage the complexities of an expanding global operation, where members, users––and staff––in several locations need fair, timely, and accurate information, whether it’s about how invoices relate to their use of our services or information about our approach to health benefits.\nFinance underpins all that Crossref does and is crucial to long term sustainability while ‘Operations’ is a varied function and it is only becoming more so as Crossref grows. The role encompasses human resources, organization culture, governance (including serving as secretary of the organization), and working as part of the senior leadership team. Lucy will bring community focus to our operations, putting member experience first so that it becomes easier to work with us, from implementing systems and processes that work for multiple languages and currencies to providing personable billing support.\nShe will also play a vital role on the Crossref leadership team, working with me and the other directors Bryan, Ginny, and Geoffrey to hone the strategies, goals, and metrics that will allow us to track progress and meet our ambitious goals.\nA word from Lucy… I am excited to be joining Crossref as its next Director of Finance and Operations. I previously worked for an organization that was a Crossref member and two qualities stood out to me: first, the focus with which Crossref has provided solutions to shared challenges across scholarly publishing; and second, the ways Crossref operates transparently and from a values-driven perspective.\nMy past experience has been in helping organizations run as effectively as they can, navigate change and growth, and build and support high functioning teams. Specifically, my work has focused on strategic and sustainability planning, financial forecasting, organizational governance, and staff management. My goal in finance and operations is to ensure that the working experience at Crossref––both for external partners and members and internal staff––is as frictionless as possible so we can have the greatest impact on our community.\nI am only the second person to step into this role. Lisa Hart Martin has led finance and operations for the first twenty years of Crossref\u0026rsquo;s existence. I am fortunate to be overlapping with her for a couple of weeks and grateful for the trust of the Crossref team to help guide us into our third decade. 
I really want to hear from our members so please reach out to me with your thoughts on Crossref\u0026rsquo;s finance and operations.\nPlease join us in welcoming Lucy to the Crossref community!\n", "headings": ["A word from Lucy…"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/proposed-schema-changes-have-your-say/", "title": "Proposed schema changes - have your say", "subtitle":"", "rank": 1, "lastmod": "2019-12-04", "lastmod_ts": 1575417600, "section": "Blog", "tags": [], "description": "The first version of our metadata input schema (a DTD, to be specific) was created in 1999 to capture basic bibliographic information and facilitate matching DOIs to citations. Over the past 20 years the bibliographic metadata we collect has deepened, and we’ve expanded our schema to include funding information, license, updates, relations, and other metadata. Our schema isn’t as venerable as a MARC record or as comprehensive as JATS, but it’s served us well.", "content": "The first version of our metadata input schema (a DTD, to be specific) was created in 1999 to capture basic bibliographic information and facilitate matching DOIs to citations. Over the past 20 years the bibliographic metadata we collect has deepened, and we’ve expanded our schema to include funding information, license, updates, relations, and other metadata. Our schema isn’t as venerable as a MARC record or as comprehensive as JATS, but it’s served us well. It’s not currently positioned to fully support everything we want to do long term - we’d like to support assertions, map cleanly to JATS and schema.org magically at the same time, and maybe even move beyond XML - but for now it’s something we can work with to empower member metadata to help find, cite, and connect scholarly content.\nWe’ve maintained backwards compatibility for most things since 2007 but this update will require some moderate changes to how contributors are modeled. The balance between supporting established tagging and addressing the evolution of what we collect and how it is expressed can be tricky. We want to collect good metadata without significantly disrupting the workflow of our membership, who are the source of the metadata. Even so, this is a fairly pragmatic update that will position us well for the future. I look forward to supporting new types of content and metadata in the future, but for now take a look at what I\u0026rsquo;m proposing.\nLeave feedback, ask questions, and make suggestions in the feedback document or via email to feedback@crossref.org. Next update I’m proposing some updates and additions to the metadata we collect, and would like your feedback. To fully and elegantly support affiliation identifiers and multiple author roles, we need to break backwards compatibility. Specifically, we want to:\nAdd support for CRediT The CASRAI CRediT taxonomy is increasingly used to represent roles common to contributors to research outputs. Our members are applying CRediT to contributors, so we want to capture them as well. Supporting CRediT allows Crossref and our membership to identify and credit contributors beyond authors and editors.\nAs most of you know, a contributor often does more than one thing - they write, they edit, they curate. We currently only allow one contributor role as an attribute, but, to realistically support CRediT and accurately capture evidence about the work, we need to allow multiple contributor roles. This will break backwards compatibility. 
We can potentially support the old way and the new way, but I’m trying to avoid awkward compromises wherever possible.\nSupporting CRediT doesn’t mean you need to adopt CRediT. We’ll continue to support existing author roles, but they’ll be marked up differently. Details are in our request for feedback document.\nExpand support for author and organization identifiers We collect ORCID iDs in our metadata but do not currently support other types of contributor identifiers. We also don\u0026rsquo;t support affiliation or organization identifiers beyond those assigned within our funder and clinical trial registries. We’ve had increasing demands from both metadata suppliers and users to expand support for affiliation identifiers because\u0026hellip;identifiers are useful. We also want to expand author identifier support as ORCID IDs may only be registered by researchers who are able to curate their own ORCID record. Adding support for ISNI and Wikidata IDs is a common request, but we anticipate there\u0026rsquo;s a need for other identifiers as well.\nOur plan is to accept identifiers registered with identifiers.org as well as other identifiers upon request. We prefer to remain consistent with the identifiers.org registry as much as possible.\nWe’re particularly keen to support open community-led identifiers like ORCID and ROR and will continue to do so, but also want to support the metadata our members want to distribute. Organization identifiers will be particularly useful as they’ll help us populate records with ROR IDs in the future, leading to better quality affiliation metadata.\nExpand support for a range of contributor names We currently require a surname for all contributors, and don’t provide comprehensive support for contributors whose names are represented by multiple alphabets, or who have nicknames or aliases, or who don’t have a surname. To begin with, we’ll replace surname with the more widely used ‘family name’ and remove the fixed surname requirement, allowing only a given name to be provided where appropriate. We’ll also allow a variety of names to be provided for each contributor.\nExpand affiliation support We currently collect affiliation as a single string - we’re going to break that up to support affiliation names, and add in support for organizational identifiers like ROR.\nExpand support for data citation For those of you who send us references, we’re adding a few fields to better support data citation. We’re also going to allow you to (optionally) supply a specific publication type for references.\nOther updates We’re making some other small updates as well. If you have a small request, we may be able to accommodate it in our next update. Larger changes or additions will probably have to wait for future updates, but we’d love to start collecting suggestions now.\nWe need your feedback! I\u0026rsquo;ll be giving a webinar on December 19 at 02:00 and 15:00 UTC to go over these changes in detail - please visit our webinars page to register.\nAgain, please leave feedback, ask questions, and make suggestions in the feedback document, or if you prefer send feedback via email to feedback@crossref.org. 
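If it helps in framing feedback on the contributor changes described above, here is a hedged illustration of the contributor metadata we expose through the public REST API today: a required family name, affiliations as plain name strings, and ORCID iDs as the only contributor identifier. The DOI below is a placeholder, and fields vary from record to record.

```python
# Hedged illustration of today's contributor metadata as exposed by the public
# REST API (not a preview of the proposed schema). The DOI is a placeholder.
import requests

doi = "10.5555/example-doi"
work = requests.get(f"https://api.crossref.org/works/{doi}").json()["message"]
for person in work.get("author", []):
    print(
        person.get("given"),
        person.get("family"),                                         # surname is currently required
        [aff.get("name") for aff in person.get("affiliation", [])],   # plain strings today
        person.get("ORCID"),                                          # currently the only contributor identifier
    )
```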
We\u0026rsquo;ll be taking feedback through January 15, 2020.\n", "headings": ["Next update","Add support for CRediT","Expand support for author and organization identifiers","Expand support for a range of contributor names","Expand affiliation support","Expand support for data citation","Other updates","We need your feedback!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-turning-point-is-a-time-for-reflection/", "title": "A turning point is a time for reflection", "subtitle":"", "rank": 1, "lastmod": "2019-11-09", "lastmod_ts": 1573257600, "section": "Blog", "tags": [], "description": "Crossref strives for balance. Different people have always wanted different things from us and, since our founding, we have brought together diverse organizations to have discussions\u0026mdash;sometimes contentious\u0026mdash;to agree on how to help make scholarly communications better. Being inclusive can mean slow progress, but we’ve been able to advance by being flexible, fair, and forward-thinking.\nWe have been helped by the fact that Crossref’s founding organizations defined a clear purpose in our original certificate of incorporation, which reads:", "content": "Crossref strives for balance. Different people have always wanted different things from us and, since our founding, we have brought together diverse organizations to have discussions\u0026mdash;sometimes contentious\u0026mdash;to agree on how to help make scholarly communications better. Being inclusive can mean slow progress, but we’ve been able to advance by being flexible, fair, and forward-thinking.\nWe have been helped by the fact that Crossref’s founding organizations defined a clear purpose in our original certificate of incorporation, which reads:\n“To promote the development and cooperative use of new and innovative technologies to speed and facilitate scientific and other scholarly research.”\nAs Crossref prepares to turn 20 in January 2020, it’s an opportunity to reflect on achievements and highlights from 2018-19 and also ponder the preceding decades. Change is a constant at Crossref but the organization has never strayed from its initial defined purpose. Our services and value now extend well beyond persistent identifiers and reference linking, and our connected open infrastructure benefits our 11,000+ membership as well as all those involved in scholarly research. This expansion is exactly what was envisioned to meet the goal of “speeding and facilitating” research.\nThis year\u0026rsquo;s annual report is different from previous years’; it has been expanded into a ‘fact file’ so that we can invite comments on the path ahead, based on transparent access to data about our membership, activities, and finances. As we were pulling together the charts and tables for this annual report we noticed stark differences in where Crossref is today compared to years past.\nThe rate of membership growth has accelerated and we now have over 180 new members joining every month, leading to one of the most striking changes we found. The lowest three membership tiers now account for 46% of revenue (up from 25% in 2011) while the highest three tiers account for 36% (down from 56% in 2011). Today, the typical Crossref member has just a few hundred registered content items. One way we have been able to accommodate this growth efficiently is by collaborating with sponsors in different countries. Very small members can join via a local sponsor that is able to provide technical, financial, language, and administrative support. 
We now have more members joining via sponsors, who otherwise would largely not be able to join at all. While you’d need to be a millionaire by US standards to join directly from Indonesia in our lowest fee tier (calculated using Purchasing Power Parity), the sponsor program\u0026mdash;supported often by government investment in science and education\u0026mdash;has enabled Indonesian organizations to join Crossref in large numbers, supporting their aim to become one of the fastest-growing nations in open research, and to help that research be discovered.\nCrossref has repeatedly stayed ahead of developments in the community In 2007, when the Similarity Check working group discussions and pilot started, there was disagreement on the board about whether Crossref should provide such a service and whether it was a strategic priority for members. By the end of the pilot, when the decision came to launch a production service, it was seen as essential and a top priority. This conclusion has been borne out in recent research into the value of Crossref; Similarity Check is one of the services of most importance to members.\nAdding preprints as a record type was controversial at the time. The board discussed the topic of “duplicative works” for about two years with strong opinions on all sides. The working group delivered a good set of policies and technical specifications and in the July 2015 board meeting there was a majority—but not 100%—agreement on the motion to approve. We implemented preprints as a record type just in time to accommodate the snowballing of preprint servers emerging from existing and new members.\nAnother example of a former\u0026mdash;and current\u0026mdash;area of contention is the approach to metadata. When Crossref first launched, there were lengthy discussions about what metadata we should collect. The initial focus was on the minimal set of metadata to enable reference matching in support of reference linking. In the beginning, neither article titles, lists of authors, references, nor abstracts were included in the minimal metadata set. We supported them as optional but most members opted out. However, the huge set of metadata that Crossref collects and disseminates now is seen as essential, providing a lot of value for members in terms of discoverability.\nToday, Crossref enables metadata retrieval on a large scale—an average of more than 600 million queries per month—through a variety of interfaces, most notably the REST API (Public, Polite, and Plus versions). The metadata is used by thousands of organizations and services—both commercial and not-for-profit—increasing the discoverability of member content. In fact, members of all stripes have long initiated projects to expand the metadata Crossref is able to collect and disseminate: from facilitating text mining (through license and full-text URLs); to enabling better connections with and evidence of contributions (through Funder IDs, ORCID iDs, and soon CRediT roles and ROR IDs).\nThese are all examples of where Crossref has successfully “promoted the cooperative use of new and innovative technologies” and where we are meeting our mission to make scholarly communications a little bit better. As ever, we need to thank our brilliant staff for their unfailing resilience, balance, and diligence, in these times of dynamic change.\nConsidering the value and future of Crossref Research is global, and supporting a diverse global community is a challenge. 
This year, we conducted our first wide-ranging investigation into what people value from Crossref. This involved telephone interviews with over 40 community members as well as an online survey of 600+ respondents.\nThe results of the value research are referenced throughout the annual report/fact file and are available online publicly. We will be discussing the insights in various forums and posing some questions, such as:\nHow should Crossref balance the different dynamics in the community? Are the right members involved in key decisions? Are the sustainability model we have and the fees we charge fair? Which initiatives should be top or bottom priorities? Director of MIT Press, Amy Brand, recently reflected that Crossref is currently at a crossroads, envisioning that:\n“The Crossref of 2040 could be an even more robust, inclusive, and innovative consortium to create and sustain core infrastructures for sharing, preserving, and evaluating research information.”\nBut only if Crossref is not:\n“held back, and its remit circumscribed, by legacy priorities and forces within the industry that may perceive open data and infrastructure as a threat to their own evolving business interests.”\nWe welcome this public commentary and encourage others in the community to respond and report what value Crossref offers as community-owned infrastructure, and how they’d like to see the organization evolve.\nMore than ever, we need to have this discussion with a broad and representative group. So please, read the value research report and the annual report/fact file, and get ready to voice your opinions!\n", "headings": ["Crossref has repeatedly stayed ahead of developments in the community","Considering the value and future of Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/machine-learning/", "title": "Machine Learning", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/whats-your-citations-style/", "title": "What’s your (citations’) style?", "subtitle":"", "rank": 1, "lastmod": "2019-10-29", "lastmod_ts": 1572307200, "section": "Blog", "tags": [], "description": "Bibliographic references in scientific papers are the end result of a process typically composed of: finding the right document to cite, obtaining its metadata, and formatting the metadata using a specific citation style. This end result, however, does not preserve the information about the citation style used to generate it. Can the citation style be somehow guessed from the reference string only?\nTL;DR I built an automatic citation style classifier. It classifies a given bibliographic reference string into one of 17 citation styles or \u0026ldquo;unknown\u0026rdquo;.", "content": "Bibliographic references in scientific papers are the end result of a process typically composed of: finding the right document to cite, obtaining its metadata, and formatting the metadata using a specific citation style. This end result, however, does not preserve the information about the citation style used to generate it. Can the citation style be somehow guessed from the reference string only?\nTL;DR I built an automatic citation style classifier. 
It classifies a given bibliographic reference string into one of 17 citation styles or \u0026ldquo;unknown\u0026rdquo;. The classifier is based on supervised machine learning. It uses TF-IDF feature representation and a simple Logistic Regression model. For training and testing, I used datasets generated automatically from Crossref metadata. The accuracy of the classifier estimated on the test set is 94.7%. The classifier is open source and can be used as a Python library or REST API. Introduction Threadgill-Sowder, J. (1983). Question Placement in Mathematical Word Problems. School Science and Mathematics, 83(2), 107-111 This reference is the end result of a process that typically includes: finding the right document, obtaining its metadata, and formatting the metadata using a specific citation style. Sadly, the intermediate reference forms or the details of this process are not preserved in the end result. In general, just by looking at the reference string we cannot be sure which document it originates from, what its metadata is, or which citation style was used.\nThe global multi-billion-dollar fashion industry proves without a doubt that people care about their fashion style. But why should we care about the citation style used to generate a specific reference? This might seem like an insignificant piece of information, but it can be a powerful clue when we try to solve tasks like:\nReference parsing, i.e., extracting metadata from the reference string. If the style is known, we also know where to expect metadata fields in the string, and it is typically enough to use simple regular expressions instead of complicated (and slow) machine learning-based parsers. Discipline/topic classification. Citation styles used in documents correlate with their discipline. As a result, knowing the citation style used in the document could provide a useful clue for a discipline classifier. Extracting references from documents. Conforming to a specific style might suggest that the reference string was correctly located within a larger document. Even though the style is not directly mentioned in the reference string, the string contains useful clues. Some styles will abbreviate the authors\u0026rsquo; first names, and others won\u0026rsquo;t. Some will place the year in parentheses, others separate it with commas. The presence of such fragments in the reference string can be used as the input for the style classifier.\nI used these clues to build an automatic style classifier. It takes a single reference string as input and classifies it into one of 17 styles or \u0026ldquo;unknown\u0026rdquo;. You can use it as a Python library or via REST API. The source code is also available. If you find this project useful, I would love to hear about it!\nAnd if you are interested in more details about the classifier and how it was built, read on.\nData The data for the experiments was generated automatically. The training and the test set were generated in the same way but from two different samples. The process was the following:\n5,000 documents were randomly chosen from the Crossref collection. Each document was formatted into 17 citation styles. This resulted in 85,000 pairs (reference string, citation style). Very short reference strings were removed. A short reference string typically results from very incomplete metadata of the document. From a number of randomly selected references, I removed fragments like the name of the month. 
These fragments appear in the automatically generated reference strings because sometimes months are included in the metadata records in the Crossref collection. However, they rarely appear in real-life reference strings, so removing them made the dataset more reliable. 5,000 strings labelled as \u0026ldquo;unknown\u0026rdquo; were also added. These were generated by randomly swapping the words in the \u0026ldquo;real\u0026rdquo; reference strings. This process resulted in two sets: a training set containing 87,808 data points and a test set containing 87,625 data points. The training set was used to choose various classification parameters and to train the final model. The test set was used to obtain the final estimation of the classifier\u0026rsquo;s accuracy.\nStyles The classifier was trained on the following 17 citation styles (+ \u0026ldquo;unknown\u0026rdquo;):\nacm-sig-proceedings american-chemical-society american-chemical-society-with-titles american-institute-of-physics american-sociological-association apa bmc-bioinformatics chicago-author-date elsevier-without-titles elsevier-with-titles harvard3 ieee iso690-author-date-en modern-language-association springer-basic-author-date springer-lecture-notes-in-computer-science vancouver These 17 styles were chosen to cover a vast majority of references that we see in the real-life data, without including too many variants of very similar styles.\nIf you need a different style set, fear not. You can use the library to train your own model based on exactly the styles you need.\nFeatures Our learning algorithm cannot work directly with the raw input text. It needs numerical features. In the case of text classification (and reference strings are text), one very common feature representation is bag-of-words. In the simplest variant, each feature represents a single word, and the value of the feature is binary: 1 if the word is present in the text, 0 otherwise.\nThere are many variants of this representation, for example:\nThe input text typically undergoes normalization before the features are extracted. Depending on the use case, this might include lowercasing, removing punctuation, bringing the words to their canonical form by stemming, etc. We do not have to use single words as features. In some use cases, it is beneficial to use n-grams, which correspond to fixed-length sequences of words. Instead of binary values, we might want to use some other feature weight schemes, such as the famous TF-IDF representation. Our use case is not a typical case of text classification. We cannot use raw words as features, as words do not carry the information about the citation style. Imagine the same document formatted in different styles –– those reference strings will contain the same words, and the learning algorithm won\u0026rsquo;t be able to distinguish between them.\nAs a side note, in some cases, some specific words might be important. For example, if the reference contains the word \u0026ldquo;algorithm\u0026rdquo;, chances are the document is from computer science. If so, then perhaps the citing paper is from computer science as well. And in computer science, some styles are more popular than others. Machine learning algorithms are pretty good at detecting such correlations in the data. In the first version of our classifier, however, we do not take this into account. This keeps things simpler.\nIf not words, then what matters in our case? It seems that the information about the style is present in punctuation, capitalization and abbreviations.\nTo capture these clues, before extracting the features we first map our reference string into a sequence of \u0026ldquo;word types\u0026rdquo; (or \u0026ldquo;character types\u0026rdquo;). The types are the following: lowercase-word, lowercase-letter, uppercase-word, uppercase-letter, capitalized-word, other-word, year, number, dot, comma, left-parenthesis, right-parenthesis, left-bracket, right-bracket, colon, semicolon, slash, dash, quote, other.\nIn addition, we mark the beginning and the end of the reference string with special types start and end.\nSo for example this string:\nEberlein, T. J. Yearbook of Surgery 2006, 322–324. is mapped into this sequence:\nstart capitalized-word comma uppercase-letter dot uppercase-letter dot capitalized-word lowercase-word capitalized-word year comma number dash number dot end This transformation effectively brings together different words, as long as their form is the same.
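To make the mapping and feature extraction concrete, here is a minimal, illustrative sketch; it is not the project's actual code, and the token rules, the crude year pattern, and the tiny training set are stand-ins for the real implementation.

```python
# Minimal, illustrative sketch of the approach (not the project's code):
# map a reference string to "word type" tokens, then classify TF-IDF-weighted
# n-grams of those tokens with logistic regression. Training data is a stand-in.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

PUNCT = {".": "dot", ",": "comma", "(": "left-parenthesis", ")": "right-parenthesis",
         "[": "left-bracket", "]": "right-bracket", ":": "colon", ";": "semicolon",
         "/": "slash", "-": "dash", "\u2013": "dash", '"': "quote"}

def word_type(tok):
    """Map a single token to a coarse 'word type' category."""
    if tok in PUNCT:
        return PUNCT[tok]
    if re.fullmatch(r"(1[5-9]|20)\d\d", tok):   # crude year detector, for illustration only
        return "year"
    if tok.isdigit():
        return "number"
    if tok.isupper():
        return "uppercase-letter" if len(tok) == 1 else "uppercase-word"
    if tok.islower():
        return "lowercase-letter" if len(tok) == 1 else "lowercase-word"
    if tok[:1].isupper():
        return "capitalized-word"
    return "other"

def to_types(reference):
    """Turn a reference string into a space-separated word-type sequence."""
    tokens = re.findall(r"\w+|[^\w\s]", reference)
    return " ".join(["start"] + [word_type(t) for t in tokens] + ["end"])

# 2- to 4-grams of word types, weighted with TF-IDF, fed to logistic regression.
model = make_pipeline(
    TfidfVectorizer(analyzer="word", token_pattern=r"\S+", ngram_range=(2, 4), lowercase=False),
    LogisticRegression(max_iter=1000),
)

# Stand-in training data; in practice, strings rendered from Crossref metadata
# in each of the 17 styles (plus shuffled "unknown" strings) would be used.
train_strings = ["Eberlein, T. J. Yearbook of Surgery 2006, 322-324.",
                 "Eberlein, T. J. (2006). Yearbook of Surgery, 322-324."]
train_labels = ["american-chemical-society", "apa"]
model.fit([to_types(s) for s in train_strings], train_labels)
print(model.predict([to_types("Smith, A. B. Journal of Examples 2010, 1-10.")]))
```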
After transforming the reference string we extract 2-grams, 3-grams and 4-grams. The values of the features are TF-IDF weights.\nSome example features in our representation include:\nlowercase-word lowercase-word lowercase-word lowercase-word - a sequence of four lowercase words. It is most likely part of the article title and won\u0026rsquo;t have a huge impact on the decision about the citation style. capitalized-word comma uppercase-letter dot - typical representation of an author in some styles, where the first name is given as an initial only and follows the last name. left-parenthesis year right-parenthesis - typical for styles that enclose the year in parentheses. number dash number - this sequence is most likely a page range. Learning algorithm I tested four learning algorithms (naive Bayes, logistic regression, linear support vector classification and random forest) using 5-fold cross-validation on the training set. The plot shows the distribution of accuracies obtained by each algorithm:\nBased on these results, logistic regression was chosen as the algorithm with the best mean accuracy and the lowest variance of the results.\nFinal accuracy estimation The final model was trained on the entire training set and evaluated on the test set. Accuracy was used as the evaluation metric. In this case, accuracy is simply the fraction of the references in the test set correctly classified by the classifier.\nThe accuracy on the test set was 94.7%. The confusion matrix shows which styles were most often confused with each other:\nThe most often confused styles are chicago-author-date and american-sociological-association. Let\u0026rsquo;s see some example strings from these two styles:\nLegros, F. 2003. \u0026#34;Can Dispersive Pressure Cause Inverse Grading in Grain Flows?: Reply.\u0026#34; Journal of Sedimentary Research 73(2):335–335 Legros, F. 2003. \u0026#34;Can Dispersive Pressure Cause Inverse Grading in Grain Flows?: Reply.\u0026#34; Journal of Sedimentary Research 73 (2) : 335–335 Clarke, Jennie T. 2011. \u0026#34;Recognizing and Managing Reticular Erythematous Mucinosis.\u0026#34; Archives of Dermatology 147(6):715 Clarke, Jennie T. 2011. \u0026#34;Recognizing and Managing Reticular Erythematous Mucinosis.\u0026#34; Archives of Dermatology 147 (6) : 715 Chalmers, Alan, and Richard Nicholas. 1983. 
\u0026#34;Galileo on the Dissipative Effect of a Rotating Earth.\u0026#34; Studies in History and Philosophy of Science Part A 14(4):315–40 Chalmers, Alan, and Richard Nicholas. 1983. \u0026#34;Galileo on the Dissipative Effect of a Rotating Earth.\u0026#34; Studies in History and Philosophy of Science Part A 14 (4) : 315–340 It seems that the styles are indeed very similar. The strings look almost identical, apart from spacing, which is not included in any way in our feature representation. No wonder that the classifier confuses these two styles a lot.\nA more detailed analysis of the classifier can be found here.\n", "headings": ["TL;DR","Introduction","Data","Styles","Features","Learning algorithm","Final accuracy estimation"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/working-groups/metadata/", "title": "Metadata Practitioner Interest Group", "subtitle":"", "rank": 2, "lastmod": "2019-10-07", "lastmod_ts": 1570406400, "section": "Working groups", "tags": [], "description": "The Metadata Practitioners Interest Group advises Crossref on publishing needs and trends as they impact Crossref metadata. We want our metadata to be compatible, complete, credible, and curated. Our metadata comes from our members, and development efforts need to be community-led.\nWe are currently working on:\nUpdating the Crossref metadata schema Identifying metadata weak spots Redefining preprints Working with JATS and JATS4R Just like working and advisory groups, the interest group is open to all members of the Crossref community, members and users alike.", "content": "The Metadata Practitioners Interest Group advises Crossref on publishing needs and trends as they impact Crossref metadata. We want our metadata to be compatible, complete, credible, and curated. Our metadata comes from our members, and development efforts need to be community-led.\nWe are currently working on:\nUpdating the Crossref metadata schema Identifying metadata weak spots Redefining preprints Working with JATS and JATS4R Just like working and advisory groups, the interest group is open to all members of the Crossref community, members and users alike. However, interest groups tend to be a bit \u0026rsquo;looser\u0026rsquo;, and can come together sporadically. Your participation can be passive or active, enthusiastic or occasional. This group consists of people with a shared interest in shaping the metadata we collect. 
Contact Patricia (pfeeney@crossref.org) to be added to our monthly call and mailing list.\nParticipants Midori Baer, National Academy of Sciences Debra Borrelli, West Virginia University Asbjørn Dahl, National Library of Denmark Iwan Joe Dewanto, Pengurus Besar Persatuan Dokter Gigi Indonesia Uli Fechner, Beilstein Institut Favio Andres Florez, Editorial Pontificia Universidad Javeriana April Gilbert, San Jose State University Mark Gillespie, Reactome Johannes Gottschalt, Bohlau Verlag Xiaofeng Guo, Wanfang Data Melissa Harrison, eLife Angela Herrera, Universidad Nacional de Educacion Enrique Guzman y Valle Helen King, BMJ Cyrenes Moncawe, National Fisheries Research and Development Institute Mike Nason, PKP/UNB Jaime de la Ossa, Universidad de Sucre James Phillpotts, OUP Carly Robinson, OSTI ", "headings": ["Participants"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/accidental-release-of-internal-passwords-api-tokens-for-the-crossref-system/", "title": "Accidental release of internal passwords, & API tokens for the Crossref system", "subtitle":"", "rank": 1, "lastmod": "2019-10-04", "lastmod_ts": 1570147200, "section": "Blog", "tags": [], "description": "TL;DR On Wednesday, October 2nd, 2019 we discovered that we had accidentally pushed the main Crossref system as part of a docker image into a developer’s account on Docker Hub. The binaries and configuration files that made up the docker image included embedded passwords and API tokens that could have been used to compromise our systems and infrastructure. When we discovered this, we immediately secured the repo, changed all the passwords and secrets, and redeployed the system code.", "content": "TL;DR On Wednesday, October 2nd, 2019 we discovered that we had accidentally pushed the main Crossref system as part of a docker image into a developer’s account on Docker Hub. The binaries and configuration files that made up the docker image included embedded passwords and API tokens that could have been used to compromise our systems and infrastructure. When we discovered this, we immediately secured the repo, changed all the passwords and secrets, and redeployed the system code. We have since been scanning all of our logs and systems to see if there has been any unusual activity that could be related to the exposure of the container.\nPlease note that no external data e.g. member passwords or personal information were exposed; our source code contains only internal passwords and ‘secrets’ such as API tokens.\nThankfully, the way in which these secrets were exposed (in compressed, binary files which were, in turn, in a Docker image) means that they were probably overlooked by the automated exploitation tools which focus on scanning source code. And, so far, we have seen nothing that would indicate that these passwords and secrets have been exploited. We will, of course, inform our members directly (and update this blog) if that changes.\nMore than you probably want to know If you are continuing to read this, my guess is that you might have questions like:\nWhy are you doing something as silly as embedding secrets and passwords in your code? And wait a minute… I thought Crossref code was open source? 
And why is the director of strategic initiatives announcing this? Let me answer these questions in random order.\nIn March 2019 I took over Crossref’s technical teams when Chuck Koscher announced that he would be retiring at the end of the year. I’m now the director of technology \u0026amp; research.\nA few months earlier we had already concluded that a major portion of the Crossref system had accumulated 20 years of technical debt and that we were going to spend a significant portion of 2019 and 2020 paying down that debt.\nSpecifically, a lot of the code that runs Crossref was inherited from a third party who developed it back in the early 2000s. This means that, even though any new systems that we’ve developed since 2007 have been open-source, the code for the oldest parts of the system has remained closed because it contained potentially proprietary code as well as a lot of deprecated coding practices. Also - the architecture, the tooling, and the development processes behind the Crossref system had not changed much in those twenty years. It was fantastic architecture, tooling, and code for its time. But architectures that scale to millions of records need to change to handle hundreds of millions of records. Processes that work for configuring one service need to change when you are managing dozens of services. And support tools that work for a few hundred members break down when you are dealing with tens of thousands of members.\nThese parts of the Crossref system were decidedly not 12 factor. We were not using DevOps or SRE working practices to run them. And the bulk of that part of the system is still being run in a traditional data center.\nBut since March we have been slowly fixing that. In incremental steps. Some of which are visible as a side effect of the security incident that precipitated this blog post. For example, one of our first moves was to move our development to Gitlab. Even though a big chunk of the base Crossref code is still closed source, we saw moving to Gitlab as a priority because Gitlab offers a fantastic suite of tools to help automate and manage our deployments. Similarly, we have been Dockerizing the Crossref system so that it is easier to scale and run in different environments. And as part of this effort, we have spent a lot of time on the issue of how to best handle secrets. We knew our secrets management in this part of the codebase was horrible. We have been developing some experiments and infrastructure for handling these secrets securely. But we haven’t finished this work yet. And so the system slipped out into a public repo too early. Ironically, this too illustrates a fundamental change in the way we develop things. Our default is to be open and transparent. This case is currently an exception. An exception we want to eliminate, but one we are not ready to do yet. We have to audit and scrub the code first.\nYes, this incident has been embarrassing. But not nearly as embarrassing as the fact that Crossref has succumbed to a technology industry cliche. That we spent so much time growing and focusing on new features for our members, that we neglected some of the creaking infrastructure of our infrastructure.\nAnd I should be clear about two things:\nFirst, not all of our code is like this. We have, for a long time, been building open source software and using modern best practices for secrets management in our newer subsystems and services. 
The problems described above are confined to twenty-year-old-code that we didn’t write in the first place and that we had been avoiding refactoring.\nAnd second, the technology team has been marvelous at responding to the challenge we face. They have adopted new processes and tools. They are learning new techniques. We are steadily chipping away at these problems.\nIt is generally considered bad practice to praise or reward technology teams for fire-fighting instead of fire prevention, but this may be the exception that proves the rule.\nI was blown away by how the technology, product, and support teams worked together. When we discovered this problem, I sat at my desk in rural France and watched as staff from the UK, and all three US time zones shut down this problem in just a couple of hours. Obviously, I wish we hadn’t had the problem in the first place, but seeing their response did a great deal to encourage me that we are on the right track.\nIn any case, it looks like we’ve been lucky. And we’ll be working even harder to refactor our code, tools, and processes so that this kind of thing doesn’t happen again.\n", "headings": ["TL;DR","More than you probably want to know"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/code/", "title": "Code", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/crossref-system/", "title": "Crossref System", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/gitlab/", "title": "Gitlab", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/request-for-feedback-conference-id-implementation/", "title": "Request for feedback: Conference ID implementation", "subtitle":"", "rank": 1, "lastmod": "2019-09-13", "lastmod_ts": 1568332800, "section": "Blog", "tags": [], "description": "We’ve all been subject to floods of conference invitations, it can be difficult to sort the relevant from the not-relevant or (even worse) sketchy conferences competing for our attention. In 2017, DataCite and Crossref started a working group to investigate creating identifiers for conferences and projects. 
Identifiers describe and disambiguate, and applying identifiers to conference events will help build clear durable connections between scholarly events and scholarly literature.\nChaired by Aliaksandr Birukou, the Executive Editor for Computer Science at Springer Nature, the group has met regularly over the past two years, collaborating to create use cases and define metadata to identify and describe conference series and events.", "content": "We’ve all been subject to floods of conference invitations, it can be difficult to sort the relevant from the not-relevant or (even worse) sketchy conferences competing for our attention. In 2017, DataCite and Crossref started a working group to investigate creating identifiers for conferences and projects. Identifiers describe and disambiguate, and applying identifiers to conference events will help build clear durable connections between scholarly events and scholarly literature.\nChaired by Aliaksandr Birukou, the Executive Editor for Computer Science at Springer Nature, the group has met regularly over the past two years, collaborating to create use cases and define metadata to identify and describe conference series and events. We first asked for input on metadata specifications in April 2018. Technical implementation kicked off in February with a workshop at CERN to discuss the mechanics of making PIDs for conferences a reality.\nWe’ve reached another milestone and want your feedback Crossref has supported a number of conference publication-related PIDs for years - members can currently register PIDs for conference series publications, conference proceedings, and of course individual conference papers - and that won’t change, but we will also be supporting DOI registration for conferences. A crucial step towards this is of course integrating the new identifier into our metadata input schema.\nThe details We currently collect some limited metadata describing the conference itself such as theme, location, and dates as part of the conference series or proceeding metadata, but do not apply a DOI to that information. The new Conference ID records will include expanded metadata as defined by the working group. You\u0026rsquo;ll be able to register a distinct metadata record for a single conference. You\u0026rsquo;ll also be able to register a record for a conference series, and connect Conference IDs to conference proceeding metadata records and DOIs.\nChanges to the conference-specific metadata are backwards compatible. Members will be able to register event metadata per usual, or can instead use the new event metadata to register an identifier for their conference event and/or series. This means a member can:\nRegister conference, conference series, proceedings series, proceedings, and papers in one submission Register proceedings or proceedings series and papers without a Conference ID included Register Conference IDs only Update an existing conference record with a Conference PID I’ve written up our proposal in this google doc and we want your feedback before we proceed with implementation. Please comment directly in the Google doc, open a Gitlab issue, or feedback@crossref.org. 
We’ll keep the document open for comments until September 30.\n", "headings": ["We’ve reached another milestone and want your feedback","The details"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/speaking-traveling-listening-learning/", "title": "Speaking, Traveling, Listening, Learning", "subtitle":"", "rank": 1, "lastmod": "2019-08-29", "lastmod_ts": 1567036800, "section": "Blog", "tags": [], "description": "2019 has been busy for the Community Outreach Team; our small sub-team travels far and wide, talking to members around the world to learn how we can better support the work they do. We run one-day LIVE local events alongside multi-language webinars, with the addition of a new Community Forum, to better support and communicate with our global membership.\nThis year we held a publisher workshop in London in collaboration with the British Library in February to talk about all things metadata and Open Access, before heading over to speak to members in Kyiv in March at the National Technical University of Ukraine.", "content": "2019 has been busy for the Community Outreach Team; our small sub-team travels far and wide, talking to members around the world to learn how we can better support the work they do. We run one-day LIVE local events alongside multi-language webinars, with the addition of a new Community Forum, to better support and communicate with our global membership.\nThis year we held a publisher workshop in London in collaboration with the British Library in February to talk about all things metadata and Open Access, before heading over to speak to members in Kyiv in March at the National Technical University of Ukraine. June saw our first ever non-English LIVE local event in Bogota held in collaboration with Biteca, and in an action-packed week in July, Rachael Lammey and myself jetted across to Kuala Lumpur and Bangkok where we collaborated with Malaysian Ministry of Education, USIM, Chulalongkorn University, iGroup, and ORCID to run two events for our South-East Asian members.\nDespite the varied locations, speakers and audiences at these events, some common themes emerged\u0026hellip;\nLanguage Matters We currently work with member organisations in over 125 countries around the world, spanning an even greater number of languages. Whilst, at the moment at least, it is not possible to provide support across all these languages, we are improving support for non-native English speakers. We now have service videos, factsheets, and brochures available in 8 languages including: French, Spanish, Brazilian Portuguese, Arabic, Chinese, Japanese, Korean, and Bahasa Indonesia. As well as expanding our webinars to include a series in Russian, Brazilian Portuguese, Arabic, Spanish and Turkish so far.\nOur global team of 24 Ambassadors have been key in helping us to provide translated documentation, to run multi-lingual webinars and in-person events, and to answer questions from our members across languages and timezones. Our LIVE local event in Bogota, saw us run our first ever Spanish event with support from our Latin American ambassador team.\nI know first hand how daunting public speaking can be, particularly in a second language. As a non-native Spanish speaker, the fear of being misunderstood or mis-pronouncing a word can be paralysing. Members come along to our events with a whole host of questions, sometimes preferring to come and speak to us one-on-one at the break or follow up with us after the event. 
Everyone has their own preferences; however, being able to communicate in the local language helps to break down barriers and boosts audience participation by taking away these added pressures.\nAdditionally, after running a number of these events, one of the key things we have learnt is how much content to cover in a day. Our LIVE locals are free to attend and open to the whole community. This, however, can mean that we have a very varied audience in terms of technical know-how and experience of working with our systems. At first we attempted to cover all we could, addressing as many needs, questions and uses of Crossref metadata as we could. However, creating content to please everyone is often a recipe for disaster and information overload. If you start to see your attendees’ eyes glaze over or they start answering emails on their smartphones, you’ve lost them.\nInstead we are now going to tailor our events a little more, asking registrants questions in advance, and selecting specific topics to cover. Having a good range of distinct topics and presenters, including local guest speakers, also helps to maintain momentum and avoid audience fatigue. Wider information and conversations will then continue on our Community Forum, and events will be supplemented by webinars in local languages and timezones.\nRelationship status: It’s complicated A question we are often asked when talking to members is how to link distinct content items in the metadata - whether this be a data-set to the published results, a preprint with the version of record, or a translated version of an article with the original. Linking these related research outputs is extremely important; researchers need to be able to cite the correct version of the work they have used in their research. Creating a network of these linkages between scholarly outputs also helps us, our members, and the wider community better track how research is used and developed.\nEnglish is by far the most common language used in international academic journals and often is required for publication; however, an article can be published in two or more languages, enabling greater discovery and use of the research. A frequent question we get asked is how to register the two versions, whether they use the same DOI or whether each should be assigned its own identifier. Our advice is that each version of the article should have its own DOI for citation reasons, but should be linked in the metadata of the translated version as in the xml example below:\nHowever, our schema covers far more relationship types than purely translations. Another interesting area of discussion which has become increasingly prevalent in the last couple of years is around preprints. We began supporting the registration of preprints in November 2016, using their specific record type and enabling linking in the metadata to the version of record, providing a clear publication history for accurate citation. Today we have almost 150k preprints registered in our system.\nIn Kyiv, we had a request to talk more about data citation; the importance of making data available and persistently linking to it. Although data is often shared, it is not routinely referenced in the same way as journal articles or other publications, and this is something we want to encourage. When data is cited it provides clarity and context about the research underpinning the published article, as well as enabling greater discovery and re-use of that data in future research and publications. 
You can do this in two ways at Crossref, either by including data citations in your reference lists, or, again, by using the relations section of the schema. If you want to learn more about the ‘how’ of data citation, we have some useful guidance you can take a look at.\n", "headings": ["Language Matters","Relationship status: It’s complicated","Finding Solutions to Resolutions","Get involved"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2019-election-slate/", "title": "2019 election slate", "subtitle":"", "rank": 1, "lastmod": "2019-08-23", "lastmod_ts": 1566518400, "section": "Blog", "tags": [], "description": "2019 Board Election The annual board election is a very important event for Crossref and its members. The board of directors, comprising 16 member organizations, governs Crossref, sets its strategic direction and makes sure that we fulfill our mission. Our members elect the board - it\u0026rsquo;s \u0026ldquo;one member one vote\u0026rdquo; - and we like to see as many members as possible voting. We are very pleased to announce the 2019 election slate - we have a great set of candidates and an update to the ByLaws addressing the composition of the slate to ensure that the board continues to be representative of our membership.", "content": "2019 Board Election The annual board election is a very important event for Crossref and its members. The board of directors, comprising 16 member organizations, governs Crossref, sets its strategic direction and makes sure that we fulfill our mission. Our members elect the board - it\u0026rsquo;s \u0026ldquo;one member one vote\u0026rdquo; - and we like to see as many members as possible voting. We are very pleased to announce the 2019 election slate - we have a great set of candidates and an update to the ByLaws addressing the composition of the slate to ensure that the board continues to be representative of our membership.\n2019 Election Slate Crossref received 52 expressions of interest this year through the link that was sent out via our blog, and over 100 emails from members interested in serving on our Board. It is very exciting to see that our members want to be involved.\nIn March of this year, the Board made a motion per the recommendation of an ad hoc Governance Committee. It was resolved to \u0026ldquo;provide the following guidance to the Nominating Committee: To achieve balance between revenue tiers by proposing a 2019 slate consisting of one Revenue Tier 1 seat and four Revenue Tier 2 seats, and a 2020 slate consisting of four Revenue Tier 1 seats and two Revenue Tier 2 seats; thereby resulting in, as nearly as practicable, an equal balance between board members representing Revenue Tier 1 and Revenue Tier 2 (as those terms are defined in Crossref\u0026rsquo;s ByLaws below).\u0026rdquo;\nSection 2. Nominating Committee. The Board shall appoint a Nominating Committee of five (5) members, each of whom shall be either a Director or the designated representative of a member that is not represented on the Board, whose duty it shall be to nominate candidates for Directors to be elected at the next annual election. The Nominating Committee shall designate a slate of candidates for each election that is at least equal in number to the number of Directors to be elected at such election. 
Each such slate will be comprised such that, as nearly as practicable, one-half of the resulting Board shall be comprised of Directors designated by Members then representing Revenue Tier 1; and one-half of the resulting Board shall be comprised of Directors designated by Members then representing Revenue Tier 2. \u0026ldquo;Revenue Tier 1\u0026rdquo; means all consecutive membership dues categories, starting with the lowest dues category, that, when taken together, aggregate, as nearly as possible, to fifty percent (50%) of Crossref\u0026rsquo;s annual revenue. \u0026ldquo;Revenue Tier 2\u0026rdquo; means all membership dues categories above Revenue Tier 1. The Nominating Committee shall notify the Secretary in writing, at least twenty (20) days before the date of the annual meeting, of the names of such candidates, and the Secretary, except as herein otherwise provided, shall transmit a copy thereof to the last recorded address of each member of record simultaneously with the notice of the meeting.\nThe Committee and the Board has worked very hard to balance the Board, so you will see two categories on the ballot, large and small.\nThe 2019 slate includes: seven candidates for five available seats Candidate organizations, in alphabetical order, for the Small category (1 seat available):\neLife, Melissa Harrison The Royal Society, Stuart Taylor Candidate organizations, in alphabetical order, for the Large category (4 seats available):\nClarivate Analytics, Nandita Quaderi Elsevier, Chris Shillum IOP, Graham McCann Springer Nature, Reshma Shaikh Wiley, Todd Toler Take a look at the candidates\u0026rsquo; organizational and personal statements You can be part of this important process, by voting in the election If your organization is a voting member in good standing of Crossref as of September 13, 2019, you are eligible to vote when voting opens on September 27, 2019.\nHow can you vote? On September 27, 2019, your organization\u0026rsquo;s designated voting contact will receive an email with the Formal Notice of Meeting and Proxy Form with concise instructions on how to vote. You will also receive a user name and password with a link to our voting platform.\nThe election results will be announced at LIVE19 Amsterdam on November 13, 2019.\n", "headings": ["2019 Board Election","2019 Election Slate","The 2019 slate includes: seven candidates for five available seats","Take a look at the candidates\u0026rsquo; organizational and personal statements","You can be part of this important process, by voting in the election","How can you vote?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/lisa-hart-martin/", "title": "Lisa Hart Martin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/building-better-metadata-with-schema-releases/", "title": "Building better metadata with schema releases", "subtitle":"", "rank": 1, "lastmod": "2019-08-21", "lastmod_ts": 1566345600, "section": "Blog", "tags": [], "description": "This month we have officially released a new version of our input metadata schema. 
As well as walking through the latest additions, I\u0026rsquo;ll also describe here how we\u0026rsquo;re starting to develop a new streamlined and open approach to schema development, using GitLab and some of the ideas under discussion going forward.\n", "content": "This month we have officially released a new version of our input metadata schema. As well as walking through the latest additions, I\u0026rsquo;ll also describe here how we\u0026rsquo;re starting to develop a new streamlined and open approach to schema development, using GitLab and some of the ideas under discussion going forward.\nWhat\u0026rsquo;s included in version 4.4.2 The latest schema as of August 2019 is version 4.4.2 and this release now includes:\nSupport for \u0026ldquo;pending publication\u0026rdquo; Support for JATS 1.2 abstracts Abstract support for dissertations and reports, and support for multiple abstracts wherever they are available Support for multiple dissertation authors A new acceptance_date element added to journal article, book, book chapter, and conference paper record types \u0026ldquo;Pending publication\u0026rdquo; is the term we\u0026rsquo;ve coined for the phase where a manuscript has been accepted for publication but where the publisher needs to communicate a DOI much earlier than most article metadata is available. Some members asked for the ability to register and assign DOIs prior to online publication, even without a title, so this allows members to register a DOI with minimal metadata, temporarily, before online publication. There is of course no obligation to use this feature.\nIt\u0026rsquo;s worth calling out the addition of acceptance_date too. This is a key attribute that is heavily requested by downstream metadata users like universities. Acceptance dates allow people to report on outputs much more accurately, so we do encourage all members to start including acceptance dates in their metadata. It\u0026rsquo;s highly appreciated!\nSchema files public on GitLab I’ve added our latest schema to a new GitLab repository. There you’ll find the schema files, some documentation, and the opportunity to suggest enhancements. The schema has been released as bundle 0.1.1 and also includes our new Grant metadata schema for members that fund research.\nThe schema has been available in some form for months but at this point we consider it ‘officially’ released to kick off our new but necessary practice of formal schema releases. Any forthcoming updates will be added to the next version.\nSchema management process We’ve been adding sets of metadata and new record types over the years, but also need to have a defined process for small but vital pieces of metadata that you need to provide and retrieve from our metadata records. If you’re wondering what our procedure for updating our schema is, you are not alone! We have not had a formal process, instead relying on ad-hoc requests from our membership and working groups. Our release management and schema numbering have also not been consistent.\nGoing forward, I will ensure that all forthcoming versions of our metadata schema are posted as a draft on GitLab for review and comment, and the final version will be officially released via GitLab as well.\nIt\u0026rsquo;s important to note that when we talk about \u0026ldquo;the schema\u0026rdquo;, we generally mean the input schema specifically, i.e. what members of Crossref can register about the content they produce. 
As always, the output for retrieving that metadata is subject to separate development plans for our Metadata APIs. I\u0026rsquo;m working with our technical team so we can develop and introduce an \u0026rsquo;end-to-end\u0026rsquo; approach that doesn\u0026rsquo;t in future treat the input and the output as such separate considerations.\nWhat\u0026rsquo;s next Many of the updates in this latest release have been in the works for some time. Changes to our metadata both large and small are considered carefully, but I’d like to do this in a transparent and cooperative way with our community.\nI recently set up the \u0026ldquo;Metadata Practitioners Interest Group\u0026rdquo; and we\u0026rsquo;ve just had our second call. A big topic was how to best manage the ideas and requests from the community. The ability for public comments on GitLab is a first step.\nThis most recent update contains a mix of long term projects and updates to keep our metadata current and useful. Other changes that are under discussion will require more development on our end. But stay tuned for more information about forthcoming changes, as well information about how you can contribute.\n", "headings": ["What\u0026rsquo;s included in version 4.4.2","Schema files public on GitLab","Schema management process","What\u0026rsquo;s next"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/introducing-our-new-director-of-product/", "title": "Introducing our new Director of Product", "subtitle":"", "rank": 1, "lastmod": "2019-08-19", "lastmod_ts": 1566172800, "section": "Blog", "tags": [], "description": "I\u0026rsquo;m happy to announce that Bryan Vickery has joined Crossref today as our new Director of Product. Bryan has extensive experience developing products and services at publishers such as Taylor \u0026amp; Francis, where he led the creation of the open-access platform Cogent OA. Most recently he was Managing Director of Research Services at T\u0026amp;F, including Wizdom.ai after it was acquired.\n", "content": "I\u0026rsquo;m happy to announce that Bryan Vickery has joined Crossref today as our new Director of Product. Bryan has extensive experience developing products and services at publishers such as Taylor \u0026amp; Francis, where he led the creation of the open-access platform Cogent OA. Most recently he was Managing Director of Research Services at T\u0026amp;F, including Wizdom.ai after it was acquired.\nHe previously held a range of roles from Publisher to Chief Operations Officer at BioMedCentral, as well as online community and technology leadership roles at Elsevier.\nBryan is a great addition to Crossref and we are lucky to have him. The product team is keen to progress the long list of wishes from our community with his guidance. Bryan will bring focus and clarity to our roadmap and our development processes, making it easier for people to adopt and participate in our services, and ensuring that we are working on the issues that are most important to our members.\nHe will also be a vital part of the leadership team, working with me and the other directors Geoffrey, Ginny, and Lisa to help us take the organization forward in a transparent way that serves our mission and empowers our excellent staff.\nAnd now a few words from Bryan… I’m thrilled to be joining Crossref as Director of Product at a time of considerable change in scholarly communication. I’ve worked in, and around, scholarly publishing for more than 20 years.\nThis is a challenging role. 
We have many exciting services and collaborations to progress, and also technical debt to address (like everyone else) to upgrade our existing services - it’s essential we balance these. My priority is to stay on top of the issues of the highest value to the scholarly community, now and in the future, and ensure we deliver services that are both useful and usable.\nI will be attending Crossref LIVE19 “The strategy one” along with other staff and look forward to meeting many of our members then. In the meantime, I\u0026rsquo;d love to hear your thoughts on where we’ve been (what it’s like working with us and using our services) and where we\u0026rsquo;re going (what you’d like to see from us). You can reach me via our feedback email.\nPlease join us in welcoming Bryan to the Crossref community.\n", "headings": ["And now a few words from Bryan…"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/alice-meadows/", "title": "Alice Meadows", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/helena-cousijn/", "title": "Helena Cousijn", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/stephanie-harley/", "title": "Stephanie Harley", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/well-be-rocking-your-world-again-at-pidapalooza-2020/", "title": "We’ll be rocking your world again at PIDapalooza 2020", "subtitle":"", "rank": 1, "lastmod": "2019-08-18", "lastmod_ts": 1566086400, "section": "Blog", "tags": [], "description": "The official countdown to PIDapalooza 2020 begins here! It\u0026rsquo;s 163 days to go till our flame-lighting opening ceremony at the fabulous Belem Cultural Center in Lisbon, Portugal. Your friendly neighborhood PIDapalooza Planning Committee\u0026mdash;Helena Cousijn (DataCite), Maria Gould (CDL), Stephanie Harley (ORCID), Alice Meadows (ORCID), and I\u0026mdash;are already hard at work making sure it’s the best one so far!\n", "content": "The official countdown to PIDapalooza 2020 begins here! It\u0026rsquo;s 163 days to go till our flame-lighting opening ceremony at the fabulous Belem Cultural Center in Lisbon, Portugal. Your friendly neighborhood PIDapalooza Planning Committee\u0026mdash;Helena Cousijn (DataCite), Maria Gould (CDL), Stephanie Harley (ORCID), Alice Meadows (ORCID), and I\u0026mdash;are already hard at work making sure it’s the best one so far!\nWe have a shiny [new website](https://pidapalooza.org), with loads more information than before, including spotify playlists (please add your PID songs to [the 2020 one](https://open.spotify.com/playlist/1oJtbpTzF9I3MewQ1Yasml?si=D0TKdR8BTJSL-GA3X_LwVQ)!), an instagram photo gallery, and of course registration information. 
Look out for updates there and on [Twitter](https://twitter.com/pidapalooza). And, led by Helena, the Program Committee is starting its search for sessions that meet PIDapalooza’s goals of being PID-focused, fun, informative, and interactive. If you’ve a PID story to share, a PID practice to recommend, or a PID technology to launch, the Committee wants to hear from you. Please send them your ideas, using this form, by September 27. We aim to finalize the program by late October/early November.\nDon’t forget to tie your proposal into one of the six festival themes: Theme 1: Putting Principles into Practice FAIR, Plan S, the 4 Cs; principles are everywhere. Do you have examples of how PIDs helped you put principles into practice? We’d love to hear your story!\nTheme 2: PID Communities We believe PIDs don’t work without community around them. We would like to hear from you about best practice among PID communities so we can learn from each other and spread the word even further!\nTheme 3: PID Success Stories We already know PIDs are great, but which strategies worked? Share your victories! Which strategies failed? Let’s turn these into success stories together!\nTheme 4: Achieving Persistence through Sustainability Persistence is a key part of PIDs, but there can’t be persistence without sustainability. Do you want to share how you sustain your PIDs or how PIDs help you with sustainability?\nTheme 5: Bridging Worlds - Social and Technical What would make heterogeneous PID systems \u0026lsquo;interoperate\u0026rsquo; optimally? Would standardized metadata and APIs across PID types solve many of the problems, and if so, how would that be achieved? And what about the social aspects? How do we bridge the gaps between different stakeholder groups and communities?\nTheme 6: PID Party! You don’t just learn about PIDs through powerpoints. What about games? Interpretive dance? Get creative and let us know what kind of activity you’d like to organize at PIDapalooza this year!\nPIDapalooza: the essentials What? PIDapalooza 2020 - the open festival of persistent identifiers When? 29-30 January 2020 (kickoff party the evening of January 28) Where? Belem Cultural Center, Lisbon, Portugal (map) Why? To think, talk, live persistent identifiers for two whole days with your fellow PID people, experts, and newcomers alike!\nWe hope you’re as excited about PIDapalooza 2020 as we are and we look forward to seeing you in Lisbon.\n", "headings": ["Don’t forget to tie your proposal into one of the six festival themes:","Theme 1: Putting Principles into Practice","Theme 2: PID Communities","Theme 3: PID Success Stories","Theme 4: Achieving Persistence through Sustainability","Theme 5: Bridging Worlds - Social and Technical","Theme 6: PID Party!","PIDapalooza: the essentials"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/live19-the-strategy-one-have-your-say/", "title": "LIVE19, the strategy one: have your say", "subtitle":"", "rank": 1, "lastmod": "2019-08-11", "lastmod_ts": 1565481600, "section": "Blog", "tags": [], "description": "With a smaller group than usual, we\u0026rsquo;re dedicating this year\u0026rsquo;s annual meeting to hear what you value about Crossref. Which initiatives would you put first and/or last? Where would you have us draw the line between mission and ambition? What is “core” for you? How could/should we adapt for the future in order to meet your needs?\nStriving for balance Different people want different things from us. 
As Aristotle said: \u0026ldquo;There is only one way to avoid criticism: do nothing, say nothing, and be nothing.", "content": "With a smaller group than usual, we\u0026rsquo;re dedicating this year\u0026rsquo;s annual meeting to hear what you value about Crossref. Which initiatives would you put first and/or last? Where would you have us draw the line between mission and ambition? What is “core” for you? How could/should we adapt for the future in order to meet your needs?\nStriving for balance Different people want different things from us. As Aristotle said: \u0026ldquo;There is only one way to avoid criticism: do nothing, say nothing, and be nothing.\u0026rdquo; As we prepare for our 20th year of operation, please join this unique meeting to help shape the future of Crossref.\nThere won\u0026rsquo;t be any plenary talks about trends in scholarly communications, but instead workshop-style activities to help hone our strategy, do some scenario planning, and prioritize goals together, as a community.\nHave your say Whether you can make it in person or not, you can still pitch in by giving us your opinion in advance. We\u0026rsquo;re gathering broad input on what you think we\u0026rsquo;re doing well, whether we\u0026rsquo;re on the right track strategically, and how we can improve. There\u0026rsquo;s never been such a comprehensive study of what value we offer so we hope to learn a lot and will adjust plans based on the results. Please take the \u0026ldquo;Value of Crossref\u0026rdquo; survey. It\u0026rsquo;ll take 10-12 minutes. At the meeting Please join us at the Tobacco Theater in central Amsterdam on the afternoon of 13th November from 12:30 pm and for the full day of 14th November. The first afternoon will involve some scene-setting talks with key information you\u0026rsquo;ll need for the following day\u0026rsquo;s workshops, including the results of the survey above. There will also be some announcements, including who members have voted onto our board (this year\u0026rsquo;s slate is yet to be communicated), and of course plenty of time for discussion and questions among peers.\nIn addition to the results of the survey, during the meeting each participant will be furnished with a \u0026lsquo;fact pack\u0026rsquo; to reference in their discussions and recommendations. It will include answers to questions like who pays to keep Crossref sustainable?. I\u0026rsquo;m looking forward to busting some myths on that one! Everyone will be pre-assigned to a particular table/topic (like a wedding!) and will stay in those groups for roundtable discussions. There will be a community facilitator and a staff member on each table. You will be able to mingle more widely in the breaks and the evening drinks reception on the 13th.\nBased on this provided data, we\u0026rsquo;ll be asking participants to think about key questions such as:\nWho, ultimately, does Crossref serve? What should Crossref\u0026rsquo;s product development priorities be? What (if anything) would be missed if Crossref went away? (i.e. what\u0026rsquo;s our central value) What does \u0026lsquo;community\u0026rsquo; really mean and how should Crossref work to better balance opposing priorities? Research is global, and supporting a diverse global community is a challenge. Come and have your say. 
Register today.\nI can\u0026rsquo;t wait to see you there and hear your thoughts.\n", "headings": ["Striving for balance","Have your say","At the meeting"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/funders-and-infrastructure-lets-get-building/", "title": "Funders and infrastructure: let’s get building", "subtitle":"", "rank": 1, "lastmod": "2019-07-29", "lastmod_ts": 1564358400, "section": "Blog", "tags": [], "description": "Human intelligence and curiosity are the lifeblood of the scholarly world, but not many people can afford to pursue research out of their own pocket. We all have bills to pay. Also, compute time, buildings, lab equipment, administration, and giant underground thingumatrons do not come cheap. In 2017, according to statistics from UNESCO, $1.7 trillion dollars were invested globally in Research and Development. A lot of this money comes from the public - 22c in every dollar spent on R\u0026amp;D in the USA comes from government funds, for example.", "content": "Human intelligence and curiosity are the lifeblood of the scholarly world, but not many people can afford to pursue research out of their own pocket. We all have bills to pay. Also, compute time, buildings, lab equipment, administration, and giant underground thingumatrons do not come cheap. In 2017, according to statistics from UNESCO, $1.7 trillion dollars were invested globally in Research and Development. A lot of this money comes from the public - 22c in every dollar spent on R\u0026amp;D in the USA comes from government funds, for example. Funders really do support a LOT of research.\nFor that research to count, it needs to be communicated. For us to interpret those research communications critically, we need to understand how the research was done and who paid for it.\nAt Crossref, we’ve been working with funders for many years. The Open Funder Registry was launched (with donated support from Elsevier) in 2012, and provides a taxonomy of funders, each uniquely identified, which has grown to cover 20,000 funders around the world. This resource has helped to connect the organizations that provide research funds to resources, projects, and publications. Some are also members and have been registering content with us. This is a growing trend as more funders start to launch their own open platforms. Funders also consume metadata from Crossref members, using it to track and report on the published outputs of the researchers they support.\nMore recently, we have been exploring the ways that we can do more in partnership with the funding community. As our board concluded in 2017,\nCrossref requires increased emphasis on funders, understanding their needs and requirements and increasingly including funders in the scholarly communication dialogue.\nIn response, we have explored new services and practical enhancements to our existing portfolio, such as the new grants registration system, which will also power search and lookup tools.\nThis new initiative will link structured information about grants with DOIs, and enable us to provide open tools to help institutions, publishers, and research supporting organizations to re-use that data and make long-lasting connections between specific funding (and other kinds of research support) and research activities and outcomes. 
The value of this was beautifully explained by our friends at Wellcome (now members) in this blog post, and was reinforced by a recent survey undertaken by ORCID in which linking grants to outputs was cited as one of the major challenges facing funders. The Crossref Grant Linking System launched this July with a group of early adopter funders, ably supported by the team at Europe PMC.\nWe’re not stopping there though: we are lucky to have a dedicated and engaged funder advisory group, and we will continue to work with them to understand how our interactions with funders can benefit the wider ecosystem that we support, and help funders to achieve their goals.\nThere are many platforms providing vital intelligence to funders, from Dimensions to OpenAIRE, which rely on Crossref data. Last month, I was at the OAI11 workshop in Geneva, and it was striking how many presentations included a slide that mentioned using Crossref data. There were 200 people from the open science community there, and they clearly rely on Crossref as a foundational infrastructure to build their ecosystem. That community is also just a subset of the more than 2,500 registered consumers of Crossref metadata. We need to keep asking how this metadata can improve the information available to funders, to their partners and to service providers. Adding grants to the mix will help all of these parties provide an even richer picture of research.\nAs we move forward with our engagement with the global funding community, new opportunities are becoming visible, and not just for funders. Better experiences for authors, reduced overhead for publishers and easier benchmarking for institutions are a selection of benefits that this work can help us realize.\nWhen we really start to get to grips with opening up information about the inputs to research in the way we already have with its outputs, truly exciting things can happen. The really great thing about this is that, quite literally, everyone benefits: from Crossref members to everyone touched by advances in our understanding of the world. Let’s get building!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/josh-brown/", "title": "Josh Brown", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/big-things-have-small-beginnings-the-growth-of-the-open-funder-registry/", "title": "Big things have small beginnings: the growth of the Open Funder Registry", "subtitle":"", "rank": 1, "lastmod": "2019-07-21", "lastmod_ts": 1563667200, "section": "Blog", "tags": [], "description": "The Open Funder Registry plays a critical role in making sure that our members correctly identify the funding sources behind the research that they are publishing. It addresses a similar problem to the one that led to the creation of ORCID: researchers\u0026rsquo; names are hard to disambiguate and are rarely unique; they get abbreviated, have spelling variations and change over time.\nThe same is true of organizations. 
You don’t have to read all that many papers to see authors acknowledge funding from the US National Institutes of Health as NIH, National Institutes for Health, National Institute of Health, etc.", "content": "The Open Funder Registry plays a critical role in making sure that our members correctly identify the funding sources behind the research that they are publishing. It addresses a similar problem to the one that led to the creation of ORCID: researchers\u0026rsquo; names are hard to disambiguate and are rarely unique; they get abbreviated, have spelling variations and change over time.\nThe same is true of organizations. You don’t have to read all that many papers to see authors acknowledge funding from the US National Institutes of Health as NIH, National Institutes for Health, National Institute of Health, etc. And wait, are you sure they didn’t mean National Institute for Health Research? (An entirely separate UK-based funder).\nAnd a lot of countries have a National Science Foundation…\nIf each funder has a unique identifier, our members can include it in the metadata that they register with us, giving a clear and accurate link between the funder of the research and the published outcomes. And we can make that information available to everyone via our API, and build human interfaces so that you can look it up.\nMany types of funding bodies are represented in the Funder Registry, from government agencies and large international foundations to small single-mission charities, and everything in between. As well as a unique DOI for each institution, the Registry contains additional metadata that can help to identify the funder such as country, abbreviated or alternate names, translated names, and so on.\nThe Registry also supports relationships between different funders. These can be hierarchical parent/child relationships for larger organizations, or connections between archival and current entries in instances where a funder has changed its name or become part of another body (to tell us about these kinds of changes you just need to get in touch).\nThe Registry was donated to Crossref by Elsevier when we first introduced funding information as part of our Content Registration schema back in 2012. We started out with a list of just over 4000 funders. Through an ongoing partnership the list has been - and continues to be - updated on a monthly basis by Elsevier, and sent to Crossref as a formatted XML file that we process and release.\nIn return, Crossref sends Elsevier a feed of funder names that our members have registered with us that are not present in the Registry, which a team at Elsevier validates and adds to their databases, and then puts those newly-identified funders in to the next iteration of the list they send to us. It’s nice and circular and benefits both parties.\nWe released v1.27 of the Funder Registry last week, and it contains entries for an impressive 21,356 funders. I’ve been involved in this project since its inception, and have enjoyed a productive and cooperative working relationship with the team at Elsevier, headed by Peter Berkvens (Senior Product Manager) and Paul Mostert (Director Product Management). I asked them to explain a little about the process from their side:\n“Our team maintains a workflow in which Acknowledgement and Funding sections from articles are scanned for appearances of funding organizations using Natural Language Processing techniques. 
External Elsevier vendors then edit the data and add the validated names of the funders to what is called the Funding Bodies Taxonomy. The latter feeds Crossref’s Open Funder Registry.\nCurrently, the Taxonomy is nearing 22,000 Funders. It is expected it will grow to 25,000 Funders eventually. When this stage is reached, Elsevier believes that all existing Funders will be covered in the Funder Registry. Elsevier will continue to maintain the list adding new Funders as soon as they appear in scientific papers.\nElsevier’s Primary Articles production workflow for ScienceDirect uses the Funder Registry during the copyediting process, validating and tagging the Funders that appear in the accepted articles for Elsevier journals hosted by ScienceDirect. We then send the funder names and IDs to Crossref as part of our metadata.”\nThanks to everyone involved for getting us ever-closer to a truly comprehensive list of funders.\nAnd if you’re a member who’s not already registering funding information, why not look into getting started? It all leads to richer metadata which means more people can find, cite and re-use research \u0026ndash; and we all know that’s a good thing\u0026hellip;\n", "headings": ["We released v1.27 of the Funder Registry last week, and it contains entries for an impressive 21,356 funders."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/metadataplus-thank-you/", "title": "Thank you for completing the Metadata Plus subscriber form", "subtitle":"", "rank": 1, "lastmod": "2019-07-10", "lastmod_ts": 1562716800, "section": "Become a member", "tags": [], "description": "Thanks for completing the Metadata Plus subscriber form Here\u0026rsquo;s what happens next: We’ll send you a pro-rated invoice for your Metadata Plus subscriber fee for the remainder of this current calendar year. Once we confirm the invoice is paid, we’ll give you access to the key manager where you will be able to create your API keys.\nPlease contact our Metadata Plus support team if you have any questions in the meantime.", "content": "Thanks for completing the Metadata Plus subscriber form Here\u0026rsquo;s what happens next: We’ll send you a pro-rated invoice for your Metadata Plus subscriber fee for the remainder of this current calendar year. Once we confirm the invoice is paid, we’ll give you access to the key manager where you will be able to create your API keys.\nPlease contact our Metadata Plus support team if you have any questions in the meantime.\n", "headings": ["Thanks for completing the Metadata Plus subscriber form","Here\u0026rsquo;s what happens next:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/reference-matching/", "title": "Reference Matching", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-if-i-told-you-that-bibliographic-references-can-be-structured/", "title": "What if I told you that bibliographic references can be structured?", "subtitle":"", "rank": 1, "lastmod": "2019-07-08", "lastmod_ts": 1562544000, "section": "Blog", "tags": [], "description": "Last year I spent several weeks studying how to automatically match unstructured references to DOIs (you can read about these experiments in my previous blog posts). 
But what about references that are not in the form of an unstructured string, but rather a structured collection of metadata fields? Are we matching them, and how? Let\u0026rsquo;s find out.\n", "content": "Last year I spent several weeks studying how to automatically match unstructured references to DOIs (you can read about these experiments in my previous blog posts). But what about references that are not in the form of an unstructured string, but rather a structured collection of metadata fields? Are we matching them, and how? Let\u0026rsquo;s find out.\nTL;DR 43% of open/limited references deposited with Crossref have no publisher-asserted DOI and no unstructured string. This means they need a matching approach suitable for structured references. [EDIT 6th June 2022 - all references are now open by default]. I adapted our new matching algorithms: Search-Based Matching (SBM) and Search-Based Matching with Validation (SMBV) to work with both structured and unstructured references. I compared three matching algorithms: Crossref\u0026rsquo;s current (legacy) algorithm, SBM and SBMV, using a dataset of 2,000 structured references randomly chosen from Crossref\u0026rsquo;s references. SBMV and the legacy algorithm performed almost the same. SBMV\u0026rsquo;s F1 was slightly better (0.9660 vs. 0.9593). Similarly as in the case of unstructured references, SBMV achieved slightly lower precision and better recall than the legacy algorithm. Introduction Those of you who often read scientific papers are probably used to bibliographic references in the form of unstructured strings, as they appear in the bibliography, for example:\n[5] Elizabeth Lundberg, “Humanism on Gallifrey,” Science Fiction Studies, vol. 40, no. 2, p. 382, 2013. This form, however, is not the only way we can store the information about the referenced paper. An alternative is a structured, more machine-readable form, for example using BibTeX format:\n@article{Elizabeth_Lundberg_2013, year = 2013, publisher = {{SF}-{TH}, Inc.}, volume = {40}, number = {2}, pages = {382}, author = {Elizabeth Lundberg}, title = {Humanism on Gallifrey}, journal = {Science Fiction Studies} } Probably the most concise way to provide the information about the referenced document is to use its identifier, for example (🥁drum roll\u0026hellip;) the DOI:\n\u0026lt;https://0-doi-org.libus.csd.mu.edu/10.5621/sciefictstud.40.2.0382\u0026gt; It is important to understand that these three representations (DOI, structured reference and unstructured reference) are not equivalent. The amount of information they carry varies:\nThe DOI, by definition, provides the full information about the referenced document, because it identifies it without a doubt. Even though the metadata and content are not directly present in the DOI string, they can be easily and deterministically accessed. It is by far the preferred representation of the referenced document. The structured reference contains the metadata of the referenced object, but it doesn\u0026rsquo;t identify the referenced object without a doubt. In our example, we know that the paper was published in 2013 by Elizabeth Lundberg, but we might not know exactly which paper it is, especially if there are more than one document with the same or similar metadata. The unstructured reference contains the metadata field values, but without the names of the fields. This also doesn\u0026rsquo;t identify the referenced document, and even its metadata is not known without a doubt. 
In our example, we know that the word “Science” appears somewhere in the metadata, but we don\u0026rsquo;t know for sure whether it is a part of the title, journal title, or maybe the author\u0026rsquo;s (very cool) name. The diagram presents the relationships between all these three forms:\nThe arrows show actions that Crossref has to perform to transform one form to another.\nGreen transformations are in general easy and can be done without introducing any errors. The reason is that green arrows go from more information to less information. We all know how easy it is to forget important stuff!\nGreen transformations are typically performed when the publication is being created. At the beginning the author can access the DOI of the referenced document, because they know exactly which document it is. Then, they can extract the bibliographic metadata (the structured form) of the document based on the DOI, for example by following the DOI to the document\u0026rsquo;s webpage or retrieving the metadata from Crossref\u0026rsquo;s REST API. Finally, the structured form can be formatted into an unstructured string using, for example, the CiteProc tool.\nWe\u0026rsquo;ve also automated it further and these two green transformations (getting the document\u0026rsquo;s metadata based on the DOI and formatting it into a string) can be done in one go using Crossref\u0026rsquo;s content negotiation.\nRed transformations are often done in systems that store bibliographic metadata (like our own metadata collection), often at a large scale. In these systems, we typically want to have DOIs (or other unique identifiers) of the referenced documents, but in practice we often have only structured and/or unstructured form. To fix this, we match references. Some systems also perform reference parsing (thankfully, we discovered we do not need to do this in our case).\nIn general, red transformations are difficult, because we have to go from less information to more information, effectively recreating the information that has been lost during paper writing. This requires a bit of reasoning, educated guessing, and juggling probabilities. Data errors, noise, and sparsity make the situation even more dire. As a result, we do not expect any matching or parsing algorithm to be always correct. Instead, we perform evaluations (like in this blog post) to capture how well they perform on average.\nMy previous blog post focused on matching unstructured references to DOIs (long red \u0026ldquo;matching\u0026rdquo; arrow). In this one, I analyse how well we can match structured references to DOIs (short red \u0026ldquo;matching\u0026rdquo; arrow).\nReferences in Crossref You might be asking yourself how important it is to have the matching algorithm working for both structured and unstructured references. Let\u0026rsquo;s look more closely at the references our matching algorithm has to deal with.\n29% of open/limited references deposited with Crossref already have the DOI provided by the publisher member. At Crossref, when we come across those references, we start dancing on a rainbow to the tunes of Linkin Park, while the references holding their DOIs sprinkle from the sky. Some of us sing along. We live for those moments, so if you care about us, please provide as many DOIs in your references as possible!\nYou might be wondering how we are sure these publisher-provided DOIs are correct. The short answer is that we are not.
After all, the publisher might have used an automated matcher to insert the DOIs before depositing the metadata. Nevertheless, our current workflow assumes these publisher-provided DOIs are correct and we simply accept them as they are.\nUnfortunately, the remaining 71% of references are deposited without a DOI. Those are the references we try to match ourselves.\nHere is the distribution of all the open/limited references:\n17% of the references are deposited with no DOI and both structured and unstructured form. 11% have no DOI and only an unstructured form, and 43% have no DOI and only a structured form. These 43% cannot be directly processed by the unstructured matching algorithm.\nThis distribution clearly shows that we need a matching algorithm able to process both structured and unstructured references. If our algorithm worked only with one type, we would miss a large percentage of the input references, and the quality of our citation metadata would be questionable.\nThe analysis Let\u0026rsquo;s get to the point. I evaluated and compared three matching algorithms, focusing on the structured references.\nThe first algorithm is one of the legacy algorithms currently used in Crossref. It uses fuzzy querying in a relational database to find the best matching DOI for the given structured reference. It can be accessed through a Crossref OpenURL query.\nThe second algorithm is an adaptation of the Search-Based Matching (SBM) algorithm for structured references. In this algorithm, we concatenate all metadata fields of the reference and use the resulting string to search Crossref\u0026rsquo;s REST API. The first hit is returned as the target DOI if its relevance score exceeds the predefined threshold.\nThe third algorithm is an adaptation of the Search-Based Matching with Validation (SBMV) algorithm for structured references. As in the case of SBM, we concatenate all metadata fields of the input reference and use the resulting string to search Crossref\u0026rsquo;s REST API. Next, a number of top hits are considered as candidates and their similarity score with the input reference is calculated. The candidate with the highest similarity score is returned as the target DOI if its score exceeds the predefined threshold. The similarity score is based on fuzzy comparison of the metadata field values between the candidate and the input reference.\nI compared these three algorithms on a test set composed of 2,000 structured bibliographic references randomly chosen from Crossref\u0026rsquo;s metadata. For each reference, I manually checked the output of all matching algorithms, and in some cases performed additional manual searching. This resulted in the true target DOI (or null) assigned to each reference.\nThe metrics are the same as in the previous evaluations: precision, recall and F1 calculated over the set of input references.\nThe thresholds for the SBM and SBMV algorithms were chosen on a separate validation dataset. The validation dataset also contains 2,000 structured references with manually-verified target DOIs.\nThe results The plot shows the results of the evaluation of all three algorithms:\nThe vertical black lines on top of the bars represent the confidence intervals.\nAs we can see, SBMV and the legacy approach achieved very similar results. SBMV slightly outperforms the legacy approach in F1: 0.9660 vs. 0.9593.\nSBMV is slightly worse than the legacy approach in precision (0.9831 vs. 0.9929) and better in recall (0.9495 vs. 0.9280).\nThe SBM algorithm performs the worst, especially in precision.
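As a concrete illustration of the two search-based approaches described above, here is a minimal sketch in Python: concatenate the reference's metadata fields into a single query against the REST API's query.bibliographic parameter (SBM keeps the first hit above a relevance threshold), then re-score the top candidates by fuzzy field-by-field comparison (the validation step that turns SBM into SBMV). The helper names, the SequenceMatcher-based similarity, and the 0.7 threshold are illustrative assumptions only; the production implementation and its tuned thresholds live in the repositories linked at the end of this post.

```python
# Illustrative sketch of search-based matching with validation for a
# structured reference. The scoring function and threshold here are
# assumptions for demonstration, not the production SBMV configuration.
import requests
from difflib import SequenceMatcher

WORKS_API = "https://api.crossref.org/works"

def fuzzy(a, b):
    """Similarity in [0, 1] between two field values."""
    return SequenceMatcher(None, str(a).lower(), str(b).lower()).ratio()

def candidate_score(ref, item):
    """Average fuzzy similarity over the fields present in the reference."""
    pairs = [
        (ref.get("author"), (item.get("author") or [{}])[0].get("family", "")),
        (ref.get("article-title"), (item.get("title") or [""])[0]),
        (ref.get("journal-title"), (item.get("container-title") or [""])[0]),
        (ref.get("volume"), item.get("volume", "")),
        (ref.get("first-page"), item.get("page", "")),
    ]
    scores = [fuzzy(a, b) for a, b in pairs if a]
    return sum(scores) / len(scores) if scores else 0.0

def match_structured(ref, rows=5, threshold=0.7):
    # 1. Concatenate all field values into one bibliographic query (SBM and SBMV).
    query = " ".join(str(v) for v in ref.values() if v)
    resp = requests.get(WORKS_API, params={"query.bibliographic": query, "rows": rows})
    items = resp.json()["message"]["items"]
    # 2. Validation step (SBMV): re-score candidates and keep the best one
    #    only if it clears the threshold; otherwise report no match.
    best = max(items, key=lambda item: candidate_score(ref, item), default=None)
    if best and candidate_score(ref, best) >= threshold:
        return best["DOI"]
    return None

# The structured reference to the Lundberg article shown earlier in the post.
print(match_structured({
    "author": "Elizabeth Lundberg",
    "article-title": "Humanism on Gallifrey",
    "journal-title": "Science Fiction Studies",
    "volume": "40",
    "first-page": "382",
    "year": "2013",
}))
```

Returning no DOI when no candidate clears the threshold mirrors the general design choice in both algorithms: better to leave a reference unmatched than to assert a doubtful link, which is also why the thresholds are tuned on a separate validation set.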
Why is there such a huge difference between SBM and SBMV? The algorithms differ in the post-processing validation stage. SBM relies on the ability of the search engine to select the best target DOI, while SBMV re-scores a number of candidates obtained from the search engine using custom similarity. The results here suggest that in the case of structured references, the right target DOI is usually somewhere close to the top of the search results, but often it is not in the first position. One of the reasons might be missing titles in 76% of the structured references, which can confuse the search engine.\nLet\u0026rsquo;s look more closely at a few interesting cases in our test set:\nfirst-page\t=\t1000 article-title\t=\tSequence capture using PCR-generated probes: a cost-effective method of targeted high-throughput sequencing for nonmodel organisms volume\t=\t14 author\t=\tPeñalba year\t=\t2014 journal-title\t=\tMolecular Ecology Resources\nThe reference above was successfully matched by SBMV to https://0-doi-org.libus.csd.mu.edu/10.1111/1755-0998.12249, even though the document\u0026rsquo;s volume and pages are missing from Crossref\u0026rsquo;s metadata.\nissue\t=\t2 first-page\t=\t101 volume\t=\t6 author\t=\tAbraham year\t=\t1987 journal-title\t=\tPromoter: An Automated Promotion Evaluation System\nHere the structure incorrectly labels the article title as the journal title. Despite this, the reference was correctly matched by our brave SBMV to https://0-doi-org.libus.csd.mu.edu/10.1287/mksc.6.2.101.\nauthor\t=\tMarshall Day C. volume\t=\t39 first-page\t=\t572 year\t=\t1949 journal-title\t=\tIndia. J. A. D. A.\nAbove we most likely have a parsing error. A part of the article title appears in the journal name, and the main journal name is abbreviated. ‘I see what you did there, my old friend Parsing Algorithm! Only a minor obstacle!\u0026rsquo; said SBMV, and matched the reference to https://0-doi-org.libus.csd.mu.edu/10.14219/jada.archive.1949.0114.\nvolume\t=\t5 year\t=\t2015 article-title\t=\tA retrospective analysis of the effect of discussion in teleconference and face-to-face scientific peer-review panels journal-title\t=\tBMJ Open\nHere the page number and author are not in the structure, but our invincible SBMV jumped over the holes left by the missing metadata and gracefully grabbed the right DOI https://0-doi-org.libus.csd.mu.edu/10.1136/bmjopen-2015-009138.\nissue\t=\t2 first-page\t=\t533 volume\t=\t30 author\t=\tUthman BM year\t=\t1989 journal-title\t=\tEpilepsia\nIn this case we have a mismatch in the page number (“533” vs. “S33”). But did SBMV give up and burst into tears? I think we already know the answer! Of course, it conquered the nasty typo with the sword made of fuzzy comparisons (yes, it\u0026rsquo;s a thing!) and brought us back the correct DOI https://0-doi-org.libus.csd.mu.edu/10.1111/j.1528-1157.1989.tb05823.x.\nStructured vs. unstructured How does matching structured references compare to matching unstructured references?\nThe general trends are the same. For both structured and unstructured references, SBMV outperforms the legacy approach in F1, achieving worse precision and better recall. This tells us that our legacy algorithms are more strict and as a result they miss some links.\nStructured reference matching seems easier than unstructured reference matching.
The reason is that when we have the structure, we can compare the input reference to the candidate field by field, which is more precise than using the unstructured string.\nStructured matching, however, in practice brings new challenges. One big problem is data sparsity. 15% of structured references without DOIs have fewer than four metadata fields. This is not always enough to identify the DOI. Also, 76% of the structured references without DOIs do not contain the article title, which poses a problem for candidate selection using the search engine.\nWhat\u0026rsquo;s next? So far, I have focused on evaluating SBMV for unstructured and structured references separately. 17% of the open/limited references at Crossref, however, have both unstructured and structured form. In those cases, it might be beneficial to use the information from both forms. I plan to perform some experiments on this soon.\nThe data and code for this evaluation can be found at https://github.com/CrossRef/reference-matching-evaluation. The Java version of SBMV (for both structured and unstructured references) can be found at https://gitlab.com/crossref/search-based-reference-matcher.\n", "headings": ["TL;DR","Introduction","References in Crossref","The analysis","The results","Structured vs. unstructured","What\u0026rsquo;s next?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/similarity-check/terms/", "title": "Similarity Check terms", "subtitle":"", "rank": 1, "lastmod": "2019-06-30", "lastmod_ts": 1561852800, "section": "Find a service", "tags": [], "description": "Updated June 2019\nSIMILARITY CHECK SERVICE TERMS\nIMPORTANT NOTICE: These Similarity Check Service Terms (“Service Terms”) are a binding legal contract between you (“Member”) and Publishers International Linking Association, Inc. (d/b/a “Crossref” and together with Member, the “parties”), a nonprofit corporation (“Crossref”) organized under the laws of New York, USA. By click-through accepting these Service Terms, Member is agreeing to be bound by these Service Terms. Member may only sign these Service Terms OR access, or use the Service if Member is in compliance with and not in breach of the Membership Terms.", "content": "Updated June 2019\nSIMILARITY CHECK SERVICE TERMS\nIMPORTANT NOTICE: These Similarity Check Service Terms (“Service Terms”) are a binding legal contract between you (“Member”) and Publishers International Linking Association, Inc. (d/b/a “Crossref” and together with Member, the “parties”), a nonprofit corporation (“Crossref”) organized under the laws of New York, USA. By click-through accepting these Service Terms, Member is agreeing to be bound by these Service Terms. Member may only sign these Service Terms OR access, or use the Service if Member is in compliance with and not in breach of the Membership Terms. If Member does not agree to these Service Terms or Member is in breach of the Membership Terms, Crossref is not willing to license any right to use or access the Service. In such event, Member may not access or use the Service. If you are entering into this agreement on behalf of a company or other legal entity, you represent that you have the authority to bind such entity to this agreement, and in such case, the term \u0026ldquo;member\u0026rdquo; shall refer to such entity.\nThe parties agree as follows:\n1. LICENSE TO USE THE SERVICE.
(a)\tSubject to these Service Terms, Crossref grants Member a non-transferable, worldwide, royalty-free, non-exclusive license to use the services and related materials described in Schedule 1 (collectively the “Service”). Member is responsible for ensuring its employees and agents comply with these Service Terms and shall be responsible for any breach by Member, its employees, or agents. The Service is licensed, not sold. (b)\tMember is entitled to the number of accounts with the Vendor (as defined below) that are reasonably necessary for Member to make use of the Service, with no set limit on the number of accounts on a per-Member basis. \u0026#40;c)\tAs a condition of receiving the Service, Member must make available to Crossref at least ninety percent (90%) of its published journal articles registered with Crossref where the full-text is digitally available in a format that can be used by the Service and for which Member has all necessary rights. Such published journal articles will be subject to the license terms set forth in Section 3. 2. LIMITATIONS AND FURTHER AGREEMENTS. The license granted by Crossref to Member in these Service Terms is subject to the following restrictions and agreements:\n(a)\tUse of the Service will be for Member’s internal purposes only, defined to include both Member’s internal and external (e.g., editorial board review) steps in the editorial review and publishing process. (b)\tMember may not reverse engineer, decompile, disassemble, modify, or create works derivative of the Service. For the avoidance of doubt, the output of the Service, including, but not limited to, Matching Reports and work product based on such reports, shall not be considered derivative works prohibited by this section. \u0026#40;c)\tExcept as otherwise expressly permitted by these Service Terms, Member may not assign, sublicense, rent, timeshare, loan, lease, or otherwise transfer the Service, or directly or indirectly permit any third party to use or copy the Service. Member will keep any passwords associated with the use of the Service in strict confidence and will not share such passwords with any third party. Member will be solely responsible for all use of the Service made with Member’s passwords, if any. (d)\tMember may not remove any proprietary notices (e.g., copyright and trademark notices) from either the Service or any documentation, content, or reports provided by Crossref or the service provider identified as “Vendor” in Schedule 1 (hereinafter the “Vendor”). (e)\tMember is responsible for verifying that its in-house and outside editorial staff and reviewers do not use their access to the Service as a “back-door” method for getting free full-text access to included content and the Service. 
The steps to be used to accomplish this should include (i) limiting access to the Service to those employees and outside contractors who, in Member’s reasonable judgment, have a need to use the Service; (ii) requiring each user to register using his or her email address; (iii) limiting submissions to the Service for checking to those that pertain to a publication published by Member; (iv) taking reasonable steps to ensure that account information and passwords used to access the Service are kept confidential and are not shared beyond the permitted users of the Service; (v) submitting to an audit of users, to be conducted by the Vendor at its own expense and with advance written notice to Member no more often than once per year, to determine if unauthorized users are being given or are getting access to the Service through Member; and (vi) taking reasonable steps to monitor use and potential abuse of the Service. Crossref shall ensure that all other Members with access to the Content (or any portion thereof) through the Service are bound by an obligation substantially similar to the obligation set forth in this Section 2(e). (f)\tMember shall exercise its independent professional judgment in, and assume sole and exclusive responsibility for, determining the actual existence of plagiarism under the acknowledgement and understanding that any outputs received from the Service are only tools for detecting textual similarities between compared works and do not determine conclusively the existence of plagiarism. (g)\tAny disclosure to any third party by Member of any outputs received from the Service is at Member’s sole risk. (h)\tMember shall comply with its obligations under Section 12. 3. CONTENT.\n(a)\tTo the extent Member has not entered into a separate Side Letter (as defined below), Member grants Crossref a non-exclusive, royalty-free, worldwide license to use the full-text of journal articles, conference proceedings, books, book chapters, theses, dissertations, and other materials associated with Identifiers (as defined below) registered with Crossref by Member (collectively, “Content”) solely as set forth in Schedule 1. The non-textual components included as part of such articles, proceedings, books and chapters, theses and dissertations and datasets, and any submitted text materials that are run through the Service (“Submitted Text”) are not deemed to be “Content” for purposes of these Service Terms. As between Crossref, the Vendor and Member, Member will retain all rights, title, copyright, and other intellectual or proprietary rights in Content. Member owns or controls and will own or control sufficient rights (owned or licensed) in and to Content sufficient to grant Crossref the rights and licenses granted pursuant to these Service Terms and for Crossref and the Vendor to use Content solely in accordance with and as contemplated by these Service Terms. (b)\tCrossref shall remove Content or any part thereof from the Similarity Check Database upon receipt of a request for removal from Member in accordance with the process set forth in this Section and in Schedule 1. Such request may be made at any time and for any reason whatsoever. The means for requesting such removal may include sending a written request to support@crossref.org (or a replacement email designated by Crossref by written notice to Member). The notice of removal must specify Content to be removed (i.e., specific Identifiers). 
Following termination of these Service Terms, the notice of removal may alternatively specify that all Content be removed. Upon removal of Content pursuant to a notice of removal, Crossref shall, and shall cause the Vendor to, destroy the removed Content. Crossref will confirm in writing to Member the removal and destruction of Content. \u0026#40;c)\tCrossref may sublicense Content only as set forth in, and in accordance with, Schedule 1. Crossref may not sublicense Content to the Vendor unless the Vendor is subject to confidentiality and security measures commensurate with those to which Crossref is subject pursuant to these Service Terms and, in no case, less than what would be reasonable. Member will have the audit rights set out in Schedule 1. (d)\tMember shall provide Crossref with “Full-Text URLs” for Content as specified in the technical documentation for the Service, which Crossref may provide to the Vendor directly in connection with the Vendor’s provision of the Service. 4\u0026#46; SUSPENSION OF ACCESS. Crossref may, in its sole discretion, suspend access to all or any portion of the Service to (i) prevent damages to, or degradation of, the Service; (ii) comply with any law, regulation, court order, or other governmental request; (iii) otherwise protect Crossref from potential legal liability; (iv) address a breach of the Terms of Use (attached as Exhibit A); or (v) address a breach of the Crossref Terms of Membership, available at https://0-www-crossref-org.libus.csd.mu.edu/membership/terms/, (the “Membership Terms”). With respect to the above, Crossref shall use reasonable efforts to provide Member with written notice prior to or within 24 hours following any suspension of access to the Service. Crossref may also, in its sole discretion, suspend access to all or any portion of the Service not less than 60 days after providing written notice to Member of its non-compliance with the requirement set forth in Section 1\u0026#40;c). Crossref shall (and shall work with the Vendor to, as applicable) restore access to the Service as soon as the event giving rise to suspension has been resolved. 5. PRICING AND PAYMENT. Pricing for the Service is set forth in Schedule 1. All payments are due net 45 days from date of the invoice.\n6. SUPPORT. Support will be provided as specified in Schedule 1.\n7. TERM AND TERMINATION. The duration of these Service Terms (the “Term”) consists of the Initial Term and any Renewal Terms, as defined herein. The Initial Term of these Service Terms will commence: (a) for a Member that is new to the Service and does not have an existing agreement in place covering Member’s use of the Service, upon Crossref\u0026rsquo;s approval of the Service application form, eligibility checks, and receipt of the first pro-rated annual Service fee, and (b) for a Member that is a current user of the Similarity Check service and has an existing agreement in place covering Member’s use of the Similarity Check service, on the date that the Member’s existing agreement for use of the Similarity Check service is terminated. In each case, the Initial Term will extend for the remainder of that calendar year (the “Initial Term”). Thereafter, provided Member has paid the applicable fees, these Service Terms will automatically renew each calendar-year (each, a “Renewal Term”), unless either party gives the other party written notice of its intent not to renew at least 30 days prior to the expiration of the end of the year. 
In the event of a material breach of these Service Terms, the non-breaching party shall provide the other party written notice of such breach and such other party will have a period of 30 days in which to cure the breach, if it can be remedied. In the event the breaching party fails to cure the breach within the cure period, or it is a breach incapable of remedy, in addition to whatever other remedies may be available at law or equity, the non-breaching party may terminate these Service Terms upon providing the other party prior written notice of termination. Notwithstanding the foregoing, these Service Terms shall terminate concurrently with the termination of the Membership Terms. Sections 3, 7, 8, 9, 10, 11, 13, 14, and 15 will survive any expiration or termination of these Service Terms, irrespective of the reason for such termination, and will continue in full force and effect thereafter. Although the license to Content set forth in Section 3 is intended to survive termination of these Terms, following termination Member may request removal of any or all Content in writing pursuant to the terms of Section 3(b).\n8. WARRANTY AND DISCLAIMER; LIMITATION OF LIABILITY.\n(a)\tLimited Warranties; Disclaimer. Crossref represents and warrants that it has the requisite authority to enter into and perform these Service Terms and that its performance will not violate or breach any laws or any agreement to which it is a party or by which it is bound. Crossref warrants that to the best of its knowledge and belief, the Service (excluding any Content or materials provided by Member or any third party) does not infringe the intellectual property rights of any third party. During the Term, Crossref warrants that it will use reasonable efforts to provide the Service and support as set forth herein and as described on Crossref’s website and published documentation. Crossref understands and agrees that it shall be responsible for the acts and omissions of the Vendor to the same extent as if such acts or omissions were by Crossref. EXCEPT AS SET FORTH IN THIS SECTION 8(A), THE SERVICE (INCLUDING ANY OUTPUTS FROM THE SERVICE) IS PROVIDED ON AN “AS IS” AND “AS AVAILABLE” BASIS. CROSSREF AND THE VENDOR SPECIFICALLY DISCLAIM ALL WARRANTIES OF ANY KIND, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, QUIET ENJOYMENT, QUALITY OF INFORMATION, NON-INFRINGEMENT, AND TITLE. NO WARRANTY IS MADE THAT THE SERVICE WILL BE TIMELY, SECURE, OR ERROR-FREE. IN JURISDICTIONS NOT ALLOWING THE LIMITATION OR EXCLUSION OF CERTAIN WARRANTIES, CROSSREF’S WARRANTY WILL BE LIMITED TO THE GREATEST EXTENT PERMITTED BY LAW. (b)\tTHE SERVICE IS ACCESSED AND USED OVER THE INTERNET. MEMBER ACKNOWLEDGES AND AGREES THAT CROSSREF AND THE VENDOR DO NOT OPERATE OR CONTROL THE INTERNET AND THAT: (I) VIRUSES, WORMS, TROJAN HORSES, OR OTHER UNDESIRABLE DATA OR SOFTWARE; OR (II) UNAUTHORIZED USERS (e.g., HACKERS) MAY ATTEMPT TO OBTAIN ACCESS TO AND DAMAGE MEMBER’S DATA, COMPUTERS, OR NETWORKS. CROSSREF AND THE VENDOR SHALL NOT BE RESPONSIBLE FOR SUCH ACTIVITIES. 
\u0026#40;c)\tIRRESPECTIVE OF THE TYPE OF CLAIM OR THE NATURE OF THE CAUSE OF ACTION, IN NO EVENT WILL CROSSREF, THE VENDOR, OR THEIR AFFILIATES, OFFICERS, EMPLOYEES, AGENTS, OR LICENSORS, BE LIABLE FOR: (I) ANY DECISION MADE OR ACTION TAKEN OR NOT TAKEN IN RELIANCE UPON THE INFORMATION CONTAINED IN OR PROVIDED BY THE SERVICE, OR (II) ANY LIABILITY ARISING FROM MEMBER’S DISCLOSURE OF ANY OUTPUT FROM THE SERVICE TO ANY THIRD PARTY. (d)\tIN NO EVENT SHALL EITHER PARTY, THE VENDOR, OR THEIR RESPECTIVE AFFILIATES, OFFICERS, EMPLOYEES, AGENTS OR LICENSORS BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES, INCLUDING BUT NOT LIMITED TO LOSS OF REVENUES AND LOSS OF PROFITS OF THE OTHER PARTIES, EVEN IF THAT PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE LIMITATIONS SET FORTH IN THIS SECTION 8(D) SHALL NOT APPLY TO LIMIT EITHER PARTY’S OBLIGATION TO INDEMNIFY, DEFEND, AND HOLD HARMLESS UNDER SECTION 9. (e)\tEXCEPT WITH RESPECT TO EACH PARTY’S INDEMNITY OBLIGATION IN SECTION 9, MEMBER, CROSSREF, THE VENDOR, AND THEIR RESPECTIVE AFFILIATES, OFFICERS, EMPLOYEES, AGENTS OR LICENSORS’ RESPECTIVE TOTAL CUMULATIVE LIABILITY ARISING UNDER OR RELATED TO THESE SERVICE TERMS AND IN CONNECTION WITH THE SERVICE, WHETHER IN CONTRACT, TORT, OR OTHERWISE, WILL NOT EXCEED THE AMOUNTS, IF ANY, PAID BY MEMBER FOR THE SERVICE IN THE 12 MONTHS IMMEDIATELY PRECEDING THE EVENT GIVING RISE TO LIABILITY. SOME JURISDICTIONS DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES; IN THOSE JURISDICTIONS A PARTY’S LIABILITY UNDER THESE SERVICE TERMS WILL BE LIMITED TO THE GREATEST EXTENT PERMITTED BY LAW. THE LIMITATION OF LIABILITY AND LIABILITY CAP WILL APPLY EVEN IF THE EXPRESS WARRANTIES SET FORTH ABOVE FAIL OF THEIR ESSENTIAL PURPOSE. 9\u0026#46; INDEMNIFICATION. (a)\tTo the extent permitted by applicable law, Member shall indemnify, defend, and hold Crossref and its affiliates, directors, officers, employees, personnel, representatives, and licensors (each an “Indemnitee”) harmless for any causes of action, claims, costs, or liabilities related to any third-party claim arising out of or based on: (a) Member’s breach of its obligations under these Service Terms; or (b) Content. (b)\tTo the extent permitted by applicable law, Crossref shall indemnify, defend, and hold Member and its affiliates, directors, officers, employees, personnel, representatives, and licensors (each an “Indemnitee”) harmless for any causes of action, claims, costs, or liabilities related to any third-party claim arising out of or based on: (i) Member’s licensed use of the Service, as permitted hereunder, infringes the copyrights or U.S. patent or other intellectual property rights of the third party; or (ii) Crossref has violated any U.S. state or federal privacy law relating to information provided by Member hereunder. \u0026#40;c)\tIndemnification Procedures. The party seeking indemnification (the “Indemnified Party”) shall promptly give written notice to the other party (the “Indemnifying Party”) of any claim subject to indemnification (the “Claim Notice”), provided that the delay of or failure to give notice will not affect the Indemnified Party or any Indemnitee’s rights hereunder except to the extent the Indemnifying Party has been prejudiced by reason of the delay or failure. Following receipt of a Claim Notice, the Indemnifying Party will, at its expense, assume control of the negotiation, settlement, and defense of the claim. 
The Indemnified Party may participate in the defense of such third-party claim and employ counsel of its choosing at its expense. Upon fulfillment of its obligations with respect to indemnification, including payment in full of all amounts due pursuant to its indemnification obligations, the Indemnifying Party will be subrogated to the rights of the Indemnified Party Indemnitees with respect to the Claims to which such indemnification relates. The parties will reasonably communicate and cooperate in the Indemnifying Party’s defense of the Claim. If the Indemnifying Party fails to comply with its indemnification obligations hereunder, then the Indemnified Party (upon notice to the Indemnifying Party) will have the right to undertake the defense, compromise or settlement of such Claim, by counsel or other Representatives of its choosing, and any reasonable fees and expenses incurred by the Indemnified Party will be considered costs for which the Indemnified Party will be entitled to indemnification. Absent the Indemnified Party’s express written consent, the Indemnifying Party may only agree to any settlement or entry of judgment if: (i) the Indemnifying Party agrees in writing to pay all amounts payable; (ii) the settlement or judgment: (1) includes a written release of the Indemnified Party Indemnitees and the Indemnifying Party from all liability for the claim that is reasonably satisfactory to the Indemnified Party; (2) does not impose any injunction or restriction on Indemnified Party Indemnitees, (3) does not include an admission or stipulation of any Indemnified Party Indemnitee’s liability or any element or evidence of liability; and the settlement or judgment is subject to a non-disclosure agreement. (d)\tInfringement Exceptions. The obligations of Member set out in Section 9(a) and of Crossref set out in Section 9(b) to defend, indemnify, hold harmless the other party against any third-party claim arising out of or based on infringement, misappropriation, or other violation of third-party rights will not apply to the extent that a claim is: based on the use by the other party of the Service or Content, as applicable, in a manner not permitted by these Service Terms, if such claim would not have arisen but for such unauthorized use; based on the modification of the Service or Content, as applicable, in a manner not permitted by these Service Terms, if such Claim would not have arisen but for such modification; or based on any Service that was developed in compliance with detailed technical specifications provided by Member, if such claim would not have arisen but for Crossref’s compliance with such specifications. (e)\tInfringement Cures. Following notice to Crossref of any claim of infringement, misappropriation, or other violation of third-party rights, or if Crossref believes such a claim is likely, Crossref will, at its sole expense and option: (i) procure for Member the right to continue to use the allegedly infringing Service; (ii) replace or modify the allegedly infringing Service to make it non-infringing; or, if (i) and (ii) are not possible after commercially reasonable efforts, (iii) cancel the allegedly infringing Service and equitably adjust the Fees to reflect the reduced value of the Service based on the cancellation. In the event that (iii) occurs, Member may elect to terminate these Service Terms immediately on written notice following the effective date of cancellation. 10\u0026#46; GOVERNING LAW. 
These Service Terms will be governed by the laws of the United States of America and the State of Delaware, excluding its conflict of laws rules. 11. CONFIDENTIALITY AND SECURITY.\n(a)\tConfidentiality. Each party recognizes that in the course of performing its obligations and exercising its rights under these Service Terms, it (the “Receiving Party”) may have access to non-public or proprietary information of the other party or its licensors (the “Disclosing Party”) that is marked “confidential” or by its nature should reasonably be considered to be confidential, including information about product designs and specifications, Content, information that may be used alone or in combination to identify individuals (“Personal Information”), and other confidential proprietary data or information (all of the foregoing, “Confidential Information”). Each party agrees that it will make no use and make no disclosure of the Confidential Information of the other party except as necessary to perform such party’s obligations hereunder. For the avoidance of doubt, Crossref may provide Vendor with the name and email address of one or more contacts at Member in order to enable Vendor to provide the Service to Member. As between the two parties, each will at all times remain the sole and exclusive owner of its or its licensors’ Confidential Information. Without limiting the generality of the foregoing, Member will not have access to any portion of the source code and underlying architecture and algorithms for the Service, and Member may not attempt to reverse engineer, disassemble, decompile, decode, or otherwise attempt to derive or gain access to the underlying architecture or algorithms. (b)\tExclusions. Subject to Section 11\u0026#40;c), Confidential Information does not include information that: (i) was rightfully known to the Receiving Party without restriction on use or disclosure prior to such information’s being disclosed or made available to the Receiving Party in connection with these Service Terms; (ii) was or becomes generally known by the public other than by the Receiving Party’s or any of its representatives’ noncompliance with these Service Terms; (iii) was or is received by the Receiving Party on a non-confidential basis from a third party that was not or is not, at the time of such receipt, under any obligation to maintain its confidentiality; or (iv) was or is independently developed by the Receiving Party without reference to or use of any Confidential Information. \u0026#40;c)\tExceptions to Section 11(b). None of the exclusions set forth in Section 11(b) apply to any Content or Personal Information. (d)\tLegal Obligation to Disclose. Unless otherwise prohibited by law, if the Receiving Party becomes legally obligated to disclose Confidential Information, the Receiving Party will give the Disclosing Party prompt written notice sufficient to allow the Disclosing Party to seek a protective order or other appropriate remedy, and will reasonably cooperate with the Disclosing Party’s efforts to obtain such protective order or other remedy at the Disclosing Party’s expense, and in the event the Receiving Party is unable to do so, the Receiving Party will (so long as not prohibited by law from doing so) advise the Disclosing Party in writing immediately subsequent to such disclosure. 
The Receiving Party will disclose only such information as is required, in the opinion of its counsel, and will use commercially reasonable efforts to obtain confidential treatment for any Confidential Information that is so disclosed. (e)\tCrossref’s Security Obligations. Crossref shall implement reasonable administrative, technical, and physical controls in accordance with industry standards that are designed to secure Member’s Confidential Information (including Content) within Crossref’s possession from unauthorized access and use, and shall utilize industry standard technology to do so. Crossref’s obligation to provide security for Member’s Confidential Information (including Content) will include, but not be limited to, the inclusion of industry standard virus software and firewalls and the requirement that Crossref update such technological protections as is necessary to protect Member’s Confidential Information (including Content) and as vulnerabilities in existing technological protections are identified. Member may, no more than once a year, conduct a security audit of the systems on which Member’s Content resides as set forth in Schedule 1. (f)\tMember’s Security Obligations. Member shall implement commercially reasonable administrative, technical, and physical controls in accordance with industry standards that are designed to secure Crossref’s Confidential Information within Member’s possession from unauthorized access and use, and shall utilize industry standard technology to do so. Member’s obligation to provide security for Crossref Confidential Information will include, but not be limited to, the inclusion of industry standard virus software and firewalls and the requirement that Member update such technological protections as is necessary to protect Crossref Confidential Information and as vulnerabilities in existing technological protections are identified. 12\u0026#46;\tCOMPLIANCE WITH LAWS. (a)\tEach Party shall comply with all applicable laws governing its use of Content and Member’s use of the Service, including applicable privacy and data security laws, anti-corruption and money laundering laws, and trade control laws. (b)\tGDPR. To the extent applicable: each party shall comply with the General Data Protection Regulation (Regulation 2016/679 EU) (“GDPR”), shall provide to each other their Data Protection Policies on request, and shall cooperate in relation to any request from each other to provide evidence of GDPR compliance. 13\u0026#46;\tINDEPENDENT CONTRACTORS. Nothing in these Service Terms will make Crossref and Member partners, joint venturers, or otherwise associated in or with the business of the other. Crossref is an independent contractor of Member. Neither party will be liable for any debts, accounts, obligations, or other liabilities of the other party. The parties are not authorized to incur debts or obligations of any kind on the part of or as agent for the other, except as may specifically be authorized in writing. 14.\tGENERAL. Together with the Membership Terms (to the extent referenced herein), these Service Terms constitute the entire agreement and understanding between the parties with respect to the subject matter hereof and supersede and replace any and all prior or contemporaneous written or oral agreements. Notwithstanding the foregoing, the parties acknowledge that the Vendor and Member may enter into a separate agreement governing the licensing of Content by Member to the Vendor for use in the Service offered hereunder (a “Side Letter”). 
These Service Terms may be amended by Crossref by providing written notice to Member of the amendment. In the event that Crossref amends these Service Terms as set forth in this paragraph, Member may terminate these Service Terms on written notice to Crossref within 60 days of receipt of notice of amendment from Crossref. A party’s failure to insist upon or enforce strict performance of any provision of these Service Terms will not be construed as a waiver of any provision or right. If any provision of these Service Terms is held to be invalid or unenforceable, such determination will not affect the balance of these Service Terms, which will remain in full force and effect, and the offending provision shall be modified to the minimum extent required to render the provision enforceable. Member may not assign or transfer these Service Terms without the written consent of Crossref, which consent may not be unreasonably withheld, conditioned or delayed; provided, that Member may assign these Service Terms to an affiliate or subsidiary, or in connection with the sale of all or substantially all of Member’s assets or any merger, consolidation, or acquisition of a party that results in a change in the ownership of more than 50% of the voting interests of Member. Any assignment in violation of the preceding sentence will be null and void. Crossref may, with Member’s prior written permission, use and reference Member’s name as a subscriber to the Service in connection with truthful advertising or promotion of the Service. Except with respect to the Vendor’s rights under Sections 8 and 9, there are no third party beneficiaries of these Service Terms. Neither party will be responsible for any delays, errors, failures to perform, interruptions or disruptions in the Service caused by acts of God, flood, fire, earthquake, explosion, war, invasion, hostilities (whether war is declared or not), terrorist threats or acts, riot, or other civil unrest, national or regional emergency, strikes, lockouts, changes in law or regulations, storm, power failure, or failures of the Internet. The rights and remedies provided by these Service Terms are cumulative and use of any one right or remedy by either party will not preclude or waive the right to use any or all other rights or remedies. The said rights and remedies are given in addition to any other rights or remedies the Parties may have by law, statute, ordinance, or otherwise.\n15.\tNOTICES. Written notice under these Service Terms shall be given as follows:\n(a)\tIf to Crossref: by emailing member@crossref.org addressing Mr. Edward Pentz, Executive Director. (b)\tIf to a Member: To the name and email address designated by the Member as the Primary Contact (previously \"Business Contact\") in such Member's membership application. This information may be changed by the Member by giving notice to Crossref by email at member@crossref.org, in accordance with the Membership Terms. 16. CONFLICTS. In the event of any conflict or inconsistency between the provisions of these Service Terms, any Side Letter, and any document incorporated by reference herein, including via a URL contained in these Service Terms, the following order of precedence shall be observed, in order of priority: (a) any Side Letter, but only with respect to terms governing the licensing of Content by Member to Turnitin pursuant to such Side Letter; (b) this Agreement; (c) the Membership Terms; and (d) any SOW. 
For avoidance of doubt, except with respect to terms governing the licensing of Content by Member to Turnitin pursuant to any Side Letter, this Agreement shall prevail over any such Side Letter in all other cases; and the terms of Section 1(c) are not “terms governing the licensing of Content by Member to Turnitin” for purposes of this Section.\n17.\tAUTHORIZATION. The Member represents that the Member is in compliance with and not in breach of the Membership Terms as of the date hereof.\nRev. 3 June 2019 Schedule 1\nTerms Applicable to Member’s Use of\nSimilarity Check, including the Turnitin System This Schedule 1 describes the services being provided under the Service Terms, including services provided by Turnitin, LLC (“Turnitin”). The terms of this Schedule 1 are incorporated into the Service Terms by reference. If at any time Turnitin ceases to provide the services described herein, Crossref may, in its sole discretion, either (i) terminate the Service Terms without payment of any fee or penalty, or (ii) replace or amend this Schedule 1, including by changing the Vendor who is providing the Service, in accordance with the amendment procedure set forth in Section 14 of the Service Terms.\n1. Definitions\n“Turnitin System” means the plagiarism detection software created and owned by Turnitin, LLC (“Turnitin”), a California limited liability corporation, and licensed to Crossref under the name, “Similarity Check” (as such name may be changed from time to time) that compares Submitted Text against a database to identify materials that may have been previously published or plagiarized, and produces a report showing instances of overlapping text (“Matching Report”). For the avoidance of doubt, provision of the Turnitin System will be part of the Service. The Service will also include any other reports, documentation, and other materials provided to Member via Similarity Check.\n“Snippet” means an excerpt from Content in the form of displayed text that consists of a sample of overlapping text identified through the use of the Service. The Snippet will be composed of a limited excerpt of and no more than a total of 24 lines of surrounding text, and must include bibliographic metadata for the document where the matching occurs and a Digital Object Identifier-based link to the content on Member’s website.\n“Vendor” means Turnitin.\n2. Terms of Use\nMember’s use of the Turnitin System must be in accordance with the terms of use attached as Exhibit A for reference and available at https://help.turnitin.com/Privacy_and_Security/Privacy_and_Security.htm under the heading “Acceptable Use Policy” (the “Terms of Use”). Continued use of the Turnitin System after any such revisions will constitute Member’s acceptance of such revisions to the policy.\n3. 
License\nTo the extent Member has not entered into a separate Side Letter (as defined above), Member agrees that Crossref may sublicense Content to Turnitin on a non-exclusive, non-transferable (except as part of the sale of all or substantially all of Turnitin’s assets or any merger, consolidation, or acquisition of Turnitin), royalty-free basis solely for inclusion in a database (the “Similarity Check Database”) of Content maintained by Turnitin and used solely for purposes of the Turnitin System, and the similarity checking software created and owned by Turnitin and currently offered for license under the trademarks iThenticate, Originality Check, WriteCheck, Turnitin Revision Assistant, and Turnitin Feedback Studio (as may be re-branded from time to time), which compare submitted text materials against a database to identify materials that may have been previously published or plagiarized and produces a report (“Matching Report”) showing instances of overlapping text (the “Permitted Solutions”). The Similarity Check Database may be used solely for the purpose of indexing and comparing documents in order to generate the Matching Reports used in the Turnitin System and Permitted Solutions. For the avoidance of doubt, any Submitted Text and data generated through the Turnitin System will be owned by or licensed to Member for use consistent with the Service Terms.\nMatching Reports provided to Member and to other Members participating in the Service will include access to the full-text of the Content item included in the Similarity Check Database in which the identified overlapping text appears. For every content item with a registered Digital Object Identifier (“Identifier”), Turnitin shall display the Identifier and shall link to the item via the Identifier when information about the item is displayed as part of the Service. Content may be used for the purpose of generating Matching Reports for non-member users of the Permitted Solutions, provided, however, that such Matching Reports will only include Snippets of Content, as well as the Identifier where available, and may not provide samples of or access to the full-text of Content.\n4. Removal of Content\nIf Member requests removal of specific Content pursuant to Section 3(b)(i) of the Service Terms, the specific Content will be removed within 10 Business Days of Turnitin’s receipt of the request for such removal. If Member requests removal of all Content pursuant to Section 3(b)(ii) of the Service Terms, all of Member’s Content will be removed within 30 calendar days of Turnitin’s receipt of the request for such removal. “Business Day” means a day other than a Saturday, Sunday, or federal holiday in the United States.\n5. Availability\nTurnitin shall use reasonable efforts to make the Turnitin System, to the extent applicable, available for access over the Internet at least 99.5% of the time during each month of the Term, except for scheduled maintenance and repairs, failures related to Member’s systems and Internet access, and any interruption in the Turnitin System due to causes beyond the control of Turnitin or that are not reasonably foreseeable by Turnitin, including, without limitation: loss or theft of data; interruption or failure of telecommunication or digital transmission links; Internet slow-downs or failure; failures or default of third party software, vendors, or products; and communications, network/internet connection, or utility interruption or failure. 
In the event Turnitin fails to achieve the foregoing availability requirement, Turnitin shall use commercially reasonable efforts to correct such loss or interruption as quickly as practicable.\n6. Fees\nFees for the Service are set forth at: https://0-www-crossref-org.libus.csd.mu.edu/fees/#similarity-check-fees. 7. Support\nTechnical support for the Service is described at: https://0-www-crossref-org.libus.csd.mu.edu/services/similarity-check/.\n8. Audit\nMember may, no more than once a year, at a mutually agreed upon date and time, conduct security audits of the Turnitin systems on which Content resides to ensure compliance with the Service Terms. Such audits must not unreasonably interfere with Turnitin’s business and Turnitin will have no liability for any breach of security or other failure to comply with the Service Terms resulting from Member’s activities in connection with the audit. In the event that Member desires to use a third party to conduct the audit, such third party shall execute Turnitin’s then current standard non-disclosure agreement before being granted access to Turnitin’s facilities and systems.\n9. Separate Node\nTurnitin shall maintain Content as a separate node in its database. Exhibit A\nTerms of Use So that everyone can enjoy the Site, Turnitin reserves the right to suspend anyone\u0026rsquo;s access to the Site who violates this policy and the following provisions. Please do NOT:\n1. restrict any other user\u0026rsquo;s enjoyment of the Site.\n2. engage in unlawful, threatening, abusive, libelous, defamatory, pornographic, profane, or otherwise offensive actions.\n3. carry out or encourage criminal conduct, give rise to civil liability, or otherwise violate any law.\n4. violate or infringe upon the rights of any third party, including, without limitation, patent, copyright, trademark, privacy, or any other proprietary right.\n5. distribute anything that contains a virus or other harmful component.\n6. distribute anything that contains false or misleading indications of origin or statements of fact.\nTurnitin reserves the right to disclose information as necessary to satisfy any legal requirement, including regulations, government requests, court orders, or subpoenas. Turnitin also reserves the right to edit or remove any information, in whole or in part, that it deems objectionable, disruptive to the Site, or in violation of these terms.\nIn other words: please be respectful of others. Thank you.\nIf you are already a Crossref member and would like to sign up for Similarity Check, please check whether your metadata includes what you need for Similarity Check. If it does, that will lead you to an application form. Please contact our support team with any questions.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/license-metadata-ftw/", "title": "License metadata FTW", "subtitle":"", "rank": 1, "lastmod": "2019-06-28", "lastmod_ts": 1561680000, "section": "Blog", "tags": [], "description": "More and better license information is at the top of a lot of Christmas lists from a lot of research institutions and others who regularly use Crossref metadata. I know, I normally just ask for socks too. To help explain what we mean by this, we\u0026rsquo;ve collaborated with Jisc to set out some guidance for publishers on registering this license metadata with us.\n", "content": "More and better license information is at the top of a lot of Christmas lists from a lot of research institutions and others who regularly use Crossref metadata.
I know, I normally just ask for socks too. To help explain what we mean by this, we\u0026rsquo;ve collaborated with Jisc to set out some guidance for publishers on registering this license metadata with us.\nAt the most basic level, complete and accurate license metadata helps anyone interested in using a research work out how they can do so. Making the information machine-readable helps this to be done easily and at scale by all kinds of tools and services.\nIn this best practice guide, we’re specifically focusing on a use case for license metadata that comes from research institutions. They need to know which version of an article (or other content item) may be exposed in an open repository, and from what date, and tell anyone who comes across the piece of content in the repository what they can do with it once they find it there.\nWithout this being stated simply and clearly in the Crossref metadata, the institution won’t know which works they can make available and which they cannot, even if you as the publisher know that the item is open access, or is open access after a certain date. This can impact the research community’s capacity to find and use the research you publish.\nThe guidance offers advice on:\nthe kind of license information it’s useful to link out to from the Crossref metadata what the Crossref metadata might look like for: gold open access content green open access content with a Creative Commons License green open access content with a publisher-defined post-embargo license how to add this metadata to existing or new Crossref deposits Take a look at the full guidelines here. Maybe there’s more to the story than this, or more information that you need as a publisher or as a research institution - if so, let us know and we can adapt this document based on your feedback. Requests for socks may be declined.\n", "headings": ["Take a look at the full guidelines here."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/licenses/", "title": "Licenses", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/participation/", "title": "Participation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rest-in-peace-christine-hone/", "title": "Rest in peace Christine Hone", "subtitle":"", "rank": 1, "lastmod": "2019-06-09", "lastmod_ts": 1560038400, "section": "Blog", "tags": [], "description": "Our friend and colleague Christine Hone (née Buske) passed away in May from a short but brutal illness. Here is our attempt at \u0026lsquo;some words\u0026rsquo;, which we wrote for her funeral book and are posting here with her husband Dave\u0026rsquo;s permission.\nWe are devastated to lose Christine as a colleague and friend. It’s hard to put into words the effect she had on our small organization in such a short time, and how much we’re already missing her. 
But here it goes.\n", "content": "Our friend and colleague Christine Hone (née Buske) passed away in May from a short but brutal illness. Here is our attempt at \u0026lsquo;some words\u0026rsquo;, which we wrote for her funeral book and are posting here with her husband Dave\u0026rsquo;s permission.\nWe are devastated to lose Christine as a colleague and friend. It’s hard to put into words the effect she had on our small organization in such a short time, and how much we’re already missing her. But here it goes.\nIt was 2015 when some of us first met Chris, and we immediately saw how much of an asset she could be to our organization. She was very active in the community and well-known in many academic and publishing circles around the world. And she had an enviable combination of technical skills, a scientific mind, and a natural ability to engage people.\nWe tried to recruit her back then but she was in demand by others and it wasn’t until early 2018 that we succeeded. We finally got her! She became the Product Manager for a very advanced and complex system but she took to it perfectly, with real excitement and a complete understanding of how we (and therefore she) could help the research community all over the world see and make connections.\nChristine\u0026rsquo;s official Crossref headshot 😊\nWith colleagues spread around the world, she joined an organization that had exciting opportunities and its share of challenges. Chris engaged with all of this head-on. She handled a constant stream of queries from people spread across time zones, whilst at the same time getting to grips with a service that was difficult to pin down. She balanced these tasks which were at very opposite ends of the spectrum. She added so much and with such energy and intelligence to everything she got involved in, always bringing human attention and creativity.\nChris was also on the winning team at 2018\u0026rsquo;s UKSG quiz!\nEd, Amanda, and Chris: the Crossref contingent of the winning quiz team\nIn her talk at the 5:AM altmetrics conference she brought together technical detail, big-picture ideas, and her own particular passion. Her opening words were “My name is Christine and I’m a recovering fish scientist”. Never afraid to bring her personal brand of humour into the workplace, her opening slide was a photograph of her covered in rats. That presentation was the first time that much of the audience really understood our service. Having cracked the messaging for us, she was due to give the same talk at our annual meeting in Toronto a few months later…\nChris\u0026rsquo;s opening slide at her 5:AM talk\nChris giving her now legendary talk at 5:AM on Event Data\nMany of us were in Toronto for that meeting; it was two weeks after we’d heard the news of her diagnosis. Some of us were able to visit her in the hospital where she told us of her and Dave’s decision to bring forward their wedding plans. It was a bittersweet announcement but, clearly, they adored each other and were determined to be happy together despite the challenging times ahead.\nOver the last few months, even when she had little energy to spare, Chris popped in (virtually) to chat and update us, share pictures and, selflessly, to see how we were doing. 
Even people who never met or worked closely with her started to follow her vlog and exchange notes and news directly.\nAlways checking in with us, an update from Chris shortly before she passed\nWe have all been rocked by the news and there is a lot of sadness and grief among the Crossref staff and community. Even in the last moments we shared together Chris always asked about how her projects were going. Her passion for her products was a big part of what animated her when she first joined. Throughout her late-stage illness, this remained constant. She yearned to return to work. This zeal will forever be an inspiration to us all at Crossref.\nChristine taught us a lot, through her work, with her attitude to life, and in the manner that she dealt with this terrible illness. We thank her for giving us so many great memories and we will never forget her.\nA Crossref photoshoot; our Christine ❤️\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/jennifer-lin/", "title": "Jennifer Lin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/similarity-check-is-changing/", "title": "Similarity Check is changing", "subtitle":"", "rank": 1, "lastmod": "2019-05-30", "lastmod_ts": 1559174400, "section": "Blog", "tags": [], "description": "Tl;dr Crossref is taking over the service management of Similarity Check from Turnitin. That means we\u0026rsquo;re your first port of call for questions and your agreement will be direct with us. This is a very good thing because we have agreed and will continue to agree the best possible set-up for our collective membership. Similarity Check participants need to take action to confirm the new terms with us as soon as possible and before 31st August 2019.", "content": "Tl;dr Crossref is taking over the service management of Similarity Check from Turnitin. That means we\u0026rsquo;re your first port of call for questions and your agreement will be direct with us. This is a very good thing because we have agreed and will continue to agree the best possible set-up for our collective membership. Similarity Check participants need to take action to confirm the new terms with us as soon as possible and before 31st August 2019. Instructions will be circulated in early June via email.\nBackground Many of our members use Similarity Check, which gives their editors reduced-rate access to Turnitin’s iThenticate system for plagiarism checking. Some use Similarity Check directly and some as part of a submission system.\nThe service launched in 2008 when we announced our initial partnership with Turnitin. Since then it\u0026rsquo;s gone from strength to strength and now has over 60 million full-text documents (from over 87 thousand titles) available for text comparison and almost 1500 members using the service.\nThe way that the Similarity Check arrangement works is changing, and it’s important that users know what’s happening. We have worked with Turnitin to set up a process that will transition participants easily and swiftly into the upgraded service with no interruption to iThenticate access.\nSo, what is changing and why? 
We know that Similarity Check is a critical service for our members, and we want to improve people\u0026rsquo;s experience of using it. So, in consultation with members, we’ve strengthened the service by updating our relationship with Turnitin to consolidate all the components of the service under our care and stewardship. From next week, Similarity Check participants will move from having an agreement with Turnitin to one with Crossref. And at Crossref, we have a new agreement with Turnitin as the technology provider for the service.\nThe new arrangement puts us in a strong position to improve support and drive future improvements of the system. Representing our collective membership, we’ve agreed better terms than members have today and better than they could get acting individually.\nThere are five key changes specifically:\nMembers\u0026rsquo; Similarity Check service agreement will be with us and not Turnitin. Per-document checking fees will be invoiced by us, and not Turnitin. They’ll be included in members\u0026rsquo; regular invoices, reducing international transfer fees for many. The first 100 documents checked each year will be free of charge. Turnitin will operate as a vendor for Crossref. We’ve already agreed a range of additions to their technology roadmap. Turnitin will remain responsible for fixing any bugs or technical issues with the system, but we\u0026rsquo;re in a stronger position to ensure these are fixed quickly. Users will get training and on-boarding support from Crossref. This will cover both how to use the interface and how to interpret the results. What’s staying the same? The system itself and how it\u0026rsquo;s accessed - people\u0026rsquo;s logins will stay exactly the same and nothing will change about how participants have their systems set up. The fees - the annual Similarity Check fee and the per-document checking fees will remain at the same level (although under the new arrangement users will get the first 100 documents each year for free - see \u0026ldquo;what\u0026rsquo;s changing\u0026rdquo; above!) Your service obligations - members still need to make at least 90% of all their journal article content available for Turnitin to index. This is achieved through the dedicated full-text URLs that members register in their metadata. Licensing and privacy - there are no changes to the licensing of members\u0026rsquo; content or the privacy requirements for Turnitin’s use of member content. For existing users We’ve worked closely with Turnitin to ensure an easy transition to the new Crossref terms. You can transition to the new terms at any stage from next week through to 31st August, and Turnitin will end your contract with them in the month you take that action.\nNext week, we’ll email your main Crossref membership contact with a link to a form asking them to click through and accept the new terms. This will confirm and commence the transition process.\nYou’ll then need to:\nPay your final Turnitin invoice, which will be sent at the end of the month you complete the form. This will cover your per-document checking fees up to the 25th of that month. Continue to use iThenticate as usual. Your service agreement will officially move from the Turnitin agreement to the Crossref agreement on the 25th of the month that you complete the transition form. 
The next Similarity Check invoices you receive will be from Crossref in January 2020 and will include your Similarity Check annual fee and your per-document checking fees for the remainder of 2019.\nIf you haven’t transitioned to the new agreement by 31st August, you risk losing access to the iThenticate system as Turnitin will not be able to automatically renew your direct contract.\nIf you have any questions about these changes, do contact our membership specialist. We’ll be in touch next week with a link to a new form where you’ll be able to check your details and accept the new agreement directly with us.\nFor prospective users When you apply to participate in Similarity Check you will be accepting terms directly with Crossref and not Turnitin. Eligible members can apply any time from next week.\nAny questions? There are many benefits to this new set-up, but we understand these things can be a bit of a hassle. We\u0026rsquo;ve welcomed a new colleague (say hello to Kathleen!) to help people transition and get the best from their use of Similarity Check. Please contact her via Support with any questions.\n[Update June 5th: we\u0026rsquo;ve added a new FAQ page for members who signed up for Similarity Check prior to June 2019]\n", "headings": ["Tl;dr","Background","So, what is changing and why?","What’s staying the same?","For existing users","For prospective users","Any questions?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/putting-content-in-context/", "title": "Putting content in context", "subtitle":"", "rank": 1, "lastmod": "2019-05-13", "lastmod_ts": 1557705600, "section": "Blog", "tags": [], "description": "You can’t go far on this blog without reading about the importance of registering rich metadata. Over the past year we’ve been encouraging all of our members to review the metadata they are sending us and find out which gaps need filling by looking at their Participation Report.\nThe metadata elements that are tracked in Participation Reports are mostly beyond the standard bibliographic information that is used to identify a work. They are important because they provide context: they tell the reader how the research was funded, what license it’s published under, and more about its authors via links to their ORCID profiles. And while this metadata is all available through our APIs, we also display much of it to readers through our Crossmark service.\n", "content": "You can’t go far on this blog without reading about the importance of registering rich metadata. Over the past year we’ve been encouraging all of our members to review the metadata they are sending us and find out which gaps need filling by looking at their Participation Report.\nThe metadata elements that are tracked in Participation Reports are mostly beyond the standard bibliographic information that is used to identify a work. They are important because they provide context: they tell the reader how the research was funded, what license it’s published under, and more about its authors via links to their ORCID profiles. 
And while this metadata is all available through our APIs, we also display much of it to readers through our Crossmark service.\n", "headings": ["Who’s in?","Some news on clicks and views","Web pages vs PDFs"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-simpler-text-query-form/", "title": "A simpler text query form", "subtitle":"", "rank": 1, "lastmod": "2019-04-30", "lastmod_ts": 1556582400, "section": "Blog", "tags": [], "description": "The Simple Text Query form (STQ) allows users to retrieve existing DOIs for journal articles, books, and chapters by cutting and pasting a reference or reference list into a simple query box. For years the service has been heavily used by students, editors, researchers, and publishers eager to match and link references.\nWe had changes to the service planned for the first half of this year - an upgraded reference matching algorithm, a more modern interface, etc. In the spirit of openness and transparency, part of our project plan was to communicate these pending changes to STQ users well in advance of our 30 April completion date. What would users think? Could they help us improve upon our plans?\n", "content": "The Simple Text Query form (STQ) allows users to retrieve existing DOIs for journal articles, books, and chapters by cutting and pasting a reference or reference list into a simple query box. For years the service has been heavily used by students, editors, researchers, and publishers eager to match and link references.\nWe had changes to the service planned for the first half of this year - an upgraded reference matching algorithm, a more modern interface, etc. In the spirit of openness and transparency, part of our project plan was to communicate these pending changes to STQ users well in advance of our 30 April completion date. What would users think? Could they help us improve upon our plans?\nAbout a month ago, I reached out to the 21,000 plus users we had on record of using STQ since January 2018. We received nearly 85 responses from the messages we sent. Questions ranged from: if we were making changes, would PubMed ID matching be supported? To: What about the reliability of the returned reference links? And: Could we better accommodate larger reference lists?\nMany of the users we heard from told us how STQ was critical to their work. I read all these messages. The concerns raised by users were legitimate and much appreciated. We reassessed our project timeline and plans, and decided to shift course. So, what are we doing?\nWhat’s changing? The previous hurdle of having to register your email address simply to return reference links was confusing and unnecessary. We removed it. We previously limited the number of monthly reference links to 5,000 per email address. Most didn’t reach the limit, but those who did were frustrated by it and/or found ways around it. We want you to match and register as many references as possible, so we removed the monthly limit too. Many of you with long reference lists found that you were occasionally reaching our limit of 30,000 characters per submission. Once again, we want you to match and register as many references as possible so we removed the character limit altogether and instead are just looking at the number of references per submission. We now provide space for 1,000 references per submission (We checked. The most references we have ever received via the STQ form in one submission was around 750. Thus, we rounded up.). We did make a change to the backend of the service. 
We updated the algorithm we use to return reference links. We think it’s an improvement. Let us know how you find it. What’s remaining the same? Core functionality. It\u0026rsquo;s all in the name. Retrieve DOIs for journal articles, books, and chapters by cutting and pasting a reference or reference list into a simple query box. PubMed ID matching. You use it. You need it. We’re keeping it. Deposits. You’ll still need an email address for this, but we won’t ask for it until you’re at the deposit screen. The interface. We’re still eager to give the user interface a much-needed refresh, but, as many users pointed out to us, there’s still some core functionality that’s important that we need to retain with any interface update. For instance, you need to be able to easily copy and paste reference links into your reference list. That functionality isn’t going anywhere. Resetting reference links. Submit references, match, reset, and repeat. Many users like the reset button. It’s not going anywhere either. XML queries The change to the backend of the service that I mentioned above is not confined to reference matching and depositing for STQ users. XML queries for reference matching are also now powered by that new backend. We think it’s a seamless transition, but if you find it is not, please let us know.\nI’m excited for these changes and hope you are too. I invite you to try the simpler and improved STQ form, and let us know what you think.\n", "headings": ["What’s changing?","What’s remaining the same?","XML queries"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/express-your-interest-in-serving-on-the-crossref-board/", "title": "Express your interest in serving on the Crossref board", "subtitle":"", "rank": 1, "lastmod": "2019-04-24", "lastmod_ts": 1556064000, "section": "Blog", "tags": [], "description": "The Crossref Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2019 election.\nThe board\u0026rsquo;s purpose is to provide strategic and financial oversight and counsel to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. The Board tends to review the strategic direction every few years, taking a landscape view of the scholarly communications community and trends that may affect Crossref\u0026rsquo;s mission.", "content": "The Crossref Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2019 election.\nThe board\u0026rsquo;s purpose is to provide strategic and financial oversight and counsel to the Executive Director and the staff leadership team, with the key responsibilities being:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. The Board tends to review the strategic direction every few years, taking a landscape view of the scholarly communications community and trends that may affect Crossref\u0026rsquo;s mission. 
In July 2017, the board and staff came up with four strategic themes and these have been developed into an organization-wide roadmap.\nThe board votes on any new policy or service that staff and committees propose if it is a departure from normal practice for Crossref. Some of the recent things the board has approved include:\nApproval of all the new terms of membership; broadening of the membership eligibility criteria to include non-publishers.\nInvolvement in the ROR.org initiative including community outreach, technical prototyping, and helping to explore governance options.\nApproval of a proposal for funders to join at a reduced annual fee; the registration of DOIs for research grants.\nAllocating $50,000 USD of the operating budget to research the community\u0026rsquo;s level of interest in a distributed usage service.\nSpecifying the Board makeup to include equal numbers of small and large members; reframing the election processes.\nWhat is expected of a Crossref Board member? Board members should be able to attend all board meetings, which occur three times a year in different parts of the world. If you are unable to attend in person you may send your named alternate as your proxy or attend via telephone.\nBoard members must:\nbe familiar with the three key responsibilities listed above;\nactively participate and contribute towards discussions; and\nread the board documents and materials provided, prior to attending meetings.\nHow to submit an expression of interest to serve on the Board We are seeking people who know about scholarly communications and would like to be part of our future. If you have experience on a governing board (as opposed to an operational board) and have a vision for the international Crossref community, we are interested in hearing from you.\nIf you are a Crossref member, are eligible to vote, and would like to be considered, you can complete and submit the expression of interest form with both your organization\u0026rsquo;s statement and your personal statement before 21 May 2019.\nIt is important to note that it is your organization that is the Crossref member\u0026mdash;and therefore the seat will belong to your organization.\nAbout the election and our Board We have a principle of \u0026ldquo;one member, one vote\u0026rdquo;; our board comprises a cross-section of members and it doesn\u0026rsquo;t matter how big or small you are, every member gets a single vote. Board terms are three years, and one third of the Board is eligible for election every year. There are five seats up for election in 2019, 4 large and 1 small.\nThe board meets in a variety of international locations in March, July, and November each year. View a list of the current Crossref Board members and a history of the decisions they\u0026rsquo;ve made (motions).\nThe slate will be decided by the Nominating Committee and interested parties will be informed if they have made the slate by July 15, 2019.\nThe election opens online in September 2019 and voting is done by proxy online; results will be announced at the annual business meeting during \u0026lsquo;Crossref LIVE19\u0026rsquo; on 13th November 2019 in Amsterdam, Netherlands. Election materials and instructions for voting will be available online to all Crossref members in September 2019.\nThe role of the Nominating Committee The Nominating Committee meets to discuss change, process, criteria, and potential candidates, ensuring a fair representation of membership. 
The Nominating Committee is charged with selecting a slate of candidates for election from those who have expressed an interest.\nThe selection of the slate (which might exceed the number of open seats) is based on the quality of the expressions of interest and the nominating committee\u0026rsquo;s review of the candidates in light of the board\u0026rsquo;s directive of maintaining an appropriately balanced and representative board. The nominating committee will prioritize maintaining representation of members having both commercial and non-commercial business models, in addition to continuing to seek balance across factors such as gender, ethnic and racial background, geography, and sector.\nThe Board voted in March 2019 that balance according to size (based on revenue tier) will be achieved by a 2019 slate consisting of one revenue tier 1 seat (small) and 4 revenue tier 2 seats (large), and a 2020 slate consisting of 4 revenue tier 1 seats and 2 revenue tier 2 seats (see Crossref\u0026rsquo;s amended Bylaws on the Crossref website).\nThe Committee is made up of three board members not up for election, and two non-board members. The current Nominating Committee members are:\nJasper Simons, APA (Chair);\nScott Delman, ACM;\nCatherine Mitchell, CDL;\nVincent Cassidy, The Institution of Engineering \u0026amp; Technology (IET); and\nClaire Moulton, The Company of Biologists.\nPlease submit your expression of interest or reply to me with any questions at lhart@crossref.org. This is your opportunity to help guide our wonderful organization!\n", "headings": ["What is expected of a Crossref Board member?","How to submit an expression of interest to serve on the Board","About the election and our Board","The role of the Nominating Committee"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/operations-and-sustainability/annual-report/", "title": "Annual report", "subtitle":"", "rank": 1, "lastmod": "2019-04-13", "lastmod_ts": 1555113600, "section": "Operations & sustainability", "tags": [], "description": "2019 The 2019 annual report is an expanded version we\u0026rsquo;ve called a \u0026ldquo;fact file\u0026rdquo;.\nCite as: \u0026ldquo;Crossref Annual Report \u0026amp; Fact File 2018-19\u0026rdquo;, retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/y8ygwm5\nDownload the PDF.\n2018 Download the PDF.\n2017 Download the PDF.\n2016 Download the PDF.\nPrevious annual reports 2014-15 annual report: PDF or Digital 2013-14 annual report: PDF or Digital 2012-13 annual report: PDF or Digital 2011-12 annual report: PDF or Digital 2010-11 annual report: PDF or Digital 2009-10 annual report: PDF or Digital 2008-09 annual report: PDF or Digital 2007-08 annual report: PDF or Digital Please contact our outreach team if you have any questions.", "content": "2019 The 2019 annual report is an expanded version we\u0026rsquo;ve called a \u0026ldquo;fact file\u0026rdquo;.\nCite as: \u0026ldquo;Crossref Annual Report \u0026amp; Fact File 2018-19\u0026rdquo;, retrieved [date], https://0-doi-org.libus.csd.mu.edu/10.13003/y8ygwm5\nDownload the PDF.\n2018 Download the PDF.\n2017 Download the PDF.\n2016 Download the PDF.\nPrevious annual reports 2014-15 annual report: PDF or Digital 2013-14 annual report: PDF or Digital 2012-13 annual report: PDF or Digital 2011-12 annual report: PDF or Digital 2010-11 annual report: PDF or Digital 2009-10 annual report: PDF or Digital 2008-09 annual report: PDF or Digital 2007-08 annual report: PDF or Digital Please contact our outreach team if you have any questions.\n", 
"headings": ["2019","Cite as:","2018","2017","2016","Previous annual reports"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/invoices/", "title": "Invoices", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/quarterly-deposit-invoices-avoiding-surprises/", "title": "Quarterly deposit invoices: avoiding surprises", "subtitle":"", "rank": 1, "lastmod": "2019-04-10", "lastmod_ts": 1554854400, "section": "Blog", "tags": [], "description": "Whenever we send out our quarterly deposit invoices, we receive queries from members who have registered a lot of backlist content, but have been charged at the current year’s rate. As the invoices for the first quarter of 2019 have recently hit your inboxes, I thought I’d provide a timely reminder about this in case you spot this problem on your invoice.\n", "content": "Whenever we send out our quarterly deposit invoices, we receive queries from members who have registered a lot of backlist content, but have been charged at the current year’s rate. As the invoices for the first quarter of 2019 have recently hit your inboxes, I thought I’d provide a timely reminder about this in case you spot this problem on your invoice.\nThis problem is usually the result of metadata being registered that makes it look as though the content was current, despite the fact that it was backlist. This post will show you what to do if you spot this problem in your latest invoice - and more importantly, how you can avoid this situation in the future.\nAbout current and backlist Content Registration fees There are different fees for registering content depending on whether it’s current (this year and the previous two years - 2017, ‘18, and ‘19) or backlist (older than that). As an example, it’s $1 each for a current journal article, and $0.15 for each backlist article. So, if you’ve incorrectly registered your content as published in 2019 when actually it was published in 2012, your quarterly invoice will overcharge you based on the metadata discrepancy.\nWe send you the quarterly deposit invoice at the end of each quarter. This example is an invoice for all deposits of the first quarter of 2018 for username ‘test’ - months January, February, and March. The BY code represents backlist (or, back year) content (journal article, in this example). Backlist content is charged at $0.15 per content item. The CY code represents current year content (journal article, in this example, although you can see that this invoice has charges for other content items as well). Current year content is charged at $1 per content item. Determining whether content is current or backlist A record is determined to be either a backlist or current year deposit based on the metadata that you deposit with us. If you use our helper tools - Metadata Manager or the web deposit form - the system looks at the information you’ve entered into the “publication date” field. If you deposit XML with us, it looks at the date in the \u0026lt;publication_date\u0026gt; element. 
And we look at each individual item separately—so even if you’ve put a publication date at journal level, you still need to put it at the journal article level too.\nAdditionally, sometimes we find that deposits mistakenly include the deposit date in place of the publication date. These two dates - the deposit date and the publication date - are not necessarily one and the same, especially if you are depositing backlist content. Please take care to double check this before you submit your deposit(s).\nWhat to do if you think you’ve registered the wrong publication date As you can only update a publication date by running a full redeposit, it’s important to get it right the first time. If you’ve registered the wrong publication date and have received an invoice for the wrong amount, please redeposit your content and then get in contact with us. If you do this as soon as you spot the error, we’ll be able to send a new invoice for the correct amount.\n", "headings": ["About current and backlist Content Registration fees","Determining whether content is current or backlist","What to do if you think you’ve registered the wrong publication date"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/sponsored-thank-you/", "title": "Thank you for your application", "subtitle":"", "rank": 1, "lastmod": "2019-03-28", "lastmod_ts": 1553731200, "section": "Become a member", "tags": [], "description": "Thanks for applying - you\u0026rsquo;re on your way! Here\u0026rsquo;s what happens next: Thanks for submitting your application to become a Crossref Member via a Sponsor. Our team will now check out your details, set you up and send your credentials over to your Sponsor. If you have any questions in the meantime, do contact your Sponsor.\nPlease note, if you’re already a member of Crossref moving to a Sponsor, we’ll send you any outstanding invoices you may have.", "content": "Thanks for applying - you\u0026rsquo;re on your way! Here\u0026rsquo;s what happens next: Thanks for submitting your application to become a Crossref Member via a Sponsor. Our team will now check out your details, set you up and send your credentials over to your Sponsor. If you have any questions in the meantime, do contact your Sponsor.\nPlease note, if you’re already a member of Crossref moving to a Sponsor, we’ll send you any outstanding invoices you may have. These will need to be paid before we can transfer you to your new sponsor. And if you’re already a Sponsored Member of Crossref with another Sponsor, there may be a delay while we ask for permission from your previous Sponsor to move you.\nPlease make sure you have read and understood: The member terms you\u0026rsquo;re agreeing to. We\u0026rsquo;re looking forward to your participation in our community!\n", "headings": ["Thanks for applying - you\u0026rsquo;re on your way!","Here\u0026rsquo;s what happens next:","Please make sure you have read and understood:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/simcheck-transition-thank-you/", "title": "Thank you for your request to upgrade", "subtitle":"", "rank": 1, "lastmod": "2019-03-28", "lastmod_ts": 1553731200, "section": "Become a member", "tags": [], "description": "Thanks for your request to upgrade We\u0026rsquo;ll check your details and make sure that you still meet the Similarity Check eligibility criteria.\nWe\u0026rsquo;ll then pass your request over to the team at Turnitin. They\u0026rsquo;ll triple check that they can still index 90% of your content. 
If there are any problems, they will contact your Similarity Check technical contact. If there are no problems, they will send over login details for v2 to your Similarity Check editorial contact.", "content": "Thanks for your request to upgrade We\u0026rsquo;ll check your details and make sure that you still meet the Similarity Check eligibility criteria.\nWe\u0026rsquo;ll then pass your request over to the team at Turnitin. They\u0026rsquo;ll triple check that they can still index 90% of your content. If there are any problems, they will contact your Similarity Check technical contact. If there are no problems, they will send over login details for v2 to your Similarity Check editorial contact.\n", "headings": ["Thanks for your request to upgrade"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/heres-to-year-one/", "title": "Here’s to year one!", "subtitle":"", "rank": 1, "lastmod": "2019-03-22", "lastmod_ts": 1553212800, "section": "Blog", "tags": [], "description": "Our Ambassador Program is now one year old, and we are thrilled at how the first 12 months have gone. In 2018 we welcomed 16 ambassadors to the team, based in Australia, Brazil, Colombia, India, Indonesia, Mexico, Nigeria, Peru, Russia, Singapore, South Korea, UAE, Ukraine, USA, and Venezuela.\nOur ambassadors are volunteers with a good knowledge of Crossref and the wider scholarly community, they are well connected and passionate about the work that we do.", "content": "Our Ambassador Program is now one year old, and we are thrilled at how the first 12 months have gone. In 2018 we welcomed 16 ambassadors to the team, based in Australia, Brazil, Colombia, India, Indonesia, Mexico, Nigeria, Peru, Russia, Singapore, South Korea, UAE, Ukraine, USA, and Venezuela.\nOur ambassadors are volunteers with a good knowledge of Crossref and the wider scholarly community, they are well connected and passionate about the work that we do. Participating in the ambassador program is complementary to people’s existing roles and enables those who already work with Crossref to have a mechanism to feed back to us and to provide support for their communities.\nWe reflected on the successes and challenges of the first 12 months and discovered quite a lot has been achieved so far.\nThe Ambassador Program better equips me to support researchers to conduct outreach and collaborate in multidisciplinary discovery!\n\u0026ndash; Woei Fuh Wong, Research 123, Singapore\nWithin the framework of the Ambassador Crossref program, I ran a seminar, webinar, and held several meetings in Ukrainian scientific organizations.\n\u0026ndash; Andrii Zolkover, Internauka, Ukraine\nIn my role as ambassador, I am able to provide a greater level of support in Russian. Alongside translated materials, we have also received over 400 tickets to our Russian electronic support system and made over 300 consultations by phone.\n\u0026ndash; Maxim Mitrofanov, NEICON, Russia\nBeing an ambassador has enabled me to increase knowledge of Crossref within my community.\n\u0026ndash; Edilson Damasio, Department of Mathematics Library of State University of Maringá-UEM, Brazil\nThe ambassador program has helped in vastly raising the awareness of Crossref and its services all over the world. Based in the Middle East, I see the need in the Arab region to know more about Crossref in their mother tongue (Arabic). 
The program has proven success and its positive impact is tangible.\n\u0026ndash; Mohamad Mostafa, Knowledge E, UAE\nHighlights Over the course of 2018, there were a number of big achievements which would not have been possible without the help of our ambassadors.\nDue to your feedback, we’re very keen to expand the level of multi-language support we offer our diversifying community. In addition to translating key messages, slide decks, and other educational materials, our ambassadors (and some members - thanks also!) helped us in the production of a series of short videos. We now have videos available for each of the Crossref services in nine languages including English, French, Spanish, Brazilian Portuguese, Chinese, Japanese, Korean, Arabic, and Bahasa Indonesia. You can see in the chart below that although our English videos have the most views (English is the default language), others have also attracted a lot of viewers; particularly notable are the Chinese and Spanish language videos. This underscores the importance of further support in non-English languages, as our series of multi-lingual webinars also demonstrated.\nIn 2018 we ran webinars in Russian, Brazilian Portuguese, Spanish and Arabic. Several ambassadors have taken the lead in running these webinars in their local languages with assistance from Crossref staff on producing materials and answering questions on the day. Spanish language webinars saw record numbers of attendees from a range of different countries, and our Russian webinar recordings have been viewed over 200 times. We will be continuing to offer more webinars in different time zones and languages, and the recordings are always available for anyone who can’t attend on the day.\nOur ambassadors have also been helping us improve and expand our LIVE local events. Last year we held events in Japan, South Africa, Russia, Germany, Brazil and India. Ambassadors help by providing recommendations on venues, accommodation, guest speakers, or even attending and speaking at the event themselves. Some run their own Crossref events, for which we can help provide materials, and some also represent Crossref at related industry events in their region. You may have had the chance to meet some of our ambassadors at our annual event in Toronto last November as well.\nAs our ambassadors are our representatives, acting as our eyes and ears in the wider community, it is important that they are kept up to date with new developments and have good opportunities to report back to us. The ambassador team has participated in beta-testing of a number of new initiatives including our new Metadata Manager and Participation Reports and our upcoming Community Forum. By providing feedback from their own user perspective and on how they anticipate those in their communities will view and use these tools, they enable us to get better insights into how an initiative might work before launching it more widely.\nFuture Plans Initial feedback on the program has been overwhelmingly positive, both from the ambassadors themselves and the wider Crossref community, so we’re looking at what we can do to hone the program over time. In 2019 we will be welcoming some more ambassadors to the team to further support our global community. We want to support our ambassadors, so we don’t foresee the group growing to the point where there are too many ambassadors for us to be able to engage with. 
You can read about the team and where they are based, as well as all about the new ambassadors we have welcomed so far this year, on Our Ambassadors page.\nThis year our ambassadors will be involved when we launch our online community forum (more to come on that soon). They’re already helping with the task-force that is advising on our new documentation, and we’ll be providing them with further training on Crossref tools and services. We also have more webinars and LIVE locals in the pipeline. Keep an eye on our webinars and events pages for more details as they come.\nSo a final thank you to our ambassador team - it has been great to work with you over the last year, and we look forward to how we can continue to work together!\n", "headings": ["Highlights","Future Plans"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/terms_2024/", "title": "Metadata Plus service agreement 2019-2024", "subtitle":"", "rank": 1, "lastmod": "2019-03-20", "lastmod_ts": 1553040000, "section": "Find a service", "tags": ["Terms"], "description": "This agreement was deprecated in 2024 and replaced by new terms of service. Background Crossref manages and maintains a database of digital identifiers assigned by Crossref (“Identifiers”) that are associated with specific items of professional and scholarly materials and content (collectively, “Content”). The database comprises metadata (collectively, “Metadata”) from Crossref members that describes, identifies and provides information (including bibliographic information, abstracts, and references) about—and points to the Internet location of—such Content.", "content": " This agreement was deprecated in 2024 and replaced by new terms of service. Background Crossref manages and maintains a database of digital identifiers assigned by Crossref (“Identifiers”) that are associated with specific items of professional and scholarly materials and content (collectively, “Content”). The database comprises metadata (collectively, “Metadata”) from Crossref members that describes, identifies and provides information (including bibliographic information, abstracts, and references) about—and points to the Internet location of—such Content. Metadata, including the Identifiers, are made available to subscribers of Crossref through Crossref’s service known as “Metadata Plus,” referred to in this Agreement as the “Service.” ​Subscription​. Subject to the terms of this Agreement, including the Description of Service set forth at​ ​https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/ and hereby incorporated by reference herein, Crossref grants Subscriber, during the Term, a non-exclusive, non-transferable, and non-sublicensable right to use and access the Service as set forth in this Agreement, such use to be solely for Subscriber’s internal business purposes and not for transfer, distribution, or disclosure to third parties or for the commercial benefit of third parties. The foregoing use limitation does not limit Subscriber’s rights with respect to the Metadata, which are set forth in Section 2 below. ​Metadata Rights and Limitations​. Subject to the terms of this Agreement, Crossref hereby grants Subscriber a fully-paid, non-exclusive, worldwide license for any and all rights necessary to use, reproduce, transmit, distribute, display and sublicense Metadata without restriction. ​Access​. Access to the Service is provided through the interfaces described in the Description of Service. 
Upon Crossref’s receipt of Subscriber’s Annual Fee in the applicable amount set forth in the Description of Service (the “Annual Fee”), Crossref will provide Subscriber with Access Credentials, as described in the Description of Service. ​Obligations of Subscriber​. Subscriber shall: Where Subscriber displays, refers to, or references Content, cause the corresponding Identifiers to be used as a means of linking to the Content; Display Identifiers in a manner consistent with the Crossref Display Guidelines set out at​ ​Crossref DOI display guidelines Take reasonable steps to protect the security of Subscriber’s Access Credentials; Not share the Access Credentials with any third party, other than a third party acting on Subscriber’s behalf, such as a third-party provider of Subscriber’s in-house IT services; Comply with all applicable copyright laws; and Comply with all other terms of this Agreement. ​Service Level Agreement​. The Service shall be provided in accordance with the Service Level Agreement set forth in the Description of Service (the “SLA”). Subscriber’s sole remedy in the event that Crossref fails to comply with the SLA (an “SLA Failure”) will be a service credit (a “Service Credit”) equal to 3% of the Annual Fee paid by Subscriber for the then-current term. To be eligible to receive a Service Credit, Subscriber must notify Crossref of Subscriber’s claim for a Service Credit within thirty (30) days of the end of the month in which an outage resulting in an SLA Failure occurs. Service Credits are credited against the Subscriber’s next applicable invoice amount due at renewal. Notwithstanding anything to the contrary contained in this Agreement, the aggregate amount of Service Credits to be paid to Subscriber in a given calendar year shall not exceed 15% of the Annual Fee actually paid by Subscriber on account of that calendar year. ​No Access to Full-Text Content.​ For the avoidance of doubt, this Agreement confers on Subscriber no rights to gain access to full-text content. ​Use of Marks.​ Crossref may use the Subscriber’s name(s) and mark(s) to identify the Subscriber as a user of the Service. Subscriber shall use commercially reasonable efforts to identify its use of Crossref Identifiers and Metadata by placing the Crossref mark or Crossref badges (without modification) on its website, by​ referencing the code provided on Crossref’s website. Subscriber may make other uses of Crossref’s trademark, or any other trademarks or trade names owned by Crossref (such as, by way of example but not limitation, in press releases, advertising, client lists or marketing materials) only with the prior written approval of Crossref. ​Term and Termination. The term of this Subscriber Agreement (the “Term”) will commence on the later of the date that Crossref (i) accepts Subscriber’s executed signature page to this Agreement and (ii) receives payment from Subscriber of Subscriber’s applicable Annual Fee or prorated portion thereof. The Term shall continue through December 31 of the then-current year, and shall thereafter automatically renew, under the terms of the then-most-recent version of the Subscriber Agreement (available at https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/terms/), for consecutive 12-month periods unless terminated earlier in accordance with the provisions of this Agreement. 
Either party may terminate the Agreement (i) without cause upon written notice given not later than thirty (30) days following the end of any calendar year during the Term, (ii) without cause at any time on ninety (90) days’ written notice, or (iii) at any time for material breach by the other party that remains uncured following thirty (30) days’ written notice, or is not reasonably capable of cure. Upon the termination or expiration of this Agreement: Subscriber’s access to Metadata Plus, including to all Identifiers and Metadata accessed through Metadata Plus, shall be terminated, and Subscriber’s Access Credentials shall be disabled; Subscriber shall promptly remove all references to Crossref’s name, logo, and all trademarks from Subscriber’s websites and services (except to the extent Subscriber is permitted to continue to display such references and trademarks pursuant to another agreement between Subscriber and Crossref, such as the Crossref Terms of Membership); and Subscriber shall not be entitled to any refund or proration of the Annual Fee paid for the year in which the termination occurs, ​**except that**​ in the case of a termination (x) by either party pursuant to clause 9(b)(i), (y) by Crossref pursuant to clause 9(b)(ii), or (z) by Subscriber pursuant to clause 9(b)(iii), Subscriber shall be entitled to a refund of the then-unused portion of any Annual Fee previously paid on account of the year in which such termination occurs. ​CROSSREF DISCLAIMERS; NO WARRANTIES. ​EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE SERVICE, METADATA AND IDENTIFIERS ARE MADE AVAILABLE “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. CROSSREF DOES NOT WARRANT, GUARANTEE OR MAKE ANY REPRESENTATIONS THAT THE SERVICE, METADATA OR IDENTIFIERS WILL MEET THE SUBSCRIBER’S PARTICULAR REQUIREMENTS OR THAT THE OPERATION OF CROSSREF’S WEBSITE OR OF OTHER TOOLS MADE AVAILABLE TO SUBSCRIBER WILL BE UNINTERRUPTED OR ERROR-FREE. SUBSCRIBER ASSUMES THE ENTIRE RISK AS TO THE RESULTS AND PERFORMANCE OF THE METADATA AND IDENTIFIERS. CROSSREF’S AGENTS AND EMPLOYEES ARE NOT AUTHORIZED TO MODIFY WARRANTIES OR REPRESENTATIONS, OR THE DISCLAIMERS THEREOF, OR TO MAKE ADDITIONAL WARRANTIES OR REPRESENTATIONS BINDING ON CROSSREF. ACCORDINGLY, ADDITIONAL STATEMENTS, WHETHER WRITTEN OR ORAL, DO NOT CONSTITUTE, AND SHOULD NOT BE RELIED ON AS, WARRANTIES OF CROSSREF. ​Ownership.​ Except as set forth herein and without limiting Section 2 above, nothing in this Agreement gives Subscriber any rights (including copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed) to any Metadata or Identifiers. For avoidance of doubt, the terms of this Section 10 are not intended to, and do not, affect, transfer, or limit any existing rights of Subscriber with respect to Metadata belonging to Subscriber. ​​Injunctive Relief.​ Subscriber acknowledges that the unauthorized use of the Service or any Metadata or Identifiers would cause Crossref irreparable harm that could not be compensated by monetary damages. 
Accordingly, Subscriber agrees that Crossref may seek temporary, preliminary and permanent injunctive relief without the posting of a bond or security to remedy any actual or threatened unauthorized use of the Service (including any Metadata or Identifier), in addition to any other damages to which Crossref is entitled. ​​Limitation of Liability​. IN NO EVENT SHALL CROSSREF OR ANYONE ELSE WHO HAS BEEN INVOLVED IN THE CREATION, PRODUCTION OR DELIVERY OF METADATA OR IDENTIFIERS BE LIABLE OR RESPONSIBLE FOR ANY LOSS OR INACCURACY OF DATA OF ANY KIND NOR FOR ANY LOST PROFITS, LOST SAVINGS, OR ANY OTHER DIRECT OR INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL OR EXEMPLARY DAMAGES ARISING OUT OF OR RELATED IN ANY WAY TO THE USE OR INABILITY TO USE THE SERVICE (INCLUDING IDENTIFIERS AND METADATA), EVEN IF CROSSREF OR ITS AGENTS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL APPLY TO ANY CLAIM OR CAUSE WHATSOEVER, WHETHER SUCH CLAIM OR CAUSE IS IN CONTRACT, TORT, OR OTHERWISE. WITHOUT LIMITING THE FOREGOING, NEITHER PARTY SHALL BE LIABLE TO THE OTHER FOR SPECIAL, INCIDENTAL, CONSEQUENTIAL OR PUNITIVE DAMAGES OF ANY NATURE, FOR ANY REASON, INCLUDING, WITHOUT LIMITATION, THE BREACH OF THIS AGREEMENT OR ANY TERMINATION OF THIS AGREEMENT, WHETHER SUCH LIABILITY IS ASSERTED ON THE BASIS OF CONTRACT, TORT (INCLUDING NEGLIGENCE OR STRICT LIABILITY) OR OTHERWISE, EVEN IF THE OTHER PARTY HAS BEEN WARNED OF THE POSSIBILITY OF SUCH DAMAGES. EXCEPT (I) WITH RESPECT TO THE OBLIGATIONS SET FORTH IN SECTION 13 BELOW (INDEMNIFICATION) OR (II) IN THE CASE OF SUCH PARTY’S FRAUD, WILLFUL MISCONDUCT, OR VIOLATION OF LAW, IN NO EVENT SHALL EITHER PARTY’S AGGREGATE LIABILITY TO THE OTHER EXCEED AN AMOUNT EQUAL TO THREE (3) TIMES THE AMOUNT ACTUALLY PAID AND/OR PAYABLE BY SUBSCRIBER UNDER THIS AGREEMENT. ​​Indemnification by Subscriber.​ Subscriber, at its own expense, shall indemnify, defend and hold harmless Crossref, and its officers, directors, employees, and agents, from and against any claim, demand, cause of action, debt or liability, costs and expenses, including without limitation reasonable attorneys’ fees, arising out of any use by Subscriber of, or a third party’s gaining access through Subscriber to, the Service, the Metadata, or Identifiers. Subscriber will not make any representations, warranties or guarantees to any third parties (including Subscriber’s customers and potential customers) regarding Crossref’s services or products, including the Service and the Metadata, except to the extent specifically set forth in written sales and marketing documentation provided to Subscriber by Crossref. ​​No Assignment; Relationship of Parties. Subscriber may not assign, subcontract or sublicense this Agreement (or any portion thereof) without the prior written consent of Crossref, except that Subscriber may, without such consent, assign this Agreement to any of its affiliates. Any attempted assignment in violation of the foregoing sentence shall be void. This Agreement will not create or be deemed to create any agency, partnership, employment relationship, or joint venture between Crossref and Subscriber, who are independent contractors. Subscriber shall not have any right, power or authority to enter into any agreement for or on behalf of, or incur any obligation or liability of, or to otherwise bind, Crossref. ​​Notices​. 
Written notice under this Agreement shall be given as follows: If to Crossref: by emailing ​agreements@crossref.org If to Subscriber: To the name and email address provided by Subscriber as the Subscriber Business Contact upon Subscriber’s application to use the Service. This information may be changed by the Member by giving notice to Crossref by email at​​plus@crossref.org ​​Governing Law, Jurisdiction​. This Agreement shall be interpreted, governed and enforced under the laws of New York, USA, without regard to its conflict of law rules. All claims, disputes and actions of any kind arising out of or relating to this Agreement shall be settled in Boston, Massachusetts, USA, and the parties hereby consent to the personal jurisdiction of the courts of Massachusetts, USA. ​​Compliance​. Each of Subscriber and Crossref shall perform under this Agreement in compliance with all laws, rules, and regulations of any jurisdiction which is or may be applicable to its respective business and activities, including anti-corruption, copyright, privacy, and data protection laws, rules, and regulations. ​​General Provisions​. If any provision of this Agreement (or any portion thereof) is determined to be invalid or unenforceable, the remaining provisions of this Agreement will not be affected thereby and will be binding upon the parties and will be enforceable, as though said invalid or unenforceable provision (or portion thereof) were not contained in this Agreement. No delay or omission by either party to exercise any right hereunder shall impair such right or be construed as a waiver thereof, and a waiver by a party of any covenant or breach of the other party shall not be construed as a waiver of any succeeding covenant or breach. The headings of the sections and subsections used in this Agreement are included for convenience only and are not to be used in construing or interpreting this Agreement. This Agreement, including the Display Guidelines and the Description of Services, sets forth the entire agreement between the parties hereto with respect to the subject matter hereof, and supersedes any prior or contemporaneous oral or written agreements with respect thereto. The “Background” section at the beginning of this Agreement forms a part of this Agreement and is incorporated by reference herein. ​​Amendment​. This Agreement may not be amended or modified except in writing signed by both parties hereto; except that Crossref: May from time to time, upon notice to Subscriber, amend the Display Guidelines and Description of Service (other than the applicable Annual Fee and SLA, which are addressed in the following clause (b)), and Subscriber shall be obligated to comply with such changes within (90) days from receipt of such notice; and May, no more than once per year, amend the schedule of Annual Fees for the Service and/or the SLA, by notice to Subscriber delivered not later than November 1 of the then-current calendar year, with such changes to take effect on January 1 of the following calendar year. Notwithstanding any revision to the Description of Service from time to time, the Description of Service will, at all times during the Term, at a minimum provide for Subscriber’s access to substantially all Metadata in the Crossref database, including, to the extent permitted by the applicable Crossref member, references. ​​Counterparts​. 
This Agreement may be executed in counterparts by each party and delivered by electronic transmission, and such delivery shall be legally binding on the parties to the same extent as if original signatures in ink were delivered in person.", "headings": ["Background"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/agreement/", "title": "Old Metadata Plus service agreement", "subtitle":"", "rank": 1, "lastmod": "2019-03-20", "lastmod_ts": 1553040000, "section": "Find a service", "tags": ["Terms"], "description": "This agreement was deprecated in 2019 and replaced by new terms of service. Subscriber Agreement Crossref Metadata APIs Plus Service These terms and conditions and the other documents incorporated by reference below constitute the Subscriber Agreement between Publishers International Linking Association, Inc., doing business as Crossref (“Crossref”) and the Subscriber identified below.\nBackground. Crossref collects, manages, maintains and updates on an ongoing basis a database of the digital identifiers assigned by Crossref (“Crossref DOIs”) that are associated with specific content items (“Content”).", "content": " This agreement was deprecated in 2019 and replaced by new terms of service. Subscriber Agreement Crossref Metadata APIs Plus Service These terms and conditions and the other documents incorporated by reference below constitute the Subscriber Agreement between Publishers International Linking Association, Inc., doing business as Crossref (“Crossref”) and the Subscriber identified below.\nBackground. Crossref collects, manages, maintains and updates on an ongoing basis a database of the digital identifiers assigned by Crossref (“Crossref DOIs”) that are associated with specific content items (“Content”). The database includes metadata (collectively, “Metadata”) from certain publishers (“Crossref Member Publishers”) that describes, identifies and provides information about the Content and that point to the location of certain Content on the Internet (“Enabled Content”). The Crossref DOIs and Metadata for Enabled Content are made available to authorized Subscribers of Crossref through one or more interfaces provided as part of “Crossref Metadata Service Plus”. Crossref Metadata Service Plus is described in more detail in the Description of Service available at https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/.\nAccess and Fee Schedule. Access to Crossref Metadata Service Plus is provided through the interfaces described in the Description of Service. The Description of Service may be revised from time to time by Crossref, but will at a minimum provide that Subscriber will have access to all Metadata in the Crossref database, including (where permitted by the Crossref Member Publisher) publisher references. Subscriber will pay to Crossref the applicable Annual Fee for Crossref Metadata Service Plus set forth as part of the Description of Service. Upon receipt of the Annual Fee, Crossref will provide Subscriber with Access Credentials as described in the Description of Service. Crossref reserves the right to manage the traffic at its sites by, if Crossref determines it to be necessary in its sole discretion, imposing rate limits on users, including Subscriber. In the event such limits are imposed, Crossref Subscribers getting access to Crossref Metadata Service Plus pursuant to an Subscriber Agreement will be prioritized over users of Crossref free services.\nUse of Metadata. 
Except as provided in Section 4, Crossref places no restrictions on the use or reuse of the Metadata acquired by Subscriber pursuant to this Subscriber Agreement.\nObligations of Subscriber. Subscriber acknowledges and agrees that (i) where Subscriber displays standard bibliographic metadata for Enabled Content the corresponding Crossref DOIs will be used as a nonexclusive means of linking to the Enabled Content; (ii) Subscriber will display Crossref DOIs in a manner consistent with the Crossref Display Guidelines set out at https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy; (iii) Subscriber will take reasonable steps to protect the security of Access Credentials provided by Crossref and will not share the Access Credentials with third parties; (iv) Subscriber will comply with the copyright laws of the countries in which the relevant services are available; and (v) Subscriber will comply with the terms of this Agreement.\nObligations of Crossref. Crossref agrees to comply with the Service Level Agreement set forth in the Description of Service. Subscriber’s sole remedy in the event that Crossref fails to maintain the uptime specified in Description of Service will be to claim a credit against the invoice amount due at renewal, which credit (i) must be claimed by the user within thirty (30) days of the end of the month in which an outage exceeding the guaranteed uptime is reported; (ii) will be equal to 10% of the annual fee paid by Subscriber for the then-current term; and (iii) will not in any event exceed 30% of such annual fee.\nNo Access to Full Content. Any rights granted by this Subscriber Agreement to access the Crossref DOIs and Metadata do not include rights to crawl or otherwise gain access to full text publisher content.\nUse of Crossref Trademarks. Subscriber may use the Crossref Trademarks solely to identify Crossref as the source of the Crossref DOIs and Metadata. Subscriber shall use the Crossref Trademarks in the form supplied and approved by Crossref and in accordance with the Crossref Trademark Guidelines available at https://0-www-crossref-org.libus.csd.mu.edu/brand/. The logo may be referenced from https://0-assets-crossref-org.libus.csd.mu.edu/logo/metadata-from-crossref-logo-200.svg. Crossref will notify the Subscriber by email of any changes to the Trademark Guidelines and Subscriber shall conform to the revised guidelines within 3 months of notification. Subscriber may only make other uses of the Crossref Trademark, or any other trademarks or trade names owned by Crossref (such as, by way of example but not limitation, in press releases, advertising, client lists or marketing materials) with the prior written approval of Crossref.\nJoint Marketing. Crossref and Subscriber agree to cooperate in marketing efforts and each will request approval from the other for any joint marketing efforts including but not limited to marketing collateral, web sites, press releases, webinars, or other materials, such approval not to be unreasonably withheld.\nTerm and Termination. The Subscriber Agreement will commence on the later of the date that Crossref (i) accepts the Subscriber Agreement and (ii) receives payment from Subscriber of the applicable Annual Fee. 
The term shall continue through December 31 of the current year, and shall thereafter be automatically renewed under the terms of the then-most-recent version of the Subscriber Agreement (which will be available at https://0-www-crossref-org.libus.csd.mu.edu/services/metadata-retrieval/metadata-plus/terms/) for consecutive 12–month periods. Either party may terminate the Agreement (i) without cause within thirty (30) days of the end of the current year, or (ii) at any time for material breach that remains uncured following thirty (30) days written notice or is not reasonably capable of cure, or (iii) without cause at any time on ninety (90) days written notice. After the termination (for any reason) or expiration of this Agreement, (i) Subscriber’s access to the Crossref DOIs and Metadata through Crossref Metadata Service Plus shall be terminated and (ii) Subscriber shall remove all references to the Crossref Trademarks from its websites and services. Except in the case of a termination without cause within thirty (30) days after the beginning of a new 12-month term, if Subscriber terminates the Agreement Subscriber shall not be entitled to any refund or pro-ration of the Fees paid for the year in which the termination occurs.\nCROSSREF DISCLAIMERS; NO WARRANTIES. THE METADATA AND CROSSREF DOIs ARE PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. CROSSREF DOES NOT WARRANT, GUARANTEE OR MAKE ANY REPRESENTATIONS THAT THE METADATA OR CROSSREF DOIs WILL MEET THE SUBSCRIBER’S PARTICULAR REQUIREMENTS OR THAT THE OPERATION OF CROSSREF’S WEB SITE OR OF OTHER TOOLS PROVIDED ACCESS FOR SUBSCRIBER WILL BE UNINTERRUPTED OR ERROR FREE. SUBSCRIBER ASSUMES THE ENTIRE RISK AS TO THE RESULTS AND PERFORMANCE OF THE METADATA AND CROSSREF DOIs. CROSSREF NEITHER GIVES NOR MAKES ANY WARRANTIES OR REPRESENTATIONS UNDER OR PURSUANT TO THIS CONTRACT.\nOwnership. Subject only to the specific permissions contained in this Agreement, Subscriber may not assert, provide, transfer, acquire or retain any rights (including all related copyrights, database compilation rights, trademarks, trade names, and other intellectual property rights, currently in existence or later developed) in any Metadata or Crossref DOIs.\nInjunctive Relief. Subscriber acknowledges that the Metadata and Crossref DOIs are unique and of great value and the unauthorized use of any of the foregoing could cause Crossref irreparable harm that could not be compensated by monetary damages. Accordingly, Subscriber agrees that Crossref may seek temporary, preliminary and permanent injunctive relief without the posting of a bond or security to remedy any actual or threatened unauthorized use any of the foregoing in addition to any other damages Crossref can demonstrate.\nLimitation of Liability. IN NO EVENT SHALL CROSSREF OR ANYONE ELSE WHO HAS BEEN INVOLVED IN THE CREATION, PRODUCTION OR DELIVERY OF THE METADATA OR CROSSREF DOIs BE LIABLE OR RESPONSIBLE FOR ANY LOSS OR INACCURACY OF DATA OF ANY KIND NOR FOR ANY LOST PROFITS, LOST SAVINGS, OR ANY OTHER DIRECT OR INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL OR EXEMPLARY DAMAGES ARISING OUT OF OR RELATED IN ANY WAY TO THE USE OR INABILITY TO USE ANY OF THE FOREGOING, EVEN IF CROSSREF OR ITS AGENTS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 
THIS LIMITATION OF LIABILITY SHALL APPLY TO ANY CLAIM OR CAUSE WHATSOEVER WHETHER SUCH CLAIM OR CAUSE IS IN CONTRACT, TORT, OR OTHERWISE. THE AGENTS AND EMPLOYEES OF CROSSREF ARE NOT AUTHORIZED TO MODIFY WARRANTIES OR REPRESENTATIONS, OR THE DISCLAIMERS THEREOF, OR TO MAKE ADDITIONAL WARRANTIES OR REPRESENTATIONS BINDING ON CROSSREF. ACCORDINGLY, ADDITIONAL STATEMENTS, WHETHER WRITTEN OR ORAL, DO NOT CONSTITUTE WARRANTIES OF CROSSREF AND SHOULD NOT BE RELIED UPON AS A WARRANTY OF CROSSREF. WITHOUT LIMITING THE FOREGOING, NEITHER PARTY SHALL BE LIABLE TO THE OTHER FOR SPECIAL, INCIDENTAL, CONSEQUENTIAL OR PUNITIVE DAMAGES OF ANY NATURE, FOR ANY REASON, INCLUDING, WITHOUT LIMITATION, THE BREACH OF THE AGREEMENT OR ANY TERMINATION OF THIS AGREEMENT, WHETHER SUCH LIABILITY IS ASSERTED ON THE BASIS OF CONTRACT, TORT (INCLUDING NEGLIGENCE OR STRICT LIABILITY) OR OTHERWISE, EVEN IF THE OTHER PARTY HAS BEEN WARNED OF THE POSSIBILITY OF SUCH DAMAGES. EXCEPT WITH RESPECT TO SECTION 14 BELOW, IN NO EVENT SHALL EITHER PARTY’S AGGREGATE LIABILITY TO THE OTHER EXCEED AN AMOUNT EQUAL TO THREE (3) TIMES THE AMOUNT ACTUALLY PAID AND/OR PAYABLE BY SUBSCRIBER UNDER THIS AGREEMENT.\nIndemnification by Subscriber. Subscriber, at its own expense, shall indemnify, defend and hold harmless Crossref, and its officers, directors, employees, and agents, from and against any claim, demand, cause of action, debt or liability, costs and expenses, including without limitation reasonable attorneys’ fees, arising out of any unauthorized use by Subscriber of, or a third party’s gaining access through Subscriber to, the Metadata and Crossref DOIs. Subscriber will not make any representations, warranties or guarantees to customers or potential customers regarding Crossref’s services or products except as specifically set forth in the written sales and marketing documentation that may be provided to Subscriber by Crossref.\nNo assignment; relationship of parties. Neither party shall have the right, without the prior written consent of the other party, which consent shall not be unreasonably withheld or delayed, to assign this Agreement or any portion thereof. Neither party is an agent, representative, or partner of the other party. Neither party shall have any right, power or authority to enter into any agreement for or on behalf of, or incur any obligation or liability of, or to otherwise bind, the other party. This Agreement shall not be interpreted or construed to create an association, agency, joint venture or partnership between the parties or to impose any liability attributable to such a relationship upon either party. Notices. All notices, requests, demands and other communications that are required or may be given under this Agreement shall be in writing and shall be deemed to have been duly given when received if personally delivered; when transmitted if transmitted by facsimile, electronic or digital transmission method with electronic confirmation of receipt; the day after it is sent, if sent for next-day delivery to a domestic address by recognized overnight delivery service (e.g., FedEx); and upon receipt, if sent by certified or registered mail, return receipt requested. Notice to Crossref shall be sent to Crossref, 50 Salem Street, Lynnfield MA 01940, USA, Attention: Chief Operating Officer; fax: 781 295-0072; email: agreements@crossref.org. Notice to Subscriber shall be sent to the name and address provided by Subscriber as the Subscriber Business Contact.\nGoverning Law, Jurisdiction. 
This Agreement shall be construed and enforced in accordance with the laws of the State of New York, without giving effect to the principles of conflicts of law. All disputes and/or legal proceedings arising out of or relating to this Agreement shall be maintained in courts located in New York, New York. The parties consent to the personal jurisdiction of said courts, and agree to accept service as provided by the “Notices” section above.\nGeneral Provisions. The invalidity or unenforceability of one or more provisions of this Agreement shall not affect the validity or enforceability of any of the other provisions hereof, and this Agreement shall be construed in all respects as if such invalid or unenforceable provisions were omitted. No delay or omission by either party hereto to exercise any right or power hereunder shall impair such right or power or be construed to be a waiver thereof. A waiver by either of the parties hereto of any of the covenants to be performed by the other or any breach thereof shall not be construed to be a waiver of any succeeding breach thereof or of any other covenant herein contained. The headings of the sections and subsections used in this Agreement are included for convenience only and are not to be used in construing or interpreting this Agreement. This Agreement, including the Description of Services, Fee Schedule and Trademark Guidelines referenced above, sets forth the entire agreement between the parties hereto with respect to the subject matter hereof and supersedes any and all prior agreements, understandings, promises and representations made by either party to the other concerning the subject matter hereof and the terms applicable hereto. This Agreement may not be released, discharged, amended or modified in any manner except by an instrument in writing signed by both parties hereto; provided however that Crossref (i) may from time-to-time make changes in the Description of Service (other than in the Annual Fee and Service Level Agreement sections thereof), Display Guidelines and Trademark Guidelines, in which event notice of such changes will be provided to the Subscriber by email to the address for receipt of notices provided to Crossref by the Subscriber and Subscriber shall be obligated to comply with such changes within (90) days from receipt of such notice; and (ii) may, not more often than once per year, make changes in the Annual Fee and the Service Level Agreement set forth in the Description of Service, such changes to take effect on January 1 of the next calendar year. 
This Agreement may be executed in counterparts by each party and delivered by electronic transmission, and such delivery shall be legally binding on the parties to the same extent as if original signatures in ink were delivered in person.\nBY DELIVERING A COPY OF THIS AGREEMENT TO CROSSREF WITH THE ELECTRONIC SIGNATURE AND INFORMATION DESCRIBED BELOW, Subscriber IS CONFIRMING THAT IT HAS READ AND ACCEPTED THE TERMS OF THIS AGREEMENT AND ALL DOCUMENTS INCORPORATED THEREIN, AND THAT THE INDIVIDUAL SIGNING THE AGREEMENT HAS THE AUTHORITY TO ACT ON BEHALF OF AND TO BIND SUBSCRIBER.\n", "headings": ["Subscriber Agreement Crossref Metadata APIs Plus Service"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/before-during-and-after-a-journey-through-title-transfers/", "title": "Before, during, and after - a journey through title transfers", "subtitle":"", "rank": 1, "lastmod": "2019-02-25", "lastmod_ts": 1551052800, "section": "Blog", "tags": [], "description": "In January, I wrote about how we’ve simplified the journal title transfer process using our new Metadata Manager tool. For those disposing publishers looking for an easy, do-it-yourself option for transferring ownership of your journal, I suggest you review that blog post. But, whether you choose to process the transfer yourself via Metadata Manager or need some help from Paul, Shayn, or myself, there’s more to a transfer than just the click of a transfer button or the submission of an email to support@crossref.org, as I’m sure those of you who have been through a title transfer can attest.\n", "content": "In January, I wrote about how we’ve simplified the journal title transfer process using our new Metadata Manager tool. For those disposing publishers looking for an easy, do-it-yourself option for transferring ownership of your journal, I suggest you review that blog post. But, whether you choose to process the transfer yourself via Metadata Manager or need some help from Paul, Shayn, or myself, there’s more to a transfer than just the click of a transfer button or the submission of an email to support@crossref.org, as I’m sure those of you who have been through a title transfer can attest.\nPrepping your title transfer Sometimes members get on the other side of a title transfer and find you’re encountering problems even if you followed the process for transferring titles. You might find you can register new content for the new title against your own prefix without any issues. But you are not able to update the metadata for backfile content after we’ve made the transfer.\nWhen we investigate, the problem is usually that the DOIs you’re trying to update don’t exist in our system yet. This means the deposit isn’t considered an update to the content, it’s considered a new deposit. And you don’t have permission to do that, since you’re effectively attempting to register new content to a prefix that is not your own.\nThis problem is because the former publisher didn’t ever register the DOIs with us - even though they’ve been displaying them on their website. This is bad practice and isn’t in keeping with our membership terms, but it does sometimes happen.\nBefore you request a title transfer, do check with the former publisher that they’ve definitely registered all the DOIs that they’ve been displaying and distributing to their readership. You can spot check this yourself by following a few of the DOI links and checking that they resolve to the right place. 
If you want a full list of DOIs registered to a journal title, our depositor reports are the place to start. Depositor Reports list all DOIs deposited for a title on a publisher-by-publisher basis. Or, alternatively, if you know the journal cite ID, the unique internal, Crossref identifier for the journal, you can bypass the publisher-by-publisher title list (in my example you’d need to replace my fictional 123456 journal ID with your journal’s cite ID):\n`http://0-data-crossref-org.libus.csd.mu.edu/depositorreport?pubid=J123456` Top tips for a pain-free title transfer If your organization has gained new titles, you’ve checked the depositor report for your new journal and are happy that all the existing DOIs have been registered, then you’re ready to process the transfer. Here are three key steps to ensure a pain-free transfer.\nIf you are not acquiring all existing journal articles as part of this transfer, you’ll need to contact us at support@crossref.org to confirm the details. Once we have those details sorted, we\u0026rsquo;ll transfer ownership for the select, specified articles.\nCarefully check the existing metadata associated with your new titles - some metadata provided for text and data mining or Similarity Check are publisher-specific and must be updated or removed when content is acquired by another member.\nIf the metadata supplied is fine, you just need to update the URLs to direct DOIs to your content. You can do this by sending us a URL update file or by redepositing the metadata with the correct URLs.\nIf you need to update more than the URLs, you should redeposit the metadata with the correct information plus the correct URLs.\nNote: If you, as the disposing publisher, are prepared to transfer your journal to an acquiring publisher, and would like to transfer ownership of the journal and all existing journal articles, please try your new title transfer via Metadata Manager. On the other side If you follow the steps I’ve outlined above, you should get to the other side of your title transfer with few problems and are likely to encounter smooth metadata seas ahead. That said, some of our members follow these steps to a tee and still are faced with occasional transfer-related problems.\nPerhaps the previous journal owner used a different scheme to assign timestamps and now you’re receiving mysterious timestamp errors when you deposit. Or, that same previous owner made a mistake with a previous deposit and accidentally submitted more than one journal title record. Or, you encounter a strange, new error in Metadata Manager when working with your new titles (yes, we’re still in beta!). 
If so, please reach out to us at support@crossref.org and we’ll help solve what are surely confounding problems, since you’ve undoubtedly read this post in its entirety and taken heed of the above advice.\nAs always, if you have questions, need guidance as you’re working through this process, or have recommendations on how we can improve title transfers, please contact us at support@crossref.org.\n", "headings": ["Prepping your title transfer","Top tips for a pain-free title transfer","On the other side"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/title-transfers/", "title": "Title Transfers", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/work-through-your-pid-problems-on-the-pid-forum/", "title": "Work through your PID problems on the PID Forum", "subtitle":"", "rank": 1, "lastmod": "2019-02-21", "lastmod_ts": 1550707200, "section": "Blog", "tags": [], "description": "As self-confessed PID nerds, we’re big fans of a persistent identifier. However, we’re also conscious that the uptake and use of PIDs isn’t a done deal, and there are things that challenge how broadly these are adopted by the community.\nAt PIDapalooza (an annual festival of PIDs) in January, ORCID, DataCite and Crossref ran an interactive session to chat about the cool things that PIDs allow us to do, what’s working well and, just as importantly, what isn’t, so that we can find ways to improve and approaches that work.", "content": "As self-confessed PID nerds, we’re big fans of a persistent identifier. However, we’re also conscious that the uptake and use of PIDs isn’t a done deal, and there are things that challenge how broadly these are adopted by the community.\nAt PIDapalooza (an annual festival of PIDs) in January, ORCID, DataCite and Crossref ran an interactive session to chat about the cool things that PIDs allow us to do, what’s working well and, just as importantly, what isn’t, so that we can find ways to improve and approaches that work.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ror-announces-the-first-org-id-prototype/", "title": "ROR announces the first Org ID prototype", "subtitle":"", "rank": 1, "lastmod": "2019-02-10", "lastmod_ts": 1549756800, "section": "Blog", "tags": [], "description": "What has hundreds of heads, 91,000 affiliations, and roars like a lion? If you guessed the Research Organization Registry community, you\u0026rsquo;d be absolutely right!\nLast month was a big and busy one for the ROR project team: we released a working API and search interface for the registry, we held our first ROR community meeting, and we showcased the initial prototypes at PIDapalooza in Dublin.\nWe\u0026rsquo;re energized by the positive reception and response we\u0026rsquo;ve received and we wanted to take a moment to share information with the community.", "content": "What has hundreds of heads, 91,000 affiliations, and roars like a lion? 
If you guessed the Research Organization Registry community, you\u0026rsquo;d be absolutely right!\nLast month was a big and busy one for the ROR project team: we released a working API and search interface for the registry, we held our first ROR community meeting, and we showcased the initial prototypes at PIDapalooza in Dublin.\nWe\u0026rsquo;re energized by the positive reception and response we\u0026rsquo;ve received and we wanted to take a moment to share information with the community. Here are the links to our latest work, a recap of everything that happened in Dublin, some of the next steps for the project, and how the community can continue to be involved.\n🎉 Ta da! The first ROR prototype The Research Organization Registry (ROR) is finally here! We\u0026rsquo;re thrilled to officially announce the launch of our ROR MVR (minimum viable registry). The MVR consists of the following components, which are ready for anyone to use right now.\nROR IDs: Starting with seed data from GRID, ROR has begun assigning unique identifiers to approximately 91,000 organizations in its registry. ROR IDs include a random, unique, and opaque 9-character string and are expressed as URLs that resolve to the organization\u0026rsquo;s record. For instance, here is the ROR ID for California Digital Library: https://ror.org/03yrm5c26\nSearch: We also built a search interface to look up organizations in the registry: https://ror.org/search.\nROR records: ROR IDs are stored with additional metadata about the organization, such as alternate names/abbreviations, external URLs (e.g., an organization\u0026rsquo;s official website), and other identifiers, such as Wikidata, ISNI, and the Open Funder Registry. This metadata will allow ROR to be interoperable with other identifiers and across different systems. The current schema is based on GRID\u0026rsquo;s dataset and we plan to incorporate other metadata fields over time and according to community needs. API: The ROR API is now public. You can access the JSON files at https://api.ror.org/organizations.\nOpenRefine reconciler: We\u0026rsquo;ve released an OpenRefine reconciler that can map your internal identifiers to ROR identifiers: https://github.com/ror-community/ror-reconciler.\nDocumentation: We have begun storing documentation on Github and will be adding more as we go along. Please feel free to follow and contribute: https://github.com/ror-community.\nCommunity meeting recap On January 22, 60+ representatives from across the research and publishing community gathered in Dublin to see what the ROR project team has been up to, demo the first prototypes in action, and discuss where we want to go next - and, of course, to practice ROR-ing together.\n\u0026lt;img src=\u0026quot;/images/blog/pride-of-lions.jpg\u0026quot; alt=“ROR-ing lions Dublin 2019\u0026quot; height=\u0026ldquo;300px\u0026rdquo; class=\u0026ldquo;img-responsive\u0026rdquo;\u0026gt;\nIn the second half of the meeting, attendees split into discussion groups to identify specific aspirations for ROR and brainstorm concrete actions needed to achieve these goals, focusing on the main use case of exposing and capturing all research outputs of a given institution. 
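If you'd like to try the new API from a script before reading on, here is a minimal sketch using Python and the requests library. It is illustrative only (not an official ROR client) and assumes the launch-era behaviour described above: a `query` parameter on https://api.ror.org/organizations and JSON results whose `items` carry an `id` and `name`.

```python
import requests

ROR_API = "https://api.ror.org/organizations"

def search_ror(name: str, limit: int = 5):
    """Look up organizations in the ROR registry by name."""
    response = requests.get(ROR_API, params={"query": name}, timeout=30)
    response.raise_for_status()
    items = response.json().get("items", [])
    # Each record carries the ROR ID (a resolvable URL) plus the primary name.
    return [(item["id"], item["name"]) for item in items[:limit]]

if __name__ == "__main__":
    for ror_id, org_name in search_ror("California Digital Library"):
        print(ror_id, org_name)
```

Because ROR IDs are expressed as URLs, individual records can also be reached simply by resolving the ID itself, as with the California Digital Library example above.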
The proposed ideas covered a spectrum of possibilities for ROR, highlighting the following themes:\nROR as seamlessly-integrated and sometimes invisible infrastructure Integration between and within existing systems (and in new ones!)\nAuto-detection of ROR IDs for example in manuscript tracking and funding application platforms\nAs such, researchers don\u0026rsquo;t ever have to be responsible for knowing what a ROR is and using it appropriately - the systems they use will do this for them.\nROR as a critical piece of funder workflows and infrastructure Demonstrate to funders how ROR can help them analyze impact of research they fund\nConduct outreach with key international funders, especially those interested in open infrastructure\nMake funders aware of ROR and encourage them to adopt and mandate use of ROR IDs - involve funders at the beginning to collaborate on technology\nIntegrate ROR with existing systems and identifiers already in use by funders and other stakeholders\nROR as a trusted registry, collaborative partner, and responsible steward Culturally sensitive, inclusive, and respectful of what countries are already doing with regard to organizational identifiers, partnering with national bodies working on this and mapping ROR IDs to locally used identifiers.\nInvolve the institutions listed in the registry early on as well as CRIS systems\nInteroperability with existing communities and governance bodies\nWorkflows to support trust and responsible management of organizational metadata, with policies and procedures for long-term curation and maintenance of records\nWhat we\u0026rsquo;re hearing Now that the ROR MVR is here, we\u0026rsquo;re hearing some really good questions about the data we\u0026rsquo;re capturing, how it can be used, and how we\u0026rsquo;ll be maintaining the registry going forward. We wanted to take a moment to respond to some of these questions.\nWhat is the criteria for being listed in ROR? What is a \u0026ldquo;research organization\u0026rdquo;? We define the notion of \u0026ldquo;research organization\u0026rdquo; quite broadly as any organization that conducts, produces, manages, or touches research. This is in line with ROR\u0026rsquo;s stated scope, which is to address the affiliation use case and be able to identify which organizations are associated with which research outputs. We use \u0026ldquo;affiliation\u0026rdquo; to describe any formal relationship between a researcher and an organization associated with researchers, including but not limited to their employer, educator, funder, or scholarly society.\nWill ROR map organizational hierarchies? No - ROR is focused on being a top-level registry of organizations so we can address the fundamental affiliation use case, and provide a critical source of metadata that can interoperate with other institutional identifiers.\nROR IDs are cool - what can I do with them? Now that we have built our MVR, we will be working to incorporate ROR IDs into relevant pieces of the scholarly communication infrastructure. If you are a publisher, funder, metadata provider, research office, or anyone else interested in capturing affiliations, please get in touch with us to discuss how we might coordinate. If you are a developer, you are welcome to start playing around with the API: https://api.ror.org/organizations.\nThere\u0026rsquo;s an error in my organization\u0026rsquo;s ROR record \u0026mdash; can you fix it? 
For the time being, please email info@ror.org to request an update to an existing record in ROR or request that a new record be added. We will formalize our data management policies and procedures in the next stage of the project.\nWhat is ROR\u0026rsquo;s relationship to other organizational identifiers? For ROR to be useful, it needs to augment the current offerings in a way that is open, trusted, complementary, and collaborative, and not intentionally competitive. We are committed to providing a service that the community finds helpful and not duplicative, and enables as many connections as possible between organization records across systems.\nI have my own dataset of institutional affiliations \u0026mdash; can I give it to ROR? We are always happy to hear about other efforts to capture affiliation data. Please get in touch with us to discuss how we might coordinate.\nCan ROR support multiple languages and character sets? GRID already supports multiple languages and character sets, so by extension ROR will have this enabled as well. Here is one example: https://ror.org/01k4yrm29.\nHow will ROR handle curation, i.e., updating records if an organization changes its name or ceases to exist? The curation and long-term management of records will be a cornerstone of our efforts in 2019 and we hope to release a working set of policies and procedures soon.\nWhat\u0026rsquo;s next for ROR Now that we have our MVR, what happens next for ROR? We\u0026rsquo;re eager to sustain the momentum from January\u0026rsquo;s stakeholder meeting at the same time we know there are some longer-term plans to put in place, and so we\u0026rsquo;re looking at both some immediate tasks as well as bigger-picture questions.\nProduct development We have a few to-do items on our list following the launch of the MVR to keep everything running smoothly while we develop a comprehensive long-term product roadmap.\nRewrite some of the code for both the API and the OpenRefine reconciler\nAddress a few bugs in our repos\nProvide guidance for troubleshooting issues\nCommunicate our processes for users to request changes, report bugs, and suggest features\nAs a reminder, you can access the existing code in Github: https://github.com/ror-community\nPolicy development We\u0026rsquo;ve been emphasizing here and in community conversations that our primary focus now turns to formulating policies and procedures to ensure the successful management of ROR data over the long term. This is something we can\u0026rsquo;t (and shouldn\u0026rsquo;t) do on our own \u0026mdash; we want to work with community stakeholders to develop the right solutions and establish the right frameworks. We understand the urgency of firming up these policies, but we are also aware that something this important can take time to complete and is not something to rush into lightly.\nCommunity development To help guide the next stages of the project, we are putting out an open call for participation in the ROR community advisory group. Advisory group members will be involved in giving input on data management, testing out new features, giving feedback on the product roadmap, and discussing ideas for events and outreach. We plan to convene this advisory group through bimonthly calls and asynchronous communication channels through the end of the year. We hope you will consider joining us! 
Please email info@ror.org if you are interested.\nFor those who want to stay informed about the project but not necessarily be part of the advisory group, you have other options!\nSign up for our mailing list (via the footer at ror.org)\nJoin our community on Slack (www.tinyurl.com/ror-community),\nFollow us on Twitter (@ResearchOrgs).\nYou can also always drop us a line at info@ror.org, and let us know if you\u0026rsquo;d ever like to set up a meeting or conference call to talk about the project in more detail.\nFinal thoughts Community engagement has been vital to ROR\u0026rsquo;s beginnings and will likewise be critically important for the next steps that we take. As both a registry of identifiers and a community of stakeholders involved in building open scholarly infrastructure, ROR depends on guidance and involvement at multiple levels. Thank you for being part of the journey thus far, and for joining us on the road that lies ahead. 🦁\n", "headings": ["🎉 Ta da! The first ROR prototype","Community meeting recap","ROR as seamlessly-integrated and sometimes invisible infrastructure","ROR as a critical piece of funder workflows and infrastructure","ROR as a trusted registry, collaborative partner, and responsible steward","What we\u0026rsquo;re hearing","What is the criteria for being listed in ROR? What is a \u0026ldquo;research organization\u0026rdquo;?","Will ROR map organizational hierarchies?","ROR IDs are cool - what can I do with them?","There\u0026rsquo;s an error in my organization\u0026rsquo;s ROR record \u0026mdash; can you fix it?","What is ROR\u0026rsquo;s relationship to other organizational identifiers?","I have my own dataset of institutional affiliations \u0026mdash; can I give it to ROR?","Can ROR support multiple languages and character sets?","How will ROR handle curation, i.e., updating records if an organization changes its name or ceases to exist?","What\u0026rsquo;s next for ROR","Product development","Policy development","Community development","Final thoughts"] }, { "url": "https://www.crossref.org/blog/request-for-feedback-on-grant-identifier-metadata/", "title": "Request for feedback on grant identifier metadata", "subtitle":"", "rank": 1, "lastmod": "2019-02-07", "lastmod_ts": 1549497600, "section": "Blog", "tags": [], "description": "We first announced plans to investigate identifiers for grants in 2017 and are almost ready to violate the first rule of grant identifiers which is “they probably should not be called grant identifiers”. Research support extends beyond monetary grants and awards, but our end goal is to make grants easy to cite, track, and identify, and ‘Grant ID’ resonates in a way other terms do not. The truth is in the metadata, and we intend to collect (and our funder friends are prepared to provide) information about a number of funding types.", "content": "We first announced plans to investigate identifiers for grants in 2017 and are almost ready to violate the first rule of grant identifiers which is “they probably should not be called grant identifiers”. Research support extends beyond monetary grants and awards, but our end goal is to make grants easy to cite, track, and identify, and ‘Grant ID’ resonates in a way other terms do not. The truth is in the metadata, and we intend to collect (and our funder friends are prepared to provide) information about a number of funding types. 
Hopefully we encompass all of them.\nOur technical \u0026amp; metadata working group (a subset of the broader Funder Advisory Group) includes folks from Children\u0026rsquo;s Tumor Foundation, Europe PMC, European Research Council, JST, OSTI (DOE), Smithsonian, Swiss National Science Foundation, UKRI, Wellcome, as well as colleagues at DataCite and ORCID.\nThey have provided a wealth of funding data and feedback, and together we’ve come up with a metadata schema that works for us. Just as important - does this set of metadata meet your needs? Did we miss something? Let us know.\nThe details For those of you familiar with Crossref Content Registration, Grant IDs will have their own dedicated schema that differs from our publication schema. The Grant ID schema will follow some of the same conventions as we’ll be using the same system to process the files (which will be XML) but since we are collecting metadata for a new community and moving beyond published content, this is an opportunity to rethink how we handle some basics like person names and dates.\nEach Grant ID can be assigned to multiple projects. The metadata within each project includes basics like titles, descriptions, and investigator information (including affiliations) as well as funding information. Funders will supply funder information (including funder identifiers from the Crossref Funder Registry) as well as information about funding types and amounts.\nA major accomplishment of the group was to develop a simple taxonomy of types of funding. Supported types are:\naward contract grant salary-award endowment secondment loan facilities equipment seed-funding fellowship training-grant other Funding involves more than monetary grants or awards and we’ve attempted to capture the broad categories of funding types. This list is taken from types of funding as defined by our participating funder organizations. We anticipate this list will evolve over time.\nReady to dig in? The schema and documentation are available on GitHub. We will actively take feedback until the end of February 2019. We hope to begin implementation soon after that. Please let us know what you think through GitHub, or feel free to contact me via feedback@crossref.org.\n", "headings": ["The details"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/underreporting-of-matched-references-in-crossref-metadata/", "title": "Underreporting of matched references in Crossref metadata", "subtitle":"", "rank": 1, "lastmod": "2019-02-05", "lastmod_ts": 1549324800, "section": "Blog", "tags": [], "description": "TL;DR About 11% of available references in records in our OAI-PMH \u0026amp; REST API don\u0026rsquo;t have DOIs when they should. We have deployed a fix, but it is running on billions of records, and so we don’t expect it to be complete until mid-April.\nNote that the Cited-by API that our members use appears to be unaffected by this problem.\n", "content": "TL;DR About 11% of available references in records in our OAI-PMH \u0026amp; REST API don\u0026rsquo;t have DOIs when they should. We have deployed a fix, but it is running on billions of records, and so we don’t expect it to be complete until mid-April.\nNote that the Cited-by API that our members use appears to be unaffected by this problem.\nThe gory details When a Crossref member registers metadata for a publication, they often include references. Sometimes the member will also include DOIs in the references, but often they don’t. 
When they don’t include a DOI in the reference, Crossref tries to match the reference to metadata in the Crossref system. If we succeed, we add the DOI of the matched record to the reference metadata. If we fail, we append the reference to an ever-growing list which we re-process on an ongoing basis.\nYou may have seen that the R\u0026amp;D team has been doing work to improve our reference matching system. We will soon be rolling out a new reference matching process that will increase recall significantly.\nBut while testing our new reference matching approach, we started to see inconsistent results with our existing legacy reference matching system. When we implemented new regression tests, we noticed that, even when using our legacy system, we were consistently getting better results than were reflected in the metadata we exposed via our APIs. For example, we would pick a random Crossref DOI record that included 3 matched references, and when we tried matching all the references in the record again using our existing technology, we would get more matched references than were reported in the metadata.\nAt first, we thought this might have something to do with sequencing issues. For example, that article A might cite article B, but somehow article A would get its DOI registered with Crossref prior to article B. In this theoretical case, we would initially fail to match the reference, but it would eventually get matched as we continued to reprocess our unmatched references. But this wasn’t the issue. And the problem was not with the matching technology we are using. Instead, we discovered a problem with the way we process references on deposit.\nWhen a member deposits references with Crossref, each reference has to include a member-defined key that is unique to each reference they are depositing in the DOI record. When we match a reference, we report to the members that we matched the reference with key X to DOI Y. The problem is that sometimes members would deposit references with an empty key. If there was only one such reference, then, technically, it would pass our test for making sure the key was unique within the record. So we would process the reference, and match it, and report it via our Cited-by service, but later in the process, when we went to include the matched DOI in the reference section of our API metadata, we’d skip including DOIs for references that had blank keys. The reference itself would be included in the metadata, it would just appear that we hadn’t matched it to a DOI when we actually had.\nAgain, we estimate that this has resulted in about 11% of the references in our metadata missing matched DOIs. We are processing our references again and inserting the correctly matched DOIs in the metadata. We expect the process to complete in mid-April. We will keep everybody up-to-date on the progress of this fix.\nWe will also be integrating the new matching system that we’ve developed. 
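If you are curious how this looks for records you care about, here is a minimal sketch (illustrative only, not our internal tooling) that fetches a single work from the public REST API and counts how many of its references carry a matched DOI. It assumes the depositing member has made references openly available, so that the `reference` array appears in the response.

```python
import requests

WORKS_API = "https://api.crossref.org/works/"

def reference_match_counts(doi: str):
    """Count how many references of one Crossref record carry a matched DOI."""
    response = requests.get(WORKS_API + doi, timeout=30)
    response.raise_for_status()
    references = response.json()["message"].get("reference", [])
    matched = sum(1 for ref in references if ref.get("DOI"))
    return matched, len(references)

if __name__ == "__main__":
    # Substitute any DOI whose member has made its references open.
    matched, total = reference_match_counts("10.5555/12345678")
    print(f"{matched} of {total} references have a matched DOI")
```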
As mentioned at the start of this post, this matching system will also increase the recall rate of our reference matching and so, the two changes combined, should result in users seeing a significant increase in the number of matched references included in Crossref metadata.\nAnd finally, as part of the work that we are doing to improve our reference matching, we are putting a comprehensive testing framework that will make it easier for us to detect inconsistencies and/or regressions in our reference matching.\nPlease contact Crossref support with any questions or concerns.\n", "headings": ["TL;DR","The gory details"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/christine-cormack-wood/", "title": "Christine Cormack Wood", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-crossref-metadata-is-helping-bring-migration-research-in-europe-under-one-roof/", "title": "How Crossref metadata is helping bring migration research in Europe under one roof", "subtitle":"", "rank": 1, "lastmod": "2019-01-29", "lastmod_ts": 1548720000, "section": "Blog", "tags": [], "description": "Conflict, instability and economic conditions are just some of the factors driving new migration into Europe—and European policy makers are in dispute about how to manage and cope with the implications. Everyone agrees that in order to respond to the challenges and opportunities of migration, a better understanding is required of what drives migration towards Europe, what trajectories and infrastructures facilitate migration, and what the key characteristics of different migrant flows are, in order to inform and improve policy making.\n", "content": "Conflict, instability and economic conditions are just some of the factors driving new migration into Europe—and European policy makers are in dispute about how to manage and cope with the implications. Everyone agrees that in order to respond to the challenges and opportunities of migration, a better understanding is required of what drives migration towards Europe, what trajectories and infrastructures facilitate migration, and what the key characteristics of different migrant flows are, in order to inform and improve policy making.\nThe abstract above is taken from the successful Horizon 2020[1] project proposal called CrossMigration, an initiative of IMISCOE, Europe’s largest migration research network, in which a consortium of 15 universities, think tanks and international organizations, led by Erasmus University Rotterdam is currently designing a Migration Research Hub. The Hub is a web-based platform aimed at helping researchers and policymakers get a quick and comprehensive overview on research in the field of migration studies. This platform will also feature reports on specific fields, methodological briefing papers and other relevant content produced by the consortium.\nThe core of this Hub will consist of a database providing access to publications, research projects and datasets on migration drivers, and infrastructures, flows, and policies on current and future migration questions, indicators and scenarios. 
And that’s where our metadata story starts.\nAt the tail end of December I had the pleasure of speaking to the four researchers and developers working on this database; Vienna-based researchers Roland Hosner and Meike Palinkas from the International Centre for Migration Policy and Development (ICMPD), Bogdan Taut, CEO of YoungMinds, in Bucharest, Romania, and Nathan Levy, currently studying for his PhD at Erasmus University Rotterdam, Department of Public Administration and Sociology, Netherlands.\nThere are four of you, can each of you give me a very brief introduction to yourselves and how you fit into project? Bogdan: I’m from YoungMinds, based in Bucharest in Romania. We were the last to join the consortium as the technical developer on the project. I am the project manager of the team, coordinating the technical development of the database.\nRoland: I am a research officer with the International Centre for Migration Policy Development (ICMPD) in Vienna, and we are leading a part of this research project which deals with the population and implementation of the research database—which is core to the Migration Research Hub, and to the whole project.\nMeike: I am also a research officer at ICMPD and work together with Roland. I joined the team in September this year.\nNathan: I’m part of the coordinating team of the overall project of CrossMigration. We are coordinating putting together the Migration Research Hub, the biggest part of which is the migration database. I am based at Erasmus University in Rotterdam and I work for Professor Peter Scholten who is the overall coordinator of the whole project along with Dr. Asya Pisarevskaya.\nHow long has the project been in progress? Roland: It’s a two-year project than runs from March 2018 to the end of February 2020.\nSo it’s a two-year project and you are 10 months in—that makes it nearly at the halfway mark. Have you encountered any stumbling blocks that have held you back? Bogdan: How to put this in a diplomatic way? We are all working around the clock to meet the deadline that we set ourselves and promised to deliver by. We have made the decision to produce the database in stages—very soon we will have the beta version out, so we have something to present. Then we are going to continue populating it with more items from every record type – journal articles, datasets, books, book chapters, reports etc.. At this point the other partners in the consortium can actually use it and work with it to map the fields and find the most recent and relevant literature on their respective subtopics such as migration drivers or migration infrastructures. In the summer when we are confident that it is a sound and attractive tool to be released, we will make it publicly available.\nNathan: In terms of specific deliverables for the project so far, our team has developed a taxonomy for migration research to give the fields a logical structure, and to structure this research database.\nHow has Crossref metadata contributed to your project? Bogdan: We began by discussing all of the sources that need to be in the database and we put together an inventory of publishers, books and book chapters, etc., that would be relevant. Part of the scope of work for YoungMinds was to find ways of extracting information and relevant content from those sources. Once we started to dig into the content we found out that there are relevant aggregators, such as Scopus, Crossref, Web of Science and so on. 
We actually found Crossref through a recommendation from Scopus, someone there said ‘OK Crossref might be able to help you more’. Then Crossref became one of our main sources for metadata—in terms of basic metadata related to some types of content we gather for our database, such as journals and journal articles.\nRoland: The more we moved forward, the more we saw how difficult it was to get in touch with each publisher individually, with each journal individually, to try and secure an agreement with them. So, it became very clear to us very quickly that we would not be able to create a properly inclusive database this way and we knew we had to look for partners and make use of existing resources. As we progressed from one conversation to the next we received a lot of advice, and that’s how we found out about Crossref. It soon became clear that Crossref was the ideal source for us because everything that has a DOI can be found in there. We knew if we had an agreement with Crossref then our project is half won, our database is halfway built, perhaps even more. And, then we just need to fill the gaps.\nNathan: Yes, this is one of Crossref’s key strengths—rather than having individual researchers or individual projects go to each publisher to try to find the appropriate people to talk to and negotiate—you use Crossref.\nWhich of the metadata values are important to you, what do you extract? Roland: We thought about this a lot at the beginning, what we wanted to include. There are certain key things that are indisputably relevant—such as titles, names of the authors, editors, the year, DOI, dataset and so on, because we always link to the original source—the publisher’s website, or the journal article website. Ideally we would include keywords and abstracts (where they are available) because the richer the information the better. We also wanted to classify the items we have according to the taxonomy the CrossMigration project has established.\nNathan: In addition, abstracts and keywords have value for us. We want to apply a logical structure into the taxonomy on migration research, but we need content in order to do that. We need something for the algorithms that YoungMinds have developed to read to in order categorize research accordingly. The body of research on migration is so great and we cannot read through every abstract that’s ever been published on migration. That’s where the value of abstracts and keywords comes in for the Taxonomizer (as we fondly refer to it!).\nWhat else would you like to see in the REST API that isn’t there? Roland: More abstracts! We love abstracts!\nBogdan: Our data schema contains more fields, so we need more metadata than we can find from Crossref and other sources. Basically, the publisher’s website would produce the richest data, but it is the hardest to read. We are on a quest to find more sources because our algorithm works better if it has more information.\nOnce it’s complete, what are your plans to roll it out to the wider world? Bogdan: IMISCOE is the leading organization of this consortium and it is in touch with most of the migration experts in Europe, so we already have all the contacts of the relevant people in the field.\nMeike: It’s a tool for helping the community, so once we have all the relevant content inside it, we believe that word will spread relatively easily.\nHave you all actually met in person? Roland: Yes! 
Myself and Nathan met at the project kick-off meeting in Rotterdam in March 2018, then we met at a conference in Florence in June that was partly for the consortium but also had other invited experts and scholars. That was where we met face-to-face for the first time—it was just after we signed with YoungMinds for the IT services. And we recently met at another joint conference of IMISCOE and CrossMigration called 'Towards the IMISCOE Research Infrastructure of the Future'.\n[1] Horizon 2020, the biggest EU Research and Innovation programme ever with nearly €80 billion of funding available.\nGreat speaking to you all and learning a bit about this important project that will help policymakers manage and cope with the implications of migration—and may possibly even help them find ways to influence it.\nIf you\u0026rsquo;d like to share how you use our Metadata APIs please contact the Community team.\n", "headings": ["There are four of you, can each of you give me a very brief introduction to yourselves and how you fit into project?","How long has the project been in progress?","So it’s a two-year project and you are 10 months in—that makes it nearly at the halfway mark. Have you encountered any stumbling blocks that have held you back?","How has Crossref metadata contributed to your project?","Which of the metadata values are important to you, what do you extract?","What else would you like to see in the REST API that isn’t there?","Once it’s complete, what are your plans to roll it out to the wider world?","Have you all actually met in person?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/zen-and-the-art-of-platform-migration/", "title": "Zen and the Art of Platform Migration", "subtitle":"", "rank": 1, "lastmod": "2019-01-28", "lastmod_ts": 1548633600, "section": "Blog", "tags": [], "description": "Nowadays we’re all trying to eat healthier, get fitter, be more mindful and stay in the now. You think you’re doing a good job — perhaps you’ve started a yoga class or got a book on mindfulness. And then, wham! Someone in your organization casually mentions they’re planning a platform migration. I can sense the panic from here.\n", "content": "Nowadays we’re all trying to eat healthier, get fitter, be more mindful and stay in the now. You think you’re doing a good job — perhaps you’ve started a yoga class or got a book on mindfulness. And then, wham! Someone in your organization casually mentions they’re planning a platform migration. I can sense the panic from here.\nWhile the [Holmes and Rahe Stress Scale] (https://www.stress.org/holmes-rahe-stress-inventory/) doesn’t include platform migration as one of the top ten most stressful life events, we hear from our members that it should probably be in there somewhere. There’s so much to think about and plan for - how do you know you’re choosing the right platform partners for the future? How can you be sure that your understanding of what they offer really matches what you need? Will it make it easier for your readers to access your content? What about delays? What if it all breaks on changeover day?\nGaaaaah!\nWith all that to think about, worrying about whether your DOIs will resolve and what the migration will mean for the quality of your Crossref metadata just seems like an unnecessary layer of stress. It is, however, very important to consider this - even before you start thinking about who your platform partners will be. 
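As an aside, picking up the CrossMigration team's point about abstracts in the interview above: for anyone who wants to try the kind of REST API query they describe, here is a minimal sketch in Python. It is illustrative only, assumes the public /works endpoint's `has-abstract` filter and JATS-style `abstract` field behave as documented, and is not the project's actual harvesting code.

```python
import requests

WORKS_API = "https://api.crossref.org/works"

def works_with_abstracts(query: str, rows: int = 5):
    """Fetch a few works matching a query that also expose an abstract."""
    params = {"query": query, "filter": "has-abstract:true", "rows": rows}
    response = requests.get(WORKS_API, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["message"]["items"]

if __name__ == "__main__":
    for item in works_with_abstracts("migration drivers"):
        title = (item.get("title") or ["<untitled>"])[0]
        # Abstracts, where deposited, come back as JATS-flavoured XML strings.
        print(item["DOI"], "|", title, "|", len(item.get("abstract", "")), "chars of abstract")
```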
The process of working through these things up front could help you make better decisions, and set you up for success with the project and into the future.\nSo, to help you plan ahead, we’ve created a [platform migration guide] (/service-providers/migrating-platforms/) that offers guidance on things like:\nWhat to consider even before you start selecting a new service provider Planning the change over process The change over itself (and what that means for your URLs) What you should do after the migration is complete The guide gives advice on how to plan for what you really need right now, and what you’re going to need in the future. For example, what metadata are you going to want to register with us and share with the thousands of industry organizations that make use of the data? What other Crossref services might benefit you in the future? What different record types are in your publishing plans?\nThe guide also has a [handy checklist] (/education/member-setup/working-with-a-service-provider/checklist-for-platform-migration/) which you can include in your Request For Proposal documentation, to ensure that you’re asking the right questions of potential suppliers.\nOnce you’ve read the [platform migration guide] (/service-providers/migrating-platforms/), [let us know] (mailto:feedback@crossref.org) if there’s anything else you think we should add to it - we’re sure many of you have platform migration stories, and it’s good to share!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-can-often-change-but-always-stays-the-same/", "title": "What can often change, but always stays the same?", "subtitle":"", "rank": 1, "lastmod": "2019-01-24", "lastmod_ts": 1548288000, "section": "Blog", "tags": [], "description": "Hello. Isaac here again to talk about what you can tell just by looking at the prefix of a DOI. Also, as we get a lot of title transfers at this time of year, I thought I’d clarify the difference between a title transfer and a prefix transfer, and the impact of each.\n", "content": "Hello. Isaac here again to talk about what you can tell just by looking at the prefix of a DOI. Also, as we get a lot of title transfers at this time of year, I thought I’d clarify the difference between a title transfer and a prefix transfer, and the impact of each.\nWhen you join Crossref, you are provided with a unique prefix, you then add suffixes of your choice to your prefix and this creates the DOIs for your content.\nIt’s a logical step then to assume you can tell just by looking at a DOI prefix who the current publisher is—but that’s not always the case. Things can (and often do) change. Individual journals get purchased by other publishers, and whole organizations get bought and sold.\nWhat you can tell from looking at a DOI prefix is who originally registered it, but not necessarily who it currently belongs to. That’s because if a journal (or whole organization) is acquired, DOIs don’t get deleted and re-registered to the new owner. The update will of course be reflected in the relevant metadata, but the prefix itself will stay the same. It never changes—and that’s the whole point, that’s what makes the DOI persistent.\nHere’s a breakdown of how this works internally at Crossref:\nTitle transfers Member A acquires a single title from member B. We transfer the title (and all relevant reports) over to member A. Member A must then register new content for that journal on their own prefix. 
The existing (newly acquired) DOIs maintain the ‘old’ prefix but member A can update metadata against these existing DOIs for that journal. Backfile and current DOIs for that journal may, therefore, have different prefixes—and that’s OK!\nOrganization transfers Member C acquires member D. We move the entire prefix (and all relevant reports) over to Member C, and close down Member D’s account with Crossref. Member C can continue to register DOIs on member D’s prefix (the original prefix) if they want to, or they can use their own existing prefix. So again, backfile and current DOIs for that journal may have different prefixes.\nAnd by the way, if Member C uses a service provider to register metadata on their behalf, we will simply enable their username to work with the prefix.\nIt’s now easier to transfer titles We\u0026rsquo;ve recently made the process of transferring journal titles a lot easier with our new Content Registration tool, Metadata Manager.\n", "headings": ["Title transfers","Organization transfers","It’s now easier to transfer titles"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/myth-busting-in-mumbai/", "title": "Myth busting in Mumbai", "subtitle":"", "rank": 1, "lastmod": "2019-01-22", "lastmod_ts": 1548115200, "section": "Blog", "tags": [], "description": "In December, Crossref’s Head of Metadata, Patricia Feeney and I headed to Mumbai for our first ever LIVE local event in India, held in collaboration with Editage.\nCrossref membership in India has escalated in recent years, with a fifth of its 500 members joining in 2017 alone. Around 40% of these new members are smaller organizations who joined through one of the eight sponsors we currently have in the country.", "content": "In December, Crossref’s Head of Metadata, Patricia Feeney and I headed to Mumbai for our first ever LIVE local event in India, held in collaboration with Editage.\nCrossref membership in India has escalated in recent years, with a fifth of its 500 members joining in 2017 alone. Around 40% of these new members are smaller organizations who joined through one of the eight sponsors we currently have in the country.\nWith such a large increase in membership numbers, it seemed timely to visit and meet both our new and longer-standing members face-to-face. Our LIVE local events provide a great opportunity for us to learn what challenges our members in the community face, so we can understand how to best meet their needs. It also gives us a chance to explain in detail how to benefit from the services we offer, as well as keep them informed about any future developments. 
A special thanks goes to Editage for all their help in organizing, promoting, and running this event with us.\n", "headings": ["Myth #1: Crossref is a mark of publisher and content quality","Myth #2: Crossref archives content","Myth #3: Crossref provides impact factors","Myth #4: Crossref charges to make updates or corrections to the metadata associated with a DOI","Myth #5: Crossref charges for failed deposits","Myth #6: You need to have separate prefixes to register different record/resource types","Myth #7: DOI resolutions are how many DOIs you have registered","Myth #8: Crossref own the plagiarism software used in Similarity Check"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/whats-that-doi/", "title": "What’s that DOI?", "subtitle":"", "rank": 1, "lastmod": "2019-01-21", "lastmod_ts": 1548028800, "section": "Blog", "tags": [], "description": "This is a long overdue followup to 2016\u0026rsquo;s \u0026ldquo;URLs and DOIs: a complicated relationship\u0026rdquo;. Like that post, this accompanies my talk at PIDapalooza, the festival of open persistent identifiers). I don\u0026rsquo;t think I need to give a spoiler warning when I tell you that it\u0026rsquo;s still complicated. But this post presents some vocabulary to describe exactly how complicated it is. Event Data has been up and running and collecting data for a couple of years now, but this post describes changes we made toward the end of 2018.", "content": "This is a long overdue followup to 2016\u0026rsquo;s \u0026ldquo;URLs and DOIs: a complicated relationship\u0026rdquo;. Like that post, this accompanies my talk at PIDapalooza, the festival of open persistent identifiers). I don\u0026rsquo;t think I need to give a spoiler warning when I tell you that it\u0026rsquo;s still complicated. But this post presents some vocabulary to describe exactly how complicated it is. Event Data has been up and running and collecting data for a couple of years now, but this post describes changes we made toward the end of 2018.\nIf Event Data is new to you, you can read about its development in other blog posts and the User Guide. Today I\u0026rsquo;ll be describing a specific but important part of the machinery: how we match landing pages to DOIs.\nSome background Our Event Data service provides you with a live database of links to DOIs, found from across the web and social media. Data comes from a variety of places, and most of it is produced by Agents operated by Crossref. We have Agents monitoring Twitter, Wikipedia, Reddit, Stack Overflow, blogs and more besides. It is a sad truth that the good news of DOIs has not reached all corners of world, let alone the dustiest vertices of the world wide web. And even within scholarly publishing and academia, not everyone has heard of DOIs and other persistent identifiers.\nOf course, this means that when we look for links to content-that-has-DOIs, what we at Crossref call \u0026lsquo;registered content\u0026rsquo;, we can\u0026rsquo;t content ourselves with only looking for DOIs. We also have to look for article landing pages. These are the pages you arrive at when you click on a DOI, the page you\u0026rsquo;re on when you decide to share an article.\nHalf full or half empty? So we\u0026rsquo;re trying to track down links to these landing pages, rather than just DOIs. You could look at this two ways.\nThe glass-half-empty view would be that it\u0026rsquo;s a real shame people don\u0026rsquo;t use DOIs. Don\u0026rsquo;t they know that their links aren\u0026rsquo;t future-proof? 
Don\u0026rsquo;t they know that DOIs allow you to punch the identifier into other services?\nThe glass-half-full view is that it\u0026rsquo;s really exciting that people outside the traditional open identifier crowd are interacting with the literature. We\u0026rsquo;ve been set a challenge to try and track this usage. By collecting this data and processing it into a form that\u0026rsquo;s compatible with other services we can add to its value and better help join the dots in and around the community that we serve. Not everyone tweeting about articles counts as \u0026lsquo;scholarly Twitter\u0026rsquo;, and hopefully we can bridge some divides (the subject of my talk at PIDapalooza last year, 'Bridging Identifiers').\nHow do we do it? One of the central tenets of Event Data is transparency. We record as much information as we can about the data we ingest, how we process it, and what we find. Of course, you don\u0026rsquo;t have to use this data, it\u0026rsquo;s up to you how much depth you want to go into. But it\u0026rsquo;s there if you want it.\nThe resulting data set in Event Data is easy to use, but allows you to peek beneath the surface. We do this by linking every Event that our Agents collect through to an Evidence Record. This in turn links to Artifacts, which describe our working data set.\nOne such Artifact is the humbly named domain-decision-structure. This is a big tree that records DOI prefixes, domain names, and how they\u0026rsquo;re connected. It includes information such as \u0026ldquo;some DOIs with the prefix 10.31139 redirect to the domain polishorthopaedics.pl, and we can confirm that pages on that domain correctly represent their DOI\u0026rdquo;. We produce this list by visiting a sample of DOIs from every known prefix. We then ask the following questions:\nWhich webpage does this DOI redirect to, and what domain name does it have? Does the webpage include its correct DOI in the HTML metadata? From this we build the Artifact that records prefix → domain relationships, along with a flag to say whether or not the domain correctly represents its DOI in at least one case. You can put this data to a number of uses, but we use it to help inform our URL to DOI matching.\nWhat Agents do The Agents use the domain list to search for links. For example, the Reddit Agent uses it to query for new discussions about websites on each domain. They then pass this data to the Percolator, which is the machinery that produces Events.\nThe Percolator takes each input, whether it\u0026rsquo;s a blog post or a Tweet, and extracts links. If it finds a DOI link, that\u0026rsquo;s low hanging fruit. It then looks for links to URLs on one of the domains in the list. All of these are considered to be candidate landing page URLs. Once it has found a set of candidate links in the webpage it then has to find which ones correspond to DOIs, and validate that correspondence.\nFor each candidate URL it follows the link and retrieves the webpage. It looks in the HTML metadata, specifically in the \u0026lt;meta name='dc.identifier' content='10.5555/12345678' \u0026gt;, to see if the article indicates its DOI. It also looks in the webpage to see if it reports its DOI in the body text.\nNot so fast But can you trust the web page to indicate its own DOI? What about sites that say that they have a DOI belonging to another member? What about those pages that have invalid or incorrect DOIs? 
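To make the landing-page check described above a little more concrete, here is a minimal sketch of the kind of lookup involved: fetch a candidate URL and pull a DOI out of a dc.identifier meta tag if one is present. It uses only the Python standard library, the helper name and example URL are hypothetical, and the real Percolator logic does considerably more than this.

```python
# Illustration only: fetch a candidate landing page and look for a DOI in a
# dc.identifier meta tag. The production pipeline performs far more validation.
import re
import urllib.request
from typing import Optional

def doi_from_meta_tag(url: str) -> Optional[str]:
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", errors="replace")
    # Look for e.g. <meta name="dc.identifier" content="10.5555/12345678">
    # (a simplified pattern; real pages vary in attribute order and quoting)
    match = re.search(
        r"<meta[^>]+name=['\"]dc\.identifier['\"][^>]+content=['\"]([^'\"]+)['\"]",
        html, flags=re.IGNORECASE)
    if match and match.group(1).strip().startswith("10."):
        return match.group(1).strip()
    return None

# Hypothetical usage:
# print(doi_from_meta_tag("https://www.example.com/article/123"))
```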
These situations can, and do, occur.\nWe have the following methods at our disposal, in order of preference.\ndoi-literal - This is the most reliable, and it indicates that the URL we found in the webpage was a DOI not a landing page. We didn\u0026rsquo;t even have to visit the article page. pii - The input was a PII (Publisher Item Identifier). We used our own metadata to map this into a DOI. landing-page-url - We thought that the URL was the landing page for an article. Some webpages actually contain the DOI embedded in URL. So we don\u0026rsquo;t even have to visit the page. landing-page-meta-tag - We had to visit the article landing page. We found a meta tag, eg. dc.identifier, indicating the DOI. landing-page-page-text - We visited the webpage but there was no meta tag. We did find a DOI in the body text and we think this is the DOI for this page. This is the least reliable. On top of this, we have a number of steps of validation. Again, these are listed in order of preference.\nliteral - We found a DOI literal, so we didn\u0026rsquo;t have to do any extra work. This is the most reliable. lookup - We looked up the PII in our own metadata, and we trust that. checked-url-exact - We visited the landing page and found a DOI. We visited that DOI and confirmed that it does indeed lead back to this landing page. We are therefore confident that this is the correct DOI for the landing page URL. checked-url-basic - We visited the DOI and it led back to almost the same URL. The protocol (http vs https), query parameters or upper / lower case may be different. This can happen if tracking parameters are automatically added by the website meaning the URLs are no longer identical. We are still quite confident in the match. confirmed-domain-prefix - We were unable to check the link between the DOI and the landing page URL, so we had to fall back to previously observed data. On previous occasions we have seen that DOIs with the given prefix (e.g. \u0026ldquo;10.5555\u0026rdquo;) redirect to webpages with the same domain (e.g. \u0026ldquo;www.example.com\u0026rdquo;) and those websites correctly report their DOIs in meta tags. Only the domain and DOI prefix are considered. We therefore believe that the domain reliably reports its own DOIs correctly in at least some cases. recognised-domain-prefix - On previous occasions we have seen that DOIs with the given prefix (e.g. \u0026ldquo;10.5555\u0026rdquo;) redirect to webpages with the same domain (e.g. \u0026ldquo;www.example.com\u0026rdquo;). Those websites do not always correctly report their DOIs in meta tags. This is slightly less reliable. recognised-domain - On previous occasions we have seen that this domain is associated with DOIs in general. This is the least reliable. We record the method we used to find the DOI, and the way we verified it, right in the Event. Look in the obj.method and obj.verification fields.\nOf course, there\u0026rsquo;s a flowchart.\nYou can take a closer look in the User Guide.\nIf you think that\u0026rsquo;s a bit long-winded, well, you\u0026rsquo;re right. But it does enable us to capture DOI links without giving a false sense of security.\nSo, what happens? 
If you ask the Event Data Query API for the top ten domains that we matched to DOIs in the first 20 days of January 2019, it would tell you:\nDomain Number of Events captured doi.org 2058433 dx.doi.org 242707 www.nature.com 170808 adsabs.harvard.edu 163387 www.sciencedirect.com 96849 onlinelibrary.wiley.com 88760 link.springer.com 63869 www.tandfonline.com 41911 www.sciencemag.org 39489 academic.oup.com 39267 Here we see a healthy showing for actual DOIs (which you can explain by Wikipedia\u0026rsquo;s excellent use of DOIs) followed by some of the larger publishers. This demonstrates that we\u0026rsquo;re capturing a healthy number of Events from Wikipedia pages, tweets, blog posts etc that reference landing pages.\nAwkward questions This is not a perfect process. The whole point of PIDs is to unambiguously identify content. When users don\u0026rsquo;t use PIDs, there will inevitably be imperfections. But because we collect and make available all the processing along the way, hopefully we can go back to the old data, or allow any researchers to try and squeeze more information out of the data.\nQ: Why bother with all of this? Can\u0026rsquo;t you just use the URLs? We care about persistent identifiers. They are stable identifiers, which means they don\u0026rsquo;t change over time. The same DOI will always refer to the same content. In contrast, publishers\u0026rsquo; landing pages can and do change their URLs over time. If we didn\u0026rsquo;t use the DOIs then our data would suffer from link-rot.\nDOIs are also compatible across different services. You can use the DOI for an article to look it up in metadata and citation databases, and to make connections with other services.\nThis is not the only solution to the problem. Other services out there, such as Cobalt Metrics, do record the URLs and store an overlaid data set of identifier mappings. At Crossref we have a specific focus on our members and their content, and we all subscribe to the value of persistent identifiers for their content.\nOf course, we don\u0026rsquo;t throw anything away. The URLs are still included in the Events. Look in the obj.url field.\nQ: If DOIs are so amazing why keep URLs? Event Data is useful to a really wide range of users. Some will need DOIs to work with the data. But others, who may want to research the stuff under the hood, such as the behaviour of social media users, or the processes we employ, may want to know more detail. So we include it all.\nQ: Can\u0026rsquo;t you just decide for me? In a way, we do. If an Event is included in our data set, we are reasonably confident that it belongs there. All we are doing is providing you with more information.\nQ: Why only DOIs? We specialise in DOIs and believe they are the right solution for unambiguously and persistently identifying content. Furthermore the content registered with Crossref has been done so for the specific benefits that DOIs bring.\nQ: What about websites that require cookies and/or JavaScript to execute? Some sites don\u0026rsquo;t work unless you allow your browser to accept cookies. Some sites don\u0026rsquo;t render any content unless you allow their JavaScript to execute. Large crawlers, like Google, emulate web browsers when they scrape content, but it\u0026rsquo;s resource-intensive and not everyone has the resources of Google!\nThis is an issue we\u0026rsquo;ve known about for a while. My talk two years ago was about precisely this topic. We know it\u0026rsquo;s a hurdle we\u0026rsquo;ll have to overcome at some point. 
We do have plans to look into it, but we haven\u0026rsquo;t found a sufficiently cost-effective and reliable way to do it yet.\nAny sites that do do this will be inherently less reliable, so we recommend everyone to put their Dublin Core Identifiers in the HTML, render your HTML server-side (which is the default way of doing things) and don\u0026rsquo;t require cookies.\nQ: What\u0026rsquo;s the success rate? This is an interesting question. The results aren\u0026rsquo;t black and white. At the low end of the confidence spectrum we do have a cut-off point, at which we don\u0026rsquo;t generate an Event. But when we do create one we qualify it by describing the method we used to match and verify the connection. What level of confidence you want to trust is for you to decide. We just describe the steps we took to verify it.\nIt\u0026rsquo;s tricky quantifying false negatives. We have plenty of unmatched links, but not every unmatched link even could be matched to a DOI, for example there are some domains that have some DOI-registered content mixed with non-registered content.\nWe therefore err on the side of optimism, and let users choose what level of verification they require.\nSo talking of false positives or false negatives is a complicated question. We\u0026rsquo;ve not done any analytical work on this yet, but would welcome any input from the community.\nQ: Why isn\u0026rsquo;t the domain-decision-structure Artifact more detailed? We looked into various ways of constructing this, including more detailed statistics. At the end of the day our processes have to be understandable and easy to re-use. The process already takes a flow-chart to understand, and we felt that we got the balance right. Of course, as a user of this data, you are welcome to further refine and verify it.\n", "headings": ["Some background","Half full or half empty?","How do we do it?","What Agents do","Not so fast","So, what happens?","Awkward questions","Q: Why bother with all of this? Can\u0026rsquo;t you just use the URLs?","Q: If DOIs are so amazing why keep URLs?","Q: Can\u0026rsquo;t you just decide for me?","Q: Why only DOIs?","Q: What about websites that require cookies and/or JavaScript to execute?","Q: What\u0026rsquo;s the success rate?","Q: Why isn\u0026rsquo;t the domain-decision-structure Artifact more detailed?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/improved-processes-and-more-via-metadata-manager/", "title": "Improved processes, and more via Metadata Manager", "subtitle":"", "rank": 1, "lastmod": "2019-01-17", "lastmod_ts": 1547683200, "section": "Blog", "tags": [], "description": "Hi, Crossref blog-readers. I’m Shayn, from Crossref’s support team. I’ve been fielding member questions about how to effectively deposit metadata and register content (among other things) for the past three years. In this post, I’ll take you through some of the improvements that Metadata Manager provides to those who currently use the Web Deposit form.\n", "content": "Hi, Crossref blog-readers. I’m Shayn, from Crossref’s support team. I’ve been fielding member questions about how to effectively deposit metadata and register content (among other things) for the past three years. 
In this post, I’ll take you through some of the improvements that Metadata Manager provides to those who currently use the Web Deposit form.\nWe recently announced the launch of Metadata Manager, a new tool from Crossref that makes it easier for you to submit robust, accurate, and thorough metadata for the content you register. Metadata Manager already covers journals and articles; more record types will be supported soon. It offers some extra features that will make your experience less stressful, make your metadata better, and ultimately make your content more discoverable.\nMetadata Manager has the potential to improve your metadata registration experience in a number of ways:\nby correcting one-off errors in previously registered metadata by directly allowing you to add references, license data, funder information, or any other ancillary metadata to items that have previously been registered by updating Crossmark data, in the case of a retraction or withdrawal Login first, not last With the Web Deposit form, you finish entering your metadata for a new issue of your journal, and then get asked for your password, and of course that\u0026rsquo;s when you realize you\u0026rsquo;ve forgotten it (it happens a lot!). With Metadata Manager, the very first step is to log in, so you know your login credentials are accurate before you get down to the task of entering your metadata.\nEasily import journals, or add new ones When you switch to Metadata Manager, you can import the journals already associated with your account. Simply go to the search bar on the Home screen, search for your journal by title, then click ’Add’. If you are registering your first article for a journal that you’ve not registered before, you can add the journal information on the Home screen, by clicking “New Publication”.\nAdding a Journal DOI In the Web Deposit form, the Journal DOI is optional, as long as you include a valid ISSN. However, with Metadata Manager, a Journal DOI must be created for each journal you register. So, you need to enter a Journal DOI and a Journal URL for each of your journals before your deposits can be submitted. The Journal DOI won’t become active until you submit your first successful deposit for an article within that journal.\nIf you’ve never registered a Journal DOI before and are unsure what to use for your Journal DOI’s suffix, take a look at our suggested best practice for constructing DOI suffixes.\nAdding new articles Once your journal is added, the process of adding articles in Metadata Manager should be familiar, as it’s similar to the Web Deposit form process. You type in or paste as plain text (without formatting) all your relevant, accurate, and thorough metadata into the appropriate fields in the form.\nSave your work as you go In Metadata Manager there is no need to complete a full issue’s worth of articles at once. And, you don’t need to worry about losing your progress if you accidentally close your browser window, or your laptop runs out of battery while you’re in the middle of a deposit. You can simply and easily ‘save-as-you-go’, one article at a time, until you’re ready to submit them all. You can even review your saved metadata to make sure there aren’t any errors before the deposit is finalized.\nOther metadata fields you didn’t know you needed (but you do!) Have you ever wanted to add an abstract to your content’s metadata? How about license information, so that other organizations know what they can and can’t do with the work? 
Does your journal use article ID numbers instead of page numbers? These are all elements that can be added to Metadata Manager that were not available in the Web Deposit form. Additionally, you can add funding data, Similarity Check links, and relationships between your articles and other content. These types of metadata are hugely valuable for building a robust, interconnected web of scholarly communication.\nAdding references Unlike the Web Deposit form, Metadata Manager allows you to easily add references to your article’s metadata—this is an important requirement for participating in our Cited-by service.\nTo add references to an article’s metadata, you can copy and paste its reference list into the references field on the same screen as the rest of the article metadata (as per the image below).\nMetadata Manager will match DOIs to those references (where available), and include the full list in your record. So, if you’ve been putting off participating in Cited-by because the reference deposit requirement was too much of a hassle, we hope this will help ease the way! The more references everyone registers, the more robust our Cited-by counts and Cited-by data become.\nEdit mistakes without having to re-enter all your metadata Mistakes happen. Sometimes you put an author’s first name in the last name field. Sometimes you copy and paste some stray HTML tags into your abstract. You might break a link by leaving a space in the middle of a URL, or enter the first-page number as 3170 instead of 317.\nWith Metadata Manager you can fix any errors quickly and easily right in the interface, then just click to redeposit the article with its metadata corrected. You won’t need to re-enter all the metadata or worry about editing the XML files directly.\nWe’ll have another blog post coming soon that will be devoted entirely to updating, correcting, or otherwise editing metadata for already-registered DOIs in Metadata Manager.\nFind out immediately if your registration was successful When you have finished adding the metadata for your articles, navigate to the “To deposit” section and click ‘Deposit’ to submit them. Instead of having to wait for your content to go through our processing queue, you’ll get immediate feedback. The number of Accepted and Failed deposits show immediately. Any articles which have failed are clearly marked with a red triangle icon and an explanation for the error. If you don’t understand an error message or how to correct the metadata, please contact us at support@crossref.org.\nTo get started with Metadata Manager take a look at our full help documentation.\n", "headings": ["Login first, not last","Easily import journals, or add new ones","Adding a Journal DOI","Adding new articles","Save your work as you go","Other metadata fields you didn’t know you needed (but you do!)","Adding references","Edit mistakes without having to re-enter all your metadata","Find out immediately if your registration was successful"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/resolutions-2019-journal-title-transfers-metadata-manager/", "title": "Resolutions 2019: Journal Title Transfers = Metadata Manager", "subtitle":"", "rank": 1, "lastmod": "2019-01-03", "lastmod_ts": 1546473600, "section": "Blog", "tags": [], "description": " UPDATE, 12 December 2022\nDue to the scheduled sunsetting of Metadata Manager, this title transfer process has been deprecated. Please find detailed guidance for transferring titles on our documentation site here. 
When you thought about your resolutions for 2019, Crossref probably didn’t cross your mind—but, maybe it should have\u0026hellip;\n", "content": " UPDATE, 12 December 2022\nDue to the scheduled sunsetting of Metadata Manager, this title transfer process has been deprecated. Please find detailed guidance for transferring titles on our documentation site here. When you thought about your resolutions for 2019, Crossref probably didn’t cross your mind—but, maybe it should have\u0026hellip;\nBecause we know—with a high level of certainty—that Shayn, Paul and I will be spending the first few weeks of the year transferring the ownership of many journal titles. Last year we processed almost 60 journal transfer requests during this time, and we’re heading toward a similar number for 2019. There’s no objection; it’s a just a fact. We’re happy to do it, but there is another way.\nUnlike previous years, we now have a tool that gives you the control to transfer titles without any intervention from the Crossref support team—Metadata Manager. With just a few clicks, you, as the disposing publisher, can transfer your journal to the acquiring publisher yourself. Here’s how:\nTransferring your journal in five easy steps using Metadata Manager: Log into Metadata Manager using your username and password (the same one you use for the Crossref Web Deposit form). Find the journal you’re transferring on your Metadata Manager workspace using the “search publications” box and click to load the journal’s container (or, dashboard). Within the journal container, select Transfer Title from the Action drop-down. On the transfer title screen select the acquiring (destination) publisher’s name and DOI prefix of where ownership will be transferred to. Click Transfer. (In addition to transferring ownership of the title itself, all existing journal article DOIs previously registered will also be transferred to the new owner using this mechanism. They will persist on their original prefix, but the acquiring publisher will be able to update the metadata associated with these DOIs).\nConfirm the title transfer. It may take up to 24 hours for the transfer to be reflected within Metadata Manager, and we’ll send a courtesy email to the acquiring (destination) publisher’s technical contact when the transfer has been completed. As always, if you have questions, need guidance as you’re working through this process, or have recommendations on how we can improve title transfers—or anything else within Metadata Manager (the tool is in beta)–please let us know at support@crossref.org. 
There’s also comprehensive support documentation available for Metadata Manager to help and guide you.\n", "headings": ["Transferring your journal in five easy steps using Metadata Manager:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2018/", "title": "2018", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/live18-roaring-attendees-incomplete-zebras-and-missing-tablecloths/", "title": "LIVE18: Roaring attendees, incomplete zebras, and missing tablecloths", "subtitle":"", "rank": 1, "lastmod": "2018-12-27", "lastmod_ts": 1545868800, "section": "Blog", "tags": [], "description": "Running a smooth event is always the goal, but not always the case! No matter how well managed an event is, there is always a chance that things will not go according to plan. And so it was with LIVE18.\nFor the first day we were without the tablecloths we had ordered, which actually gave the room quite a nice, but unintentional, ‘rustic’ look. When they finally did arrive the following day, we realized we preferred the rustic look!", "content": "Running a smooth event is always the goal, but not always the case! No matter how well managed an event is, there is always a chance that things will not go according to plan. And so it was with LIVE18.\nFor the first day we were without the tablecloths we had ordered, which actually gave the room quite a nice, but unintentional, ‘rustic’ look. When they finally did arrive the following day, we realized we preferred the rustic look! Some of the merchandise we had prepared ended up sitting in Canadian Customs for a day and a half, which meant they arrived to us halfway through the first day of the event. Luckily attendees were distracted by the very cool ‘I heart metadata’ bags and didn’t seem to notice.\nUnfortunately a significant number of registrants also had problems with Canadian regulations: they were denied visas to enter the country. Despite always trying to choose countries with international airport hubs and a welcoming policy, this was an unforseen blow.\nBut from setting up to take down, LIVE18 was truly a team effort. Even though many Crossref staff had traveled far and wide to get there, they all rallied to help the night before—hauling boxes through the streets of Toronto, stuffing attendee bags, hanging signage, and moving furniture around until 11:30 pm.\nBecause of these efforts\u0026mdash;and despite the glitches\u0026mdash;Crossref LIVE18 was a great success.\nHow good is your metadata? That was the framing question at Crossref LIVE 18 in Toronto which this year focused on all things metadata. Over the course of the two-day event, we heard from guest speakers on the importance of collaboration, the significance of metadata to metrics, and what good metadata looks like. In our usual lively way, Crossref staff introduced a variety of new services, initiatives, and collaborations.\nCrossref LIVE is helping surface key issues in the cleanup of metadata mismatch, after decades of the industry working in silos. I applaud Crossref for doing this. 
It’s great that we’re considering how to change the way we work and collaborate as an industry to make sure that we don’t run into metadata issues in this way again.\n- Keynote speaker, Kristen Ratan, Co-Founder of the Collaborative Knowledge Foundation (Coko)\nIn her keynote speech, ‘Publishing Infrastructure: The Good, The Bad, and The Expensive’, Coko’s Kristen Ratan challenged the industry to rethink its slow, inefficient, and expensive resignation to infrastructure; and instead consider how a collaborative approach to sharing expertise in developing community-owned infrastructure could be faster, more flexible, and less costly.\nView Kristen’s talk, The Good, The Bad, and The Expensive The collaborations Collaboration was a running theme at LIVE18. Geoffrey Bilder provided an overview of Crossref’s selective collaborations; DataCite’s Patricia Cruse introduced ROR, the community project to develop an open, sustainable, usable and unique identifier for every research organization in the world—and she got the crowd really engaged at the beginning of her talk by encouraging us all to ROAR out loud!; Clare Dean and Ravit David sketched out the evolution of Metadata 2020, and Shelley Stall from the AGU introduced the ways they are urging the scientific community to adopt FAIR data principles (using her first data collection as an 11-year-old as an example!)\nView Geoffrey’s talk, How Crossref (selectively) collaborates with others View Patricia’s talk, ROR: The Research Oragnization Registry (Roar!) 🦁 View Clare and Ravit’s talk, Metadata 2020: This talk is sooo meta View Shelley’s talk, My first data collection: Was it FAIR? The solutions Patricia Feeney, in the newly-created role of Head of Metadata, used a zebra to illustrate that not all of a publisher’s metadata is deposited with Crossref. View Patricia’s talk, I am the boss of your Metadata (this one has the zebras) and also her talk on New resource/record types in the works at Crossref. New tools Jennifer Lin introduced Event Data, the new API that Crossref and DataCite have built together, enabling organizations to capture what happens to a DOI, including all of the places it is mentioned and links from/to. She also talked about Participation Reports, the new open dashboard to help members evaluate the completeness of their own metadata deposited with Crossref.\nView Jennifer’s talks on Event Data, and Simplifying our services The community We also heard from the community. Paul Dlug from the American Physical Society boldly gave his view on ‘Why Crossref sucks’, and, with a view to helping Crossref improve in key areas, surfaced issues that members struggle with. Ed Pentz, Executive Director, provided an overview of the direction that Crossref is headed towards. Ginny Hendricks, Director of Member \u0026amp; Community Outreach, updated everyone on the expanding Crossref community and all the outreach activities her team conducts to engage them. Isaac Farley, new Technical Support Manager in the community team, told of his vision for moving to a more public, open, support model. Lisa Hart, Director of Finance \u0026amp; Operations announcing the results of our members votes in this year\u0026rsquo;s board election.\nView Paul’s talk, Crossref sucks and how to cope! 
View Ed’s talk, Our strategic direction View Ginny\u0026rsquo;s talk, Expanding our constituencies View Isaac\u0026rsquo;s talk, Open Support: From 1:1 to everyone The perspectives Guest speakers provided a range of fascinating perspectives from across scholarly communications. Graham Nott, who works with eLife, outlined how they were making their JATS to Crossref schema conversion tool openly available to the community for use. Jodi Schneider, Assistant Professor of Information Sciences at the University of Illinois at Urbana-Champaign, gave us an in-depth look at problem citations, with a focus on retractions. Bianca Kramer from Utrecht University discussed Crossref metadata use in an open scholarly ecosystem. Stephanie Haustein from the University of Ottawa gave a researcher perspective on the problems with traditional journal metrics, and how they are dependent on metadata, which is essentially flawed. She outlined her efforts to increase metrics literacy, putting metrics in context with comprehensive metadata. Geoffrey Bilder talked about Dominika\u0026rsquo;s work to evaluate our reference matching, and finally closed the show discussing the role of metadata in creating a provenance infrastructure, providing trustworthiness which is essential to progress the scholarly research cycle.\nView Graham’s talk, JATS at eLife View Jodi’s talk, Trouble at The Academy: Problem Citations View Bianca’s talk, DOIs for whom? Crossref metadata in an open scholarly ecosystem View Stephanie’s talk, Good metadata + metrics literacy = better academia View Geoffrey’s talks on Reference matching, and Metadata as a signal of trust. As LIVE18 came to a close we took the opportunity to acknowledge and thank everyone once again for helping us reach the milestone of 100 million registered content items this September. Everyone took to the stage and waved their Crossref Bigger Ambitions flags.\nThank you to everyone who participated in the event. Please save the dates for LIVE19 in Europe on 13-14 November, 2019! ", "headings": ["How good is your metadata?","The collaborations","The solutions","New tools","The community","The perspectives","Thank you to everyone who participated in the event. Please save the dates for LIVE19 in Europe on 13-14 November, 2019!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/reference-matching-for-real-this-time/", "title": "Reference matching: for real this time", "subtitle":"", "rank": 1, "lastmod": "2018-12-18", "lastmod_ts": 1545091200, "section": "Blog", "tags": [], "description": "In my previous blog post, Matchmaker, matchmaker, make me a match, I compared four approaches for reference matching. The comparison was done using a dataset composed of automatically-generated reference strings. Now it\u0026rsquo;s time for the matching algorithms to face the real enemy: the unstructured reference strings deposited with Crossref by some members. Are the matching algorithms ready for this challenge? Which algorithm will prove worthy of becoming the guardian of the mighty citation network? Buckle up and enjoy our second matching battle!\n", "content": "In my previous blog post, Matchmaker, matchmaker, make me a match, I compared four approaches for reference matching. The comparison was done using a dataset composed of automatically-generated reference strings. Now it\u0026rsquo;s time for the matching algorithms to face the real enemy: the unstructured reference strings deposited with Crossref by some members. Are the matching algorithms ready for this challenge? 
Which algorithm will prove worthy of becoming the guardian of the mighty citation network? Buckle up and enjoy our second matching battle!\nTL;DR I evaluated and compared four reference matching approaches: the legacy approach based on reference parsing, and three variants of search-based matching. The dataset comprises 2,000 unstructured reference strings from the Crossref metadata. The metrics are precision and recall calculated over the citation links. I also use F1 as a standard single-number metric that combines precision and recall, weighing them equally. The best variant of search-based matching outperforms the legacy approach in F1 (96.3% vs. 92.5%), with the precision worse by only 0.9% (98.09% vs. 98.95%), and the recall better by 8.9% (94.56% vs. 86.85%). Common causes of SBMV\u0026rsquo;s errors are: incomplete/erroneous metadata of the target documents, and noise in the reference strings. The results reported here generalize to the subset of references in Crossref that are deposited without the target DOI and are present in the form of unstructured strings. Introduction In reference matching, we try to find the DOI of the document referenced by a given input reference. The input reference can have a structured form (a collection of metadata fields) and/or an unstructured form (a string formatted in a certain citation style).\nIn my previous blog post, I used reference strings generated automatically to compare four matching algorithms: Crossref\u0026rsquo;s legacy approach based on reference parsing and three variations of search-based matching. The best algorithm turned out to be Search-Based Matching with Validation (SBMV). SBMV uses our REST API\u0026rsquo;s bibliographic search function to select the candidate target documents, and a separate validation-scoring procedure to choose the final target document. The legacy approach and SBMV achieved very similar average precision, and SBMV was much better in average recall.\nThis comparison had important limitations, which affect the interpretation of these results.\nFirst of all, the reference strings in the dataset might be too perfect. Since they were generated automatically from the Crossref metadata records, any piece of information present in the string, such as the title or the name of the author, will exactly match the information in Crossref\u0026rsquo;s metadata. In such a case, a matcher comparing the string against the record can simply apply exact matching and everything should be fine.\nIn real life, however, we should expect all sorts of errors and noise in the reference strings. For example, a string might have been manually typed by a human, so it can have typos. The string might have been scraped from the PDF file, in which case it could have unusual unicode characters, ligatures or missing and extra spaces. A string can also have typical OCR errors, if it was extracted from a scan.\nThese problems are typical for messy real-life data, and our matching algorithms should be robust enough to handle them. However, when we evaluate and compare approaches using the perfect reference strings, the results won\u0026rsquo;t tell us how well the algorithms handle harder, noisy cases. 
After all, even if you repeatedly win chess games against your father, it doesn\u0026rsquo;t mean you will likely defeat Garry Kasparov (unless, of course, you are Garry Kasparov\u0026rsquo;s child, in which case, please pass on our regards to your dad!).\nEven though I attempted to make the data more similar to the noisy real-life data by simulating some of the possible errors (typos, missing/extra spaces) in two styles, this might not be enough. We simply don\u0026rsquo;t know the typical distribution of the errors, or even what all the possible errors are, so our data was probably still far from the real, noisy reference strings.\nThe differences in the distributions are a second major issue with the previous experiment. To build the dataset, I used a random sample from Crossref metadata, so the distribution of the cited item types (journal paper, conference proceeding, book chapter, etc.) reflects the overall distribution in our collection. However, the distribution in real life might be different if, for example, journal papers are on average cited more often than conference proceedings.\nSimilarly, the distribution of the citation styles is most likely different. To generate the reference strings, I used 11 styles distributed uniformly, while the real distribution most likely contains more styles and is skewed.\nAll these issues can be summarized as: the data used in my previous experiment is different from the data our matching algorithms have to deal with in the production system. Why is this important? Because in such a case, the evaluation results do not reflect the real performance in our system, just like the child\u0026rsquo;s score on the math exam says nothing about their score on the history test. We can hope my previous results accurately showed the strengths and weaknesses of each algorithm, but the estimations could be far off.\nSo, can we do better? Sure!\nThis time, instead of automatically-generated reference strings, I will use real reference strings found in the Crossref metadata. This will give us a much better picture of the matching algorithms and their real-life performance.\nEvaluation This time the evaluation dataset is composed of 2,000 unstructured reference strings from the Crossref metadata, along with the target true DOIs. The dataset was prepared mostly manually:\nFirst, I drew a random sample of 100,000 metadata records from the system. Second, I iterated over all sampled items, and extracted those unstructured reference strings, that do not have the DOI provided by the member. Next, I randomly sampled 2,000 reference strings. Finally, I assigned a target DOI (or null) to each reference string. This was done by verifying DOIs returned by the algorithms and/or manual searching. The metrics this time are based on the citation links. A citation link points from the reference (or the document containing the reference) to the referenced (target) document.\nWhen we apply a matching algorithm to a set of reference strings in our collection, we get a set of citation links between our documents. I will call those citation links returned links.\nOn the other hand, in our collection we have real, true links between the documents. In the best-case scenario, the set of true links and the set of returned links are identical. But we don\u0026rsquo;t live in a perfect world and our matching algorithms make mistakes.\nTo measure how close the returned links are to the true links, I used precision, recall and F1. 
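In code terms, these metrics are simple set arithmetic over citation links; the prose definitions follow just below. The toy sketch here represents links as (citing item, target DOI) pairs for illustration only, and is not the evaluation code from the reference-matching repository.

```python
# Toy illustration of precision, recall and F1 computed over citation links,
# where each link is a (citing item, target DOI) pair. Sketch only.
def precision_recall_f1(returned_links, true_links):
    returned, true = set(returned_links), set(true_links)
    correct = returned & true
    precision = len(correct) / len(returned) if returned else 0.0
    recall = len(correct) / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical example: two of three returned links are correct,
# and two true links are missed entirely.
returned = [("ref-1", "10.5555/a"), ("ref-2", "10.5555/b"), ("ref-3", "10.5555/x")]
true = [("ref-1", "10.5555/a"), ("ref-2", "10.5555/b"),
        ("ref-3", "10.5555/c"), ("ref-4", "10.5555/d")]
print(precision_recall_f1(returned, true))  # (0.666..., 0.5, 0.571...)
```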
This time they are calculated over all citation links in the dataset. More specifically:\nPrecision is the fraction of the returned links that are correct. Precision answers the question: if I see a citation link A-\u0026gt;B in the output of a matcher, how certain can I be that paper A actually cites paper B? Recall is the percentage of true links that were returned by the algorithm. Recall answers the question: if paper A cites paper B and B is in the collection, how certain can I be that the matcher\u0026rsquo;s output contains the citation link A-\u0026gt;B? F1 is the harmonic mean of precision and recall. In the previous experiment, I also used precision, recall and F1, but they were calculated for each target document and then averaged. This time precision, recall and F1 are not averaged but simply calculated over all citation links. This is a more natural approach, since now the dataset comprises isolated reference strings rather than target documents, and in practice each target document has at most one incoming reference.\nI tested the same four approaches as before:\nthe legacy approach, based on reference parsing SBM with a simple threshold, which searches for the reference string in the search engine and returns the first hit, if its relevance score exceeds the predefined threshold SBM with a normalized threshold, which searches for the reference string in the search engine and returns the first hit, if its relevance score divided by the string length exceeds the predefined threshold SBMV, which first applies SBM with a normalized threshold to select a number of candidate items, and a separate validation procedure is used to select the final target item All the thresholds are parameters which have to be set prior to the matching. The thresholds used in the experiments were chosen using a separate dataset, as the values maximizing the F1 of each algorithm.\nResults The plot shows the overall results of all tested approaches:\nThe exact values are also given in the table (the best result for each metric is bolded):\nprecision recall F1 legacy approach 0.9895 0.8685 0.9251 SBM (simple threshold) 0.8686 0.8191 0.8431 SBM (normalized threshold) 0.7712 0.9121 0.8358 SBMV 0.9809 0.9456 0.9629 As we can see, the legacy approach is the best in precision, slightly outperforming SBMV. In recall, SBMV is clearly the best, which also decided about its victory over the legacy approach in F1.\nHow do these results compare to the results from my previous blog post? The overall trends (the legacy approach slightly outperforms SBMV in precision, and SBMV outperforms the legacy approach in recall and F1) are the same. The most important differences are: 1) on the real dataset SBM without validation is worse than the legacy approach, and 2) this time the algorithms achieved much higher recall. These differences are most likely related to the difference in data distributions explained before.\nSBMV\u0026rsquo;s strengths and weaknesses Let\u0026rsquo;s look at a few example cases where SBMV successfully returned the correct DOI, while the legacy approach failed.\nLundqvist D, Flykt A, Ohman A: The Karolinska Directed Emotional Faces - KDEF, CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet. 1998 matched to https://0-doi-org.libus.csd.mu.edu/10.1037/t27732-000\nThe target item is a dataset, which means unusual metadata fields and an unusual reference string.\nSchminck, A. , ‘The Beginnings and Origins of the “Macedonian” Dynasty’ in J. Burke and R. 
Scott , eds., Byzantine Macedonia: Identity, Image and History (Melbourne, 2000), 61–8. matched to https://0-doi-org.libus.csd.mu.edu/10.1163/9789004344730_006\nThis is an example of a book chapter. The reference string contains special quotes and dash characters.\nR. Schneider,On the Aleksandrov-Fenchel inequality, inDiscrete Geometry and Convexity (J. E. Goodman, E. Lutwak, J. Malkevitch and R. Pollack, eds.), Annals of the New York Academy of Sciences440 (1985), 132–141. matched to https://0-doi-org.libus.csd.mu.edu/10.1111/j.1749-6632.1985.tb14547.x\nIn this case, spaces are missing in the reference string, which might be problematic for the parsing.\nR. B. Husar andE. M. Sparrow, Int. J. Heat Mass Transfer11, 1206 (1968). matched to https://0-doi-org.libus.csd.mu.edu/10.1016/0017-9310(68)90036-7\nThis is another example of a reference string with missing spaces.\nF. Cappello, A. Geist, W. Gropp, S. Kale, B. Kramer, and M. Snir. Toward exascale resilience: 2014 update. Supercomputing frontiers and innovations, 1(1), 2014. matched to https://0-doi-org.libus.csd.mu.edu/10.14529/jsfi140101\nIn this case authors are missing in the Crossref metadata.\nLi KZ, Shen XT, Li HJ, Zhang SY, Feng T, Zhang LL. Ablation of the Carbon/carbon Composite Nozzle-throats in a Small Solid Rocket Motor[J]. Carbon, 2011, 49: 1 208–1 215 matched to https://0-doi-org.libus.csd.mu.edu/10.1016/j.carbon.2010.11.037\nHere we have unexpected spaces inside page numbers.\nN. Kaloper, A. Lawrence and L. Sorbo, An Ignoble Approach to Large Field Inflation, JCAP 03 (2011) 023 [ arXiv:1101.0026 ] [ INSPIRE ]. matched to https://0-doi-org.libus.csd.mu.edu/10.1088/1475-7516/2011/03/023\nIn this case we have an acronym of the journal name and additional arXiv id.\nKrönerE. ?Stress space and strain space continuum mechanics?, Phys. Stat. Sol. (b), 144 (1987) 39?44. matched to https://0-doi-org.libus.csd.mu.edu/10.1002/pssb.2221440104\nThis reference string has a missing space, a missing word in the title, and incorrectly encoded special characters.\nSuyemoto K. L., (1998) The functions of self-mutilationClinical Psychology Review 18(5): 531–554 matched to https://0-doi-org.libus.csd.mu.edu/10.1016/s0272-7358(97)00105-0\nIn this case the space is missing between the title and the journal name.\nOno , N. 2011 Stable and fast update rules for independent vector analysis based on auxiliary function technique Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics 189 192 matched to https://0-doi-org.libus.csd.mu.edu/10.1109/aspaa.2011.6082320\nThe parsing can also have problems with missing punctuation, like in this case.\nHybertsen M.S., Witzigmann B., Alam M.A., Smith R.K. (2002) 1 113 matched to https://0-doi-org.libus.csd.mu.edu/10.1023/a:1020732215449\nIn this case both title and journal name are missing from the reference string.\nWe can see from these examples that SBMV is fairly robust and able to deal with a small amount of noise in the metadata and reference strings.\nWhat about the errors SBMV made? From the perspective of citation links, we have two types of errors:\nFalse positives: incorrect links returned by the algorithm. False negatives: links that should have been returned but weren\u0026rsquo;t. When we apply SBMV instead of the legacy approach, the fraction of false positives within the returned links increases from 1.05% to 1.91%, and the fraction of false negatives within the true links decreases from 13.15% to 5.44%. 
This means with SBMV:\n1.91% of the links in the algorithm\u0026rsquo;s output are incorrect 5.44% of the true links are not returned by the algorithm We can also classify all the references in the dataset into several categories, based on the values of true and returned DOIs:\nWe have the following categories:\nReferences matched to correct DOIs (1129 cases, returned and true blue) References correctly not matched to anything (791 cases, returned and true white) References not matched to anything, when they should be (58 cases, returned white, true grey) References matched to wrong DOIs (7 cases, returned red, true yellow) References matched to something, when they shouldn\u0026rsquo;t be matched to anything (15 cases, returned black, true white) Note that in terms of these categories, precision is equal to:\nAnd recall is equal to:\nWhat are the most common causes of SBMV\u0026rsquo;s errors?\nIncomplete or incorrect Crossref metadata. Even a perfect reference string formatted in the most popular citation style will not be matched, if the target record in the Crossref collection has many missing or incorrect fields. Similarly, missing or incorrect information in the reference string is very problematic for the matchers. Errors/noise in the reference string, such as: HTML/XML markup not stripped from the string multiple references mixed in one string spacing issues and typos In a few cases a document related to the real target was matched, such as the book instead of its chapter, or the conference proceedings paper instead of the thesis. Limitations The most important limitation is the size of the dataset. Every item had to be verified manually, which significantly limited the possibility of creating a large set and also using a lot of independent sets.\nFinally, the numbers reported here still don\u0026rsquo;t reflect the overall precision and recall of the current links in the Crossref metadata. This is because:\nwe still use the legacy approach for matching, some references are deposited along with the target DOIs and are not matched by Crossref, these links are not analyzed here, and in Crossref we have both unstructured and structured references, and in this experiment only the unstructured ones were tested. What\u0026rsquo;s next? The next experiment will be related to the structured references. Similarly as here, I will try to estimate the performance of the search-based matching approach and compare it to the performance of the legacy approach.\nThe evaluation framework, evaluation data and experiments related to the reference matching are available in the repository https://github.com/CrossRef/reference-matching-evaluation. Future experiments will be added there as well.\nhttps://github.com/CrossRef/reference-matching-evaluation also contains the Python implementation of the SBMV algorithm. The Java implementation of SBMV is available in the repository https://gitlab.com/crossref/search_based_reference_matcher.\n", "headings": ["TL;DR","Introduction","Evaluation","Results","SBMV\u0026rsquo;s strengths and weaknesses","Limitations","What\u0026rsquo;s next?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/phew-its-been-quite-a-year/", "title": "Phew - its been quite a year", "subtitle":"", "rank": 1, "lastmod": "2018-12-13", "lastmod_ts": 1544659200, "section": "Blog", "tags": [], "description": "As the end of the year approaches it’s useful to look back and reflect on what we’ve achieved over the last 12 months—a lot! 
To be honest, there were some things we didn’t get done—or didn’t make as much progress with as we hoped—but that happens when you have an ambitious agenda. However, we also got some things done that we didn’t expect to or that weren’t even on our radar at the end of 2017—this is inevitable as the research and scholarly communications landscape is rapidly changing.\n", "content": "As the end of the year approaches it’s useful to look back and reflect on what we’ve achieved over the last 12 months—a lot! To be honest, there were some things we didn’t get done—or didn’t make as much progress with as we hoped—but that happens when you have an ambitious agenda. However, we also got some things done that we didn’t expect to or that weren’t even on our radar at the end of 2017—this is inevitable as the research and scholarly communications landscape is rapidly changing.\nIn my blog post from the beginning of the year, the key projects I highlighted were Metadata Plus, Event Data, Organization IDs, Grant IDs and Metadata 2020, and that richer metadata and more record types were key goals. We did make very good progress on all of these projects as reported below.\nFor 2018 we were operating in the framework of the four strategic themes, or areas of focus, developed by the board and staff. These are: 1) Simplifying and enriching our services; 2) Improving our metadata; 3) Expanding constituencies, and 4) Selectively collaborating and partnering. These themes will also be guiding us in 2019.\nSimplifying and enriching our services Upgrading our tools Over the past year, we’ve been busy streamlining our processes, developing new tools and adding new services. A key new tool is Metadata Manager which supports the Content Registration service by offering a simpler, more user-friendly, non-technical way to register and update metadata. It provides lots of context-sensitive help, registers content immediately, in real time, and provides guidance on how to make corrections—thereby ensuring each deposit is successful. Metadata Manager currently supports journal deposits (we would have liked to add more in 2018) but we will be adding other record types in 2019.\nUpgrading our services Crossref metadata has always been open through a number of interfaces without restriction, but this year we introduced an option for extra support and functionality, through Metadata Plus. Metadata Plus provides guaranteed uptime, snapshots of the complete set of metadata and enhanced support for organizations (members or not) that want to use Crossref metadata in their own services and systems.\nImproving the member experience: New membership terms This year we began to redesign the member experience and have made a lot of improvements to the sign-up and onboarding process, the most significant of which is the new click-through membership terms, introduced in July for new members and coming into effect for existing members in March 2019, which is proving to be a huge time saver for both our members and our team.\nImproving our metadata Our objective this year was to better communicate what metadata best practice is, to equip our members with all the data and tools they need to meet this best practice, and to achieve closer cooperation from service providers.\nBest practice tools: Participation Reports Released in Beta in August this year, Participation Reports provides a dashboard that gives a clear picture of the metadata that each member provides. 
This is a useful visualization of metadata that has long been available via our public REST API. Members can see where the gaps in the metadata are and get information on how to fill those gaps.\nCommunicating metadata best practice: Data Citations The importance of linking data with literature can’t be overstated. Research integrity and reproducibility depend on it. We\u0026rsquo;re committed to exposing the links between the literature and the data or software that supports it, and earlier this year we partnered with DataCite to make this a reality. All the data citations coming in from Crossref and DataCite are being pulled into Event Data.\nEquipping members with all the data: Event Data Event Data reached technical readiness. Event Data captures and records “events” such as comments, links, shares, bookmarks, and references. It provides open, transparent, and traceable information about the provenance and context of every event.\nExpand constituencies Crossref currently has 15,000 members in 140 countries. With that comes the need to increasingly and proactively work with emerging markets as they start to share research outputs globally.\nAmbassador program The Crossref Ambassador program launched in January and now has a team of 16 trusted contacts who work within our communities (as librarians, researchers, publishers, and innovators) around the world. They share great enthusiasm and belief in our work. We provide them with training and support, and they help us improve education about global research infrastructure in general and the opportunities that are enabled through richer metadata.\nFunders and grant identifiers I’m very happy to report that the Crossref board approved grants as a new record/resource type to be rolled out in 2019 - we made faster progress on this than expected. The proposal for grant identifiers was developed by staff in collaboration with the Crossref Funder Advisory Group and the Membership and Fees Committee. This means that funders will be joining Crossref and registering a standard set of metadata and a persistent identifier - a DOI - for their grants.\nCollaborate and partner So that our alliances with others have the greatest impact, we have aligned our strategic plans for scholarly infrastructure with others. Some of these alliances are led or driven by Crossref and with others we are involved but not leading.\nROR We are working with the California Digital Library, DataCite and Digital Science as the Steering group for ROR - the Research Organization Registry - which is a new, community-led project that is developing an open, sustainable, usable, and unique identifier for research organizations based on the work done by the Organization Identifier Working Group in 2017 and 2018.\nMetadata 2020 Metadata 2020 is a collaboration that advocates richer, connected, and reusable, open metadata for all research outputs, which will advance scholarly pursuits for the benefit of society. Over 140 volunteers—including publishers, librarians, researchers, platforms/tools, and other stakeholders—from 86 organizations, are working in six project groups. The projects are very strategically focused, looking at key issues like researcher communications, incentives, and shared best practices.\nI can’t close off the year without mentioning the incredible milestone we reached this September when the 100 millionth content item was registered with Crossref. 
This was made possible by our members’ and the wider community’s commitment and contribution, so once again, thank you.\nRoll on 2019!\n", "headings": ["Simplifying and enriching our services","Upgrading our tools","Upgrading our services","Improving the member experience: New membership terms","Improving our metadata","Best practice tools: Participation Reports","Communicating metadata best practice: Data Citations","Equipping members with all the data: Event Data","Expand constituencies","Ambassador program","Funders and grant identifiers","Collaborate and partner","ROR","Metadata 2020"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/newly-approved-membership-terms-will-replace-existing-agreement/", "title": "Newly approved membership terms will replace existing agreement", "subtitle":"", "rank": 1, "lastmod": "2018-12-05", "lastmod_ts": 1543968000, "section": "Blog", "tags": [], "description": "In its July 2018 meeting, the Crossref Board voted unanimously to approve and introduce a new set of membership terms. At the same meeting, the board also voted to change the description of membership eligibility in our Bylaws, officially broadening our remit beyond publishers, in line with current practice and positioning us for future growth.\n", "content": "In its July 2018 meeting, the Crossref Board voted unanimously to approve and introduce a new set of membership terms. At the same meeting, the board also voted to change the description of membership eligibility in our Bylaws, officially broadening our remit beyond publishers, in line with current practice and positioning us for future growth.\nTl;dr It’s a very good thing to have clearer terms; we want everyone to understand what Crossref is about and what you’re getting into. It’s a material change so we will be notifying members by direct email in December. Nobody needs to sign anything as the new terms are not signed, but are click-through acceptances on application, and that process is already in effect for new applicants. The new terms come into effect on 1st March 2019 for existing members and no action is needed.\nIf you\u0026rsquo;re a sponsored member you\u0026rsquo;ll have a slightly adapted message soon as we work with your sponsor. If you\u0026rsquo;re an NGO or US State Actor you will receive a slightly adapted message. This post is for background explanation and information. We will email existing members directly, but no acceptance or signature\u0026mdash;nor any action\u0026mdash;will be needed. Why are we updating the terms? Being almost 20 years old the old agreement is out-of-date with current practice and technology, and has become quite long and confusing, especially for applicants for whom English is not their first language. Specific reasons include:\n1. To improve efficiency Over the years we’ve had feedback that our application process is too long and involved. The membership agreement used to be signed manually by each new Crossref member, often days after they applied. We also now process around 180 new members each month which is too many for a wholly manual process managed by just one person.\n2. To clarify the wording People would tell us that the agreement is too long and confusing, especially when English is not their first language. There are often questions about the “legalese” style of language that takes up too much time in back-and-forth discussions to ensure everyone has understood. 
Also, the main structure of the agreement has been in place for over a decade and needs updating to avoid confusion and to align with up-to-date language, services, technologies, and current practices.\n3. To emphasize the community aspect and our members’ obligations It is quite a commitment to participate fully in Crossref, and we want people to understand up-front what their obligations are as part of the collective membership. And also to realize what value they are receiving as well as contributing to other members. We needed clearer terms so that every organization can understand what they are getting into.\nAdditionally, moving from signing contracts to click-through acceptance of standard terms emphasizes that Crossref is not a service provider or vendor. We are a not-for-profit community organization. We don’t have the resources to negotiate and keep track of individual custom agreements.\nWhat’s changing, step-by-step We consulted with former and current legal counsel, the Membership \u0026amp; Fees Committee, and also with the M\u0026amp;F organizations individually. We have also absorbed a lot of feedback from many other members of all kinds and sizes.\nFor new members The manually-signed membership agreement has already\u0026mdash;for new members\u0026mdash; been turned into a set of click-through terms that organizations agree to as part of the initial application process. It is no longer a separate document that needs to be signed or countersigned. This will simplify the application process for both new applicants and our staff.\nFor existing members The new membership terms will come into effect for existing members on March 1st, 2019. Because this is a material change to the terms, we will be emailing members with more information but it’s important to note that no action is necessary from existing members. The new terms will replace the old terms automatically.\nThe table below sets out clause-by-clause the precise changes. Here is the 2018 membership agreement and the new terms in full.\nThe nitty-gritty details Topic New section Old section Summary of change(s) Overall Eliminates legalese in favor of plain English. Updates defined terms to current usage. Shifts from execution by signature to acceptance by affirmative action. Introduction Background 1 Updates description of Crossref’s activities to be current. Provides for a new applicant’s acceptance of Terms upon acceptance of application by Crossref and payment of first annual fee. Members’ rights 1 2(a) Streamlines wording; eliminates reference to right to recommend working committee members. Members’ obligations 2 2(b) Significant revision. Old 2(b) mentioned only payment of fees and appointment of a contact person. New Sec. 2 aims to capture all of a Member’s operational obligations in one place. Metadata deposits 2(a), (b) 3(a)(i) Updates language regarding metadata deposits to current terminology and practice. Rights to content 2(c) 15 Streamlines wording. Registering identifiers 2(d) 3(a)ii) Streamlines the language around registering identifiers. Linking 2(e) 3(a)(iii) States, in clearer language, the obligation to embed identifiers. Reference linking 2(f) 3(a)(iv) Eliminates outdated provision on Cross-Linking; replaces with a best efforts covenant to engage in Reference Linking. Display identifiers 2(g) N/A Adds an obligation to comply with Crossref’s display guidelines and ensure each identifier is hyperlinked to be citable. Maintaining and updating metadata 2(h) 3(b) Streamlines language. 
Adds obligation to maintain the URL and the accuracy of identifier data. Adds common examples of failure to maintain and update metadata. Archiving 2(i) 3(d) Adds link to examples of third-party archive providers. Adds option for Crossref to point to a “defunct DOI” page. Inserts best efforts obligation to contract with a third-party archive. Content-specific obligations 2(j) N/A Adds reference to Crossref’s record type rules and obligation to comply. Fees 3 2(b) Old agreement referred generally to “all membership dues and any charges or fees as established by the Board from time to time and set forth on the PILA Site.” New Section 3 aims to summarize the categories of fees associated with membership, including a reference to service fees for optional services if and when elected by the Member. Adds Member obligation to cover wire transfer fees/other payment costs. General license 4(a) 4 Clarifies that the license grant covers only metadata and identifiers “corresponding to such Member’s Content.” Metadata rights \u0026amp; limitations 4(b) 5 Significantly streamlines wording. Crossref’s IP 4(c) 6 Significantly streamlines wording. Distribution of metadata 5 9(b) Updates language regarding Crossref’s rights to distribute Metadata. Adds an explicit carveout for a Member’s reference distribution preference. N/A 7, 8, 9(a) Deletes extensive provision relating to obsolete “Clean-Up” and “Reverse Look-Up” services. Deletes provisions relating to obsolete “caching and transfer” activities, and local hosting. Use of marks 6 10 Substantially rewritten, including to reflect Crossref’s more permissive approach to use of its logo. Maintenance of the Crossref Infrastructure 7 [No analog.] Adds covenant of Crossref to maintain the Crossref Infrastructure. Term 8 11 Eliminates the concept of automatically renewing 12-month terms. Replaces with a perpetual term that continues until superseded by an amended version. Termination of membership 9(a) 11 Provides for termination by the member upon written notice, rather than 90 days’ written notice, to align with the Bylaws. Adds a for-cause termination right by the Member, and corresponding right to receive a refund of fees. Sets out certain bases for termination of membership by Crossref, consistent with the Bylaws. Appeal rights 9(b) 13 No material change. Effect of termination of membership 9(c) 12 Adds refund right for for-cause terminations. Enforcement 10 13 Replaces “Crossref has the right but not the obligation to enforce the terms of this Agreement …” with “Crossref shall take reasonable steps to enforce these Terms … .” Governing law; venue 11 14(a) Keeps New York as choice of law, but moves forum to Boston, nearer to Crossref’s US location. Disputes 12 14(b) No material change (but note venue provision moved to 11(a)). N/A 15 Eliminates mutual “warranty” provision; addresses rights to content and anti-infringement under other provisions. Indemnification 13 16 Removes concept that Member is indemnifying other Crossref Members. Streamlines and cleans up the indemnity language. Limitation of Liability 14 17 Adds explicit reference to the Crossref Infrastructure. Assignment 16(c) 22 Removed language providing that Crossref’s consent to assignment of the Terms shall not be unreasonably delayed or conditioned. Amendment 18 2(c) Old: “The Board shall have the power to modify the terms of this Agreement by publishing amended versions that will automatically supersede prior versions … . 
PILA will use its reasonable discretion in deciding if a modification is material, and if so will provide written notice” to the Member of the material changes. New: “These Terms may be amended by Crossref, via updated Terms posted on the Website and emailed to each Member not less than sixty (60) days prior to effectiveness. By using the Crossref Infrastructure after the effective date of any such amendment hereto, the Member accepts the amended Terms.” Data privacy 19 N/A Adds a GDPR-compliant privacy provision; adds a linked reference to Crossref’s new Privacy Policy. Compliance 20 N/A Adds a mutual compliance covenant and an OFAC/sanctions representation. Various legal “boilerplate” terms (taxes, waiver, independent contractor 15-17 18-28 Streamlined; replaced with more contemporary formulations; eliminated some excess verbiage. Thanks for reading this far! Please contact our member experience team with any questions.\n", "headings": ["Tl;dr","Why are we updating the terms?","1. To improve efficiency","2. To clarify the wording","3. To emphasize the community aspect and our members’ obligations","What’s changing, step-by-step","For new members","For existing members","The nitty-gritty details","Thanks for reading this far!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/updates-to-our-by-laws/", "title": "Updates to our by-laws", "subtitle":"", "rank": 1, "lastmod": "2018-11-29", "lastmod_ts": 1543449600, "section": "Blog", "tags": [], "description": "Good governance is important and something that Crossref thinks about regularly so the board frequently discusses the topic, and this year even more so. At the November 2017 meeting there was a motion passed to create an ad-hoc Governance Committee to develop a set of governance-related questions/recommendations. The Committee has met regularly this year and the following questions are under deliberation regarding term limits, role of the Nominating Committee, implications of contested elections, and more.\n", "content": "Good governance is important and something that Crossref thinks about regularly so the board frequently discusses the topic, and this year even more so. At the November 2017 meeting there was a motion passed to create an ad-hoc Governance Committee to develop a set of governance-related questions/recommendations. The Committee has met regularly this year and the following questions are under deliberation regarding term limits, role of the Nominating Committee, implications of contested elections, and more.\nThe full motion to create the committee is:\nThe ad hoc Governance Committee should discuss and make specific recommendations (including where necessary proposing appropriate by-law amendments) about (i) the timing of the annual election of members and whether the newly elected Board can take office a fixed period after the election results are finalized; (ii) the role and responsibilities of the Nominating Committee and its relationship to the Board; (iii) the implications of having contested Board elections; (iv) the election of officers, Executive Committee members, and committee chairs, and v) options and required changes for board members to represent specific constituencies (e.g. 
based on membership types).\nThe Governance Committee members are:\nPaul Peters (Hindawi and Board Chair) Scott Delman (ACM and Board Treasurer) Chris Shillum (Elsevier and Executive Committee member) Mark Patterson (eLife) Ed Pentz (Crossref Executive Director) Lisa Hart (Crossref Finance \u0026amp; Operations Director) Emily Cooke (Pierce Atwood, legal counsel). The committee’s goal was to try to maintain and increase transparency; consider practicality and impact of any changes and ensure continuity and balance.\nAt the March meeting the committee provided an overview of the issues they had discussed. There was consensus to accept the committee’s recommendation to address all of the governance matters comprehensively at the July 2018 meeting.\nDiscussions resulted in two changes to our by-laws:\n1. Membership eligibility To provide clarity around membership qualification, we resolved to amend Article I Section 1 by replacing the text in its entirety with the following text:\nMembership in Crossref shall be open to any organization that publishes professional and scholarly materials and content and otherwise meets the terms and conditions of membership established from time to time by the Board of Directors, and to such other entities as the Board of Directors shall determine from time to time.\n2. Start date of board terms We also resolved to amend Article V Section 4 to replace the phrase “on the day after” with the phrase “during the next calendar quarter immediately following”. This allows the Board to meet directly ahead of Crossref’s Annual Meeting and Board election (from 2019) instead of directly after.\nThe first change captures the fact that we have a very broad community beyond what is seen as traditional publishers, who themselves do not solely identify as publishers anymore. It reflects how our membership has evolved, and also includes organizations that publish that aren’t publishers (universities, government agencies, etc.)\nThe second change was a practical one. As Crossref had its first contested election in 2017, and in 2018 as well, it seemed unreasonable to have a brand new Board meet the day after the election, especially when there is potential for officers to not be re-elected. The old by-laws were very specific about holding the Board meeting the day after the election. With this change, starting with the March meeting, the new Board will have a full calendar year of meetings, which seems more practical, and we will establish a process for the election of officers.\nDuring that meeting we deliberated the following questions/recommendations raised by the committee:\nDevelopment of a policy on canvassing/campaigning by candidates in Board elections; Development of policies on nominations to the positions of Chair, Treasurer, Executive Committee members, the Nominating Committee Chair, and the Audit Committee Chair; Analysis of how best to achieve balance and representation on the Board going forward (designated seats and/or a binding Board remit to the Nominating Committee); Analysis as to whether to impose term limits on board members; Analysis as to how best to handle independent nominations to the Board (eliminate the option, or improve the process); and Review of our governing documents’ provisions on vacancies, to confirm that the Board follows the required steps on the filling of vacancies. At the November 2018 Board meeting\u0026mdash;following Crossref LIVE18\u0026mdash;there were two more amendments to the bylaws:\n3. 
Removal of independent nominations To remove Art. VII Sec. 3 on independent nominations. This change reflects the consensus that there is no need for independent nominations with the introduction of contested elections.\n4. Membership start date of record To amend Art. I Sec. 2 to amend language dealing with record date of membership. This is a practical change following the July 2018 introduction of new membership terms which are click-through online terms and don\u0026rsquo;t need counter-signatures.\nThe new Board will resume the discussion on designated seats at our March 2019 meeting.\n", "headings": ["1. Membership eligibility","2. Start date of board terms","3. Removal of independent nominations","4. Membership start date of record"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/data-citation-what-and-how-for-publishers/", "title": "Data Citation: what and how for publishers", "subtitle":"", "rank": 1, "lastmod": "2018-11-23", "lastmod_ts": 1542931200, "section": "Blog", "tags": [], "description": "We’ve mentioned why data citation is important to the research community. Now it’s time to roll up our sleeves and get into the ‘how’. This part is important, as citing data in a standard way helps those citations be recognised, tracked, and used in a host of different services.\n", "content": "We’ve mentioned why data citation is important to the research community. Now it’s time to roll up our sleeves and get into the ‘how’. This part is important, as citing data in a standard way helps those citations be recognised, tracked, and used in a host of different services.\nThis week A Data Citation Roadmap for Scientific Publishers was published in Scientific Data. This roadmap is the outcome of a collaboration between different publishers that worked on identifying all steps you need to take as a publisher to implement data citation. If you want to know more about establishing a data policy, capturing data citations at the point of submission, or tagging data citations in your XML, we recommend you take a look at this article!\nIn this blog post, we’ll discuss the steps you need to take after you’ve implemented this roadmap. The steps in the roadmap describe how you can track \u0026amp; tag data citations yourself. Here we describe how Crossref can help you make these available to the rest of the community.\nThe \u0026lsquo;what\u0026rsquo; Here’s the recap! From the Crossref perspective, there are two ways to add data citation links into the metadata that you register:\n1. Metadata deposits using the references section of the schema This is where ‘citations’ are normally recorded. Publishers include the data citation into the deposit of bibliographic references for each publication.\nPublishers can deposit the full data or software citation as an unstructured reference. For guidance here, we recommend that authors cite the dataset or software based on community best practice (Joint Declaration of Data Citation Principles, FORCE11 citation placement, FORCE11 Software Citation Principles).\n\u0026lt;citation key=\u0026#34;ref=3\u0026#34;\u0026gt; \u0026lt;unstructured_citation\u0026gt;Morinha F, Dávila JA, Estela B, Cabral JA, Frías Ó, González JL, Travassos P, Carvalho D, Milá B, Blanco G (2017) Data from: Extreme genetic structure in a social bird species despite high dispersal capacity. Dryad Digital Repository. 
http://0-dx-doi-org.libus.csd.mu.edu/10.5061/dryad.684v0\u0026lt;/unstructured_citation\\\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt; Or they can employ any number of reference tags currently accepted by Crossref.\n\u0026lt;citation key=\u0026#34;ref2\u0026#34;\u0026gt; \u0026lt;doi\u0026gt;10.5061/dryad.684v0\u0026lt;/doi\u0026gt; \u0026lt;cYear\u0026gt;2017\u0026lt;/cYear\u0026gt; \u0026lt;author\u0026gt;Morinha F, Dávila JA, Estela B, Cabral JA, Frías Ó, González JL, Travassos P, Carvalho D, Milá B, Blanco G\u0026lt;/author\u0026gt; \u0026lt;/citation\u0026gt; We are exploring JATS4R recommendations to expand the current collection and better support these citations - more on this soon. We also encourage additional suggestions from the community.\n2. Metadata deposits using the relations section of the schema This is where other relationships can be recorded. Publishers assert the data link in the relationship section of the metadata deposit. Here, publishers can identify data which are direct outputs of the research results if this is known. This level of specificity is optional, but we’d recommend it as it can support scientific validation and research funding management.\nData and software citations via relation type enables precise tagging of the dataset and its specific relationship to the research results published. To tag the data \u0026amp; software citation in the metadata deposit, we ask for the description of the dataset \u0026amp; software (optional), dataset \u0026amp; software identifier and identifier type (DOI, PMID, PMCID, PURL, ARK, Handle, UUID, ECLI, and URI), and relationship type.\n\u0026lt;program xmlns=\u0026#34;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026#34;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Data from: Extreme genetic structure in a social bird species despite high dispersal capacity\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026#34;references\u0026#34; identifier-type=\u0026#34;doi\u0026#34;\u0026gt;10.5061/dryad.684v0\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; \u0026lt;/doi_relations\u0026gt; In general, use the relation type references for data and software resources.\nPublishers who wish to specify that the data or software resource was generated as part of the research results can use the isSupplementedBy relation type.\nThe \u0026lsquo;how\u0026rsquo; I create my own XML and register it with Crossref Add links to datasets into your reference lists, including their DOIs if available as shown above and deposit them with Crossref. We’ll do the rest. If you want to add references to existing metadata records, you don’t need to redeposit the full article metadata, you can send us a resource-only deposit that just contains the reference metadata to append that to the existing metadata for the article. You can also use this method if you prefer to deposit references in a separate workflow to registering your content (we know some members prefer to work this way).\nI’ve started using Metadata Manager for journal article deposits Article\u0026lt;-\u0026gt;Data relationships in Crossref\nYou can deposit data citations using either method using our new Metadata Manager tool. When entering journal article metadata, you can use the ‘Related Items’ section to enter the DOI (or other identifier) for the dataset, the type of identifier, a description of the relation type e.g. 
\u0026lsquo;Data from: Extreme genetic structure in a social bird species despite high dispersal capacity’, and the relation type - ‘references’ or ‘is supplemented by’ depending on the relationship between the data and the article as described above. When you make the deposit, this relationship information will be registered in Crossref along with the rest of the article metadata.\nMetadata Manager also has a section where you can enter and match your references, and then deposit these with Crossref. If you choose this method, enter any data citations into the references section before depositing the article metadata with Crossref.\nIf you want to add this information to deposits you have already made using Metadata Manager, you can search for the journals and articles in the interface, bring up the existing metadata and add in the additional information before redepositing.\nI use \u0026ldquo;simple text query\u0026rdquo; to search for and deposit references Make sure you include any citations to data in the references you add into Simple Text Query. When you use simple text query to deposit these references, they will then be added into the article metadata in the Crossref database.\nIf you use OJS, they’re working on functionality (due for release soon) that will make it easier to deposit reference metadata with Crossref, so you can include citations to data in that.\nAll of this metadata\u0026mdash;registered with Crossref\u0026mdash;make it possible to build up pictures of data citations, linking, and relationships. Whether the citations come from the authors in the reference list or they are extracted by the publisher and then deposited, Crossref collects them across publishers. We then make the aggregate set freely available via Crossref’s APIs in multiple interfaces (REST, OAI-­PMH, OpenURL) and formats (XML and JSON). DataCite does the same for data repositories and so this provides an easy way for publishers and data repositories to exchange information about data citations. As mentioned previously, this all feeds in Event Data. Data is made openly available to a wide host of parties across the extended research ecosystem including funders, research organisations, technology and service providers, indexers, research data frameworks such as Scholix, etc.\nDo you have questions about how to add these links to your Crossref or DataCite metadata? We’ll be running a series of webinars in early 2019 to give you a chance to join us live and ask any questions you have. Eager to get started in the meantime? Let us know and we’ll start to coordinate.\n", "headings": ["The \u0026lsquo;what\u0026rsquo;","1. Metadata deposits using the references section of the schema","2. Metadata deposits using the relations section of the schema","The \u0026lsquo;how\u0026rsquo;","I create my own XML and register it with Crossref","I’ve started using Metadata Manager for journal article deposits","I use \u0026ldquo;simple text query\u0026rdquo; to search for and deposit references"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/matchmaker-matchmaker-make-me-a-match/", "title": "Matchmaker, matchmaker, make me a match", "subtitle":"", "rank": 1, "lastmod": "2018-11-12", "lastmod_ts": 1541980800, "section": "Blog", "tags": [], "description": "Matching (or resolving) bibliographic references to target records in the collection is a crucial algorithm in the Crossref ecosystem. 
Automatic reference matching lets us discover citation relations in large document collections, calculate citation counts, H-indexes, impact factors, etc. At Crossref, we currently use a matching approach based on reference string parsing. Some time ago we realized there is a much simpler approach. And now it is finally battle time: which of the two approaches is better?\n", "content": "Matching (or resolving) bibliographic references to target records in the collection is a crucial algorithm in the Crossref ecosystem. Automatic reference matching lets us discover citation relations in large document collections, calculate citation counts, H-indexes, impact factors, etc. At Crossref, we currently use a matching approach based on reference string parsing. Some time ago we realized there is a much simpler approach. And now it is finally battle time: which of the two approaches is better?\nTL;DR I evaluated and compared four approaches to reference matching: the legacy approach based on reference parsing, and three variants of the new idea called search-based matching. A large automatically generated dataset was used for the experiments. It is composed of 7,374 metadata records from the Crossref collection, each of which was formatted automatically into reference strings using 11 citation styles. The main metrics used for the evaluation are precision and recall. I also use F1 as a standard metric that combines precision and recall into a single number, weighing them equally. All values are calculated for each metadata record separately and averaged over the dataset. In general, search-based matching is better than the legacy approach in F1 and recall, but worse in precision. The best variant of search-based matching outperforms the legacy approach in average F1 (84.5% vs. 52.9%), with the average precision worse by only 0.1% (99.2% vs 99.3%), and the average recall better by 88% (79.0% vs. 42.0%). The best variant of search-based matching also outperforms the legacy approach in average F1 for each one of the 11 styles. A weak spot of the parsing-based approach is degraded/noisy reference strings, which do not appear to use any of the known citation styles. A weak spot of search-based approach is short reference strings, and in particular citation styles that do not include the title in the reference string. Introduction In reference matching, on the input we have a bibliographic reference. It can have the form of an unstructured string, such as:\n(1) Adamo, S. H.; Cain, M. S.; Mitroff, S. R. Psychological Science 2013, 24, 2569–2574.\nThe input can also have the form of a structured reference, such as (BibTex format):\n@article{adamo2013, author = {Stephen H. Adamo and Matthew S. Cain and Stephen R. Mitroff}, title = {Self-Induced Attentional Blink: A Cause of Errors in Multiple-Target Search}, journal = {Psychological Science}, volume = {24}, number = {12}, pages = {2569-2574}, year = {2013} } The goal of matching is to find the document, which the input reference points to.\nMatching algorithms Matching references is not a trivial task even for a human, not to mention the machines, which are still a bit less intelligent than us (or so they want us to believe…). A typical meta-approach to reference matching might be to score the similarity between the input reference and the candidate target documents. The document most similar to the input is then returned as the target.\nOf course, still a lot can go wrong here. 
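As a rough sketch of this meta-approach (the names and the example threshold are placeholders, not Crossref's production code), the whole family of matchers boils down to scoring every candidate and keeping the best one, provided it is convincing enough:

```python
def match_reference(reference, candidates, similarity, threshold=0.7):
    """Generic matching meta-approach: score every candidate target record
    against the input reference and return the most similar one, if any.

    reference  - the input reference (a plain string or a structured record)
    candidates - iterable of candidate target records
    similarity - function (reference, candidate) -> score; this is the part
                 where matching approaches really differ
    threshold  - minimum score we are willing to accept as a match
    """
    best_score, best_candidate = 0.0, None
    for candidate in candidates:
        score = similarity(reference, candidate)
        if score > best_score:
            best_score, best_candidate = score, candidate
    # Only return a match if the best candidate is convincing enough;
    # otherwise assume the target is not in the collection at all.
    return best_candidate if best_score >= threshold else None
```

Swapping in a different similarity function turns this same skeleton into either a parsing-based matcher or a search-based one.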
We can have more than one potential target record with the same score (which one do we choose?). We can have only documents with low to medium scores (is the actual target even present in our collection?). We can also have errors in the input string (are the similarity scores robust enough?). Life\u0026rsquo;s tough!\nThe main difference between various matching algorithms is in fact how the similarity is calculated. For example, one idea might be to compare the records field by field (how similar is the title/author/journal in the input reference to the title/author/journal of our candidate target record?). This is roughly how the matching works currently at Crossref.\nThe main problem with this approach is that it requires a structured reference, and in practise, often all we have is a plain reference string. In such a case we need to extract the metadata fields from the reference string (this is called parsing). Parsing introduces errors, since no parser is omniscient. The errors propagate further and affect the scoring… you get the picture.\nLuckily, as we have known for some time now, this is not the only approach. Instead of comparing structured objects, we could calculate the similarity between them using their unstructured textual form. This effectively eliminates the need for parsing, since the unstructured form is either already available on the input or can be easily generated from the structured form.\nWhat about the similarity scores? We already know a powerful method for scoring the similarities between texts. Those are (you guessed it!) scoring algorithms used by search engines. Most of them, including Crossref\u0026rsquo;s, do not need a structured representation of the object, they are perfectly happy with just a plain text query.\nSo all we need to do is to pass the original reference string (or some concatenation of the reference fields, if only a structured reference is available) to the search engine and let it score the similarity for us. It will also conveniently sort the results so that it is easy to find the top hit.\nEvaluation So far so good. But which strategy is better? Is it better to develop an accurate parser, or just rely on the search engine? I don\u0026rsquo;t feel like guessing. Let\u0026rsquo;s try to answer this using (data) science. But first, we need to decompose our question into smaller pieces.\nQuestion 1. How can I measure the quality of a reference matcher? Generally speaking, this can be done by checking the resulting citation links. Simply put, the better the links, the better the matching approach must have been.\nA few standard metrics can be applied here, including accuracy, precision, recall and F1. We decided to calculate precision, recall and F1 separately for each document in the dataset, and then average those numbers over the entire dataset.\nWhen I say \u0026ldquo;documents\u0026rdquo;, I really mean \u0026ldquo;target documents\u0026rdquo;:\nprecision for a document X tells us, what percentage of links to X in the system are correct recall for a document X tells us, what percentage of true links to X are present in the system F1 is the harmonic mean of precision and recall F1 is a single-number metric combining precision and recall. In F1 precision and recall are weighted equally. 
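A minimal sketch of these per-document metrics and of the macro-averaging over the dataset (illustrative only, not the actual evaluation code):

```python
def document_metrics(returned_links, true_links):
    """Precision, recall and F1 for a single target document X.

    returned_links - set of items the system links to X
    true_links     - set of items that truly cite X
    """
    correct = len(returned_links & true_links)
    precision = correct / len(returned_links) if returned_links else 1.0
    recall = correct / len(true_links) if true_links else 1.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


def average_over_dataset(per_document_values):
    """Macro-average: mean of the per-document (precision, recall, F1) tuples."""
    n = len(per_document_values)
    return tuple(sum(column) / n for column in zip(*per_document_values))
```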
It is also possible to combine precision and recall using different weights, to place more emphasis on one of those metrics.\nWe decided to look at links from the target document\u0026rsquo;s perspective, because this is what the academic world cares about (i.e. how accurate the citation counts of academic papers are).\nCalculating separate numbers for individual documents and averaging them within a dataset is the best way to have reliable confidence intervals (which makes the whole analysis look much smarter!).\nQuestion 2. Which approaches should be compared? In total we tested four reference matching approaches.\nThe first approach, called the legacy approach, is the approach currently used in Crossref ecosystem. It uses a parser and matches the extracted metadata fields against the records in the collection.\nThe second approach is the search-based matching (SBM) with a simple threshold. It queries the search engine using the reference string and returns the top hit from the results, if its relevance score exceeds the threshold.\nThe third approach is the search-based matching (SBM) with a normalized threshold. Similarly as in the simplest SBM, in this approach we query the search engine using the reference string. In this case the first hit is returned if its normalized score (the score divided by the reference length) exceeds the threshold.\nFinally, the fourth approach is a variation of the search based matching, called search-based matching with validation (SBMV). In this algorithm we use additional validation procedure on top of SBM. First, SBM with a normalized threshold is applied and the search results with the scores exceeding the normalized threshold are selected as candidate target documents. Second, we calculate validation similarity between the input string and each of the candidates. This validation similarity is based on the presence of the candidate record\u0026rsquo;s metadata fields (year, volume, issue, pages, the last name of the first author, etc.) in the input reference string, as well as the relevance score returned by the search engine. Finally, the most similar candidate is returned as the final target document, if its validation similarity exceeds the validation threshold.\nBy adding the validation stage to the search-based matching we make sure that the same bibliographic numbers (year, volume, etc.) are present in both the input reference and the returned document. We also don\u0026rsquo;t simply take the first result, but rather use this validation similarity to choose from results scored similarly by the search engine.\nAll the thresholds are parameters which have to be set prior to the matching. The thresholds used in these experiments were chosen using a separate dataset, as the values maximizing the F1 of each algorithm.\nQuestion 3. How to create the dataset? Results We could try to calculate our metrics for every single document in the system. Since we currently have over 100M of them, this would take a while, and we already felt impatient\u0026hellip;\nA faster strategy was to use sampling with all the tools statistics was so generous to provide. And this is exactly what we did. We used a random sample of 2500 items from our system, which is big enough to give reliable results and, as we will see later, produces quite narrow confidence intervals.\nApart from the sample, we needed some input reference strings. We generated those automatically by formatting the metadata of the chosen items using various citation styles. 
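For the custom degraded "styles" listed below, generating a reference string can be as simple as concatenating metadata fields. A small sketch, with a made-up record layout rather than the real Crossref metadata schema:

```python
def degraded_reference(record):
    """Format a metadata record in the custom "degraded" style described below:
    a plain concatenation of authors, title, container title, year, volume,
    issue and pages. The record layout is made up for illustration; it is not
    the Crossref metadata schema."""
    parts = [
        " ".join(record.get("authors", [])),
        record.get("title", ""),
        record.get("container_title", ""),
        str(record.get("year", "")),
        str(record.get("volume", "")),
        str(record.get("issue", "")),
        record.get("pages", ""),
    ]
    return " ".join(part for part in parts if part)


example = {
    "authors": ["S. H. Adamo", "M. S. Cain", "S. R. Mitroff"],
    "title": "Self-Induced Attentional Blink: A Cause of Errors in Multiple-Target Search",
    "container_title": "Psychological Science",
    "year": 2013,
    "volume": 24,
    "issue": 12,
    "pages": "2569-2574",
}
print(degraded_reference(example))
```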
(Similarly to what happens when you automatically format the bibliography section for your article. Or at least we hope you don\u0026rsquo;t produce those reference strings manually…)\nFor each record in our sample, we generated 11 citation strings, using the following styles:\nWell known citation styles from various disciplines: american-chemical-society (acs), american-institute-of-physics (aip), elsevier-without-titles (ewt), apa, chicago-author-date, modern-language-association (mla).\nKnown styles + random noise. To simulate not-so-clean data, we randomly added noise (additional spaces, deleted spaces, typos) to the generated strings of the following styles: american-institute-of-physics, apa.\nCustom degraded \u0026ldquo;styles\u0026rdquo;: degraded: a simple concatenation of authors\u0026rsquo; names, title, container title, year, volume, issue and pages; one author: a simple concatenation of the first author\u0026rsquo;s name, title, container title, year, volume, issue and pages; title scrambled: same as degraded, but with title words randomly shuffled.\nSome styles include the DOI in the reference string. In such cases we stripped the DOI from the string, to make the matching problem non-trivial.\nAn ideal matching algorithm will match every generated string to the record it was generated from. In practice, some of the expected matches will be missing, which will lower the recall of the tested matching approach. On the other hand, it is very probable that we will get a precision of 100%. To have the precision lower than 100%, we would have to have some unexpected matches to our sampled documents, which is unlikely. This is obviously not great, because we are missing a very important piece of information.\nWhat can we do to “encourage” such mismatches to our sampled documents? We could generate additional reference strings of documents that are not in our sample, but are similar to the documents in our sample. Hopefully, we will see some incorrect links from those similar strings to our sampled documents.\nFor each sampled document I added up to 2 similar documents (I used, surprise surprise, our search engine to find the most similar documents). I ended up with 7,374 items in total (2,500 originally sampled and 4,874 similar items). For each item, 11 different reference strings were generated. Each reference string was then matched using the tested approaches and I could finally look at some results.\nResults First, let\u0026rsquo;s compare the overall results averaged over the entire dataset:\nThe small vertical black lines at the top of the boxes show the confidence intervals at the confidence level 95%. The table gives the exact values, listed as average precision / average recall / average F1, with the 95% confidence interval in parentheses (best precision: legacy approach; best recall: SBM with a normalized threshold; best F1: SBMV).\nlegacy approach: 0.9933 (0.9910 - 0.9956) / 0.4203 (0.4095 - 0.4312) / 0.5289 (0.5164 - 0.5413)\nSBM (simple threshold): 0.9890 (0.9863 - 0.9917) / 0.7127 (0.7021 - 0.7233) / 0.7866 (0.7763 - 0.7968)\nSBM (normalized threshold): 0.9872 (0.9844 - 0.9901) / 0.7905 (0.7796 - 0.8015) / 0.8354 (0.8249 - 0.8458)\nSBMV: 0.9923 (0.9902 - 0.9945) / 0.7902 (0.7802 - 0.8002) / 0.8448 (0.8352 - 0.8544)\nThe confidence intervals given in the table are the ranges in which the real average precision, recall and F1 are 95% likely to lie. For example, we are 95% sure that the real F1 for SBMV in our entire collection is within the range 0.8352 - 0.8544.\nAs we can see, each metric has a different winner.\nThe legacy approach is the best in precision. 
This suggests the legacy approach is quite conservative and outputs a match only if it is very sure about it. This might also result in missing a number of true matches (false negatives).\nAccording to the paired Student\u0026rsquo;s t-test, the difference between the average precision of the legacy approach and the average precision of the second best SBMV is not statistically significant. This means we cannot rule out that this difference is simply the effect of the randomness in sampling, and not the sign of the true difference.\nSBM with a normalized threshold is the best in recall. This suggests that it is fairly tolerant and returns a lot of matches, which might also result in returning more incorrect matches (false positives). Also in this case the difference between the winner and the second best (SBMV) is not statistically significant.\nSBMV is the best in F1. This shows that this approach balances precision and recall the best, despite being only the second best in both of those metrics. According to the paired Student\u0026rsquo;s t-test, the difference between SBMV and the second best approach (SBM with a normalized threshold) is statistically significant.\nAll variants of the search-based matching outperform the parsing-based approach in terms of F1, with statistically significant differences. This shows that in search based-matching it is possible to keep precision almost as good as in the legacy approach, and still include many more true positives.\nLet\u0026rsquo;s also look at the same results split by the citation style:\nFor all styles the precision values are very high, and the legacy approach is slightly better than all variations of the search-based approach.\nIn terms of recall and F1 SBM with a simple threshold is better than the legacy approach in 8 out of 11 styles. The three styles for which the legacy approach outperforms SBM with a simple threshold are styles that do not include the title in the reference strings (acs, aip and ewt). The reason for this is that the simple threshold cannot be well calibrated for shorter and longer reference strings at the same time.\nSBM with a normalized threshold and SBMV is better than the legacy approach in recall and F1 for all 11 styles.\nThe weak spot of the legacy approach is degraded and noisy reference strings, which do not appear to use any of the known citation styles.\nThe weak spot of the search-based matching is short reference strings, and in particular citation styles that do not include the title in the string.\nLimitations The limitations are related mostly to the method of building the dataset.\nAll the numbers reported here are estimates, since they were calculated on a sample. The numbers show strengths and weaknesses of each approach, but they do not reflect the real precision and recall in the system: Since we included only 2 similar documents for each document in the sample, precision is most likely lower in the real data. We used a number of styles distributed uniformly. Of course in the real system the styles and their distribution might be different, which affects all the calculated numbers. ", "headings": ["TL;DR","Introduction","Matching algorithms","Evaluation","Question 1. How can I measure the quality of a reference matcher?","Question 2. Which approaches should be compared?","Question 3. 
How to create the dataset?","Results","Results","Limitations"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-does-the-sample-say/", "title": "What does the sample say?", "subtitle":"", "rank": 1, "lastmod": "2018-11-09", "lastmod_ts": 1541721600, "section": "Blog", "tags": [], "description": "At Crossref Labs, we often come across interesting research questions and try to answer them by analyzing our data. Depending on the nature of the experiment, processing over 100M records might be time-consuming or even impossible. In those dark moments we turn to sampling and statistical tools. But what can we infer from only a sample of the data?\n", "content": "At Crossref Labs, we often come across interesting research questions and try to answer them by analyzing our data. Depending on the nature of the experiment, processing over 100M records might be time-consuming or even impossible. In those dark moments we turn to sampling and statistical tools. But what can we infer from only a sample of the data?\nImagine you are cooking soup. You just put some salt in it and now you are wondering if it is salty enough. What do you do next?\nOption #1: Since you carefully measured 1/7 of a teaspoon of salt per 0.13 litres of soup (as always), you already know the soup is fine. Everyone else better stop asking silly questions and eat their soup. Option #2: You stir everything carefully and taste a tablespoon. If it is not salty enough, you put more salt in the soup and repeat the tasting procedure. Option #3: You eat a tablespoon of soup and it tastes fine. But wait, there\u0026rsquo;s more soup in the pot, what if the sip you\u0026rsquo;ve just tasted was somehow different than the rest? You decide it\u0026rsquo;s better to eat another spoon of soup (which tastes fine). Still, a lot of soup left, who knows what that tastes like? It might be safer to eat an entire bowl of soup. Hmm, still not sure, you\u0026rsquo;ve eaten such a small fraction of the soup, who can guarantee the rest tastes the same? You have no choice but to eat another bowl, and then some more… Ooops, now you have eaten the entire pot of soup! At least you can be 100% sure now that the soup was indeed salty enough. The problem is, there is no soup left, and also, you don\u0026rsquo;t feel so good. But people are getting hungry, so you start cooking a new batch… If your answer was option #3, read on. Your life is going to get easier!\nTL;DR Sampling and confidence intervals can be used to estimate the mean of a certain feature, or the proportion of items passing a certain test, by calculating it only for a random sample of items, instead of the entire large set of items. Note that estimating =/= guessing. Confidence intervals are a way of controlling the amount of uncertainty related to randomness in sampling. The confidence interval has a form (estimated value - something, estimated value + something). Confidence interval at the confidence level 95% is interpreted as follows: we are 95% sure that the real value that we are estimating is within our calculated confidence interval. The higher the confidence level (i.e. the more certain we want to be about the interval), the wider the interval has to be. The larger the sample, the narrower the confidence interval. We are never 100% sure that the value we are estimating is actually within our calculated confidence interval. By setting the confidence level high, we only make sure this is a very likely event. 
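As a concrete illustration of the kind of interval discussed above, here is a minimal sketch using the normal approximation for a sample proportion (a standard construction; whether it is exactly the method behind the numbers later in this post is an assumption):

```python
from math import sqrt

def proportion_confidence_interval(successes, n, z=1.96):
    """Normal-approximation confidence interval for a sample proportion.
    z = 1.96 corresponds to a 95% confidence level."""
    p = successes / n
    margin = z * sqrt(p * (1 - p) / n)
    return p, (p - margin, p + margin)

# For example, 92 correctly matched references out of a sample of 100:
estimate, (low, high) = proportion_confidence_interval(92, 100)
print(f"estimate {estimate:.2f}, 95% CI ({low:.4f}, {high:.4f})")
```

A larger sample size shrinks the margin, and a larger z value (a higher confidence level) widens it, which is exactly the behaviour described above.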
The problem Sampling and estimating drew my attention while I was working on the evaluation of the reference matching algorithms. In Crossref\u0026rsquo;s case, reference matching is the task of finding the target document DOI for the given input reference string, such as:\n(1) Adamo, S. H.; Cain, M. S.; Mitroff, S. R. Psychological Science 2013, 24, 2569–2574.\nAccurate reference matching is very important for the scientific community. Thanks to automatic reference matching we are able to find citing relations in large document sets, calculate citation counts, H-indexes, impact factors, etc.\nFor several weeks now I have been investigating a simple reference matching algorithm based on the search engine. In this algorithm, we use the input reference string as the query in the search engine, and we return the first item from the results as the target document. Luckily, at Crossref we already have a good search engine in place, so all the pieces are there.\nI was interested in how well this simple algorithm works, i.e. how often the correct target document is found. For example, let\u0026rsquo;s say we have a reference string in APA citation style generated for a specific record in Crossref system. How certain can I be that it will be correctly matched to the record\u0026rsquo;s DOI?\nI could calculate this directly by generating the APA reference string for every record in the system and trying to match those strings to DOIs. Since we already have over 100M records, this would take a while and I was getting impatient. So instead of eating the whole pot of soup, I decided to stir and taste just a little bit of it, or, academically speaking, use sampling and confidence intervals.\nThese statistical tools are useful in situations, where we have a large set of items, and we want to know the average of a certain feature of an item in our set, or the proportion of items passing a certain test, but calculating it directly is impossible or difficult. For example, we might want to know the average height of all women living in USA, the average salary of a Java programmer in London, or the proportion of book records in the Crossref collection. The entire set we are interested in is called a population and the value we are interested in is called a population average or a population proportion. Sampling and confidence intervals let us estimate the population average or proportion using only a sample of items, in a reliable and controlled way.\nExperiments In general I wanted to see, how well I can estimate the population proportion of records passing a certain test, using only a sample.\nIn the following experiments, the population is 1 million metadata records from the Crossref collection. I didn\u0026rsquo;t use the entire collection as the population, because I wanted to be able to calculate the real proportion and compare it to the estimates.\nThe test for a single record is: whether the APA reference string generated from said record is correctly matched to the record\u0026rsquo;s original DOI. In other words: if I generate the APA reference string from my record and use it as the query in Crossref\u0026rsquo;s search, will the record be the first element in the result list? Note that this proportion can also be interpreted as the probability that the APA reference string will be correctly matched to the target DOI.\nEstimating from a sample I took a random sample of size 100 from my population and calculated the proportion of the records correctly matched - this is called a sample proportion. 
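That sampling step can be sketched in a few lines (the passes_test predicate is a placeholder for "the generated APA reference string is matched back to this record's DOI"):

```python
import random

def sample_proportion(population, passes_test, k=100):
    """Draw a random sample of k records and return the fraction that pass the test."""
    sample = random.sample(population, k)
    return sum(1 for record in sample if passes_test(record)) / k
```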
In my case, the sample proportion is 0.92. This means that in my sample 92 reference strings were successfully matched to the right DOIs. Not too bad.\nI could now treat this number as the estimate and assume that 0.92 is close to the population proportion. On the other hand, this is only a sample, and a rather small one, which raises doubts. What if our 92 correct matches happen to be the only correct matches in the entire 1M population? In such a case, our estimate of 0.92 would be very far from the population proportion. This uncertainty related to sampling randomness can be captured by the confidence interval.\nConfidence interval The confidence interval for my 100-point sample, at the confidence level 95%, is 0.8668-0.9732. This is interpreted as follows: we are 95% sure that the real population proportion is within the range 0.8668-0.9732. Note that the sample average (0.92) is exactly in the middle of this range.\n100 items is not a big sample. Let\u0026rsquo;s calculate the confidence interval for a sample 10 times larger. From a sample of size 1000 I got the estimate 0.932, and the confidence interval 0.9164-0.9476. Based on this, we can be 95% sure that the real population proportion is within the range 0.9164-0.9476.\nIt seems our interval got smaller when we increased the sample size. Let\u0026rsquo;s plot the intervals for a variety of sample sizes:\nThe blue line represents the estimated proportion for samples of different sizes, and the grey vertical lines are confidence intervals. The estimated proportion varies, because for each size a different sample was drawn.\nWe can see that increasing the sample size decreases the interval. This should make intuitive sense: if we have more data to estimate from, we can expect our estimate to be more reliable (i.e. closer to the population proportion).\nWhat about the confidence level? By setting the confidence level we specify how certain we want to be about our confidence interval. So far I used 95%. What happens if I calculate the confidence intervals for my original sample of 100 records, but with a varying confidence level?\nIn this case the average is always the same, because only one sample was used.\nAs we can see, increasing the confidence level widens the interval. In other words, the more certain we want to be about the interval containing the real population average, the wider the interval has to be.\nSampling distribution So far so good, but where does this magic confidence interval actually come from? It is calculated by the theoretical analysis of the sampling distribution (not to be confused with the sample distribution):\nSample distribution is when we collect one sample of size k and calculate a certain feature for every element in the sample. It is a distribution of k values of the feature in one sample. Sampling distribution is when we independently collect n samples, each of size k, and calculate the sample proportion for each sample. It is the distribution of n sample proportions. Imagine I collect all samples of size 100 from my population and I calculate the sample proportion for each sample. This is the sampling distribution. Now I randomly choose one number from this sampling distribution. Note that this is equivalent to what I did before: choosing one random sample of size 100 and calculating its sample proportion.\nAccording to the Central Limit Theorem, the sampling distribution is approximately normal, with its mean equal to the population proportion. 
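A minimal sketch of how such an interval can be computed (assuming the normal-approximation formula sketched earlier; it reproduces the two intervals quoted above):

```python
from math import sqrt

def proportion_confidence_interval(p_hat, n, z=1.96):
    """Normal-approximation confidence interval for a sample proportion (z=1.96 ~ 95%)."""
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - margin, p_hat + margin)

print(proportion_confidence_interval(0.92, 100))    # roughly (0.8668, 0.9732)
print(proportion_confidence_interval(0.932, 1000))  # roughly (0.9164, 0.9476)
```

The z value of 1.96 is the usual "2 standard deviations" at the 95% level, which is exactly the reasoning the sampling-distribution picture below illustrates.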
Here is the visualisation of the sampling distribution:\nThe black vertical line shows the mean of the sampling distribution. This is also the real population proportion. The grey area covers the middle 95% of the distribution mass (within 2 standard deviations from the mean).\nWhen we choose one sample and calculate the sample proportion, there are two possibilities:\nWith 95% probability, we were lucky and the sample proportion is within the grey area. In that case, the real population proportion is not further than 2 standard deviations from our estimate. With 5% probability, we were unlucky and the sample proportion is outside the grey area. In that case, the real population proportion is further than 2 standard deviations from our estimate. So with 95% confidence we can say that the real population proportion is within 2 standard deviations from our sample proportion. We can see now that these 2 standard deviations of the sampling distribution define our confidence interval at the confidence level of 95%.\nA smaller confidence level would make the grey area narrower, and the confidence interval would shrink as well. A larger confidence level makes the grey area, and the confidence interval, larger.\nTo look more closely at the sampling distribution, I generated sampling distributions for all combinations of \u0026ldquo;n samples of size k\u0026rdquo;, where n and k are the elements of the set {25, 50, 100, 200, 400, 800, 1600, 3200}. This is only an approximation, since the real sampling distributions would contain many more samples.\nHere is the heatmap showing the mean of each sampling distribution (this should be approximately the same as the real population proportion):\nWe can see that there is some variability in the top left part of the heatmap, which corresponds to small sample sizes and small numbers of samples. The bottom right part of the heatmap shows much less variability. As we increase the sample size and number of samples, the mean of the sampling distribution approaches numbers around 0.933.\nHere is the heatmap showing the standard deviation for each sampling distribution:\nWe can clearly see how the standard deviation decreases when we increase the sample size. This is consistent with the previous observation that the confidence interval decreases when the sample size is increased.\nLet\u0026rsquo;s also see the histograms of all the sampling distributions:\nHere we can see the following patterns:\nAll histograms indeed seem to be centered around approximately the same number. The more samples we include, the more normal the sampling distribution appears. This happens because with more samples the real sampling distribution is better approximated. The larger the sample size, the narrower the sampling distribution (i.e. smaller standard deviation). The estimation vs. the real value Let\u0026rsquo;s go back to my original question. What is the proportion of reference strings in APA style that are successfully matched to the original DOIs of the records they were generated from? So far we observed the following:\nA small sample of 100 gave the estimate 0.92 (confidence interval 0.8668-0.9732) A larger sample of 1000 gave the estimate 0.932 (confidence interval 0.9164-0.9476) The means of sampling distributions seem to slowly approach 0.933 So what is the real population proportion in my case? It is 0.933005. 
As we can see, the estimations were fairly close, and the intervals indeed contain the real value.\nNow I can also calculate the confidence interval for each sample in my sampling distributions, and then the fraction of the intervals that contain the real population proportion (I expect these numbers to be close to the confidence level 95%). Here is the heatmap:\nWe can see that for larger sample sizes the fractions are indeed high. The fraction is not always above 95%, as we would expect it to be, especially for smaller sample sizes. One of the reasons is that when we calculate the confidence interval, we approximate the standard deviation of the population with the standard deviation of the sample. This is not always a reliable estimate, especially for small samples. This suggests that sample sizes of at least 1000-2000 should be used.\nBe careful Some important things to remember:\nAggregate functions. As mentioned before, apart from estimating the proportion, a similar procedure can be applied for estimating the average of a certain numeric feature. (Lack of) certainty. Remember that the confidence level \u0026lt; 1. This means that we are never sure that our confidence interval contains the true population proportion. If for any reason you need to be 100% sure, just process the entire dataset. Randomness, a.k.a. “stirring before tasting”. The sample has to be chosen randomly. Beware of assuming that the dataset is shuffled and taking the first 1000 rows! Sample size. We know already that the larger the sample, the better. As a rule of thumb, using sample sizes \u0026lt; 30 makes the estimates, including the interval, rather unreliable. Skewness. In general, the more skewed the original feature distribution, the larger sample we need. In the case of the proportion, the sample should contain at least 5 data points of each value of the feature (passes/doesn\u0026rsquo;t pass the test). Generalization. The sample average/proportion can be used as an estimate for the population average/proportion, but only for the population it was drawn from. This means that if we applied any filters before sampling (which is equivalent to sampling from a subset passing the filter), we can reason only about the filtered subset of the data. Reproducibility. This is more of an engineering concern. In short, all the analyses we do should be reproducible. In the context of sampling it means, at the very least, that we should record the samples we use. ", "headings": ["TL;DR","The problem","Experiments","Estimating from a sample","Confidence interval","Sampling distribution","The estimation vs. the real value","Be careful"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/daatcite/", "title": "DaatCite", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/why-data-citation-matters-to-publishers-and-data-repositories/", "title": "Why Data Citation matters to publishers and data repositories", "subtitle":"", "rank": 1, "lastmod": "2018-11-08", "lastmod_ts": 1541635200, "section": "Blog", "tags": [], "description": "A couple of weeks ago we shared with you that data citation is here, and that you can start doing data citation today. But why would you want to? 
There are always so many priorities, why should this be at the top of the list?\n", "content": "A couple of weeks ago we shared with you that data citation is here, and that you can start doing data citation today. But why would you want to? There are always so many priorities, why should this be at the top of the list?\nI’m sure you heard this before—data sharing and data citation are important for scientific progress. The three key reasons for this are:\n1) Transparency and reproducibility Most scientific results that are shared today are just a summary of what researchers did and found. The underlying data are not available, making it difficult to verify and replicate results. If data would always be made available with publications, transparency of research would be greatly improved.\n2) Reuse The availability of raw data allows other researchers to reuse the data. Not just for replication purposes, but to answer new research questions.\n3) Credit When researchers cite the data they used, this forms the basis for a data credit system. Right now researchers are not really incentivized to share their data, because nobody is looking at data metrics and measuring their impact. Data citation is a first step towards changing that.\nThe benefits described above are all quite long-term, so why, as a publisher or data repository, should you put your resources towards implementing data citation workflows now? During our pre-conference workshop at FORCE2018 we asked repositories and publishers this question. Below you’ll find some of the answers.\nData repositories For data repositories, data citation leads to increased visibility of both the repository and the datasets. The workshop revealed that many repositories do a lot of work to establish links between articles and datasets, thereby significantly contributing to transparency in research. Some of the repositories explained that they hire curators that text mine articles to find associations and manually curate datasets to ensure information about links is part of the metadata. This is reflected in Event Data, where 99% of links between articles and datasets comes from data repository metadata. This downstream enrichment of metadata is useful, but it would be more effective if all stakeholders strive to establish these links at a much earlier stage in the research communication process.\nICPSR, the Inter-university Consortium for Political and Social Research, shared:\nICPSR views data citation as vital. As a large social science data archive, ICPSR curates, preserves, and distributes data for the research community to re-use over time. Data citation makes data visible to the research community. Without it, data cannot be accessed for re-use or reproduced for transparency. Its use cannot be tracked and counted to reveal its impact and potential for new uses by investigators in new fields or in combination with new types of data. Data creators cannot receive adequate credit for their intellectual output. And the original investment by funders and scientists to create those data stops producing dividends. Therefore, data citation plays an essential role in the data sharing lifecycle.\nProper data citation, with a unique identifier, makes it much easier to measure impact. When data use is not cited or cited obliquely, it is rendered virtually invisible. Hence, much data use is still not easily detected. 
The ICPSR Bibliography of Data-related Literature represents ICPSR’s efforts to identify publications that analyze data distributed at ICPSR and link them directly to the data in the ICPSR catalog. As of 2018, ICPSR has a searchable database that contains nearly 80,000 citations of published and unpublished works resulting from analyses of data held in the archive. ICPSR also makes the case for data citation in its brief new video, “ICPSR 101: Why Should I Cite Data?”\nGBIF, the Global Biodiversity Information Facility, explained:\nThe work required to collect, clean, compile and publish biodiversity datasets is significant and deserves recognition. Researchers publish studies based on data made available through GBIF.org at a rate of about 2 papers every single day. It is crucial for GBIF to link these scientific uses to the underlying data as one measure of demonstrating the value and impact of sharing free and open biodiversity data. At the moment, however, only about 10 percent of authors cite or acknowledge the datasets used in research papers properly. As a result, data publishers’ efforts often risk going unnoticed, and the true impact of sharing data remains invisible. GBIF will continue to work with publishers and researchers to provide guidance and input for how to best cite the use of GBIF-mediated data in scientific journals to ensure proper attribution and reproducible research and to demonstrate the true value of free and open access to biodiversity data.\nPublishers By ensuring data is cited in a consistent way, publishers help provide transparency and context for the content they publish. Depositing that information as part of the Crossref metadata helps that work go further by uncovering how data is being used across multiple publications and publishers. This means patterns can be explored and researchers can gain more comprehensive recognition and credit for the work they have done.\nMelissa Harrison, Head of Production Operations at eLife, says:\neLife is committed to ensuring researchers get credit for all their outputs, and data is a major component of this. We\u0026rsquo;re working with Crossref and JATS4R to enable publishers to tag their JATS data content consistently and thus create an easy crosswalk to their Crossref deposits. The JATS4R guidance on Data Availability Statements, linked to and incorporating data citations, will be updated soon, please watch that space!\nIt will be really interesting to see how much re-use of previously published data is happening, look for patterns in re-use, and see links and hopefully building up of data by different research groups. Ultimately, this will incentivize researchers and publishers to ensure it is correctly accredited at source and in publications, improving the cycle further.\nAnita de Waard, VP of Research Collaborations at Elsevier, says:\nOne of the key recommendations of the Force11 Manifesto was to “3.3 Add data, software, and workflows into the publication as first-class research objects”, which will allow greater reproducibility and rigor to experimental research, and allow the reuse of all digital artefacts in the scholarly lifecycle. 
By following the data citation principles, we achieve two things: the author presents a richer representation of their work, and the data producer receives credit for the hard work of curating and publishing citable datasets.\nMendeley Data and Elsevier are active contributors to the Scholix framework that as a collaborative and open standard, allows the open mining of relationships between articles and datasets. We are also active participants in the new Enabling FAIR Data Project, and next to supporting the TOP Guidelines in all domains, require all authors in the earth and space sciences to deposit their data before publication.\nNext week at Crossref LIVE18, Patricia Cruse from DataCite will talk about Data Citations and why they matter. If you’re in Toronto next week, do not hesitate to ask her or anyone from Crossref anything you want to know about data citation!\n", "headings": ["1) Transparency and reproducibility","2) Reuse","3) Credit","Data repositories","Publishers"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ten-more-days-til-toronto/", "title": "Ten more days ’til Toronto", "subtitle":"", "rank": 1, "lastmod": "2018-11-02", "lastmod_ts": 1541116800, "section": "Blog", "tags": [], "description": "Our LIVE Annual Meeting is back in North America for the first time since 2015, and with just 10 days to go, there’s a lot going on in preparation. As you’d expect with a How good is your metadata? theme\u0026mdash;the two-days will be entirely devoted to the subject of metadata\u0026mdash;because it touches everything we do, and everything that publishers, hosting platforms, funders, researchers, and librarians do. Oh, and it\u0026rsquo;s actually super awesome too\u0026mdash;and occasionally fun.\n", "content": "Our LIVE Annual Meeting is back in North America for the first time since 2015, and with just 10 days to go, there’s a lot going on in preparation. As you’d expect with a How good is your metadata? theme\u0026mdash;the two-days will be entirely devoted to the subject of metadata\u0026mdash;because it touches everything we do, and everything that publishers, hosting platforms, funders, researchers, and librarians do. Oh, and it\u0026rsquo;s actually super awesome too\u0026mdash;and occasionally fun.\nMetadata is what is used to describe the story of research: its origin, its contributors, its attention, and its relationships with other objects. The more machines start to do what humans cannot\u0026mdash;parse millions of files through multiple views\u0026mdash;the more we see what connections are missing, and the more we start to understand the opportunities that better metadata could offer.\nWe love metadata so much that we\u0026rsquo;re producing an 8-foot-high depiction of the \u0026lsquo;perfect\u0026rsquo; record, in both XML and JSON, for people to gape at and annotate in person. Sneak preview:\nThe perfect metadata record is eight feet tall. SchemaSchemer\nBoth days feature plenary-style talks, insights from ourselves and guests who will regale us with tales of metadata woes and wonders.\nLisa will be there at the end of Day 1 to update everyone on some recent and potential governance changes, and\u0026mdash;the reason we started these gatherings\u0026mdash;to reveal the results of our 2018 board election, the second contested election we\u0026rsquo;ve held, and already with twice the voters from 2017.\nOur amazing guest speakers are too brilliant and too experienced to highlight in just one blog. 
But check out the LIVE18 schedule to see what they\u0026rsquo;ll be talking about:\nPatricia Cruse, DataCite Ravit David, University of Toronto Clare Dean, Metadata 2020 Paul Dlug, American Physical Society Kristen Fisher Ratan, CoKo Foundation Stefanie Haustein, University of Ottawa Bianca Kramer, Utrecht University Graham Nott, Freelance developer (eLife/JATS) Jodi Schneider, University of Illinois at Urbana-Champaign Shelley Stall, American Geophysical Union We’ll be taking over the entire second floor of the Toronto Reference Library, whose three rooms will house a bunch of conversational sessions as well as some more formal talks:\nRally is the main room where we’ll have the plenary-style talks, a corner for Unscheduled Maintenance offering live support for your questions about billing or tech from Ryan, Shayn, Isaac, Jason, Chuck, \u0026amp; Mike. Running down the whole left side of this room is also the You-are-Crossref wall where the community will showcase their work with metadata through posters - feel free to bring one along and find Patricia to get the sticky tack.\nThe LIVE Lounge is where you can eat, drink, rest, and chat and where you\u0026rsquo;ll likely find Rosa as she liaises between the caterers, the venue, AV, and all of us. The Lounge is also where we\u0026rsquo;ll gather for much-needed post-election refreshments at the end of Tuesday.\nThe Bigger Ambitions Room is where a lot of the Unplugged sessions will take place. This room will feature three separate stations:\nCrossref Labs \u0026amp; Product where you can chat with Geoffrey, Esha, Jennifer L, Patrick, and Christine about your big ideas for us, and what we\u0026rsquo;re working on already. Metadata discussions and annotations of the perfect record (previewed above) with Patricia, together with space to ideate around metadata principles. Uses and users of metadata where Jennifer K will help us understand just how far Crossref metadata can reach, who is using it, and what they are doing with it. We cannot wait to show you what else we have planned :-)\nFor those of you not able to attend, recordings of the presentations will be made available on the event page soon after.\nOtherwise - see you there!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-live-brazil-evoked-vibrant-qa-session/", "title": "Crossref LIVE Brazil evoked vibrant Q&A session", "subtitle":"", "rank": 1, "lastmod": "2018-10-31", "lastmod_ts": 1540944000, "section": "Blog", "tags": [], "description": "There has been a steady increase in the growth of our membership in Latin America—and in Brazil in particular—over the past few years. We currently have more than 800 Brazil-based members; some as individual members, but most are sponsored by another organization. As part of our LIVE Local program Chuck Koscher and I traveled to meet some of these members in Goiânia and Fortaleza, where we co-hosted events with Associação Brasileira de Editores Científicos do Brasil (ABEC Brasil)—one of our largest Sponsors.", "content": "There has been a steady increase in the growth of our membership in Latin America—and in Brazil in particular—over the past few years. We currently have more than 800 Brazil-based members; some as individual members, but most are sponsored by another organization. 
As part of our LIVE Local program Chuck Koscher and I traveled to meet some of these members in Goiânia and Fortaleza, where we co-hosted events with Associação Brasileira de Editores Científicos do Brasil (ABEC Brasil)—one of our largest Sponsors.\nThese events always provide a great opportunity for us to update our members on new and upcoming Crossref developments. They are also an important way for us to discover more about the varied needs of our members’ communities and learn how we can work together better.\nThe LIVE Brazil events were attended by more than two hundred members and were held at the Universidade Federal de Goiás and the Universidade de Fortaleza respectively. Chuck and I enthusiastically demonstrated two new tools from Crossref— Participation Reports and Metadata Manager, we discussed our newest record types—preprints and peer review reports, and continually highlighted the importance (and the uses) of quality metadata.\nWe were joined by some fantastic guest speakers; Milton Shintaku from ABEC explained how to register content using the Crossref/OJS deposit plugin and Crossref ambassador, Edilson Damasio, spoke about Similarity Check and gave a demonstration of how to use the iThenticate interface when checking papers for originality.\nThe vibrant Q\u0026amp;A sessions reflected the varying needs of the audience. We talked generally about the different Crossref services and went more in-depth with discussions around submitting relationship metadata for peer review and preprints. Crossmark and its implementation was also a hot topic, as was how to benefit from Similarity Check—and in particular how to address cases of duplication in submitted manuscripts, and the setting up of plagiarism policies for each journal. There was also a lot of discussion around OJS integrations, and we were able to share that PKP/OJS is currently in the process of enhancing the Crossref/OJS integration, including the ability for publishers to deposit references.\nWe were also pleased to see so much interest in supplementing Crossref metadata with references, Similarity Check URLs, license information, etc. To address this we’re running a webinar in Brazilian Portuguese entitled: “Registering content and adding to your Crossref metadata in Portuguese” on 26th November. You can sign up here if you’d like to attend.\nI’d like to thank Universidade Federal de Goiás and the Universidade de Fortaleza for hosting the events, providing the venues and the translation team, and of course, thanks to everyone who came!\nA special mention of ABEC for their help in organizing and promoting the events. 
As a Sponsor, they relieve our team of an intense amount of technical support, billing, and other administrative burdens, saving us time and expense, while offering a localized service to Brazilian publishers.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/amy-bosworth/", "title": "Amy Bosworth", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/its-not-about-the-money-money-money./", "title": "It’s not about the money, money, money.", "subtitle":"", "rank": 1, "lastmod": "2018-10-18", "lastmod_ts": 1539820800, "section": "Blog", "tags": [], "description": "But actually, sometimes it is about the money. As a not-for-profit membership organization that is obsessed with persistence, we have a duty to remain sustainable and manage our finances in a responsible way. Our annual audit is incredibly thorough, and our outside auditors and Board-based Audit committee consistently report that we’re in good shape.\nOur Membership \u0026amp; Fees committee regularly reviews both membership fees and Content Registration fees for a growing range of research outputs.", "content": "But actually, sometimes it is about the money. As a not-for-profit membership organization that is obsessed with persistence, we have a duty to remain sustainable and manage our finances in a responsible way. Our annual audit is incredibly thorough, and our outside auditors and Board-based Audit committee consistently report that we’re in good shape.\nOur Membership \u0026amp; Fees committee regularly reviews both membership fees and Content Registration fees for a growing range of research outputs. Together with our staff, the Board regularly reviews financial projections that inform our budgeting process and approve our budget each year.\nFinancial sustainability means the persistence of our infrastructure and services We run a tight ship here at Crossref. We have to. So it’s not ideal when we have to chase members and users for late payments, but it’s an important part of keeping the organization afloat, and keeping our dedicated service to scholarly communications running. And that’s my job at Crossref.\nWorking here for over six years now, I’ve seen a lot of development in our finance department. We strive as a team to always improve our communication with members and users to deliver the best ‘customer’ experience. To do this, we are always tweaking our processes to improve efficiency and accuracy, and welcome all feedback.\nHow the invoice schedule works Our annual membership invoices are sent out each January, and our Content Registration invoices are generated four times a year, each quarter. All invoices are emailed to the billing contact for your organization (please be sure to update us with any contact changes!) and have a due date of net 45 days. Our invoices now have a “pay now” link in the body of the email. This offers a faster and more convenient way for you to pay, simply by clicking on the link to our payment portal. You can also view invoices as PDFs in the payment portal. An important part of our accounting process is the automated invoice reminder schedule. 
There are three billing reminders we send by email:\nThe day immediately after the invoice due date; 21 days past the invoice due date; and 45 days past the invoice due date. We don’t want to see you go! We understand there are many factors that can make prompt payment a challenge for some people: international transfer delays or fees; funding for your publishing operations may end; change of contacts; problems receiving our emails.\nWhen an account is 90 days past due, a further email notifies you that your service is at risk of suspension. If an account is then suspended for non-payment it becomes at risk of being ‘terminated’. Once an account has been terminated, you will need to contact our membership specialist to rejoin Crossref. Please note that we send numerous notifications/reminders before suspension or termination takes place (we don’t want to see you go!). We can always be reached at billing@crossref.org for any invoice inquiries you may have.\nTips that work for other users There are some things you can do to speed-up or simplify payments:\nPay with a credit card, using our online payment portal. This is fast, convenient, and lower in fees Always reference an invoice number on the payment to ensure that it’s applied to your account efficiently Be sure to make billing@crossref.org a ‘safe’ email address, so that you receive our invoices and reminders Always keep us up-to-date with any contact changes at your organization, to ensure that we have accurate information for invoicing and other communication We recommend giving us a generic email address for your accounts payable team, such as accounts@publisher.com so that if somebody leaves that job, invoices can still get through. Thanks for working with us! Please let me know in the comments below if you have any feedback or additional tips for your fellow Crossref community members.\n", "headings": ["Financial sustainability means the persistence of our infrastructure and services","How the invoice schedule works","We don’t want to see you go!","Tips that work for other users"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/good-better-best.-never-let-it-rest./", "title": "Good, better, best. Never let it rest.", "subtitle":"", "rank": 1, "lastmod": "2018-10-16", "lastmod_ts": 1539648000, "section": "Blog", "tags": [], "description": "Best practices seem to be having a moment. In the ten years since the Books Advisory Group first created a best practice guide for books, the community beyond Crossref has developed or updated at least 17 best practice resources, as collected here by the Metadata 2020 initiative. (Full disclosure: I co-chair its Best Practices group.)\n", "content": "Best practices seem to be having a moment. In the ten years since the Books Advisory Group first created a best practice guide for books, the community beyond Crossref has developed or updated at least 17 best practice resources, as collected here by the Metadata 2020 initiative. (Full disclosure: I co-chair its Best Practices group.)\nBooks have been one of the fastest growing resource/record types at Crossref for some time, and best practices are just one of the Book Advisory Group\u0026rsquo;s efforts. Over the past ten years, the members of the books group have updated and added to the guide, and it’s now time for it to get some visibility, so we have added it to our website for easy reference.\nThese best practices are not documented for the sake of it. 
They have real value and can help guide internal conversations to evaluate current practices, for example. They can also play a role in making or changing policies, training staff and providing instructions to authors on citation formatting.\nHere are a few recent changes I’d like to highlight:\nA new section has been added that addresses books hosted on multiple platforms The section on versions, (including books in multiple formats) has been expanded and clarified A section on the use of DOIs in citations has been added It is neither final nor comprehensive, and never will be. Best practices by their very nature must evolve over time—and those with such a broad scope as books will inevitably lack some detail—but that’s all the more reason for the community to stay engaged. Looking ahead to future work from the group, chapter-level metadata is likely to get more attention.\nOver the past few years the Books Advisory Group, chaired with aplomb by Emily Ayubi of the American Psychological Association (APA), has spent a lot of time on Crossref initiatives, like Multiple Resolution and DOI display changes but also on broader industry topics like ORCID iDs for book authors, and the Books Citation Index.\nAs Emily’s term as chair comes to an end this year, we welcome Charles Watkinson of the University of Michigan as chair starting in 2019. The group meets next on 12 December when we will hear from Coko about Editoria and have a discussion about developing our new Metadata Manager Content Registration tool for books, and more.\nIf you want to share your thoughts on best practices or if you have other topics you’d like us to consider, please get in touch.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-manager-members-represent/", "title": "Metadata Manager: Members, represent!", "subtitle":"", "rank": 1, "lastmod": "2018-10-15", "lastmod_ts": 1539561600, "section": "Blog", "tags": [], "description": "Over 100 Million unique scholarly works are distributed into systems across the research enterprise 24/7 via our APIs at a rate of around 633 Million queries a month. Crossref is broadcasting descriptions of these works (metadata) to all corners of the digital universe.\n", "content": "Over 100 Million unique scholarly works are distributed into systems across the research enterprise 24/7 via our APIs at a rate of around 633 Million queries a month. Crossref is broadcasting descriptions of these works (metadata) to all corners of the digital universe.\nWhether you’re a publisher, institution, governmental agency, data repository, standards body, etc.: when you register and update your metadata with Crossref, you’re relaying it to the entire research enterprise. So make sure your publications are fully and accurately represented.\nMetadata Manager is here to help This year, we’ve released a new tool aimed to make this easier and give you, members, full control over your metadata. Presenting: Metadata Manager. 
It helps to:\nSimplify and streamline the Content Registration service, with a user-friendly interface Give you greater flexibility and control of metadata deposits Support users who are less familiar with XML Boost metadata quality, encourage cleaner and more complete metadata records Metadata Manager is available to all our members and the service providers they work with, providing assistance with a wide range of metadata-related tasks:\nRegular Content Registration conducted by journal staff, editors and service providers Registering corrections, retractions, or other editorial expressions of concern Matching references to their DOIs and registering them with the publication Adding metadata to existing records such as license and funding information, abstracts, or data citations Late-arriving editorial updates/corrections after initial publication Unexpected corrections to production hiccups Emergency editorial changes that affect publication record Accelerated registration for special pieces published outside of regular workflow Securely and efficiently transfer titles to another publisher as the authorized owner Issues arise all the time in the dynamic and challenging work of scholarly communications. Metadata Manager provides a fast and easy way to meet these head-on when broadcasting new content or updating existing content. Submissions through this tool are processed immediately upon submission (i.e., no queues!).\nThis new tool empowers our members to “represent” in the exhilarating thrum of data reaching our API users. At this moment in time, it only supports journals, but our development team is currently working hard to include the remaining record types.\nFeatures Here’s a smattering of highlights from the Metadata Manager feature list:\nAll metadata: easily adds any and all metadata, allowing publishers to add richness and depth to their records. Prevents rejected submissions: it ensures you have satisfied all the basic Content Registration requirements and points out any input errors. Expedited deposit: the Content Registration system processes each submission immediately, bypassing the deposit queue. Historic log: easy to read archive of all previous submissions. Effortless review: provides a clean, condensed view of metadata (invariably complicated and lengthy) to support human review of the content before submission. Aids members to follow best practices: checks for completeness and reminds users of the full breadth of metadata available for the article, volume/issue, and the journal itself. Full control over title transfers: no need to make these requests through our support channels. Complete the transfer at your convenience, directly through the system. For those of you that have looked at your own metadata contribution with the use of our new Participation Reports, you’ll find using Metadata Manager a quick and useful way to help you level-up your records.\nMembers, represent! We invite you to register and update your publications with Metadata Manager, relay the metadata fully and accurately to the entire research enterprise. Check out the comprehensive help documentation to find out how to set up your workspace and get started right away with your usual Content Registration login details.\nAs mentioned, we are continuing development, adding support for all remaining record types as well as enhancing existing features. The webDeposit form will remain available throughout this time. 
For journal publishers, give us a whirl and let us know if you see something missing or there’s a function that would improve your Content Registration experience!\n", "headings": ["Metadata Manager is here to help","Features","Members, represent!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/michael-parkin/", "title": "Michael Parkin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-12-with-europe-pmc/", "title": "Using the Crossref REST API. Part 12 (with Europe PMC)", "subtitle":"", "rank": 1, "lastmod": "2018-10-10", "lastmod_ts": 1539129600, "section": "Blog", "tags": [], "description": "As part of our blog series highlighting some of the tools and services that use our API, we asked Michael Parkin\u0026mdash;Data Scientist at the European Bioinformatics Institute\u0026mdash;a few questions about how Europe PMC uses our metadata where preprints are concerned.\n", "content": "As part of our blog series highlighting some of the tools and services that use our API, we asked Michael Parkin\u0026mdash;Data Scientist at the European Bioinformatics Institute\u0026mdash;a few questions about how Europe PMC uses our metadata where preprints are concerned.\nTell us a bit about Europe PMC Europe PMC is a knowledgebase for life science research literature and a platform for innovation based on the content, such as text mining. It contains 34.6 million abstracts and 5 million full-text articles. At Europe PMC we support the research community by developing tools for knowledge discovery, linking publications with underlying research data, and building infrastructure to support text and data mining. Our goal is to create a supportive environment around open access content and data, to maximise its reuse.\nWhat problem is your service trying to solve? Recent years have seen a dramatic increase in the popularity of preprints within life sciences literature. Preprints have been supported by Crossref since November 2016. In response to the rise in popularity, we have started indexing preprints alongside traditional journal publishing within Europe PMC. We expect this will:\nprovide another means to access and discover this emergent form of scholarly content help explore more transparently the role of preprints in the publishing ecosystem support their inclusion in processes such as grant reporting and credit attribution systems How do you use Crossref metadata? Europe PMC operates an open citation network that uses reference lists from our full-text content, supplemented with metadata supplied by the Crossref OAI-PMH API. The number of citations we retrieve from Crossref increased significantly in 2017 thanks to the efforts of the Initiative for Open Citations (I4OC) in improving awareness about sharing citation data.\nOur work to ingest preprints into Europe PMC, however, represents our first use of the Crossref REST API. We make a series of queries for each preprint provider, making use of the “posted-content”, “prefix” and (optionally) “has-abstract” filters. We intend to migrate to using the REST API for the majority of retrievals of Crossref content in due course.\nWhat metadata values do you make use of? 
Currently we make use of the following fields:\nposted as a publication date abstract DOI author for author given names and surnames title as the preprint title is-preprint-of to establish preprint –\u0026gt; article links How often do you extract/query metadata? We query the REST API daily, making use of the from-index-date filter and cursor pagination to insert new or modify existing records. This means that preprints will be available in Europe PMC within 24 hours of the metadata being sent to Crossref. We store the full REST response in MongoDB, a document-based database. Here are some examples of Crossref API queries used for the preprint provider PeerJ Preprints (a scripted sketch of this pattern appears further below):\ncalling `https://0-api-crossref-org.libus.csd.mu.edu/works?filter=type:posted-content,has-abstract:true,from-index-date:2018-07-29,prefix:10.7287\u0026amp;sort=updated\u0026amp;rows=1000\u0026amp;cursor=*` calling `https://0-api-crossref-org.libus.csd.mu.edu/works?filter=type:posted-content,has-abstract:true,from-index-date:2018-07-29,prefix:10.7287\u0026amp;sort=updated\u0026amp;rows=1000\u0026amp;cursor=AoN4ldf88uQCe6e1g%2FPkAj8SaHR0cDovL2R4LmRvaS5vcmcvMTAuNzI4Ny9wZWVyai5wcmVwcmludHMuMjcwNjJ2MQ%3D%3D` Done importing PeerJ Preprints modified: 2 inserted: 10 What do you do with the metadata? From the database we parse out the relevant fields and pass them to our main relational database prior to indexing. This makes the preprint abstracts available to all of the value-added services we offer for peer-reviewed abstracts, such as citations, grants, ORCID claiming, text mining, etc. We assign a unique persistent identifier comprising “PPR” followed by a number (1) to each preprint record.\nThis is displayed on the Europe PMC site as an abstract record, analogous to PubMed records, but with an obvious banner (2) indicating to readers the preprint designation; a tooltip provides further explanation of what a preprint is in comparison to a peer-reviewed article.\nOnce available on the Europe PMC platform, we then apply downstream processes including:\nproviding an Unpaywall link directly to the full-text (3); adding a hyperlink to the final published version (if there is one that we can detect) (4); incorporating the preprint into our citation network (5); adding useful links to e.g. alternative metrics, scientific comments and peer reviews, underlying research data in life science databases (6); providing text mined annotations via SciLite (7); including funding information (8); displaying ORCID claims in the author list (9). What are the future plans for Europe PMC and preprints? The inclusion of preprints within Europe PMC is of immediate benefit to researchers who want to explore the very latest research. Moreover, we see this as an opportunity for both ourselves and the community to explore how preprints fit into the wider publishing ecosystem; for example to answer questions such as: How often will they be cited? How will they be linked to grant funding and other credit systems? How will they be reused?\nWhat else would you like our API to do? The REST API and rich metadata model provided by Crossref around preprints are both excellent, but the population of the metadata fields by preprint providers can be limited and/or heterogeneous. 
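Returning to the daily harvesting pattern described under "How often do you extract/query metadata?": the following is a minimal sketch of cursor pagination against the Crossref REST API. It assumes the `requests` library and uses the canonical api.crossref.org host; the per-record handling is a placeholder rather than Europe PMC's actual pipeline.

```python
import requests

BASE = "https://api.crossref.org/works"
params = {
    "filter": "type:posted-content,has-abstract:true,from-index-date:2018-07-29,prefix:10.7287",
    "sort": "updated",
    "rows": 1000,
    "cursor": "*",  # "*" starts a new deep-paging cursor session
}

while True:
    message = requests.get(BASE, params=params).json()["message"]
    items = message["items"]
    if not items:
        break  # an empty page means the cursor is exhausted
    for record in items:
        # Placeholder: upsert the full JSON record into a document store, keyed by record["DOI"]
        pass
    params["cursor"] = message["next-cursor"]  # resume from the end of the previous page
```

Run once a day with an updated from-index-date value, this picks up newly indexed and modified preprint records, which is the behaviour described in the interview above.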
The key challenge we see is in encouraging providers to populate the Crossref metadata fields more fully and in a uniform manner.\nThanks to Michael.\nIf you\u0026rsquo;d like to share how you use our Metadata APIs please contact the Community team.\n", "headings": ["Tell us a bit about Europe PMC","What problem is your service trying to solve?","How do you use Crossref metadata?","What metadata values do you make use of?","How often do you extract/query metadata?","What do you do with the metadata?","What are the future plans for Europe PMC and preprints?","What else would you like our API to do?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/membership/join-thank-you/", "title": "Thank you for your application", "subtitle":"", "rank": 1, "lastmod": "2018-10-06", "lastmod_ts": 1538784000, "section": "Become a member", "tags": [], "description": "Thanks for applying - you\u0026rsquo;re on your way! Here\u0026rsquo;s what happens next: We\u0026rsquo;ll take a look at your application within two working days. We may need to come back to you with a couple of questions. We then send you a pro-rated membership order (this is basically an invoice) for your first year of membership.\nOnce our billing team confirms that this has been paid, we\u0026rsquo;ll send you your new Crossref DOI prefix and login details within two working days.", "content": "Thanks for applying - you\u0026rsquo;re on your way! Here\u0026rsquo;s what happens next: We\u0026rsquo;ll take a look at your application within two working days. We may need to come back to you with a couple of questions. We then send you a pro-rated membership order (this is basically an invoice) for your first year of membership.\nOnce our billing team confirms that this has been paid, we\u0026rsquo;ll send you your new Crossref DOI prefix and login details within two working days. Once you\u0026rsquo;ve received these, you can start registering your content straight away. Don\u0026rsquo;t forget that the membership order only covers your membership - there are also charges for the content you register, which will be invoiced quarterly in arrears.\nPlease note: If you are eligible for the GEM programme, we won’t send you a membership order when you first join, and we won\u0026rsquo;t send you any further membership invoices either. You also won’t receive invoices for content registration.\nImportant Our replies come from the email address member@crossref.org. 
Please add this email address to your contacts list or add it to your safe senders list to make sure that you receive our replies.\nPlease make sure you have read and understood: The payment schedule you\u0026rsquo;re agreeing to The member terms you\u0026rsquo;re agreeing to If you or your billing team has more questions about the future billing schedule, you can read more on our website.\nWe\u0026rsquo;re looking forward to your participation in our community!\nPlease contact our membership team if you have any questions in the meantime.\n", "headings": ["Thanks for applying - you\u0026rsquo;re on your way!","Here\u0026rsquo;s what happens next:","Important","Please make sure you have read and understood:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-wrap-up-of-the-crossref-blog-series-for-scielo/", "title": "A wrap up of the Crossref blog series for SciELO", "subtitle":"", "rank": 1, "lastmod": "2018-10-05", "lastmod_ts": 1538697600, "section": "Blog", "tags": [], "description": "Crossref member SciELO (Scientific Electronic Library Online), based in Brazil, celebrated two decades of operation last week with a three-day event The SciELO 20 Years Conference.\n", "content": "Crossref member SciELO (Scientific Electronic Library Online), based in Brazil, celebrated two decades of operation last week with a three-day event The SciELO 20 Years Conference.\nThe celebration constituted an important landmark in SciELO’s evolution, and an exceptional moment for them to promote the advancement of an inclusive, global approach to scholarly communication and to the open access movement.\nAs part of the anniversary activities SciELO asked us to write a series of five blogs that would help the organizations of Brazil to better understand the following:\nWhy all articles should have a DOI The critical role of the DOI The basics of record types, translations, preprints, Crossmark, and more The basics of Crossref sponsorship, and How to make the most of your Crossref membership Below you’ll find an abstract of each of these blog posts as well as a link to the published posts in Brazilian Portuguese, Spanish and English.\nWhy all articles should have a DOI In today’s world, an author’s work needs a Digital Object Identifier (DOI) for it to become discoverable, citable, and linkable. This unique alphanumeric string identifies the content of a research work, and remains associated with it irrespective of changes to its web location. Discover the origins of the DOI, how Crossref was founded, and why they continue to exist and persist.\nRead the full blog in Brazilian Portuguese, Spanish, or English\nThe critical role of the DOI Find out why URL links to research articles are fragile, and how DOIs are essential in building stable, persistent links between research objects. This is achieved through the metadata that members deposit with Crossref, as part of their obligations. Learn how we can all contribute to creating a global, robust research record.\nRead the full blog in Spanish or English\nThe basics of record types: Preprints, Crossmark, translations, and more What’s the difference between preprints and ahead of print? When should you use each; and, what are the DOI requirements? 
This article answers those questions and provides a basic overview of how to connect the metadata records of related record types, like translations.\nRead the full blog in Brazilian Portuguese, Spanish, or English\nThe basics of Crossref sponsorship There are many organizations that want to register content and benefit from the services Crossref provides, but may not be able to do so alone. These organizations use sponsors. Sponsors are organizations who publish on behalf of groups of smaller organizations. Nearly 650 of our 800 Brazilian members are represented by such a sponsor.\nRead the full blog in Brazilian Portuguese, Spanish, or English\nHow to make the most of your Crossref membership Since Crossref was founded in 2000, its member organizations have registered metadata and persistent identifiers (DOIs) for over 100 million content items. This information is used extensively by the research community—individuals and organizations—who need to find, cite, link and assess research outputs. As a SciELO member, the metadata you provide to Crossref when you register content is key to the discoverability of your journal content.\nRead the full blog in Brazilian Portuguese, Spanish, or English\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/doi/", "title": "DOI", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/sponsorship/", "title": "Sponsorship", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/translations/", "title": "Translations", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/data-citation-lets-do-this/", "title": "Data citation: let’s do this", "subtitle":"", "rank": 1, "lastmod": "2018-10-04", "lastmod_ts": 1538611200, "section": "Blog", "tags": [], "description": "Data citation is seen as one of the most important ways to establish data as a first-class scientific output. At Crossref and DataCite we are seeing growth in journal articles and other record types citing data, and datasets making the link the other way. Our organizations are committed to working together to help realize the data citation community’s ambition, so we’re embarking on a dedicated effort to get things moving.", "content": "Data citation is seen as one of the most important ways to establish data as a first-class scientific output. At Crossref and DataCite we are seeing growth in journal articles and other record types citing data, and datasets making the link the other way. 
Our organizations are committed to working together to help realize the data citation community’s ambition, so we’re embarking on a dedicated effort to get things moving.\nEfforts regarding data citation are not a new thing. One of the first large-scale initiatives to establish data citation as a standard academic practice was the FORCE11 Joint Declaration of Data Citation Principles (JDDCP) in 2014. This declaration was endorsed by over 100 organizations in the scholarly community as well as many individuals.\nFollowing this agreement on how data citation should be done, many projects emerged. Within FORCE11, the Data Citation Implementation Pilot brought together publishers and repositories to put data citation into practice and work on the implementation of the JDDCP. Within the context of the Research Data Alliance, a data-literature linking group started under the name of Scholix to establish a framework for exchanging information about the relationships between articles and datasets. The infrastructure building blocks now feed into projects such as Make Data Count and Enabling FAIR Data.\nProjects aside, if datasets are cited consistently and in a standard way, it will make it much easier for the research community to see links between different research outputs and work with these outputs. It also makes it much easier to count these citations, so that researchers can get credit for their data and the sharing of that data.\nThe underlying work has been done to create an infrastructure that will effectively support and disseminate information on data citation. Data citation is here today!\nDifferent organizations know how to handle data citations, and are starting to count these and make that information available in turn. This means that the only thing that’s needed is for people to actually cite data, and for this information to be captured and passed on. Some Crossref and DataCite members have already made great progress on this (see Melissa Harrison’s blog on what eLife is doing).\nThe goals of all the data citation projects can only be realized if you start doing data citation, and we know you’ll have questions about it…\nIn the coming months, we’ll be posting several blogs and organizing sessions to tell you how you can start doing data citation - if you’re attending FORCE2018 you can catch our joint workshop there. So stay tuned and please get in touch if you can’t wait, we’d love to help you get started!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/100000000-records-thank-you/", "title": "100,000,000 records - thank you!", "subtitle":"", "rank": 1, "lastmod": "2018-09-26", "lastmod_ts": 1537920000, "section": "Blog", "tags": [], "description": "100,000,000. Yes, it’s a really big number—and you helped make it happen. We’d like to say thank you to all our members, without your commitment and contribution we would not be celebrating this significant milestone. It really is no small feat.\n", "content": "100,000,000. Yes, it’s a really big number—and you helped make it happen. We’d like to say thank you to all our members, without your commitment and contribution we would not be celebrating this significant milestone. 
It really is no small feat.\nTo help put this number into context; the National Museum of China has just over 1 million artifacts, the British Library has around 25 million books, Napster has 40 million tracks, and Wikidata currently contains 50 million+ items.\nDigging into the 100 Million Within these 100 Million registered content records there are many different record types.\nAnd within these record types, more than 69 million records have full-text links, 31 million+ have license information and 3 million+ contain some kind of funding information. An overview of these and other Crossref vital statistics is available on our dashboard.\n100 Million—what does your contribution look like? Our recently-launched participation reports allow anyone to see the metadata Crossref has. It’s a valuable education tool for publishers, institutions and other service providers looking to understand the availability of the metadata they have registered with us.\nThrough an itemized dashboard Participation Reports allows you to monitor the metadata you are registering, even if this work is done by a third party or another department. You can see for yourself where your gaps are, and what you could improve upon. Next to each metadata element, there’s a short definition, letting you know more about it, and—crucially—what practical steps you can take to improve the score.\nThe dashboard provides the percentage counts across ten key metadata elements: References, ORCID iDs, Funder Registry IDs, Funding award numbers, Crossmark metadata, License URLs, Text-mining links, Similarity Check URLs, and Abstracts.\nAnd not only can you see your own metadata—the dashboard enables you to view the registered metadata of all our 11,076 members.\nHow are these 100 Million content records being used? Every service we provide is based on our metadata, and our APIs expose all of that metadata. Over the past year or so we have been collecting use cases from members that actively utilize the Metadata APIs and have turned these into a Metadata APIs blog series so that we can share these stories of how our metadata is used with the wider community.\nA big number. Even bigger ambitions. Gaps or errors in metadata are passed on to thousands of other services, which causes problems downstream and means we all suffer. So it makes sense for the metadata you deposit to be as accurate and complete as possible. The more elements there are to the metadata, the higher the chance of others finding and using the content. We aim to continually find effective ways to communicate this wider story around the importance of open infrastructure and metadata.\nOver the years we’ve made great progress in connecting information about researchers, their affiliations, grants, and research outputs. Imagine how much more powerful this information would be if supplemented by more comprehensive, accurate, and up-to-date metadata.\nSources - all data as of Sept 26, 2018\nNational Museum of China has 1,050,000 artifacts\nThe British Library has around 25 million books, more than any other library\nWikidata currently contains 50,290,632 items\nNAPSTER currently has 40 million tracks (Napster is known as Rhapsody in the US)\n", "headings": ["Digging into the 100 Million","100 Million—what does your contribution look like?","How are these 100 Million content records being used?","A big number. 
Even bigger ambitions."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/join-us-in-toronto-this-november-for-live18/", "title": "Join us in Toronto this November for LIVE18", "subtitle":"", "rank": 1, "lastmod": "2018-09-25", "lastmod_ts": 1537833600, "section": "Blog", "tags": [], "description": "LIVE18, your Crossref annual meeting, is fast approaching! We’re looking forward to welcoming everyone in Toronto, November 13-14.\n", "content": "LIVE18, your Crossref annual meeting, is fast approaching! We’re looking forward to welcoming everyone in Toronto, November 13-14.\nThis year’s theme “How good is your metadata?” centers around the definition and benefits of metadata completeness, and each half day will cover some element of the theme:\nDay one, AM Defining good metadata Day one, PM Improving metadata quality and completeness Day two, AM What does good metadata enable? Day two, PM Who is using our metadata and what are they doing with it? Both days will be packed with a mixture of plenary and interactive sessions. Speakers include:\nPatricia Cruse, DataCite Kristen Fisher Ratan, CoKo Foundation Stefanie Haustein, University of Ottawa Bianca Kramer, Utrecht University Shelley Stall, American Geophysical Union Ravit David, University of Toronto Libraries Graham Nott, Freelance developer of an eLife JATS conversion tool Paul Dlug, American Physical Society A ‘meet and mingle’ drinks reception will be held directly after the election results on day one.\nAbout the theme—how good is your metadata? The reach and usefulness of research outputs are only as good as how well they are described. Metadata is what is used to describe the story of research: its origin, its contributors, its attention, and its relationship with other objects.\nThe more machines start to do what humans cannot—parse millions of files through multiple views—the more we see what connections are missing, the more we start to understand the opportunities that better metadata can offer.\nLIVE18 will focus this year entirely on the subject of metadata. It touches everything we do, and everything that publishers, hosting platforms, funders, researchers, and libraries do.\nCome and join the discussions Register to join us this 13 and 14 November, at the Toronto Reference Library, 789 Yonge Street, Toronto, Canada—we look forward to seeing you there.\nRead more about our annual events\n", "headings": ["About the theme—how good is your metadata?","Come and join the discussions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/bastien-latard/", "title": "Bastien Latard", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-11-with-mdpi/scilit/", "title": "Using the Crossref REST API. 
Part 11 (with MDPI/Scilit)", "subtitle":"", "rank": 1, "lastmod": "2018-09-18", "lastmod_ts": 1537228800, "section": "Blog", "tags": [], "description": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Martyn Rittman and Bastien Latard who tell us about themselves, MDPI and Scilit, and how they use Crossref metadata.\n", "content": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Martyn Rittman and Bastien Latard who tell us about themselves, MDPI and Scilit, and how they use Crossref metadata.\nCan you give us a brief introduction yourselves, and to MDPI/Scilit Martyn is Publishing Services Manager at MDPI. He joined five years ago as an editor and has worked on editorial, production, and software projects. Prior to joining MDPI, he completed a PhD and worked as a postdoc. His research covered physical chemistry, biochemistry and instrument development. Bastien Latard is the project leader of Scilit. He created Scilit as part of his Master’s degree in 2013. He is now completing a PhD on the subject of semantically linking research articles, using data from Scilit.\nScilit was developed in 2014 by open access (OA) publisher MDPI with the goal of having a backup of metadata for all OA articles. Soon, Scilit became more general and embraced all articles with a digital object identifier (DOI) from Crossref and those with a Pubmed ID (PMID). After seeing the potential of the database and how it could be used in a number of different contexts, we decided to make it public. Recently, other article types, including preprints have been integrated. Our main goal now is to provide useful services to the research and academic publishing communities.\nWhat problem is your service trying to solve? Other indexing databases offer paid access, are highly selective, or host documents apart from research articles. We want to offer a comprehensive database, but also one that clearly identifies open access material. The last part is still a work in progress, but we have made good progress recently.\nTo make the access as direct as possible, we have recently integrated several OA aggregators that pick up or host free versions of full-text articles, including CORE, Unpaywall, and PubMed Central.\nCan you tell us how you are using the Crossref Metadata API at MDPI/Scilit? Scilit queries Crossref’s API in order to index metadata for single articles. DOIs are a key part of the system; because they are standards, we can use them to merge new sources into Scilit while avoiding duplicates. We cross-check the data from Crossref against other sources and update it as necessary. Citation data is also really appreciated and opens doors to further developments.\nAs a publisher, MDPI makes daily deposits to Crossref, to register journal articles on mdpi.com, conference papers from sciforum.net, and preprints from Preprints.org. We also use the data collected at Scilit to find suitable reviewers and let authors know when their work has been cited.\nWhat metadata values do you pull from the API? As much as we can! Scilit crawls the latest indexed articles every few hours to ensure it is as up-to-date as possible. This is the most important function of our system because it provides metadata for the very latest published articles, including a link to the publisher version. Scilit parses Crossref metadata and saves them. They are then indexed into our solr search engine for fast, real-time usage.\nHave you built your own interface to extract this data? 
We wrote our own code to get the data, but the API interface made this very straightforward. Scilit has been developed completely in-house by MDPI and the lead developer, Bastien Latard, is currently completing a PhD looking at how to make the most of the data using semantic data extraction.\nWhat are the future plans for MDPI/Scilit? Scilit is and will be highly used in MDPI current and future projects. We have a few ideas about how to improve Scilit. We are, for example, implementing a scientific profile networking service, which will allow scholars to build their own (scientific) network with lots of functionalities. We think that it will be a really good place to search, comment, exchange around articles… maybe even more!\nWhat else would you like to see the REST API offer? Crossref is already doing a great job, especially with its integrated citation data. Maybe further analysis and mapping of data about organizations and institutions would be an improvement.\nThank you Martin and Bastien. If you\u0026rsquo;d like to share how you use the Crossref Metadata APIs please contact the Community team.\n", "headings": ["Can you give us a brief introduction yourselves, and to MDPI/Scilit","What problem is your service trying to solve?","Can you tell us how you are using the Crossref Metadata API at MDPI/Scilit?","What metadata values do you pull from the API?","Have you built your own interface to extract this data?","What are the future plans for MDPI/Scilit?","What else would you like to see the REST API offer?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/where-does-publisher-metadata-go-and-how-is-it-used/", "title": "Where does publisher metadata go and how is it used?", "subtitle":"", "rank": 1, "lastmod": "2018-09-17", "lastmod_ts": 1537142400, "section": "Blog", "tags": [], "description": "Earlier this week, colleagues from Crossref, ScienceOpen, and OPERAS/OpenEdition joined forces to run a webinar on “Where does publisher metadata go and how is it used?”.\n", "content": "Earlier this week, colleagues from Crossref, ScienceOpen, and OPERAS/OpenEdition joined forces to run a webinar on “Where does publisher metadata go and how is it used?”.\nStephanie Dawson explained how ScienceOpen’s freely-accessible, interactive search and discovery platform works by connecting and exposing metadata from Crossref. Her case study showed that articles with additional metadata had much higher average views than those without - depositing richer metadata helps you get the best value from your DOIs!\nPierre Mounier of OPERAS/OpenEdition showed us how a variety of persistent identifiers (PIDs) including DOIs, ORCID iDs, and Funder Registry IDs have been used on OA book platforms to improve citations, author attribution, and tracking of funding. 
He described a forthcoming annotations project with Hypothes.is, and explained how Crossref metadata is being used in both usage and alternative metrics.\nFive ways to register content with Crossref My overview of Content Registration outlined the five ways to register content with Crossref:\nVia the manual web deposit form Through Crossref’s new Metadata Manager tool (beta) With OJS’s Crossref plugin - more information here (see OJS downloads; Version 3.1.0 and above is the best option for supporting the fullest Crossref metadata) With a manual XML upload file Or, using HTTPS to POST XML I also emphasized the importance of depositing, adding, and updating your metadata, and spoke about:\nBasic citation metadata: titles, author names, author affiliations, funding data, publication dates, issue numbers, page numbers, ISSNs, ISBNs\u0026hellip; Non-bibliographic metadata: reference lists, ORCID iDs, license data, clinical trial information, abstracts, relationships\u0026hellip; Crossmark: errata, retractions, updates, and more How important it is to have accurate, clean, and complete metadata The importance of registering your backfiles How to see the metadata you have Anna Tolwinska, Crossref’s Member Experience Manager, gave us an overview of the new Participation Reports tool. She explained how Participation Reports allows anyone to see the metadata Crossref members have registered with us, and how you can see for yourself where the gaps in your metadata are, and—importantly—how you can improve your coverage.\nWhat we learnt There are 10 key metadata elements or checks in Participation Reports that aid in Crossref members’ content discoverability, reproducibility and research integrity: References ~Open References~ [EDIT 6th June 2022 - all references are now open by default]. ORCID iDs Funder Registry IDs Funding award numbers Crossmark metadata Text mining URLs License URLs Similarity Check URLs Abstracts Every day, research organizations around the world rely on metadata from Crossref, and use it in a variety of systems. Here are a few examples. Many organizations that enable research depend on Crossref’s metadata; we received over 650 million queries just last month Crossref members should check Participation Reports to see what percentage of their content includes rich metadata If the percentages are low, Crossref is happy to work with you to help understand and improve your coverage Richer metadata helps research to be found, cited, linked to, assessed, and reused To make sure your work can be found!
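If you would rather pull figures like these programmatically than through the Participation Reports pages, the public REST API's /members route exposes per-member coverage data. The snippet below is a minimal sketch only, assuming Python 3 with the requests library installed; the member ID and mailto address are placeholders to replace with your own.

```python
# Minimal sketch (not an official client): fetch the coverage figures that
# Participation Reports visualizes, via the REST API's /members route.
import requests

MEMBER_ID = 78  # placeholder member ID; substitute your own

resp = requests.get(
    f"https://api.crossref.org/members/{MEMBER_ID}",
    params={"mailto": "you@example.org"},  # identify yourself to the API
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["message"]

print(message["primary-name"])
# "coverage" holds fractions (0 to 1) for checks such as references, ORCID iDs,
# abstracts, licenses and award numbers, split into current and backfile content.
for check, fraction in sorted(message.get("coverage", {}).items()):
    print(f"{check}: {fraction:.0%}")
```

Nothing in this sketch is specific to Participation Reports; it is the ordinary /members route that any metadata user can query.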
Catch up with the webinar recording, and slides from Laura, Stephanie, Pierre, and Anna’s presentations, and please contact us if you have any questions.\n", "headings": ["Five ways to register content with Crossref","How to see the metadata you have","What we learnt"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/christine-buske/", "title": "Christine Buske", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-is-production-ready/", "title": "Event Data is production ready", "subtitle":"", "rank": 1, "lastmod": "2018-09-12", "lastmod_ts": 1536710400, "section": "Blog", "tags": [], "description": "We’ve been working on Event Data for some time now, and in the spirit of openness, much of that story has already been shared with the community. In fact, when I recently joined as Crossref’s Product Manager for Event Data, I jumped onto an already fast moving train—headed for a bright horizon.\n", "content": "We’ve been working on Event Data for some time now, and in the spirit of openness, much of that story has already been shared with the community. In fact, when I recently joined as Crossref’s Product Manager for Event Data, I jumped onto an already fast moving train—headed for a bright horizon.\nWhat’s on the horizon? Well, the reality is you never really reach the horizon. Good product development—in my opinion—is like that train. You keep aiming for the horizon and passing all the stations (milestones) along the way, but the horizon keeps moving as you add features, improve the service, and maybe even review where you are headed. However, for Event Data we are pleased to say we have now arrived at a rather important station.\nTechnical readiness Thank you to all the beta testers who have journeyed with us this far—we’ve listened and learned, refined and rebuilt with the help of your feedback. We are now thrilled to say that the service is production ready. We’ve reached the station called ‘technical readiness’, and are eager to see more users board our train!\nDuring this time of building and refining, Event Data has grown to include at least 66.7 million events from sources like (in order of magnitude): Wikipedia, Cambia Lens, Twitter, DataCite, F1000, Newsfeeds, Reddit links, Wordpress.com, Crossref, Reddit, Hypothesis, and Stackexchange. Wikipedia alone accounts for 50 million events (and counting).\nWhat does this mean? Event Data is production ready.\nBeing production ready means we are not going to make any breaking changes to the code, and we are excited to see more people jump on board to explore where you can go with Event Data, and what product or service you might want to build with it.\nGetting started Having a look at Event Data, and using it, is easy.
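Before the sample queries below, here is an equally small sketch in Python (assuming the requests library; the DOI and contact address are placeholders) that asks the Event Data query API for every event pointing at a single content item:

```python
# Minimal sketch: list events whose object is a given DOI, using the
# Event Data query API. The DOI and mailto address below are placeholders.
import requests

doi = "10.5555/12345678"  # placeholder DOI; replace with one you care about

resp = requests.get(
    "https://api.eventdata.crossref.org/v1/events",
    params={
        "obj-id": f"https://doi.org/{doi}",
        "mailto": "you@example.org",  # identify yourself to the API
        "rows": 100,
    },
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["message"]

print("total events:", message["total-results"])
for event in message["events"]:
    # each event records its source, the relation asserted, and the subject
    print(event["source_id"], event["relation_type_id"], event["subj_id"])
```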
While the user guide outlines everything you need to know to get fully engrossed, you can get your feet wet with a few sample queries:\nAbove I mentioned Event Data has about 50 million Wikipedia events, you can check if that has grown by looking at a query that lists all distinct events by source (your browser will need a JSON viewer extension):\nhttps://api.eventdata.crossref.org/v1/events/distinct?facet=source/:*\u0026amp;rows=0\nYou can also see a live stream of events going through Event Data.\nFor all events registered for a specific content item, you simply query http://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?obj-id=https://0-doi-org.libus.csd.mu.edu/XXX, where XXX is replaced with the DOI.\nWhat next? We are now focusing on the final stretch towards the official roll-out. Beyond this, we will continue to add sources and features and have a healthy roadmap to keep us on track. We value any feedback you have for us about your own journey with Event Data. Your feedback may help shape the direction we take in the future. Most of all, we are all excited to see what people build with it!\nWe look forward to continuing on our Event Data journey and we welcome you all aboard the train! Please contact me with your ideas.\n", "headings": ["Technical readiness","What does this mean?","Getting started","What next?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-at-the-frankfurt-book-fair-2018/", "title": "Crossref at the Frankfurt Book Fair 2018", "subtitle":"", "rank": 1, "lastmod": "2018-09-11", "lastmod_ts": 1536624000, "section": "Blog", "tags": [], "description": "How good is your metadata? Find out at the Frankfurt Book Fair\u0026hellip; At the Frankfurt Book Fair this year (Hall 4.2, Stand M82), the Crossref team will be on hand to give you a personal tour of our new Participation Reports tool. Or join us at The Education Stage to hear about how this new tool can help you view, evaluate and improve your metadata participation.\n", "content": "How good is your metadata? Find out at the Frankfurt Book Fair\u0026hellip; At the Frankfurt Book Fair this year (Hall 4.2, Stand M82), the Crossref team will be on hand to give you a personal tour of our new Participation Reports tool. Or join us at The Education Stage to hear about how this new tool can help you view, evaluate and improve your metadata participation.\nHow good is your metadata? Join us Thursday 11th October at 15.30 at the Education Stage in Hall 4.2 to find out Lots of reasons to visit our stand We’ll be located in the same place as last year, Hall 4.2, Stand M82, and there are lots of reasons to visit us:\nGet your metadata participation evaluated - Anna Tolwinska and Amanda Bartell will walk you through your own Participation Report and provide guidance on how to improve your results. Discover how complete your metadata is, where the gaps are, and how other publishers compare.\nDiscuss a technical issue that’s hindering your metadata participation (or any other technical issue) with Isaac Farley and Paul Davis from our Technical Support team.\nJennifer Kemp will also be around to answer all your metadata use and reuse questions. She’s looking forward to chatting with all kinds of service providers and toolmakers.\nOn the strategy side, Ginny Hendricks will be there on Wednesday 10th if you’d like to discuss any policy stuff, new ideas, or find out what Crossref is planning next.\nAsk us anything Not just Participation Reports—you can ask us about anything. 
Perhaps about our newer record types such as preprints, pending publications (i.e. DOIs on acceptance), or data citations. Or, ask us how you can:\nAdvance scholarly pursuits for the benefit of society, through Metadata 2020 Check papers for originality, with our service for editorial rigour, through Similarity Check Discover where and how research is being discovered, through Event Data Reveal who is citing your published papers and how platforms can display this information, with our Cited-by service Provide evidence of trust in published outputs, revealing updates, corrections and retractions, through our Crossmark service Let us know if you’d like to book in a meeting with one of us, or do just stop by the stand to say “Guten Tag”.\nWe look forward to seeing you there - bis dann!\n", "headings": ["How good is your metadata? Find out at the Frankfurt Book Fair\u0026hellip;","Lots of reasons to visit our stand","Ask us anything"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/presenting-pidapalooza-2019/", "title": "Presenting PIDapalooza 2019", "subtitle":"", "rank": 1, "lastmod": "2018-08-28", "lastmod_ts": 1535414400, "section": "Blog", "tags": [], "description": "PIDapalooza, the open festival of persistent identifiers is back and it’s better than ever. Mark your calendar for Dublin, Ireland, January 23-24, 2019 and send us your session ideas by September 21.\n", "content": "PIDapalooza, the open festival of persistent identifiers is back and it’s better than ever. Mark your calendar for Dublin, Ireland, January 23-24, 2019 and send us your session ideas by September 21.\nYes, it’s back and \u0026ndash; with your support \u0026ndash; it’s going to be better than ever! The third annual PIDapalooza open festival of persistent identifiers will take place at the Griffith Conference Centre, Dublin, Ireland on January 23-24, 2019 - and we hope you’ll join us there!\nHosted, once again, by California Digital Library, Crossref, DataCite, and ORCID, PIDapalooza will follow the same format as past events \u0026ndash; rapid-fire, interactive, 30-60 minute sessions (presentations, discussions, debates, brainstorms, etc.) presented on three stages \u0026ndash; plus main stage attractions, which will be announced shortly. New for this year is an unconference track, as suggested by several attendees last time.\nIn the meantime, get those creative juices flowing and send us your session PIDeas! What would you like to talk about? Hear about? Learn about? What’s important for your organization and your community and why? What’s working and what’s not? What’s needed and what’s missing? We want to hear from as many PID people as possible! Please use this form to send us your suggestions. The PIDapalooza Festival Committee will review all forms submitted by September 21, 2018 and decide on the lineup by mid-October.\nAs a reminder, the regular themes are:\nPID myths: Are PIDs better in our minds than in reality? PID stands for Persistent IDentifier, but what does that mean and does such a thing exist?\nPIDs forever - achieving persistence: So many factors affect persistence: mission, oversight, funding, succession, redundancy, governance. Is open infrastructure for scholarly communication the key to achieving persistence?\nPIDs for emerging uses: Long-term identifiers are no longer just for digital objects. We have use cases for people, organizations, vocabulary terms, and more. 
What additional use cases are you working on?\nLegacy PIDs: There are thousands of venerable old identifier systems that people want to continue using and bring into the modern data citation ecosystem. How can we manage this effectively?\nBridging worlds: What would make heterogeneous PID systems \u0026lsquo;interoperate\u0026rsquo; optimally? Would standardized metadata and APIs across PID types solve many of the problems, and if so, how would that be achieved? What about standardized link/relation types?\nPIDagogy: It’s a challenge for those who provide PID services and tools to engage the wider community. How do you teach, learn, persuade, discuss, and improve adoption? What\u0026rsquo;s it mean to build a pedagogy for PIDs?\nPID stories: Which strategies worked? Which strategies failed? Tell us your horror stories! Share your victories!\nKinds of persistence: What are the frontiers of \u0026lsquo;persistence\u0026rsquo;? We hear lots about fraud prevention with identifiers for scientific reproducibility, but what about data papers promoting PIDs for long-term access to reliably improving objects (software, pre-prints, datasets) or live data feeds? We’ll be posting more information on the PIDapalooza website over the coming months, as well as keeping you updated on Twitter (@pidapalooza).\nIn the meantime, what are you waiting for!? Book your place now \u0026ndash; and we also strongly recommend that you book your accommodation early as there are other big conferences in Dublin that week.\nPIDapalooza, Dublin, Ireland, January 23-24, 2019 - it’s a date!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/karthik-ram/", "title": "Karthik Ram", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/leaving-the-house-where-preprints-go/", "title": "Leaving the house - where preprints go", "subtitle":"", "rank": 1, "lastmod": "2018-08-21", "lastmod_ts": 1534809600, "section": "Blog", "tags": [], "description": "“Pre-prints” are sometimes neither Pre nor Print (c.f. https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.11408.1), but they do go on and get published in journals. While researchers may have different motivations for posting a preprint, such as establishing a record of priority or seeking rapid feedback, the primary motivation appears to be timely sharing of results prior to journal publication.\nSo where in fact do preprints get published?", "content": "“Pre-prints” are sometimes neither Pre nor Print (c.f. https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.11408.1), but they do go on and get published in journals. While researchers may have different motivations for posting a preprint, such as establishing a record of priority or seeking rapid feedback, the primary motivation appears to be timely sharing of results prior to journal publication.\nSo where in fact do preprints get published? Although this is a simple question, we have not had an easy way to answer how this varies across disciplines, preprint repositories and journals. Until now.
Crossref metadata provides not only an open and easy way to do so, but up-to-date data to get the latest results.\nrOpenSci makin\u0026rsquo; it sweet \u0026amp; easy Crossref asks preprint repositories to update their metadata once a preprint has been published by adding the article link into its record via the “is-preprint-of” relation. As the record is processed, we make the link available going both directions, while preserving the provenance of the statement in the metadata output (\u0026ldquo;asserted-by\u0026rdquo;: \u0026ldquo;subject\u0026rdquo; or \u0026ldquo;asserted-by\u0026rdquo;: \u0026ldquo;object\u0026rdquo;). This results in bidirectional assertions in the Crossref REST API where search engines, analytics providers, indexes, etc. can get from the preprint to the article (“is-preprint-of”) as well as vice versa (“has-preprint”), making it easier to find, cite, link, assess, and reuse.\nUsing rOpenSci’s R library for the Crossref REST API (rcrossref), we pulled all articles connected to a previous preprint (https://0-api-crossref-org.libus.csd.mu.edu/works?filter=relation.type:has-preprint\u0026facet=publisher-name:\u0026amp;rows=0) and then aggregated them based on journal via their ISSNs (https://0-api-crossref-org.libus.csd.mu.edu/works?filter=relation.type:has-preprint\u0026facet=issn:), tallying the results in a tidy table with the journal name (ex: PLOS Biology (https://0-api-crossref-org.libus.csd.mu.edu/journals/2167-8359)).\nThe big reveal So without further delay, let’s look at the results of the 20 journals with the highest number of preprints associated with their articles (data from August 21, 2018):\nPublisher | Journal | Count\nPeerJ | PeerJ | 1184\nSpringer Nature | Scientific Reports | 394\neLife | eLife | 375\nPLOS | PLOS ONE | 338\nProceedings of the National Academy of Sciences | PNAS | 205\nPLOS | PLOS Computational Biology | 196\nSpringer Nature | Nature Communications | 187\nPLOS | PLOS Genetics | 169\nThe Genetics Society of America | Genetics | 168\nOxford University Press | Nucleic Acids Research | 148\nOxford University Press | Bioinformatics | 138\nThe Genetics Society of America | Genetics | 120\nThe Genetics Society of America | G3: Genes, Genomes, Genetics | 104\nCold Spring Harbor Laboratory | Genome Research | 104\nOxford University Press | Molecular Biology and Evolution | 100\nMDPI AG | Energies | 98\nMDPI AG | Sensors | 96\nSpringer Nature | BMC Genomics | 92\nMDPI AG | International Journal of Molecular Sciences | 86\nJMIR Publications | Journal of Medical Internet Research | 83\nThis list has not been normalized or weighted based on the size of the journal. The following observations are informed speculations, as we can only infer so much from the raw data: Disciplinary practice: This phenomenon where preprints are a part of disciplinary practice accounts for about half of the journals represented on the list. Certain communities such as genetics and computational fields have been early adopters of preprints. As such, we see higher rates of preprint-to-article publication in journals that publish their work. Partnerships: Partnerships that facilitate submission from the preprint repository directly to a publisher or peer review service (ex: BioRxiv B2J program) make it easier for researchers to move from preprint-sharing seamlessly to submitting their journal article manuscript. Tie-ins: A quarter of the journals on the list are run by publishers with a preprint service, and have been able to tie together both arms of publishing.
This removes barriers to journal article submission in the same manner as integrations between repositories and publishers, but does so as a single party. Publisher support and treatment: We also see that strong proponents and early partners of preprint repositories tend to have higher counts. Some publishers have been more outspoken in their welcome of preprints, such as PNAS. Sometimes this support also comes in the form of special treatment. In the process of crafting editorial policy on publishing results previously posted in a preprint, some journals have carved out particular affordances in their publication workflow and content delivery streams that may contribute to the higher counts of articles. For example, Nature Research displays the preprints of submitted articles under consideration: https://0-nature--research--under--consideration-nature-com.libus.csd.mu.edu/. Mega-journals: Mega-journals such as Scientific Reports and PLOS ONE have not discouraged preprints. As such, and due to the size of their publication output, they have easily found a place among the higher counts on the list. Taking a closer look One major consideration in these results, concerns what’s missing in the data. These fall into two camps: incomplete member data, and incomplete membership coverage.\nWe have been working with our members to deposit preprints using the proper record type, and to provide links to published articles in their metadata. However, not all have yet done so (ex: SSRN), leading to holes in our research nexus graph, which subsequently detracts from the completeness of the data.\nWe celebrate the preprint repositories who are required to update their metadata when an article is published from a preprint, thereby populating the map with critical bridges between preprints and articles. Crossref participation benefits not only the content owner, but the membership at large and all the systems across the research ecosystem powered by Crossref metadata.\nLastly, this data is dependent on the coverage of preprint repositories who register content with us. We are thrilled that Center for Open Science, our newest preprints addition who represents 21 community repositories, has recently filled in swaths of the map. But there remain dead zones in the research graph from repositories who are not Crossref members (ex: ArXiv). Their disciplines, as a result, are under represented in these results.\nEveryone dive in! As to the question of “where do preprints get published?”, anyone in fact can answer this question based on the metadata Crossref collects and provides to the community as an open infrastructure provider. We encourage the community to explore and analyze the data further with other available datasets to glean more insights on how scholarly communications is changing with the increasing growth of preprints. For example, the effective results across all journals represented can be weighted based on the number of articles published by each journal.\nCrossref data is open for all to examine and reuse through our REST API. 
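If R and rcrossref are not part of your toolkit, the facet queries described above can be reproduced with plain HTTP calls against the REST API. Here is a minimal sketch in Python (requests library assumed; the mailto address is a placeholder) that counts works carrying a has-preprint relation, grouped by publisher:

```python
# Minimal sketch: count works that declare a "has-preprint" relation,
# grouped by publisher, mirroring the facet query quoted earlier in this post.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "filter": "relation.type:has-preprint",
        "facet": "publisher-name:*",
        "rows": 0,  # facet counts only; we do not need the records themselves
        "mailto": "you@example.org",  # placeholder; identify yourself to the API
    },
    timeout=60,
)
resp.raise_for_status()
message = resp.json()["message"]

print("works with a has-preprint relation:", message["total-results"])
by_publisher = message["facets"]["publisher-name"]["values"]
for publisher, count in sorted(by_publisher.items(), key=lambda kv: -kv[1])[:20]:
    print(f"{count:6d}  {publisher}")
```

Swapping the facet for issn:* and looking each ISSN up against the /journals route is one way to arrive at the per-journal table above.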
Please dive in and share your findings with us!\n", "headings": ["So where in fact do preprints get published?","rOpenSci makin\u0026rsquo; it sweet \u0026amp; easy","The big reveal","Taking a closer look","Everyone dive in!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2018-election-slate/", "title": "2018 election slate", "subtitle":"", "rank": 1, "lastmod": "2018-08-17", "lastmod_ts": 1534464000, "section": "Blog", "tags": [], "description": "With Crossref developing and extending its services for members and other constituents at a rapid pace, it’s an exciting time to be on our board. We received 26 expressions of interest this year, so it seems our members are also excited about what they could help us achieve.\n", "content": "With Crossref developing and extending its services for members and other constituents at a rapid pace, it’s an exciting time to be on our board. We received 26 expressions of interest this year, so it seems our members are also excited about what they could help us achieve.\nFrom these 26, the Nominating Committee has put forward the following slate.\nThe 2018 slate: seven candidates for five available seats African Journals OnLine (AJOL), Susan Murray, South Africa American Psychological Association (APA), Jasper Simons, USA Association for Computing Machinery (ACM), Scott Delman, USA California Digital Library (CDL), Catherine Mitchell, USA Hindawi, Paul Peters, UK Sage, Richard Fidczuk, USA Wiley, Duncan Campbell, USA Read the candidates’ organizational and personal statements Candidates were chosen based on the following criteria:\nFollow the guidance from the Board to provide a slate of seven or fewer. Maintain the current balance of the board with respect to size of organizations. Improve balance in other areas, with respect to gender and geography. Also consider types of organizations and sector, as well as engagement with Crossref and its services. You can be part of this important process, by voting in the election If your organization is a member of Crossref on September 14, 2018 you are eligible to vote when voting opens on September 28, 2018 (affiliates, however, are not eligible to vote).\nHow can you vote? On September 28, 2018, your organization’s designated voting contact will receive an email with a link to the formal Notice of Meeting and Proxy Form with concise instructions on how to vote. An additional email will be sent with a username and password along with a link to our online voting platform. It is important to make sure your voting contact is up-to-date.\nWant to add your voice? We are accepting independent nominations until November 7, 2018. Organizations interested in standing as an independent candidate should contact me by this date with a list of ten other Crossref members that endorse their candidacy.\nThe election itself will be held at LIVE18 Toronto, our annual meeting, on 13 November 2018 in Canada.
We hope you’ll be there to hear the results.\n", "headings": ["The 2018 slate: seven candidates for five available seats","Read the candidates’ organizational and personal statements","You can be part of this important process, by voting in the election","How can you vote?","Want to add your voice?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/david-sommer/", "title": "David Sommer", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-10-with-kudos/", "title": "Using the Crossref REST API. Part 10 (with Kudos)", "subtitle":"", "rank": 1, "lastmod": "2018-08-13", "lastmod_ts": 1534118400, "section": "Blog", "tags": [], "description": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to David Sommer, co-founder and Product Director at the research dissemination management service, Kudos. David tells us how Kudos is collaborating with Crossref, and how they use the REST API as part of our Metadata Plus service.\n", "content": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to David Sommer, co-founder and Product Director at the research dissemination management service, Kudos. David tells us how Kudos is collaborating with Crossref, and how they use the REST API as part of our Metadata Plus service.\nIntroducing Kudos ", "headings": ["Introducing Kudos","An example of a Kudos publication page showing the plain language summary","How is Crossref metadata used in Kudos?","What are the future plans for Kudos?","What else would Kudos like to see in Crossref metadata?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/peer-review-publications/", "title": "Peer review publications", "subtitle":"", "rank": 1, "lastmod": "2018-08-12", "lastmod_ts": 1534032000, "section": "Blog", "tags": [], "description": "Peer review publications\u0026mdash;not peer-reviewed publications, but peer reviews as publications Our newest dedicated record type\u0026mdash;peer review\u0026mdash;has received a warm welcome from our members since rollout last November. We are pleased to formally integrate them into the scholarly record, giving the scholars who participated credit for their work, ensuring readers and systems dependably get from the reviews to the article (and vice versa), and making sure that links to these works persist over time.", "content": "Peer review publications\u0026mdash;not peer-reviewed publications, but peer reviews as publications Our newest dedicated record type\u0026mdash;peer review\u0026mdash;has received a warm welcome from our members since rollout last November. We are pleased to formally integrate them into the scholarly record, giving the scholars who participated credit for their work, ensuring readers and systems dependably get from the reviews to the article (and vice versa), and making sure that links to these works persist over time.\nMany of our members make the peer review history available to researchers in different ways. Their extra effort to post review materials alongside the article will now go further once they are registered with us and linked to the journal article. They spoke of publishing peer reviews as a standard part of their publishing operation. 
The scholarly contributions of their editors and referees are validated, stewarded, and published in the same manner as the articles, as per general practice. To fully realize this, they are ensuring that these publications are discoverable, citable, and part of the formal scholarly record—for all the thousands of systems which draw on Crossref metadata.\nArticle metadata + peer review metadata = a fuller picture of the evolution of knowledge\nThe growing collection As of August 12, 2018 three publishers have registered 12,446 peer reviews in the dedicated resource type schema we rolled out last November. PeerJ (10.7287) has registered 12,015 at the time of writing, and Stichting SciPost (10.21468) 297 works. ScienceOpen (10.14293) has registered 126 reviews of papers on their post-publication platform.\nThe peer review metadata we collect is partly shared with other content types and partly unique to reviews. On the shared side, the general metadata that we accept for articles as well as reviews includes ORCID iDs to identify the reviewer, editor, and/or author, and license information. This metadata is quite distinct from that of the article and is important to collect, not only as a discrete publication in its own right, but also to provide richer context for the actual results shared in the associated article. They are authored by different people than the paper’s contributors (author response/rebuttal excepting). They need not have the same license.\nCurrently, none of this data has been registered for reviews. (From the publishers we’ve talked to, this is largely due to factors related to limitations in their technology systems.) And like other record types, we link up scholarly materials in the metadata and fill in the research nexus graph through relations.\nThere’s no better way to understand peer review metadata than to look at real examples from our members:\nPeerJ review (https://0-doi-org.libus.csd.mu.edu/10.7287/peerj.2707v0.1/reviews/1) and its metadata (https://0-api-crossref-org.libus.csd.mu.edu/works/10.7287/peerj.2707v0.1/reviews/1) ScienceOpen review (https://0-doi-org.libus.csd.mu.edu/10.14293/s2199-1006.1.sor-uncat.a5995373.v1.rhrmgu) and its metadata (https://0-api-crossref-org.libus.csd.mu.edu/works/10.14293/s2199-1006.1.sor-uncat.a5995373.v1.rhrmgu) SciPost review (https://0-doi-org.libus.csd.mu.edu/10.21468/scipost.report.10) and its metadata (https://0-api-crossref-org.libus.csd.mu.edu/works/10.21468/scipost.report.10) Review-specific metadata is also critical to capturing the shape of the scholarly discussion. These include:\nReview date (required) Scholarly work reviewed (required) Recommendation Revision stage Review round Contributor name PeerJ, SciPost, and ScienceOpen have registered this whole set where applicable (review round not applicable to post-publication reviews), with the exception of the recommendation.\nScholarly contributions captured in time Published peer reviews uniquely highlight the nature of research ideas evolving over time, spotlighting the nature of this as a collective effort involving multiple individuals. The more metadata, the bolder the story. We have created a set of reference metadata (fictitious) to illustrate this phenomenon. Josiah Carberry submits a manuscript to the Journal of Psychoceramics, entitled “Dog: A Methodology for the Development of Simulated Annealing.” It undergoes two rounds of review with two referees each round.
The article https://0-doi-org.libus.csd.mu.edu/10.5555/12345681 is published and registered on May 6, 2012 along with the history of peer review materials on the same day:\nFirst submission\nReferee report 1 - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9879 Referee report 2 - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9880 Editor decision - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9881 Revision round 1\nAuthor rebuttal - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9882 Referee report 1 - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9883 Referee report 2 - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9884 Editor decision - https://0-doi-org.libus.csd.mu.edu/10.5555/12345681.9885 Published reviews can show peer feedback in progress; the progress of scholarly discussion unfolding, as expert ideas build upon each other. Many of us have traditionally located the article’s publication as the climactic event, but the story in fact doesn’t end there. Pre-publication becomes post-publication. Throughout this time, research is validated and sprouts into new ideas.\nPeer review platform Publons is working on getting reviews authored on its platform registered with us. Doing so will mean that PeerJ article, “Transformative optimisation of agricultural land use to meet future food demands” by Lian Pin Koh, Thomas Koellner, and Jaboury Ghazoul https://0-doi-org.libus.csd.mu.edu/10.7717/peerj.188 with three scholarly discussions published over the course of peer review, would also be accompanied by a fourth that occurred after publication from Gene A. Bunin https://publons.com/publon/3374/, not yet registered.\nResearch begets research In my investigation of review publications registered, two examples cropped up, highlighting the richness of the research process not only as it shows a set of research results evolve through scholarly discussion, but as it is then folded into new research outputs.\nA PeerJ article “Software citation principles” https://0-doi-org.libus.csd.mu.edu/10.7717/peerj-cs.86 has had a very rich life: https://0-api-crossref-org.libus.csd.mu.edu/works/10.7717/peerj-cs.86. It was originally submitted as a preprint and underwent multiple iterations of improvement (https://0-doi-org.libus.csd.mu.edu/10.7287/peerj.preprints.2169, https://0-doi-org.libus.csd.mu.edu/10.7287/peerj.preprints.2169v1, https://0-doi-org.libus.csd.mu.edu/10.7287/peerj.preprints.2169v2, etc.). It then was subjected to peer review. And three referee reports are published alongside the final publication: https://0-doi-org.libus.csd.mu.edu/10.7287/peerj-cs.86v0.1/reviews/1 https://0-doi-org.libus.csd.mu.edu/10.7287/peerj-cs.86v0.1/reviews/2 https://0-doi-org.libus.csd.mu.edu/10.7287/peerj-cs.86v0.2/reviews/1. We glimpse a view of time unfolding here:\nNB: in the review metadata, all the dates provided reference September 19, 2016 when they were published with the accompanying research article. To really make the metadata useful, we recommend providing the date the review was received, rather than published (for publishers who are publishing pre-publication review materials). The reviews were then cited in three versions of the F1000Research article, “A multi-disciplinary perspective on emergent and future innovations in peer review” (https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.12037.1, https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.12037.2, and https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.12037.3). 
These three all link up on the Crossref metadata map. The visualization below is only an entrypoint into this picture of research dissemination and the spread of ideas.\nAndrás Láng served as a reviewer for a paper by Danilo Garcia and Fernando R. González Moraga published as “The Dark Cube: dark character profiles and OCEAN” (https://0-doi-org.libus.csd.mu.edu/10.7717/peerj.3845). As of the blog release date, this paper has been cited by two sources. (Figure source: https://0-doi-org.libus.csd.mu.edu/10.7717/peerj.3845, CC-BY 4.0.)\nWhat this view of the paper does not reveal is that Láng’s review (https://0-doi-org.libus.csd.mu.edu/10.7287/peerj.3845v0.1/reviews/2) provided such insight to the original researchers that the first author (Garcia) incorporates the discussion in his subsequent work. This evidence is documented in the citation list of that new publication, “Encyclopedia of Personality and Individual Differences” https://0-doi-org.libus.csd.mu.edu/10.1007/978-3-319-28099-8_2302-1. What a wonderful illustration of the ways in which peer reviews can operate like other publications, and it is far from being a unique case. But until now, we have not been able to capture them programmatically in a formal way, as we can with these materials registered properly as a review.\nThe evolution of Crossref’s piece In the same spirit of ever evolving knowledge, we also continue to update our schemas based upon community feedback. Are references important? Tell us! What new metadata on peer reviews are important to answer your questions or help you do what you need? Members, if you are interested in registering your peer review content with us, please get in touch.\n", "headings": ["Peer review publications\u0026mdash;not peer-reviewed publications, but peer reviews as publications","The growing collection","Scholarly contributions captured in time","Research begets research","The evolution of Crossref’s piece"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/cdl/", "title": "CDL", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/john-chodacki/", "title": "John Chodacki", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/laure-haak/", "title": "Laure Haak", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/orcid/", "title": "ORCID", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url":
"https://0-www-crossref-org.libus.csd.mu.edu/blog/org-id-a-recap-and-a-hint-of-things-to-come/", "title": "Org ID: a recap and a hint of things to come", "subtitle":"", "rank": 1, "lastmod": "2018-08-02", "lastmod_ts": 1533168000, "section": "Blog", "tags": [], "description": "Cross-posted on the blogs of University of California (UC3), ORCID, and DataCite: https://0-doi-org.libus.csd.mu.edu/10.5438/67sj-4y05.\nOver the past couple of years, a group of organizations with a shared purpose\u0026mdash;California Digital Library, Crossref, DataCite, and ORCID\u0026mdash;invested our time and energy into launching the Org ID initiative, with the goal of defining requirements for an open, community-led organization identifier registry. The goal of our initiative has been to offer a transparent, accessible process that builds a better system for all of our communities.", "content": "Cross-posted on the blogs of University of California (UC3), ORCID, and DataCite: https://0-doi-org.libus.csd.mu.edu/10.5438/67sj-4y05.\nOver the past couple of years, a group of organizations with a shared purpose\u0026mdash;California Digital Library, Crossref, DataCite, and ORCID\u0026mdash;invested our time and energy into launching the Org ID initiative, with the goal of defining requirements for an open, community-led organization identifier registry. The goal of our initiative has been to offer a transparent, accessible process that builds a better system for all of our communities. As the working group chair, I wanted to provide an update on this initiative and let you know where our efforts are headed.\nCommunity-led effort First, I would like to summarize all of the work that has gone into this project, a truly community-driven initiative, over the last two years:\nA series of collaborative workshops were held at the Coalition for Networked Information (CNI) meeting in San Antonio TX (2016), the FORCE11 conference in Portland OR (2016), and at PIDapalooza in Reykjavik (2016). Findings from these workshops were summarized in three documents, which we made openly available to the community for public comment: Organization Identifier Project: A Way Forward (PDF) Organization Identifier Provider Landscape (PDF) Technical Considerations for an Organization Identifier Registry (PDF) A Working Group worked throughout 2017 and voted to approve a set of recommendations and principles for \u0026lsquo;governance\u0026rsquo; and \u0026lsquo;product\u0026rsquo;: Governance Recommendations Product Principles and Recommendations We then put out a Request for Information that sought expressions of interest from organizations to be involved in implementing and running an organization identifier registry. There was a really good response to the RFI; reviewing the responses and thinking about next steps led to our most recent stakeholder meeting in Girona in January 2018, where ORCID, DataCite, and Crossref were tasked with drafting a proposal that meets the Working Group\u0026rsquo;s requirements for a community-led, organizational identifier registry. Thank you I want to take this opportunity to thank everyone who has contributed to this effort so far. We\u0026rsquo;ve been able to make good progress with the initiative because of the time and expertise many of you have volunteered. We have truly benefited from the support of the community, with representatives from Alfred P. 
Sloan Foundation; American Physical Society, California Digital Library, Cornell University, Crossref, DataCite, Digital Science, Editeur, Elsevier, Foundation for Earth Sciences, Hindawi, Jisc, ORCID, Ringgold, Springer Nature, The IP Registry, and U.S. Geological Survey involved throughout this initiative. And we couldn\u0026rsquo;t have done any of it without the help and guidance of our consultants, Helen Szigeti and Kristen Ratan.\nThe way forward The recommendations from our initiative have been converted into a concrete plan for building a registry for research organizations. This plan will be posted in the coming weeks.\nThe initiative\u0026rsquo;s leadership group has already secured start-up resourcing and is getting ready to announce the launch plan\u0026mdash;more details coming soon. We hope that all stakeholders will continue to support the next phase of our work \u0026ndash; look for announcements in the coming weeks about how to get involved. As always, we welcome your feedback and involvement as this effort continues. Please contact me directly with any questions or comments at john.chodacki@ucop.edu. And thanks again for your help bringing an open organization identifier registry to fruition!\nReferences Bilder, G., Brown, J., \u0026amp; Demeranville, T. (2016). Organisation identifiers: current provider survey. ORCID. https://0-doi-org.libus.csd.mu.edu/10.5438/4716\nCruse, P., Haak, L., \u0026amp; Pentz, E. (2016). Organization Identifier Project: A Way Forward. ORCID. https://0-doi-org.libus.csd.mu.edu/10.5438/2906\nFenner, M., Paglione, L., Demeranville, T., \u0026amp; Bilder, G. (2016). Technical Considerations for an Organization Identifier Registry. https://0-doi-org.libus.csd.mu.edu/10.5438/7885\nLaurel, H., Bilder, G., Brown, C., Cruse, P., Devenport, T., Fenner, M., … Smith, A. (2017). ORG ID WG Product Principles and Recommendations. https://0-doi-org.libus.csd.mu.edu/10.23640/07243.5402047\nLaurel, H., Pentz, E., Cruse, P., \u0026amp; Chodacki, J. (2017). Organization Identifier Project: Request for Information. https://0-doi-org.libus.csd.mu.edu/10.23640/07243.5458162\nPentz, E., Cruse, P., Laurel, H., \u0026amp; Warner, S. (2017). ORG ID WG Governance Principles and Recommendations. https://0-doi-org.libus.csd.mu.edu/10.23640/07243.5402002\n", "headings": ["Community-led effort","Thank you","The way forward","References"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/trisha-cruse/", "title": "Trisha Cruse", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/321-its-lift-off-for-participation-reports/", "title": "3,2,1… it’s ‘lift-off’ for Participation Reports", "subtitle":"", "rank": 1, "lastmod": "2018-08-01", "lastmod_ts": 1533081600, "section": "Blog", "tags": [], "description": "Metadata is at the heart of all our services. With a growing range of members participating in our community—often compiling or depositing metadata on behalf of each other—the need to educate and express obligations and best practice has increased. 
In addition, we’ve seen more and more researchers and tools making use of our APIs to harvest, analyze and re-purpose the metadata our members register, so we’ve been very aware of the need to be more explicit about what this metadata enables, why, how, and for whom.", "content": "Metadata is at the heart of all our services. With a growing range of members participating in our community—often compiling or depositing metadata on behalf of each other—the need to educate and express obligations and best practice has increased. In addition, we’ve seen more and more researchers and tools making use of our APIs to harvest, analyze and re-purpose the metadata our members register, so we’ve been very aware of the need to be more explicit about what this metadata enables, why, how, and for whom.\nThis week we take an important step towards this goal with a much-anticipated announcement: Participation reports are in beta release—so come along and take a look!\nWhat does this mean? Participation Reports gives—for the first time—a clear visualization of the metadata that Crossref has. Search for any member to find out what percentage of their content includes 10 key elements of information, above and beyond the basic bibliographic metadata that all members are obliged to provide. This includes metadata such as ORCID iDs for contributors, funding acknowledgements, reference lists, and abstracts—richer metadata that makes content more discoverable, and much more useful to the scholarly community as a whole, including among members themselves.\nYou can filter by content such as journal articles, book chapters, datasets, and preprints, and compare current content (past two calendar years and year-to-date) to back file content (older than that). And within the journal articles view, you can drill down to view the metadata completeness for each individual journal. We hear that editorial boards are keen to see that aspect!\nWe’re delighted that participation reports are now available in beta. That means that while we are confident that the data shown is accurate, there could be the odd glitch as we monitor use.\nThank you to everyone who has helped us to test the reports and provided so much valuable feedback. We plan to expand and improve participation reports to include additional metadata elements, metadata quality checks, and adherence to Crossref best practice such as DOI display. We’re still listening so do get in touch if you have questions or suggestions, or would like a more detailed walk through. There is also a feedback button right in-situ in the tool.\n", "headings": ["What does this mean?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-live-and-local-to-you/", "title": "Crossref LIVE and local (to you)", "subtitle":"", "rank": 1, "lastmod": "2018-07-18", "lastmod_ts": 1531872000, "section": "Blog", "tags": [], "description": "The last few months have been busy for the Crossref community outreach team. We’ve been out and about from Cape Town to Ulyanovsk—and many places in between—talking at ‘LIVE locals’ to members about all things metadata. Our LIVE locals are one-day events, held around the world—but local to you—that provide both deeper insight into Crossref, and information on our services and how to benefit from them. 
These events are always free to attend, and whether you are a long-established member, totally new, or not even a member at all, we welcome you all to join us.\n", "content": "The last few months have been busy for the Crossref community outreach team. We’ve been out and about from Cape Town to Ulyanovsk—and many places in between—talking at ‘LIVE locals’ to members about all things metadata. Our LIVE locals are one-day events, held around the world—but local to you—that provide both deeper insight into Crossref, and information on our services and how to benefit from them. These events are always free to attend, and whether you are a long-established member, totally new, or not even a member at all, we welcome you all to join us.\nAt our most recent events we collaborated with some fantastic organizations and welcomed attendees from a variety of backgrounds including editors, publishers, service providers, researchers and other metadata users.\nSouth Africa In April Chuck Koscher, Director of Technology, and I travelled to South Africa for two LIVE locals, one in Pretoria and the other in Cape Town—and both in collaboration with the Academy of Science of South Africa (ASSAf). ASSAf also provided two excellent speakers, Nadine Wubbeling (ASSAf) and Pierre de Villiers (AOSIS), who shared their experiences with Crossref and presented valuable insights into the work that they do.\nDelivering events for a varied audience like this means there are often differing levels of knowledge and experience. So, to make sure everyone benefited from our sessions, we covered the different ways you can work with the Crossref deposit system as an XML pro, or an absolute beginner. This included a live demonstration of our new deposit tool Metadata Manager (currently in beta) which should help those less technically-minded people (like myself), and be a big improvement upon our current web deposit form.\nThe day ended with a technical session, where attendees discussed specific issues they needed help with, which mainly focussed on retrieving metadata in the Crossref system, interpreting reports, and support with XML.\nImages left to right: Dr. Pierre de Villiers talks about the Crossref Experience at AOSIS, and the stunning scenery of Table Mountain provided a nice backdrop to our Cape Town event.\nRussia Just back from a few days in Russia 🇷🇺. We ran a @CrossrefOrg LIVE local in Ulyanovsk for 60 editors, made plans to do more education and outreach in the region and caught a #FifaWorldCup2018 game... pic.twitter.com/GSdNEujJXa\n\u0026mdash; Rachael Lammey (@rachaellammey) June 22, 2018 The World Cup wasn’t the only big event in Russia last month. That’s right, we were there too—with our very first Russian LIVE local! On the 19th June, 60 attendees from a range of academic and publishing institutions joined us at The Ulyanovsk State Pedagogical University. Rachael Lammey and I introduced Crossref, the role of identifiers, and how to register different resource types with us. We also discussed the use and importance of providing accurate and comprehensive metadata, and shared some interesting use cases.\nGuest speaker Professor Zinaida Kuznetsova talked about her experiences of working with Crossref and the benefits of being a member. This was complemented by a talk by fellow guest speaker Maxim Mitrofanov from Crossref sponsoring organisation, NEICON. Maxim explained how NEICON works with Crossref, and provides services for the smaller members they support.
Maxim is also one of our Crossref Ambassadors - and he will be running more Russian webinars on our services in the near future, so look out for those listed on our webinar page!\nWe’d like to say a big thank you to the team at Ulyanovsk State Pedagogical University for their support and help with the event. Also thanks to our fantastic interpreters who helped us immensely by relaying the information to the audience in Russian, as well as helping to translate and answer questions.\nGermany Najko Jahn from Göttingen State and University Library talks about how he uses @CrossrefOrg metadata in his work #CRLIVEGermany pic.twitter.com/Y89ZkBMoSh\n\u0026mdash; Vanessa Fairhurst (@NessaFairhurst) June 27, 2018 One week later and we were in Hannover, Germany. Crossref’s Laura Wilkinson, Joe Wass and Jennifer Kemp joined me for this event, which was held in collaboration with the German National Library of Science and Technology (Technische Informationsbibliothek - TIB) at their impressive venue on the 27th June. \nThe day focused on all things metadata - how it can be used and why good metadata is important. This included taking a look at our new Participation Reports tool and a fascinating talk from guest speaker Najko Jahn from Göttingen State and University Library on the benefits of using Crossref metadata for libraries and scientists.\nDataCite’s Britta Dreyer also spoke about how DataCite and Crossref support research data sharing, before Joe Wass and I presented updates to the collaborative Org ID project and Event Data service. The day concluded with us sharing more ways to participate in Crossref and other community initiatives.\nQuestions? Вопросов? Fragen? Over the course of these events we were asked many questions—and here are some of the more interesting/common ones posed to the team: Q. Do I have to join Crossref directly, or can I join as part of a group of smaller organizations? A. You don’t have to be a direct member; you can join via a Sponsor. See our sponsors page for a list of Sponsors in your area, and for more information on becoming a Sponsor.\nQ. Can I link translations of works together? A. Yes, each language version of a journal article can be assigned its own DOI, and the versions can then be linked in the metadata using the relationship type TranslationOf from our schema.\nQ. Does the web deposit form support depositing abstracts and references?\nA. No, it doesn’t. However, our new Metadata Manager tool does and if you are interested in trying it out in beta, let us know.\nQ. Can I share your new Participation Reports tool with my colleagues?\nA. Yes you can! It’s open and available for use, just come along and search for a member.\nQ. Can I also register book chapters, dissertations and other record types under the same prefix?\nA. Yes you can. You can register any of the different resource types we support under one prefix.\nQ. Will you be doing more events in this region in future?\nA. We hope so, and we are always happy to hear from those who wish to collaborate on future events, so just contact us to get involved.\n", "headings": ["South Africa","Russia","Germany","Questions? Вопросов?
Fragen?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/member-experience/", "title": "Member Experience", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/status/", "title": "Status", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/status-i-am-new/", "title": "Status, I am new", "subtitle":"", "rank": 1, "lastmod": "2018-07-02", "lastmod_ts": 1530489600, "section": "Blog", "tags": [], "description": "Hi, I’m Isaac. I’m new here. What better way to get to know me than through a blog post? Well, maybe a cocktail party, but this will have to do. In addition to giving you some details about myself in this post, I’ll be introducing our status page, too.\n", "content": "Hi, I’m Isaac. I’m new here. What better way to get to know me than through a blog post? Well, maybe a cocktail party, but this will have to do. In addition to giving you some details about myself in this post, I’ll be introducing our status page, too.\nA little about me In mid-April, I began as the new Support Manager. My goal is to fill the very large shoes left by Patricia Feeney moving into the Head of Metadata role. I know Patricia knows Crossref and the rich community of members (and metadata!) inside and out. I’ll get there too. For now, I have immersed myself in tackling as many of your support questions as possible, so I may have already met some of you on a support ticket. If so, thanks for your patience; you likely have already taught me a thing or two!\nIsaac, on the lookout to provide you excellent support\nI came to this position from one of our members – the Society of Exploration Geophysicists, where I served as the Digital Publications Manager for the last five years. Like many of you, I was always impressed, intrigued, and excited by the work underway at Crossref and wanted to be a part of the team. So, here I am, very much looking forward to the challenge ahead.\nI work remotely from Tulsa, Oklahoma, where I live with my wife and two daughters. Tulsa doesn’t have as many members as D.C., London, or Jakarta, but I hope to meet some of you during outreach trips, LIVE events, online in a webinar, or in our support community.\nOne of the things that attracts me to being a part of this community are our truths. As a quick reminder, the truths are:\nCome one, come all One member, one vote Smart alone, brilliant together Love metadata, love technology What you see, what you get Here today, here tomorrow I am drawn to forward-thinking, action-oriented communities that value collaboration and openness. These truths, and the ten weeks I have been at Crossref, have confirmed that this is one of those communities. As your new support manager, I want to emphasize our commitment to transparency: Ask me anything; I’ll tell you what I know. 
In that spirit, I have the privilege of introducing our new status page—a key piece in furthering our own transparency and openness.\nstatus.crossref.org\nOur new status page provides critical, real-time information about our services—it helps us tell our overall story. If you are looking for metrics on the performance of our APIs, websites, the deposit system, or new beta services, bookmark this page. The system metrics provide daily, weekly, and monthly overviews of each of our services’ response time (in milliseconds) and uptime, or percentage of time that service has been operational during your selected time span (daily, weekly, or monthly).\nFrom this page, we’ll announce planned maintenance and keep you regularly updated when we have an incident. And, we’ll provide regular status updates for these incidents when in progress, updated, and completed.\nOur new status page – status.crossref.org\nI encourage you to subscribe to the updates from the top-right corner of the page. While we’ll update this page with any service-related outages, subscribing for notifications will allow you to stay current on the latest. We’ll describe maintenance and incidents clearly, simply, and promptly when we have them. And, if we don’t, call us on it.\nIf you have questions about the performance of our services, the status page is a great starting place. If you still have questions, ask us; we’ll tell you what we know.\n", "headings": ["A little about me"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/user-experience/", "title": "User Experience", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/christian-herzog/", "title": "Christian Herzog", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/daniel-hook/", "title": "Daniel Hook", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/simon-porter/", "title": "Simon Porter", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-9-with-dimensions/", "title": "Using the Crossref REST API.
Part 9 (with Dimensions)", "subtitle":"", "rank": 1, "lastmod": "2018-06-27", "lastmod_ts": 1530057600, "section": "Blog", "tags": [], "description": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to the team behind new search and discovery tool Dimensions: Daniel Hook, Digital Science CEO; Christian Herzog, ÜberResearch CEO; and Simon Porter, Director of Innovation. They talk about the work they’re doing, the collaborative approach, and how Dimensions uses the Crossref REST API as part of our Metadata Plus service, to augment other data and their workflow.\n", "content": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to the team behind new search and discovery tool Dimensions: Daniel Hook, Digital Science CEO; Christian Herzog, ÜberResearch CEO; and Simon Porter, Director of Innovation. They talk about the work they’re doing, the collaborative approach, and how Dimensions uses the Crossref REST API as part of our Metadata Plus service, to augment other data and their workflow.\nIntroducing Dimensions Dimensions is a next-generation approach to discovering, connecting with and contextualising research. Modern academics need data about the research ecosystem in which they exist as much as the administrators who develop institutional research strategies. All academics are now required to think long-range about their research projects, contextualise their research, and demonstrate the impact of their program. Additionally, they need to find funding, ensure that students go on to good positions, and hire talented colleagues whose skills fit well with ongoing projects. Dimensions gives the first fully-linked view of publications, grants, patents and clinical trials in an analytically-centred user experience.\nHow is Crossref data used within Dimensions? For an article to appear in Dimensions it must have a Crossref DOI, so it would not be possible to create Dimensions’ Publication index without Crossref’s data. Dimensions is built on several principles that we’ve talked about before. Here the most relevant of those principles are:\nunique identifiers should underlie everything that we do; data should not be inclusive and the tool should allow the user to select what they want to see; data should be more available to our community; data should be presented with as much contextual information as possible; the community should have enough data available to be able to create and experiment with their own metrics and indicators. In the context of these principles, Crossref makes a perfect starting place to create a tool like Dimensions. We use the Crossref data to know about our possible “universe” of articles. We then enhance the Crossref core with data from several different places: open access publications in the DOAJ, PubMed, BioArXiv, and through relationships with publishers. In all, 60 million of the 95 million articles in the Dimensions index have a full text version that we can text and data mine for additional information.\nIn Dimensions’ enhancement stage we can extract address information (where not included in the original Crossref record) and map it to GRID funding information and the list of funders in Crossref’s Funder Registry as well as to our database of grants in Dimensions.\nHow have you incorporated citation data? Access to citations has historically been a thorny issue for citations databases. 
However, I4OC celebrated its first anniversary in April this year and this project has been a key driver in helping us to build Dimensions with the level of citation coverage that we managed –– it is a fantastic enabling initiative and should be warmly welcomed by the sector. Crossref is not the only source we were able to use to gather citation data; some text mining was needed to get a full graph. Dimensions goes beyond inter-article citations and includes links between patents and publications, links between clinical trials and publications, and Altmetric mentions of publications.\nIs Dimensions openly available? Given that there is so much open data in Dimensions, it was always our intention to give a free version to the community. If you visit http://app.dimensions.ai then you’ll be able to play with the system and use it for your research. While only the publications index is fully open, when you see a link to a grant, patent or clinical trial in an article detail page, you’ll be able to navigate to that record so that you can see the full context of the data.\nBeyond the ability to link the publications, Dimensions also displays the CV information which the researcher made visible publicly.\nMost recently, we’ve integrated ORCID into Dimensions. This means that you can push data from Dimensions into ORCID if you connect your ORCID account to your Dimensions account.\nWhat are the future plans for Dimensions? Dimensions is still moving quickly and adding more functionality. Our aim is to release more data facets very soon. We plan to add a Policy Document archive and a Research Data archive. We’ve already found some fascinating insights from joining the existing data together and these two new archives should add even more interesting data.\nWhat else would Dimensions like to see in Crossref metadata? Open access information is something that we work with Unpaywall to source for Dimensions right now. It would be great if Crossref and Unpaywall could work together to make this data higher quality and more ubiquitous.\nThank you Daniel, Christian and Simon.\nIf you would like to contribute a case study on the uses of Crossref Metadata APIs please contact the Community team.\n", "headings": ["Introducing Dimensions","How is Crossref data used within Dimensions?","How have you incorporated citation data?","Is Dimensions openly available?","What are the future plans for Dimensions?","What else would Dimensions like to see in Crossref metadata?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/meet-the-members/", "title": "Meet the Members", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/meet-the-members-part-3-with-inasp/", "title": "Meet the members, Part 3 (with INASP)", "subtitle":"", "rank": 1, "lastmod": "2018-06-20", "lastmod_ts": 1529452800, "section": "Blog", "tags": [], "description": "Next in our Meet the members blog series is INASP, who isn’t a direct member, but acts as a Sponsor for hundreds of members. 
Sioux Cumming, Programme Specialist at INASP tells us a bit about the work they’re doing, how they use Crossref and what the future plans for INASP are.\n", "content": "Next in our Meet the members blog series is INASP, who isn’t a direct member, but acts as a Sponsor for hundreds of members. Sioux Cumming, Programme Specialist at INASP tells us a bit about the work they’re doing, how they use Crossref and what the future plans for INASP are.\n", "headings": ["Can you tell us a little bit about INASP?","What’s your role within INASP?","Tell us a bit about who you support, and how you support them","What’s your participation level with Crossref?","What trends are you seeing in your part of the scholarly communications community?","How would you describe the value of being a Crossref Sponsor?","What are INASP’s plans for the future?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/sioux-cumming/", "title": "Sioux Cumming", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/preprints-growth-rate-ten-times-higher-than-journal-articles/", "title": "Preprints growth rate ten times higher than journal articles", "subtitle":"", "rank": 1, "lastmod": "2018-05-31", "lastmod_ts": 1527724800, "section": "Blog", "tags": [], "description": "The Crossref graph of the research enterprise is growing at an impressive rate of 2.5 million records a month - scholarly communications of all stripes and sizes. Preprints are one of the fastest growing types of content. While preprints may not be new, the growth may well be: ~30% for the past 2 years (compared to article growth of 2-3% for the same period). We began supporting preprints in November 2016 at the behest of our members. When members register them, we ensure that: links to these publications persist over time; they are connected to the full history of the shared research results; and the citation record is clear and up-to-date.\n", "content": "The Crossref graph of the research enterprise is growing at an impressive rate of 2.5 million records a month - scholarly communications of all stripes and sizes. Preprints are one of the fastest growing types of content. While preprints may not be new, the growth may well be: ~30% for the past 2 years (compared to article growth of 2-3% for the same period). We began supporting preprints in November 2016 at the behest of our members. When members register them, we ensure that: links to these publications persist over time; they are connected to the full history of the shared research results; and the citation record is clear and up-to-date.\nSummary As of May 24, 2018 we have 44,388 works (see API query https://0-api-crossref-org.libus.csd.mu.edu/types/posted-content/works with a json viewer) registered as posted content. Today that number is over 150k. Preprints are part of this record type category, which is meant to house scholarly outputs that have been posted online and intended for publication in the future.\nFor a more granular view, see the monthly stats captured by Jordan Anaya in PrePubMed. 
This data is based on a slightly different set of preprint repositories, though both show the same trends.\nThe figure below shows the preprints registered with Crossref, broken down by repository.\nWe eagerly await our newest preprints member, Center for Open Science, who will soon be registering the preprints from their 18 community archives with us (~9k preprints total to date).\nMetadata coverage We accept a range of metadata for the preprints registered with us, including:\nRepository name \u0026amp; hosting platform Contributor names \u0026amp; ORCID iDs Title Dates (posted, accepted) License Funding Abstract Relations References As with all resource/record types, certain metadata is required, though others are optional. We encourage full coverage of metadata in the record where applicable and possible. So what are publishers including in their posted content records? The summary view is as follows:\nLicense: 9926 (json), 22% (PeerJ Preprints, ChemRxiv) Funder: 0 (json), 0% ORCID: 19309 (json), 44% (bioRxiv, PeerJ Preprints, Preprints.org, ChemRxiv) Abstracts: 35874 (json), 81% (bioRxiv, PeerJ Preprints, ChemRxiv) References: 1921 (json), 4% (JMIR) Compared to all the published content registered with us over time, preprints have above average coverage of ORCID iDs deposited and well above average coverage of abstract metadata. However, they are significantly lagging behind with depositing references, license, and funding metadata. (See a summary of the full corpus stats taken two months ago in the blog post, A Lustrum over the Weekend.)\nPreprint-article pairs Members registering preprints have an obligation to update the metadata record when a journal article is subsequently published, to clearly identify this work. This pairing is passed on to our metadata users: indexing platforms; recommendation engines; platforms; tools, etc., which pull from our APIs. (The preprint landing page also must link to the article.) As such, the preprint-article pairings are amassing as each week passes. We currently have a total of 12983 (json) preprints connected to articles. The figure below provides the counts based on repository.\nCitations We can see from preprint Cited-by counts that researchers are indeed citing preprints in their articles. This practice is an extension of the common citation behavior to provide evidence for and credit to previous work, a natural consequence of work shared with their peers. The most highly cited preprint papers (json) as of May 24, 2018 are as follows. In some cases, a subsequent paper was published from the results shared in the preprint. These have also accrued citations in their own right and these are also indicated in the table below.\nNo. Cited-by Preprint DOI Preprint title Date Subsequent journal article Citations of journal article 1 Cited-by 72 https://0-doi-org.libus.csd.mu.edu/10.1101/005165 qqman: an R package for visualizing GWAS results using Q-Q and manhattan plots May 14, 2014.
n/a n/a 2 Cited-by 63 https://0-doi-org.libus.csd.mu.edu/10.1101/002824 HTSeq - A Python framework to work with high-throughput sequencing data August 19, 2014 Bioinformatics, https://0-doi-org.libus.csd.mu.edu/10.1093/bioinformatics/btu638 2372 3 Cited-by 43 https://0-doi-org.libus.csd.mu.edu/10.1101/030338 Analysis of protein-coding genetic variation in 60,706 humans May 10, 2016 Nature, https://0-doi-org.libus.csd.mu.edu/10.1038/nature19057 1598 4 Cited-by 38 https://0-doi-org.libus.csd.mu.edu/10.1101/002832 Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2 November 17, 2014 Genome Biology, https://0-doi-org.libus.csd.mu.edu/10.1186/s13059-014-0550-8 3284 5 Cited-by 32 https://0-doi-org.libus.csd.mu.edu/10.1101/021592 Salmon provides accurate, fast, and bias-aware transcript expression estimates using dual-phase inference August 30, 2016 Nature Methods, https://0-doi-org.libus.csd.mu.edu/10.1038/nmeth.4197 112 6 Cited-by 22 https://0-doi-org.libus.csd.mu.edu/10.1101/012401 DensiTree 2: Seeing Trees Through the Forest December 8, 2014 n/a n/a 7 Cited-by 21 https://0-doi-org.libus.csd.mu.edu/10.1101/011650 FusionCatcher - a tool for finding somatic fusion genes in paired-end RNA-sequencing data November 19, 2014 n/a n/a 8 Cited-by 19 https://0-doi-org.libus.csd.mu.edu/10.1101/048991 Analysis of shared heritability in common disorders of the brain September 6, 2017 n/a n/a 9 Cited-by 18 https://0-doi-org.libus.csd.mu.edu/10.1101/006395 Error correction and assembly complexity of single molecule sequencing reads June 18, 2014 n/a n/a 10 Cited-by 18 https://0-doi-org.libus.csd.mu.edu/10.1101/032839 Spread of the pandemic Zika virus lineage is associated with NS1 codon usage adaptation in humans November 25, 2015 n/a n/a The relationship between preprints and the subsequent publication is an interesting area that is not yet well understood. We invite the community to analyze the Crossref metadata using the REST API in concert with other datasets. For example, the citation lifecycle for these two research products has been one of speculation so far without a systematic investigation into patterns and timeframes of preprint citations and those of the succeeding article across the corpus. Here, submission dates would be critical data to this research question as publication windows vary significantly by publisher and by paper.
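As a purely illustrative sketch (not part of the original post), the headline counts and coverage checks above can be reproduced against the public REST API; the host below is the canonical api.crossref.org rather than the proxied links in the text, and the filter names are the standard REST API filters:

```python
import requests

# Illustrative sketch only: count posted-content records (the type that houses
# preprints) and check how many carry a given metadata element.
BASE = "https://api.crossref.org/types/posted-content/works"

total = requests.get(BASE, params={"rows": 0}).json()
print("posted-content records:", total["message"]["total-results"])

# The same pattern works for the other coverage checks mentioned above,
# e.g. has-license, has-orcid, has-references.
with_abstract = requests.get(BASE, params={"rows": 0, "filter": "has-abstract:true"}).json()
print("with abstracts:", with_abstract["message"]["total-results"])
```

Requesting zero rows keeps the queries cheap: only the total-results count comes back, which is all these coverage figures need.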
", "headings": ["Summary","Metadata coverage","Preprint-article pairs","Citations"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/cited-by/", "title": "Cited-By", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linking-references-is-different-from-registering-references/", "title": "Linking references is different from registering references", "subtitle":"", "rank": 1, "lastmod": "2018-05-30", "lastmod_ts": 1527638400, "section": "Blog", "tags": [], "description": "From time to time we get questions from members asking what the difference is between reference linking and registering references as part of the Content Registration process.\nHere\u0026rsquo;s the distinction:\nLinking out to other articles from your reference lists is a key part of being a Crossref member - it\u0026rsquo;s an obligation in the membership agreement and it levels the playing field when all members link their references to one another.", "content": "From time to time we get questions from members asking what the difference is between reference linking and registering references as part of the Content Registration process.\nHere\u0026rsquo;s the distinction:\nLinking out to other articles from your reference lists is a key part of being a Crossref member - it\u0026rsquo;s an obligation in the membership agreement and it levels the playing field when all members link their references to one another.\nRegistering references when you register your content is completely different. It\u0026rsquo;s enriching the metadata record that describes your content, and it allows Crossref and others\u0026mdash;including non-members\u0026mdash;to use them.\nReference Linking A research article usually includes a reference list of citations to other works that helped inform it. The original function of Crossref was to provide a central service for publishers that enabled them to link to each others\u0026rsquo; content from these reference lists\u0026mdash;using a DOI as a persistent link. This meant that members of all sizes and in all disciplines could easily link to one another without having to sign hundreds of bilateral agreements.\nWe made Reference Linking obligatory for Crossref members because it\u0026rsquo;s fundamental to making content discoverable, and because when everyone links their references, research travels further and benefits everyone.\nRegistering references Every single day hundreds of members register and update their metadata with us\u0026mdash;and every single day hundreds of organizations search for, extract and use it. To make sure your content is discovered in this process, it\u0026rsquo;s important to make the metadata you register with us as rich as possible.
Rich metadata includes information such as journal title, article author, publication date, page numbers, ISSN, abstracts, ORCID iDs, funding information, clinical trials numbers, license information, and of course\u0026mdash;references.\nAdditionally, registering references is a prerequisite for participating in our Cited-by service\u0026mdash;which provides citation counts and lists, and ultimately makes your content more discoverable. [EDIT 7th February 2024 - it is no longer required but highly recommended.]\nWe know it\u0026rsquo;s not easy for smaller publishers to deposit references. Read more on how to do this here. Our upcoming Metadata Manager tool will allow you to register your references at the same time as the rest of your content. This service is currently in development but let us know if you want to try it out. [EDIT 7th February 2024 - Metadata Manager has been deprecated. More info about it here.]\nReference Linking Reference Linking means adding Crossref DOI links to the reference list for journal articles on your article pages as per this example: https://0-doi-org.libus.csd.mu.edu/10.1088/1367-2630/1/1/006.\nHow it works First retrieve DOIs for all available references either through our human or machine interfaces. Then make sure you use the DOI link in your references and on your article landing page using the Crossref DOI display guidelines.\nWhy it’s useful Reference Linking:\nEnables you to link to more than 10,000 publishers without having to sign multiple agreements Helps with discoverability, because DOIs don’t break if implemented correctly Displays your DOIs as URLs so that anyone can copy and share them Makes your content more useful to readers Drives traffic to your website from other publishers. Is it obligatory? Yes, within a short time after becoming a member you should be including references.\nRegistering References Registering references means submitting them as part of your Crossref metadata deposit as per this example: https://0-www-crossref-org.libus.csd.mu.edu/xml-samples/article_with_references.xml.\nHow it works Whenever you register content with us, make sure you include your references in the submission. You can also add references to your existing content via a metadata redeposit, or our resource-only deposit, or our Simple Text Query form.\nWhy it’s useful References registered as part of your metadata:\nMake your content more discoverable Make your content richer and more useful Are required to participate in our Cited-by service (this service shows what articles cite your article) Enable discovery of research Enable evaluation of research Highlight your content’s provenance Help with citation counts. Is it obligatory? No, it’s optional, but strongly encouraged. It is required if you are participating in our Cited-by service. [EDIT 7th February 2024 - it is no longer required but highly recommended].\n
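To make the benefit concrete, here is a minimal, hypothetical sketch (not from the original post) of how registered references surface for everyone through the public REST API; the DOI is simply the example article linked above, and any record whose references are registered and openly available would behave the same way:

```python
import requests

# Sketch only: read back the references that a member registered for a work.
doi = "10.1088/1367-2630/1/1/006"  # arbitrary example DOI from the post
work = requests.get(f"https://api.crossref.org/works/{doi}").json()["message"]

print("references registered:", work.get("reference-count"))
for ref in work.get("reference", [])[:5]:
    # Each reference may carry a matched DOI and/or an unstructured citation string.
    print(ref.get("DOI"), "|", ref.get("unstructured"))
```
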
If you have any questions about reference linking or registering your references please get in touch.\n", "headings": ["Reference Linking","Registering references","Reference Linking","How it works","Why it’s useful","Is it obligatory?","Registering References","How it works","Why it’s useful","Is it obligatory?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/reference-linking/", "title": "Reference Linking", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ssp-roadtrip-for-the-crossref-team/", "title": "SSP roadtrip for the Crossref team", "subtitle":"", "rank": 1, "lastmod": "2018-05-30", "lastmod_ts": 1527638400, "section": "Blog", "tags": [], "description": "What do you think of when you think of Chicago? Deep dish pizza? Art Deco architecture?\nWell for one week only this year you can add scholarly publishing to the list as the #SSP2018 Conference comes to town. Some Crossref people are excited to be heading out for the conference, and we\u0026rsquo;re looking forward to meeting as many of our members as possible.\nCome along to stand 212A and talk to Anna Tolwinska about Participation Reports.", "content": "What do you think of when you think of Chicago? Deep dish pizza? Art Deco architecture?\nWell for one week only this year you can add scholarly publishing to the list as the #SSP2018 Conference comes to town. Some Crossref people are excited to be heading out for the conference, and we\u0026rsquo;re looking forward to meeting as many of our members as possible.\nCome along to stand 212A and talk to Anna Tolwinska about Participation Reports. Although this new tool is still in beta, she\u0026rsquo;s giving SSP attendees a sneak peek and the chance to get an early look at whether they (and over 10,000 other members) are registering the ten key elements that add context and richness to the basic required metadata. You\u0026rsquo;ll get real insight into what metadata you\u0026rsquo;re registering, even if this work is done by a third party or other department.\nThinking about registering preprints or including data citations? Want to find out more about our forthcoming Event Data service? Our product director Jennifer Lin will be able to give you the ins and outs of all our latest services so do keep an eye out for her at the conference.\nSpeaking of third parties, I\u0026rsquo;ll be popping along to the \u0026ldquo;Thinking the Unthinkable, or How to Prepare for a Platform Migration\u0026rdquo; pre-meeting seminar on Wednesday with copies of our new Platform Migration Checklist and lots of hints and tips to help form a new platform migration guide, which will help members have a smooth transition when thinking of moving providers.\nShayn Smulyan will be attending the ORCID breakfast meeting on Thursday morning, so come and say hello if you have any questions about how ORCID and Crossref work together.
Shayn is one of our support specialists, so he\u0026rsquo;ll be able to help you with any other technical queries you may have.\nOur tech director Chuck Koscher will be keen to home in on members\u0026rsquo; advanced questions about Content Registration, citation matching, and any and all schema deets. So seek him out if you have deep technical questions.\nWant to find out more about Metadata 2020, the new campaign to improve metadata for research? Rosa Morais Clark will be able to give you the lowdown, and even better - she has stickers!\nAnd don\u0026rsquo;t feel left out if you aren\u0026rsquo;t a member but work closely with Crossref. Jennifer Kemp will be on hand to answer all your metadata use and reuse questions; she\u0026rsquo;ll be looking forward to chatting with all kinds of service providers, platforms, and tools.\nWe\u0026rsquo;re looking forward to seeing you there!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-good-is-your-metadata/", "title": "How good is your metadata?", "subtitle":"", "rank": 1, "lastmod": "2018-04-26", "lastmod_ts": 1524700800, "section": "Blog", "tags": [], "description": "Exciting news! We are getting very close to the beta release of a new tool to publicly show metadata coverage. As members register their content with us they also add additional information which gives context for other members and for services that help with, for example, discovery or analytics.\nRicher metadata makes content useful. Participation reports will give\u0026mdash;for the first time\u0026mdash;a clear picture for anyone to see the metadata Crossref has. This is data that\u0026rsquo;s long been available via our Public REST API, now visualized.\n", "content": "Exciting news! We are getting very close to the beta release of a new tool to publicly show metadata coverage. As members register their content with us they also add additional information which gives context for other members and for services that help with, for example, discovery or analytics.\nRicher metadata makes content useful. Participation reports will give\u0026mdash;for the first time\u0026mdash;a clear picture for anyone to see the metadata Crossref has. This is data that\u0026rsquo;s long been available via our Public REST API, now visualized.\nWho are participation reports for? Everyone! It\u0026rsquo;s an opportunity to evaluate and educate. See for yourself where the gaps are, and what our members could improve upon. Understand best practice through seeing what others are doing, and learn how to level-up.\nMonitor what metadata is being registered, even if this work is done by a third party or another department. And see what other organizations in scholarly communications see when they use Crossref metadata in their research, tools, and services.\nThe beta release—expected after acceptance testing some time late May—will let anyone look up any of our 15,000+ members and see whether they are registering ten key elements that add context and richness to the basic required bibliographic metadata.\nWhat do we mean by ‘richer metadata’? The ten checks for the beta will be:\nReferences ~Open references~ [EDIT 6th June 2022 - all references are now open by default].
ORCID iDs Funder IDs Funding award numbers Crossmark metadata License information Full text links Similarity Check URLs Abstracts Each of these additional metadata elements helps increase discovery and wider and more varied use\u0026mdash;and usefulness\u0026mdash;of research outputs.\nWhy are we doing this and what do we mean by ‘participation’? Over the years when we’ve talked with our members about their metadata, we learned that many just can’t be certain exactly how they’re performing. It could be that they’ve outsourced Content Registration to another service provider or larger publisher, or it could be they just weren’t previously aware they could collect and share authors’ ORCID iDs, Funder IDs, and so on. So our primary aim is to give our members the information they need in order to make a case for improving their metadata records. Each check will come with information about why it is important and guidance on how to improve. Additionally, with the growing use of Crossref as a central source of metadata for the research community, it’s in everyone’s interest to be as transparent as possible about what metadata we have - and encourage greater understanding of what’s possible.\nMember ‘participation’ is an important concept. Crossref distinguishes itself from other DOI registration agencies by providing this richer infrastructure which allows for things like funding information, license information, links between data and preprints, and so on—all contributing to the research nexus for everyone’s benefit.\nMembership of Crossref is not just about getting a persistent identifier for your content, it’s about placing your content in context by providing as much metadata as possible and looking after it long-term.\nHere’s a sneak preview of what the report will look like:\nSo whether you’re a member who wants to run a “health check” on your own metadata, or a consumer of metadata interested in what’s available and from whom, watch this space for Participation Reports!\nWould you like a heads-up on your report, pre-beta? Beta will be released some time in May or June this year, following acceptance testing with members and others. Then we’re looking for about 20 members to have a half-hour phone call with a walk-through ‘health check’. Please contact Anna if you’d like to schedule one.\n", "headings": ["Who are participation reports for? 
Everyone!","What do we mean by ‘richer metadata’?","Why are we doing this and what do we mean by ‘participation’?","Would you like a heads-up on your report, pre-beta?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/henry-thompson/", "title": "Henry Thompson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/jonathan-rees/", "title": "Jonathan Rees", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/programming/", "title": "Programming", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/redirecting-redirection/", "title": "Redirecting redirection", "subtitle":"", "rank": 1, "lastmod": "2018-04-24", "lastmod_ts": 1524528000, "section": "Blog", "tags": [], "description": "Crossref has decided to change the HTTP redirect code used by our DOIs from 303 back to the more commonly used 302. Our implementation of 303 redirects back in 2010 was based on recommended best practice for supporting linked data identifiers. Unfortunately, very few other parties have adopted this practice.\n", "content": "Crossref has decided to change the HTTP redirect code used by our DOIs from 303 back to the more commonly used 302. Our implementation of 303 redirects back in 2010 was based on recommended best practice for supporting linked data identifiers. Unfortunately, very few other parties have adopted this practice.\nWhat’s more, because using a 303 redirect is still unusual, it tends to throw SEO tools into a tizzy- and we spend a lot of time fielding SEO questions from our members about our use of 303s.\nTL;DR At this point, we need to emphasise that we have never seen our use of 303s actually affect page rankings. But at the same time, use of 303 redirects has not had wider uptake. Maintaining this quixotic behaviour just isn’t worth the effort. We hope that, in the future, we can use other techniques (e.g. signposting \u0026amp; cite-as) to achieve some of the things that 303 was supposed to do.\nNote that these changes will not affect users or machines using DOIs. The change should be entirely transparent.\nBelow we provide some background to our decision and after that we provide some detailed technical notes from Jonathan Rees and Henry Thompson who have been very kind in helping to provide Crossref technical guidance on how we can help DOIs best support linked open data and adhere to HTTP best practice.\nBackground Back in 2010, Crossref, DataCite (and later, several other RAs) responded to concerns that DOIs were not \u0026ldquo;linked-data friendly.\u0026rdquo; There were three problems with DOIs at that time:\nIt was not clear that DOIs could be used and expressed as HTTP URIs. 
There was no standard way to ask a DOI to return a machine-readable representation of the data. It wasn’t always clear if the DOI resolved to \u0026ldquo;the thing\u0026rdquo; (e.g. an article) or “something about the thing” (e.g. a landing page). On the advice of several people in the linked data community, we proposed some options for fixing this. And we finally settled on:\nRecommending that Crossref DOIs be expressed and displayed as HTTP (now HTTPS) URIs. This made it clear that DOIs could be used with HTTP applications. Enabling DOI registration agencies to support content negotiation. This allowed RAs to support providing machine-readable representations of the data associated with a DOI. Changing the underlying redirect code from the normal 302 to 303. This was designed to clarify what, at the time, was true - that most DOIs resolved to a landing page, not the article itself. By any practical measure, machine use of DOIs has exploded since we made these decisions back in 2010. Crossref’s APIs and content negotiation handle over 800 million requests for machine-readable data a month. Our sibling organisation, DataCite, has also seen a huge growth in machine use of DOIs. Many applications, from bibliographic management tools, to authoring systems and CRIS systems, make use of machine actionable DOIs all the time. So clearly our work to promote DOIs as machine actionable identifiers is working, but we are certain that our current use of 303 redirects has nothing to do with this growth.\nFirst of all, as we said, very few parties have actually subscribed to the notion of using 303s to help distinguish \u0026ldquo;the thing\u0026rdquo; from “something about the thing”.\nSecondly, even if they did try to rely on 303s to make this distinction, they would quickly get confused because the DOI is so often just the first in a chain of redirects which do not implement the same semantic distinction. At this point we should be clear - Crossref thinks these kinds of long redirect chains are a bad idea for two main reasons:\nThey slow down resolution. They increase the number of potential failure points between the DOI and the item it resolves to. But we also cannot legislate them away. They exist. And in the real world you will find plenty of DOIs that do a 303 redirect to a system that, in turn, does a 302 redirect to a system that does a 301 redirect and…eventually ends up someplace returning a 200. You get the picture. How on earth is a machine supposed to interpret a 303-\u0026gt;302-\u0026gt;301-\u0026gt;302 redirect chain?\nFurthermore - nowadays, after following this chain of redirects, you will often find yourself on a \u0026ldquo;page\u0026rdquo; that is both a landing page and the article itself. Dynamic, one-page applications can simply morph the one into the other without the use of additional HTTP requests.\nIn other words, using 303s is not helping machines interpret what the DOI is pointing at. And yet, people seem to be making good use of machine actionable DOIs and they are not complaining much about it.\nPersonally, I might have just been happy to switch back to using 302s simply so that I could cut down on my conversations with SEO hacks. But that wouldn’t be a principled approach. In 2010 we spent a lot of time considering the initial switch to 303s, so we needed to consult with the LOD community on a potential switch back to 302s.
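For illustration only (this sketch is not part of the original post), the two behaviours discussed here, the hop-by-hop redirect codes and content negotiation, can be observed directly; the DOI below is an arbitrary example and the media type is the CSL JSON type supported by DOI content negotiation:

```python
import requests
from urllib.parse import urljoin

# Sketch: walk a DOI's redirect chain so each 30x status code is visible,
# then ask the same identifier for machine-readable metadata instead.
doi_url = "https://doi.org/10.1088/1367-2630/1/1/006"  # arbitrary example DOI

url = doi_url
for _ in range(10):  # cap the number of hops we follow
    resp = requests.get(url, allow_redirects=False)
    print(resp.status_code, url)
    if resp.status_code not in (301, 302, 303, 307, 308):
        break
    url = urljoin(url, resp.headers["Location"])  # Location may be relative

# Content negotiation: the resolver returns citation metadata (CSL JSON)
# rather than redirecting to a landing page.
meta = requests.get(doi_url, headers={"Accept": "application/vnd.citationstyles.csl+json"})
print(meta.json().get("title"))
```

Run against real DOIs, a trace like this is exactly where the mixed 303/302/301 chains described above show up, whatever the first hop returns.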
At the January 2018 PIDapalooza I had a chance to talk to Henry Thompson about the 302/303 dilemma we faced, and he along with Jonathan Rees very generously provided the following feedback.\nBest practices for HTTP redirection by persistent identifier resolvers: 302 vs. 303 Jonathan Rees (MIT CSAIL, https://orcid.org/0000-0001-7694-8250) Henry Thompson (University of Edinburgh, School of Informatics, https://orcid.org/0000-0001-5490-1347) If one goes to the trouble to organize an identifier system, then the desire that such a system should last as long as possible leads one to aspirationally say it’s a persistent identifier (PID) system. The unwillingness of the major browser suppliers to implement new URI schemes for PIDs initially hindered their use on the Web and this in turn inhibited widespread adoption. More recently a number of PID approaches have enjoyed very rapid growth as a result of a compromise: these PIDs participate in the World Wide Web by defining simple conversion rules mapping identifiers to actionable (\u0026lsquo;http:\u0026rsquo; and/or \u0026lsquo;https:\u0026rsquo;) forms and providing resolution servers that redirect requests for such forms to the appropriate destination. This approach has been widely adopted and is very successful, because it is so useful. An identifier’s actionable form leads, via the HTTP protocol and one or more redirections, to a web page that bears on the ground identity of the associated entity – or perhaps even directly to the entity itself, if the system is one for document entities that are naturally provided as web pages. The nature of the retrieved web page varies from one system to the next.\nA confusion arose, however, over claims in various technical specifications (URIs, HTTP, Web Architecture) that the normal case is for the protocol to yield a \u0026ldquo;representation\u0026rdquo; of the “resource” “identified” by the URI. None of these terms is adequately defined by the specifications, and initially the language was not taken as normative. Those deploying identifier systems took the HTTP “resource” to be the entity associated with an identifier, and understood the “resource” as being “identified” by the URI, but it was never clear what was, or wasn’t, a “representation” of a given entity/resource: a description of the resource, the resource itself, a version of the resource, instructions on how to find the resource, etc. Sixteen years ago, in an attempt to clarify the intent of this part of the theory of URIs, and to allow applications to usefully and uniformly exploit the idea that an HTTP 200 response must deliver a “representation” of the “resource”, Tim Berners-Lee asked the W3C Technical Architecture Group to consider what came to be known as the httpRange-14 issue. It’s now 13 years after the TAG gave advice which almost no one was happy with, and 5 years after work on issue httpRedirections-57 (which superseded httpRange-14) ground to a halt. There’s still no consensus on whether it’s OK to return landing pages with a 200 status in response to requests for pictures or publications, but the Web seems to be working nonetheless, and no one seems to be bothered much anymore.\nThe provision of HTTP-based resolution services has stimulated widespread support for the use of identifier systems with Web resolution, particularly in the scholarly journal publication context. Those setting up HTTP resolvers responsible for identifier systems must decide which HTTP response code should be used.
The TAG’s advice sows doubt on the use of the 200 response code when the response would have been a landing page, and many resolvers avoid 200 regardless and use redirection for administrative purposes, for example\n‘https://0-dx-doi-org.libus.csd.mu.edu/10.1109/5.771073’ to\n‘http://0-ieeexplore-ieee-org.libus.csd.mu.edu/document/771073/?reload=true’ for the DOI\n‘10.1109/5.771073’, or ‘https://identifiers.org/uniprot/A0A022YWF9’ to\n‘http://www.uniprot.org/uniprot/A0A022YWF9’ for the Uniprot identifier\n‘A0A022YWF9’.\nSo the response should be a redirection, but what kind, 301, 302, or 303? (Or 307, which is almost the same as 302.) A 301 redirect seems to say that the URI is not persistent (since its target is deemed \u0026ldquo;more persistent\u0026rdquo;). A 302 redirect seems to say that the response could have come via a 200, and so suffers the same fate as 200. That leaves 303, as hinted at in the TAG’s advice. This idea got some traction: Ten years ago a Semantic Web interest group promoted the TAG’s advice in a published note, and seven years ago one of us wrote a blog post giving the same advice for resolvers for PIDs in publishing.\nHowever, not only is there neither consensus nor general utility around this strict understanding of the use of the various response codes – that is, that resolution to a landing page is inconsistent with a 200 (and a posteriori therefore with a 302) – but also the range of usage patterns for redirection of HTTP requests has grown and ramified over time as the Web has grown and become more complex. It’s on the face of it unlikely that a mere three response codes can capture all the resulting complexity or cover the space of outcomes (in terms of e.g. what ends up in the browser address bar or what search engines index a page under) that a page owner might like to signal.\nWe find in practice that some PID redirections are ending up (usually after further publisher-local redirects) at the \u0026ldquo;identified\u0026rdquo; document, some at landing pages, and some at one or the other depending on the requesting site, for example in the case of paywalled material.\nIn the absence of a rethinking of the whole 3xx space, it seems to us that only the 301 vs. 302 distinction (roughly, 301 = permanent = please fix the link, and 302 = temporary = don’t change the link) is well understood and more or less consistently treated, whereas for 303, web servers are not very consistent and both search engine and citation crawler behaviours are at best inconsistent and at worst downright unhelpful.\nSo, we believe it is in both users’ and publishers’ interests for resolvers of actionable-form PIDs to use 302 redirects, not 303.\nIf we want to help machines better understand the resource that a DOI points at, we have to explore using more nuanced mechanisms.\nJust using 302 for the first redirect doesn\u0026rsquo;t do everything necessary to effectively support the emerging PID+redirection architecture. It\u0026rsquo;s at the end of the redirect chains that we need more: a standardised way to find the PID back at the start of the chain. The \u0026lsquo;cite-as\u0026rsquo; proposal does exactly this, and we hope it\u0026rsquo;s quickly approved and widely adopted. Once that happens, a proposal for augmenting browser (and API) behaviour to prefer, or at least offer, the \u0026lsquo;cite-as\u0026rsquo; link for bookmarking and copying will be needed.\n", "headings": ["TL;DR","Background","Best practices for HTTP redirection by persistent identifier resolvers: 302 vs. 
303"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/gavin-reddick/", "title": "Gavin Reddick", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-8-with-researchfish/", "title": "Using the Crossref REST API. Part 8 (with Researchfish)", "subtitle":"", "rank": 1, "lastmod": "2018-04-23", "lastmod_ts": 1524441600, "section": "Blog", "tags": [], "description": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Gavin Reddick, Chief Analyst at Researchfish about the work they’re doing, and how they’re using our REST API as part of their workflow.\n", "content": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Gavin Reddick, Chief Analyst at Researchfish about the work they’re doing, and how they’re using our REST API as part of their workflow.\nIntroducing Researchfish Researchfish is the world’s leading platform for the reporting of the outputs, outcomes and impacts of funded research. It is used by over 100 funding organisations in Europe, North America and Australasia and currently tracks around €50 billion of funding, across 125,000 grants. Researchers have reported around 2.5 million attributed outcomes in Researchfish and roughly half of these are publications with the other half being collaborations, further funding, data sets, policy influences, engagement activities etc.\nFunders use Researchfish to ask grantees to report on the outcomes of their grant and Researchfish makes it easy for researchers to do this in a structured way. Researchfish seeks to improve the quality and robustness of the evidence base available for evaluation. It works with funders, research organisations and researchers to present, explain and evaluate the impact of research across all disciplines and a wide range of output types.\nHow is the Crossref REST API used in Researchfish? Search\nAs publications are a major output of research it is important to make the reporting of those publications be as easy as possible and quality of the information on those publications as high as possible. Researchfish integrates with a number of publication APIs, including Crossref, which enables users to enter a number of DOIs or search by author, title, etc. to find their publication.\nDirect Harvest\nResearchfish uses funding acknowledgements in the Crossref metadata to add publications to researchers’ portfolios and report the publications as arising from the grant. If the acknowledgement exists it’s important to use it instead of asking researchers to report the same thing twice.\nInteroperability\nResearch organisations can upload publications to Researchfish on behalf of researchers, re-using information from their local systems. 
We use the Crossref REST API to validate the data provided by universities before uploading.\nMetadata Enrichment – Open Access\nWe use the license and embargo period information in the Crossref metadata to help understand the open access status of publications and whether they meet any policy requirements, without researchers having to take any steps to report in this complex area.\nMetadata Enrichment – Normalisation/deduplication\nAs Researchfish allows users to add information from lots of different sources it is very important to normalise the data and prevent the same publication being reported multiple times in different ways. We use the Crossref REST API as part of this process.\nWhat are the future plans for Researchfish? We are looking to expand the range of integrations to support non-publication outputs and allow some of the same functionality that we have built for publications. We already have integrations to support the reporting of patents, collaborations, further funding and next destinations but are looking to enhance these, along with expanding links to data sets, clinical trials, software and spin out companies.\nWhat else would Researchfish like to see in Crossref? Crossref is an excellent resource and most of our wish list would be to see more uptake of existing fields e.g. retractions and the ability to use them more flexibly in the REST API. We would also like to see a little more consistency in some of the metadata – publication type is the area that seems to cause the most confusion, particularly around conference proceedings and clinical trials.\nThank you Researchfish! If you would like to contribute a case study on the uses of Crossref Metadata APIs please contact the Community team.\n", "headings": ["Introducing Researchfish","How is the Crossref REST API used in Researchfish?","What are the future plans for Researchfish?","What else would Researchfish like to see in Crossref?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/aliaksandr-birukou/", "title": "Aliaksandr Birukou", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/conference-ids/", "title": "Conference IDs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/pids-for-conferences-your-comments-are-welcome/", "title": "PIDs for conferences - your comments are welcome!", "subtitle":"", "rank": 1, "lastmod": "2018-04-19", "lastmod_ts": 1524096000, "section": "Blog", "tags": [], "description": "Aliaksandr Birukou is the Executive Editor for Computer Science at Springer Nature and is chair of the Group that has been working to establish a persistent identifier system and registry for scholarly conferences. Here Alex provides some background to the work and asks for input from the community:\nRoughly one year ago, Crossref and DataCite started a working group on conference and project identifiers. 
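As an aside before moving on to conferences: the Researchfish workflows described above (looking up a DOI, validating records, harvesting works that acknowledge a grant, and checking licenses) all map onto ordinary Crossref REST API calls. A minimal sketch, assuming the public api.crossref.org endpoints and using placeholder DOI, funder, and award values:

```python
import requests

BASE = "https://api.crossref.org"
HEADERS = {"User-Agent": "example-harvester/0.1 (mailto:you@example.org)"}  # identify yourself politely

# 1. Validate / enrich a single DOI (placeholder DOI shown).
work = requests.get(f"{BASE}/works/10.5555/12345678", headers=HEADERS).json()["message"]
print(work.get("title"), work.get("license", []))  # license entries carry URL, start date, delay-in-days

# 2. Direct harvest: works whose funding metadata cites a given funder and award number
#    (both values are placeholders; this assumes the award.funder / award.number filters).
params = {"filter": "award.funder:10.13039/100004440,award.number:207467", "rows": 5}
items = requests.get(f"{BASE}/works", params=params, headers=HEADERS).json()["message"]["items"]
for item in items:
    print(item["DOI"], (item.get("title") or ["(no title)"])[0])
```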
With this blog post, we would like to share the specification of conference metadata and Crossmark for proceedings and are inviting the broader community to comment.\n", "content": "Aliaksandr Birukou is the Executive Editor for Computer Science at Springer Nature and is chair of the Group that has been working to establish a persistent identifier system and registry for scholarly conferences. Here Alex provides some background to the work and asks for input from the community:\nRoughly one year ago, Crossref and DataCite started a working group on conference and project identifiers. With this blog post, we would like to share the specification of conference metadata and Crossmark for proceedings and are inviting the broader community to comment.\nWhy are conferences important? One common misconception is that most published research appears in journals. However, next to new ways of communicating research results (blogs, presentations, …) and journals, there are also other publication options, like books, which are very important in the humanities, or conference proceedings, which are very important in computer science and a couple of related disciplines. Conference proceedings are collections of journal-like papers, often undergoing a more competitive peer review process than in journals. For instance, looking at original computer science research indexed in Scopus and published in 2012-2016, 63% of articles appeared in proceedings, while only 37% were published in journals. DBLP, one of the most important indexing services in CS, lists more than two million conference papers organized in ~5,400 conference series.\nSo, while it is true that CS has a significant share of conference proceedings, conferences are also relevant in many other disciplines which do not publish formal proceedings. For instance, inSPIRE contains ~23,000 conferences in high-energy physics, and the American Society of Mechanical Engineers (ASME) publishes roughly 100 proceedings volumes annually.\nWhy do we need an open persistent ID for a conference or a conference series? With publishers, learned societies, indexing services, libraries, conference management systems, research evaluation and funding agencies using conferences directly or indirectly in their daily work, a common vocabulary would simplify data processing and reporting, and minimize errors. Right now, a publisher assigns a unique conference ID to the conference to be published, then an indexing service does the same, then it is assigned yet again in a library. Wouldn\u0026rsquo;t it be easier to do this at the very beginning of the process, when the conference planning starts, and keep the same identifier through the whole conference lifecycle?\nThe joint Crossref and DataCite group on conference and project identifiers has discussed this topic at half a dozen calls and various PID community meetings (PIDapalooza, FORCE conferences, AAHEP Information Provider Summit). The result of those discussions is a draft of the specification of conference metadata and Crossmark for proceedings.\nThe document first defines the concepts of a conference, conference series, joint and co-located conferences. It then introduces the information we want to store about those entities, e.g., the ID, name, acronym, other IDs, URL and the maintainer of the conference series, or the ID, conf series ID, number, dates, location, and URL for conferences. 
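Purely as an illustration of the fields just listed (these are informal labels, not the working group's final schema), a conference series and a single conference instance might be sketched like this:

```python
# Illustrative only: a rough rendering of the series- and conference-level fields
# enumerated in the draft specification. All identifiers and values are hypothetical.
conference_series = {
    "id": "example-series-id",            # hypothetical PID for the series
    "name": "Example Conference on Scholarly Infrastructure",
    "acronym": "ECSI",
    "other_ids": ["dblp:conf/ecsi"],      # hypothetical cross-reference to another index
    "url": "https://example.org/ecsi",
    "maintainer": "Example Learned Society",
}

conference = {
    "id": "example-conference-id",        # hypothetical PID for one edition
    "series_id": conference_series["id"],
    "number": 12,
    "dates": {"start": "2018-09-03", "end": "2018-09-05"},
    "location": "Toronto, Canada",
    "url": "https://example.org/ecsi/2018",
}
```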
Such metadata can be submitted to Crossref and DataCite by conference organizers or publishers on their behalf and linked to the existing proceedings metadata, where appropriate. It can be then used for linking research outputs from a conference (beyond formal proceedings), recognizing reviewers via services such as ORCID and Publons, computing metrics of a conference series, conference disambiguation in indexing services and ratings (CORE, QUALIS, CCF), and so on.\nThe second part of the document introduces Crossmark for conference proceedings. Its goal is to structure and preserve the information about the peer review process of a conference as declared by the general or program chairs. Depending on how much information is available from the conference organizers, one can use the basic or extended versions of Crossmark.\nIn order to comment, please open the specification and leave comments using “comment” feature of Google Docs. The draft remains open for comments till the 31st of May 2018.\nNext steps After hearing from YOU, we will update the document to reflect the community comments. In parallel, we start a subgroup discussing the governance models, looking into whether we need a new membership category at Crossref, what fees should be covered, etc.\n", "headings": ["Why are conferences important?","Why do we need an open persistent ID for a conference or a conference series?","Next steps"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/do-you-want-to-be-on-our-board/", "title": "Do you want to be on our Board?", "subtitle":"", "rank": 1, "lastmod": "2018-04-18", "lastmod_ts": 1524009600, "section": "Blog", "tags": [], "description": " Do you want to effect change for the scholarly community?\nThe Crossref Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2018 election.\n", "content": " Do you want to effect change for the scholarly community?\nThe Crossref Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2018 election.\nThe key responsibilities of the Board are:\nSetting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services. Some of the decisions the board has made in recent years include: Introduction of the Metadata APIs Plus service (to provide a paid-for premium service for machine access to metadata); Updating the policy on open references (to increase links so that more readers can access content); Establishing the OI Project (to create a persistent Organization Identifier); Inclusion of preprints in the Crossref metadata; and Approval to develop Event Data (which will track online activity from multiple sources). What is expected of a Crossref Board member? Board members should be able to attend all board meetings, which occur three times a year in different parts of the world. If you are unable to attend in person you must be able to attend via telephone.\nBoard members must:\nbe familiar with the three key responsibilities listed above, actively participate and contribute towards discussions, and read the board documents and materials provided, prior to attending meetings. How to submit an expression of interest to serve on the Board We are seeking people who know about scholarly communications and would like to be part of our future. 
If you have a vision for the international Crossref community, we are interested in hearing from you.\nIf you are a Crossref member, are eligible to vote, and would like to be considered, you should complete and submit the expression of interest form with both your organization\u0026rsquo;s statement and your personal statement before 18 May 2018.\nIt is important to note it is your organization that is the Crossref member—and therefore the seat will belong to your organization.\nAbout the election and our Board We have a principle of “one member, one vote”; our board comprises a cross-section of members and it doesn’t matter how big or small you are, every member gets a single vote. Board terms are three years, and one third of the Board is eligible for election every year. There are five seats up for election in 2018.\nThe board meets in a variety of international locations in March, July, and November each year. View a list of the current Crossref Board members and a history of the decisions they’ve made (motions).\nThe election opens online in September 2018 and voting is done by proxy online, or in person, at the annual business meeting during ‘Crossref LIVE18’ on 13th November 2018 in Toronto, Canada. Election materials and instructions for voting will be available to all Crossref members online in September 2018.\nThe role of the Nominating Committee The Nominating Committee meets to discuss change, process, criteria, and potential candidates, ensuring a fair representation of membership. The Nominating Committee is charged with selecting a slate of candidates for election from those who have expressed an interest.\nThe selection of the slate (which is likely to exceed the number of open seats) is based on the quality of the expressions of interest and maintaining the balance and diversity of the board—especially in areas of organizational size, gender, geography and sector.\nThe Committee is made up of three board members not up for election, and two non-board members. The current Nominating Committee members are:\nMark Patterson, eLife (Chair); Chris Shillum, Elsevier; Amy Brand, MIT Press; Vincent Cassidy, The Institution of Engineering \u0026amp; Technology (IET); and Claire Moulton, The Company of Biologists. Our board needs to stay truly representative of Crossref’s global and diverse membership of organizations who publish. Please submit your statements of interest, or any questions, to me at lhart@crossref.org.\n", "headings": ["Some of the decisions the board has made in recent years include:","What is expected of a Crossref Board member?","How to submit an expression of interest to serve on the Board","About the election and our Board","The role of the Nominating Committee"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/hear-this-real-insight-into-the-inner-workings-of-crossref/", "title": "Hear this, real insight into the inner workings of Crossref", "subtitle":"", "rank": 1, "lastmod": "2018-04-01", "lastmod_ts": 1522540800, "section": "Blog", "tags": [], "description": "You want to hear more from us. We hear you. We’ve spent the past year building Crossref Event Data, and hope to launch very soon. Building a new piece of infrastructure from scratch has been an exciting project, and we’ve taken the opportunity to incorporate as much feedback from the community as possible. We’d like to take a moment to share some of the suggestions we had, and how we’ve acted on them.", "content": "You want to hear more from us. We hear you. 
We’ve spent the past year building Crossref Event Data, and hope to launch very soon. Building a new piece of infrastructure from scratch has been an exciting project, and we’ve taken the opportunity to incorporate as much feedback from the community as possible. We’d like to take a moment to share some of the suggestions we had, and how we’ve acted on them.\nWe asked a focus group “What one thing would you change?”. In hindsight, we could have done a better job with the question. We did get some enlightening answers but\u0026mdash;for legal and practical reasons\u0026mdash;we are unable to end either world hunger or global conflict, or do any of the other things we were invited to do. So we went back to our focus group and asked “What one thing would you change about Crossref?”.\nThe answers were illuminating. Some of you wanted mundane things like more data dumps. A disappointing number of people wanted us to put the capital ‘R’ back in our name. But two things we heard consistently, loud and clear, were:\n“I want to hear more from Crossref” “I want to know more about what’s going on inside Crossref” One respondent said:\nI like the newsletters, and the Twitter visuals are nice enough, but I want to hear, you know, more from them.\nAnother:\nCrossref is your typical quiet DOI Registration Agency. They make a big thing about being the background infrastructure you don’t notice. But infrastructure doesn’t have to be quiet. I live next to the M25, and I can tell you, that’s the sound of success. I mean, it’s loud.\nOne final quote which clinched it for us:\nThe outreach team is doing a great job with their multilingual videos. But you can never cover every world language. In today’s connected world, you should be thinking about the universal language.\nShe clarified:\nNo, I don’t mean XML.\nWe took this advice to heart. When we were building Crossref Event Data, we baked these features right in. Now you can hear what’s going on inside Crossref, any time, day or night.\nIntroducing the Crossref Thing Action Service! Turn up your speakers (about half-way, it would be foolhardy to turn them too high) and visit:\nlive.eventdata.crossref.org/thing-action-service.html It’s optimized for Google Chrome, but we’ve tested it in Firefox and Safari.\nThe Thing Action Service shows you, in excruciating sonorous detail, every single action that happens inside the Crossref Event Data system. Every time we receive live data from Twitter or Wikipedia. Every time we check a DOI. Every time we check an RSS feed. Every time we find a link to our Registered Content on the web.\nIn a pioneering move within the scholarly publishing space, you can hear the data as it’s being processed, live. Furthermore, we think we are the first DOI Registration Agency to offer our services in stereo.\nJohn Chodacki, Professional Working Group Chair, said:\nWe welcome this innovation. From my experience Chairing, well, everything, I’m certain that hearing-impaired users will like it especially.\nSo sit back, put the Thing Action Service on the speakers, and relax. You may find it difficult at first, but as you let the sound waves wash over you, think of all that data in flight. That beep could be someone criticizing the article you wrote on Twitter. But don’t worry, the next one might be someone defending it.\nThink of it as musique concrète. That’s the Art of Persistence.\n", "headings": ["You want to hear more from us. 
We hear you.","Introducing the Crossref Thing Action Service!","live.eventdata.crossref.org/thing-action-service.html"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/launch/", "title": "Launch", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/hello-meet-event-data-version-1-and-new-product-manager/", "title": "Hello, meet Event Data Version 1, and new Product Manager", "subtitle":"", "rank": 1, "lastmod": "2018-03-29", "lastmod_ts": 1522281600, "section": "Blog", "tags": [], "description": "I joined Crossref only a few weeks ago, and have happily thrown myself into the world of Event Data as the service’s new product manager. In my first week, a lot of time was spent discussing the ins and outs of Event Data. This learning process made me very much feel like you might when you’ve just bought a house, and you’re studying the blueprints while also planning the house-warming party.\n", "content": "I joined Crossref only a few weeks ago, and have happily thrown myself into the world of Event Data as the service’s new product manager. In my first week, a lot of time was spent discussing the ins and outs of Event Data. This learning process made me very much feel like you might when you’ve just bought a house, and you’re studying the blueprints while also planning the house-warming party.\nIf Event Data is like a house, it’s been built and we’ve recently been putting on a last coat of paint. We’re very happy to announce version 1 of the API today. This is bringing us closer to the launch (house warming party), which will officially present Event Data to the world. Further to that analogy, while I bought into the house, I wasn’t around to see it being built. That’s both incredibly exciting and a little daunting.\nVersion 1 contains fixes for some challenges we came up against. Like scalability, data modeling for Wikipedia, and polishing. Version 1 is a new release of the data, but it is the same data set you already know and love. It should solve some of the recent stability issues, for which we apologize.\nMoving forward, we expect the data model in V1 to persist and are not planning to make further large scale, fundamental changes to the Event Data API. As such, the version 1 release of the API is exceptional and a big step forward. It is important that we address these fixes before we go into production as it affects everyone who uses the service.\nSame Event Data, new address In setting up for the upcoming production service rollout, we have updated the Event Data API domain so that it is in line with Crossref’s suite of APIs. The Query API can now be found at a new URL. Here is an example query: https://0-api-eventdata-crossref-org.libus.csd.mu.edu/v1/events?rows=1\nWe have also simplified the standard query parameters in favor of a cleaner filter syntax.\nLastly, we have added a new “Mailto” parameter, just like in our REST API. It is encouraged but optional, so you are not obliged to supply it. We\u0026rsquo;ll only use it to contact you if there\u0026rsquo;s a problem.\nChanges to the Wikipedia data structure We’ve done a lot of work to use the canonical URLs for web pages to represent content as consistently as possible. 
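Returning briefly to the API itself: a minimal sketch of the example query quoted above, with the optional mailto parameter and an illustrative filter (this assumes the canonical api.eventdata.crossref.org host and the response layout described in the user guide):

```python
import requests

params = {
    "rows": 1,
    "mailto": "you@example.org",                 # optional; only used to contact you about problems
    "filter": "from-collected-date:2018-03-01",  # illustrative filter value
}
resp = requests.get("https://api.eventdata.crossref.org/v1/events", params=params).json()
for event in resp["message"]["events"]:
    print(event["source_id"], event["subj_id"], "->", event["obj_id"])
```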
This has entailed updating previously collected Events across data sources. As such, we’ve updated our Wikipedia data model to align with this. Because this update has impacted every Wikipedia Event in the system, we recommend those who have used or saved existing data from the deprecated Query API version to pull a new copy of the data. Read more about the rationale for changing the Wikipedia data model.\nUpdated data This then brings me to how we now handle updated data. Sometimes we edit Events to add new features, or we may edit Events if there is an issue processing and/or representing the data when we provision it to the community. And sometimes we must remove Events to comply with a particular data source’s terms and conditions (ex: deleted Tweets). You can read about how updates work in the user guide.\nTo make life easier moving forward, we’ve split updated Events into two API endpoints. If you are already using Event Data, you will need to make some small updates to your client(s) to align with this. The new endpoints are further described in the documentation.\nEvent Data beta group With the version 1 release we are making solid progress towards an official launch (the house-warming party!), we are quite excited to hear how you are using Event Data. Please consider [joining our beta group] (https://groups.google.com/forum/#!forum/crossref-event-data-beta-testers), if you are using the Event Data API or want to hear about updates.\nThis is also where you can read about these updates in more detail.\nFor more information and to get started with Crossref Event Data, please refer to the user guide.\nI am looking forward to seeing how Event Data is being used, and working with the community to continuously improve what we can offer through this service. Feedback is always welcome, feel free to get in touch with me at eventdata@crossref.org.\n", "headings": ["Same Event Data, new address","Changes to the Wikipedia data structure","Updated data","Event Data beta group"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-lustrum-over-the-weekend/", "title": "A Lustrum over the weekend", "subtitle":"", "rank": 1, "lastmod": "2018-03-26", "lastmod_ts": 1522022400, "section": "Blog", "tags": [], "description": "\rThe ancient Romans performed a purification rite (“lustration”) after taking a census every five years. The term “lustrum” designated not only the animal sacrifice (“suovetaurilia”) but was also applied to the period of time itself. At Crossref, we’re not exactly in the business of sacrificial rituals. But over the weekend I thought it would be fun to dive into the metadata and look at very high level changes during this period of time.\n", "content": "\rThe ancient Romans performed a purification rite (“lustration”) after taking a census every five years. The term “lustrum” designated not only the animal sacrifice (“suovetaurilia”) but was also applied to the period of time itself. At Crossref, we’re not exactly in the business of sacrificial rituals. But over the weekend I thought it would be fun to dive into the metadata and look at very high level changes during this period of time.\nCrossref provides the latest cumulative stats online. We share news about the work we do along the way in the Crossref blog, including periodic summaries such as the Executive Director’s 2017 end-of-year highlights and the annual review. 
But what follows is a brief and very informal survey of the population of inhabitants in the Crossref metadata-land for the current lustrum.\nWorks published The first thing a census typically asks is population size. We know there are new records arriving each month, with 95.7 million to date. And they do so at variable rates. But when the data is visualized, a rough yearly pattern emerges into view. (Data were collected on Mar 25, 2018; results are partial for this month.)\nEach year brings with it a significant spike, an influx of new entrants, perhaps reflecting an increase in submissions at the end of the previous year. After January, volume drops down dramatically and gradually rises once more over the course of the year. We see smaller spikes at the March, June, and September marks. (Since this was a brief exercise, I did not dive into any formal research conducted on the nature of publishing cycles.)\nMetadata Coverage The next question is a look at how the population is broken up into different demographics. For this, I analyzed four key sub-populations: ORCID, funding information, license, and abstract metadata. The following graph shows the percentage of new parties (i.e., works registered at Crossref containing these metadata) across four specific segments.\nI ran Karthik Ram’s script which employed rOpenSci’s R client for the Crossref REST API. Data are based on publication date rather than deposit date and represent all updates to the metadata record for the baseline view.\nThe census graph shows extensive empty space on the top half, indicating there is ample room for continual growth in these communities. The ORCID population is expanding the fastest, followed by license and funding. Abstracts are a minority group and quite visibly need a population boost here in Crossref-land.\nThis view does not capture the percentages across record types nor does it take into account the differential rate of growth between record types (e.g., journal article, book, report, conference proceeding, dissertation, dataset, component, posted content, peer review) as the Crossref corpus has grown. While ORCID, funding, and license information are available for all full record types (viz., excludes components), this matters for abstracts. Abstracts are part of the metadata schema for all relevant record types, which excludes those where they do not apply: datasets, components, and peer reviews. All things considered though, the relative impact on the total percentage of metadata deposited (or not deposited) is minuscule given the small sums for these works.\nCalling the real demographers \u0026amp; cartographers This mini-pseudo-lustrum was the result of a few hours of play. The graphs have raised more questions than answers. We welcome more serious and earnest efforts to dive into the metadata and conduct a more detailed, reliable investigation into the size, distribution and composition of the population through our REST API. Next month, we will roll out reports on metadata coverage based on individual members.\nThis “play” census came out of a session with Karthik Ram, one of the founders of rOpenSci, as we talked about the struggle to build better tools for researchers. (rOpenSci is an exciting and influential non-profit that builds open source software for research with a community of users and developers and educates scientists about transparent research practices.) 
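For anyone who wants to reproduce a slice of this census without R, here is a sketch using the REST API's coverage and publication-date filters (the filter names and the rows=0 count trick are the main assumptions):

```python
import requests

BASE = "https://api.crossref.org/works"
HEADERS = {"User-Agent": "coverage-census/0.1 (mailto:you@example.org)"}

def count(extra_filter=""):
    # rows=0 returns no items, only the total number of matching works.
    f = "from-pub-date:2013-01-01,until-pub-date:2017-12-31"
    if extra_filter:
        f += "," + extra_filter
    r = requests.get(BASE, params={"filter": f, "rows": 0}, headers=HEADERS)
    return r.json()["message"]["total-results"]

total = count()
for facet in ("has-orcid:true", "has-funder:true", "has-license:true", "has-abstract:true"):
    print(facet, f"{100 * count(facet) / total:.1f}% of {total} works")
```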
With each round of cocktails, it became clear that a critical subset of the issues boiled down to the problem of limited information about research publications. Why, that is what Crossref does! Indeed. Publishers register their content with Crossref and provide the metadata about the works they publish.\nOver the past few years, we have been working with our members to broaden the coverage of the metadata as well as improve their metadata quality. This issue is not exclusive to Crossref - Metadata 2020 rallies stakeholders across the research enterprise to push for change together.\nTo represent the full breadth and depth of the scholarly communications enterprise, Crossref aims to capture the richness of what our members publish through the content they register. So publishers, powerfully represent your services and make sure your metadata is complete and correct for discovery systems, indexing platforms, research evaluation systems, analytics tools, and the great number of Crossref metadata consumers far and wide.\n", "headings": ["Works published","Metadata Coverage","Calling the real demographers \u0026amp; cartographers"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-we-use-crossref-metadata/", "title": "How we use Crossref metadata", "subtitle":"", "rank": 1, "lastmod": "2018-03-26", "lastmod_ts": 1522022400, "section": "Blog", "tags": [], "description": "Bruce Rosenblum, CEO, Inera Incorporated talks about the work they are doing at Inera, and how they’re using our metadata as part of their workflow.\n", "content": "Bruce Rosenblum, CEO, Inera Incorporated talks about the work they are doing at Inera, and how they’re using our metadata as part of their workflow.\nCan you tell us little bit about Inera, and yourself ", "headings": ["Can you tell us little bit about Inera, and yourself","What problem is your software and service trying to solve?","How are you using Crossref Metadata at Inera?","What values do you pull from our APIs?","Have you built your own interface to extract this data?","How often do you extract or query data?","What are the future plans for Inera?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/new-board-chair-paul-peters-shares-our-mission/", "title": "New Board Chair Paul Peters shares our mission", "subtitle":"", "rank": 1, "lastmod": "2018-03-22", "lastmod_ts": 1521676800, "section": "Blog", "tags": [], "description": "At the end of last year, Paul Peters\u0026mdash;CEO of our member Hindawi\u0026mdash;became the new Chair of the Crossref Board. The announcement was made in Singapore at our first LIVE Annual ever held in Asia. I caught up with Paul back in London, UK, where he answered a few questions about what he hopes to bring to the Board, and to the Crossref community as a whole.\n", "content": "At the end of last year, Paul Peters\u0026mdash;CEO of our member Hindawi\u0026mdash;became the new Chair of the Crossref Board. The announcement was made in Singapore at our first LIVE Annual ever held in Asia. I caught up with Paul back in London, UK, where he answered a few questions about what he hopes to bring to the Board, and to the Crossref community as a whole.\n1. Congratulations, Paul. How delighted were you to be voted in by your fellow board members, old and new? That’s a rather leading question ;-)\nSeriously though, I am incredibly honored to have been chosen to lead Crossref’s board at such an important point in the organization’s development. 
The current composition of the board is as diverse as it has ever been, which is essential if the board is to represent Crossref’s global membership, as well as the wide range of business and publication models that our members use. This diversity on the board will help to support Crossref’s aim of encouraging innovation in scholarly communication by providing open infrastructure that benefits all researchers.\n2. You’ve been on our board for nine years. How has it changed in that time and what should the board be most proud of? When I first joined the board, Crossref was at the stage where you had successfully established persistent reference linking as a standard practice among scholarly journal publishers. And, although this was the original purpose of Crossref, it was by no means an easy task, as it required a diverse group of competing publishers to work together in building shared infrastructure for the common good.\nIn the nine years since then, I’ve seen Crossref continue to build on this core foundation of technological expertise, the trust and goodwill of its membership, and the diverse skills of its small staff. The result has been the development of important new services (such as Similarity Check) that have become an essential component of the scholarly communications system, support new record types (including both preprints and peer review reports) that are becoming increasingly important in the move towards an Open Science future, and the expansion of Crossref’s membership to include almost 10,000 members of all shapes and sizes from 114 countries around the world.\nWith regard to the board itself, I have been pleased to see Crossref undergo important changes that have provided greater transparency in the organization\u0026rsquo;s governance, as well as more active participation from its members. Last year Crossref put out an open call to invite members to put themselves forward for consideration on the board. As a result of holding its first contested election, Crossref saw a dramatic increase in the engagement of members in the election process. Not only is this important for ensuring that the board is truly representative of the diverse membership, but it will also help to actively engage a larger pool of members in the important work that lies ahead.\n3. What do you see as Crossref’s strengths and role? I believe that Crossref’s past and future success relies on two key strengths. The first is its ability to bring together a large and disparate community of organizations and individuals to create tools and services that no single organization could develop alone. People sometimes overlook how successful Crossref has been in building the trust and support of a diverse group of stakeholders, however I believe this has been an essential ingredient in the organization’s success and will be essential as Crossref develops new tools and services in the years to come.\nCrossref’s other core strength has been the expertise, passion, and ambitious vision of its staff, many of whom I have had the pleasure of knowing since my first days on the board. The ability to develop and maintain real-time infrastructure serving millions of end-users, while simultaneously developing new products and services, requires an incredible range of skills from technology and product development, to marketing, community outreach, and customer support. 
Moreover, as a growing non-profit organization with thousands of members around the world, and an international staff working across national boundaries, Crossref’s legal, financial, and administrative support team have also been an essential ingredient in the organization’s success.\n4. We’ve grown beyond just the publisher constituency to libraries, scholars, and platforms and tools, which constituencies do you see us involving next? Over time I believe that Crossref’s constituency will grow to cover all organizations that contribute to the creation and dissemination of scholarly research, although I recognize this may take several years to achieve.\nIn the short-term, I believe that research funders are the most important stakeholder group for Crossref to focus on, for the following reasons:\nFirst, with the development of the open Funder Registry and the addition of structured funding data to the Crossref registry, Crossref has already become an important provider of open infrastructure for research funders. Second, as the result of several key initiatives within the Open Science movement I believe that research funders will play an increasingly important role in determining how scholarly research outputs are created, shared, evaluated, and re-used. Therefore, the active involvement of research funders in Crossref’s membership and governance is essential. Finally, I believe that there is an important opportunity for Crossref to enable a range of new services across the research lifecycle by providing persistent identifiers and structured metadata for research grants. Given how critical grants are within the research process, I’m amazed by the lack of infrastructure to monitor, evaluate, and build upon grants as first-class research objects. In many cases there is minimal, if any, public information about the grants that have been awarded by a particular funder. Even in cases where such data is available, it is rarely structured in a way that enables it to be searched or analyzed across multiple funding agencies. In the absence of a community-driven, non-profit organization like Crossref to provide this infrastructure on an open basis, there is a risk that funders will be forced to rely on proprietary alternatives that limit how this information is used and by whom. Fortunately there are already efforts underway within Crossref to develop both the tools and the community of funders that will be required to create persistent identifiers and structured metadata for grants and other forms of research funding.\n5. What are the biggest challenges facing Crossref? I believe that Crossref’s greatest challenge will be to continue to bring together a diverse group of stakeholders, some of whom are regularly at odds with each other, in order to collaborate in developing tools and services for the benefit of the research community.\nAs challenging as it has been for Crossref to bring together competing publishers to build the shared services that we have all come to depend on, I believe that keeping the community focused on a common goal will become even more challenging as that community expands to include funders, universities, and the many other organizations involved in the scholarly communications ecosystem. However, I think that Ed and his team have as good a chance of succeeding as anyone could hope for, which is why I am so excited about Crossref’s future in the years ahead.\n6. How will things change with you as Chair? You’ll be busier I guess. 
But enough about you already, what can we expect as staff and Board? As my first order of business I’ll be getting rid of Crossref’s corporate jet, lavish office spaces, and executive chef. \u0026lt;/sarcasm\u0026gt;.\nOn a more serious note, my hope is that as Chair I will be able to work with the other members of the board in supporting Crossref’s staff as they work to achieve the ambitious goals we have set out during the past year. I believe that Crossref’s board members and staff are aligned in the desire to significantly expand the range of services Crossref provides, as well as the communities it serves.\nThe board still has an important role to play in shaping the organization’s strategic vision, while giving staff ample space to execute on this vision. Said another way, I hope to enable some lively strategic conversations among the board while making sure that we don’t get in the way of Ed and his team once it’s time to put ideas into action.\nOn a more personal note, I hope to be a good sounding board for Ed on any issues that he faces, either internally or externally, on the road ahead. Given my own experience in leading a growing organization through a period of significant change, I know how important it can be to have someone to talk to when difficult challenges arise, which they inevitably will. I hope that I can be a good advisor\u0026mdash;and also a good friend\u0026mdash;to Ed as he leads Crossref into the exciting future that lies ahead.\nGinny: Thanks, Paul. I know Ed will miss his personal chef\u0026hellip; but we look forward to working with you too!", "headings": ["1. Congratulations, Paul. How delighted were you to be voted in by your fellow board members, old and new?","2. You’ve been on our board for nine years. How has it changed in that time and what should the board be most proud of?","3. What do you see as Crossref’s strengths and role?","4. We’ve grown beyond just the publisher constituency to libraries, scholars, and platforms and tools, which constituencies do you see us involving next?","5. What are the biggest challenges facing Crossref?","6. How will things change with you as Chair? You’ll be busier I guess. But enough about you already, what can we expect as staff and Board?","Ginny: Thanks, Paul. I know Ed will miss his personal chef\u0026hellip; but we look forward to working with you too!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/paul-peters/", "title": "Paul Peters", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/subscribe-thank-you/", "title": "Thanks for subscribing", "subtitle":"", "rank": 1, "lastmod": "2018-03-19", "lastmod_ts": 1521417600, "section": "", "tags": [], "description": "Thank you Thanks for subscribing to news and updates from Crossref.", "content": "Thank you Thanks for subscribing to news and updates from Crossref.\n", "headings": ["Thank you"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-live-in-tokyo/", "title": "Crossref LIVE in Tokyo", "subtitle":"", "rank": 1, "lastmod": "2018-03-08", "lastmod_ts": 1520467200, "section": "Blog", "tags": [], "description": "What better way to start our program of LIVE locals in 2018 than with a trip to Japan? 
With the added advantage of it being Valentine’s Day, it seemed a good excuse to share our love of metadata with a group who feel the same way!\n", "content": "What better way to start our program of LIVE locals in 2018 than with a trip to Japan? With the added advantage of it being Valentine’s Day, it seemed a good excuse to share our love of metadata with a group who feel the same way!\nWe’ve worked closely with the Japan Science and Technology Agency (JST) since 2002, and were delighted when they agreed to collaborate with us on a LIVE event at their offices in Tokyo.\n", "headings": ["Any questions?\n"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/are-you-having-an-identity-crisis/", "title": "Are you having an identity crisis?", "subtitle":"", "rank": 1, "lastmod": "2018-02-23", "lastmod_ts": 1519344000, "section": "Blog", "tags": [], "description": "We work with a huge range of organizations in the scholarly communications world—publishers, libraries, universities, government agencies, funders, publishing service providers, and researcher services providers—and you each have different relationships with us.\nSome of you are members who create and disseminate your own content, register it with us by depositing metadata, and help steer our future by voting in our annual board elections. Some of you don\u0026rsquo;t vote in our board elections but do play a vital role by registering content on members\u0026rsquo; behalf.", "content": "We work with a huge range of organizations in the scholarly communications world—publishers, libraries, universities, government agencies, funders, publishing service providers, and researcher services providers—and you each have different relationships with us.\nSome of you are members who create and disseminate your own content, register it with us by depositing metadata, and help steer our future by voting in our annual board elections. Some of you don\u0026rsquo;t vote in our board elections but do play a vital role by registering content on members\u0026rsquo; behalf.\nAnd some of you make use of the metadata provided by our members and so perform a key service by getting their published works out into the world, but don\u0026rsquo;t vote in our board elections.\nAfter a recent review we realized our Member Types weren\u0026rsquo;t completely clear, and may in fact have led to a bit of confusion. With this in mind, we put some thought into their revision and have now given them the clarity they were missing. Over the course of this year we\u0026rsquo;ll be checking that everyone is in the right group and getting the appropriate support based on your Member Type.\nFormer Member Type name / New Member Type name:\nPublisher is now Member.\nSponsoring Publisher is now Sponsoring Member.\nRepresented Member is now Sponsored Member.\nSponsoring Entity is now Sponsoring Organization.\nSponsored Member is now Sponsored Organization.\nAffiliate is now Metadata User.\nService Provider (no change to Member Type name).\nSo, what\u0026rsquo;s different?\nThe changes we\u0026rsquo;ve made help to differentiate if you\u0026rsquo;re a voting member (and therefore have a say in our future direction), or not. If you are a voting member, you\u0026rsquo;ll now have the word \u0026ldquo;Member\u0026rdquo; in your title—and if you\u0026rsquo;re not—you won\u0026rsquo;t, as the diagram below indicates.\nWhere there are two organizations with a sponsorship arrangement in place (with a sponsoring party and a sponsored party), one of you will always be the voting party, and the other will be non-voting. 
These partnerships will therefore always contain one \u0026ldquo;Member\u0026rdquo; and one \u0026ldquo;Organization\u0026rdquo;.\nWe\u0026rsquo;ve also stopped using the word \u0026ldquo;Publisher\u0026rdquo; in our Member Types as not all our members consider themselves to be publishers — sometimes you\u0026rsquo;re libraries, funders, scholars, repositories, etc. As it says in one of our truths \u0026ldquo;Come one, come all: we define publishing broadly. If you communicate research and care about preserving the scholarly record, join us.\u0026rdquo;\nHow do you know if you are a voting member? Voting members fall into three Member Types: Members, Sponsoring Members and Sponsored Members.\nThis means you are Organizations who create and disseminate content, and therefore contribute to the scholarly record. Some of you register your content directly with us and some via a third party, but the key thing is that you\u0026rsquo;re adding to our metadata records, and as such can have a say in the future direction of Crossref. Voting members can also take metadata out of our system — and many of you do — however, your key relationship with us is as a member who is contributing to the scholarly record.\nIt also means you have obligations to keep your records up-to-date, and maximize links with other Crossref members.\nWhat\u0026rsquo;s the difference between the voting categories? Members\nAs a Member (formerly known as Publishers), you create and disseminate content, register your own content with us (usually under a single prefix), and are able to vote in our board elections. You pay an annual fee based on your publishing revenue, plus Content Registration fees for all new DOIs.\nSponsoring Members\nAs a Sponsoring Member (formerly known as a Sponsoring Publisher), you do everything a standard member does, but as well as registering your own content under your own DOI prefix, you also register content on behalf of other, smaller publishers (ideally using separate DOI prefixes so the metadata is accurate and can be reported on separately and relied upon downstream).\nWhen you vote, you vote on behalf of the organizations that you sponsor. You pay an annual fee based on your publishing revenue/expenses plus the publishing revenue of your sponsored organizations, and you also pay Content Registration fees for all new metadata records registered. You look after deposit billing for the organizations you sponsor, and provide technical and language support for them.\nSome of our larger members may be thinking that you should be in this Member Type - and you\u0026rsquo;re probably right! During the course of 2018 we\u0026rsquo;ll be working with you to transition you over to Sponsoring Membership. If you are a Member who is thinking of becoming a Sponsoring Member, please get in touch.\nSponsored Members\nAs a Sponsored Member (formerly known as a Represented Member), you create and disseminate content, but you don\u0026rsquo;t register your content directly with us—this is done by your Sponsoring Organization. Because of this it\u0026rsquo;s you, the one who creates and disseminates the content and thus contributes to the scholarly record, who can vote.\nHow do you know if you are a non-voting member? 
If you haven\u0026rsquo;t spotted yourself yet, you may be one of the non-voting organizations we work with — these fall into four Member Types: Sponsoring Organizations, Sponsored Organizations, Service Providers and Metadata Users.\nAs a non-voting organization, you may still register content with us, but you either don\u0026rsquo;t create and disseminate the content yourselves, or you\u0026rsquo;re already represented by a voting organization. Non-voting organizations also include those whose only relationship with us is to make use of our metadata. What\u0026rsquo;s the difference between the non-voting categories? Sponsoring Organizations\nAs a Sponsoring Organization (formerly known as a Sponsoring Affiliate), you don\u0026rsquo;t create and disseminate content yourself, but you do register content with us on behalf of your Sponsored Members — preferably using distinct DOI prefixes for each member. You also often look after their administrative, technical, billing and language support needs. You\u0026rsquo;ll pay us an annual fee based on the publishing revenue of all your members, and Content Registration fees for all new DOIs. You might charge the members you work with for this service. You also provide support and promotion of our services and activities.\nSponsored Organizations\nAs a Sponsored Organization (formerly known as a Sponsored Member), you do create and disseminate content yourself, but you don\u0026rsquo;t register your own content. This is done by a Sponsoring Member, and as they have the member vote, you can\u0026rsquo;t have one too. For this reason, we\u0026rsquo;ve removed the word \u0026ldquo;Member\u0026rdquo; from your title, to make your voting position clearer. Of course, your Sponsoring Member needs to represent your needs too when voting, so make sure you make them known!\nService Providers\nAs a Service Provider you work closely with our members to collect and/or host and/or deposit metadata on their behalf. Unlike a Sponsoring Organization however you don\u0026rsquo;t get involved with administrative, technical, billing or language support for the members you work with, but you\u0026rsquo;re a key partner in helping them deposit quality metadata and contribute effectively to the scholarly record. During 2018 we\u0026rsquo;ll be working more closely with you to help you collaborate with us more effectively.\nMetadata Users\nMetadata Users (formerly known as Affiliates), you are the organizations who don\u0026rsquo;t register content with us, but you do make use of it through our free and open APIs and search interfaces, or our paid-for Metadata Plus service, giving you access to a premium version of both the REST API and OAI-PMH. Of course all members can get metadata out of our systems as well, but if the only thing you do with us is get metadata out, then you\u0026rsquo;re a Metadata User.\nDon\u0026rsquo;t know which Member Type you are? 
We\u0026rsquo;re hoping these new names make it clearer, but if you\u0026rsquo;re still confused, please get in touch with our membership specialist.\n", "headings": ["How do you know if you are a voting member?","What\u0026rsquo;s the difference between the voting categories?","How do you know if you are a non-voting member?","What\u0026rsquo;s the difference between the non-voting categories?","Don\u0026rsquo;t know which Member Type you are?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/nina-frentrop/", "title": "Nina Frentrop", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/robert-kiley/", "title": "Robert Kiley", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/wellcome-explains-the-benefits-of-developing-an-open-and-global-grant-identifier/", "title": "Wellcome explains the benefits of developing an open and global grant identifier", "subtitle":"", "rank": 1, "lastmod": "2018-02-16", "lastmod_ts": 1518739200, "section": "Blog", "tags": [], "description": "Wellcome, in partnership with Crossref and several research funders including the NIH and the MRC, are looking to pilot an initiative in which new grants would be assigned an open, global and interoperable grant identifier. Robert Kiley (Open Research) and Nina Frentrop (Grants Operations) from the Wellcome explain the potential benefits this would deliver and how it might work.\n", "content": "Wellcome, in partnership with Crossref and several research funders including the NIH and the MRC, are looking to pilot an initiative in which new grants would be assigned an open, global and interoperable grant identifier. Robert Kiley (Open Research) and Nina Frentrop (Grants Operations) from the Wellcome explain the potential benefits this would deliver and how it might work.\nIntroduction As a funder we want to be able to track the outputs that arise from research we have funded. Currently, this is not as straightforward as it should be as researchers do not always cite their funder correctly, let alone their specific grant number. And, even when they do this accurately, because every funder uses its own set of grant IDs, these numbers are not unique. For example, we can use EuropePMC to look up outputs from grants with ID 207467, and see that there is one Wellcome grant with this number, and one from the European Research Council.\nTo resolve such issues, we need a system in which every grant awarded is given a unique, global ID. Global IDs are already assigned to articles (DOIs), people (ORCIDs) and even biological materials (RRIDs). It is time for the funder community to follow suit.\nBenefits of an open \u0026amp; global grant identifier system Once implemented, it would make the identification of grant-specific research outputs more accurate, whilst simultaneously reducing the burden on the researcher.\nCurrently, researchers are typically asked to manually disclose what outputs have arisen from their funding.
In the future, such disclosures would be fully automated. We are already seeing how publishers\u0026mdash;who collect ORCIDs through their manuscript submission system\u0026mdash;automatically update the author’s ORCID record with details of new publications. If a global ID system for grants was developed, publishers and repositories could also require these to be disclosed on submission, and this data could then programmatically be passed to researcher assessment platforms, like ResearchFish.\nHow would it work? For a global grant ID system to work, two things need to happen. First, when a new grant is awarded, that grant must be assigned a unique ID. For the pilot project we plan to contract with Crossref who will register a unique ID (a DOI) for every grant we register.\nSecond, every DOI must resolve to a publicly accessible web site, where information about that grant is disclosed. Again, for this pilot we will almost certainly use the Europe PMC Grants Finder Repository, as we already make grant data available from this resource.\nA working group has been established to determine precisely what metadata we should make available, but it is likely to include the name of the grant holder, title and value of the award, a short abstract, along with the name of the funder and the unique ID. Mindful that funders already assign IDs to the grants they award and that any changes to this process may be problematic (and certainly time consuming), the plan is to register a DOI which still makes use of the existing grant ID. To make it unique however, the ID will be prefixed with a funder identifier, most likely the Funder Registry ID.\nNext steps Whilst the metadata working group is focusing on the technical aspects of the pilot, a separate “governance group” is examining how a funder might become a member of Crossref and what the business model for registering grant DOIs should be.\nIn parallel with this, a pilot “proof of concept” initiative is under way, and we anticipate that by autumn 2018 we will have registered DOIs for a defined cohort of grants.\nUltimately we want to get to a situation where every grant has a unique ID, which can then be unambiguously linked to all the outputs – articles, data, code, materials, patents etc.
– which arise from it.\nAnd, if every funder were to adopt such a system and expose their grant metadata in a consistent, machine-readable way, it would facilitate the development of applications to help funders get a greatly enhanced picture of the global funding landscape, which in turn would inform strategic planning and resource allocation.\nThanks to guest authors: Robert Kiley, Head of Open Research, Wellcome [ORCID: 0000-0003-4733-2558] Nina Frentrop, Grants Information \u0026amp; Systems Manager, Wellcome\nPlease read Crossref for funders for context, and contact Ginny Hendricks at Crossref with any questions.\n", "headings": ["Introduction","Benefits of an open \u0026amp; global grant identifier system","How would it work?","Next steps","Thanks to guest authors:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/lenny-teytelman/", "title": "Lenny Teytelman", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/meet-the-members---perspectives/", "title": "Meet the Members - Perspectives", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/meet-the-members-part-2-with-protocols.io/", "title": "Meet the members, Part 2 (with protocols.io)", "subtitle":"", "rank": 1, "lastmod": "2018-01-31", "lastmod_ts": 1517356800, "section": "Blog", "tags": [], "description": "Second in our Meet the members blog series is Lenny Teytelman, co-founder and CEO of protocols.io, who gives us a bit of insight into his background and why he started protocols.io, what the future plans for protocols.io are, and how they use and benefit from being a Crossref member.\n", "content": "Second in our Meet the members blog series is Lenny Teytelman, co-founder and CEO of protocols.io, who gives us a bit of insight into his background and why he started protocols.io, what the future plans for protocols.io are, and how they use and benefit from being a Crossref member.\n", "headings": ["Can you tell us a little bit about yourself, and why you started protocols.io?","What problem is your service trying to solve?","Tell us a little bit about what you publish and for whom.","How would you describe the value of being a Crossref member?","What do you see as the value of Crossref, beyond protocols.io?","What are the future plans for protocols.io?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/no-longer-lost-in-translation/", "title": "No longer lost in translation", "subtitle":"", "rank": 1, "lastmod": "2018-01-30", "lastmod_ts": 1517270400, "section": "Blog", "tags": [], "description": "More than 80% of the record breaking 1,939 new members we welcomed in 2017 were from non-English speaking countries, and as our member base grows in its diversity, so does the need for us to share information about Crossref and its services in languages appropriate to our changing audience.\n", "content": "More than 80% of the record breaking 1,939 new members we welcomed in 2017 were from non-English speaking countries, 
and as our member base grows in its diversity, so does the need for us to share information about Crossref and its services in languages appropriate to our changing audience.\nSo, early last year we started translating our service videos into six other languages: French, Spanish, Brazilian Portuguese, Chinese, Japanese, and Korean. However, the process of translating from one language to another is not always straightforward—but it is super important—as some things can get seriously lost in translation\u0026hellip;\nIn order to avoid such translation tragedies we created a foolproof process to get the text of the service videos translated and ready for production. (I am, I realize, exposing myself here—see what I did there? —by using a word like foolproof.)\nFirst we produced the videos in English, setting the content to animation and sound (AKA audio visual or A/V to us marketing types), then we brought in a translation company to turn the English content into the six other languages. So far so good. However, as the above examples demonstrate, the meaning of words can get lost in translation. Also, what Crossref does isn’t the easiest thing in the world to translate (are there words for metadata delivery and full-text XML in Japanese?), so we added another stage to the process.\nNext, we sent the translated scripts and their English counterparts to some very helpful international members who, as part of the scholarly research community, understand the complexities of our work and are therefore qualified to check that the text had remained in context.\nUnfortunately, it hadn’t, as the text came back from them heavily edited. After round two of the editing process, the revised text was applied to the videos—but just to be 100% sure, we sent the completed videos back to our helpful international members for a final run through.\nMultiply this painstaking process by 48 videos, throw numerous time zones into the mix and you can see why it took us nearly 12 months to complete them.\nAnd so, it is with great pleasure that today we launch all eight of our service videos in six languages, just click the links below, and enjoy! Découvrez-les!\t¡Que los disfrutes!\tAproveite! 请欣赏!
どうぞお楽しみください!\t즐거운 시간 되세요!\nView videos by language English French Spanish Brazilian Portuguese Simplified Chinese Japanese Korean English français español português do Brasil 简体中文 日本語 한국어로 We\u0026rsquo;d like to thank the following for their help in checking the video translations: Fabienne Meyers from IUCAP for the French versions, our very own resident translator Vanessa Fairhurst for the Spanish versions, Edilson Damasio from the University Library of Maringá for the Brazilian Portuguese versions, Guo Xiaofeng from Wanfang Data for the Chinese versions, Nobuko Miyairi from ORCID for the Japanese versions and Junghyo from Nurimedia and Jae Hwa Chang at infoLumi for the Korean versions.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-year-in-the-life-of-crossref/", "title": "A year in the life of Crossref", "subtitle":"", "rank": 1, "lastmod": "2018-01-23", "lastmod_ts": 1516665600, "section": "Blog", "tags": [], "description": "We are delighted to report that last year Crossref welcomed a record-breaking 1,939 new members and, because our member base is growing so rapidly in both headcount and geography\u0026mdash;with the highest number of new members joining from Asia\u0026mdash;we thought it was a good time to reiterate what Crossref is all about, as well as show off a little about the things we are proud to have achieved in 2017.\n", "content": "We are delighted to report that last year Crossref welcomed a record-breaking 1,939 new members and, because our member base is growing so rapidly in both headcount and geography\u0026mdash;with the highest number of new members joining from Asia\u0026mdash;we thought it was a good time to reiterate what Crossref is all about, as well as show off a little about the things we are proud to have achieved in 2017.\nWhat is Crossref?\nWe are an organization that runs a registry of metadata and DOIs of course, but we are much more than that\u0026mdash;staff, board, working groups, and committees as well as a broad range of collaborators, users, and supporters in the wider scholarly communications community. Increasingly, our community includes new contributors like scholars, funders, and universities. Together, we are all working toward the same goal\u0026mdash;to enhance scholarly communications. Everything we do is designed to put scholarly content in context so that the content our members publish can be found, cited, used, and re-used.\nHere\u0026rsquo;s how we did that over the past year:\nWe rallied the community Rallying the community is all about working together to forge new relationships and pave the way for future generations of researchers\u0026mdash;in 2017 we were closely involved with the launch of Metadata 2020; a collaboration that advocates richer, connected, and reusable metadata for all research outputs.\nWe tagged and shared metadata To make sure that our APIs continue to have real, genuine utility, we introduced a new service called Metadata Plus in 2017 so that platforms and tools can leverage the power of our rich, immense database to increase the value and discoverability of content.\nWe played with new technology To keep pace with changes in the industry and stay true to our mission, we often play with new technology with the goal of offering a bigger and better infrastructure. 
In 2017 we formed a working group and an advisory group for two new identifiers that will see this infrastructure increase: Organization IDs which became ROR, and Grant IDs which became the Crossref Grant Linking System.\nWe made new tools and services Combining our own knowledge and experience with input from the wider community, in 2017 we were able to launch in Beta a new and exciting tool called Event Data. Event Data provides a record of where research has been bookmarked, linked, recommended, shared, referenced, commented on etc, beyond publisher platforms\u0026mdash;which is a great example of putting scholarly research in a wider context.\nSo, while richer metadata (including more record and resource types) remains our focus for 2018 and beyond, we also hope that as we become a bigger and more global community we can move beyond the basics and work together to make sure that DOIs are not the be-all and end-all when they are, in fact, just the beginning.\n", "headings": ["We rallied the community","We tagged and shared metadata","We played with new technology","We made new tools and services"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/bridging-identifiers-at-pidapalooza/", "title": "Bridging Identifiers at PIDapalooza", "subtitle":"", "rank": 1, "lastmod": "2018-01-22", "lastmod_ts": 1516579200, "section": "Blog", "tags": [], "description": "Hello from sunny Girona! I\u0026rsquo;m heading to PIDapalooza, the Persistent Identifier festival, as it returns for its second year. It\u0026rsquo;s all about to kick off.\nOne of the themes this year is \u0026ldquo;bridging worlds\u0026rdquo;: how to bring together different communities and the identifiers they use. Something I really enjoyed about PIDapalooza last year was the variety of people who came. We heard about some \u0026ldquo;traditional\u0026rdquo; identifier systems (at least, it seems that way to us): DOIs for publications, DOIs for datasets, ORCIDs for researchers.", "content": "Hello from sunny Girona! I\u0026rsquo;m heading to PIDapalooza, the Persistent Identifier festival, as it returns for its second year. It\u0026rsquo;s all about to kick off.\nOne of the themes this year is \u0026ldquo;bridging worlds\u0026rdquo;: how to bring together different communities and the identifiers they use. Something I really enjoyed about PIDapalooza last year was the variety of people who came. We heard about some \u0026ldquo;traditional\u0026rdquo; identifier systems (at least, it seems that way to us): DOIs for publications, DOIs for datasets, ORCIDs for researchers. But, gathered in Reykjavik, under dark Icelandic skies, I met oceanographic surveyors assigning DOIs to drilling equipment, heard stories of identifiers in Chinese milk production and consoled librarians trying to navigate the identifier landscape.\nIn addition to the usual scholarly publishing and science communication crowd, it was encouraging to see a real diversity of people from different walks of life encounter the same problems and work on them collaboratively. The thing that brought everyone together was the understanding that if we\u0026rsquo;re going to reliably reference things \u0026ndash; be they researchers, articles they write, or ships they sail \u0026ndash; we need to give them identifiers. And those identifiers should be as good as possible: persistent, resolvable, interoperable.\nWho cares about PIDs? At the turn of the century, a handful of publishers came together to create Crossref (or CrossRef as it was in those days).
It was becoming increasingly important to be able to store references in machine-readable format, but publishers were faced with a problem. If an author wants to cite an article, they\u0026rsquo;ll do so without worrying who published it. This means they needed an identifier system that worked across all publishers. Thus the Crossref DOI was born.\nToday we\u0026rsquo;re heading toward 10,000 members, and the thing that they have in common is that they all produce scholarly content and care about how it\u0026rsquo;s referenced. As a trade association, we effectively act on behalf of all of our members, allowing them to register their content, share metadata and links, and assign an identifier.\nBut there\u0026rsquo;s a whole world out there. Publications have never been the be-all and end-all of scholarship, but they have been a backbone. But more and more scholarship, especially science, is done outside journal publishing. Sometimes it\u0026rsquo;s done on platforms that care about the scholarly record as much as publishers. And sometimes it isn\u0026rsquo;t.\nThe Twitterverse Lots of people use Twitter to talk about science. Some are scientists, some aren\u0026rsquo;t. Scientific articles are linked from news reports and discussed on blogs. Gone are the days of scholarly articles being cited only by other scholarly articles. We see links coming in from all over the place. And, although not all of this can be counted as the \u0026ldquo;scholarly record\u0026rdquo;, some of it could be.\nThe barrier-to-entry for journals publishing means that science journals contain only science articles. The barrier-to-entry for Twitter means that anyone can, and does, publish there. My Twitter feed is finely balanced between bibliometrics research, marine biology and pictures of snow leopards with Japanese captions. I don\u0026rsquo;t understand all of it, but I like looking at the pictures.\nBack in the days when the only references to scholarly publications were from other scholarly publications, it was easy to keep track of those references. When an article was published, its references went into a citation database. This happened because the publisher considered this important.\nBut Twitter, the publisher of tweets, doesn\u0026rsquo;t care. It is used for a huge variety of communications and although some people choose to use it to engage in scholarship, we\u0026rsquo;re just a blip on their radar. The same goes for Reddit, a platform that describes itself as \u0026ldquo;the front page of the Internet\u0026rdquo;. There are communities engaged in scientific discussions, but Reddit doesn\u0026rsquo;t feel the need to publish its bibliographic references.\nNor should it.\nBridging those who care with those who don\u0026rsquo;t The barrier-to-entry for contributing to scientific discussions has lowered, meaning that the role of more non-specialist platforms has increased.\nI imagine that there are other communities out there who have their own concerns about the web. Maybe there are model train enthusiasts who want to keep track of every reference to a particular model. Or political commentators who want to keep track of how certain politicians and policies are discussed. As the scholarly community embraces new platforms for communicating, we should recognise that we are part of a broader universe of people using those platforms for more diverse reasons.\nGone are the days when the only way to reply to an article was by writing a letter to the editor. 
But also gone are the days when you could guarantee that your letter wouldn\u0026rsquo;t appear next to cat pictures (assuming you weren\u0026rsquo;t writing to the Journal of Feline Medicine \u0026amp; Surgery). As a specialist community cohabiting online spaces with non-specialists, it falls to us to do whatever we need to adapt that space and make it our own. In our case, this means recording bibliographic references as and where they occur.\nSomething like this happened once before. As traditional publishers went online, they created Crossref to build and maintain the necessary infrastructure. We\u0026rsquo;re acting on behalf of the community again to collect links from non-traditional sources. Because we can\u0026rsquo;t go to platforms like Twitter and say \u0026ldquo;please deposit your references\u0026rdquo;, we\u0026rsquo;re doing the opposite. We identify a platform, then work out how to scrape its content and extract links.\nWorking at scale So we\u0026rsquo;re broadening out the universe of references that we would like to track from \u0026ldquo;traditional scholarly publishing\u0026rdquo; to \u0026ldquo;the entire web\u0026rdquo;. There are four broad challenges inherent in this, and we think that Crossref infrastructure is the right way to meet them.\nThe first challenge is physically finding the links. Because social media platforms aren\u0026rsquo;t specialised for scholarly publishing, they don\u0026rsquo;t have the same mechanisms in place for capturing bibliographic references. This means that we have to do it ourselves by scraping webpages for references. As the standard-bearer for scholarly PIDs, we think we can do a good job of this.\nThe second challenge is doing this at the scale of the web. Because we might, in theory, find a link on any webpage, there is a literally infinite number of publishing platforms. From big websites like BBC News down to tiny blogs run out of a bedroom. It would be impossible to partner with each of these individually. The way to solve this is to run a centralised service which goes out and contacts as many sources as possible. This role is a collaborative one. Our system is open to inspection, suggestions and contributions from the community.\nThe third challenge is the sheer number of publishers. Because they all register content with us, we are in good position to track their DOIs. In addition to that, every member of Crossref publishes content on their own platform, and has their own set of websites to track. We monitor our members\u0026rsquo; websites and create a central list of domains that we look for. If this wasn\u0026rsquo;t done centrally, each publisher would have to run its own web crawlers and perform the same work, only to filter out their own links.\nThe fourth challenge is how to get all that data to the public. Even if every publisher were able to run their own infrastructure, it would make it very difficult to consume. Through Crossref metadata services, publishers have built a system where you can look up metadata and link to articles without worrying who published them. We think that the same approach should apply to this new link data.\nFor these reasons, we\u0026rsquo;re building Crossref Event Data: a system that monitors as many platforms as we can think of, and brings them into one place, and serves the whole community.\nBuilding bridges If you\u0026rsquo;ve been following along you\u0026rsquo;ll know that my last metaphor was the process of refining crude oil. I like metaphors, and mixing them. 
After all, you can\u0026rsquo;t mix a good metaphor without breaking a few eggs into the mixing bowl. Today\u0026rsquo;s metaphors are bridges. And not just one.\nBridge 1: PIDs and URLs In the world of Persistent Identifiers, we\u0026rsquo;re quite good at linking. Organizations like Crossref, DataCite and ORCID run separate systems but we work together to record and exchange links. But the web is different. There\u0026rsquo;s no single organization in control and there are many organizations working to catalogue it. Event Data is our offering: bridging the web with our identifiers.\nBridge 2: Scholarly link providers Of course, some platforms and systems do care about persistence and Persistent Identifiers. Event Data is an open platform, and we\u0026rsquo;re collaborating with a few providers to publish links.\nWe\u0026rsquo;ve partnered with The Lens to include Patent to DOI references. We\u0026rsquo;re working with F1000 to include links between reviews and articles. Hopefully we\u0026rsquo;ll see more organizations use Event Data to publish their links.\nBridge 3: Crossref / DataCite Event Data is a collaborative project between DataCite and Crossref. When Crossref Registered Content contains a reference to a DataCite DOI we put it into Event Data. DataCite do the same in reverse. This means that Event Data contains a huge number of article - dataset links.\nBridge 4: Traditional discussions vs new ones At each moment, scholarly discussions are happening in the literature, on various social media platforms and on the web at large. They are all talking about the same thing, but are spread out. Event Data collects links wherever we find them and brings them into one place. By doing this we hope we can help bring those conversations together.\nBridge 5: Bridging bibliometricians and altmetricians to data sources Capturing links from social media to published literature underpins the field of altmetrics. By collecting this data and making it available under open licenses, we bring it to altmetrics researchers. We don\u0026rsquo;t provide metrics, but we do provide the data points that can form the basis for research.\nWithout infrastructure for collecting data, researchers would have to perform the same work over and over again. Because the data is all open, we allow datasets to be republished, reworked and replicated.\nBridge 6: Bridging the Evidence Gap Running Event Data involves collecting a lot of data - gigabytes per day - and boiling it down into hundreds of thousands of individual Events per day. People consuming the data may want to do further boiling down. At every point of the process we record the input data that we were working from, the internal thought process of the system, and the Events that were produced. A researcher can use the Evidence Logs to trace through the entire process that led to an Event.\nWe\u0026rsquo;re a bridge from websites and social media to data consumers. But we take the role very seriously, and there\u0026rsquo;s nothing hidden. A glass bridge, you could say.\nInteresting challenges It\u0026rsquo;s not all plain sailing. There are a few challenges along the way to collecting this data which anyone who wanted to collect this kind of information would face. By collecting it in a central place and running an open platform we can solve each problem once, and improve our process as a community.\nOne problem is choosing what to include. We include any link that we find from a non-publisher website. 
That means that invariably some of the links are from spam. This problem isn\u0026rsquo;t new: we see low-quality articles being published in traditional journals from time to time. We try to include all of the data we can find and pass it onto consumers. They might want to whitelist certain sources, or they may want all of the data because they\u0026rsquo;re trying to study scholarly spam. We have decided to provide data as Events, which strike the balance between atomicity and usefulness.\nAnother, which I talked about at last year\u0026rsquo;s PIDapalooza, is how we track article landing pages. Read the blog post, the user guide or hop in a time machine if you\u0026rsquo;re interested.\nThe thing about bridges\u0026hellip; \u0026hellip; is that they help people get where they\u0026rsquo;re going. With a few notable exceptions, they\u0026rsquo;re not the main attraction. We play a humble part in scholarly publishing, helping collect and distribute metadata. Most of what we do goes unseen, and helps people create tools, platforms and research. Event Data is an API, and whilst we hope people will build all kinds of things with it, including altmetrics tools, we\u0026rsquo;re not making another metric.\nPIDapalooza All of which brings me to my talk, which I\u0026rsquo;m giving on Wednesday: Bridging persistent and not-so-persistent identifiers. I would tell you about it, but there isn\u0026rsquo;t much more left to say.\nIf you want to find out more, we\u0026rsquo;re currently in Beta, and open for business. Head over to the User Guide to get started!\n", "headings": ["Who cares about PIDs?","The Twitterverse","Bridging those who care with those who don\u0026rsquo;t","Working at scale","Building bridges","Bridge 1: PIDs and URLs","Bridge 2: Scholarly link providers","Bridge 3: Crossref / DataCite","Bridge 4: Traditional discussions vs new ones","Bridge 5: Bridging bibliometricians and altmetricians to data sources","Bridge 6: Bridging the Evidence Gap","Interesting challenges","The thing about bridges\u0026hellip;","PIDapalooza"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-ambassador-program/", "title": "Crossref ambassador program", "subtitle":"", "rank": 1, "lastmod": "2018-01-04", "lastmod_ts": 1515024000, "section": "Blog", "tags": [], "description": "We have listened to the feedback from you, our members, and you\u0026rsquo;ve told us of a need for local experts to provide support in your timezone and language, and to act as liaisons with the Crossref team. You\u0026rsquo;ve also asked for an increased number of training events both online and in person close to you, and for more representatives from Crossref at regional industry events.\nWe want to make sure we can reach members around the globe, and as such, a wide team of people is required who are knowledgeable in the languages, cultures, and member needs in a variety of countries.", "content": "We have listened to the feedback from you, our members, and you\u0026rsquo;ve told us of a need for local experts to provide support in your timezone and language, and to act as liaisons with the Crossref team. You\u0026rsquo;ve also asked for an increased number of training events both online and in person close to you, and for more representatives from Crossref at regional industry events.\nWe want to make sure we can reach members around the globe, and as such, a wide team of people is required who are knowledgeable in the languages, cultures, and member needs in a variety of countries. 
This is why we\u0026rsquo;re launching our Ambassador Program.\nWhat are Crossref Ambassadors?\nCrossref Ambassadors are volunteers who work within the international scholarly research community in a variety of different roles such as librarians, researchers or editors to name but a few. They are individuals who are well connected, value the work that Crossref does and are passionate about improving scholarly communication and the role Crossref plays within this system.\nSome of the activities our ambassadors will undertake:\nStaying up-to-speed with Crossref developments, for example, by attending webinars and maintaining regular check-ins with the Crossref team. Engaging in the online community platform; providing feedback, joining in discussions and helping other members to resolve issues posted to the group. Writing blog posts, or contributing to newsletters. Participating in beta-testing of new products and services. Helping with local LIVE events; for example, providing recommendations on speakers or venues, helping with logistics and presenting at the event. Helping with the translation of Crossref material and content into local languages. Running webinars on different Crossref services in local languages. Running training sessions locally with Crossref members Representing Crossref at relevant industry events It is important that our ambassadors enjoy the work they are doing with Crossref by contributing in ways in which they feel comfortable, according to their interests, skills and the time they feel they want to contribute. For this reason, the role comes with a high degree of flexibility.\nWe see our ambassadors as valued members of the Crossref network and will provide them with:\nA dedicated contact for any upcoming news, or to share ideas, queries or concerns. Help with content for proposal calls, presentations, training and written articles. Crossref materials and giveaways (plus ambassador-branded materials). Personal endorsement via Crossref Training on Crossref services and on wider relevant skills as necessary. First look at new Crossref developments Certification from Crossref on ambassador and training status. Personal ambassador logo or badge for use on email, website and profile on the Crossref online community forum (launching later this year). Crossref Ambassadors will become an increasingly key part of the Crossref community - the first port of call for updates or to test out new products or services, and the eyes and ears within the local academic community - working closely with Crossref to make scholarly communications better for all.\nMeet our first ambassadors! Jae Hwa Chang has been working at infoLumi as a manuscript editor in academic journals since 2010. Prior to joining infoLumi, she was a medical librarian at International Vaccine Institute and was engaged in medical information management and service. Her interests in information control and management started when she was doing work indexing newspaper articles at JoonAng Ilbo. She was fascinated by Crossref’s persistent efforts and contribution in developing new services to “make content easy to find, cite, link, and assess” and has been introducing them to Korean scholarly publishing communities. Jae earned her MA in Library and Information Science from Ewha Womans University, Korea. She serves as a vice chair of the Committee on Planning and Administration at the Korean Council of Science Editors. 
In her spare time, she enjoys traveling and experiencing new cultures.\n장재화는 2010년부터 인포루미에서 의학학술지 원고편집을 담당하고 있다. 그전에는 국제백신연구소 도서관에서 사서로 일하면서 의학정보와 학술지논문 유통에 관심을 가졌으며, 그에 앞서서는 중앙일보에서 신문기사 DB 색인을 하면서 정보관리와 활용에 대해 연구하였다. 정보의 검색, 평가, 활용을 위해 꾸준히 새로운 서비스를 개발하는 Crossref에 매력을 느꼈고, 그 서비스들을 한국의 학술지 출판 관계자들에게 소개해왔다. 이화여자대학교에서 문헌정보학을 전공하였고, 한국과학학술지편집인협의회 기획운영위원회 부위원장을 맡고 있다. 여행과 다양한 문화 체험을 즐긴다.\nEdilson Damasio has been a librarian since 1995, with a PhD in Information Science from the Federal University of Rio de Janeiro-UFRJ/IBICT. He works in the Department of Mathematics Library of the State University of Maringá-UEM, Brazil, and has 20 years\u0026rsquo; experience in scientific metadata and publishing. His expertise spans scientific communication, Crossref services, research integrity, misconduct prevention in science, publishing in Latin America, biomedical information, OJS-Open Journal Systems, Open Access journals, scientific journal quality and indexing, and scientific bibliographical databases. He is enthusiastic about presenting and disseminating information about Crossref services to his community in Brazil and working within the community, exchanging ideas and experience.\nEu sou bibliotecário desde 1995, Doutor em Ciência da Informação pela Universidade Federal do Rio de Janeiro-UFRJ/convênio IBICT. Eu trabalho na Biblioteca do Departamento de Matemática da Universidade Estadual de Maringá-UEM. Com 20 anos de experiência em metadados científicos e editoração, entre outros. Meus conhecimentos são diversos sobre comunicação científica, cientometria, metadados XML, serviços Crossref, integridade em pesquisa, prevenção de más condutas na ciência, editoração, editoração na América Latina, informação biomédica, OJS-Open Journal Systems, revistas de Acesso Aberto, qualidade de periódicos científicos e indexação, bases de dados bibliográficas. Gosto de disseminar meu conhecimento a outras regiões e pessoas e de trabalhar em comunidade junto as instituições e outros países, de planejar novas apresentações, de trocar experiências como palestrante ou convidado e trabalhar na disseminação do conhecimento para todos.\nLauren Lissaris has dedicated much of her career to the dissemination of valuable content on a robust platform. She takes pride in her achievements as the Digital Content Manager at JSTOR. JSTOR provides access to more than 10 million academic journal articles, books, and primary sources in 75 disciplines. JSTOR is part of ITHAKA, a not-for-profit organization helping the academic community use digital technologies to preserve the scholarly record and to advance research and teaching in sustainable ways.\nLauren successfully works with all aspects of journal content to effectively assist publishers with their digital content. This includes everything from XML markup, Content Registration/multiple resolution, and HTML website updates. Lauren has been involved in hosting current content on JSTOR since the program\u0026rsquo;s launch in 2010. She continues to collaborate with organizations to successfully contribute to the evolution of digital content. The natural spread from journals to books has set Lauren up for developing and planning the book Content Registration program for JSTOR.
She is a member of the Crossref Books Advisory Group and she helped successfully pilot Crossref’s new Co-access book deposit feature.\nIf you want to find out more information on the Ambassador Program, or you would like to express your interest in being an ambassador, you can either contact us at [feedback@crossref.org](mailto:feedback@crossref.org?subject=Ambassador Program) or complete our online form.\n", "headings": ["Meet our first ambassadors!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2017/", "title": "2017", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/damian-pattinson/", "title": "Damian Pattinson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/discussion/", "title": "Discussion", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/guest/", "title": "Guest", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-and-integrity-the-unlikely-bedfellows-of-scholarly-research/", "title": "Metadata and integrity: the unlikely bedfellows of scholarly research", "subtitle":"", "rank": 1, "lastmod": "2017-12-14", "lastmod_ts": 1513209600, "section": "Blog", "tags": [], "description": "I was invited recently to present parliamentary evidence to the House of Commons Science and Technology Select Committee on the subject of Research Integrity. For those not familiar with the arcane workings of the British Parliamentary system, a Select Committee is essentially the place where governments, and government bodies, are held to account. So it was refreshing to be invited to a hearing that wasn’t about Brexit.\nThe interest of the British Parliament in the integrity of scientific research confirms just how far science’s ongoing “reproducibility crisis” has reached. The fact that a large proportion of the published literature cannot be reproduced is clearly problematic, and this call to action from MPs is very welcome. And why would the government not be interested? At stake is the process of how new knowledge is created, and how reliable that purported knowledge is.\n", "content": "I was invited recently to present parliamentary evidence to the House of Commons Science and Technology Select Committee on the subject of Research Integrity. 
For those not familiar with the arcane workings of the British Parliamentary system, a Select Committee is essentially the place where governments, and government bodies, are held to account. So it was refreshing to be invited to a hearing that wasn’t about Brexit.\nThe interest of the British Parliament in the integrity of scientific research confirms just how far science’s ongoing “reproducibility crisis” has reached. The fact that a large proportion of the published literature cannot be reproduced is clearly problematic, and this call to action from MPs is very welcome. And why would the government not be interested? At stake is the process of how new knowledge is created, and how reliable that purported knowledge is.\nThe other issue driving this overview of research practices is the set of cases of deliberate fraud and wrongdoing that have recently created headlines (e.g., the STAP papers concerning the reprogramming of stem cells). While these cases are clearly dramatic outliers, they nevertheless serve to diminish public confidence in scholarly research and the findings that come out of this enterprise.\nAs with most inquiries, the question quickly boiled down to: who is to blame? As Bill Grant MP asked me directly, “Where does the responsibility lie?”\nMy answer was lifted from an article by Ginny Barbour and colleagues in F1000Research this November (https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.13060.1): publishers are responsible for the integrity of the published literature, while institutions and employers are ultimately responsible for the conduct of their staff. Misconduct entails intent, usually to deceive the reader into believing a conclusion that the researcher wishes them to believe. But journal editors can never know, and are not in a position to investigate, whether a researcher has deliberately falsified their data.\nHowever, there are things that publishers can do to ensure high standards of integrity. Much of this involves making a study’s authors publish as much information about what they have done as possible - the more the reader can see of how data were generated, the more that reader can trust the findings communicated in the published article.\nArticle metadata directly supports this function. It provides structure and transparency to information pertaining to ethics and integrity. And because metadata is independent of the main article, it can be readable even if the article itself is locked behind a paywall.\nCrossref already provides metadata that can demonstrate the integrity of published articles. The metadata collected on 91+ million scholarly works across publishers and disciplines is open and freely accessible to all. Bibliographic information, for example, allows readers to see who the authors of the article are, where they are from, and what else they have published. Similarly, funding data allows readers to identify potential conflicts of interest, for example if the funder has commercial or political affiliations. Even if the reader cannot see the conflict of interest statement (or if the journal has not provided one), they can use the funding statement to surface potential conflicts.\nAnd if they wanted, publishers could provide additional metadata to add still more transparency to the research process. Ethical approval by institutional review boards, for example, could be captured, and any protocol numbers traced back to the original ethics committee approval.
At present the process of ethical approval varies from country to country, and from institution to institution. Encouraging authors and journals to deposit information on the approval process would both demonstrate the high ethical standards the author is working to, and also improve the standards themselves, since institutions would have to encode their approval processes in a way that is understandable to others. This could pave the way to significantly higher international ethical standards, all through a simple addition to the indexed metadata underlying the scholarly literature.\nOne key recommendation that I and many others made to the Committee was, in short, \u0026ldquo;show your work\u0026rdquo;. As a researcher, that means showing your data. As a publisher, that means showing what checks you have done. In both cases, metadata can help.\nA major issue that publishers and researchers can – and should – address is the provision of actual scientific data. Most papers, today, present only the end results of the authors’ (often quite extensive) analyses. The case for sharing data is an obvious one - many recent cases of misconduct could have been identified earlier, or even avoided altogether, if editors and readers had had access to underlying datasets.\nWith images, a requirement to submit raw images alongside the edited figures would dramatically reduce the cases of manipulation that are rife in the literature (studies suggest up to 20% of papers have some kind of inappropriate figure manipulation, with around 1 in 40 papers showing manipulation beyond that which can be expected to be a result of error). Similarly, providing the numbers that a paper’s analyses are based upon would allow readers to fully assess if datasets are distributed as would be expected through random sampling, and, if they choose, to determine if the data are sufficient to support the statistical inferences made in the paper. The Crossref schema – by providing unique identifiers to data citations - makes this link between data and paper possible. (See the recent blog post on the Research Nexus for more information.)\nFor publishers, showing your work also means being transparent to your readers about the editorial checks that a manuscript has undergone. Crossref has a tool that enables this editorial transparency: it’s called Crossmark. Crossmark allows readers to see the most up-to-date information about an article, even on downloaded PDFs. In most cases it is used to show whether the version of an article is the most recent one, or whether any corrigenda or retractions have been subsequently added. But it can also be used to provide whatever information a publisher wishes to share about the paper. Some journals have experimented with using Crossmark to ‘thread’ publications together, for example, by linking all the outputs generated from a single clinical trial registration number (blog post here). But publishers could go further and display metadata pertaining to the editorial checks they have performed on a paper. So Crossmark could tell readers that the paper has been checked for plagiarism, or figure manipulation, or reporting standards such as CONSORT or ARRIVE guidelines.
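To make the openness point concrete, here is a minimal sketch of how a reader (or a tool builder) might pull the bibliographic and funding metadata described above from the public Crossref REST API. It is an illustration rather than an official client: the DOI is a placeholder, and the exact fields returned should be checked against the current API documentation.

```python
# Illustrative only: fetch the open Crossref metadata for one article and
# surface its funding information. The DOI below is a placeholder.
import requests

DOI = "10.5555/12345678"  # placeholder; substitute any registered DOI

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# Bibliographic basics: title and authors.
title = (work.get("title") or ["(no title)"])[0]
authors = ", ".join(
    f"{a.get('given', '')} {a.get('family', '')}".strip()
    for a in work.get("author", [])
)
print(title)
print("Authors:", authors or "(none listed)")

# Funding data: who supported the work, and under which award numbers -
# useful for surfacing potential conflicts of interest even when the
# article itself sits behind a paywall.
for funder in work.get("funder", []):
    awards = ", ".join(funder.get("award", [])) or "no award number given"
    print("Funded by:", funder.get("name"), "-", awards)
```

Nothing here needs special credentials; it is the same open metadata that any integrity-checking tool or badge service could build on.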
Here at Research Square we have been addressing this with a series of Badges that researchers can apply to their papers to demonstrate what checks have been performed.\nTogether, these implementations would provide value to the reader, who can see exactly what has been checked, and to the publisher, who can show how rigorous their editorial processes are. It would also serve to highlight the integrity of the authors who have passed all of these checks.\nResearch integrity is not something that can be easily measured but, unlike wit or charm, it is something that people generally know that they have.* This means that they just need to be transparent in their output to demonstrate this to the world. Metadata provides a simple way of doing this, so researchers and publishers should make sure they provide it as openly as they can.\n*with apologies to Laurie Lee for the mangled quote\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dr.-livingstone-i-presumea-two-month-expedition-deep-into-the-heart-of-research-publishing/", "title": "Dr. Livingstone, I presume…a two month expedition deep into the heart of research publishing", "subtitle":"", "rank": 1, "lastmod": "2017-12-13", "lastmod_ts": 1513123200, "section": "Blog", "tags": [], "description": "Hello there. I\u0026rsquo;m Amanda Bartell, and I joined the Crossref team in mid-October as the new Head of Member Experience. My new Member Experience team will be responsible for metadata users as well as members, onboarding new accounts, supporting existing ones, and making sure that everyone can make the most of Crossref services - in an easy and efficient way. I have spent the last couple of months exploring the world of academic publishing and what our members need - and it\u0026rsquo;s been fascinating!\n", "content": "Hello there. I\u0026rsquo;m Amanda Bartell, and I joined the Crossref team in mid-October as the new Head of Member Experience. My new Member Experience team will be responsible for metadata users as well as members, onboarding new accounts, supporting existing ones, and making sure that everyone can make the most of Crossref services - in an easy and efficient way. I have spent the last couple of months exploring the world of academic publishing and what our members need - and it\u0026rsquo;s been fascinating!\nExpedition members The new Member Experience team is made up of some people who are new to Crossref and Scholarly Publishing and some whose names you\u0026rsquo;ll probably recognize!\nAnna Tolwinska (Member Experience Manager) will support existing members in understanding the quality of metadata they deposit with us, and how they can best make use of our other products and services. Paul Davis (Product Support Specialist) and Shayn Smulyan (Product Support Associate) will continue to provide excellent technical support to all creators and consumers of our metadata. Gurjit Bhullar (Membership Coordinator) will help new applicants who want to join Crossref understand the member obligations and have a smooth induction journey. We\u0026rsquo;ll be expanding the team in 2018 to support you further - watch this space!\nWhat a diverse ecosystem My background is in educational publishing, so this has been my first foray into the world of scholarly publishing. In my first few months I\u0026rsquo;ve been lucky enough to attend three very different events with Crossref - Frankfurt Book Fair, our annual meeting (LIVE17) in Singapore, and an OpenCon event in Oxford.
Each one has given me the chance to talk to our members and other constituents, and I\u0026rsquo;ve been really struck by what a diverse bunch you are: from small volunteer-led society journals through universities to commercial behemoths; from Albania to Zambia (and 125 countries in between); covering everything from Ancient History to X-Ray Spectrometry.\nExpedition equipment to suit the climate This diversity gives my team a huge responsibility. We need to make sure that the support we provide to you can meet the needs of everyone - whether you\u0026rsquo;re a multinational publisher with a large team of xml specialists, or a small team of enthusiastic academics. Everyone should be able to clearly understand and take advantage of what Crossref offers both to you as an organization and to the wider scholarly community.\nWith this in mind, we\u0026rsquo;re going to be making a few changes to the support materials we provide over the next 12 months\u0026mdash;rewriting them so they\u0026rsquo;re clearer for everyone, re-structuring our support center so there\u0026rsquo;s a separate route through depending on your level of technical expertise and closer links with our main website, plus providing support in different languages and different formats.\nSticking together in a harsh environment As someone who has previously worked in commercial publishing, something else that has struck me about working in a member organization is the difference between members and traditional \u0026ldquo;customers\u0026rdquo;. It\u0026rsquo;s been fantastic to see how involved many of you are in Crossref. From taking part in our various committees and working groups, to helping to organize LIVE Local events, to attending webinars and training, it\u0026rsquo;s obvious that you feel a real sense of ownership over Crossref and our shared mission.\nWe\u0026rsquo;re hoping to make use of that great sense of community in 2018 by improving our member center, giving you more access to see the level of metadata you\u0026rsquo;re sharing with the community (and that others are sharing) and providing more options for you to communicate with, and support each other. We\u0026rsquo;re also going to be improving the education we offer for new members, to make sure that everyone is aware of the joint mission we all have to improve research communications. Most long time members know it\u0026rsquo;s so much more than just having a DOI, and we need to make sure that our new members are aware of this too and share our vision.\nLeaving no-one behind We have a lot of plans for the Member Experience team in 2018, but it\u0026rsquo;s key that everything we do meets the needs of all our members. If you have any suggestions for how we can improve your member experience, [do let me know](mailto:feedback@crossref.org?subject=Member Experience suggestion).\n", "headings": ["Expedition members","What a diverse ecosystem","Expedition equipment to suit the climate","Sticking together in a harsh environment","Leaving no-one behind"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/global-persistent-identifiers-for-grants-awards-and-facilities/", "title": "Global Persistent Identifiers for grants, awards, and facilities", "subtitle":"", "rank": 1, "lastmod": "2017-12-13", "lastmod_ts": 1513123200, "section": "Blog", "tags": [], "description": "Crossref\u0026rsquo;s Open Funder Registry (neé FundRef) now includes over 15 thousand entries. 
Crossref has over 2 million metadata records that include funding information - 1.7 million of which include an Open Funder Identifier. The uptake of funder identifiers is already making it easier and more efficient for the scholarly community to directly link funding to research outputs, but lately we\u0026rsquo;ve been hearing from a number of people that the time is ripe for a global grant identifier as well.\nTo that end, Crossref convened its funder advisory group along with representatives from our collaborator organizations, ORCID and DataCite, to explore the creation of a global grant identifier system.\nWe thought you might like to know about what we\u0026rsquo;ve been discussing\u0026hellip;\n", "content": "Crossref\u0026rsquo;s Open Funder Registry (née FundRef) now includes over 15 thousand entries. Crossref has over 2 million metadata records that include funding information - 1.7 million of which include an Open Funder Identifier. The uptake of funder identifiers is already making it easier and more efficient for the scholarly community to directly link funding to research outputs, but lately we\u0026rsquo;ve been hearing from a number of people that the time is ripe for a global grant identifier as well.\nTo that end, Crossref convened its funder advisory group along with representatives from our collaborator organizations, ORCID and DataCite, to explore the creation of a global grant identifier system.\nWe thought you might like to know about what we\u0026rsquo;ve been discussing\u0026hellip;\nThe First Rule of Grant Identifiers The first rule of grant identifiers is that they probably should not be called \u0026ldquo;grant identifiers\u0026rdquo;. Research is supported in a variety of ways\u0026mdash;through grants, endowments, secondments, loans, use of facilities/equipment and even crowd-funding. In any of these cases, it is important to be able to link researchers and research outputs to details about the sources of support. This is true for prosaic reasons\u0026mdash;to understand ROI, to map the competitive landscape, to ensure that mandates are fulfilled, to avoid double payment. But it is also true for epistemic reasons; understanding how research was funded can help contextualise that research, and help expose potential conflicts of interest or specific agendas.\nThe Open Funder Registry provides a coarse mapping between research outputs and funders, but it is becoming clear that we need more fine-grained mapping directly to information about the kind of support that was provided.\nAwkwardly, none of us had any great ideas about alternative nomenclature, so we\u0026rsquo;ve made the eminently practical decision to continue to use the term \u0026ldquo;grant identifier\u0026rdquo; whilst being aware that our aim is to define a system that applies more broadly to any form of funding or support of research. So +1 for practicality.\nWhy do we need an open, global, grant identifier? With the steady increase in research outputs, and the growing number of active researchers from both academia and industry, research stakeholders find they need to be able to automate workflows in order to scale their systems efficiently. Funders want to be able to track the outputs that arise from research they have funded. As a result, institutions find themselves having to regularly analyse and summarise the research their faculty produces. Faculty, in turn, face increasing accounting bureaucracy in order to meet all the reporting requirements that are cascading through the system.
And finally, publishers are seeking to make the manuscript submission and evaluation process more efficient as well as to increase the discoverability and contextual richness of their publications.\nMost funders already have local, internal grant identifiers. But there are over 15K funders currently listed in the aforementioned Open Funder Registry. The problem is that each funder has its own identifier scheme and (sometimes) API. It is very difficult for third parties to integrate with so many different systems. Open, global, persistent and machine-actionable identifiers are key to scaling these activities.\nWe already have a sophisticated open, global, interoperable infrastructure of persistent identifier systems for some key elements of scholarly communications. We have persistent identifiers for researchers and contributors (ORCID iDs), for data and software (DataCite DOIs), for journal articles, preprints, conference proceedings, peer reviews, monographs and standards (Crossref DOIs), and for Funders (Open Funder Registry IDs).\nAnd there are similar systems under active development for research organizations, conferences, projects and resources reported in the biomedical literature (e.g. antibodies, model organisms). At a minimum, open, persistent identifiers address the inherent difficulty in disambiguating entities based on textual strings (structured or otherwise). This precision, in turn, allows automated cross-walking of linked identifiers through APIs and metadata which enable advanced applications.\nFor example, the use of identifiers can simplify user interfaces and save users time. Almost everybody in scholarly communications spends a frustrating portion of their lives copying information from one system to another. This process is not just tedious, it is also error-prone. But we are increasingly seeing systems make use of identifiers to eliminate the need for a lot of this manual copying. For example, researchers using an ORCID iD when they submit a manuscript can start to expect that their relevant ORCID biographical data will simply be imported into the manuscript tracking system so that it doesn\u0026rsquo;t have to be manually copied over. And if said researcher has their manuscript accepted, they can also expect that their ORCID record will automatically be updated with the publication information and that their institution and/or their funder can be automatically notified of the impending publication so that relevant repositories and CRIS systems can be populated automatically.\nAdditionally, there is a growing list of services that have been built on top of these standard identifiers. Profile systems (e.g. VIVO, Impact Story, Kudos) can automatically retrieve the latest information from a researcher\u0026rsquo;s ORCID record. Bibliographic management tools (EasyBib, Zotero, Papers) allow researchers to cite content with the latest metadata. And similarity checking services can harvest and index the latest scholarly literature for inclusion in the tools they have developed for detecting plagiarism and fraud. Funder identifiers are already playing an important role in this metadata workflow. As of November 2017, there are 1.7 million Crossref publication DOIs that are explicitly linked to an Open Funder Registry ID. These linkages serve as a foundation for initiatives like SHARE, CHORUS, and the Jisc Publications Router. 
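(A hedged aside, not part of the original post: the scale of funder-linked records can be checked directly against the public REST API. The 'has-funder' filter used below does exist, but the totals returned today will be much larger than the November 2017 figures quoted here.)

import json
import urllib.request

# Count how many Crossref records currently carry funder metadata.
# 'has-funder' is an existing REST API filter; rows=0 returns only the summary.
url = 'https://api.crossref.org/works?filter=has-funder:true&rows=0'
with urllib.request.urlopen(url) as resp:
    message = json.load(resp)['message']
print('records with funding information:', message['total-results'])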
But there are another 1+ million records that have funding information without an associated ID and, of course, 90+ million records that have no funding information at all.\nSo If we have global funder identifiers and they are already working, why do we need global grant identifiers as well? Don\u0026rsquo;t we just need to increase uptake of funder identifiers? How will grant identifiers help?\nFirst, global grant identifiers could greatly reduce the UX complexity of gathering funder information. This, in turn, would boost the collection of funding information from researchers and ensure that the information that they provide to publishers, institutions and other funders is accurate and complete.\nSecond, the introduction of global grant identifiers would further increase the utility of links between research outputs and funding information. A grant identifier provides more granular information about the funding. Instead of just linking to information about the funder, a grant identifier would allow linking research outputs to particular research programs along with the information relating to those programs, such as grant durations, award amounts, etc. It would also allow analysis of relationships between multiple co-funding bodies.\nTo DOI or not to DOI? Clearly, we think DOIs are pretty good things. But we also aren\u0026rsquo;t zealots. Sometimes DOIs are appropriate and sometimes they are not. For example, we were instrumental in defining the structure of the ORCID identifier and, in that case, we decided that DOIs were not appropriate.\nBut in the case of a global grant identifier system, we think there are a number of reasons adopting DOIs would be useful:\nIt is easy to \u0026ldquo;overlay\u0026rdquo; the global DOI system onto existing local identifier systems. An organization does not need to abandon their internal identifier scheme in order to use DOIs. They can instead incorporate their local scheme into the DOI structure via the simple mechanism of prepending their existing identifiers with an assigned DOI prefix and registering relevant metadata with a DOI registration agency like Crossref or DataCite. DOI links are \u0026ldquo;persist-able\u0026rdquo;. That is they can resolve to different online locations even if domain names change and/or the DNS system itself is replaced. This characteristic is important for a grant identifier because funding agencies - particularly government funding agencies - tend to undergo frequent reorganisations (e.g. splitting, merging, restructuring) and renaming. An indirectly resolvable identifier like a DOI (or ARK, Handle, etc.) is critical to ensure the long-term integrity of identifiers in these situations. There are 15K+ funders currently listed in the Open funder Registry. Each has their own grant identifier scheme and different levels of technical support for them (APIs, etc.). This makes it very difficult for 3rd parties to build tools that work \u0026ldquo;generically\u0026rdquo; with grant identifiers. But once a local identifier scheme had been \u0026ldquo;globalised\u0026rdquo; by making it a DOI, third parties can build tools without having to worry about the differences between individual funder systems. Crossref and DataCite DOIs are deeply embedded in the tools and workflows of scholarly communications. Manuscript tracking systems, bibliographic management systems, metrics systems, CRIS systems, profile systems, etc. often have built-in mechanisms for consuming and making use of DOIs and their associated metadata. 
Crossref and DataCite DOIs are cross-disciplinary. They are used in the humanities, social sciences, sciences and in a host of communities that frequently interact with the scholarly literature for example- NGOs, IGOs, patent systems, and standards bodies. Crossref and DataCite provide a variety of APIs (e.g. REST, OAI-PMH) and services (e.g. search, Crossmark, Similarity Check, Scholix) built around DOIs. DOI\u0026rsquo;s have a useful characteristic, which is that the \u0026ldquo;prefix\u0026rdquo; of a DOI can be used to determine who originally created the record with which the DOI is associated. In the case of grant identifiers, this means that the prefix of a DOI-based grant identifier could be used to automatically determine the correct funder responsible for the initial grant. This means that the UIs for entering funder/grant information could be both simplified and made more robust\u0026mdash;which would likely increase the number of parties that collect and propagate id-based funder information. But the use of DOIs as the basis for grant identifiers also introduces some potential barriers to adopting a standard funding identifier. For example:\nFunders would need to be able to join a suitable DOI registration agency (e.g. Crossref, DataCite). Some funders (e.g. government agencies) may be restricted in their ability to \u0026ldquo;join\u0026rdquo; external organizations. Funders would need to be able to create new DOIs and register associated metadata with their chosen registration agency in a timely manner. Some funders may be unable to generate metadata or may not have the technical capacity to automatically register metadata. Funders would need to be able to provide an openly available (e.g. not behind access control) online resource to which the DOI would resolve. For example, a landing page describing the grant or a digital copy of the grant itself. Again, some funders may face technical barriers to providing an online resource to resolve to. In other cases there may be privacy or security reasons for not providing an open resource to which a DOI can resolve. Still, the advisory group consensus has been that these barriers are generally surmountable. Most of the questions they had revolved around understanding what a DOI-based workflow would look like from the funder\u0026rsquo;s perspective, and so we outlined the steps a funder would need to take in order to adopt DOI-based global identifiers.\nThe DOI-based grant identifiers workflow A funder registering metadata and creating DOIs for grants would need to support the following workflow:\nWhen a grant is submitted, the funder would assign their own internal identifier for tracking, etc. For example 00-00-05-67-89. If the grant is accepted, the funder would: generate a global public identifier for the grant based on the DOI. For example, assuming their prefix was 10.4440, then the global public identifier might become https://0-doi-org.libus.csd.mu.edu/10.4440/00-00-05-67-89. create a \u0026ldquo;landing page\u0026rdquo; on their website (or wherever they make their grants available online) to which the global public identifier will resolve. The landing page would display a TBD set of metadata describing the grant, as well as a link to the grant itself. register the generated DOI and a TBD set of metadata with their registration agency (RA) (e.g. Crossref or DataCite). This metadata would include the URL of the landing page defined above. 
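To make the prefix-plus-local-identifier step above concrete, here is a minimal sketch (assuming Python; the prefix 10.4440 and the grant number 00-00-05-67-89 are the illustrative values from this post, not real registrations, and the resolver shown is the canonical doi.org):

def grant_doi(prefix, local_id):
    # Prepend the assigned DOI prefix to the funder's existing internal identifier.
    return f'{prefix}/{local_id}'

def grant_doi_url(prefix, local_id):
    # Resolvable form of the identifier via the canonical DOI resolver.
    return f'https://doi.org/{grant_doi(prefix, local_id)}'

print(grant_doi('10.4440', '00-00-05-67-89'))      # -> 10.4440/00-00-05-67-89
print(grant_doi_url('10.4440', '00-00-05-67-89'))  # -> https://doi.org/10.4440/00-00-05-67-89

The funder keeps using its local number internally; only the registered, prefixed form is promoted as the global public identifier.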
Once metadata and DOIs are registered with an RA, the funder would have a series of ongoing obligations: Update locations: If the location of the landing page changes (for example, because of a site restructuring, merger or split of the funding organization, etc.), the funder would need to update their metadata records to point the DOI to the new location. Update metadata: If metadata becomes out-of-date (e.g. the status of a grant changes, additional grant-related metadata is added, etc.), the funder would update the relevant records. Promote the use of the DOI as the preferred global, public identifier for the grant. That is - the one that people should use when referring to or citing the grant (the funder can continue to use the original local identifier for their internal systems, etc.). Again, the advisory group thought that this workflow seemed tractable and agreed that the best way to ensure that would be to proceed with creating a working pilot of a global grant identifier system based on the DOI.\nNext steps Crossref is starting a grant identifier pilot. We will create two sub-groups of the funder advisory group.\nGroup for \u0026ldquo;Governance, membership, and fees\u0026rdquo; This group will look at governance and financial issues raised by the introduction of grant identifiers. For example, it will look at whether Crossref\u0026rsquo;s membership model works as is or might need to be adjusted in order to accommodate a new constituency. We know, for example, that some funders find it hard to become \u0026ldquo;members\u0026rdquo; of organizations. We might need to create other participation categories in order to accommodate these restrictions. Similarly, the group will look at designing a pricing model for DOIs for grants in order to make sure that the fees cover the costs of modifying and sustaining the system for grants, as well as to ensure that the pricing incentivises funders to participate. This sub-group will work closely with Crossref\u0026rsquo;s membership and fees committee.\nGroup for \u0026ldquo;Technical and metadata\u0026rdquo; This group will look at any technical changes that need to be made to the registration process in order to accommodate the new participants. If there are any, they are likely to center around specific metadata requirements for grants. As such, the group will likely spend most of its time agreeing on a practical metadata schema for capturing relevant information about the myriad of ways in which organizations support research. This group will also liaise with other relevant technical working groups, such as those who are looking at organizational identifiers and conference identifiers.\nThe two sub-groups will first meet in January and, after a few meetings, will report back to the advisory group with recommendations. Using these recommendations, we will develop an implementation plan which will include testing the infrastructure, testing metadata deposits, fee modelling, etc., with a small group of participants.\nIf you are a funder, and you would like to have somebody from your organization participate in one of these working groups, please contact Ginny Hendricks. Note that joining the above groups does not commit you to anything other than engaging in the discussion. 
We want to make sure we create a system that works for a range of funders, not just those who can start testing something right away.\n", "headings": ["The First Rule of Grant Identifiers","Why do we need an open, global, grant identifier?","To DOI or not to DOI?","The DOI-based grant identifiers workflow","Next steps","Group for \u0026ldquo;Governance, membership, and fees\u0026rdquo;","Group for \u0026ldquo;Technical and metadata\u0026rdquo;"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/and-our-survey-says.../", "title": "And our survey says…", "subtitle":"", "rank": 1, "lastmod": "2017-12-11", "lastmod_ts": 1512950400, "section": "Blog", "tags": [], "description": "Earlier this year we sent out a short survey inviting members to rate our performance. We asked what you think we do well, what we don’t do so well, and one thing we could do to improve our rating.\n", "content": "Earlier this year we sent out a short survey inviting members to rate our performance. We asked what you think we do well, what we don’t do so well, and one thing we could do to improve our rating.\nWe were delighted to receive 313 responses and relieved that 93% of those were positive (phew!). It was very useful to hear your thoughts and to get such a variety of comments covering Product, Outreach, Marketing and Member Experience. There were a few recurring themes, three of which we’d like to address here:\n1. Providing information in different languages Not surprisingly, given the growing diversity of our member base, some respondents asked us to share information in languages other than English. We have been aware of this growing need for some time and have been working on a few developments in this area:\nIn January 2018 we will be launching a series of seven service videos in six different languages—French, Spanish, Brazilian Portuguese, Chinese, Korean, and Japanese. January also sees the launch of a new initiative called the Ambassador Program. Ambassadors will work closely with Crossref to help spread the word about our services, and support our global members in their own languages. During 2017 we hosted two webinars in Brazilian Portuguese and one in Turkish, and aim to increase this in 2018. 2. Member-to-member discussion forum Some respondents asked for a facility to enable members to reach out to each other, giving direct opportunity for discussions and/or sharing experiences online (and in their own languages). We have been working for a few months now to provide a member-to-member discussion area, which is planned for 2018. Following a soft launch covering a few areas/topics, we’ll broaden the scope to include technical support, too.\n3. Registering metadata more easily using the web deposit form Many respondents requested a more user-friendly process for registering metadata through our webform. Our Product and DevOps teams have been working on this for some time and have created a new interface called the Metadata Manager, which is currently in Beta but scheduled to launch in Q1 of 2018.\nFinally, we’d like to thank you for participating in our survey. Your valuable feedback and suggestions help us understand your experience, improve our service, shape the course of particular projects and even direct our future strategy.\nAs this survey was anonymous, we are unable to respond to anyone on an individual basis, however, if you’d like to have your particular comments addressed, we would love to hear from you directly.\n", "headings": ["1. 
Providing information in different languages","2. Member-to-member discussion forum","3. Registering metadata more easily using the web deposit form"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/net-promoter-score/", "title": "Net Promoter Score", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/survey/", "title": "Survey", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/working-with-universities-at-crossref-live-yogyakarta/", "title": "Working with universities at Crossref LIVE Yogyakarta", "subtitle":"", "rank": 1, "lastmod": "2017-12-11", "lastmod_ts": 1512950400, "section": "Blog", "tags": [], "description": "Following on from our LIVE Annual Meeting in Singapore, my colleague, Susan Collins, and I held a local LIVE event in Yogyakarta thanks to support from Universitas Ahmad Dahlan (UAD), Universitas Muhammadiyah Sidoarjo and one of Crossref\u0026rsquo;s new Sponsoring Affiliates, Relawan Jurnal Indonesia.\n", "content": "Following on from our LIVE Annual Meeting in Singapore, my colleague, Susan Collins, and I held a local LIVE event in Yogyakarta thanks to support from Universitas Ahmad Dahlan (UAD), Universitas Muhammadiyah Sidoarjo and one of Crossref\u0026rsquo;s new Sponsoring Affiliates, Relawan Jurnal Indonesia.\nOver the past two years, we\u0026rsquo;ve seen accelerated growth in our membership in Asia Pacific (making up a quarter of all new members in the last two years). A lot of those new members have come from Indonesia, so it was great to have the opportunity to meet up, answer questions and to share knowledge between all our different organizations.\nWe welcomed speakers such as Dr. Muhammad Dimyati, from the Directorate General of Strengthening for Research and Development, Ministry of Research, Technology and Higher Education. Dr. Dimyati talked about the importance of Indonesian research and presented statistics on its growth, but also its coverage in different databases like Scopus and DOAJ.\nDr. Lukman from LIPI, the Indonesian Institute of Sciences also joined us to explain the importance of identifiers within the research ecosystem. As any identifier buff will know, we\u0026rsquo;re keen to talk more about how organizations are using Crossref metadata and identifiers, and the importance of providing good, complete metadata (Metadata2020) so this, plus a remote presentation from Nobuko Miyari from ORCID helped provide great context for the day.\nMetadata and identifiers are of course just one part of the process, and Mr. 
Tole Sutikno from UAD gave an overview of good practice publishing by looking at some of the wider issues that journal editors (and researchers) need to know.\nWe had time in the afternoon to talk to our audience about Crossref - our different services, OJS integrations, funding data and our APIs, and thanks to our moderators we were able to take lots of questions from members who had specific questions about Crossmark, Cited-by and depositing references.\nA few weeks later, and I\u0026rsquo;m still absorbing all of the things that happened on our (too) quick trip to Yogyakarta.\nThanks again to our members and hosts for attending the event and sharing their questions, ideas and plans with us, and we plan to come back to continue to build on these in future.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-pidapalooza-lineup-is-out-come-rock-out-with-us-at-the-open-festival-of-persistent-identifiers/", "title": "The PIDapalooza lineup is out; come rock out with us at the open festival of persistent identifiers", "subtitle":"", "rank": 1, "lastmod": "2017-12-01", "lastmod_ts": 1512086400, "section": "Blog", "tags": [], "description": "PIDs\u0026rsquo;R\u0026rsquo;Us and if they\u0026rsquo;re you, too, please join us for the second PIDapalooza, in Girona, Spain on January 23-24, for a two-day celebration of persistent identifiers.\nTogether, we will achieve the incredible - make a meeting about persistent identifiers and networked research fun! Brought to you by California Digital Library, Crossref, DataCite, and ORCID, this year\u0026rsquo;s sessions are organized around eight themes:\n", "content": "PIDs\u0026rsquo;R\u0026rsquo;Us and if they\u0026rsquo;re you, too, please join us for the second PIDapalooza, in Girona, Spain on January 23-24, for a two-day celebration of persistent identifiers.\nTogether, we will achieve the incredible - make a meeting about persistent identifiers and networked research fun! Brought to you by California Digital Library, Crossref, DataCite, and ORCID, this year\u0026rsquo;s sessions are organized around eight themes:\nPID myths Achieving persistence PIDs for emerging uses Legacy PIDs Bridging worlds PIDagogy PID stories Kinds of persistence The program is now final and there really is something for everyone (well, every PID geek) Hmm, Do Researchers Need to Care about PID Systems? Excellent question. We\u0026rsquo;ll hear Stories from the PID Roadies: Scholix. Nevermind the The Bollockschain and other PID Hallucinations. An intriguing session on #ResInfoCitizenshipIs?. There will be a plenary by Johanna McEntyre on As a biologist I want to reuse and remix data so that I can do my research. And we\u0026rsquo;ll enjoy another plenary from Melissa Haendel (title to be confirmed). With half the places already booked, now\u0026rsquo;s the time to register and plan your trip. 
We hope to see fellow festival-goers there for some PIDtastic party time (and actually some epic serious conversations).\nContact me via the steering committee at PIDapalooza@datacite.org with any questions, music requests, or backstage passes.\nFull lineup View the Crossref LIVE17 agenda.\n", "headings": ["The program is now final and there really is something for everyone (well, every PID geek)","Full lineup"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/resolving-citations-we-dont-need-no-stinkin-parser/", "title": "Resolving Citations (we don’t need no stinkin’ parser)", "subtitle":"", "rank": 1, "lastmod": "2017-11-29", "lastmod_ts": 1511913600, "section": "Labs", "tags": [], "description": "If you are reading this, you may be faced with the following problem- You have a collection of free-form citations which you have copied from a scholarly article and you want to import them into a bibliographic management tool (or other database). In short, you would like to turn something like this:\nCarberry, J 2008, “Toward a Unified Theory of High-Energy Metaphysics: Silly String Theory.” Journal of Psychoceramics, vol. 5, no.", "content": "\rIf you are reading this, you may be faced with the following problem- You have a collection of free-form citations which you have copied from a scholarly article and you want to import them into a bibliographic management tool (or other database). In short, you would like to turn something like this:\nCarberry, J 2008, “Toward a Unified Theory of High-Energy Metaphysics: Silly String Theory.” Journal of Psychoceramics, vol. 5, no. 11, pp. 1-3.\nInto something more like this:\n@article{Carberry_2008, title={Toward a Unified Theory of High-Energy Metaphysics: Silly String Theory}, volume={5}, url={http://0-dx-doi-org.libus.csd.mu.edu/10.5555/12345678}, DOI={10.5555/12345678}, number={11}, journal={Journal of Psychoceramics}, publisher={Society of Psychoceramics}, author={Carberry, Josiah}, year={2008}, month={Aug}, pages={1-3}} Or even this:\nTY - JOUR JO - Journal of Psychoceramics AU - Josiah Carberry SN - 0264-3561 TI - Toward a Unified Theory of High-Energy Metaphysics: Silly String Theory SP - 1 EP - 3 VL - 5 PB - Society of Psychoceramics PY - 2008 The traditional approach to this is often “We’ll start by trying to parse the citation into its component parts.” Indeed, there are a number of tools that try to do this:\nParaCite ParsCit FreeCite AnyStyle Simple Text Query Which is cool, but parsing citations is very difficult- particularly with obscure and/or terse citation styles.\nBut there is another way!\nInstead of trying to parse the citation, just search for the record in a database that already has the citation parsed. The Crossref REST API is remarkably good for this. 
For example:\nhttps://api.crossref.org/works?query.bibliographic=Carberry%2C+Josiah.+%E2%80%9CToward+a+Unified+Theory+of+High-Energy+Metaphysics%3A+Silly+String+Theory.%E2%80%9D+Journal+of+Psychoceramics+5.11+%282008%29%3A+1-3.# Gives you the following result:\n{\u0026#34;status\u0026#34;:\u0026#34;ok\u0026#34;,\u0026#34;message-type\u0026#34;:\u0026#34;work\u0026#34;,\u0026#34;message-version\u0026#34;:\u0026#34;1.0.0\u0026#34;,\u0026#34;message\u0026#34;:{\u0026#34;indexed\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2017,10,26]],\u0026#34;date-time\u0026#34;:\u0026#34;2017-10-26T06:16:09Z\u0026#34;,\u0026#34;timestamp\u0026#34;:1508998569281},\u0026#34;reference-count\u0026#34;:6,\u0026#34;publisher\u0026#34;:\u0026#34;CrossRef Test Account\u0026#34;,\u0026#34;issue\u0026#34;:\u0026#34;11\u0026#34;,\u0026#34;license\u0026#34;:[{\u0026#34;URL\u0026#34;:\u0026#34;http:\\/\\/psychoceramicsproprietrylicenseV1.com\u0026#34;,\u0026#34;start\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2011,11,21]],\u0026#34;date-time\u0026#34;:\u0026#34;2011-11-21T00:00:00Z\u0026#34;,\u0026#34;timestamp\u0026#34;:1321833600000},\u0026#34;delay-in-days\u0026#34;:1195,\u0026#34;content-version\u0026#34;:\u0026#34;tdm\u0026#34;}],\u0026#34;funder\u0026#34;:[{\u0026#34;DOI\u0026#34;:\u0026#34;10.13039\\/100000001\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;National Science Foundation\u0026#34;,\u0026#34;doi-asserted-by\u0026#34;:\u0026#34;publisher\u0026#34;,\u0026#34;award\u0026#34;:[\u0026#34;CHE-1152342\u0026#34;]},{\u0026#34;DOI\u0026#34;:\u0026#34;10.13039\\/100006151\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;Basic Energy Sciences\u0026#34;,\u0026#34;doi-asserted-by\u0026#34;:\u0026#34;publisher\u0026#34;,\u0026#34;award\u0026#34;:[\u0026#34;DE-SC0001091\u0026#34;]}],\u0026#34;content-domain\u0026#34;:{\u0026#34;domain\u0026#34;:[\u0026#34;psychoceramics.labs.crossref.org\u0026#34;],\u0026#34;crossmark-restriction\u0026#34;:true},\u0026#34;short-container-title\u0026#34;:[\u0026#34;Journal of Psychoceramics\u0026#34;],\u0026#34;published-print\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2008,8,14]]},\u0026#34;DOI\u0026#34;:\u0026#34;10.5555\\/12345678\u0026#34;,\u0026#34;type\u0026#34;:\u0026#34;journal-article\u0026#34;,\u0026#34;created\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2011,11,9]],\u0026#34;date-time\u0026#34;:\u0026#34;2011-11-09T14:42:05Z\u0026#34;,\u0026#34;timestamp\u0026#34;:1320849725000},\u0026#34;page\u0026#34;:\u0026#34;1-3\u0026#34;,\u0026#34;update-policy\u0026#34;:\u0026#34;http:\\/\\/dx.doi.org\\/10.5555\\/crossmark_policy\u0026#34;,\u0026#34;source\u0026#34;:\u0026#34;Crossref\u0026#34;,\u0026#34;is-referenced-by-count\u0026#34;:2,\u0026#34;title\u0026#34;:[\u0026#34;Toward a Unified Theory of High-Energy Metaphysics: Silly String 
Theory\u0026#34;],\u0026#34;prefix\u0026#34;:\u0026#34;10.5555\u0026#34;,\u0026#34;volume\u0026#34;:\u0026#34;5\u0026#34;,\u0026#34;clinical-trial-number\u0026#34;:[{\u0026#34;clinical-trial-number\u0026#34;:\u0026#34;isrctn12345\u0026#34;,\u0026#34;registry\u0026#34;:\u0026#34;10.18810\\/isrctn\u0026#34;}],\u0026#34;author\u0026#34;:[{\u0026#34;ORCID\u0026#34;:\u0026#34;http:\\/\\/orcid.org\\/0000-0002-1825-0097\u0026#34;,\u0026#34;authenticated-orcid\u0026#34;:true,\u0026#34;given\u0026#34;:\u0026#34;Josiah\u0026#34;,\u0026#34;family\u0026#34;:\u0026#34;Carberry\u0026#34;,\u0026#34;affiliation\u0026#34;:[]}],\u0026#34;member\u0026#34;:\u0026#34;7822\u0026#34;,\u0026#34;published-online\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2008,8,13]]},\u0026#34;container-title\u0026#34;:[\u0026#34;Journal of Psychoceramics\u0026#34;],\u0026#34;original-title\u0026#34;:[],\u0026#34;deposited\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2016,1,20]],\u0026#34;date-time\u0026#34;:\u0026#34;2016-01-20T15:44:56Z\u0026#34;,\u0026#34;timestamp\u0026#34;:1453304696000},\u0026#34;score\u0026#34;:1.0,\u0026#34;subtitle\u0026#34;:[],\u0026#34;short-title\u0026#34;:[],\u0026#34;issued\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2008,8,13]]},\u0026#34;references-count\u0026#34;:6,\u0026#34;URL\u0026#34;:\u0026#34;http:\\/\\/dx.doi.org\\/10.5555\\/12345678\u0026#34;,\u0026#34;relation\u0026#34;:{\u0026#34;references\u0026#34;:[{\u0026#34;id-type\u0026#34;:\u0026#34;doi\u0026#34;,\u0026#34;id\u0026#34;:\u0026#34;10.5284\\/1000389\u0026#34;,\u0026#34;asserted-by\u0026#34;:\u0026#34;object\u0026#34;}]},\u0026#34;ISSN\u0026#34;:[\u0026#34;0264-3561\u0026#34;],\u0026#34;issn-type\u0026#34;:[{\u0026#34;value\u0026#34;:\u0026#34;0264-3561\u0026#34;,\u0026#34;type\u0026#34;:\u0026#34;electronic\u0026#34;}],\u0026#34;assertion\u0026#34;:[{\u0026#34;value\u0026#34;:\u0026#34;http:\\/\\/orcid.org\\/0000-0002-1825-0097\u0026#34;,\u0026#34;URL\u0026#34;:\u0026#34;http:\\/\\/orcid.org\\/0000-0002-1825-0097\u0026#34;,\u0026#34;order\u0026#34;:0,\u0026#34;name\u0026#34;:\u0026#34;orcid\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;ORCID\u0026#34;,\u0026#34;group\u0026#34;:{\u0026#34;name\u0026#34;:\u0026#34;identifiers\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Identifiers\u0026#34;}},{\u0026#34;value\u0026#34;:\u0026#34;2012-07-24\u0026#34;,\u0026#34;order\u0026#34;:0,\u0026#34;name\u0026#34;:\u0026#34;received\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Received\u0026#34;,\u0026#34;group\u0026#34;:{\u0026#34;name\u0026#34;:\u0026#34;publication_history\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Publication History\u0026#34;}},{\u0026#34;value\u0026#34;:\u0026#34;2012-08-29\u0026#34;,\u0026#34;order\u0026#34;:1,\u0026#34;name\u0026#34;:\u0026#34;accepted\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Accepted\u0026#34;,\u0026#34;group\u0026#34;:{\u0026#34;name\u0026#34;:\u0026#34;publication_history\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Publication History\u0026#34;}},{\u0026#34;value\u0026#34;:\u0026#34;2012-09-10\u0026#34;,\u0026#34;order\u0026#34;:2,\u0026#34;name\u0026#34;:\u0026#34;published\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Published\u0026#34;,\u0026#34;group\u0026#34;:{\u0026#34;name\u0026#34;:\u0026#34;publication_history\u0026#34;,\u0026#34;label\u0026#34;:\u0026#34;Publication History\u0026#34;}}]}} That’s already pretty cool. 
But if you extract the DOI from the above and use DOI content negotiation to query the the DOI like this:\n$ curl -LH \u0026#34;Accept: application/x-bibtex\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.5555/12345678 You get the following result in BibTex:\n@article{Carberry_2008, title={Toward a Unified Theory of High-Energy Metaphysics: Silly String Theory}, volume={5}, url={http://0-dx-doi-org.libus.csd.mu.edu/10.5555/12345678}, DOI={10.5555/12345678}, number={11}, journal={Journal of Psychoceramics}, publisher={Society of Psychoceramics}, author={Carberry, Josiah}, year={2008}, month={Aug}, pages={1-3}} Yay!\nThere, that wasn’t too hard, was it?\nOK, what is the catch?\nWell… using Crossref REST API has a number of limitations that you should be aware of:\nCrossref metadata contains more than just bibliographic metadata. You need to use query.bibliographic if you want to restrict your query to just bibliographic information. Otherwise you may get false positives. The API will almost always match *something*. You need to look at the score in order to determine the likelihood that you’ve got a correct match. It only works on content listed in Crossref’s database. Still, this is a lot of content. The metadata in Crossref’s database can sometimes be… spotty* But using the API also has a big benefit– You get fewer false negatives. If you have a typo or incomplete metadata, it will do a much better job than a strict citation parser or OpenURL Query.\nIn short, the Crossref REST API is very good at resolving citations. We encourage you to try it and let us know how it works for you.\nNote that if you are having trouble getting hold of free-form citations to begin with, you may want to use the Cermine tool for extracting citations from PDFs.\n(*unmitigated bilge)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/howard-ratner/", "title": "Howard Ratner", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/sara-girard/", "title": "Sara Girard", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-7-with-chorus/", "title": "Using the Crossref REST API. 
Part 7 (with CHORUS)", "subtitle":"", "rank": 1, "lastmod": "2017-11-27", "lastmod_ts": 1511740800, "section": "Blog", "tags": [], "description": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Sara Girard and Howard Ratner at CHORUS about the work they’re doing, and how they’re using our REST API as part of their workflow.\n", "content": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Sara Girard and Howard Ratner at CHORUS about the work they’re doing, and how they’re using our REST API as part of their workflow.\nIntroducing CHORUS CHORUS (www.chorusaccess.org) is an innovative non-profit organization that supports funders, publishers, authors and institutions to deliver public access to articles reporting on funded research. Our vision is to create a future where the output flowing from funded research is easily and permanently discoverable, accessible and verifiable by anyone in the world.\nCHORUS currently monitors over 400,000 articles for more than 20 US federal and two international funding agencies, and has partnerships with Department of Defense, Department of Energy, National Science Foundation, National Institute of Standards and Technology, Office of the Director of National Intelligence: Intelligence Advanced Research Projects Activity, Smithsonian Institution, US Department of Agriculture, US Geological Survey, Japan Science and Technology Agency, and the Australian Research Council. CHORUS is supported by over 50 publisher and affiliate members who represent the majority of funded published research.\n\u0026lt;img align=\u0026quot;right\u0026quot; src=\u0026quot;/images/blog/chorus-blog.png\u0026quot; width=\u0026quot;700\u0026quot; alt=\u0026quot;image of interaction of platforms\u0026quot; class=\u0026quot;img-responsive\u0026quot;/\u0026gt;\nWhat problem is your service trying to solve? CHORUS is the first service of CHOR Inc., founded in 2013 in response to the directive of the US Office of Science and Technology Policy (OSTP) for all US federal research agencies to develop and implement plans to widen public access to publications and data associated with federally funded research.\nCHORUS aims to minimize public access compliance burdens and ensure the long-term preservation and accessibility of articles reporting on funded research. We provide the necessary metadata infrastructure and governance to enable a smooth, low-friction interface between funders, authors, institutions and publishers in a distributed network environment. CHORUS’ services track public accessibility of articles regardless of whether they are published Gold OA or made open by the publisher.\nCan you tell us how you are using the Crossref REST API at CHORUS? The Crossref REST API is a key source for the metadata database that powers the CHORUS Dashboard, Search and Reporting services for Funders, Institutions and Publishers.\nWhat metadata values do you pull from the API? We pull the basic bibliographic information such as publisher, journal title, article title, authors and publication date. Perhaps even more important to our area of focus is the funder, grant and license information.\nHow often do you extract/query data? CHORUS uses the Crossref REST API every day.\nCan you describe your workflow using Crossref metadata? 
Every night we query the Crossref API to send us metadata for all article or conference proceeding records for our member publishers that have funder metadata matching the funders monitored by CHORUS.\nCHORUS monitors these DOIs for public accessibility on publisher websites; inclusion in agency search tools; deposit in a growing list of funder repositories (e.g.,US DOE PAGES,NSF PAR, and USGS Publications Warehouse and NIH PubMed Central); and for associated ORCID researcher records. CHORUS also uses the reuse license metadata to identify when an article is expected to be made publicly accessible.\nFinally, we check for ingestion in CLOCKSS and/or Portico to ensure long-term preservation and accessibility of research findings reported in journal and proceedings articles. Our preservation partners keep the full text in their dark archives, only making it available when the content may no longer be made publicly accessible by the publisher.\nThe collected and enhanced metadata is presented in our dashboard, search and reporting services all including links back to the publisher sites via the Crossref DOI.\nWhat are the future plans for CHORUS? Following the success of our Funder and Publisher Dashboards, CHORUS is expanding the services we provide to international funders, non-governmental funders, and institutions. Our first funder partnership outside of the United States is with the Japan Science and Technology Agency (JST). CHORUS announced its new Institution Dashboard service this Autumn after successfully concluding pilots with the University of Florida and University of Denver. CHORUS will also be adding links to relevant datasets and other metadata utilizing forthcoming identifiers and metadata standards.\nWhat else would you like to see the REST API offer It would be great to see more identification of funders from Crossref members. While we have seen great leaps since 2013, we all have a long way to go. We are also eager to see Crossref incorporate the Organization Identifiers that they have begun with ORCID, DataCite and others.\nThanks, CHORUS! If you would like to contribute a case study on the uses of Crossref Metadata APIs please contact the Community team.\n", "headings": ["Introducing CHORUS","What problem is your service trying to solve?","Can you tell us how you are using the Crossref REST API at CHORUS?","What metadata values do you pull from the API?","How often do you extract/query data?","Can you describe your workflow using Crossref metadata?","What are the future plans for CHORUS?","What else would you like to see the REST API offer"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-research-nexus-better-research-through-better-metadata/", "title": "The research nexus - better research through better metadata", "subtitle":"", "rank": 1, "lastmod": "2017-11-14", "lastmod_ts": 1510617600, "section": "Blog", "tags": [], "description": "Researchers are adopting new tools that create consistency and shareability in their experimental methods. Increasingly, these are viewed as key components in driving reproducibility and replicability. They provide transparency in reporting key methodological and analytical information. They are also used for sharing the artifacts which make up a processing trail for the results: data, material, analytical code, and related software on which the conclusions of the paper rely. Where expert feedback was also shared, such reviews further enrich this record. 
We capture these ideas and build on the notion of the “article nexus” blogpost with a new variation: \u0026ldquo;the research nexus.\u0026rdquo;\n", "content": "Researchers are adopting new tools that create consistency and shareability in their experimental methods. Increasingly, these are viewed as key components in driving reproducibility and replicability. They provide transparency in reporting key methodological and analytical information. They are also used for sharing the artifacts which make up a processing trail for the results: data, material, analytical code, and related software on which the conclusions of the paper rely. Where expert feedback was also shared, such reviews further enrich this record. We capture these ideas and build on the notion of the “article nexus” blogpost with a new variation: \u0026ldquo;the research nexus.\u0026rdquo;\nSome of Crossref’s publishing community are encouraging the scholarly communication practices surrounding these tools in a variety of ways: incorporating them into the publishing workflow, integrations between the tools and publishing systems, as well as linking and exposing the artifacts in the publications for readers to access. A special set of publishers have gone all the way and included these links into their Crossref metadata record. They insert them directly into the metadata deposit when they register the content (technical documentation). Doing so, these connections reach further than the publisher platform and propagate to systems across the research ecosystem including places like indexers, research information management systems, sharing platforms (oh, the list goes on!). We highlight a small set of examples to illustrate how these outstanding publishing practices are supporting good research.\n1. Linking to an entire collection of methods Crossref member, Protocols.io, is supporting transparency and methods reproducibility with their open access repository of science methods. Leitão-Goncalves R, Carvalho-Santos Z, Francisco AP, et al. investigated the concerted action of the commensal bacteria Acetobacter pomorum and Lactobacilli in Drosophila melanogaster, demonstrating how the interaction of specific nutrients within the microbiome can shape behavioral decisions and life history traits. Findings were published in PLOS Biology earlier this year: https://0-doi-org.libus.csd.mu.edu/10.1371/journal.pbio.2000862. Authors deposited detailed methods and protocols used in the project (Drosophila rearing, media preparations, and microbial manipulations) as a collection in Protocols.io: https://0-doi-org.libus.csd.mu.edu/10.17504/protocols.io.hdtb26n. So Protocols.io registered their content with us, linking the protocol to the paper. This creates the crosswalk between both so that users can get from one to the other through the metadata. The full metadata record can be found here.\n2. Linking to video protocol If a picture is worth a thousand words, the truism might apply to moving pictures many times over. Fasel B, Spörri J, Schütz P, et al. proposed a set of calibration movements optimized for alpine skiing and validated the 3D joint angles of the knee, hip, and trunk during alpine skiing in a PLOS ONE paper: https://0-doi-org.libus.csd.mu.edu/10.1371/journal.pone.0181446. These movements consisted of squats, trunk rotations, hip ad/abductions, and upright standing. 
The specific team responsible for designing them (Fasel B, Spörri J, Kröll J, and Aminian K) described the set of calibration movements performed but found videos to be a far more effective way to communicate the technical movements used in their study. They made the visuals available too: https://0-doi-org.libus.csd.mu.edu/10.17504/protocols.io.itrcem6. So Protocols.io deposited the link between video protocol and paper to the Crossref metadata record (full metadata record).\n3. Linking to software and peer reviews The Journal of Open Source Software (JOSS) is an academic journal about high quality research software across broadly diverse disciplines. Sara Mahar works on the effectiveness of organizations funded by the US Department of Housing and Urban Development to combat homelessness. She collaborated with computational physicist Matthew Bellis to create a python tool for researchers to visualize and analyze data from the Homeless Management Information System:https://0-doi-org.libus.csd.mu.edu/10.21105/joss.00384. The software was archived in Zenodo: https://0-doi-org.libus.csd.mu.edu/10.5281/zenodo.13750 and the peer review artifacts were also published. JOSS deposited all these links in the metadata record (found here).\n4. Linking to preprint, data, code, source code, peer reviews Gigascience, published by Oxford University Press, is experimenting with a number of new tools in their mission to promote reproducibility of analyses and data dissemination, organization, understanding, and use. In a recent paper Luo R, Schatz M, and Salzberg S shared the results of the firstly publicly available implementation of variant calling using a 16-genotype probabilistic model for germline variant detection: https://0-doi-org.libus.csd.mu.edu/10.1093/gigascience/gix045. Prior to formal peer review, the group posted the preprint in bioRxiv: https://0-doi-org.libus.csd.mu.edu/10.1101/111393. When the paper was published, the authors made the supporting data available, including snapshots of the test and result data, in a public repository: http://0-dx-doi-org.libus.csd.mu.edu/10.5524/100316. OUP included this data citation in their Crossref metadata record via the routes recommended in our previous blog post about depositing data citations. The researchers made the code available in Github, and the algorithm is ready for researchers to run on Code Ocean, a cloud-based computational reproducibility platform that allows researchers to wrap and encapsulate the data, code, and computation environment linked to an article: https://0-doi-org.libus.csd.mu.edu/10.24433/CO.0a812d9b-0ff3-4eb7-825f-76d3cd049a43. For further transparency, expert reviews of the manuscript from the peer review history were published in Publons: http://0-dx-doi-org.libus.csd.mu.edu/10.5524/review.100737 and http://0-dx-doi-org.libus.csd.mu.edu/10.5524/review.100738. (As of last month, publishers can register peer reviews at Crossref). The full metadata record contains links to the entire set of materials listed above.\n5. Linking to preprint, Code, Docker hub, video, reviews Narechania A, Baker R, DeSalle R, et al. used bird flocking behavior to design an algorithm, Clusterflock, for optimizing distance-based clusters in orthologous gene families that share an evolutionary history. Their paper was published in Gigascience last year: https://0-doi-org.libus.csd.mu.edu/10.1186/s13742-016-0152-3. Supporting data, code snapshots and video were published in GigaDB: http://0-dx-doi-org.libus.csd.mu.edu/10.5524/100247. 
Code was maintained in GitHub. And authors also created a Docker application for Clusterflock, a lightweight, stand-alone, executable package of the software which includes everything needed to run it: code, runtime, system tools, system libraries, settings (Docker Hub link here). They created a video demo of the algorithm. Publons reviews were published http://0-dx-doi-org.libus.csd.mu.edu/10.5524/review.100507 and http://0-dx-doi-org.libus.csd.mu.edu/10.5524/review.100508. Gigascience shared all these assets in their publication, including the link to the original bioRxiv preprint: https://www.biorxiv.org/content/early/2016/03/25/045773). The full metadata record containing these links can be found here.\nThe Research Nexus: better research through better metadata These five are just a few exemplary cases showing how publishers are declaring the relationships between their publications and other associated artifacts to support reproducibility and discoverability of their content. We welcome you to check out our overview of relationships between DOIs and other materials for more information. Members who are enriching your publishing pipeline in similar ways, please register these links to make your reach go further. We also welcome everyone to retrieve these relations in our REST API (technical documentation).\n", "headings": ["1. Linking to an entire collection of methods","2. Linking to video protocol","3. Linking to software and peer reviews","4. Linking to preprint, data, code, source code, peer reviews","5. Linking to preprint, Code, Docker hub, video, reviews","The Research Nexus: better research through better metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/services/event-data/terms/", "title": "Event Data terms of use", "subtitle":"", "rank": 1, "lastmod": "2017-11-10", "lastmod_ts": 1510272000, "section": "Find a service", "tags": [], "description": "Version 1.0\nWho are we? Crossref is a not-for-profit membership organization providing certain technology services to the scholarly community on a noncommercial basis.\nWhat is Crossref Event Data? Crossref Event Data [the “Service”] is a Crossref service which provides a unique record of the relationship between web activity and specific scholarly content items (e.g. a journal article which has been referenced on a Wikipedia page). The Service is a hub for the collection, storage and distribution of data concerning these relationships.", "content": "Version 1.0\nWho are we? Crossref is a not-for-profit membership organization providing certain technology services to the scholarly community on a noncommercial basis.\nWhat is Crossref Event Data? Crossref Event Data [the “Service”] is a Crossref service which provides a unique record of the relationship between web activity and specific scholarly content items (e.g. a journal article which has been referenced on a Wikipedia page). The Service is a hub for the collection, storage and distribution of data concerning these relationships. When a relationship is observed between a Crossref-registered content item and a specific web activity, the data is expressed in the Service as an “Event”. An “Event” is the record of a relationship between an item of registered content and a specific activity. The Service provides Events from a variety of web sources. Each web source is referred to as a “Data Contributor”. A “Data Contributor” means the location where a relationship was observed. 
The Events, and all original data from the Data Contributor, are available in the Service via an API.\nWhat are the Terms of Use for Crossref Event Data? To facilitate data access and maximal reuse, all Events provided in the Service are tagged with a license that is conformant with the principles laid out in the Open Definition. This ensures that data made available in the Service can be made publicly available and reused, in accordance with the conditions of the specific license associated with each Event. To this end, the service will only include data from Data Contributors who can provide this to the service via a license conformant with the Open Definition.\nConsumers of the Service must ensure they abide by the conditions set out in the license tagged to each Event. In addition, consumers must also ensure to comply with any conditions set out in both the Data Contributor’s Privacy Policy as well as the Crossref Privacy Policy.\nThe Crossref Event Data service will respect the restrictions provided in robots.txt to ensure the service does not follows links when directed not to by these files. In addition, like search engines, Crossref runs software which visits websites. Events from our Newsfeeds, Reddit Links and Web sources are derived in this way.\nData from all Data Contributors is made available in the Service via the Creative Commons CC-0 1.0 license waiver, with the exception of the following Data Contributors listed below:\nData Contributor Terms of Use / License Crossref Metadata Made available without restriction Cambia Lens Creative Commons CC-BY-SA 4.0 Stack Exchange Network Creative Commons CC-BY 4.0 Privacy In addition to the license associated with each Event, consumers of the Service must also ensure to comply with any conditions set out in the both the Crossref Privacy Policy as well as the Data Contributor’s own Privacy Policy. It’s a Service requirement that you adhere to the conditions stated in these policies. Every Event we provide via an API will contain the link to the Crossref Terms of Use page, where all the information and links you need regarding use and reuse will be available.\nWhere there are specific privacy policies of which Crossref is aware, that are associated with a particular Data Contributor, they are listed below.\nData Contributor Privacy Policy Cambia Lens Cambia Lens Privacy Policy Crossref Metadata Crossref Privacy Policy DataCite Metadata DataCite Privacy Policy Hypothes.is Hypothesis Privacy Policy Reddit Reddit Privacy Policy Stack Exchange Network Stack Exchange Privacy Policy Twitter Twitter TOS Wikipedia Wikimedia Foundation Privacy Policy Wordpress.com Automattic Privacy Policy Need to contact us? Please email us at eventdata@crossref.org with any questions or feedback.\n", "headings": ["Who are we?","What is Crossref Event Data?","What are the Terms of Use for Crossref Event Data?","Privacy","Need to contact us?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-transparent-record-of-life-after-publication/", "title": "A transparent record of life after publication", "subtitle":"", "rank": 1, "lastmod": "2017-11-01", "lastmod_ts": 1509494400, "section": "Blog", "tags": [], "description": "Crossref Event Data and the importance of understanding what lies beneath the data. Some things in life are better left a mystery. 
There is an argument for opaqueness when the act of full disclosure only limits your level of enjoyment: in my case, I need a complete lack of transparency to enjoy both chicken nuggets and David Lynch films. And that works for me. But metrics are not nuggets. Because in order to consume them, you really need to know how they’re made.", "content": "Crossref Event Data and the importance of understanding what lies beneath the data. Some things in life are better left a mystery. There is an argument for opaqueness when the act of full disclosure only limits your level of enjoyment: in my case, I need a complete lack of transparency to enjoy both chicken nuggets and David Lynch films. And that works for me. But metrics are not nuggets. Because in order to consume them, you really need to know how they’re made. Knowing the provenance of data, along with the context with which it was derived, provides everyone with the best chance of creating indicators which are fit for purpose. This is just one of the reasons why we built the Event Data infrastructure with transparency in mind.\nThe transparency problem For the scholarly community, alternative metrics to citation count (‘altmetrics’) are becoming increasingly popular as they can offer rich and expedited insight into today’s diverse and dynamic research environment. Research artifacts undergo an extended life online as they’re linked, shared, saved and discussed in forums both within and beyond the traditional academic ecosystem. Data on these interactions are initially fragmented and buried within platforms like social media, blogs and news sites. Downstream, there are several value-add services that collate and present that data as a single, aggregated count. We see individual data points like ‘paper X was tweeted 22 times’, and ‘paper X is referenced 16 times on Wikipedia’ being combined, homogenised, weighted and expressed as a single figure, a calculated number serving as a proxy for value. But altmetrics alone don\u0026rsquo;t tell the whole story, and how they are calculated is not without idiosyncrasy or politics. As we each have our own unique voice and perspective, we need to ensure we understand the lenses through which these metrics are made in order to consume them effectively.\nThe 2015 Metric Tide report highlighted transparency as one of the five dimensions of responsible metrics. Having access to the context used to create a metric — the provenance of the original data as well as full transparency around its extraction, processing and aggregation — helps consumers to use the data meaningfully and allows for comparison across third-party vendors. But transparency is difficult to achieve when, as the report notes, the systems and infrastructure for collecting and curating altmetrics-style data are fragmented and have limited interoperability.\nIn the academic community, underlying centralised systems include ORCIDs to identify people and DOIs to identify items. But we’re missing a transparent, centralised infrastructure for describing and recording the relationships between objects and resources1. These relationships, or links, occur outside publisher platforms and can provide valuable information about the interconnectivity and dissemination of research. Dedicated infrastructure for collecting these relationships would provide a data source for those interested in altmetrics to build upon.\nFigure 1.1 Example of some relationships between articles and activity on the web\nAt Crossref, we call these relationships Events. 
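(An illustrative aside, not from the original post: one way to picture a single Event as a subject-verb-object record, in the spirit of the triple described below. The field names are assumptions loosely modelled on the Event Data API rather than its definitive schema; the DOI is the demo record used in the Labs example earlier on this page, and the Wikipedia page is hypothetical.)

# Hedged sketch only: field names are assumptions, not the authoritative schema.
example_event = {
    'subj_id': 'https://en.wikipedia.org/wiki/Psychoceramics',   # where the activity was observed (hypothetical page)
    'relation_type_id': 'references',                            # the verb linking subject and object
    'obj_id': 'https://0-doi-org.libus.csd.mu.edu/10.5555/12345678',               # the registered content item (a demo DOI)
    'source_id': 'wikipedia',                                    # the Data Contributor
    'occurred_at': '2017-11-01T00:00:00Z',                       # when the relationship was observed
}
print(example_event['subj_id'], example_event['relation_type_id'], example_event['obj_id'])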
An Event is the record of a claim made about the existence of a relationship between a registered content item (i.e. a DOI) and a specific activity on the web. Events include:\na DataCite dataset DOI contains a link to a Crossref article DOI\nan article was referenced in Wikipedia\nan article was mentioned on Twitter\nan article has a Hypothes.is annotation\na blog contains a link to an article\nIn collaboration with DataCite, we are collecting Events for the DOIs registered with our organisations and are making that data available for others in the community to use. This is the Event Data infrastructure, with which we’re plugging the gap in open scholarly relationships infrastructure.\nThe Event Data infrastructure Crossref and DataCite have for many years provided a centralised location for bibliographic metadata and links, and a facility to help our members register Persistent Identifiers (DOIs) for their content. With nearly 100 million DOIs registered with Crossref, we know where research lives. Which got us thinking — could we use these links to find out more about the journey research undertakes after publication? Could we express these interactions as links without any aggregation or counts so the data could be maximally reused? And if so, could we then provide this data in an open, centralised, structured format? The answer was yes, subject to some challenges:\nQuerying for individual DOIs wasn’t scalable for our full corpus of 100 million items, so we had to find something else. Not everyone uses the DOI link (not a surprise!). Most people will link directly to the publisher’s site. This means we need to look for links using both the DOI and article landing page URLs. When we find people referring to registered content using its landing page, we find the DOI for that content item so that the link can be referenced in our data set in a stable, link-rot-proof way. We don’t always know the article landing page URL for every DOI upfront because, like many relationships, the one between DOIs and URLs is complicated. We began by asking the wrong questions and as a result we got the wrong type of data back: instead of returning a record of individual actions, we were returning aggregated counts. Aside from not meeting our use case, aggregation requires the curation of an ever-churning dataset in order to keep totals updated, which is not scalable for the number of DOIs in our corpus.\nWe soon learnt to ask the right questions. One pivotal change in approach was that instead of asking for counts, we asked ‘what activity is happening on Twitter for this article?’. Our data went from ‘DOI X was mentioned 20 times on Twitter as of this date’ to ‘tweet X mentions DOI X on this date’. The data are now represented as a subject-verb-object triple:\nFigure 1.2 Triple table.\nUltimately this has allowed us to represent actions like Wikipedia page edits as individual atomic actions (i.e. an Event) rather than as a dataset that changes over time.\nBeing open about the provenance of altmetrics with Event Data Crossref Event Data (the Crossref-specific service powered by the shared Event Data infrastructure) has evolved beyond a link store to become a continual stream of Events; each Event tells a new part of the story. Rather than constantly updating an Event whenever a new action takes place, we add a new one instead:\nFigure 1.3 A Wikipedia Event.\nEvents answer a whole range of questions, such as:\nWhat links to what? 
How was the link made? Which Agent collected the Event? Which data source? When was the link observed? When do we think the link actually happened? What algorithms were used to collect it? Where’s the evidence? We’re collecting data from a diverse range of platforms including Twitter, Wikipedia, blogs and news sites, Reddit, StackExchange, Wordpress.com and Hypothes.is. This means that when we observe a link in these platforms to what we think is a DOI, we create an Event and a corresponding Evidence Record to represent our observation. We also have Events to represent the links between research items registered with Crossref and DataCite - for example, when a Crossref DOI cites a DataCite DOI and vice versa.\nThe provenance of the data is fully transparent and is made available to everyone via an open API. We call this the evidence trail. The record of each link (‘Events’) as well as the corresponding evidence can then be used to feed into tools for impact measurement, discoverability, collaboration and network analysis.\nTherefore, one application of Event Data is as an underlying, transparent data source for altmetrics calculations. For example, you might want to know the total number of times your paper has been mentioned on Twitter to date. If I told you that the number was 22, what does that actually mean? Do you know whether I counted both tweets and retweets? Do you consider both of these actions as equal? Is the sentiment of the tweet important to you? Was it a human or a bot that initiated a tweet? Are you interested in tweets containing links to multiple representations of your paper or do you only want to track mentions of your version of record (the final published copy)? With Event Data as your underlying data source, you can answer these questions.\nNot only transparent in data, transparent by design The National Information Standards Organisation (NISO), a US organisation responsible for technical standards for publishing, bibliographic and library applications, has developed a set of recommendations for transparency in their Alternative Assessment Metrics Project report, as well as a Code of Conduct for both altmetric practitioners and aggregators that aims to help improve the quality of altmetrics data. The working groups recognised that without transparency and conforming to a recognised standard, altmetric indicators \u0026ldquo;are difficult to assess, and thus may be seen as less reliable for purposes of measuring influence or evaluation\u0026rdquo;2.\nCrossref Event Data is one of the example altmetric data providers listed in the NISO recommendations. My colleague Joe Wass participated in the development and specification of the NISO \u0026ldquo;Altmetrics Recommended Practices on Data Metrics, Alternative Outputs, and Persistent Identifiers\u0026rdquo; at the same time as we were working with DataCite on Event Data, so they have mutually informed one another.\nFigure 1.4 Martin Fenner (DataCite) and Joe Wass (Crossref) drawing plans for the Event Data infrastructure.\nThe outcome of our involvement in the NISO recommendations is that Crossref Event Data is a service that is transparent by design. We have opened up our entire extraction and processing workflow so that we can clearly demonstrate the context and environment that was used to generate an Event. This evidence is a core component of our transparency-first principle.\nBuilding services on Event Data There are some really exciting ways that people are already using Event Data, and we’re still only in beta. 
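To make this concrete, here is a minimal sketch of how a consumer might pull the Events for a single DOI from the open API and read each one as a subject-relation-object triple. The endpoint, parameter names and field names below follow the Event Data User Guide as we understood it at the time of writing; treat them as illustrative and check the current guide before relying on them.

import requests

# Illustrative query: the DOI and contact address are placeholders.
EVENTS_API = "https://api.eventdata.crossref.org/v1/events"
params = {
    "obj-id": "10.5555/12345678",   # the registered content item we are interested in
    "mailto": "you@example.org",    # polite self-identification
    "rows": 100,
}

response = requests.get(EVENTS_API, params=params, timeout=30)
response.raise_for_status()

for event in response.json()["message"]["events"]:
    # Each Event is one observation: subject, relation, object, plus provenance.
    print(event["subj_id"], event["relation_type_id"], event["obj_id"],
          event["source_id"], event["occurred_at"])

Whether you then count those rows, weight them, or discard some of them (bots, retweets, landing-page links versus DOI links) is a decision you make in the open, rather than one baked invisibly into a single score.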
Our aim has always been to create an open, portable, transparent data set that can be used by our diverse community including researchers, application developers, publishers, funders and third-party service providers. We have already seen data from our service used in recent research studies, impact reports and even a front-end tool. Launched recently as a prototype, ImpactStory’s Paperbuzz.org uses Event Data as one of its data sources for tracking the online buzz around scholarly articles. Jason Priem, cofounder of ImpactStory, notes:\n\u0026ldquo;Because Crossref Event Data is completely open data, we believe it\u0026rsquo;s a game-changer for altmetrics. Our latest project, Paperbuzz.org, is just the first of a whole constellation of upcoming tools that will add value on top of Crossref\u0026rsquo;s open data.\u0026rdquo;\nWe are working towards launching Crossref Event Data as a production service. In the meantime though, please do take a look at our comprehensive User Guide. Hopefully you’ll be inspired to go make something cool using the data! Events are being collected constantly; take a look below as they stream in from our data sources or visit our live stream demo site to watch in real time.\nFigure 1.5 Screen capture of Crossref Event Data live stream demo.\nAs the service matures, we’ll continue to add new platforms to track and I also encourage anyone with article link data to get in touch to discuss how we can share it with the community via Event Data.\nFor researchers in particular, I’m really keen to hear your thoughts on our data model and about the things we could additionally provide you with from an infrastructure perspective that would best support your research needs.\nAnd if you’re a publisher, take a look at our Event Data best practice guidelines — there’s some really important information in there about how you can help give us the best chance possible of collecting Events for your registered content.\nAnd finally, if you’re a consumer of altmetrics data, I encourage you to ask questions. Ask your altmetrics vendors about how they gather their data and what context they apply to the aggregation of the metrics they supply. Ask yourself what behaviours you are interested in tracking and equally those you are not. Think about the endgame; about the type of impact you’re truly trying to measure and the story you want to tell. Because it’s these questions that will help you choose indicators that are the best fit for your own unique narrative.\nThis content is cross-posted on eLife Labs.\nReferences\n1 Bilder, Geoffrey; Lin, Jennifer; Neylon, Cameron (2015): What exactly is infrastructure? Seeing the leopard\u0026rsquo;s spots. Retrieved: Oct 16, 2017; https://0-doi-org.libus.csd.mu.edu/10.6084/m9.figshare.1520432.v1\n2 NISO, Outputs of the NISO Alternative Assessment Metrics Project. 
Retrieved: 6th October 2017; https://www.niso.org/publications/rp-25-2016-altmetrics , p.2.\n", "headings": ["Crossref Event Data and the importance of understanding what lies beneath the data.","The transparency problem","The Event Data infrastructure","Not only transparent in data, transparent by design","Building services on Event Data"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/madeleine-watson/", "title": "Madeleine Watson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/transparency/", "title": "Transparency", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/wikipedia/", "title": "Wikipedia", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/liam-finnis/", "title": "Liam Finnis", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/meet-the-members-part-1-with-oxfam/", "title": "Meet the members, Part 1 (with Oxfam)", "subtitle":"", "rank": 1, "lastmod": "2017-10-30", "lastmod_ts": 1509321600, "section": "Blog", "tags": [], "description": "Introducing our new blog series Meet the members; where we talk to some of our members and find out a little bit more about them, ask them to share how they use our services, and discuss what their plans for the future are. To start the series we talk to Liam Finnis of Oxfam.\n", "content": "Introducing our new blog series Meet the members; where we talk to some of our members and find out a little bit more about them, ask them to share how they use our services, and discuss what their plans for the future are. 
To start the series we talk to Liam Finnis of Oxfam.\n", "headings": ["Can you tell us a little bit about Oxfam?","What’s your role within Oxfam?","What’s your participation level?","Tell us a bit about what you publish and for whom","What do you think makes your publications unique?","What trends are you seeing in your part of the scholarly publishing community?","How would you describe the value of being a Crossref member?","What are Oxfam\u0026rsquo;s plans for the future?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/peer-reviews-are-open-for-registering-at-crossref/", "title": "Peer reviews are open for registering at Crossref", "subtitle":"", "rank": 1, "lastmod": "2017-10-24", "lastmod_ts": 1508803200, "section": "Blog", "tags": [], "description": "About 13-20 billion researcher-hours were spent in 2015 doing peer reviews. What valuable work! Let\u0026rsquo;s get more mileage out of these labors and make these expert discussions citable, persistent, and linked up to the scholarly record. As we previously shared during Peer Review week, Crossref is introducing support for a new record type for the registration of peer reviews. We’re one step closer to making that happen. Today, we are excited to announce that we’re open for deposits.\n", "content": "About 13-20 billion researcher-hours were spent in 2015 doing peer reviews. What valuable work! Let\u0026rsquo;s get more mileage out of these labors and make these expert discussions citable, persistent, and linked up to the scholarly record. As we previously shared during Peer Review week, Crossref is introducing support for a new record type for the registration of peer reviews. We’re one step closer to making that happen. Today, we are excited to announce that we’re open for deposits.\nIf you missed the first episode, here’s a recap:\nPublishers have been registering reviews with us for a while (ex: Example 1, Example 2, and Example 3). But these have been shoehorned into other content: article, dataset, or component. So we are extending Crossref’s infrastructure to properly treat this special scholarly artifact. This includes a range of outputs made publicly available from the peer review history (referee reports, decision letters, author responses, community comments) across any and all review rounds. We welcome scholarly discussions of journal articles before or after publication (e.g. “post-publication reviews”).\nWe collect metadata that characterizes the peer review asset (for example: recommendation, type, license, contributor info, competing interests). We also collect metadata which offers a view into the review process (e.g. pre/post-publication, revision round, review date).\nThis special set will support the discovery and investigation of peer reviews as it is linked up to the article discussed. 
It will also enable the following:\nEnable tracking of the evolution of scholarly claims through the lineage of expert discussion Support enrichment of scholarly discussion Enable reviewer accountability Credit reviewers and editors for their scholarly contribution Support publisher transparency Connect reviews to the full history of the published results Provide data for analysis and research on peer review Please come check out our documentation for more information.\nAs publishers are implementing this, we are finishing up the delivery of this metadata for machine and human access, across all the Crossref interfaces (REST API, OAI-PMH, Crossref Metadata Search) to enable discoverability across the research ecosystem. We are also working to make it possible for members to get Cited-by data for the peer reviews they register.\nIf you are interested in registering your peer review content with us, please get in touch.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-happened-at-last-months-live-local-in-london/", "title": "What happened at last month’s LIVE local in London", "subtitle":"", "rank": 1, "lastmod": "2017-10-22", "lastmod_ts": 1508630400, "section": "Blog", "tags": [], "description": "So much has happened since we held LIVE16 (our annual meeting) in London last year that we wanted to check-in with our UK community and share the year’s developments around our tools, teams and services ahead of LIVE17 next month in Singapore.\n", "content": "So much has happened since we held LIVE16 (our annual meeting) in London last year that we wanted to check-in with our UK community and share the year’s developments around our tools, teams and services ahead of LIVE17 next month in Singapore.\nAnd so, on 26th September we held a half-day \u0026lsquo;LIVE local\u0026rsquo;, covering a wide range of strategic topics, well-attended by a diverse representation of our UK community of publishers, funders, researchers, and tool-makers.\nWhat we discussed on the day:\nEd Pentz, Crossref\u0026rsquo;s Executive Director, kicked the day off with \u0026lsquo;What’s new at Crossref\u0026rsquo; Geoffrey Bilder, Strategic Director, talked us through \u0026lsquo;Crossref\u0026rsquo;s Strategic Initiatives\u0026rsquo; Ginny Hendricks, Director of Member and Community Outreach introduced \u0026lsquo;Metadata 2020\u0026rsquo; Rachael Lammey, Head of International Outreach discussed the \u0026lsquo;Global reach of Crossref metadata\u0026rsquo; Jure Triglav from Coko Foundation presented some interesting \u0026lsquo;Metadata Use Case Studies\u0026rsquo; Jennifer Lin, Director of Product Management, spoke about Crossref\u0026rsquo;s \u0026lsquo;New Product Developments\u0026rsquo; Ed Pentz concluded the day leading a discussion on \u0026lsquo;Crossref\u0026rsquo;s Future Direction\u0026rsquo; This event was one in a series of smaller, regional events which aim to better cater to our global membership and provide a tailored program of activities. You can read more about this series of events on our LIVE locals page, and if you are interested in hosting an event near you or have suggestions for one in your region then please contact me to get involved.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/celebrating-orcid-at-five/", "title": "Celebrating ORCID at five", "subtitle":"", "rank": 1, "lastmod": "2017-10-16", "lastmod_ts": 1508112000, "section": "Blog", "tags": [], "description": "Happy birthday, ORCID! 
It\u0026rsquo;s their fifth birthday today and it\u0026rsquo;s gratifying to me\u0026mdash;as a founding board member and former Chair of the board\u0026mdash;to see how successful it has become. ORCID has a great staff, over 700 members from 41 countries and is quickly approaching 4 million ORCID iDs. Crossref\u0026mdash;its board, staff, and members\u0026mdash;has been an ORCID supporter from the start. One example of this support is that we seconded Geoffrey Bilder to be ORCID\u0026rsquo;s interim CTO for about eight months.\n", "content": "Happy birthday, ORCID! It\u0026rsquo;s their fifth birthday today and it\u0026rsquo;s gratifying to me\u0026mdash;as a founding board member and former Chair of the board\u0026mdash;to see how successful it has become. ORCID has a great staff, over 700 members from 41 countries and is quickly approaching 4 million ORCID iDs. Crossref\u0026mdash;its board, staff, and members\u0026mdash;has been an ORCID supporter from the start. One example of this support is that we seconded Geoffrey Bilder to be ORCID\u0026rsquo;s interim CTO for about eight months.\nActually, Crossref has been involved with ORCID even before the start.\nORCID\u0026rsquo;s birthday recognizes when the registry went live in 2012, but the origins of what became ORCID stretch back to a meeting that Crossref organized back in February 2007 on \u0026ldquo;Author IDs\u0026rdquo;. After this meeting there were many follow-on discussions, but it was clear that as an association of scholarly publishers Crossref didn\u0026rsquo;t have suitable governance for a researcher identifier registry, which needed support from a broader group of stakeholders.\nSubsequent discussions between Nature and Thomson Reuters (represented by Howard Ratner and Dave Kochalko) led\u0026mdash;after many more meetings\u0026mdash;to ORCID being set up as a new organization. ORCID was incorporated in September 2010 and the first meeting of the board of directors of ORCID was on October 8th, 2010.\nA lot of people and organizations have contributed to getting ORCID to where it is today and it\u0026rsquo;s been great to be a part of it and continue to contribute to their future.\nReflecting on the creation of ORCID: it has shown the power of collaboration in improving scholarly research, and in making life easier and better for researchers.\nToday they celebrate in a number of fun ways and, in particular, mark the occasion with the release of a new set of educational resources.\nFrom everyone in the Crossref community, here\u0026rsquo;s to ORCID\u0026rsquo;s continuing success!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/agreements/", "title": "Agreements", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/changes-to-the-2018-membership-agreement-for-better-metadata-distribution/", "title": "Changes to the 2018 membership agreement for better metadata distribution", "subtitle":"", "rank": 1, "lastmod": "2017-10-09", "lastmod_ts": 1507507200, "section": "Blog", "tags": [], "description": "We are making a change to section 9b of the standard Crossref membership agreement which will come into effect on January 1, 2018. 
This will not change how members register content, nor will it affect membership fees in any way. The new 2018 agreement is on our website, and the exact wording changes are highlighted below. The new membership agreement will automatically replace the previous version from January 1, 2018 and members will not need to sign a new agreement.\n", "content": "We are making a change to section 9b of the standard Crossref membership agreement which will come into effect on January 1, 2018. This will not change how members register content, nor will it affect membership fees in any way. The new 2018 agreement is on our website, and the exact wording changes are highlighted below. The new membership agreement will automatically replace the previous version from January 1, 2018 and members will not need to sign a new agreement.\nWhat’s changing? At its July meeting the Crossref board unanimously approved recommendations from the Membership and Fees Committee to update Crossref’s metadata delivery offerings. One of the recommendations was to remove the option for case-by-case opt outs of metadata delivery through the OAI-PMH channel used for Enhanced Crossref Metadata Services.\nThis opt-out was only used by a small number of our members (around 40 of nearly 9,000), who have been contacted directly. This means that for the vast majority of members there is no change in how Crossref makes their metadata available but we wanted to make everyone aware of the change to the membership agreement.\nSo, as is currently the case, all metadata registered with Crossref is available via all the Metadata APIs under an appropriate agreement with the user or terms and conditions for the service. The one exception to this is how references are distributed - we will contact members next week about the options for references.\nWhy are we making this change? Our metadata services have become very popular with users of all kinds throughout scholarly communications\u0026ndash;including search and discovery platforms, libraries, other publishers, reference managers, sharing services, and analytics providers. More and better metadata means more and better discoverability of publisher content. The change also brings this service into line with our mission to improve scholarly communications through quality metadata and related infrastructure services, removing the need for bilateral agreements between publishers and third parties. Many members complained when we contacted them about opt-outs whenever a new OAI-PMH user came on board. It is better for our members and for our staff if there is a common standard across the board. Changes to 2018 membership agreement 9) Sharing of Metadata by PILA\na) Local Hosting. [no change]\nb) Other Metadata Services. Subject to compliance by the entity receiving the Metadata and Digital Identifiers with the terms and conditions set forth in a separate agreement between established by PILA for the particular service through which access is provided, and the entity receiving the Metadata and Digital Identifiers, PILA may license authorize third parties to receive and use bulk deliveries of Metadata and Digital Identifiers from the PILA System from members who have chosen to participate in Metadata Services, which PILA shall provide directly to such third parties. 
At least thirty (30) days prior to making such Metadata delivery PILA will notify each PILA Member whose Metadata and Digital Identifiers are intended to be included in such delivery of the anticipated delivery date, the identity of the third party and the purpose for which the delivery is being made. Metadata and Digital Identifiers belonging to any PILA Member who notifies PILA in writing prior to the specified delivery date of its desire to be excluded from such delivery will be excluded or removed from such delivery.\nPlease contact our membership specialist if you have any feedback or questions.\n", "headings": ["What’s changing?","Why are we making this change?","Changes to 2018 membership agreement"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/policy/", "title": "Policy", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/ulf-kronman/", "title": "Ulf Kronman", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-6-with-nls/", "title": "Using the Crossref REST API. Part 6 (with NLS)", "subtitle":"", "rank": 1, "lastmod": "2017-10-06", "lastmod_ts": 1507248000, "section": "Blog", "tags": [], "description": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Ulf Kronman, Bibliometric Analyst at the National Library of Sweden about the work they’re doing, and how they’re using our REST API as part of their workflow.\n", "content": "Continuing our blog series highlighting the uses of Crossref metadata, we talked to Ulf Kronman, Bibliometric Analyst at the National Library of Sweden about the work they’re doing, and how they’re using our REST API as part of their workflow.\nIntroducing the National Library of Sweden (NLS) The NLS is a state agency with a staff of about 320 and its main offices in Stockholm. Its primary duty is to preserve the Swedish cultural heritage by collecting everything printed in Sweden, and it has been doing so since 1661. Nowadays the library also collects Swedish TV and radio programs, movies, videos, music, and computer games.\nThe National Library coordinates services and programs for all publicly funded libraries in Sweden and runs the national library catalogue system Libris and the national database for Swedish scholarly output, SwePub. The library also runs the Bibsam consortium, negotiating national subscription licenses and open access publishing agreements with publishers.\nImages left to right: External and internal view of the National Library of Sweden, and Ulf Kronman, Bibliometric Analyst at NLS.\nWhat problem is your service trying to solve? The metadata in the national scholarly publication database SwePub is harvested from the Swedish universities\u0026rsquo; local publication systems, where data is often entered manually by librarians and researchers. This means that the metadata can contain a lot of omissions, synonyms, spelling variants and errors. 
Using Crossref, we can enhance and correct the metadata delivered to us, if we just have a correct DOI.\nCan you tell us how you are using Crossref metadata at the National Library of Sweden? The Crossref metadata is presently used in two projects: Open APC Sweden and our local analysis database for publication statistics used in negotiations with publishers.\nOpen APC Sweden is a pilot project to gather data on open access publication costs (APCs – Article Processing Charges) from Swedish universities. The project is modelled on the German Bielefeld University Open APC initiative, which is a part of the INTACT project. After APC data has been delivered to the APC system, scripts are run against the Crossref API to fetch information about publishers and journals. A description of Open APC Sweden can be found here.\nWhen building our local analysis database for publisher statistics, we download data from the SwePub database, use the Crossref DOIs for API lookup against Crossref to add correct ISSN and publisher data to the records and then match the records against a list of publisher serials. In this way, we can get information about how much Swedish researchers have been publishing with a certain publisher and use this data when negotiating conditions for open access publishing with the publisher in question.\nWhat metadata values do you pull from the API? In Open APC Sweden, a Python script supplied by staff at the Bielefeld University is used to pull metadata about publisher and journal names and ISSNs from the Crossref API. The result is entered into an enriched version of the APC data files delivered by the universities and then statistics can be calculated on the result using an R script. The result can be seen here.\nIn the local analysis database, a modified copy of the Bielefeld Python script is used to add the same metadata to the records before matching them against publisher serial ISSNs.\nHave you built your own interface to extract this data? In Open APC Sweden, the Python script is developed and maintained at the Bielefeld University and an exact copy is being run in the Swedish project.\nIn the local analysis system, the Python script is somewhat modified to suit the special demands of this system.\nBut sometimes it is very convenient just to use the main DOI lookup to do a manual check-up of problematic records.\nHow often do you extract/query data? In Open APC Sweden, usually about two to three times a month, when new datasets are delivered from the universities. In the local analysis database, lookups are usually done on a daily basis as development of the database continues.\nWhat do you do with the metadata once it’s pulled from the API? In Open APC Sweden, the metadata goes into the APC data files for processing of statistics. In the local analysis database, the metadata is used to match against publisher journal ISSNs.\nWhat plans do you have for the future? For Open APC Sweden I would like to build a database system to make the system more scalable than just working with flat data files.\nWith both the SwePub system and the local analysis system, we are now using the new service oaDOI and their API to look up metadata about the open access status of the publications to enrich our local systems.
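The kind of lookup described in these answers is straightforward to reproduce. Below is a minimal sketch of fetching the publisher name, journal title and ISSNs for one DOI from the public Crossref REST API; the DOI and contact address are placeholders, and the Bielefeld scripts mentioned above do considerably more than this.

import requests

# Placeholder DOI; in the SwePub workflow this would come from each harvested record.
doi = "10.5555/12345678"
url = f"https://api.crossref.org/works/{doi}"

resp = requests.get(url, params={"mailto": "you@example.org"}, timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# Values used to enrich and correct the local records.
publisher = work.get("publisher")
journal = (work.get("container-title") or [None])[0]
issns = work.get("ISSN", [])

print(publisher, journal, issns)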
What else would you like to see the REST API offer? In the process of normalising the publishers\u0026rsquo; names, the names returned are sometimes too high-level or too generic to be used to generate good statistics. For instance, Springer Nature are sometimes returned as Springer Nature, sometimes as Springer Science + Business Media and sometimes as Nature Publishing Group. The same is true for Taylor \u0026amp; Francis, where the parent company Informa UK Limited is returned instead of the publishing subsidiary of the company. One thing to wish for here is that we could agree on some kind of normalisation of the publishers\u0026rsquo; names and that Crossref could return this as a supplement to the present metadata.\nThanks Ulf! If you would like to contribute a case study on the uses of Crossref Metadata APIs please contact the Community team.\n", "headings": ["Introducing the National Library of Sweden (NLS)","What problem is your service trying to solve?","Can you tell us how you are using Crossref metadata at the National Library of Sweden?","What metadata values do you pull from the API?","Have you built your own interface to extract this data?","How often do you extract/query data?","What do you do with the metadata once it’s pulled from the API?","What plans do you have for the future?","What else would you like to see the REST API offer?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/publishers-help-us-capture-events-for-your-content/", "title": "Publishers, help us capture Events for your content", "subtitle":"", "rank": 1, "lastmod": "2017-10-02", "lastmod_ts": 1506902400, "section": "Blog", "tags": [], "description": "The day I received my learner driver permit, I remember being handed three things: a plastic thermosealed reminder that age sixteen was not a good look on me; a yellow L-plate sign as flimsy as my driving ability; and a weighty ‘how to drive’ guide listing all the things that I absolutely must not, under any circumstances, even-if-it-seems-like-a-really-swell-idea-at-the-time, never, ever do.\n", "content": "The day I received my learner driver permit, I remember being handed three things: a plastic thermosealed reminder that age sixteen was not a good look on me; a yellow L-plate sign as flimsy as my driving ability; and a weighty ‘how to drive’ guide listing all the things that I absolutely must not, under any circumstances, even-if-it-seems-like-a-really-swell-idea-at-the-time, never, ever do.\nThe margin space dedicated to finger-wagging left little room for championing any driving-do’s. And as each page delivered a fresh new warning, my enthusiasm for hitting the road sank to levels usually reserved for activities like trigonometry and visits to my orthodontist.\nMany years (and an excellent driving record) later, I’m reminded of this again now when thinking about our own Event Data User Guide. Because it contains a chapter with some really important don\u0026rsquo;ts for our members. Really good, we’d-love-you-to-consider-not-doing-these-things type of advice. But despite our intent to encourage, I feel the ghost of finger-waggers past. 
So in the spirit of championing enthusiasm over ennui, I thought I’d attempt to contextualise our Event Data Best Practices Guide for Publishers and show you why there are a lot of good reasons for publishers to be enthusiastic about these rules.\nSo if you’re a publisher, I encourage you to read on to learn more about how you can help us have the best chance possible of capturing Events for your content.\nWhat\u0026rsquo;s in it for you? Well, collecting this data helps to give everyone (Crossref, yourself, and others) a better picture of how your content is being used, including for altmetrics. 1. Please let us in Please do open the door when we come knocking; we promise not to stay long. You can do this by allowing the User Agent CrossrefEventDataBot to visit your site, and whitelisting it if necessary. The bot is how we visit URLs to confirm if they are for an item of content registered with us. The reasons why we might be visiting your site include:\nsomeone tweeted an article landing page\nsomeone discussed it on Reddit\nit was linked to from a blog post\nThe Bot has only one job: to work out the DOI. No information beyond this is stored. Whenever we become aware of a link that we think points to a DOI or an Article Landing Page, we follow it so we can collect the required metadata. Everything in Crossref Event Data is linked via its DOI, so it\u0026rsquo;s important that we can collect this information.\nThe bot will identify itself using the standard method. It sets two headers:\nReferer: https://0-eventdata-crossref-org.libus.csd.mu.edu\nUser-Agent: CrossrefEventDataBot (eventdata@crossref.org)\nOnce we confirm that a link points to registered content, we then log an Event for the DOI. You should expect our bot to visit no more than once or twice per second, although if there is a period of activity around your articles, you may see higher rates. The bot also takes a sample of DOIs and visits them to work out which domain names belong to our members, so it can maintain a list. This can happen every few weeks. You may see a small number of requests from the bot, but limited to one per second.\nIf we can’t enter your site to look for metadata though, then we won’t be able to collect Events for your DOIs. So by allowing our bot, you will be helping us to collect Event Data for your registered content.\nIf you’re worried about traffic on your site, consider sending us your mapping of article landing pages to DOIs. Because Resource URLs aren\u0026rsquo;t the same as article landing pages, we need more information than the DOI Resource URLs that you already send us.\nIf you’re running a blog or website (and you’re not a member of Crossref), you may also see our bot visiting, to look for links that comprise Events. Please allow us to visit, so we can record in our Event Data service the fact that your website links to registered content.\n2. We ❤️ robots.txt Robots.txt files are important and we ensure our Event Data Bot respects yours. If we are instructed not to visit a site, we won\u0026rsquo;t. So if you want us to visit your site in order to check the metadata of your article landing page, please ensure you provide an exception for our Bot, or make sure that you’re not blocking it. Check the restrictions in your file to see if we’re allowed to visit. This is just another way you can help us work for you.
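If you want to check what your own robots.txt file is telling the bot, a quick test against the user agent quoted above is enough. Here is a minimal sketch using the Python standard library; the domain and landing page URL are placeholders for your own site.

from urllib import robotparser

# Placeholder domain: substitute your own site.
robots = robotparser.RobotFileParser()
robots.set_url("https://www.example-publisher.com/robots.txt")
robots.read()

# The user agent the Event Data bot identifies itself with (see the headers above).
landing_page = "https://www.example-publisher.com/articles/10.5555/12345678"
print("CrossrefEventDataBot allowed:", robots.can_fetch("CrossrefEventDataBot", landing_page))

If this prints False, the bot will respect that and stay away, which also means no Events will be collected for the content on those pages.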
3. Include the DC Identifier Including good metadata is general best practice for scholarly publishing. When we visit a publisher’s site, we look for metadata embedded in the HTML document (such as DC.Identifier tags that, amongst other things, enable Crossmark to work).\nBy ensuring you include a Dublin Core identifier meta tag in each of your article pages, our system can match your landing pages back to DOIs.\nHere’s an example: a meta tag in the page header with the name DC.Identifier and, as its content, the DOI of the article on that page (for instance 10.5555/12345678).\n4. Let us in, even if we don’t bring cookies We’re like that friend who turns up for dinner without bringing a bottle of wine. And we hope that you’ll be ok with that. Some Publisher sites don\u0026rsquo;t allow browsers to visit unless cookies are enabled and they block visitors that don\u0026rsquo;t accept them. If your site does this, we will be unable to collect Events for your DOIs. Allowing your site to be accessed without cookies will help give us the best chance of successfully reading your metadata.\n5. We may not speak your language Sometimes we come across a publisher’s site that won’t render unless JavaScript is enabled. This means that the site won’t show any content to browsers that don\u0026rsquo;t execute JavaScript. The Event Data Bot does not execute JavaScript when looking for a DOI. This means that if your site requires JavaScript, then we will be unable to collect DOIs for your Events. Consider allowing your site to be accessed without JavaScript. And if this is not possible, then as long as you include the DC.Identifier meta tag in the static HTML header, we’ll do our best to collect Events for your registered content.\nIf you want to pass this on to your friendly system administrator, the best practice is documented in full here: https://0-www-eventdata-crossref-org.libus.csd.mu.edu/guide/best-practice/publishers-best-practice/. And sorry about all the don’ts you’ll find on that page… don’t let them curb your enthusiasm for taking Event Data out for a spin!\n", "headings": ["1. Please let us in","2. We ❤️ robots.txt","3. Include the DC Identifier","4. Let us in, even if we don’t bring cookies","5. We may not speak your language"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/bestblogsread/", "title": "BestBlogsRead", "subtitle":"", "rank": 1, "lastmod": "2017-09-26", "lastmod_ts": 1506384000, "section": "Blog", "tags": [], "description": "We know that research communication happens everywhere, and we want your help in finding it!\nFrom October 9th we will be collecting links sent in by you through a social campaign across Twitter and Facebook called #BestBlogsRead.\nSimply send us links to the blogs YOU like to read It’s easy to participate: all you have to do is watch out for the daily tweets and facebook posts and then send us links to the blogs (and news sites) you read.", "content": "We know that research communication happens everywhere, and we want your help in finding it!\nFrom October 9th we will be collecting links sent in by you through a social campaign across Twitter and Facebook called #BestBlogsRead.\nSimply send us links to the blogs YOU like to read It’s easy to participate: all you have to do is watch out for the daily tweets and facebook posts and then send us links to the blogs (and news sites) you read.\nFrom gardening to gaming, recipes to rock climbing, tennis to taxidermy - whatever blogs you read, we want to hear about them!\nBecause research happens everywhere! 
And you’ll be surprised where it is mentioned - for example:\nWe found a Wiley article mentioned in a blog about the eclipse\nAn American Chemical Society article in a blog about food allergies\nA blog about Neanderthals on The Atlantic links to an article from the American Association for the Advancement of Science\nSo, watch out for the campaign on Twitter and Facebook, and tell us about your #BestBlogsRead.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-at-the-frankfurt-book-fair/", "title": "Crossref at the Frankfurt Book Fair", "subtitle":"", "rank": 1, "lastmod": "2017-09-26", "lastmod_ts": 1506384000, "section": "Blog", "tags": [], "description": "We’ll be at booth M82 in the Hotspot area of Hall 4.2 and would love to meet with you. Let us know if you’re interested in chatting with one of us - about anything at all.\n", "content": "We’ll be at booth M82 in the Hotspot area of Hall 4.2 and would love to meet with you. Let us know if you’re interested in chatting with one of us - about anything at all.\nKirsty Meddings, Product Manager: Here to help with Crossref services such as Crossmark and funding data, and happy to talk about your metadata and how you can deposit more.\nPaul Davis, Support Specialist: Any issues with metadata deposit, or anything technical, I’m your man.\nSusan Collins, Publisher Outreach Manager: If you’re a member and have questions about how things are going, or want to try out additional services, I can help.\nJennifer Kemp, Affiliate Outreach Manager: Come to me if you want to get Metadata from Crossref, or discuss our imminent new service for social mentions and data links: Event Data (in Beta).\nGinny Hendricks, Member \u0026amp; Community Outreach Director: I’d love to talk to publishers and platforms about the new Metadata 2020 initiative.\nAmanda Bartell, Head of Member Experience: This will be my first day at Crossref! If there is something you’d like the Membership team to do or change, please let me know.\nChrissie Cormack-Wood, Head of Marketing Communications: I’ll be acting as \u0026ldquo;host\u0026rdquo; so ask me anything about our booth and activities at the Fair. Ideas for joint campaigns or co-promotion are welcome too.\nIf some of these topics are on your agenda, or if you’re not sure who to contact, please let me know and I’ll set up a 30-minute meeting at our booth, M82 in Hall 4.2.\nAnd, if you don’t get a chance to visit us at our stand, make sure you don’t miss Ginny’s Metadata 20/20 talk at 2.30pm on Wednesday 11th, at the Hot Spot stage in the corner of Hall 4.2, area N99. 
We hope you have a great Book Fair!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/altmetrics/", "title": "Altmetrics", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/blogs/", "title": "Blogs", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-as-underlying-altmetrics-infrastructure-at-the-4am-altmetrics-conference/", "title": "Event Data as Underlying Altmetrics Infrastructure at the 4:AM Altmetrics Conference", "subtitle":"", "rank": 1, "lastmod": "2017-09-25", "lastmod_ts": 1506297600, "section": "Blog", "tags": [], "description": "I\u0026rsquo;m here in Toronto and looking forward to a busy week. Maddy Watson and I are in town for the 4:AM Altmetrics Conference, as well as the altmetrics17 workshop and Hack-day. I\u0026rsquo;ll be speaking at each, and for those of you who aren\u0026rsquo;t able to make it, I\u0026rsquo;ve combined both presentations into a handy blog post, which follows on from my last one.\nBut first, nothing beats a good demo. Take a look at our live stream.", "content": "I\u0026rsquo;m here in Toronto and looking forward to a busy week. Maddy Watson and I are in town for the 4:AM Altmetrics Conference, as well as the altmetrics17 workshop and Hack-day. I\u0026rsquo;ll be speaking at each, and for those of you who aren\u0026rsquo;t able to make it, I\u0026rsquo;ve combined both presentations into a handy blog post, which follows on from my last one.\nBut first, nothing beats a good demo. Take a look at our live stream. This shows the Events passing through Crossref Event Data, live, as they happen. You may need to wait a few seconds before you see anything.\nCrossref and scholarly links You may know about Crossref. If you don\u0026rsquo;t, we are a non-profit organisation that works with Publishers (getting on for nine thousand) to register scholarly publications, issue Persistent Identifiers (DOIs) and maintain the infrastructure required to keep them working. If you don\u0026rsquo;t know what a DOI is, it\u0026rsquo;s a link that looks like this:\nhttps://doi.org/10.5555/12345678\nWhen you click on that, you\u0026rsquo;ll be taken to the landing page for that article. If the landing page moves, the DOI can be updated so you\u0026rsquo;re taken to the right place. This is why Crossref was created in the first place: to register Persistent Identifiers to combat link rot and to allow Publishers to work together and cite each other\u0026rsquo;s content. A DOI is a single, canonical identifier that can be used to refer to scholarly content.\nNot only that, we combine that with metadata and links. Links to authors via ORCIDs, references and citations via DOIs, funding bodies and grant numbers, clinical trials\u0026hellip; the list goes on. 
All of this data is provided by our members and most of it is made available via our free API.\nBecause we are the central place where publishers register their content, and we\u0026rsquo;ve got approaching 100 million items of Registered Content, we thought that we could also curate and collect altmetrics-type data for our corpus of publications. After all, a reference from a Tweet to an article is a link, just like a citation between two articles is a link.\nAn Experiment So, a few years back we thought we would try and track altmetrics for DOIs. This was done as a Crossref Labs experiment. We grabbed a copy of PLOS ALM (since renamed Lagotto), loaded a sample of DOIs into it and watched as it struggled to keep up.\nIt was a good experiment, as it showed that we weren\u0026rsquo;t asking exactly the right questions. There were a few things that didn\u0026rsquo;t quite fit. Firstly, it required every DOI to be loaded into it up-front, and, in some cases, for the article landing page for every DOI to be known. This doesn\u0026rsquo;t scale to tens of millions. Secondly, it had to scan over every DOI on a regular schedule and make an API query for each one. That doesn\u0026rsquo;t scale either. Thirdly, the kind of data it was requesting was usually in the form of a count. It asked the question:\n\u0026ldquo;How many tweets are there for this article as of today?\u0026rdquo;\nThis fulfilled the original use case for PLOS ALM at PLOS. But when running it at Crossref, on behalf of every publisher out there, the results raised more questions than they answered. Which was good, because it was a Labs Experiment.\nAsking the right question The whole journey to Crossref Event Data has been a process of working out how to ask the right question. There are a number of ways in which \u0026ldquo;How many tweets are there for this article as of today?\u0026rdquo; isn\u0026rsquo;t the right question. It doesn\u0026rsquo;t answer:\nTweeted by who? What about bots? Tweeted how? Original Tweets? Retweets? What was tweeted? The DOI? The article landing page? Was there extra text? When did the tweet occur? We took one step closer toward the right question. Instead of asking \u0026ldquo;how many tweets for this article are there as of today\u0026rdquo; we asked:\n\u0026ldquo;What activity is happening on Twitter concerning this article?\u0026rdquo;\nIf we record each activity we can include information that answers all of the above questions. So instead of collecting data like this:\nRegistered Content | Source | Count | Date\n10.5555/12345678 | twitter | 20 | 2017-01-01\n10.5555/87654321 | twitter | 5 | 2017-01-15\n10.5555/12345678 | twitter | 23 | 2017-02-01\nWe\u0026rsquo;re collecting data like this:\nSubject | Relation | Object | Source | Date\ntwitter.com/tweet/1234 | references | 10.5555/12345678 | twitter | 2017-01-01\ntwitter.com/tweet/5678 | references | 10.5555/987654321 | twitter | 2017-01-11\ntwitter.com/tweet/9123 | references | 10.5555/12345678 | twitter | 2017-02-06\nNow that we\u0026rsquo;re collecting individual links between tweets and DOIs, we\u0026rsquo;re closer to all the other kinds of links that we store. It\u0026rsquo;s like the \u0026ldquo;traditional\u0026rdquo; links that we already curate except:\nIt\u0026rsquo;s not provided by publishers, we have to go and collect it ourselves.\nIt comes from a very diverse range of places, e.g. Twitter, Wikipedia, Blogs, Reddit, random web pages.\nThe places that the Events come from don\u0026rsquo;t play by the normal rules. Web pages work differently to articles. 
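In code terms, the shift is from rows of running totals to rows of individual observations. Here is a minimal sketch using the same values as the two tables above (the field names are purely illustrative, not the exact Event Data schema):

from collections import Counter

# Count-style data: one mutable total per article and source, which has to be kept up to date.
counts = [
    {"registered_content": "10.5555/12345678", "source": "twitter", "count": 20, "date": "2017-01-01"},
    {"registered_content": "10.5555/87654321", "source": "twitter", "count": 5, "date": "2017-01-15"},
    {"registered_content": "10.5555/12345678", "source": "twitter", "count": 23, "date": "2017-02-01"},
]

# Event-style data: one immutable subject-relation-object observation per interaction.
events = [
    {"subject": "twitter.com/tweet/1234", "relation": "references", "object": "10.5555/12345678", "source": "twitter", "date": "2017-01-01"},
    {"subject": "twitter.com/tweet/5678", "relation": "references", "object": "10.5555/987654321", "source": "twitter", "date": "2017-01-11"},
    {"subject": "twitter.com/tweet/9123", "relation": "references", "object": "10.5555/12345678", "source": "twitter", "date": "2017-02-06"},
]

# Consumers can still derive counts, but on their own terms: filter, weight or deduplicate first.
per_doi = Counter(event["object"] for event in events if event["source"] == "twitter")
print(per_doi)  # Counter({'10.5555/12345678': 2, '10.5555/987654321': 1})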
Non-traditional Publishing is Untraditional This last point caused us to scratch our heads for a bit. We used to collect links within the \u0026rsquo;traditional\u0026rsquo; scholarly literature. Generally, journal articles:\nget published once\nhave a publisher looking after them, who can produce structured metadata\nare subject to a formal process of retractions or updates\nNow we\u0026rsquo;re collecting links between things that aren\u0026rsquo;t seen as \u0026rsquo;traditional\u0026rsquo; scholarship and don\u0026rsquo;t play by the rules.\nThe first thing we found is that blog authors don\u0026rsquo;t reference the literature using DOIs. Instead they use article landing pages. This meant that we had to put in the work to collect links to article landing pages and turn them back into DOIs so that they can be referenced in a stable, link-rot-proof way.\nWhen we looked at Wikipedia we noticed that, as pages are edited, references are added and removed all the time. If our data set reflected this, it would have to evolve over time, with items popping into existence and then vanishing again. This isn\u0026rsquo;t good.\nOur position in the scholarly community is to provide data and infrastructure that others can use to create services, enrich and build things. Curating an ever-changing data set, where things can disappear, is not a great idea and is hard to work with.\nWe realised that a plain old link store (also known as an assertion store, triple store, etc.) wasn\u0026rsquo;t the right approach as it didn\u0026rsquo;t capture the nuance in the data with sufficient transparency. At least, it didn\u0026rsquo;t give the whole picture.\nWe settled on a new architecture, and Crossref Event Data as we now know it was born. Instead of a dataset that changes over time, we have a continual stream of Events, where each Event tells a new part of the story. An Event is true at the time it is published, but if we find new information we don\u0026rsquo;t edit Events, we add new ones.\nAn Event is the way that we tell you that we observed a link. It includes the link, in \u0026ldquo;subject - relation type - object\u0026rdquo; format, but so much more. We realised that one question won\u0026rsquo;t do, so Events now answer the following questions:\nWhat links to what? How was the link made? Was it with an article\u0026rsquo;s DOI or straight to an Article landing page? Which Agent collected it? Which data source were they looking at? When was the link observed? When do we think the link actually happened? What algorithms were used to collect it? How do you know? I\u0026rsquo;ll come back to the \u0026ldquo;how do you know\u0026rdquo; a bit later.\nWhat is an altmetrics Event? So, an Event is a package that contains a link plus lots of extra information required to interpret and make sense of it. But how do we choose what comprises an Event?\nAn Event is created every time we notice an interaction between something we can observe out on the web and a piece of registered content. This simple description gives rise to some interesting quirks.\nIt means that every time we see a tweet that mentions an article, for example, we create an Event. If a tweet mentions two articles, there are two events. That means that \u0026ldquo;the number of Twitter events\u0026rdquo; is not the same as \u0026ldquo;the number of tweets\u0026rdquo;.\nIt means that every time we see a link to a piece of registered content in a webpage, we create an Event. 
The Event Data system currently tries to visit each webpage once, but we reserve the right to visit a webpage more than once. This means that the number of Events for a particular webpage doesn\u0026rsquo;t mean there are that many references.\nWe might go back and check a webpage in future to see if it still has the same links. If it does, we might generate a new set of Events to indicate that.\nBecause of the evolving nature of Wikipedia, we attempt to visit every page revision and document the links we find. This means that if an article has a very active edit history, and therefore a large number of edits, we will see repeated Events to the literature, once for every version of the page that makes references. So the number of Events from Wikipedia doesn\u0026rsquo;t equal the number of references.\nAn Event is created every time we notice an interaction. Each source (Reddit, Wikipedia, Twitter, blogs, the web at large) has different quirks, and you need to understand the underlying source in order to understand the Events.\nWe put the choice into your hands. If you want to create a metric based on counting things, you have a lot of decisions to make. Do you care about bots? Do you care about citation rings? Do you care about retweets? Do you care about whether people use DOIs or article landing pages? Do you care what text people included in their tweet? Answering each of these questions means that you\u0026rsquo;ll have to look at each data point and decide what weighting or score to put on it.\nIf you wanted to measure how blogged about a particular article was, you would have to look at the blogs to work out if they all had unique content. For example, Google\u0026rsquo;s Blogger platform can publish the same blog post under multiple domain names.\nA blog full of link spam is still a blog. You may be doing a study into reputable blogs, so you may want to whitelist the set of domain names to exclude less reputable blogs. Or you may be doing a study into blog spam, so lower-quality blogs are precisely what you\u0026rsquo;re interested in.\nIf you wanted to measure how discussed an article was on Reddit, you might want to go to the conversation and see if people were actually talking about it, or whether it was an empty discussion. You might want to look at the author of the post to see if they were a regular poster, whether they were a bot or an active member of the community.\nIf you wanted to measure how referenced an article was in Wikipedia, you might want to look at the history of each reference to see if it was deleted immediately. Or whether it existed for 50% of the time, and give it a weighting accordingly.\nWe don\u0026rsquo;t do any scoring; we just record everything we observe. We know that everyone will have different needs, be producing different outcomes and use different methodologies. So it\u0026rsquo;s important that we tell you everything we know.\nSo that\u0026rsquo;s an Event. It\u0026rsquo;s not just a link, it\u0026rsquo;s the observation of a link, coupled with extra information to help you understand it.\nHow do you know? But what if the Event isn\u0026rsquo;t enough? To come back to the earlier question, \u0026ldquo;how do you know?\u0026rdquo;\nEvents don\u0026rsquo;t exist in isolation. Data must be collected and processed. Each Agent in Crossref Event Data monitors a particular data source and feeds data into the system, which goes and retrieves webpages so it can make observations. 
Things can go wrong.\nAny one of these things might prevent an Event from being collected:\nWe might not know about a particular DOI prefix immediately after it\u0026rsquo;s registered. We might not know about a particular landing page domain for a new member immediately. Article landing pages might not have the right metadata, so we can\u0026rsquo;t match them to DOIs. Article landing pages might block the Crossref bot, so we can\u0026rsquo;t match DOIs. Article landing pages might require cookies, or convoluted JavaScript, so the bot can\u0026rsquo;t get the content. Blogs and webpages might require cookies or JavaScript to execute. Blogs might block the Event Data bot. A particular API might have been unavailable for a period of time. We didn\u0026rsquo;t know about a particular blog newsfeed at the time. This is a fact of life, and we can only operate on a best-effort basis. If we don\u0026rsquo;t have an Event, it doesn\u0026rsquo;t mean it didn\u0026rsquo;t happen.\nThis doesn\u0026rsquo;t mean that we just give up. Our system generates copious logs. It details every API call it made, the response it got, every scan it made, every URL it looked at. This amounts to about a gigabyte of data per day. If you want to find out why there was no Wikipedia data at a given point in time, you can go back to the log data and see what happened. If you want to see why there was no Event for an article by publisher X, you can look at the logs and see, for example, that Publisher X prevented the bot from visiting.\nEvery Event that does exist has a link to an Evidence Record, which corresponds with the logs. The Evidence Record tells you:\nwhich version of the Agent was running which Artifacts and versions it was working from which API requests were made which inputs looked like possible links which matched or failed which Events were generated Artifacts are versioned files that contain information that Agents use. For example, there\u0026rsquo;s a list of domain names, a list of DOI prefixes, a list of blog feed urls, and so on. By indicating which version of these Artifacts were used, we can explain why we visited a certain domain and not another.\nAll the code is open source. The Evidence Record says which version of each Agent was running so you can see precisely which algorithms were used to generate the data.\nBetween the Events, Evidence Records, Evidence Logs, Artifacts and Open Source software, we can pinpoint precisely how the system behaved and why. If you have any questions about how a given Event was (or wasn\u0026rsquo;t) generated, every byte of explanation is freely available.\nThis forms our \u0026ldquo;Transparency first\u0026rdquo; idea. We start the whole process with an open Artifact Registry. Open source software then produces open Evidence Records. The Evidence Record is then consulted and turned into Events. All the while, copious logs are being generated. We\u0026rsquo;ve designed the system to be transparent, and for each step to be open to inspection.\nWe\u0026rsquo;re currently in Beta. 
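If you want to try this for yourself, a minimal sketch for pulling Events for a DOI from the public query API and reading the evidence record link might look like the following. The base URL and the obj-id parameter are taken from the Event Data user guide, but treat the exact parameter and field names as something to check against the current documentation rather than a specification.

```python
import requests

EVENT_DATA_API = "https://api.eventdata.crossref.org/v1/events"

def events_for_doi(doi, rows=100):
    """Fetch Events whose object is the given DOI.

    Parameter names (obj-id, rows) and the response shape
    (message -> events) follow the Event Data user guide; verify them
    against the live documentation before relying on them.
    """
    response = requests.get(
        EVENT_DATA_API,
        params={"obj-id": f"https://doi.org/{doi}", "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["message"]["events"]

for event in events_for_doi("10.5555/12345678"):   # placeholder DOI
    # Each Event carries a link back to the Evidence Record that explains how
    # it was generated (the exact key name may differ), so "how do you know?"
    # is always answerable.
    print(event["source_id"], event["relation_type_id"], event.get("evidence_record"))
```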
We have over thirty million Events in our API, and they\u0026rsquo;re just waiting for you to use them!\nHead over to the User Guide and get stuck in!\nIf you are in Toronto, come and say hi to Maddy or me.\n", "headings": ["Crossref and scholarly links","An Experiment","Asking the right question","Non-traditional Publishing is Untraditional","What is an altmetrics Event?","We put the choice into your hands.","How do you know?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/organization-identifier-working-group-update/", "title": "Organization Identifier Working Group Update", "subtitle":"", "rank": 1, "lastmod": "2017-09-18", "lastmod_ts": 1505692800, "section": "Blog", "tags": [], "description": "About 1 year ago, Crossref, DataCite and ORCID [announced a joint initiative] (https://orcid.org/blog/2016/10/31/organization-identifier-project-way-forward) to launch and sustain an open, independent, non-profit organization identifier registry to facilitate the disambiguation of researcher affiliations. Today we publish governance recommendations and product principles and requirements for the creation of an open, independent organization identifier registry and invite community feedback.\n", "content": "About 1 year ago, Crossref, DataCite and ORCID [announced a joint initiative] (https://orcid.org/blog/2016/10/31/organization-identifier-project-way-forward) to launch and sustain an open, independent, non-profit organization identifier registry to facilitate the disambiguation of researcher affiliations. Today we publish governance recommendations and product principles and requirements for the creation of an open, independent organization identifier registry and invite community feedback.\nThe Organization Identifier (OrgID) Working Group was established as a joint effort by Crossref, DataCite and ORCID in January 2017. 
The members of the group bring a broad range of experience and perspectives, including expertise in research data discovery, data management, persistent identifiers, economics research, funding, archiving, non-profit membership organizations, academia, publishing, and metadata development.\nThe Working Group was charged with refining the structure, principles, and technology specifications for an open, independent, non-profit organization identifier registry to facilitate the disambiguation of researcher affiliations.\nThe group has been working in three interdependent areas: Governance, Registry Product Definition, and Business Model \u0026amp; Funding, and today releases for public comment its findings and recommendations for governance and product requirements.\nGovernance Recommendations - https://0-doi-org.libus.csd.mu.edu/10.23640/07243.5402002.v1\nProduct Principles and Recommendations - https://0-doi-org.libus.csd.mu.edu/10.23640/07243.5402047.v1 We invite your feedback!\nPlease send comments by October 15th, 2017.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/request-for-community-comment/", "title": "Request for Community Comment", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/pidapalooza-is-back-and-wants-your-pid-stories/", "title": "PIDapalooza is back and wants your PID stories", "subtitle":"", "rank": 1, "lastmod": "2017-09-14", "lastmod_ts": 1505347200, "section": "Blog", "tags": [], "description": "Now in its second year, this “open festival of persistent identifiers” brings together people from all walks of life who have something to say about PIDs. If you work with them, develop with them, measure or manage them, let us know your PID adventures, pitfalls, and plans by submitting a talk by September 18. It\u0026rsquo;ll be in Girona, Spain, January 23-24, 2018.\n", "content": "Now in its second year, this “open festival of persistent identifiers” brings together people from all walks of life who have something to say about PIDs. If you work with them, develop with them, measure or manage them, let us know your PID adventures, pitfalls, and plans by submitting a talk by September 18. It\u0026rsquo;ll be in Girona, Spain, January 23-24, 2018.\nOne of the great strengths of last year’s PIDapalooza was the number of people who spoke and all the conversations that were kindled. So if you\u0026rsquo;re thinking of going, we encourage you to propose a talk, so we can hear what you\u0026rsquo;re working on and you can get some feedback.\nAt the inaugural PIDapalooza event Crossref took to the stage twice, with Ed Pentz covering Org IDs and Joe Wass talking about Event Data.\nHere we have Joe’s memories of the event and Ed’s update on the Org ID status.\nJoe Wass reflects: At Crossref, the subject of Persistent Identifiers is something we care deeply about, and linking between DOIs, ORCID iDs and other identifiers is the reason we get up in the morning. But a whole conference dedicated to them? If I\u0026rsquo;m honest, the first time I heard about PIDapalooza I thought the subject was rather niche.\nHow wrong I was. 
It turns out there are people from all walks of life who care about \u0026ldquo;things\u0026rdquo; using persistent identifiers to link, describe and reference them. There was a great balance between presenters and attendees, and the programme meant that lots of people had a chance to speak. We heard about identifiers for research vessels, pieces of scientific equipment, individual bottles of milk, plus the usual subjects like scholarly publishing, datasets, organisations and funders, and how to cite them.\nBetween sessions we chatted over a wide range of subjects, noted similarities between subject areas, offered advice and exchanged ideas. Who knew this stuff was all related?\nEd Pentz on plans for the new Organization IDs An important presentation at the 2016 PIDapalooza meeting was on organization identifiers. A week before the conference Crossref, DataCite and ORCID released three documents for public comment outlining a proposed way forward. The goal is launch and sustain an open, independent, non-profit organization identifier registry to facilitate the disambiguation of researcher affiliations. At the packed PIDapalooza session Crossref, DataCite and ORCID gave an update on their work over the previous year and their proposals going forward.\nThere was a lively discussion and debate about the issues. Following the meeting the three organizations set up the OI Project Working Group with a broad group of stakeholders. The group has been meeting over the last year and will release two documents next week - a set of Governance Recommendations and Product Principles and Recommendations for community feedback. So watch this space.\nThe PIDapalooza conference really helped galvanize the work in this area by bringing together a broad range of people interested in persistent identifiers. If you have an idea about PIDs, please come and tell us about it.\nCheck out the decks from last year's talks, the PIDapalooza website with all the info, and sumbit a proposal for your talk before September 18.\n", "headings": ["Joe Wass reflects:","Ed Pentz on plans for the new Organization IDs"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/making-peer-reviews-citable-discoverable-and-creditable/", "title": "Making peer reviews citable, discoverable, and creditable", "subtitle":"", "rank": 1, "lastmod": "2017-09-11", "lastmod_ts": 1505088000, "section": "Blog", "tags": [], "description": "A number of our members have asked if they can register their peer reviews with us. They believe that discussions around scholarly works should have DOIs and be citable to provide further context and provenance for researchers reading the article. To that end, we can announce some pertinent news as we enter Peer Review Week 2017 : Crossref infrastructure is soon to be extended to manage DOIs for peer reviews. Launching next month will be support for this new resource/record type, with schema specifically dedicated to the reviews and discussions of scholarly content.\n", "content": "A number of our members have asked if they can register their peer reviews with us. They believe that discussions around scholarly works should have DOIs and be citable to provide further context and provenance for researchers reading the article. To that end, we can announce some pertinent news as we enter Peer Review Week 2017 : Crossref infrastructure is soon to be extended to manage DOIs for peer reviews. 
Launching next month will be support for this new resource/record type, with a schema specifically dedicated to the reviews and discussions of scholarly content.\nNot dissimilar to other registered resources (datasets, working papers, preprints, translations, etc.), publication peer reviews are important scholarly contributions in their own right and form a part of the scholarly record. In addition to the members who have been registering them, many more are looking to better handle these contributions and give recognition to this process which is so critical to maintaining scientific quality.\nHere are a few examples of existing Crossref DOIs for peer reviews: https://0-doi-org.libus.csd.mu.edu/10.1016/j.engfracmech.2015.01.019 and https://0-doi-org.libus.csd.mu.edu/10.5194/wes-1-177-2016 and https://0-doi-org.libus.csd.mu.edu/10.14322/PUBLONS.R518142.\nWe are extending our infrastructure to support all members who make these scholarly discussions available to readers. To accommodate a wide range of publisher practices, this will include a range of outputs made publicly available from the peer review history, across any and all review rounds, including referee reports, decision letters, and author responses. Members will be able to include scholarly discussions of journal articles not only before but also after publication (e.g. “post-publication reviews”).\nCentral to this new feature of the Crossref Content Registration service is the special set of metadata dedicated to supporting the discovery and investigation of peer reviews as it is linked up to the article discussed. The peer review schema will provide a characterization of the peer review asset (for example: recommendation, type, license, contributor info, competing interests) as well as offer a view into the review process (e.g. pre/post-publication, revision round, review date).\nOur custom support for peer reviews will ensure that: Readers can see provenance and get context of a work Links to this content persist over time The metadata is useful They are connected to the full history of the published results Contributors are given credit for their work (we will ask for ORCID iDs) The citation record is clear and up-to-date. As with all the content registered with Crossref, we will make peer review metadata available for machine and human access, across multiple interfaces (e.g. REST API, OAI-PMH, Crossref Metadata Search) to enable discoverability across the research ecosystem. This metadata may also support enrichment of scholarly discussion, reviewer accountability, publishing transparency, analysis or research on peer reviews, and so on.\nTo reflect the nature of this special content, we will bundle the fees for peer review content into the cost of registering the article for members who publish the journal article and its peer reviews. No matter how many reviews are associated with a paper, there will be a fixed fee for the full set.\nPeer review infrastructure will arrive at Crossref in one month, and we are excited to engage our members who want to assign DOIs to peer reviews or migrate previously registered review content to the new schema.
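Once registered, peer review records become visible through the same open interfaces as everything else. As a rough sketch of what that could look like through the REST API: the peer-review type filter and the "review" and "relation"/"is-review-of" fields read below are assumptions about how these records are exposed, based on how other record types appear in the API, not a published specification.

```python
import requests

REST_API = "https://api.crossref.org/works"

def sample_peer_reviews(rows=5):
    """Fetch a few works registered with the peer-review record type.

    The filter value and the field names used below ("review", "relation",
    "is-review-of") are assumptions; verify them against the live API
    before building on this sketch.
    """
    response = requests.get(
        REST_API,
        params={"filter": "type:peer-review", "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]

for item in sample_peer_reviews():
    review = item.get("review", {})                      # e.g. stage, type, recommendation
    reviewed = item.get("relation", {}).get("is-review-of", [])
    print(item["DOI"], review.get("stage"), [r.get("id") for r in reviewed])
```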
A special thanks to the members so far who have given feedback and advice to develop the schema: BMC, The BMJ, Copernicus, eLife, PeerJ, and Publons.\nPlease contact our membership specialist if you\u0026rsquo;d like to know more.\n", "headings": ["Our custom support for peer reviews will ensure that:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/more-metadata-for-machines-citations-relations-and-preprints-arrive-in-the-rest-api/", "title": "More metadata for machines-citations, relations, and preprints arrive in the REST API", "subtitle":"", "rank": 1, "lastmod": "2017-09-11", "lastmod_ts": 1505088000, "section": "Blog", "tags": [], "description": "Over the past few months we have been adding to the metadata and functionality of our REST API, Crossref’s public machine interface for the metadata of all 90 million+ registered content items. Much of the work focused on a review and upgrade of the API’s code and architecture in order to better support its rapidly growing usage. But we have also extended the types of metadata that the API can deliver.\n", "content": "Over the past few months we have been adding to the metadata and functionality of our REST API, Crossref’s public machine interface for the metadata of all 90 million+ registered content items. Much of the work focused on a review and upgrade of the API’s code and architecture in order to better support its rapidly growing usage. But we have also extended the types of metadata that the API can deliver.\nOne of the biggest changes is that references are now available if the publisher has made them public (a simple email instruction to us). Currently 45% of all publications with deposited references are now accessible. For example:\nThis article studying fluid ejection from animals has 55 references and they are all in the metadata here. You can also see that the article has an is-referenced-by count of 6.\nThis article exploring whether people bitten by their cat are more likely to develop depression has 142 references and is referenced by 12.\nWe recently announced that we would be accepting preprints, and the metadata for 15,000 preprints registered to date is now in the API, labelled as posted-content. Over 4,000 have been subsequently published in a journal, and the Crossref metadata now links these preprints to their respective articles (and vice versa). For example this article in Biorxiv has since been published in a journal, and this relationship is recorded in its metadata as is-preprint-of.\nAlso new to the API: Cited-by counts - the number of times each work has been referenced by other content registered with us. Look for is-referenced-by-count within a record.\nThis article from 1953 about a fairly notable discovery has been cited 4832 times, but the two most cited articles both have over 100,000 citations and thousands have been cited more than Watson and Crick.\nAbstracts for over 1 million works.\nSimilarity Check URLs\u0026ndash;the ones that Turnitin crawl to add content to the database\u0026ndash;are now showing so that participating publishers can check that they are including them in their metadata deposits.\nSubject categories have been added for an additional 7000 journal titles, taking the total number of classified titles to ~45,000.\nAre you already using our Metadata APIs for your system or project? 
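If you want to poke at these fields yourself, here is a minimal sketch that pulls a single work's record and reads the reference list and the is-referenced-by-count described above. The endpoint and field names reflect the public REST API; the DOI is a placeholder.

```python
import requests

def work_metadata(doi):
    """Fetch the registered metadata for one DOI from the Crossref REST API."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    response.raise_for_status()
    return response.json()["message"]

work = work_metadata("10.5555/12345678")            # placeholder DOI
references = work.get("reference", [])              # present if the member has made references public
print("deposited references:", len(references))
print("cited by:", work.get("is-referenced-by-count"))
print("abstract present:", "abstract" in work)
```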
We’re always keen to hear new use cases and happy to answer any questions.\nYou may need to install a JSON viewer extension in your browser to render API queries in a human-friendly way.\n", "headings": ["Also new to the API:"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/david-shotton/", "title": "David Shotton", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/silvio-peroni/", "title": "Silvio Peroni", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-5-with-opencitations/", "title": "Using the Crossref REST API. Part 5 (with OpenCitations)", "subtitle":"", "rank": 1, "lastmod": "2017-09-10", "lastmod_ts": 1505001600, "section": "Blog", "tags": [], "description": "As part of our blog post series on the Crossref REST API, we talked to Silvio Peroni and David Shotton of OpenCitations (OC) about the work they’re doing, and how they’re using the Crossref REST API as part of their workflow.\n", "content": "As part of our blog post series on the Crossref REST API, we talked to Silvio Peroni and David Shotton of OpenCitations (OC) about the work they’re doing, and how they’re using the Crossref REST API as part of their workflow.\nIntroducing OpenCitations\nOpenCitations employs Semantic Web technologies to create an open repository of the citation data that publishers have made available. This repository, called the OpenCitations Corpus (OCC), contains RDF-based scholarly citation data that are made freely available so that others may use and build upon them. All the resources published by OC – namely the data within the OCC, the ontologies describing the data, and the software developed to build the OCC – are available to the public with open licenses.\nWhat problem is your service trying to solve?\nOC was started to address the lack of RDF-based open citation data. To our knowledge, when the project formally started with Jisc funding in 2010 the prototype OCC was the first RDF-based dataset of open citation data.\nWe collect accurate scholarly citation data derived from bibliographic references harvested from the scholarly literature, so as to make them available under a Creative Commons public domain dedication (CC0) by means of Semantic Web technologies, thus making them findable, accessible, interoperable, and re-usable, as well as structured, separable, and open.\nOCC citation data are described using standard and/or well-known vocabularies, including the SPAR Ontologies , PROV-O, the Data Catalog Vocabulary, and VoID. 
The use of such vocabulary is described in the OCC metadata document, and is implemented by means of the OpenCitations Ontology (OCO).\nThe OCC resources are made available and accessible in different ways, so as to facilitate their reuse in different contexts: as monthly dumps, via the SPARQL endpoint, and by accessing them directly by means of the HTTP URIs of the stored resources (via content negotiation; example)\nCan you tell us how you are using the Crossref Metadata API at OpenCitations?\nAt present, basic citation information is retrieved from PubMed Central, and the Crossref API is then used to retrieve additional metadata describing the citing and cited articles, and to disambiguate bibliographic resources and agents by means of the identifiers retrieved (e.g., DOI, ISSN, ISBN, URL, and Crossref member URL). In future, we will retrieve full citation data direct from Crossref.\nWhat metadata values do you pull from the API?\nWe pull the titles, subtitles, identifiers (e.g. DOI, ISSN, ISBN, URL, and Crossref member URL), author list, publisher, container resources (issue, volume, journal, book, etc.), publication year and pages.\nHave you built your own interface to extract this data?\nThe SPAR Citation Indexer, a.k.a. SPACIN, is a script and a series of Python classes that allow one to process particular JSON files containing the bibliographic reference lists of papers, produced from the PubMed Central API by another script included in the OpenCitations GitHub repository.\nSPACIN processes such JSON files and retrieves additional metadata information about all the citing and cited articles by querying the Crossref API, among others. Once SPACIN has retrieved all these metadata, RDF resources are created (or reused, if they have been already added in the past) and stored in the file system in JSON-LD format. In addition, they are also uploaded to the OCC triplestore (via the SPARQL UPDATE protocol).\nHow often do you extract/query data?\nThe entire OpenCitations ingestion workflow is running continuously, processing about half a million citations per month.\nWhat do you do with the metadata once it’s pulled from the API?\nAll the metadata relevant to bibliographic entities are stored by using the OCC metadata model. The ontological terms of such metadata model are collected within an ontology called the OpenCitations Ontology (OCO), which includes several terms from the SPAR Ontologies and other vocabularies. In particular, the following six bibliographic entity types occur in the datasets created by SPACIN:\nbibliographic resources (br), class fabio:Expression – resources that either cite or are cited by other bibliographic resources (e.g. journal articles), or that contain such citing/cited resources (e.g. journals);\nresource embodiments (re), class fabio:Manifestation – details of the physical or digital forms in which the bibliographic resources are made available by their publishers;\nbibliographic entries (be), class biro:BibliographicReference – literal textual bibliographic entries occurring in the reference lists of bibliographic resources;\nresponsible agents (ra), class foaf:Agent – names of agents having certain roles with respect to the bibliographic resources (i.e. names of authors, editors, publishers, etc.);\nagent roles (ar), class pro:RoleInTime – roles held by agents with respect to the bibliographic resources (e.g. author, editor, publisher);\nidentifiers (id), class datacite:Identifier – external identifiers (e.g. 
DOI, ORCID, PubMedID) associated with bibliographic resources and agents.\nDo you have plans to enhance your metadata input?\nWe already handle additional information, such as ORCIDs, that are extracted by means of the ORCID API applied to the citing and cited articles included in the OCC. In addition, we are developing scripts in order to use all the new citation data Crossref now makes available as a consequence of the Initiative for Open Citations (I4OC).\nWhat are the future plans for OpenCitations?\nWith funding received from the Alfred P. Sloan Foundation, we will shortly extend the current infrastructure and increase the rate of data ingest. Our immediate goal is to increase the rate of citation data ingestion from about half a million citations per month to about half a million citations per day. In addition, we plan to analyse the OCC so as to understand the quality of its current data, and to develop new user interfaces, including graph visualizations of citation networks, that will expand the means whereby users can interact with the OpenCitations data.\nWhat else would you like to see our REST API offer?\nCategorising articles/journals/any bibliographic resources according to their main discipline (Computer Science, Biology, etc.) and, eventually, by means of subject terms and/or keywords. Additionally, provision of authors\u0026rsquo; institutional affiliations and funder information would be extremely valuable.\nThank you Silvio and David!\nIf you are keen to share what you\u0026rsquo;re doing with our Metadata APIs, contact feedback@crossref.org and share your story.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/live17-in-singapore-is-taking-shape/", "title": "LIVE17 in Singapore is taking shape!", "subtitle":"", "rank": 1, "lastmod": "2017-08-29", "lastmod_ts": 1503964800, "section": "Blog", "tags": [], "description": "Our annual meeting on 14th and 15th November, LIVE17 is shaping up nicely with an exciting line-up of respected speakers talking around the theme of “Metadata + Infrastructure + Relations = Context”, with each half day covering some element of the main theme.\n", "content": "Our annual meeting on 14th and 15th November, LIVE17 is shaping up nicely with an exciting line-up of respected speakers talking around the theme of “Metadata + Infrastructure + Relations = Context”, with each half day covering some element of the main theme.\nDay one, AM: Metadata enables connections Day one, PM: How research and infrastructure is changing Day two, AM: Social challenges in the scholarly community Day two, PM: Who is using your metadata and what are they doing with it? This year’s updated format means both days will be packed with a mixture of plenary and breakout sessions and interactive activities. A cocktail reception with entertainment will be held in the Grand Marquee on the first evening.\nA comprehensive agenda of the two-day event will be available shortly, but in the meantime we’ve provided a few talk teasers from six of our plenary speakers to whet your appetite:\nSpeaker Title and Organization Talk title Theodora Bloom Executive Editor, The BMJ Preparing to handle dynamic scholarly content: Are we ready? Casey Greene Assistant Professor, Perelman School of Medicine, University of Pennsylvania Research and literature parasites in a culture of sharing. Leonid Teytelman Co-founder and CEO, Protocols.io A call to reduce random collisions with information; we can automatically connect scientists to the knowledge that they need.
Nicholas Bailey Data Science Team, Royal Society of Chemistry What does data science tell us about social challenges in scholarly publishing? Miguel Escobar Varela Assistant Professor of Theatre Studies, National University of Singapore Digital Humanities in Singapore: some thoughts for the future. Kuansan Wang Managing Director, Microsoft Research Outreach Democratize access to scholarly knowledge with AI. Theodora Bloom - Preparing to handle dynamic scholarly content: Are we ready? Historically, journals might expect a few \u0026lsquo;Letters to the Editor\u0026quot; to discuss \u0026lsquo;matters arising\u0026rsquo; after an article was published. But scholarly communications are becoming much more dynamic, with versions posted as \u0026lsquo;preprints\u0026rsquo; before publication, corrections after publication, and potentially multiple versions of the same study appearing at different times. How should we handle this changing landscape for the benefits of researchers and consumers of the literature?\nAbout Theodora Bloom Theodora Bloom has been executive editor of The BMJ since June 2014. She has a PhD in developmental cell biology from the University of Cambridge and worked as a postdoctoral fellow at Harvard Medical School. She moved into publishing as an editor on the biology team at Nature, and in 1992 joined the fledgling journal Current Biology. After a number of years helping to develop Current Biology and its siblings Structure and Chemistry \u0026amp; Biology, Theo joined the beginnings of the open access movement. As the founding editor of Genome Biology she was closely involved in the birth of the commercial open access publisher BioMed Central. She joined the non-profit open access publisher Public Library of Science (PLOS) in 2008, first as chief editor of PLOS Biology and later as biology editorial director. She took the lead for PLOS on issues around data access and availability and launched PLOS\u0026rsquo;s data sharing policy. At The BMJ she is responsible for operations, delivering the journal online and in print.\nCasey Greene - Research and literature parasites in a culture of sharing. Casey has been a strong champion of preprints and will discuss his efforts in this area including resources that he has shared to help advance the spread of preprints not only amongst researchers but publishers. These include letters to respond to journals that invite reviews but have unclear preprint policies. His lab members have also analyzed the licensing of preprints and the coverage of literature provided by the pirate repository, Sci-Hub. His talk will touch on each of these areas, and also a discussion of the Research Parasite and Symbiont Awards, which aim to advance recognition for data sharing and reuse.\nAbout Casey Greene Casey is an assistant professor in the Department of Systems Pharmacology and Translational Therapeutics in the Perelman School of Medicine at the University of Pennsylvania and the director of the Childhood Cancer Data Lab for Alex\u0026rsquo;s Lemonade Stand Foundation. His lab develops deep learning methods that integrate distinct large-scale datasets to extract the rich and intrinsic information embedded in such integrated data. 
Before starting the Integrative Genomics Lab in 2012, Casey earned his PhD for his study of gene-gene interactions in the field of computational genetics from Dartmouth College in 2009 and moved to the Lewis-Sigler Institute for Integrative Genomics at Princeton University where he worked as a postdoctoral fellow from 2009-2012. The overarching theme of his work has been the development and evaluation of methods that acknowledge the emergent complexity of biological systems.\nLeonid Teytelman - Call to reduce random collisions with information; we can automatically connect scientists to the knowledge that they need. Every scientist knows that virtually all papers, including their own, contain mistakes. A key motivation for creating protocols.io was to make it possible to share corrections and optimizations of published research protocols and to have this information automatically reach the scientists using these methods. While pushing relevant knowledge to the users is built into all aspects of protocols.io, we can do a lot more. If publishers, Crossref, and reference management platforms collaborate, we can move beyond the search towards a point where important information automatically reaches the appropriate researchers.\nAbout Leonid (Lenny) Teytelman Lenny is the Co-founder and CEO of protocols.io, an open access platform to share and discover research protocols. It enables scientists to make, exchange, improve and discuss protocols and it is poised to dramatically accelerate and to increase reproducibility of scientific research. Lenny did his graduate studies at UC Berkeley and finished his postdoctoral research at MIT. Lenny has a strong passion for sharing science and improving research efficiency through technology.\nNicholas Bailey - What does data science tell us about social challenges in scholarly publishing? How can we facilitate the fair advancement and dissemination of knowledge? The risks and shortcomings within scholarly publishing are always under scrutiny, but some problems don’t seem to be going away. What should we do about obvious gender inequality within some disciplines, or the weight given to Impact Factor as a measure of quality? The Royal Society of Chemistry has a royal charter to publish scientific content in a way that serves the public interest, and as such its Data Science team devotes part of its time to analysing the social challenges facing scholarly publishing. In this talk, Nicholas Bailey will share some examples.\nAbout Nicholas Bailey Nicholas Bailey is a web analytics expert, a swimmer, a father, and a data geek. After spending several years in the Marketing team at the Royal Society of Chemistry, ultimately managing the database marketing team, he moved out of Marketing and into the Data Science team in order to work more closely with agile teams of developers and strengthen his data analysis and coding skills. Nicholas has a lot to say about measuring digital products, machine learning, and the potential of data science to contribute to positive social outcomes.\nMiguel Escobar Varela - Digital Humanities in Singapore: some thoughts for the future. Singapore-based researchers from a variety of disciplines are currently using digital tools to study the humanities, in areas as diverse as history and dance studies. This talk will present an overview of current projects and suggest a path for the growth of this field in Singapore. 
It argues that the future of DH requires better inter-institutional infrastructure for long-term data storage, clearer protocols for interoperability and more freely available and reusable datasets. This is easier said than done, but looking at the examples of other countries can provide some sources for inspiration.\nAbout Miguel Escobar Varela Miguel Escobar Varela is an assistant professor in the University Scholars Programme (USP) at the National University of Singapore. At the USP, Dr. Varela teaches in the domain of Humanities and Social Sciences. He is a theatre researcher and software programmer. His interests are in teaching theatre through interactive websites and applying computational methods to study performances in Singapore and Indonesia.\nKuansan Wang - Democratize access to scholarly knowledge with AI. With the advent of big data and cloud computing, artificial intelligence has made tremendous strides in recent years. Not only has machine surpassed humans in playing the chess game Go and Jeopardy game shows, reports of superhuman performance in other highly cognitive tasks, ranging from image classification to speech recognition, also abound. Have we reached a stage where the advancements in AI can help tackle a problem in scientific pursuits, namely, the access and the dissemination of scholarly knowledge? This talk describes Microsoft Academic, a project inside Microsoft Research that uses the state-of-the-art AI in natural language understanding and knowledge acquisition to harvest knowledge from scholarly communications and make it available on the web. The talk will describe the technical challenges that have been overcome, the world-wide research collaborations that have since been enabled, and discuss the potentials of making knowledge more readily available to the mass.\nAbout Kuansan Wang Kuansan Wang is the Managing Director at Microsoft Research Outreach (MSR), where he started in March 1998 as a Researcher in the speech technology group working. In 2004, he moved to the speech product group and became a software architect where he helped create and ship the product Microsoft Speech Server, which is still powering the corporate call center for Microsoft. Since September 2007, he has been back at MSR, joining the newly founded Internet Service Research Center with a mission to revolutionize online services and make Web more intelligent. 
In March 2016, he took on an additional role as a Managing Director of MSR Outreach, an organization with the mission to serve the research community.\nRead more about our annual events\nRegister now for LIVE17\n", "headings": ["Theodora Bloom - Preparing to handle dynamic scholarly content: Are we ready?","About Theodora Bloom","Casey Greene - Research and literature parasites in a culture of sharing.","About Casey Greene","Leonid Teytelman - Call to reduce random collisions with information; we can automatically connect scientists to the knowledge that they need.","About Leonid (Lenny) Teytelman","Nicholas Bailey - What does data science tell us about social challenges in scholarly publishing?","About Nicholas Bailey","Miguel Escobar Varela - Digital Humanities in Singapore: some thoughts for the future.","About Miguel Escobar Varela","Kuansan Wang - Democratize access to scholarly knowledge with AI.","About Kuansan Wang"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/scenario-planning-for-our-future/", "title": "Scenario planning for our future", "subtitle":"", "rank": 1, "lastmod": "2017-08-28", "lastmod_ts": 1503878400, "section": "Blog", "tags": [], "description": "Crossref is governed by a board of directors that meets in person three times a year in March, July and November. At the July meeting the board typically spends a significant amount of time on strategic planning in addition to its usual activities such as financial oversight, approving investment in new services based on staff and committee recommendations, reviewing and approving policies and fees for new and existing services and generally making sure Crossref is healthy and well run.\n", "content": "Crossref is governed by a board of directors that meets in person three times a year in March, July and November. At the July meeting the board typically spends a significant amount of time on strategic planning in addition to its usual activities such as financial oversight, approving investment in new services based on staff and committee recommendations, reviewing and approving policies and fees for new and existing services and generally making sure Crossref is healthy and well run.\nThis year we worked with a facilitator to look farther into the future than normal using a technique called scenario planning to map out “strategic agendas” for the next five years. Scenario-based strategic planning doesn’t try to predict the future but allows us to be flexible in planning by looking at a range of different possible eventualities. This is particularly useful for Crossref because scholarly research and communications is changing rapidly and we operate in a very complex environment.\nTo prepare for the meeting our facilitator, Susan Stickely, prepared 12 “critical uncertainties” - impactful issues that could go either way and that will affect how Crossref works, its mission and even whether it needs to exist. To develop the critical uncertainties Susan interviewed Crossref staff, board members, general members and scholarly communications community influencers and we held a preparatory group exercise at the March board meeting. The critical uncertainties are:\nScholarly Communication Landscape: Increasing diversity? Or publishing disintermediated? Machine Learning / Artificial Intelligence: Supporting? Or obsoleting the researcher and publishers? Policy and Regulation: Limiting? Or visionary? Financing of Scholarly Communication: Shrinking Pool? Or Expanding Pool? 
Rise of Pre-print, New Content Sources: New, non-traditional? Or De-formalizing? Tracking and Privacy: Increased Privacy? Or Loss of Privacy? Cybersecurity: Secure? Or Vulnerable, Insecure? Publisher Sustainability: Slow Progress? Or Fast Progress? Impact of Open: Open or Closed? Or Slow to Change? Source of Prestige and Recognition: New Source? Or Publisher, Institution? Quality and Accuracy of Content: High? Or Low? Geopolitical Stability and Stance: Stable, Unified? Or Unstable, Fragmented? In addition, from the interviews Susan was able to summarize Crossref’s distinctive competencies as:\nHaving a reputation as a trusted, neutral one-stop source of metadata and services Managing scholarly infrastructure with technical knowledge and innovation Convening and facilitating scholarly communications community collaboration To be successful, Crossref will need to continue to invest in, apply, and evolve these distinctive competencies as it confronts its strategic dilemmas and challenges.\nOver a day and a half of discussions and breakout sessions the board and staff drew up a number of scenarios and created a draft strategic agenda for Crossref. Over the next couple of months we’ll be working on refining the strategic agenda, and we will then present the results to members.\nOne theme that emerged is for Crossref to engage more with funders and build on the work done with them in creating the Crossref Funder Registry. We have started a new Funder Advisory Group and, among other things, are working with them on a prototype for a new registry of grant identifiers.\nIn the regular board session the board approved three recommendations from the Membership and Fees Committee:\nTo approve the recommendations with respect to volume discounts for current deposits of posted content (i.e. preprints). To create a new “peer review report” record type with a specific metadata schema and a bundled fee of $1.25 to be charged for a content item and all the reports associated with it. To update the metadata delivery offering to have a single agreement that covers all metadata APIs/delivery routes, to adopt a single (updated) fee structure, and to remove case-by-case opt-outs for metadata. Item number 3 involves a number of big changes - for example the removal of the case-by-case opt outs requires a change to the main Membership Agreement - so we will be sending out more information to members and Affiliates in September and October about the changes and our implementation plans.\nYou can see the full history of the motions from every Board meeting on our website.\nAnother major issue that the board discussed is the upcoming election for the board of directors. In order to broaden participation and be inclusive there was a new process this year. The Nominating Committee put out a call for expressions of interest for candidates to be on the slate for the election. We had a great response and there were 25 expressions of interest reviewed by the Nominating Committee who came up with a slate of nine excellent candidates for the six seats up for election. This is the first time that there are more candidates than seats on the slate so it’s particularly important for members to vote this year.
See the recent blog post about the election process and the slate for more details.\nThe next board meeting is in November in conjunction with Crossref LIVE17 in Singapore.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/coming-to-a-venue-near-you/", "title": "Coming to a venue near you", "subtitle":"", "rank": 1, "lastmod": "2017-08-24", "lastmod_ts": 1503532800, "section": "Blog", "tags": [], "description": "First of all – hello! I’m Vanessa. I’m fairly new to Crossref, having just joined our outreach team a few weeks ago. I previously worked in International Development, enabling individuals and institutions in Africa, Asia and Latin America to access cutting edge scholarly research and knowledge, supporting national development and transforming lives.\n", "content": "First of all – hello! I’m Vanessa. I’m fairly new to Crossref, having just joined our outreach team a few weeks ago. I previously worked in International Development, enabling individuals and institutions in Africa, Asia and Latin America to access cutting edge scholarly research and knowledge, supporting national development and transforming lives.\nA firm belief in the importance of connecting research and information around the world led me to Crossref where my role of International Community Outreach Manager connects me with a range of different people working across diverse disciplines and sectors. I’ll be supporting the coordination of our local LIVE events and helping to set up an ambassador program (more information on this coming soon) to deepen regional connections around the globe. You can read more about myself and my colleagues at Crossref on our People page.\nAs Crossref membership continues to grow globally, it becomes increasingly important for us to look at new ways to engage with our international membership base.\nYou may have heard about our LIVE local events, or even attended one in person before. These are free-to-attend, one day, regional events (local to you), providing a tailored program of activities which include information on the key concepts of Crossref, the services we offer and our future plans.\nIn the past year we have held LIVE local events in Brazil, Beijing, Boston and most recently Seoul. We also have a London LIVE event coming up soon. Next year we are aiming to be even more ambitious, hoping to expand our activities to a number of different countries around the world.\nImages left to right, Crossref LIVE participants in Seoul, Crossref LIVE speakers in Brazil, and literature we use at our LIVE events\n||||\nWhen running our LIVE local events, we collaborate with local organizations to ensure they are appropriate, accessible, and applicable to the country context. Members support us by lending their local expertise with regards to venue selection, suggestions for speakers, tailored content, translation of materials and participant enrolment. We collaborate on logistics, content, Crossref speakers and the promotion of the event to our members and the wider community.\nWhen running our LIVE local events, we collaborate with local organizations to ensure they are appropriate, accessible, and applicable to the country context. Members support us by lending their local expertise with regards to venue selection, suggestions for speakers, tailored content, translation of materials and participant enrollment. 
We collaborate on logistics, content, Crossref speakers and the promotion of the event to our members and the wider community.\nWe will release more information of upcoming regional events in due course, but we are working on the following countries as priorities for 2018-19:\nAsia-Pacific: Malaysia, Indonesia, Japan, Taiwan, Australia Central Asia: India Latin America: Mexico, Colombia, Chile, Brazil Middle East: UAE (Dubai or Abu Dhabi) Africa: South Africa, Kenya Eastern Europe: Turkey, Greece, Bulgaria, Romania, Serbia, Poland Western Europe: Germany, Spain, UK North America: Canada, USA If you are interested in hosting a LIVE local event or have any suggestions for one in your region, then we would love to hear from you. View more information on our LIVE locals page or contact us to hear more or get involved.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2017-election-slate/", "title": "2017 election slate", "subtitle":"", "rank": 1, "lastmod": "2017-08-17", "lastmod_ts": 1502928000, "section": "Blog", "tags": [], "description": "Slate of 2017 board candidates announced, and it’s going to be exciting Crossref is always evolving and the board knows it must evolve with us so we can continue to provide the right kind of services and support for you, as members of the research community.\nThis year two things happened for the first time: we used our updated bylaws see article VII, section 2 agreed by the board last year, to allow more candidates than available seats; and secondly, to issue an open call for expressions of interest.", "content": "Slate of 2017 board candidates announced, and it’s going to be exciting Crossref is always evolving and the board knows it must evolve with us so we can continue to provide the right kind of services and support for you, as members of the research community.\nThis year two things happened for the first time: we used our updated bylaws see article VII, section 2 agreed by the board last year, to allow more candidates than available seats; and secondly, to issue an open call for expressions of interest. Many members of the current board felt it was vital to move to this more transparent process.\nWith Crossref developing new services for new types of members at a rapid pace, it’s an exciting time to be on the board of directors. 
With 25 expressions of interest it seems we’re not the only ones who think so!\nFrom these 25 applications, the Nominating Committee has proposed the following nine candidates to fill the six seats open for election to our board of directors:\nAmerican Institute of Physics (AIP), Jason Wilde, USA\nF1000 Research, Liz Allen, UK\nInstitute of Electronic and Electrical Engineers (IEEE), Gerry Grenier, USA\nThe Institution of Engineering and Technology (IET), Vincent Cassidy, UK\nMassachusetts Institute of Technology Press (MIT Press), Amy Brand, USA\nOpenEdition, Marin Dacos, France\nSciELO, Abel Packer, Brazil\nSPIE, Eric Pepper, USA\nVilnius Gediminas Technical University Press (VGTU Press), Eleonora Dagiene, Lithuania\nRead the candidates’ organizational and personal statements Candidates were chosen based on the following criteria:\nThat board representation should be reflective of membership\nA balance of types and sizes of organizations\nThat all committee choices and recommendations were unanimous\nYou can be part of this important process, by voting in the election If your organization is a member of Crossref on September 15 2017, you are eligible to vote when voting opens on September 28 (affiliates, however, are not eligible to vote).\nHow can you vote? On September 28, your organization’s designated voting contact will receive an email with a link to the formal Notice of Meeting and Proxy Form with concise instructions on how to vote. An additional email will be sent with a username and password along with a link to our online voting platform. It is important to make sure your voting contact is up-to-date.\nWant to add your voice? We are accepting independent nominations until 7 November 2017. Organizations interested in standing as an independent candidate should contact me by this date with the endorsements of ten other Crossref members.\nThe election itself will be held at LIVE17 Singapore, our annual meeting, on 14 November 2017. We hope you’ll be there to hear the results.\n", "headings": ["Slate of 2017 board candidates announced, and it’s going to be exciting","Read the candidates’ organizational and personal statements","You can be part of this important process, by voting in the election","How can you vote?","Want to add your voice?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/you-do-want-to-see-how-its-made-seeing-what-goes-into-altmetrics/", "title": "You do want to see how it’s made — seeing what goes into altmetrics", "subtitle":"", "rank": 1, "lastmod": "2017-08-14", "lastmod_ts": 1502668800, "section": "Blog", "tags": [], "description": "There\u0026rsquo;s a saying about oil, something along the lines of \u0026ldquo;you really don\u0026rsquo;t want to see how it\u0026rsquo;s made\u0026rdquo;. And whilst I\u0026rsquo;m reluctant to draw too many parallels between the petrochemical industry and scholarly publishing, there are some interesting comparisons to be drawn.\nOil starts its life deep underground as an amorphous sticky substance. Prospectors must identify oil fields, drill, extract the oil and refine it. It finds its way into things as diverse as aspirin, paint and hammocks.", "content": "There\u0026rsquo;s a saying about oil, something along the lines of \u0026ldquo;you really don\u0026rsquo;t want to see how it\u0026rsquo;s made\u0026rdquo;. 
And whilst I\u0026rsquo;m reluctant to draw too many parallels between the petrochemical industry and scholarly publishing, there are some interesting comparisons to be drawn.\nOil starts its life deep underground as an amorphous sticky substance. Prospectors must identify oil fields, drill, extract the oil and refine it. It finds its way into things as diverse as aspirin, paint and hammocks. And as I lie in my hammock watching paint dry, I\u0026rsquo;m curious to know how crude oil made its way into the aspirin that I’ve taken for the headache brought on by the paint fumes. Whilst it would be better if I did know how these things were made, not knowing doesn\u0026rsquo;t impair the efficacy of my aspirin.\nAltmetrics start life deep inside a number of systems. Data buried in countless blogs, social media and web platforms must be identified, extracted and refined before it can be used in products like impact assessments, prompts to engagement, and even tenure decisions. But there the similarity ends. Like the benzene in my aspirin, the data that goes into my favourite metric has come a long way from its origins. But that doesn\u0026rsquo;t mean that I shouldn\u0026rsquo;t know how it was made. In fact, knowing what went into it can help me reason about it, explain it and even improve it.\nHeavy industry or backyard refinery? When you head out to fill your car, you buy fuel from a company that probably did the whole job itself. It found the crude oil, extracted it, refined it, transported it and pumped it into your car. Of course there are exceptions, but a lot of fuel is made by vertically integrated companies who do the whole job. And whilst there are research scientists who brew up special batches for one-off pieces of research, if you wanted to make a batch of fuel for yourself you\u0026rsquo;d have to set up your own back-yard fractional distillation column.\nBecause the collection of a huge amount of data must be boiled down into altmetrics, organisations who want to produce these metrics have a big job to do. They must find data sources, retrieve the data, process it and produce the end product. The foundation of altmetrics is the measurement of impact, and whilst the intermediary data is very interesting, the ultimate goal of a metric is the end product. If you wanted to make a new metric you\u0026rsquo;d have two choices: set up an oil refinery (i.e. build a whole new system, complete with processing pipeline) or a back-yard still (a one-off research item). Either option involves going out and querying different systems, processing the data and producing an output.\nBeing able to demonstrate the provenance of a given measurement is important because no measurement is perfect. It\u0026rsquo;s impossible to query every single extant source out there. And even if you could, it would be impossible to prove that you had. And even then, the process of refinement isn\u0026rsquo;t always faultless. Every measurement out there has a story behind it, and being able to tell that story is important when using the measurement for something important. Data sources and algorithms change over time, and comparing a year-old measurement to one made today might be difficult without knowing what underlying observations went into it. A solution to this is complete transparency about the source data, how it was processed, and how it relates to the output.\nUnderlying data This is where Crossref comes in. It turns out that the underlying data that goes into altmetrics is just our kind of thing. 
As the DOI Registration Agency for scholarly literature, it\u0026rsquo;s our job to work with publishers to keep track of everything that\u0026rsquo;s published, assign DOIs and be the central collection and storage point for metadata and links. Examples of links stored in Crossref are between articles and funders, clinical trial numbers, preprints, datasets etc. With the Event Data project, we are now collecting links between places on the web and our registered content when they\u0026rsquo;re made via DOIs or article landing pages.\nThis data has wider use than just altmetrics. For example, an author might want to know over what time period a link to their article was included in Wikipedia, and which edit to the article was responsible for removing it and why. Or, in these days of \u0026ldquo;fake news\u0026rdquo;, someone may want to know everywhere on Twitter that a particular study is referenced so they can engage in conversation.\nWhilst the field of altmetrics was the starting point for this project, our goal isn’t to provide any kind of metric. Instead, we provide a stream of Events that occurred concerning a given piece of registered content with a DOI. If you want to build a metric out of it, you\u0026rsquo;re welcome to. There are a million different things you could build out of the data, and each will have a different methodology. By providing this underlying data set, we hope we\u0026rsquo;ve found the right level of abstraction to enable people to build a wide range of things.\nEvery different end-product will use different data and different algorithms. By providing an open dataset at the right level of granularity, we allow the producers of these end-products to say exactly which input data they were working with. By making the data open, we allow anyone else to duplicate the data if they wish.\nSticky mess To finish, let me return to the sticky mess of the distillation column. We identify sources (websites, APIs and RSS feeds). We visit each one, and collect data. We process that data into Events. And we provide Events via an API. At each stage of processing, we make the data open:\nThe Artifact Registry lists all of the sources, RSS feeds and domains we query. The Evidence Registry lists which sites we visited, what input we got, what version of each Artifact was used, and which Events were produced. The Evidence Log describes exactly what every part of the system did, including whether it ran into problems along the way. The Events link back to the Evidence so you can trace exactly what activity led up to the Event. All the code is open source and the version is linked in the Evidence Record, so you can see precisely which algorithms were used to generate a given Event. Anyone using the Data can link back to Events, which in turn link back to their Evidence. The end-product, Events, can be used to answer altmetrics-y questions like \u0026ldquo;who tweeted my article?\u0026rdquo;. But the layers below that can be put to a range of other uses. For example:\n\u0026ldquo;Why does publisher X have a lower Twitter count?\u0026rdquo;. The Evidence Logs might show that they tend to block bots from their site, preventing data from being collected. \u0026ldquo;Why did their Twitter count rise?\u0026rdquo;. The Evidence Logs might show that they stopped blocking bots. \u0026ldquo;What does Crossref think the DOI is for landing page X?\u0026rdquo;.
A search of the Evidence Logs might show that the Event Data system visited the page on a given date and decided that it corresponded to DOI Y. \u0026ldquo;Which domains hold DOI landing pages?\u0026rdquo;. The \u0026ldquo;Domains\u0026rdquo; Artifact will show the domains that Event Data looked at, and the Evidence Logs will show which versions were used over time. By producing not only Events, but being completely transparent about the refinement process, we hope that people can build things beyond traditional altmetrics, and also make use of the intermediary products as well. And by using open licenses, we allow reuse of the data.\nSee you in Toronto! There\u0026rsquo;s so much more to say but I\u0026rsquo;ve run out of ink. To find out more, come to 4:AM Altmetrics Conference! I\u0026rsquo;ll be speaking at the conference in Session 10 on the 28th. I\u0026rsquo;ll also be at the Altmetrics Workshop on the 26th. Stacy Konkiel and I are hosting the Hackathon on the 29th, where you can get your hands on the data. See you there!\nThis blog post was originally posted on the 4:AM Altmetrics Conference Blog.\n", "headings": ["Heavy industry or backyard refinery?","Underlying data","Sticky mess","See you in Toronto!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-4-with-cla/", "title": "Using the Crossref REST API. Part 4 (with CLA)", "subtitle":"", "rank": 1, "lastmod": "2017-07-25", "lastmod_ts": 1500940800, "section": "Blog", "tags": [], "description": "As a follow-up to our blog posts on the Crossref REST API we talked to the Copyright Licensing Agency (CLA) about the work they’re doing, and how they’re using the Crossref REST API as part of their workflow.\n", "content": "As a follow-up to our blog posts on the Crossref REST API we talked to the Copyright Licensing Agency (CLA) about the work they’re doing, and how they’re using the Crossref REST API as part of their workflow.\nAlex Cole, Senior Business Analyst at the Copyright Licensing Agency introduces the DCS\nThe Digital Content Store (DCS) is an innovative rights, technology and content platform for UK Higher Education Institutions (HEIs), which was developed collaboratively with HEIs, publishers and technology partners. The platform is included in the CLA annual licence fee and is an optional tool for licensees.\nAt its core, the system is a searchable repository of digital copies that have been created under the licence by HEIs (the CLA Digital Content Store), it also functions as a workflow management tool. When extracts are digitised by HEIs under the CLA Licence, they are uploaded directly to the DCS. Once an extract is uploaded and assigned to a course, students are able to access the extract via a secure link. Every year HEIs are obliged to report all of these digitised items to CLA as part of the terms of their copyright blanket licence. Prior to the DCS, HEIs were having to submit this data manually, a process that could take days, if not weeks. The system removes the need for annual census reporting to CLA, reducing the data collection burden on the HE sector and creating administrative efficiencies through streamlining the digital course pack creation process.\nCan you talk about how you\u0026rsquo;re using the Crossref REST API within CLA Digital Content Store (DCS)?\nWhen a DCS user adds a new extract to a course they need to include relevant metadata. 
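To make the lookup concrete, here is a minimal, purely illustrative sketch of the kind of request a system like the DCS could make against the public Crossref REST API to pre-fill bibliographic details from a DOI. It is not CLA's actual integration code; the endpoint and response fields follow the standard api.crossref.org JSON format, and the example DOI and mailto address are placeholders.

```python
# Illustrative sketch only: fetch article and journal metadata for a DOI
# from the public Crossref REST API (api.crossref.org).
import requests

def lookup_work(doi, mailto="you@example.org"):
    """Return a few bibliographic fields for a DOI, or raise on HTTP errors."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        params={"mailto": mailto},  # identifies the caller, per REST API etiquette
        timeout=10,
    )
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or [""])[0],
        "journal": (msg.get("container-title") or [""])[0],
        "issn": msg.get("ISSN", []),
        "publisher": msg.get("publisher"),
    }

print(lookup_work("10.5555/12345678"))  # placeholder DOI
```

A single lookup like this is enough to pre-fill the journal title, ISSN, and publisher for an uploaded extract.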
This metadata is necessary, as it ultimately helps CLA in correctly identifying the copyright owner of the extract so that we can make sure they receive fair payment in our royalties distributions. The Crossref REST API supplies the DCS user with article and journal metadata so that they can provide the correct information about the content they are uploading. Using the API saves the user the time they would have otherwise spent searching for this data, streamlining their workflow and making the process more efficient.\nSearching for and adding content in the DCS What are your future development plans?\nWe’re continuing to develop the DCS in order to improve user experience for our customers. We’re currently looking into opening up access for our users by allowing academics to submit requests to the DCS via a web-form and our own DCS Course Content URL API. We are also looking into incorporating the Crossref REST API into some of our back office workflows to improve efficiency and simplify our workflow. The metadata that we can retrieve from Crossref can help us match customer usage to our rights database.\nWhat else would you like to see in Crossref metadata?\nGoing forward we’d like to see:\nMore books included in the database.\nAn indication of whether an ISSN is associated with the print or digital edition of a journal.\nThanks Alex! ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-enters-beta/", "title": "Event Data enters Beta", "subtitle":"", "rank": 1, "lastmod": "2017-07-05", "lastmod_ts": 1499212800, "section": "Blog", "tags": [], "description": "We’ve been talking about it at events, blogging about it on our site, living it, breathing it, and even sometimes dreaming about it, and now we are delighted to announce that Crossref Event Data has entered Beta.\n", "content": "We’ve been talking about it at events, blogging about it on our site, living it, breathing it, and even sometimes dreaming about it, and now we are delighted to announce that Crossref Event Data has entered Beta.\nA collaborative initiative by Crossref and DataCite, Event Data offers transparency around the way interactions with scholarly research occur online, allowing you to discover where it’s bookmarked, linked, liked, shared, referenced, commented on etc., across the web, and beyond publisher platforms.\nThe name Event Data reflects the nature of the service, as it collects and stores digital actions that occur on the web, from the quick and simple, such as bookmarking and referencing, through to deeper interconnectivity such as exposing the links between research artifacts. Each individual action is timestamped and recorded in our system as an Event, and made available to the community via an API.\nEvent Data will be available for absolutely anyone to use: publishers, third party vendors, editors, bibliometricians, researchers, authors, funders etc., and with tens of thousands of events occurring every day, there’s a wealth of insight to be gained for those interested in analyzing and interpreting the data.\nIt’s important to note that Event Data does not provide metrics. What it does provide is the raw data to facilitate your own analysis, giving you the freedom to integrate the data into your own systems.\nWe are currently working very closely with a few organizations with specific use cases who are helping us to test and refine Beta before we launch our production service later this year.
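For readers who want a feel for the raw data, here is a rough sketch of retrieving Events for a single DOI. The base URL, parameter names, and field names are assumptions based on the public Event Data query API and may differ from the Beta endpoints; the DOI and mailto values are placeholders.

```python
# Rough sketch: list Events recorded for one registered content item (by DOI).
# Assumes the public Event Data query API; Beta endpoints may differ.
import requests

EVENTS_URL = "https://api.eventdata.crossref.org/v1/events"

def events_for_doi(doi, mailto="you@example.org", rows=100):
    params = {
        "obj-id": doi,     # the registered work the Event is about
        "mailto": mailto,  # identifies the caller
        "rows": rows,
    }
    resp = requests.get(EVENTS_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("message", {}).get("events", [])

for event in events_for_doi("10.5555/12345678"):  # placeholder DOI
    # Each Event says who (subj) did what (relation) to which work (obj), and when.
    print(event.get("occurred_at"), event.get("source_id"), event.get("relation_type_id"))
```

Whatever analysis you build on top of that stream is then up to you.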
If you decide to take a look at Beta yourself, all the data you collect from Event Data is licensed for public sharing and reuse according to our Terms of Use.\nUntil Event Data is in production mode, we do not recommend building any commercial or customer-based tools off the data. If you are not in the Beta test group but are interested in participating, please contact me below. For more information about Event Data, please see our user guide.\nPlease contact me, Jennifer Kemp\u0026mdash;Outreach Manager for Event Data\u0026mdash;with any questions.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-and-colleagues-in-south-korea/", "title": "Crossref and colleagues in South Korea", "subtitle":"", "rank": 1, "lastmod": "2017-06-30", "lastmod_ts": 1498780800, "section": "Blog", "tags": [], "description": "Connecting Crossref, ORCID, DataCite, and our communities Q: What do you get if you combine our three organisations for a week to catch up with our Korean community - publishers, librarians, universities, researchers, and service providers? A: Two events, plenty of meetings, great conversations and feedback, fabulous Korean hospitality, and a little jet-lag.\n", "content": "Connecting Crossref, ORCID, DataCite, and our communities Q: What do you get if you combine our three organisations for a week to catch up with our Korean community - publishers, librarians, universities, researchers, and service providers? A: Two events, plenty of meetings, great conversations and feedback, fabulous Korean hospitality, and a little jet-lag.\nOver the past few years, Crossref has seen huge growth in our members in Korea. We have nine Sponsoring Affiliates (who look after nearly 1,000 members between them), two Sponsoring Members and nearly 80 Library members. With the International DOI Foundation (IDF) strategy meeting taking place in Daejon, it seemed sensible to combine that with our own events and meetings with key organizations. This also fitted nicely with some plans that ORCID and DataCite had, so we combined forces.\nWe (that\u0026rsquo;s me, Rachael Lammey, Ed Pentz, and Geoffrey Bilder) hosted a Crossref LIVE local event on Monday 12th June for around 80 members and affiliates. We were joined by Alice Meadows and Nobuko Maiyairi (ORCID), Martin Fenner (DataCite), and Professor Sun-Tae Hong (Seoul National University) as co-presenters. We looked at the global reach of Korean research, and how registering content with Crossref and participating in services like Reference Linking helps create valuable connections between research outputs. With so many established members in Korea, we were able to go beyond the basics and emphasize the importance of metadata input, metadata delivery, and preview our upcoming Event Data service. We also talked data-sharing and the value of integrating ORCID iDs into publisher and institution workflows.\n_Growth in research outputs in Asia Pacific 2009-2017. Source: Web of Science databases SCI-E, SSCI and AHCI only, downloaded 19/4/2017. Data provided by Wiley (thank you!)_ Later in the week we took a multi-pronged approach to highlight the many shared principles of our organizations and discuss the specific initiatives we’re collaborating on. We held the Joint Global Infrastructure Conference covering the global nature of what we do and the connections/interoperability between ORCID, DataCite and Crossref. 
This interoperability and our governance structures lend themselves to cooperation on other initiatives such as Metadata 2020 and The OI Project, which we were able to share.\nCheck out all #jgic_seoul tweets.\nGuest speakers volunteered to talk about how they work with our organizations - we were joined by Choon Shil Lee from the Korean Association of Medical Journal Editors (KAMJE) to demonstrate their ORCID integrations, and Hideaki Takeda from the Japan Link Centre (JaLC) who discussed the infrastructure and services they use to register and disseminate content globally. User stories like this are great - they highlight how people work with our services, give others ideas, and also flag up where we can do more.\nPart of doing more involved providing clarification on Crossref’s position alongside other DOI Registration Agencies. With a new Registration Agency in Korea, we needed to communicate the global nature of what we do to help our members achieve their discoverability goals, as not all DOIs are made equal. Through working with ORCID and DataCite colleagues we were able to place great importance both on our work worldwide, and on the benefits to Korean societies in collaborating outside national boundaries.\nCombining talks from our three organizations was a great opportunity to emphasize the importance of shared global infrastructure. Geoffrey Bilder’s plug socket analogy is apt - services that work cross-border, cross-language, and cross-subject areas streamline processes for all of our different communities and enable research to travel beyond national boundaries and help it be found, linked, cited and assessed.\nWant to find out more? Slides from both meetings are available here and here, and watch out for further collaborative events.\n", "headings": ["Connecting Crossref, ORCID, DataCite, and our communities"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-receives-soc-accreditation-for-data-integrity-and-security/", "title": "Crossref receives SOC accreditation for data integrity and security", "subtitle":"", "rank": 1, "lastmod": "2017-06-21", "lastmod_ts": 1498003200, "section": "Blog", "tags": [], "description": "We are delighted to announce that Crossref has been awarded the Service Organization Control (SOC) 2® accreditation after an independent assessment of our controls and procedures by the American Institute of CPA’s (AICPA).\n", "content": "We are delighted to announce that Crossref has been awarded the Service Organization Control (SOC) 2® accreditation after an independent assessment of our controls and procedures by the American Institute of CPA’s (AICPA).\nThe SOC 2® accreditation is awarded to service organizations that have passed standard trust services criteria relating to the security, availability, and processing integrity of systems used to process users’ data and the confidentiality and privacy of the information processed by these systems.\nThe AICPA’s assessment also reviewed our vendor management programs, internal corporate governance and risk management processes, and regulatory oversight.\nFind out more about the SOC accreditation structure\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/data-management/", "title": "Data Management", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", 
"content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/security/", "title": "Security", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/full-text-links/", "title": "Full-Text Links", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/now-put-your-hands-up-for-a-similarity-check-update/", "title": "Now put your hands up! (for a Similarity Check update)", "subtitle":"", "rank": 1, "lastmod": "2017-06-06", "lastmod_ts": 1496707200, "section": "Blog", "tags": [], "description": "Today, I’m thinking back to 2008. A time when khaki and gladiator sandals dominated my wardrobe. The year when Obama was elected, and Madonna and Guy Ritchie parted ways. When we were given both the iPhone 3G and the Kindle, and when the effects of the global financial crisis lead us to come to terms with the notion of a ‘staycation’. In 2008 we met both Wall-E and Benjamin Button, were enthralled by the Beijing Olympics, and became addicted to Breaking Bad.", "content": "Today, I’m thinking back to 2008. A time when khaki and gladiator sandals dominated my wardrobe. The year when Obama was elected, and Madonna and Guy Ritchie parted ways. When we were given both the iPhone 3G and the Kindle, and when the effects of the global financial crisis lead us to come to terms with the notion of a ‘staycation’. In 2008 we met both Wall-E and Benjamin Button, were enthralled by the Beijing Olympics, and became addicted to Breaking Bad. And lest we forget, 2008 was also the year in which Beyoncé brought us Single Ladies; in all its sassy hand-waving, monochrome glory. For Crossref though, 2008 holds another important milestone as it was the year we launched our Similarity Check initiative. Today, the artist formerly known as CrossCheck provides our members with cost-effective access to Turnitin’s powerful text comparison tool, iThenticate.\nFast forward nearly a decade, and it’s wonderful to see just how Similarity Check membership has grown in the nine years since launch; from 16 original members in 2008 to over 1,300 today.\nFigure 1.1 The number of publishers participating in the Similarity Check service each year between 2008 – 2017 (to April)\nUsage of iThenticate is also consistent with this growth in membership, and throughout 2016 our members checked over four million manuscripts for similarity using the tool. As Similarity Check members contribute their full-text content into Turnitin’s database, this increase in membership also has a dramatic impact on the volume of content indexed by Turnitin. Today, members can compare their manuscripts against Turnitin’s database of over 60 million full-text works provided by Similarity Check members. With over 88 million works currently registered with Crossref, this means that 68% of all content deposited with us is now available for comparison in iThenticate. 
Over the years we have worked very closely with Turnitin to help champion new iThenticate feature developments that best support our member’s use of the tool as a core function of their editorial workflow. Many of our members too have also worked together with Turnitin to provide feedback on user experience and design.\nBelow, Turnitin’s Product Manager for iThenticate, Sun Oh, shares an insight into their research process and how Similarity Check member’s feedback has been critical in developing new and improved functionality in iThenticate.\nRead on to learn more from Sun\u0026hellip;\nSun Oh is a Senior Product Manager at Turnitin. She is currently the Product Manager for iThenticate and backend systems including the Content Intake System and similarity reports.\nLast year we surveyed our Crossref customers to find out what Similarity Check improvements they would like to see and noticed a recurring request for the ability to compare two or more personally sourced documents. We were intrigued and decided to run with it. We contacted the respondents who had asked for this, and started conversations to find out more. This helped us gather invaluable data, which in turn helped us to build the feature based on real use cases and with a clear view of what was wanted.\nThe design prototypes were reviewed for usability and effectiveness each step of the way by the respondents and once we had the feature up and running, those who requested it in our initial survey were among the first to trial it.\nWe’re thrilled to announce that we’ve now launched the new Doc-to-Doc comparison feature, available through iThenticate’s native interface. Simply select the Doc-to-Doc comparison upload method from the document submission panel.\nIf you are a Crossref member using Similarity Check, you have exclusive early access to this new feature, which allows you to use iThenticate’s powerful similarity check functionality and apply it to your own, private documents.\nHow does Doc-to-Doc Comparison work? Doc-to-Doc comparison allows users to upload one primary document and compare it against up to five other documents.\nFigure 1.2 The document upload screen for Doc-to-Doc comparison\nWhen the upload is complete, a similarity score is generated for the primary document based on the amount of similar content found in the comparison documents. A full comparison report is also available. The comparison report will open in the document viewer, and will display the primary document along with a list of the comparison documents and with their similarity percentage. If one of the comparison documents doesn’t include text that matches the primary document, iThenticate will still display it anyway, with a 0% score, allowing users to rule it out of their inspection. The similarity report will be stored securely in the user’s folder until they delete it.\nFigure 1.3 Similarity report for Doc-to-Doc comparison\nAs these documents will not be stored in a shared database, they won’t affect the similarity score of any future submissions. Primary and comparison documents remain completely private and will not be indexed into the shared iThenticate content database. To get a better idea of how Doc-to-Doc comparison works, check out the iThenticate feature guide on the Turnitin website.\nStart using Doc-to-Doc Comparison now! 
If you’re a Crossref member using Similarity Check, you can log in to your iThenticate account now and select the Doc-to-Doc comparison link on the homepage.\nWhat else is new in iThenticate in this new release? New Look In addition to Doc-to-Doc comparison, we decided to refresh the look and feel of iThenticate; the same tools our users know and trust, now with a modern interface. Users will also notice that iThenticate now has more readable font and friendlier styling throughout.\nReport Mode Memory To make life easier, iThenticate now remembers whether users were in the All Sources or Match Overview mode when they last used the Document Viewer. iThenticate will then open documents in this mode automatically hereafter.\nImproved Submission Process We’re also enhancing our submission process by making the upload requirements more inclusive. We’ve increased the possible file size limit from 40MB to 100MB when uploading to either the database or to Doc-to-Doc comparison, and PowerPoint (.ppt) and Excel (.xlsm) file formats are now accepted.\nDevelopments completed in 2016 If Similarity Check members haven’t had a chance to check out the improvements we introduced in iThenticate throughout 2016, here’s a quick recap. You can always find our updates on the What\u0026rsquo;s New page of the iThenticate website.\nDownload User List The ability for administrators to download a list of all the users in their account has been added. This list will allow administrators to easily send emails to users.\nSimilarity Score Calculation Update We updated how the similarity score is calculated when bibliographic material is excluded from a similarity report. Now, when bibliography exclusion is enabled, the word count of the bibliography is not included when calculating the overall percentage. This update to the similarity report calculation helps to provide users with a more accurate similarity score.\nImproved Security We are fully committed to keeping user’s data safe and secure at all times. To that end, we’ve added additional security logging, put in measures to enforce stronger passwords, and enabled Captcha after failed login attempts.\nFaster Report Generation We’ve increased the number of resources dedicated to the generation of similarity reports for our iThenticate service. As a result, users should see faster turnaround times for similarity reports.\nSupport for Eight Additional Languages The iThenticate user interface is now available in eight additional languages: German, Dutch, Latin American Spanish, Brazilian Portuguese, Italian, French, and both Simplified \u0026amp; Traditional Chinese. When adding new users to an account, administrators can specify the language of the new user, which will then send a welcome email in the selected language. Individual users can also set their preferred language by selecting a language from the Language dropdown in the Settings menu.\nContent Intake System We’ve developed a new Content Intake System which enables our publication content database to scale so that our users can compare against a constantly growing database of the most recently published content. This allows us to index Similarity Check members’ data in a much more reliable and efficient way than legacy intake methods. And recently, we’ve made the collecting and processing of content from Crossref members using Similarity Check even faster by parallelising our processors. 
This means that we have more processors running simultaneously to process data.\nBy removing the need for crawling, we will also minimize our impact on traffic to a Similarity Check member’s public-facing website. The Content Intake System is able to directly collect full text URLs from members DOI metadata. This results in a huge reduction in the time it takes from when a publisher first deposits a new DOI with Crossref, to when the content is indexed by us into our full-text publication database. To date, we’ve been able to index the content associated with 60 million Crossref DOIs, and have indexed more than 165 million published works in total which submissions are compared against in iThenticate.\nWalker (web crawler) We’ve developed a new web crawler. Referred to as “Walker”, the crawler makes it possible to provide quicker and more reliable similarity matches to content available on the web. Not to be confused with the Content Intake System mentioned above, Walker’s purpose is to crawl the public web and is not used for indexing full-text content from Similarity Check members.\nUsing Walker, we’re adding an average of nearly 10 million new web pages to our content database per day, ensuring we have the freshest internet content available to find matches against.\nWe’d love to get your feedback! As we design and develop new features, we want to make sure we’re fully understanding Similarity Check member’s needs and would love the opportunity to engage with users for further research. If you’d like to sign up to participate in user research for upcoming feature developments, please take a few minutes to fill out our Feedback Program Form. We look forward to connecting with you!\nContact Turnitin (EDIT 30/04/24: Support for iThenticate, contact details updated ) Please go to our Get help with Similarity Check page\nFor iThenticate technical and billing support, please email tiisupport@turnitin.com\nFor questions about content indexing, please contact Gareth at gmalcolm@turnitin.com\nFor iThenticate product development questions, please contact Sun at soh@turnitin.com\n* Sun Oh, Product Manager for iThenticate*\n**Thanks to Sun and the whole team at Turnitin for sharing this update.** For more information about Similarity Check, visit our service page.\nWant to join Crossref Similarity Check? Please contact our membership specialist.\n", "headings": ["How does Doc-to-Doc Comparison work?","Start using Doc-to-Doc Comparison now!","What else is new in iThenticate in this new release?","New Look","Report Mode Memory","Improved Submission Process","Developments completed in 2016","Download User List","Similarity Score Calculation Update","Improved Security","Faster Report Generation","Support for Eight Additional Languages","Content Intake System","Walker (web crawler)","We’d love to get your feedback!","Contact Turnitin (EDIT 30/04/24: Support for iThenticate, contact details updated )"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/data-citations-and-the-elife-story-so-far/", "title": "Data citations and the eLife story so far", "subtitle":"", "rank": 1, "lastmod": "2017-05-18", "lastmod_ts": 1495065600, "section": "Blog", "tags": [], "description": "When we set up the eLife journal in 2012, we knew datasets were an important component of research content and decided to give them prominence in a section entitled ‘Major datasets’ (see images below). Within this section, major previously published and generated datasets are listed. 
We also strongly encourage data citations in the reference list.\n", "content": "When we set up the eLife journal in 2012, we knew datasets were an important component of research content and decided to give them prominence in a section entitled ‘Major datasets’ (see images below). Within this section, major previously published and generated datasets are listed. We also strongly encourage data citations in the reference list.\nMajor Datasets for “Structural basis of protein translocation by the Vps4-Vta1 AAA ATPase” by N. Monroe, H. Han, P. Shen, et. al.\nAlmost five years on and I feel we have still not cracked it! We have signed up to the Force11 data citation principles, which were published three years back; we have been actively involved in working groups of Force11 and others, for example the Data Citation Roadmap for Scientific Publishers and the JATS XML data citation recommendation of JATS4R. I am also currently working with other publishers to come up with recommended JATS XML tagging for data availability statements, which is easier said than done considering the nuances of dataset uses and also how different publishers approach this.\nAdded to this, there is still significant push-back from authors about putting all dataset citations in the reference list (for example, authors are concerned about self-citing by citing a dataset created as part of the research article; “dataset citations” that are in effect a link to a search results page on a database; and the necessitation of hundreds of reference entries if an author has used a large base for the research).\nWhile eLife is very active in this space, and aims to arrange and mark up the datasets and citations produced by our authors in line with recommendations, the recommendations still have some gaps and the complete picture is not yet clear.\nIn late 2014, we brought in-house the process of depositing Crossref metadata (previously our online host did this for us). It gave us control of our processes and, at the time, we sent all the information we could to Crossref and have ensured our references are open and available in the Crossref public API. The code for this conversion process is all open-source and available for reuse. It can be found on GitHub. Since then, besides small improvements to the code and troubleshooting problems, we’ve not updated the code. I have been keeping a list of Crossref features and new deposit metadata we can add to our deposits, and now is the time for us to start working on this again.\nOne of the items we’ll be addressing is data citations.\nThe Crossref reference schema does not cater well for non-book or -journal content, and if an item does not have a DOI, the “reference” is not very useful because of the few tags available in the Crossref schema.\nHowever, Crossref have introduced the relationship type to their schema, so data references can be well linked and mineable. As I see Crossref as a potential broker between publishers and data repositories in the future, using the relationship-type deposit for all datasets will assist this and also allow these data points to more easily be seen within the article Nexus framework (see the recent blog post, How do you deposit data citations?).\nAt eLife, we already distinguish between Dataset generated as part of research results (relationship type in the Crossref schema: “isSupplementedBy”) and Dataset produced by a different set of researchers or previously published (relationship type: “references”). 
Therefore, it will not be hard for us to convert all the information about data referencing that is within the dataset section into a relationship-type deposit in the conversion to Crossref XML.\nWe have also recently gone through an exercise of defining a set of rules for all our references and, of the 12 allowed types, one is data. The rules for Schematron (a rule-based validation language for making assertions about the presence or absence of patterns in XML trees; see also this useful article about Schematron on the JATS4R learning centre) have been written for the eLife ‘business’ rules. Subject to final testing, these will be integrated into our workflow (the Schematron is open source and available for reuse on GitHub, and we will also build an API for people to use the Schematron directly). This will allow us to easily identify all data references and convert them into relationship types in the XML delivered to Crossref. This way, they will not be lost in the references section of our deposits, but properly identified.\nHowever, we do appreciate this will become harder for us as authors become more familiar with datasets as references, because we will not be able to identify the difference between generated and analysed datasets so easily.\nThe code developed and used to complete these conversions will, again, be on GitHub and open source, and we actively encourage the reuse of this.\nWhile the industry is still working on the best way to deal with data and ensuring it is given the prominence it requires, we feel this is the best approach we can take. Nothing is forever and we can still change what we do in the future. The beauty of open-source code also means that if there is an alternative approach now or in the future, the code we wrote at eLife can be developed by someone else in the future and we can all benefit.\nIf you have any questions, please do not hesitate to contact us.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/melissa-harrison/", "title": "Melissa Harrison", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/want-to-be-on-our-board/", "title": "Want to be on our Board?", "subtitle":"", "rank": 1, "lastmod": "2017-04-28", "lastmod_ts": 1493337600, "section": "Blog", "tags": [], "description": " Do you want to effect change for the scholarly community?\nOur Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2017 election.\n", "content": " Do you want to effect change for the scholarly community?\nOur Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2017 election.\nKey responsibilities of the Board are setting the strategic direction for the organization, providing financial oversight, and approving new policies and services. Some of the decisions the board has made in recent years include:\nEstablishing The OI Project to create a persistent Organization Identifier; Inclusion of preprints in the Crossref metadata; and The approval to develop Event Data which will track online activity from multiple sources.
Any member can express interest in serving on the Board We are seeking people who know about scholarly communications and would like to be part of our future. If you have a vision for the international Crossref community, we are interested in hearing from you. Crossref members that are eligible to vote, and would like to be considered, can express their interest together with statements of interest from you and from your organization. The form should be completed and sent to us before 01 June 2017.\nThe role of the Nominating Committee The Nominating Committee meets to discuss change, process, criteria, and potential candidates, ensuring a fair representation of membership. The Committee is made up of three board members not up for election, and two non-board members.\nCurrent Nominating Committee members:\nJohn Shaw, Sage (Chair) Mark Patterson, eLife Paul Peters, Hindawi Chris Fell, Cambridge University Press Rebecca Lawrence, F1000 Research About the election and our Board We have a principle of one member, one vote; our board comprises a cross-section of members and it doesn’t matter how big or small you are, every member gets a single vote. Board terms are three years, and one third of the Board is eligible for election every year. There are six seats up for election in 2017. The board meets in a variety of international locations in March, July, and November each year. View a list of the current Crossref Board members and a history of the decisions they’ve made (motions). The election opens online in late September 2017 and voting is done by proxy online or in person at the annual business meeting during Crossref LIVE in November 2017. Election materials and instructions for voting will be available to all Crossref members online in late September 2017. The board needs to be truly representative of Crossref’s global and diverse membership of organizations who publish.\nPlease express interest using the form, or email me with any questions.\n", "headings": ["Any member can express interest in serving on the Board","The role of the Nominating Committee","About the election and our Board"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-oi-project-gets-underway-planning-an-open-organization-identifier-registry/", "title": "The OI Project gets underway planning an open organization identifier registry", "subtitle":"", "rank": 1, "lastmod": "2017-03-28", "lastmod_ts": 1490659200, "section": "Blog", "tags": [], "description": "At the end of October 2016, Crossref, DataCite, and ORCID reported on collaboration in the area of organization identifiers. We issued three papers for community comment and after input we subsequently announced the formation of The OI Project, along with a call for expressions of interest from people interested in serving on the working group.\n", "content": "At the end of October 2016, Crossref, DataCite, and ORCID reported on collaboration in the area of organization identifiers. We issued three papers for community comment and after input we subsequently announced the formation of The OI Project, along with a call for expressions of interest from people interested in serving on the working group.\nWe had a great response and are happy to report that the Working Group has now been established, and is already underway with work to develop a plan for an open, independent, not-for-profit, sustainable, organization identifier registry. 
There is information about the OI Project Working Group on the ORCID website including a list of the 17 working group members. They represent a broad range of scholarly communications stakeholders. Our scope of work includes three separate but interdependent areas:\nGovernance; Registry Product Definition; and Business Model \u0026amp; Funding. The initial goal of the Working Group is to create a thorough and robust implementation plan by the end of 2017.\nPlease take a look at the website for more information and we’ll provide updates as things progress throughout the course of the year.\nPlease contact us with any questions.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/revised-crossref-doi-display-guidelines-are-now-active/", "title": "Revised Crossref DOI display guidelines are now active", "subtitle":"", "rank": 1, "lastmod": "2017-03-15", "lastmod_ts": 1489536000, "section": "Blog", "tags": [], "description": "\rWe have updated our DOI display guidelines as of March 2017, this month! I described the what and the why in my previous blog post New Crossref DOI display guidelines are on the way and in an email I wrote to all our members in September 2016. I’m pleased to say that the updated Crossref DOI display guidelines are available via this fantastic new website and are now active. Here is the URL of the full set of guidelines in case you want to bookmark it (https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy) and a shareable image to spread the word on social media.\n", "content": "\rWe have updated our DOI display guidelines as of March 2017, this month! I described the what and the why in my previous blog post New Crossref DOI display guidelines are on the way and in an email I wrote to all our members in September 2016. I’m pleased to say that the updated Crossref DOI display guidelines are available via this fantastic new website and are now active. Here is the URL of the full set of guidelines in case you want to bookmark it (https://0-doi-org.libus.csd.mu.edu/10.13003/5jchdy) and a shareable image to spread the word on social media.\nThis blog is a quick reminder that all Crossref members should now be displaying DOIs in the recommended new format from this month, on any new content you publish online. Please note these guidelines are for Crossref DOIs only, we have nearly 90 million registered but there are others, and not all DOIs are made equal.\nThe main changes are to display the DOI as a full, linked URL using HTTPS:\nhttps://doi.org/10.xxxx/xxxxx\nFor background on the HTTPS issue please read Geoffrey Bilder’s blog post, Linking DOIs using HTTPS.\nWhat will happen if you don’t update your Crossref DOI display? We tell members that they should be working towards making the change even if they can’t do it until later - we recognize that it is not always an easy change to make.\nHowever, if members don’t make the change, nothing immediate will happen (Crossref won’t fine you!) 
although as more members make the change your display will look odd and out of place compared with other members’ content.\nIf you have any questions please do not hesitate to contact us.", "headings": ["What will happen if you don’t update your Crossref DOI display?","If you have any questions please do not hesitate to contact us."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/how-do-you-deposit-data-citations/", "title": "How do you deposit data citations?", "subtitle":"", "rank": 1, "lastmod": "2017-03-02", "lastmod_ts": 1488412800, "section": "Blog", "tags": [], "description": "\rPlease visit Crossref\u0026rsquo;s official Data \u0026amp; Software Citations Deposit Guide for deposit details. Very carefully, one at a time? However you wish.\nLast year, we introduced linking publication metadata to associated data and software when registering publisher content with Crossref Linking Publications to Data and Software. This blog post follows the “whats” and “whys” with the all-important “how(s)” for depositing data and software citations. We have made the process simple and fairly straightforward: publishers deposit data \u0026amp; software links by adding them directly into the standard metadata deposit via relation type and/or references. This is part of the **existing Content Registration ** process and requires no new workflows.\n", "content": "\rPlease visit Crossref\u0026rsquo;s official Data \u0026amp; Software Citations Deposit Guide for deposit details. Very carefully, one at a time? However you wish.\nLast year, we introduced linking publication metadata to associated data and software when registering publisher content with Crossref Linking Publications to Data and Software. This blog post follows the “whats” and “whys” with the all-important “how(s)” for depositing data and software citations. We have made the process simple and fairly straightforward: publishers deposit data \u0026amp; software links by adding them directly into the standard metadata deposit via relation type and/or references. This is part of the **existing Content Registration ** process and requires no new workflows.\nRelationships Data \u0026amp; software citations are a valuable part of the “research article nexus”, comprised of the publication linked to a variety of associated research objects, including data and software, supporting information, protocols, videos, published peer reviews, a preprint, conference papers, etc. For all of these resources, we use relation types in the metadata deposit to “anchor” the article in the article nexus and link to it.\nFor data \u0026amp; software, we ask for: identifier of the dataset/software identifier type: “DOI”, “Accession”, “PURL”, “ARK”, “URI”, “Other” * relationship type: “isSupplementedBy” or “references” description of dataset or software. *Additional identifier types beyond those used for data or software are also accepted, including ARXIV, ECLI, Handle, ISSN, ISBN, PMID, PMCID, and UUID. Crossref maintains an expansive set of relationship types to support the various resources linked in the research article nexus. For data and software, we recommend “isSupplementedBy” and “references” as relationship types in the metadata. Use the former if it was generated de novo as part of the research results. For those generated by another project and then reused, we recommend applying “references” in the relationship type. These were selected in consultation with DataCite and data working groups. 
They will provide the level of specificity requested by the community.\nTo illustrate how to represent the link within the metadata deposit, we offer two examples from two popular dataset identifiers, one for each of the relationship types.\nDataset Snippet of deposit XML containing link Dataset with DOI: Data from: Extreme genetic structure in a social bird species despite high dispersal capacity. Database: Dryad Digital RepositoryDOI: https://0-doi-org.libus.csd.mu.edu/10.5061/dryad.684v0 \u0026lt;program xmlns=\u0026quot;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026quot;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;Data from: Extreme genetic structure in a social bird species despite high dispersal capacity\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026quot;isSupplementedBy\u0026quot; identifier-type=\u0026quot;doi\u0026quot;\u0026gt;10.5061/dryad.684v0\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; Dataset with accession number: NKX2-5 mutations causative for congenital heart disease retain functionality and are directed to hundreds of targets Database: Gene Expression Omnibus (GEO) Accession number: GSE44902 URL: https://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/geo/query/acc.cgi?acc=GSE44902 \u0026lt;program xmlns=\u0026quot;http://0-www-crossref-org.libus.csd.mu.edu/relations.xsd\u0026quot;\u0026gt; \u0026lt;related_item\u0026gt; \u0026lt;description\u0026gt;NKX2-5 mutations causative for congenital heart disease retain and are directed to hundreds of targets\u0026lt;/description\u0026gt; \u0026lt;inter_work_relation relationship-type=\u0026quot;references\u0026quot; identifier-type=\u0026quot;Accession\u0026quot;\u0026gt;GSE44902\u0026lt;/inter_work_relation\u0026gt; \u0026lt;/related_item\u0026gt; \u0026lt;/program\u0026gt; In the examples above, the Dryad dataset was generated as part of the research published in an article. Hence, it contains the “isSupplementedBy” relationship type. The GEO dataset was reused by and referenced in a scholarly article published separate from the project that generated this dataset. Hence, it contains the “references” relationship type. Both Crossref and DataCite employ this method of linking. Data repositories who register their content with DataCite follow the same process and apply the same metadata tags. This means that we achieve direct data interoperability with links in the reverse direction (data and software repositories to journal articles).\nReferences Another mechanism for depositing data and software citations is to insert it into the manuscript’s references. Publishers then deposit it as part of the article’s references. To do so, publishers follow the general process for depositing references. (Visit Crossref’s Support page for step-by-step instructions.)\nPublishers can deposit the full data or software citation as a unstructured reference. \u0026lt;citation key=\u0026quot;ref=3\u0026quot;\u0026gt; \u0026lt;unstructured_citation\u0026gt;Morinha F, Dávila JA, Estela B, Cabral JA, Frías Ó, González JL, Travassos P, Carvalho D, Milá B, Blanco G (2017) Data from: Extreme genetic structure in a social bird species despite high dispersal capacity. Dryad Digital Repository. http://0-dx-doi-org.libus.csd.mu.edu/10.5061/dryad.684v0\u0026lt;/unstructured_citation\\\u0026gt; \u0026lt;/citation\u0026gt; \u0026lt;/citation_list\u0026gt;\nOr they can employ any number of reference tags currently accepted by Crossref. 
Most do not readily suit datasets and software as the suite was originally established to match article and book references. If the resource does not have a DOI, however, this leaves out substantial metadata needed to identify and describe the dataset. \u0026lt;citation key=\u0026quot;ref2\u0026quot;\u0026gt; \u0026lt;doi\u0026gt;10.5061/dryad.684v0\u0026lt;/doi\u0026gt; \u0026lt;cYear\u0026gt;2017\u0026lt;/cYear\u0026gt; \u0026lt;author\u0026gt;Morinha F, Dávila JA, Estela B, Cabral JA, Frías Ó, González JL, Travassos P, Carvalho D, Milá B, Blanco G\u0026lt;/author\u0026gt; \u0026lt;/citation\u0026gt; We are exploring the JATS4R recommendations while we consider expanding the current collection. We welcome additional suggestions from the community.\nPrecise, accessible links Crossref’s infrastructure is set up to facilitate the flow of information about scholarly works across the research network. We maintain a fair degree of flexibility both in the structure and completeness of metadata deposited. The aim, though, is to make the links rich in metadata, accurate in associating literature with the corresponding resource, and available to both human and machine consumers as per Principles #5 and #7 in the Joint Declaration of Data Citation Principles.\nAs with the other associated resources in the article nexus, we recommend depositing data/software links in the publication metadata via relationships. Publishers are free to do this on top of or independent of references. Relationship metadata offer a high degree of precision. References are a hodgepodge of various resources cited by the publication, including articles, books, media, blogs, reference materials, etc., and data citations are hard to isolate. Furthermore, the unstructured, “spaghetti string” text is difficult for systems to parse and extract specific information from.\nWith relationship metadata, data and software resources are expressly designated. We obtain a more accurate link that specifies identifier type and explicitly identifies data either generated as part of the research shared in the paper or reused from existing data. The richer metadata contained here enables consumers to conduct powerful queries based on different attributes (identifier type, description, relationship), taking data discovery and mining to the next level.\nFurthermore, relationships are important for achieving full accessibility of data and software citations. Access to references is based on publisher permission so not all data citations can be shared (excluding DataCite DOIs). In contrast, all links deposited via relationships are publicly available.\nPublishers play an important role in supporting research validation and reproducibility. Data \u0026amp; software citation is a basic part of this practice, and instrumental in enabling the reuse and verification of these research outputs, tracking their impact, and creating a scholarly structure that recognizes and rewards those involved in producing them.
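As a final illustration, the relationship snippet shown in the examples above can be generated programmatically before being folded into a standard deposit. The sketch below is hypothetical helper code, not an official Crossref tool; it simply reproduces the related_item structure from the examples, using the canonical relations.xsd namespace.

```python
# Hypothetical helper: build the relations.xsd snippet shown in the examples above,
# ready to embed in a standard Crossref metadata deposit.
import xml.etree.ElementTree as ET

REL_NS = "http://www.crossref.org/relations.xsd"
ET.register_namespace("", REL_NS)  # serialize with a default namespace, as in the examples

def relation_snippet(description, identifier, identifier_type="doi",
                     relationship_type="isSupplementedBy"):
    """Return serialized XML linking a publication to one dataset or software item."""
    program = ET.Element(f"{{{REL_NS}}}program")
    item = ET.SubElement(program, f"{{{REL_NS}}}related_item")
    ET.SubElement(item, f"{{{REL_NS}}}description").text = description
    relation = ET.SubElement(item, f"{{{REL_NS}}}inter_work_relation", {
        "relationship-type": relationship_type,  # "isSupplementedBy" or "references"
        "identifier-type": identifier_type,      # e.g. "doi" or "Accession", as above
    })
    relation.text = identifier
    return ET.tostring(program, encoding="unicode")

print(relation_snippet(
    "Data from: Extreme genetic structure in a social bird species despite high dispersal capacity",
    "10.5061/dryad.684v0",
))
```

The same call with relationship_type="references" would cover a reused dataset such as the GEO example above.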
For the full scoop of how to deposit (i.e., technical details and more), we encourage you to reference the Crossref Data \u0026amp; Software Citations Deposit Guide and contact us (support@crossref.org) with questions or feedback.\n", "headings": ["Please visit Crossref\u0026rsquo;s official Data \u0026amp; Software Citations Deposit Guide for deposit details.","Relationships","For data \u0026amp; software, we ask for:","References","Precise, accessible links"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/conferences/", "title": "Conferences", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/taking-the-con-out-of-conferences/", "title": "Taking the “con” out of conferences", "subtitle":"", "rank": 1, "lastmod": "2017-02-15", "lastmod_ts": 1487116800, "section": "Blog", "tags": [], "description": "TL;DR\nCrossref and DataCite are forming a working group to explore conference identifiers and project identifiers. If you are interested in joining this working group and in doing some actual work for it, please contact us at community@crossref.org and include the text conference identifiers WG in the subject heading. ", "content": "TL;DR\nCrossref and DataCite are forming a working group to explore conference identifiers and project identifiers. If you are interested in joining this working group and in doing some actual work for it, please contact us at community@crossref.org and include the text conference identifiers WG in the subject heading. All the times I could have gone to Walt Disney World\u0026hellip; Back around 2010 I added a filter to my email settings that automatically flagged and binned any email that contained the word \u0026ldquo;Orlando.\u0026rdquo; Back then this was a remarkably effective way of detecting and ignoring spam from the numerous fake technology conferences that all seemed to advertise the city of Orlando, Florida as the location for their non-events. I suspected they all chose Orlando as it would provide the punter that little bit of extra motivation to pay and register for the conference as they simultaneously plotted how they could tag-on some holiday time at Walt Disney World. I finally had to remove the filter last year when I realised that the scammers had moved on to advertising more realistically gritty cities in their calls for submissions and that meanwhile I had managed to miss all the mail informing me of the ALA\u0026rsquo;s summer 2016 meeting held in, you guessed it\u0026hellip; Orlando. Clearly we need better mechanisms to flag dubious conferences. Late last year Crossref\u0026rsquo;s Strategic initiatives group was approached by “CounterMock,” a group of Crossref members (including major proceedings publishers like Springer Nature, Elsevier, IEEE, ACM, IET, etc) who were actively exploring the establishment of an identifier system and registry for scholarly conferences. The long term goal of the group is to make it easier for publishers, researchers and other stakeholders to identify fraudulent and/or low-quality conferences. There has recently been a proliferation of conferences that seem to have been developed specifically to dupe international and early-career researchers into paying substantial conference and publication fees. 
Sometimes these conferences are intentionally named after long-standing and well-respected conferences. At worst these conferences are entirely fake - no meetings are held and no publications are issued. At best they produce subpar publications of questionable academic integrity. Members of the group are concerned that these \u0026ldquo;mock conferences\u0026rdquo; (hence \u0026ldquo;CounterMock\u0026rdquo;) will: Waste researcher time.\nWaste publisher time.\nUndermine academic trust in conferences and conference proceedings as a trustworthy means of scholarly communication.\nThe group understands that the \u0026ldquo;evaluation of a conference\u0026rsquo;s quality\u0026rdquo; and the \u0026ldquo;unambiguous identification of conferences\u0026rdquo; are separate concerns (as they are with publications, contributors, etc). But they also realise that it will be hard to address the quality issue without an infrastructure for unambiguously identifying conferences and providing meaningful provenance metadata about those conferences. Moreover, having unique identifiers for conference series would enable a number of other applications. Examples include conference-level metrics, better and more structured info about forthcoming conferences on a certain topic, and more visibility of conferences in research evaluation. Springer Nature has built a proof-of-concept prototype of a conference identifier system and shown it to a number of other parties. The feedback has been that there is interest in the project, but that the consensus is that it should be managed and run by a neutral industry group. They have approached us to form a working group and explore how this project can be advanced. This is all good. Crossref itself doesn\u0026rsquo;t make value judgements on the quality of content registered with us. Crossref DOIs are not quality marks. But we do believe that unambiguous identification of research artifacts is a prerequisite to building effective trust and reputation tools.\nIt is possible that the issue of conference identifiers can be folded into the work we are doing with DataCite and ORCID on organization identifiers. For example, some have argued that organization identifiers should include identifiers for projects or other less formal and more ephemeral corporate entities that are often included in affiliation and/or bibliographic data. It is possible to make similar arguments in the case of conferences.\nOn the other hand we have also been interested in the issue of \u0026ldquo;project identifiers.\u0026rdquo; Martin Fenner and Tom Demeranville have made a strong argument that \u0026lsquo;projects\u0026rsquo; can be thought of as containers for collections of project outputs, project members and project funders. Again, it seems plausible that one could make the same case for conferences.\nAt the very least it is important to coordinate any work that is done on conference, project and organization identifiers. This is why we have decided to form a joint Crossref/DataCite working group to specifically explore conference and project identifiers and determine how they relate both to each other and to our already ongoing work with ORCID on organization identifiers. 
Additionally, it is likely that the working group will discuss and explore how conference/project identifiers might be used for increasing the transparency of peer review at conferences, better attribution for programme chairs and program committee members, and how they might be incorporated into other services like Crossref Metadata Search, DataCite search, CrossMark, etc.\nIf you are interested in doing some work on this- then please indicate your interest in joining a working group by sending email to community@crossref.org and include the text conference identifiers WG in the subject heading.\nWe will update this blog as the group convenes and makes progress.\n", "headings": ["All the times I could have gone to Walt Disney World\u0026hellip; "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/contributor-id/", "title": "Contributor ID", "subtitle":"", "rank": 1, "lastmod": "2017-01-22", "lastmod_ts": 1485043200, "section": "Labs", "tags": [], "description": "Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Crossref Contributor ID started out in 2007 as a project to assign \u0026ldquo;author DOIs.\u0026rdquo; Initial discussions were held in a meeting in Washington DC on February 14th 2007. The project eventually metamorphosed into what is known today as ORCID. Watch this space as we will eventually collect and link to documents and presentations that trace the genisis and history of this important initiative.", "content": " Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Crossref Contributor ID started out in 2007 as a project to assign \u0026ldquo;author DOIs.\u0026rdquo; Initial discussions were held in a meeting in Washington DC on February 14th 2007. The project eventually metamorphosed into what is known today as ORCID. Watch this space as we will eventually collect and link to documents and presentations that trace the genisis and history of this important initiative.\r", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linking-dois-using-https-the-background-to-our-new-guidelines/", "title": "Linking DOIs using HTTPs: the background to our new guidelines", "subtitle":"", "rank": 1, "lastmod": "2017-01-17", "lastmod_ts": 1484611200, "section": "Blog", "tags": [], "description": "Recently we announced that we were making some new recommendations in our DOI display guidelines. One of them was to use the secure HTTPS protocol to link Crossref DOIs, instead of the insecure HTTP.\n", "content": "Recently we announced that we were making some new recommendations in our DOI display guidelines. One of them was to use the secure HTTPS protocol to link Crossref DOIs, instead of the insecure HTTP.\nSome people asked whether the move to HTTPS might affect their ability to measure referrals (i.e. where the people who visit your site come from).\nTL;DR: Yes Yes. If you do not move your DOI links to HTTPS, Crossref, its members and the members of other DOI registration agencies (e.g. DataCite, JLC, CNKI) will find it increasingly difficult to accurately measure referrals. You should link DOIs using HTTPS. In fact, if you do not support HTTPS on your site now, it is likely that your ability to measure referrals is already impaired. 
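One quick way to get a rough sense of where you stand is to probe your landing-page domain over both protocols. The sketch below is a minimal, unofficial example (Python, assuming the requests library is available; "example.com" is a placeholder for your own domain), not a definitive test of HTTPS readiness:

```python
# Rough sketch: see which of HTTP / HTTPS a landing-page host will serve.
# Assumes the `requests` library is installed; "example.com" is a placeholder.
import requests

def protocol_support(host, timeout=10):
    """Return which of http/https the host responds to with a successful status."""
    support = {}
    for scheme in ("http", "https"):
        try:
            response = requests.get(f"{scheme}://{host}/", timeout=timeout)
            support[scheme] = response.ok
        except requests.RequestException:
            support[scheme] = False
    return support

print(protocol_support("example.com"))  # e.g. {'http': True, 'https': True}
```

A site that only answers on HTTP is already losing referral data today, for the reasons set out below.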
If you do not already have a plan to move your site to HTTPS, you should develop one.\nIf you have already transitioned your site to HTTPS, you should follow the new guidelines and link DOIs via HTTPS as soon as possible. As it stands, you are not sending any referrer information when DOIs are clicked on and followed from your site. You should also make sure that the URLs you have registered with Crossref are HTTPS URLs, otherwise you will not get referrer information on your site when they are followed.\nRead on if you want some grody details. We\u0026rsquo;ll try to keep it as non-technical as possible.\nTwo protocols, one web To start with your web browser supports two closely related protocols, HTTP and HTTPS. The first, HTTP, is the protocol that the web started out with. It is an unencrypted protocol and it is also easy to intercept and modify. It is also very easy and inexpensive to implement.\nThe second protocol, HTTPS, is a secure version of the first protocol. It is very difficult to intercept and modify. It has historically been more complex and expensive to implement. Here you might say - \u0026ldquo;Great, but HTTPS has been around for a long time. We\u0026rsquo;ve used it for sensitive transactions like authentication and credit card transactions. Why do we want to use DOI links with HTTPS?\u0026rdquo; Why are you suggesting that we should even consider moving our entire site to HTTPS? The pressure to move to HTTPS The insecure HTTP protocol has become a major vector for a lot of security issues on the web. It allows user web pages to be intercepted and modified between the server and the browser. This flaw is being abused for everything from spying, to inserting unwanted advertisements into web pages, to distributing viruses, ransomware and botnets. As such, there has been a steady drumbeat of industry encouragement to move to the more secure HTTPS protocol for all website functions.\nWe are not going to argue all the points here. Instead we will mention the major constituencies that are advocating for a move to HTTPS and provide you with some pointers. We apologise that these are all so US-centric, but a lot of the web\u0026rsquo;s global direction does seem to be presaged by US adoption trends.\nGoogle It is probably easiest to start with Google, since its practices tend to focus the attention of those managing websites. Back in 2014 Google announced that they would slowly move toward including the use of HTTPS as a ranking signal. In 2015 they upped the ante by announcing that they would start indexing HTTPS versions of pages by default. It looks like in early 2017 they will really start to take the gloves off as they modify their Chrome browser to flag sites that do not use HTTPS as being insecure.\nEvery top website, evah It looks like Google's plan is working too. Their 2016 transparency report shows that most top websites have already transitioned to HTTPS and that this translates to approximately 25% of all web traffic worldwide taking place using HTTPS. Indeed, over 50% of all web pages viewed by desktop users are delivered via HTTPS. Government agencies The USA’s Whitehouse issued [a directive instructing all Federal websites to adopt HTTPS]. As of December 2016 64% of federal websites have made the transition. Libraries Much of the pressure to move to HTTPS is coming from the library community who have a historical tradition of protecting patron privacy and resisting efforts to censor content. 
The third principle of the American Library Association's code of ethics reads: We protect each library user\u0026rsquo;s right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted.\nRecently there has been a major push by the Electronic Frontier Foundation to get libraries to adopt a number of security and privacy practices, including the use of HTTPS by all library systems as well as those used by library vendors.\nWhat are Crossref members doing about HTTPS? How big an issue is this? How many of our members have moved to HTTPS? How many plan to? Well, we looked at the URLs that are registered with Crossref and we tested them with both protocols. Eventually we will write a blog post detailing our findings - but the highlights are:\nSlightly fewer than half of the member domains tested only support HTTP.\nSlightly fewer than half of the member domains tested support both HTTP and HTTPS.\nAbout 370 of the member domains tested only support HTTPS.\nThe transition to HTTPS and the issue of DOI referrals The HTTP referrer is a piece of information passed on by a browser that indicates the site from which the user navigated. So, for example, if a user visiting site A clicks on a link which takes them to site B, site B will then record in its logs that a user visited them from site A. Obviously, this is important information for understanding where your web site traffic comes from. The default rules for referrals are[1]:\nIf you link between two sites with the same level of security, all referral information is retained.\nWhen you follow a link from an insecure (HTTP) web site to a secure (HTTPS) site, referral data is passed on to the secure web site.\nIf you follow a link from a secure (HTTPS) web site to an insecure (HTTP) site, referral data is not passed on to the insecure web site.\nSo let's see what the situation would look like with normal links. If we had two sites, `A` \u0026amp; `B`, the following table maps the possible combinations of protocols that can be used to link from `A` to `B`. So, for example, row #2 reads: A user browses site A using HTTP and clicks on an HTTPS link to publisher B who hosts their site using HTTPS. The last column indicates if the referrer information is passed along by the browser. In the case of row #2, the answer is “yes”. The user has navigated from a less secure site to a more secure site.\nUser views site A using | Site A links to site B using | Browser reports referrer to site B\nHTTP | HTTP | Yes\nHTTP | HTTPS | Yes\nHTTPS | HTTP | No\nHTTPS | HTTPS | Yes\nBut this gets a little more complicated with DOIs. In this case publisher `A` links to publisher `B` through the DOI system. This means there are two parts to the link. The first `(A-\u0026gt;doi.org)` results in a redirect `(doi.org-\u0026gt;B)`. Again we use the last columns to indicate when referrer information is passed along to site B. Again, let’s look at row #2. It reads: A user browses the site of member A using HTTP and clicks on an HTTP DOI link. The DOI system redirects the browser to member B using an HTTPS link registered with Crossref by member B. The middle column and the last column record whether Crossref and the publisher were able to see referrer information. The answer in both cases is “yes”. In the first case (A-\u0026gt;DOI) because the link was from a less secure site (HTTP on A) to a more secure site (HTTPS at DOI). 
The second case because the link is between two sites at the same security level (HTTP).\nUser views site A using | Site A links DOI using | Browser reports referrer to Crossref[2] | Crossref redirects to site B using[3] | Browser reports referrer to site B\n1 | HTTP | HTTP | Yes | HTTP | Yes\n2 | HTTP | HTTP | Yes | HTTPS | Yes\n3 | HTTP | HTTPS | Yes | HTTP | Yes\n4 | HTTP | HTTPS | Yes | HTTPS | Yes\n5 | HTTPS | HTTP | No | HTTP | No\n6 | HTTPS | HTTP | No | HTTPS | No\n7 | HTTPS | HTTPS | Yes | HTTP | No\n8 | HTTPS | HTTPS | Yes | HTTPS | Yes\nSo what does this mean? Our old display guidelines recommended linking DOIs using HTTP. Rows #1, #2, #5, #6 represent the status quo. About half of our members support HTTPS. A few support it exclusively and it seems, given the industry pressures mentioned above, those who support both protocols are likely doing so as a transition stage to HTTPS-only sites.\nThis means that the scenarios represented in rows #5 \u0026amp; #6 are already happening. The referral information for any user viewing one of our member sites using HTTPS is being lost when they click on DOIs that use the HTTP protocol. Crossref doesn\u0026rsquo;t get the referral data and neither does the member whose DOI has been clicked on.\nOf course this applies to non-member sites that link to DOIs as well. Wikipedia is the largest referrer of DOIs from outside the industry. In 2015 The Wikimedia Foundation made a highly publicised transition to HTTPS on all of their sites. This means that any of our members who are running HTTP sites have already lost the ability to see any referral information from Wikipedia on their own sites. However, Crossref worked closely with Wikimedia to ensure that, at the very least, Crossref was still able to record Wikimedia referral data on behalf of our members.\nA solution It is largely this work with Wikimedia that has helped us to understand just how important it is for Crossref to get ahead of the curve in helping our community to transition to HTTPS. As long as our members are running a combination of HTTP and HTTPS sites, there is no way for our community to avoid some disruption in the flow of referral data. And we certainly would never entertain the notion of asking our members to keep using HTTP. The best we can do is recommend a practice that will help smooth the transition to HTTPS. That is what we are doing. Our new recommendation is to move to linking DOIs using HTTPS. This is represented in rows #3, #4, #7 and #8 in the table above. This is a particularly important step for our members who have already moved to hosting their sites on HTTPS. As long as they are using HTTP DOIs on their site, they will be sending no referral traffic to Crossref, other Crossref members or other users of the DOI infrastructure. This is captured in scenarios #5 and #6.\nIf our linking guidelines are followed during the industry’s transition to HTTPS, then scenarios #5 and #6 will eventually be replaced with scenario #7. It is still not perfect, but at least it means that, during the transition, publishers who are still running HTTP sites will be able to get some DOI referral data via Crossref. And of course, once our members have widely transitioned to HTTPS, everything will go back to normal and they will be able to see referral data on their own sites as well (i.e. they will have moved from the state represented in row #1 to the state represented in row #8).\nIn summary, please change your sites to use HTTPS to link DOIs. 
They should look like this:\nhttps://doi.org/10.7554/eLife.20320\nFAQ\nQ: If I have moved my site to HTTPS, do I need to redeposit my URLs so that they use the HTTPS protocol instead?\nA: Yes. If you want to be able to still collect referrer information on your site (scenario #8) as opposed to via Crossref (scenario #7).\nQ: But can’t I avoid redepositing my URLs and get referrer data again if I simply redirect HTTP URLs to HTTPS on my own site?\nA: No. The browser will strip referrer information if there is any HTTP step in the redirects. Even if the redirect is done on your own site.\nQ: Can I avoid having to redeposit all my URLs? Can’t Crossref just update the protocol on our existing DOIs for us?\nA: Contact support@crossref.org. We’ll see what we can do.\nQ: What about all the old PDFs that are out there? They link to DOIs using HTTP.\nA: That is true. But links followed from PDFs don’t send referrer information anyway.\nQ: And what about my new PDFs? Should I start linking DOIs from them using HTTPS?\nA: Probably. But not because of the DOI referrer problem. Simply because HTTPS is a more secure, private, and future-proof protocol.\nQ: Don’t some countries block HTTPS?\nA: Typically countries block specific sites and/or services. We do not know of any countries that have a blanket block on the HTTPS protocol.\nQ: I use a link resolver that uses OpenURL + a cookie pusher to redirect my users to local resources. What do I need to do?\nA: You need to change your cookie pusher script to enable the Secure attribute for cookies for HTTPS-linked DOIs.\nQ: Can I use protocol-relative URLs (e.g. //doi.org/10.7554/eLife.20320)?\nA: Protocol-relative URLs can be used in HTML HREFs to help ease the transition from HTTP to HTTPS, but use the full protocol in the text of the DOI link itself. So, for example, the following is fine: https://0-doi-org.libus.csd.mu.edu/10.7554/eLife.20320\nQ: I hear that HTTP and HTTPS versions of URI identifiers are considered to be different identifiers. Doesn’t this mean that by moving to HTTPS we are essentially doubling the number of DOI-based identifiers out there?\nA: Yes. It isn’t a problem that is only being faced by DOIs. Basically all HTTP-URI based identifiers face the same issue. We will put in place appropriate same-as assertions in our metadata and HTTP headers to allow people to understand that the HTTP and HTTPS representations of the DOI point to the same thing. On a personal note (@gbilder speaking- don’t blame @CrossrefOrg) - it breaks my brain that the official line is that the protocol difference means they are different identifiers. As a practical matter (a concept the W3C seems to be increasingly alienated from), it would be insane for anybody to follow this policy to the letter. You can probably be pretty safe swapping the protocols on DOIs and being sure you will get the same thing.\nQ: I see that the Crossref site isn’t running on HTTPS. Are you just a bunch of hypocrites?\nA: Yes. The site will be moving to HTTPS-only very soon. Then we won’t be. We do now.\nReferences\nThese rules can be tweaked using meta referrer tags (https://www.w3.org/TR/referrer-policy/), but not in any way that both avoids the fundamental problems outlined here and that preserves the security/privacy characteristics that are the very reason to implement HTTPS in the first place.\nTo be pedantic- it actually passes referrer information to the DOI proxy (https://0-doi-org.libus.csd.mu.edu/), which in turn is reported to Crossref. 
To continue with the pedantry- the DOI proxy does the redirect based on the URL member B has deposited with Crossref. ", "headings": ["TL;DR: Yes","Two protocols, one web","The pressure to move to HTTPS","Google","Every top website, evah","Government agencies","Libraries","What are Crossref members doing about HTTPS?","The transition to HTTPS and the issue of DOI referrals","A solution","FAQ","References"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/web/", "title": "Web", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2016/", "title": "2016", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/included-registered-available-let-the-preprint-linking-commence./", "title": "Included, registered, available: let the preprint linking commence.", "subtitle":"", "rank": 1, "lastmod": "2016-12-05", "lastmod_ts": 1480896000, "section": "Blog", "tags": [], "description": "We began accepting preprints as a new record type last month (in a category known as “posted content” in our XML schema). Over 1,000 records have already been registered in the first few weeks since we launched the service.\nBy extending our existing services to preprints, we want to help make sure that:\nlinks to these publications persist over time they are connected to the full history of the shared research the citation record is clear and up-to-date. ", "content": "We began accepting preprints as a new record type last month (in a category known as “posted content” in our XML schema). Over 1,000 records have already been registered in the first few weeks since we launched the service.\nBy extending our existing services to preprints, we want to help make sure that:\nlinks to these publications persist over time they are connected to the full history of the shared research the citation record is clear and up-to-date. It’s not just collecting the metadata however, it’s also making it available so that it can be as widely used as possible. Preprint metadata is no different. As with all record types, we make the metadata available for machine and human access, across multiple interfaces (e.g. REST API, OAI-PMH, Crossref Metadata Search)\nFor example, you can see information on the preprint https://0-doi-org.libus.csd.mu.edu/10.20944/preprints201608.0191.v1 in a number of ways:\nhttps://api.crossref.org/v1/works/10.20944/preprints201608.0191.v1/transform/application/vnd.crossref.unixsd+xml https://web.archive.org/web/20131229210637/http://0-search-crossref-org.libus.csd.mu.edu//?q=10.20944%2Fpreprints201608.0191.v1 If you want to see all the preprint metadata deposited so far, try https://0-api-crossref-org.libus.csd.mu.edu/v1/types/posted-content/works. Over 1,000 records have already been registered in the first few weeks since we launched the service.\nCrossref members depositing preprints need to make sure they:\nRegister content using the posted content metadata schema. 
Respond to our match notifications that a manuscript / version of record (AM/VOR) has been registered and link to that within seven days. Label the manuscript as a preprint clearly, above the scroll on the preprint landing page, and ensure that any link to the AM/VOR is also prominently displayed above the scroll. It’s important to clearly label the record type so we can ensure that the connections between preprints and the associated literature are clearly visible, to both humans and machines.\nAs with other record types, there is a registration fee to include content in the Crossref system. For preprints, it’s $0.25 fee for current preprint files and $0.15 for backfiles.\nAre you an existing Crossref member who wants to assign preprint DOIs? Let\u0026rsquo;s talk about getting started or migrating any existing content over to the dedicated preprint deposit schema.\nInterested in becoming a Crossref member to assign DOIs to your preprints? Contact our membership specialist so we can answer any questions and get you set up as a member.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-rest-api.-part-3-with-share/", "title": "Using the Crossref REST API. Part 3 (with SHARE)", "subtitle":"", "rank": 1, "lastmod": "2016-12-01", "lastmod_ts": 1480550400, "section": "Blog", "tags": [], "description": "As a follow-up to our blog posts on the Crossref REST API we talked to SHARE about the work they’re doing, and how they’re employing the Crossref metadata as a piece of the puzzle. Cynthia Hudson-Vitale from SHARE explains in more detail…\n", "content": "As a follow-up to our blog posts on the Crossref REST API we talked to SHARE about the work they’re doing, and how they’re employing the Crossref metadata as a piece of the puzzle. Cynthia Hudson-Vitale from SHARE explains in more detail…\nCynthia Hudson-Vitale, digital data librarian in Research Data and GIS Services at Washington University in St. Louis Libraries and visiting program office for SHARE\nSHARE (http://share-research.org) is building a free, open, data set about research and scholarly activities across their life cycle. It is a higher education initiative whose mission is to maximize research impact by making research widely accessible, discoverable, and reusable. SHARE’s data set is free, openly licensed, and built with open source technology developed at the Center for Open Science (COS). Launched in beta in April 2015 the data set has grown to more than 6 million records from 100+ providers, including Crossref, Social Science Research Network (SSRN), DataONE, 50+ library institutional repositories, and more.\nHow is the Crossref REST API used within SHARE?\nSHARE currently harvests metadata from Crossref using the Crossref application programming interface (API). We pull such metadata values as journal title, author, DOI, journal name, and publisher, to name just a few. This metadata is then fed into our data processing pipeline, normalized, and aggregated into the full data set.\nWhat are the future plans for SHARE?\nPhase II of SHARE, launched in late 2015, focuses on adding metadata providers, enhancing the metadata, and making connections and links between the metadata records. 
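To give a flavour of the harvesting step described above, here is a minimal sketch of pulling one page of records from the public REST API and extracting a few of the fields SHARE mentions (Python, assuming the requests library; the date filter and mailto address are illustrative placeholders, not values SHARE actually uses):

```python
# Minimal sketch of harvesting a page of Crossref metadata, in the spirit of
# the SHARE workflow described above. `requests` is assumed to be installed;
# the filter and mailto values are placeholders.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "filter": "from-index-date:2016-11-01",
        "rows": 20,
        "mailto": "you@example.org",
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    record = {
        "doi": item.get("DOI"),
        "title": (item.get("title") or [None])[0],
        "journal": (item.get("container-title") or [None])[0],
        "publisher": item.get("publisher"),
        "authors": [
            f"{a.get('given', '')} {a.get('family', '')}".strip()
            for a in item.get("author", [])
        ],
    }
    print(record)
```

A real harvester would page through results (for example with the API's cursor support) and feed each record into a normalisation pipeline, as SHARE describes.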
These links will show the entire life cycle of research and scholarship—connecting a data management plan, grant award information, data deposits, analytic/software code, pre-publications, final manuscripts, and more.\nTo move these plans forward, SHARE is applying machine-learning and automation techniques and working with the community to verify metadata enhancements and curate the metadata. Current technology work focuses on imputing subject domain keywords and object types into the SHARE data set using learning models and heuristics. Data models and schemas are in development to connect the research lifecycle, connect multiple instances of an object to a single entity, and capture metadata provenance.\nWhat else would SHARE like to see in Crossref metadata?\nWe would love to see rights-declaration metadata elements and article references/citations included in the metadata about digital objects. The rights-declaration information is invaluable for individuals who want to know what category the object is in (public domain, copyrighted, etc.), what constraints or permission requirements exist, contact information, and more. Additionally, networks of research can be discovered and meta-scholarship facilitated by making article reference lists machine-readable and openly available. What’s next?\nDoes this give you any ideas? Feel free to get in touch with questions or take the API for a spin yourself and let us know what you can do with it! ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/call-for-participation-membership-fees-committee/", "title": "Call for participation: Membership & Fees Committee", "subtitle":"", "rank": 1, "lastmod": "2016-11-29", "lastmod_ts": 1480377600, "section": "Blog", "tags": [], "description": "Crossref was founded to enable collaboration between publishers. As our membership has grown and diversified over recent years, it’s becoming even more vital that we take input from a representative cross-section of the membership. This is especially important when considering how fees and policies will affect our diverse members in different ways.\n", "content": "Crossref was founded to enable collaboration between publishers. As our membership has grown and diversified over recent years, it’s becoming even more vital that we take input from a representative cross-section of the membership. This is especially important when considering how fees and policies will affect our diverse members in different ways.\nAbout the M\u0026amp;F Committee The Membership \u0026amp; Fees Committee (M\u0026amp;F Committee) was established in 2001 and plays an important role in Crossref’s governance. Made up of 10-12 organizations of both board members and regular members, the group makes recommendations to the board about fees and policies for all of our services. They regularly review existing fees to discuss if any changes are needed. They also review new services while they are being developed, to assess if fees should be charged and if so, what those fees should be. For example, the committee recently made recommendations to the board about the fees for a new service called Event Data that we’ll launch soon, and the Content Registration fees for preprints. In addition, the board can also ask the committee to address specific issues about policies and services. 
Increasingly, the committee works with the outreach team to include research and survey insights.\nAbout committee participation The M\u0026amp;F Committee meets via one-hour conference calls about six times a year, although this can vary depending on what issues the committee is considering. Often proposals are developed by staff and then reviewed and discussed by the committee - so there is reading to do in preparation for the calls.\nThis is very important work and in order to ensure that the committee is broadly representative of Crossref’s diverse membership we are seeking expressions of interest from members who would like to serve on the M\u0026amp;F Committee for 2017. Appointments are for one year and members can serve multiple terms.\nAbout you In view of our commitment to be representative of the membership we are refreshing the committee and want to have engaged and interested people from a diverse set of members join.\nIf you are interested in joining the committee and helping Crossref fulfil its mission please email feedback@crossref.org with your name, title, organization and a short statement about why you want to serve on the committee by December 19th, 2016. Scott Delman, Director of Group Publishing, ACM is the current Chair of the committee and will review the expressions of interest with me, Ed Pentz, Executive Director, to form the committee.\nThanks for your interest.\n", "headings": ["About the M\u0026amp;F Committee","About committee participation","About you"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/committees/", "title": "Committees", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-look-back-at-live16/", "title": "A look back at LIVE16", "subtitle":"", "rank": 1, "lastmod": "2016-11-24", "lastmod_ts": 1479945600, "section": "Blog", "tags": [], "description": "Crossref LIVE16 opened with a Mashup Day on 1st November 2016 in London. Attendees from the scholarly communications world met to chat with Crossref team members in an open house atmosphere. The Crossref team put their latest projects on display and were met with questions, comments, and ideas from members and other metadata folks. Here’s what it looked like — you may recognize a few familiar faces. ", "content": "Crossref LIVE16 opened with a Mashup Day on 1st November 2016 in London. Attendees from the scholarly communications world met to chat with Crossref team members in an open house atmosphere. The Crossref team put their latest projects on display and were met with questions, comments, and ideas from members and other metadata folks. Here’s what it looked like — you may recognize a few familiar faces. Crossref LIVE16 in London LIVE16 continued with the Conference Day on 2nd November, a plenary session with invited speakers and presentations by the Crossref team. Here are the presentations, in chronological order. 
Dario Taraborelli speaks on “Wikipedia’s role in the dissemination of scholarship” Ian Calvert speaks on: “You don’t have metadata (and how to befriend a data scientist)” Ed Pentz speaks on “Crossref’s outlook \u0026amp; key priorities”\nGinny Hendricks speaks on “A vision for membership”\nGeoffrey Bilder speaks on “The case of the missing leg” Lisa Hart Martin speaks on “The meaning of governance”\nJennifer Lin speaks on “New territories in the Scholarly Research Map”\nChuck Koscher speaks on “Relationships and other notable things”\nCarly Strasser speaks on “Funders and Publishers as Agents of Change” April Hathcock speaks on “Opening Up the Margins”\nYour survey feedback\nWe’re serious about making Crossref LIVE a useful and welcoming annual event for the Crossref membership as well as members of the wider scholarly communications community. That’s why we appreciate responses from the attendees who answered our survey. Here’s what we have learned from your feedback:\nContent\nYou want speakers to tell you something new, even if you don’t agree with their points of view Your favorite speakers were those who inspired you You prefer an unscripted presentation style that makes complex topics accessible to all You’re not as interested in the mechanics of Crossref’s annual election as we are Format\nYou enjoyed the diversity of presenters and would like even more external speakers You want more opportunity to ask us technical questions on the Mashup Day You want to see panel discussions in addition to individual presentations on the Conference Day Those who attended the Conference Day only wished they had also attended the Mashup Day Atmosphere\nYou liked the casual atmosphere but wanted more seating and more dessert. So noted! LIVE17 will be held next November 14-15 in Asia. Until then, we hope you’ll have the chance to see us at the regional Crossref LIVE events we are planning around the world throughout the year. Our next local event is Crossref LIVE in Brazil, held 13 December in Campinas and 16 December in Sao Paulo. ", "headings": ["Crossref LIVE16 in London"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/april-ondis/", "title": "April Ondis", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/board-and-governance/incorporation-certificate/", "title": "Certificate of Incorporation", "subtitle":"", "rank": 1, "lastmod": "2016-11-23", "lastmod_ts": 1479859200, "section": "Board & governance", "tags": [], "description": "CERTIFICATE OF INCORPORATION\nOF\nPUBLISHERS INTERNATIONAL LINKING ASSOCIATION, INC.\nUnder Section 402 of the Not-For-Profit Corporation Law\nThe undersigned, being a natural person of at least eighteen years of age and acting as the incorporator of the corporation hereby being formed under the Not-For-Profit Corporation Law, certifies that:\nFIRST: The name of the corporation is PUBLISHERS INTERNATIONAL LINKING ASSOCIATION, INC. 
(the \u0026ldquo;Corporation\u0026rdquo;).\nSECOND: The Corporation is a corporation as defined in Subparagraph (a)(5) of Section 102 of the Not-For-Profit Corporation Law.", "content": "CERTIFICATE OF INCORPORATION\nOF\nPUBLISHERS INTERNATIONAL LINKING ASSOCIATION, INC.\nUnder Section 402 of the Not-For-Profit Corporation Law\nThe undersigned, being a natural person of at least eighteen years of age and acting as the incorporator of the corporation hereby being formed under the Not-For-Profit Corporation Law, certifies that:\nFIRST: The name of the corporation is PUBLISHERS INTERNATIONAL LINKING ASSOCIATION, INC. (the \u0026ldquo;Corporation\u0026rdquo;).\nSECOND: The Corporation is a corporation as defined in Subparagraph (a)(5) of Section 102 of the Not-For-Profit Corporation Law.\nTHIRD: The Corporation shall be a Type B corporation under Section 201 of the Not-For-Profit Corporation Law.\nFOURTH: The Corporation is formed for the following purposes:\nTo promote the development and cooperative use of new and innovative technologies to speed and facilitate scientific and other scholarly research; and shall have in furtherance of its not-for-profit corporate purposes, all of the powers conferred upon corporations organized under the Not-For-Profit Corporation Law subject to any limitations thereof contained in this Certificate of Incorporation or in the laws of the State of New York.\nFIFTH: The office of the Corporation is to be located in New York County, New York.\nSIXTH: (a) The name and the address of each of the initial directors of the Corporation are as follows:\nPieter Bolman\nAcademic Press, Inc.\n525 B Street, Suite 1900\nSan Diego, CA 92101\nMichael Spinella\nThe American Association for the Advancement of Science\n1200 New York Avenue, N.W.\nWashington, DC 20005\nMarc Brodsky\nAmerican Institute of Physics\nOne Physics Ellipse\nCollege Park, MD 20742\nJohn R. White\nAssociation for Computing Machinery\n1515 Broadway, 17th Floor\nNew York, NY 10036\nJohn Strange\nBlackwell Science\nOsney Mead\nOxford OX2 OEL, England\nJohn Regazzi\nElsevier Science\n655 Avenue of the Americas\nNew York, NY 10010\nAnthony Durniak\nIEEE\n445 Hoes Lane\nPiscataway, NJ 08855-1331\nJeffrey K. 
Smith\nKluwer Academic Publishers\nSpuiboulevard 50\n3311 GR Dordrecht, The Netherlands\nStefan von Holtzbrinck\nNature Publishing Group\n4-6 Crinan Street\nLondon N1 GXW, England\nMartin Richardson\nOxford University Press\nGreat Clarendon Street\nOxford OX2 6DP, England\nRuediger Gebauer\nSpringer Verlag\n175 Fifth Avenue\nNew York, NY 10010\nEric Swanson\nJohn Wiley \u0026amp; Sons, Inc.\n605 Third Avenue\nNew York, NY 10158-0012\n(b) If any person appointed or elected to be a director of the Corporation (i) resigns from the Board of Directors in writing, (ii) becomes sufficiently disabled so that, in the reasonable discretion of the balance of the Board of Directors, said person is unable to fulfill his or her duties as a director, or (iii) ceases to be employed by the entity which employed him or her at the time of such appointment or election (unless such entity wishes said person to remain a director and he or she agrees to do so), then said person shall be deemed to have resigned from the Board of Directors of the Corporation effective at the time at which said written resignation is received by an officer of the Corporation, the Board of Directors makes such a determination of disability, or said employment ceases, as the case may be, and the entity which employed said person at the time of his or her appointment or election shall have the right, power and authority to designate his or her successor who, subject to ratification by the balance of the Board of Directors, shall serve the balance of his or her then extant term as a director. Notwithstanding any other provision of this Certificate of Incorporation, if any entity which employs a director of the Corporation ceases to be a member of the Corporation, then said director shall be deemed to have resigned from the Board of Directors effective as of the date on which said entity ceases to be a member of the Corporation and the balance of the Board of Directors shall have the right, power and authority, as set forth in the By-Laws of the Corporation, to fill the vacancy created by such deemed resignation and the person selected by the Board to fill said vacancy shall serve the balance of the then-extant term of the director so deemed to have resigned.\n(c) Each director appointed above (or his or her duly appointed successor under the preceding paragraph) shall serve as a director for an initial term of two (2) years. At the annual meeting of members of the Corporation to be held after the first anniversary and before the second anniversary of the date of incorporation, there shall be an election for that number of directors of the Corporation designated by the Board of Directors, one-third of whom shall be elected for a term of one (1) year, one-third of whom shall be elected for a term of two (2) years, and the last third of whom shall be elected for a term of three (3) years. At each annual meeting thereafter, a number of directors equal to that of those whose terms have expired shall be elected for the term of three (3) years. At the expiration of any term of three (3) years, any director may be re-elected. In any event, each director\u0026rsquo;s term shall be deemed extended or shortened until such time as his or her successor shall be duly elected and have qualified.\nSEVENTH: The duration of the Corporation is to be perpetual.\nEIGHTH: The Secretary of State is designated as the agent of the Corporation upon whom process against the Corporation may be served. 
The post office address within the State of New York to which the Secretary of State shall mail a copy of any process against the Corporation served upon him is:\nKay Collyer \u0026amp; Boose LLP, One Dag Hammarskjold Plaza, New York, New York 10017, Attention: Claude P. Goetz, Esq.\nNINTH: The management of the Corporation and the conduct of its affairs shall be vested in its Board of Directors to the fullest extent permitted by the Not-For-Profit Corporation Law. Such right, power and authority shall include, but not be limited to, amending the Corporation\u0026rsquo;s by-laws, appointing committees of the Board of Directors, each of which, when duly appointed shall act with the delegated power and authority of the entire Board of Directors to the fullest extent permitted by the Not-For-Profit Corporation Law, establishing classes of membership and the respective rights, designations and preferences, if any, of each such class, determining criteria for membership in the Corporation and in any such class (including determining and/or varying the terms and conditions of any standard agreement between and/or among the Corporation and its members), and setting, amending and/or waiving membership dues generally and with respect to any particular member.\nTENTH: The personal liability of the directors and officers of the Corporation is hereby eliminated to the fullest extent permitted by Sections 719, 720 and 720-a of the Not-For-Profit Corporation Law, as the same may be amended and supplemented, from time to time.\nELEVENTH: The Corporation shall, to the fullest extent permitted by Sections 721 et seq. of the Not-For-Profit Corporation Law, as the same may be amended and supplemented from time to time, indemnify any and all persons whom it shall have power to indemnify under said sections from and against any and all of the expenses, liabilities or other matters referred to in, or covered by, said sections, and the indemnification provided for herein shall not be deemed exclusive of any other rights to which those indemnified may be entitled under any by-law, agreement, vote of members or disinterested directors or otherwise, both as to action in his or her official capacity and as to action in another capacity while holding such office, and shall continue as to a person who has ceased to be a director, officer, employee or agent and shall inure to the benefit of the heirs, executors and administrators of any such person. 
Notwithstanding the foregoing, no indemnification may be made to or on behalf of any director or officer if a judgment or other final adjudication adverse to the director or officer establishes that his or her acts were committed in bad faith or the result of active and deliberate dishonesty and were material to the cause of action so adjudicated, or that he or she personally gained in fact a financial profit or other advantage to which he or she was not legally entitled.\nTWELFTH: Membership in the Corporation shall be open to all publishers of original scientific, technical, medical or other scholarly material which otherwise meet the terms and conditions of membership set from time to time by the Board of Directors and to such other entities as the Board of Directors shall determine from time to time.\nTHIRTEENTH: Upon any non-judicial dissolution of the Corporation, subject to the applicable provisions of Section 516 of the Not-For-Profit Corporation Law, as the same may be amended and supplemented from time to time, any assets remaining after payment of all creditors and retention of a reserve deemed to be appropriate by the Corporation\u0026rsquo;s Board of Directors shall be distributed to the members of the Corporation at dissolution in the proportion that each such member\u0026rsquo;s aggregate dues during the three (3) years immediately preceding the date of such dissolution (or such lesser time during which said entity was a member of the Corporation) bears to the aggregate amount of all dues collected by the Corporation during the three (3) years immediately preceding the date of such dissolution (or such lesser time during which the Corporation was in existence) from members at the date of such dissolution.\nSigned on January 18, 2000\nClaude P. Goetz, Sole Incorporator\nKay Collyer \u0026amp; Boose LLP\nOne Dag Hammarskjold Plaza\nNew York, New York 10017\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/urls-and-dois-a-complicated-relationship/", "title": "URLs and DOIs: a complicated relationship", "subtitle":"", "rank": 1, "lastmod": "2016-11-04", "lastmod_ts": 1478217600, "section": "Blog", "tags": [], "description": "As the linking hub for scholarly content, it’s our job to tame URLs and put in their place something better. Why? Most URLs suffer from link rot and can be created, deleted or changed at any time. And that’s a problem if you’re trying to cite them.\n", "content": "As the linking hub for scholarly content, it’s our job to tame URLs and put in their place something better. Why? Most URLs suffer from link rot and can be created, deleted or changed at any time. And that’s a problem if you’re trying to cite them.\nThus the Crossref DOI was born: an Identifier which is Persistent, which means that it’s designed to live forever (or, as Geoff Bilder rather more prosaically puts it, as long as we do), and also Resolvable, which means that you can click on it. A DOI is a URL, but it’s imbued with special properties. I say special, not magical, because all of the things that make Crossref DOIs what they are, are obtained through agreements and common standards rather than any kind of magic.\nAs part of the development of Crossref Event Data I’ve been doing some research about the relationship between DOIs and URLs. It’s a problem we have to solve in order to make Event Data work, but it’s a much broader and more interesting story, and the results have wide applicability. I’ll be telling this story at PIDapalooza. 
If you’re interested in Persistent Identifiers you should go and registration is open, though hurry, as it’s next week and in Rejkjavik, Iceland!\nThis is also a story in progress. As I write not all of the data is in, and we can be certain that it will evolve in ways we have no idea about. It’s also quite long but I’ll do my best to disqualify it from the bedtime reading list.\nFull circle Crossref was established just over fifteen years ago with the purpose of forming the linking hub between publishers. Our job was — and still is — to register content for publishers and then continue to work with them to ensure their DOIs always point to the right location of the content. To do this we need to do one main thing: send people in the right direction when they click on a DOI, and know which direction to point them in.\nToday, linking is still an important part of what Crossref does, but we do a huge amount more. One of the new things we’re working on is Crossref Event Data. It’s a service for tracking how and where people use scholarly content (such as articles) across the web and social media. Early research suggested that if we limited ourselves to just looking for DOIs we wouldn’t find much. Instead we broadened our aims a little: rather than looking for mentions of registered content exclusively via their DOIs, we look for them via the most suitable mechanism. In most cases this means the actual URL of the Item. So we have come full circle: we started linking DOIs to URLs. Now we’re trying to link URLs back to DOIs.\nWhich URL are we talking about here? The Crossref Guidelines say:\nDOI-routed reference links enabled by Crossref must resolve to a response page containing no less than complete bibliographic information about the target content …\nhttp://www.crossref.org/02publishers/59pub_rules.html This is what’s referred to as the Landing Page. Every Landing Page has a URL. Usually when you want to read information about an Article, it’s the Landing Page that you’re looking at. I should also say at this point that when I say Article I mean any item of Crossref Registered Content with a DOI. So the same applies to books, chapters, conference proceedings etc. But as most items are Articles, I’ll stick with that for now.\nI’m going to make some assumptions. Unfortunately, and I don’t want to spoil the surprise here, they all turn out to be false. They’re all reasonable assumptions, though, and you would be forgiven for thinking, or at least wishing, that they were true.\nSo suspend your disbelief and follow me down the rabbit-hole…\nAssumption 1: A DOI points directly to a Landing Page URL When you click on a DOI you are taken to the Article Landing Page. It seems like a perfectly valid assumption to think that you are taken directly there.\nThe DOI system is essentially a big lookup table. In the first column is the DOI and in the second column is the URL. Publishers request that we register each item’s DOI and supply us with the URL it should point to. We work with CNRI and the International DOI Foundation to keep the system running and it means that when you, the reader at home, click on a DOI, you end up on the article’s Landing Page.\nIt would be very convenient if our assumption were true. If we wanted to turn a URL back into an article page, we could just swap the two columns and find the DOI by looking up the URL.\nIt turns out that it’s not quite so simple.\nThe Landing Page is under control of the publisher, as is the URL that they supply us with. 
They don’t need to supply us with the final landing page URL, only with one that leads to the landing page.\nHTTP redirects When you request a URL, either by typing it into your browser or by clicking on a link, your browser contacts the server and gets a reply. That reply can be “200 OK, here’s your page”, “303, look over there” or the dreaded “404, I can’t find it”. Other HTTP response codes are available, including well-known classics such as 201, 500 and 418.\nIf it’s a 303, your browser will follow the redirect URL. The response that comes back from that redirect could be another 303. You could end up following a whole chain of redirects. You wouldn’t notice anything, except having to wait an extra few milliseconds.\nExtraordinary diversity Crossref was created by a group of publishers who needed a way to link between articles. It was an ambitious goal: create a central system with which any publisher can integrate their own systems; one that allows linking to any article no matter who published it. Today we have over 5,000 members and counting, all contributing to our metadata engine. And up to 2 million DOIs are resolved every day, by all kinds of people and systems. Our wide range of members means a wide range of systems with a wide range of designs.\nThis brings an extraordinary diversity of behavior. If we want to make observations about DOIs we can’t just take a random sample of the over 80 million. Instead, we need to take a sample of DOIs per Publisher System. Even taking a sample per publisher might not do the job because some publishers run a variety of systems.\nExperiment 1: Does Crossref know all Landing Pages? By NASA / Paul Riedel (Great Images in NASA: Home - info - pic) [Public domain], via Wikimedia Commons Hypothesis: Crossref knows the Landing Page URL for all DOIs.\nFor a sample of Items, we can follow the DOI link all the way through to the Landing Page, following any redirects, then compare the final Landing Page URL to the one that Crossref knows about. If there are extra redirects, that means that the one we have on file isn’t the final one.\nWe need to tighten up the terminology at this stage:\nDOI URL - The full DOI, e.g. https://0-doi-org.libus.csd.mu.edu/10.5555/12345678 . Resource URL - The URL that Crossref has on file (stored in our system). This is where the browser is initially redirected. Destination URL - The URL that we end up at if we follow all the redirects. Article Landing Page - The page that represents the item. If everything works, this should be the same as the Destination URL. The reason we’re talking about the Destination URL as distinct from the Article Landing Page when they should be the same thing will become clear later. Consider yourself foreshadowed.\nSo let’s re-word our hypothesis:\nHypothesis: The Destination URL is the same as the Resource URL.\nMethod: A sample of DOIs was taken (most items updated in 2016, all from 2009 or earlier). The Resource URL was obtained for all of them. The DOIs were split by the domain name of the Resource URL (to give a good coverage of all Publisher systems). A sample of Resource URLs was followed per domain, at least 200 (or fewer if that exceeds the number of DOIs available). Where there were HTTP redirects they were followed.\nObservations:\nNumber of Items sampled Destination URL: 253,381 Number where Resource URL = Destination URL: 46,995 or 19.96% Conclusion: Not all Resource URLs are the same as the Destination URL by a long shot. 
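For the curious, a single comparison of the kind made in this experiment can be sketched like this (Python with the requests library; the DOI is just an example, and the "resource" field layout is that of the present-day REST API, which postdates the experiment described here):

```python
# Sketch of a single Experiment 1-style check: follow a DOI through any HTTP
# redirects and compare where we end up (the Destination URL) with the URL on
# file (the Resource URL). `requests` is assumed; the DOI is only an example,
# and the "resource" field is how the current public REST API exposes the
# registered URL.
import requests

doi = "10.7554/eLife.20320"

meta = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30).json()
resource_url = meta["message"]["resource"]["primary"]["URL"]

# Follow the DOI link the way a browser would, redirect by redirect.
final = requests.get(f"https://doi.org/{doi}", timeout=30, allow_redirects=True)
destination_url = final.url

print("Resource URL:   ", resource_url)
print("Destination URL:", destination_url)
print("Same?", resource_url == destination_url)
```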
Crossref does not automatically know every landing page URL.\nNow we know the truth about our first assumption: DOIs don’t point directly to Landing Pages. If we want to reverse Landing Pages back into DOIs, we’re going to need to go a bit deeper…\nInterlude But first, an interlude with some information about publishers, owners, and systems, because now seems like the right time to do it.\nAssumption 2: You can tell the publisher of a DOI by looking at its prefix This is a real one one that people believe. Again, it’s entirely understandable. People look at a DOI like https://0-doi-org.libus.csd.mu.edu/10.1371/journal.pone.0136117.g001 , which takes them to PLoS and naturally assume that another DOI like https://0-doi-org.libus.csd.mu.edu/10.1371/journal.pone.0136053.t003 — because it has the same prefix of 10.1371 — is also for a PLoS item.\nWhilst this turns out to be true most of the time, it’s not true for all Items, which makes it a dangerous assumption to make.\nIt is true that every publisher is given a prefix. They can then register DOIs with this prefix. It is also true that Items can be transferred between publishers. Because DOIs are persistent, the prefix in the DOI doesn’t change. So you might find a DOI that belongs to a publisher that has an unexpected prefix. Publishers can also be bought and sold, merged and split, which means that whilst most publishers have a single prefix, some, like Elsevier, have several. Take the case of Elsevier, who has 26 at the time of writing (you can see this in Elsevier’s entry in the Crossref Metadata API).\nEvery Item has an ‘owner prefix’ in addition to the prefix in the DOI. The owner prefix is the same as the DOI prefix when the Item is created, but over time, as articles are transferred, that can change to indicate that it is owned by another publisher.\nEvery Item has a DOI, and every DOI has a prefix. But every Item also has an Owner Prefix (you can check this in the Metadata API in the ‘prefix’ field).\nSo Assumption 2 has been laid to rest. The only thing you can tell from looking at a DOI is that it is, in fact, a DOI (you can tell by the “10.” index code).\nWhy do we care about identifying publishers anyway?\nA Fair Test We fundamentally want to conduct a fair test. The reason we can’t just take a random sample from the set of all DOIs is that there are lots of members who all do things slightly differently. Therefore we need to take a sample per publisher ‘system’. The word ‘system’ is a bit fuzzy, but my assumption is that two articles in the same system will behave the same way so we can treat them the same.\nWe also know that each Crossref member may be running more than one system, or a mixture. Therefore just looking at the owner of a DOI may not give accurate results if we want to conduct a survey of all the systems out there.\nThere’s no perfect answer, but the approach I’m taking is to look at the domain name of the Resource URL. We often find lots of subdomains for the same publisher, for example, “psw.sagepub.com”, “pol.sagepub.com”, “psx.sagepub.com” and “bpi.sagepub.com”. It’s clear that these are all operated by Sage, but they might or might not all be running on different ‘systems’.\nTherefore I’m splitting DOIs up into groups based on the domain of their Resource URL. It may turn out that some publishers use a single system running on many domains, or it may turn out that some publishers use a different system for each domain they use. 
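In code, that grouping step might look something like this rough sketch (Python; the DOIs and URLs are invented placeholders):

```python
# Sketch of the sampling idea just described: bucket items by the domain of
# their Resource URL so each publisher "system" can be sampled separately.
# The (DOI, Resource URL) pairs below are invented placeholders.
from collections import defaultdict
from urllib.parse import urlparse

items = [
    ("10.5555/example.1", "http://psw.sagepub.com/content/1/1/1"),
    ("10.5555/example.2", "http://pol.sagepub.com/content/2/2/2"),
    ("10.5555/example.3", "http://journals.example.org/article/3"),
]

by_domain = defaultdict(list)
for doi, resource_url in items:
    by_domain[urlparse(resource_url).netloc].append(doi)

# Take a per-domain sample (the experiment above used roughly 200 per domain).
samples = {domain: dois[:200] for domain, dois in by_domain.items()}
print(samples)
```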
The key point is to find a sampling technique that broadly works, and that allows us to explore and differentiate, as keenly as possible, the variety of systems and behaviours.\nWhy all the redirects? Curious minds might at this stage be wondering about all these extra redirects. Surely it’s extra stuff for the publisher to maintain. Why don’t they just point the DOI directly to the landing page?\nThe answer must be prefaced by repeating that there is a huge number of publishers, running a variety of systems, so we’ll never be able to completely answer that. But some humble suggestions:\nThey might want to be able to change the URLs of the Landing Pages. It may be easier to update their internal systems than send the update to Crossref, especially in bulk. Different parts of their technology stack may be owned by different parts of the company, or outsourced. It’s easier to define internal boundaries than to co-ordinate business units and cross an external one. A publisher may run a mix of different technology. As part of their systems integration process, they set up a redirect server to make everything work together. A publisher assigns DOIs to articles but also has their own internal IDs. They maintain their own DOI-to-internal-ID lookup service. Internal DOI resolvers That last point is an interesting one. The DOI system is the canonical “DOI-to-URL resolver”. That doesn’t prevent publishers from running their own. Indeed, many do.\nTake the real example of PLoS, an Open Access publisher who registers lots of content with Crossref. To follow one of their DOIs we go on the following journey of redirects:\nhttp://dx.doi.org/10.1371/journal.pone.0164910 http://dx.plos.org/10.1371/journal.pone.0164910 http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0164910 http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0164910 Given that the last step uses a DOI, this suggests that they use the DOI as an internal identifier. All those redirects were for some purpose, but they weren’t mapping a DOI to an internal ID. This is therefore not an internal DOI resolver.\nAnother example from JAMA Surgery:\nhttp://doi.org/10.1001/archsurg.142.7.595 http://archsurg.jamanetwork.com/article.aspx?doi=10.1001/archsurg.142.7.595 http://0-jamanetwork-com.libus.csd.mu.edu/journals/jamasurgery/fullarticle/487551 http://0-jamanetwork-com.libus.csd.mu.edu/journals/jamasurgery/article-abstract/487551 In this case we see a mapping from the DOI 10.1001/archsurg.142.7.595 to the ID 487551.\nCan we define a heuristic for this pattern? Yes, but not a perfect one. My test is this:\nDoes the resource URL contain the DOI? If so, does it redirect to a different destination URL? If so, does the destination URL not contain the DOI? The last step is important, because we can’t really say the publisher is running a DOI resolver if they use the DOI all the way through.\nIt’s not perfect and no doubt has false negatives. But we’re just trying to find out whether some publishers run their own DOI resolver systems. A small code sketch of this test appears below.\nExperiment 2: Determine how widespread use of internal DOI resolvers is: Hypothesis: Some publishers run their own DOI resolvers.\nMethod: A number of Destination URLs were sampled per Resource URL Domain. 
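The three-part test referred to above can be sketched as a small function; it assumes we already hold the Resource URL and Destination URL for each sampled DOI, and the function name is purely illustrative.

```python
# Heuristic for spotting a publisher-run DOI resolver: the Resource URL embeds
# the DOI, the request redirects somewhere else, and the Destination URL no
# longer contains the DOI. Percent-encoded DOIs (as in the PLoS URLs) are decoded first.
from urllib.parse import unquote

def looks_like_publisher_doi_resolver(doi, resource_url, destination_url):
    doi = doi.lower()
    in_resource = doi in unquote(resource_url).lower()
    in_destination = doi in unquote(destination_url).lower()
    return in_resource and resource_url != destination_url and not in_destination

# The JAMA Surgery example above: the DOI appears in the Resource URL but not in
# the Destination URL, so this counts as a publisher DOI resolver redirect.
print(looks_like_publisher_doi_resolver(
    "10.1001/archsurg.142.7.595",
    "http://archsurg.jamanetwork.com/article.aspx?doi=10.1001/archsurg.142.7.595",
    "http://0-jamanetwork-com.libus.csd.mu.edu/journals/jamasurgery/article-abstract/487551"))
```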
If the Resource URL contains the DOI but the Destination URL doesn’t, that’s marked as a Publisher DOI resolver redirect.\nObservations:\nNumber of Items sampled with Resource URL and Destination URL: 253,381 Number of Items that appear to be DOI resolvers: 166,352 = 65.6% Conclusions: Some publishers run their own DOI resolvers.\nThis isn’t of much practical use, but it’s interesting to know, and hints at the way the Crossref system and DOIs are integrated with Publishers’ systems. Now that we’ve got a little insight into the reasons that publishers might run their own DOI resolvers, we can resume our journey of assumptions.\nAssumption 3: We can find the Landing Page for Every DOI Now we know that we can’t just use the lookup table in reverse, but have to follow the links all the way to their destination. Does this approach actually work?\nThis is a pretty big question and we need to be clear about what we mean by ‘every’ DOI. The set of DOIs I’m using (although I’m using a subset) is “all DOIs in our Metadata API that are found in doi.org”.\nWhat is a DOI? Geoff Bilder went over it in the DOI-like-strings blog post earlier this year. The definition I’m working to here is:\nA DOI is an identifier for an item of content registered in the DOI system.\nThat is, if you resolve the DOI on https://0-doi-org.libus.csd.mu.edu/ and it’s recognised, that counts as a DOI. I’m working from the set of DOIs found in the Crossref system as I’m primarily concerned with Crossref DOIs. However, we collaborate closely with DataCite.\nBack to our assumption: “we can find the Landing Page for every DOI”. The answer is that we can, most of the time. But because Crossref Event Data has to work as well as possible, and therefore work with as many DOIs as possible, we have to scour all the nooks and crannies.\nAssumption 4: Every DOI points somewhere unique Stop me when you find the deliberate mistake:\nEvery Item corresponds to a different thing Every Item has a single DOI Every DOI is different Every DOI points to a landing page Therefore every DOI points to a different landing page Two things immediately suggest themselves:\n“Every item has a single DOI” should be true, but it isn’t. We find that sometimes two DOIs are assigned to the same item. This can happen when publications change hands between publishers, or when mistakes are made, or for a variety of other reasons. We also find that in some cases Publishers registered a DOI for the metadata and one for the article abstract. The two DOIs point to the same place. In some cases where there were two DOIs registered for the same thing we create an Alias.\nWhen we alias a DOI we simply say “this DOI should actually point to this one”. Both DOIs still exist, and both still point to the ‘correct’ thing, it’s just that they both point to the same place. If we have two DOIs pointing to the same place, then there isn’t a one-to-one mapping, and Assumption 4 is incorrect.\nExperiment 4: Aliased DOIs Hypothesis: There isn’t a one-to-one mapping between DOIs and URLs because some DOIs are aliased to others.\nMethod: We collected a sample of Resource URLs from the DOI API. We count how many DOIs are classified as Aliases in the DOI system.\nObservations:\nFrom a sample of 11,227,458 DOIs 14,566 are aliased to others, or 0.129% Conclusion: There aren’t many aliases. 
But there are some, and we should be aware of them.\nExperiment 5: Duplicate Resource URLs Hypothesis: There isn’t a one-to-one mapping between DOIs and URLs because some DOIs have duplicate Resource URLs.\nMethod: A sample of Resource URLs was collected from the DOI API. We counted how many DOIs have Resource URLs that aren’t unique. We subtracted the number of deleted DOIs because all deleted DOIs have the same resource URL.\nObservations:\nFrom a sample size of 11,227,458, a total of 112,195 have duplicate resource URLs, or 0.99%. Of these duplicates, 77,896 have the ‘deleted’ URL, leaving 34,229, or 0.30%, with non-unique Resource URLs. Conclusion: A small number of DOIs have duplicate Resource URLs, even if we exclude those that have been deleted, which means that not every DOI can have a unique URL.\nAssumption 5: The Landing Page is the same as the Destination Page. HTTP has a very neat system for doing redirects. If it were always that simple, then we could easily look up every Destination page and confidently say that it was the Landing Page. Not so.\nCookies Web browsers aren’t the only tools that use HTTP. Most programming languages have HTTP capabilities built in.\nUsing cookies is a requirement of some websites, but it’s not a requirement of HTTP. Most websites use cookies in some way or another. When you log into a site, you expect cookies. But when you’re just browsing there isn’t any technical need. A small number of websites absolutely require cookies to be enabled to use the site, even if you’re just browsing and not logged in. Unfortunately, this includes some publishers.\nRequiring cookies to use a publisher site means that you can’t fully resolve a DOI without enabling cookies. Most tools out there don’t enable them. Some privacy-conscious people quite reasonably don’t enable cookies from all sites.\nUsing cookies when resolving a DOI adds considerable overhead and isn’t fool-proof.\nLet’s try a quick experiment to see when we end up on a cookie page. Here’s an example page that tells us that we should have enabled cookies: http://0-www-tandfonline-com.libus.csd.mu.edu/action/cookieAbsent . It’s reachable from the DOI: https://0-doi-org.libus.csd.mu.edu/10.1016/j.envhaz.2007.09.007 .\nExperiment 6: Some DOIs can’t be resolved without cookies Hypothesis: We can’t resolve some DOIs to the Landing Page using standard tools because cookies are required.\nMethod: A sample of DOIs was taken per Resource URL Domain. They were resolved by following HTTP links. Where the Destination URL contains the word ‘cookie’, we mark that as a DOI requiring a cookie.\nObservations:\nA sample of 253,381 DOIs was resolved, following HTTP redirects where necessary. A total of 6,305 resolved to a page with ‘cookie’ in the URL, or 2.48%. Conclusion: There are cookies at play for at least 2.48% of DOIs. This is probably a very conservative estimate, as we’re using a blunt tool looking for ‘cookie’ in the URL.\nCookies Required For one DOI I found, the publisher system set cookies, then sent us on a series of redirects which set cookies that expired in the past and then, as far as I can tell, checked whether or not they were sent back. My working hypothesis is that it was profiling the behaviour to see what browser I was using.\nI have also seen javascript-based redirects. This is where a web page loads a javascript file, which executes and sends the browser onto another URL. This seems to be a browser detection method. 
There is no way you can follow these DOIs without actually using a real browser.\nThis is a problem for Crossref Event Data. We can’t fire up a browser and follow every DOI: it isn’t practical. When I tried this for a sample as an experiment, I got an email from another publisher who was worried that we were scraping data (good bot operators always put contact details in their request headers!).\nThe Crossref member rules leave some wiggle-room about whether this is allowed, but for the Event Data service, we can say that it’s a physical impossibility to collect all Event Data for DOIs like this.\nBring in the Browser To quantify the size of the problem, we need to bring in a web browser. If we assume that some Publishers design their sites to work only with real browsers, that’s what we’ll use. Luckily there are web browsers packaged up in automatable form, and we can use these to visit the DOI.\nUsing one of these is considerably slower than just following redirects.\nI have split the ‘destination’ concept into two:\nNaïve destination URL: The URL that you get from following HTTP redirects according to the HTTP specification. Browser destination URL: The URL that you get from letting a browser follow the DOI, doing whatever a browser does. Rather than defining a complicated spectrum of types of DOI resolution behaviour, I am classifying DOIs into two groups: those where standard HTTP redirects are sufficient, and everything else.\nThe method I am using is to resolve a sample of URLs using the browser. I can then compare the Naïve Destination URL with the Browser Destination URL. If they are the same, then I didn’t need to use the browser after all. If they give a different result, however, I trust the Browser one more and declare that DOI to require a browser to resolve.\nAgain, I took a sample of DOIs per Resource URL domain.\nExperiment 7: Quantify proportion of DOIs that require a browser to redirect Hypothesis: A number of DOIs can’t be resolved with standard tools but instead require a browser.\nMethod: A sample of DOIs was selected per Resource URL domain. The links were followed using standard HTTP and using a browser. Where the URLs between the two were different, the DOI was counted as requiring a browser to resolve.\nObservations:\nA total of 59,453 items were followed using both the Naïve and Browser methods. Of these, 5,883 items have a different URL between the two methods, or 9.88%. Conclusion: We can’t rely on the Naïve redirect, and would have to fire up the browser in about 10% of cases in the sample.\nOther gnarly things There are one or two supplementary gnarly things that crop up.\nFirst, session IDs are sometimes embedded in the URL. This is a tracking technique similar to cookies, but instead of sending cookies, which are invisible to the user, a unique code is placed on the end of the URL. This means that everyone gets a different URL. The most popular of these is the JSESSIONID, which is used by servers in the Java ecosystem. An example URL is:\nhttp://onlinelibrary.wiley.com/doi/10.1002/047084289X.rn00615.pub3/abstract;jsessionid=0D1B7AC4689A494E0EA78BD2F0A710C4.f04t04\nWe can easily remove these if they appear at the end of a URL. Sometimes they occur in the middle of a URL, as above. 
Sometimes they appear as query parameters:\nhttp://jpharmsci.org/action/consumeSharedSessionAction?SERVER=WZ6myaEXBLGvmNGtLlDx7g%3D%3D\u0026amp;MAID=npYBLvZTaUI3JTHw%2BH63WQ%3D%3D\u0026amp;JSESSIONID=aaajjhdDL5ssK6d1HHrFv\u0026amp;ORIGIN=207988872\u0026amp;RD=RD\nIn this case we make no attempt to remove them. These URLs won’t be any use for matching, and we have to acknowledge that and move on.\nInterpreting the results All the above experiments involved taking as many DOIs as we had time for, gathering the Resource URLs, and then grouping the DOIs per Resource URL Domain. A sample of DOIs was investigated for each Resource URL domain to give the best chance at even coverage. The above figures have been presented as a proportion of the sampled data-set.\nNow it’s time to draw some practical conclusions. I grouped the results per Resource URL Domain, so I can say that “for this domain, X% of DOIs were deleted, or aliased, or whatever”. This means that we can look at the statistics for a given domain and work out the best method for working with DOIs that belong to it.\nI have created histograms of domains by their various proportions.\nOur first chart is a histogram of Resource URL Domains where the Naïve Destination = the Resource URL. Each domain is given a proportion which represents how many DOIs sampled on that domain have a Landing Page equal to the Resource URL.\nThere’s a clear bimodal distribution here. The conclusion here is “most domains require you to follow the link to find the destination URL”. Furthermore, the domains are consistent: there are virtually no domains that have a mix of DOIs that behave differently.\nOur second chart is a histogram of Resource URLs where the Browser-based redirect = the Naïve URL. Each domain is given a proportion which represents how many DOIs sampled on that domain require us to fire up a browser.\nOverwhelmingly, the Browser Redirect URL is the same as the Naïve Redirect URL, meaning that we don’t need to fire up the browser; we can just use the Naïve URL, which is much easier to compute. There are some resource URL domains which require every DOI to be followed in a browser rather than just following links.\nWe know from this that we don’t have to use the browser most of the time. There is a small number of domains where we’re unsure (under 500) and a small number of domains where we know that we have to use a browser. This means we can focus our efforts.\nThere are lots of DOIs and they all behave differently. There are thousands of publishers out there registering DOIs. There are thousands of domains. Some publishers have lots of domains. This makes it impossible to make many general observations about DOIs.\nYou can’t tell anything by looking at the DOI Just by looking at the DOI you can’t tell who published it, or which publisher’s system is hosting it. Therefore you can’t tell how it’s going to behave.\nWe’ve looked at five kinds of URLs:\nThe DOI itself The Resource URL The “naïve” redirect URL The “browser” redirect URL The Article Landing Page In some cases, the Resource URL, naïve redirect URL, browser redirect URL and Article Landing Page are the same. In some cases they aren’t. Of these, the fifth is somewhat mythical.\nDOIs fall into classifications Each DOI falls into a category, most preferable first:\nThe Resource URL is the same as the Landing Page. The Landing Page can be discovered by following HTTP redirects. The Landing Page can be discovered by firing up a web browser to follow redirects. The Landing Page can’t be determined. A rough sketch that ties these four categories together follows below. 
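As flagged above, here is a rough sketch that ties the four categories together. It assumes the requests library for the naïve step and Selenium (with a local Firefox/geckodriver) for the browser step, and it strips trailing ;jsessionid markers before comparing URLs. It is illustrative only, not the actual Event Data implementation.

```python
# Illustrative classifier for the four DOI categories above. Error handling,
# politeness delays and per-domain sampling are omitted for brevity.
import re
import requests
from selenium import webdriver

def strip_session_id(url):
    # Drop a ";jsessionid=..." path marker so two visits to the same page compare equal.
    return re.sub(r";jsessionid=[^/?#]*", "", url, flags=re.IGNORECASE)

def naive_destination(url):
    # Follow plain HTTP redirects, nothing more.
    try:
        return strip_session_id(requests.get(url, allow_redirects=True, timeout=30).url)
    except requests.RequestException:
        return None

def browser_destination(url):
    # Let a real browser do whatever it does: JavaScript redirects, cookies and all.
    driver = webdriver.Firefox()
    try:
        driver.get(url)
        return strip_session_id(driver.current_url)
    except Exception:
        return None
    finally:
        driver.quit()

def classify(resource_url):
    naive = naive_destination(resource_url)
    if naive is not None and naive == strip_session_id(resource_url):
        return "1: the Resource URL is already the Landing Page"
    browser = browser_destination(resource_url)
    if browser is None:
        # Browser step failed: fall back on the naïve result if there is one.
        return ("2: Landing Page reachable by plain HTTP redirects" if naive
                else "4: Landing Page can't be determined")
    if naive == browser:
        return "2: Landing Page reachable by plain HTTP redirects"
    return "3: Landing Page needs a real browser"
```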
We can predictively group DOIs We can group DOIs by their Resource URLs and take a sample per Resource URL Domain. If all samples for a domain behave a certain way, we can place the DOIs into one of the above four groups with a probability.\nWe’ll never know the full story. Because of the diversity of Publisher Systems and the long history of Crossref DOIs, we’ll never be able to describe exactly what’s going on for all DOIs.\nWhat next? We’re continuing to develop Crossref Event Data. The part of the system that handles turning URLs back into DOIs will never be perfect, but we know from this research that we can at least work with a subset.\nI’m also working on another project which will attempt to reverse a Landing Page URL back into a DOI by looking at the metadata on the Landing Page. You can read about it here. Ultimately we’re going to have to take a blended approach. Building a useful set of Landing Page URL to DOI mappings will be part of the mix.\nAs Event Data matures we’ll be sharing all the datasets automatically as part of our infrastructure, including our DOI-to-URL mapping.\nAnd any members reading, please make your DOIs as easy to follow as possible! Please don’t require JavaScript or cookies when resolving DOIs.\nIf you’ve read this far, perhaps you’re as interested in DOIs as we are. There’s a lot more to say on the subject, but that’s enough for now. See you at PIDapalooza!\nImage Credits All images from Wikimedia Commons. Click or hover on the image to see the attribution.\n", "headings": ["Full circle","Assumption 1: A DOI points directly to a Landing Page URL","HTTP redirects","Extraordinary diversity","Experiment 1: Does Crossref know all Landing Pages?","By NASA / Paul Riedel (Great Images in NASA: Home - info - pic) [Public domain], via Wikimedia Commons","Interlude","Assumption 2: You can tell the publisher of a DOI by looking at its prefix","A Fair Test","Why all the redirects?","Internal DOI resolvers","Experiment 2: Determine how widespread use of internal DOI resolvers is:","Assumption 3: We can find the Landing Page for Every DOI","Assumption 4: Every DOI points somewhere unique","Experiment 4: Aliased DOIs","Experiment 5: Duplicate Resource URLs","Assumption 5: The Landing Page is the same as the Destination Page.","Cookies","Experiment 6: Some DOIs can’t be resolved without cookies","Cookies Required","Bring in the Browser","Experiment 7: Quantify proportion of DOIs that require a browser to redirect","Other gnarly things","Interpreting the results","There are lots of DOIs and they all behave differently.","You can’t tell anything by looking at the DOI","DOIs fall into classifications","We can predictively group DOIs","We’ll never know the full story.","What next?","Image Credits"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/preprints-are-go-at-crossref/", "title": "Preprints are go at Crossref!", "subtitle":"", "rank": 1, "lastmod": "2016-11-02", "lastmod_ts": 1478044800, "section": "Blog", "tags": [], "description": "We’re excited to say that we’ve finished the work on our infrastructure to allow members to register preprints. Want to know why we’re doing this? 
Jennifer Lin explains the rationale in detail in an earlier post, but in short we want to help make sure that:\nlinks to these publications persist over time they are connected to the full history of the shared research results the citation record is clear and up-to-date Doing so will help fully integrate preprint publications into the formal scholarly record.\n", "content": "We’re excited to say that we’ve finished the work on our infrastructure to allow members to register preprints. Want to know why we’re doing this? Jennifer Lin explains the rationale in detail in an earlier post, but in short we want to help make sure that:\nlinks to these publications persist over time they are connected to the full history of the shared research results the citation record is clear and up-to-date Doing so will help fully integrate preprint publications into the formal scholarly record.\nWhat’s new? We’ve had to do some work on our own infrastructure to facilitate the inclusion of preprints, enabling: Crossref membership for preprint repositories by updating our membership criteria and creating a policies for preprints The deposit of persistent identifiers for preprints to ensure successful links to the scholarly record over the course of time via the DOI resolver. Content Registration for preprints with custom metadata that reflect researcher workflows from preprint to formal publication (this custom metadata will then be visible to anyone using the Crossref metadata). Notification of links between preprints and formal publications that may follow (journal articles, monographs, etc.). Auto-update of ORCID records to ensure that preprint contributors get credit for their work. Preprint and funder registration to automatically report research contributions based on funder and grant identification. It will also allow for the collection of “event data” that capture activities surrounding preprints (usage, social shares, mentions, discussions, recommendations, links to datasets and other research entities, etc.). Now we’re ready to go!\nEarly adopters We have been working with various preprint publishers who are launching (or planning to launch) their own preprint initiatives. Preprints.org is the first to successfully make preprints deposits using the dedicated schema. For example, this preprint https://0-doi-org.libus.csd.mu.edu/10.20944/preprints201608.0191.v1 is registered with Crossref. It is linked to a published journal article https://0-doi-org.libus.csd.mu.edu/10.3390/data1030014 both in the online display as well the preprint’s Crossref metadata record. Others are getting ready to go - will your organisation be next? (Technical documentation available here.)\nMartyn Rittman, from Preprints, operated by MDPI said: Preprints.org is delighted to be the very first to integrate the Crossref schema for preprints. We believe it is an important step in allowing working papers and preliminary results to be fully citable as soon as they are available. It also makes it easy to link to the final peer-reviewed version, regardless of where it is published. Thanks to the hard work of Crossref and clear documentation, the schema was very simple to implement and has been applied retrospectively to all preprints at Preprints.org.\nJessica Polka, Director, ASAPbio adds: ASAPbio is a scientist-driven community initiative to promote the productive use of preprints in the life sciences. We’re thrilled to see Crossref’s development of a service that enables preprints to better contribute to the scholarly record. 
This infrastructure lays a necessary foundation for increasing acceptance of preprints as a valuable form of scientific communication among biologists.\nQuestions? Get in touch with any questions or comments, or join our upcoming webinar to talk about preprints, infrastructure and where we go from here. ", "headings": ["What’s new?","Early adopters","Questions?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-organization-identifier-project-a-way-forward/", "title": "The Organization Identifier Project: a way forward", "subtitle":"", "rank": 1, "lastmod": "2016-10-31", "lastmod_ts": 1477872000, "section": "Blog", "tags": [], "description": "The scholarly communications sector has built and adopted a series of open identifier and metadata infrastructure systems to great success. Content identifiers (through Crossref and DataCite) and contributor identifiers (through ORCID) have become foundational infrastructure to the industry. ", "content": "The scholarly communications sector has built and adopted a series of open identifier and metadata infrastructure systems to great success. Content identifiers (through Crossref and DataCite) and contributor identifiers (through ORCID) have become foundational infrastructure to the industry. But there still seems to be one piece of the infrastructure that is missing. There is as yet no open, stakeholder-governed infrastructure for organization identifiers and associated metadata.\nIn order to understand this gap, Crossref, DataCite and ORCID have been collaborating to:\nExplore the current landscape of organizational identifiers; Collect the use-cases that would benefit our respective stakeholders in scholarly communications industry; Identify those use-cases that can be more feasibly addressed in the near term; and Explore how the three organizations can collaborate (with each other and with others) to practically address this key missing piece of scholarly infrastructure. The result of this work is in three related papers being released by Crossref, DataCite and ORCID for community review and feedback. The three papers are:\nOrganization Identifier Project: A Way Forward (PDF; GDoc) Organization Identifier Provider Landscape (PDF; GDoc) Technical Considerations for an Organization Identifier Registry (PDF; GDoc) We invite the community to comment on these papers both via email (oi-project@orcid.org) and at PIDapalooza on November 9th and 10th and at Crossref LIVE16 on November 1st and 2nd. To move The OI Project forward, we will be forming a Community Working Group with the goal of holding an initial meeting before the end of 2016. The Working Group’s main charge is to develop a plan to launch and sustain an open, independent, non-profit organization identifier registry to facilitate the disambiguation of researcher affiliations.\nCrossref Use Cases Crossref has also been discussing the needs of its members over the last year and there is value in focusing on the affiliation name ambiguity problem with research outputs and contributors. In terms of the metadata that Crossref collects, something that is missing has been affiliations for the authors of publications. Over the last couple of years, Crossref has been expanding what it collects - for example, funding and licensing data and ORCID iDs - and this enables a fuller picture of what we are calling the article nexus. 
In order to continue to fill out the metadata we collect - and for our members to use in their own systems and publications - we need an organization identifier.\nAnother use case for Crossref is identifying funders as part of collecting funder data to enable connecting funding sources with the published scholarly literature. In order to enable the reliable identification of funders in the Crossref system we created the Open Funder Registry that now has over 15,000 funders available as Open Data under a CC0 waiver. While this has been very successful, it is a very narrowly focused registry and is not suitable for a broad, community-run organization identifier registry that addresses the affiliation use case. In future, our goal will be to merge the Open Funder Registry into the identifier registry that the Organization Identifier Working Group will work on.\nBy working collaboratively we can define a pragmatic and cost-effective service that will meet a fundamental need of all scholarly communication stakeholders.\nGeoffrey Bilder will be focusing his talk at Crossref LIVE16 this week on this initiative, dubbed The OI Project. The talk is scheduled for 2pm UK time and will be live streamed along with the rest of that day’s program.\n", "headings": ["Crossref Use Cases"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/smart-alone-brilliant-together.-community-reigns-at-crossref-live16/", "title": "Smart alone; brilliant together. Community reigns at Crossref LIVE16", "subtitle":"", "rank": 1, "lastmod": "2016-10-29", "lastmod_ts": 1477699200, "section": "Blog", "tags": [], "description": "A bit different from our traditional meetings, Crossref LIVE16 next week is the first of a totally new annual event for the scholarly communications community. Our theme is Smart alone; brilliant together. We have a broad program of both informal and plenary talks across two days. There will be stations to visit, conversation starters, and entertainment, that highlight what our community can achieve if it works together. Check out the final program.\n", "content": "A bit different from our traditional meetings, Crossref LIVE16 next week is the first of a totally new annual event for the scholarly communications community. Our theme is Smart alone; brilliant together. We have a broad program of both informal and plenary talks across two days. There will be stations to visit, conversation starters, and entertainment, that highlight what our community can achieve if it works together. Check out the final program.\nWe’re now opening the doors to all parties—our 5,000+ members of all shapes and sizes—as well as the technology providers, funders, libraries, and researchers that we work with. Our aim is to gather the ‘metadata-curious’ and have more opportunities to talk face-to-face to share ideas and information, see live demos, and get to know one another.\nMashup Day - Tuesday 1st November 12-5pm. An \u0026#8216;open house’ vibe, we’ll have several stations to visit each Crossref team, a LIVE Lounge, good food, and guest areas run by our friends at DataCite, ORCID, and Turnitin. We’ll have some special programming too, on-the-hour lightning talks, including a wild talk at 2pm from a primatologist who speaks baboon! Conference Day - Wednesday 2nd November 9am-5pm. There is more of a formal plenary agenda this day, with keynote speakers from across the scholarly communications landscape. 
Our primary goal is to share Crossref strategy and plans, alongside thought-provoking perspectives from our guest speakers. We’ll hear from many corners of our community including: Funder program officer, Carly Strasser (Moore Foundation) on \u0026#8220;Publishers and funders as agents of change\u0026#8220;, Data scientist, Ian Calvert (Digital Science) on \u0026#8220;You don’t have metadata\u0026#8220;, Open knowledge advocate, Dario Taraborelli (The Wikimedia Foundation) on \u0026#8220;Citations for the sum of all human knowledge\u0026#8220;, and Scholarly communications librarian, April Hathcock (New York University) on \u0026#8220;Opening up the margins\u0026#8220;. For our part, we will set out Crossref’s \u0026#8220;strategy and key priorities\u0026#8221; (Ed Pentz), \u0026#8220;A vision for membership\u0026#8221; (me, Ginny Hendricks), \u0026#8220;The meaning of governance\u0026#8221; (Lisa Hart Martin), \u0026#8220;The case of the missing leg\u0026#8221; (Geoffrey Bilder),\u0026#8221;New territories in the scholarly research map\u0026#8221; (Jennifer Lin), and \u0026#8220;Relationships and other notable things\u0026#8221; (Chuck Koscher). We will also set aside thirty minutes for the important Crossref annual business meeting, when we will announce the results of the membership’s vote, and welcome new board members. I can’t wait to welcome you all.\nHave you voted? If you’re a voting member of Crossref you’ll have cast your vote already I hope! I’m so happy to see that people have voted in record numbers although it’s under 7% of our eligible members which is not high… more on member participation next week.\n", "headings": ["Have you voted?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/important-changes-to-similarity-check/", "title": "Important changes to Similarity Check", "subtitle":"", "rank": 1, "lastmod": "2016-10-21", "lastmod_ts": 1477008000, "section": "Blog", "tags": [], "description": "New features, new indexing, new name - oh my! TL;DR The indexing of Similarity Check users’ content into the shared full-text database is about to get a lot faster. Now we need members assistance in helping Turnitin (the company who own and operate the iThenticate plagiarism checking tool) to transition to a new method of indexing content.\n", "content": "New features, new indexing, new name - oh my! TL;DR The indexing of Similarity Check users’ content into the shared full-text database is about to get a lot faster. Now we need members assistance in helping Turnitin (the company who own and operate the iThenticate plagiarism checking tool) to transition to a new method of indexing content.\nFor existing Similarity Check users: please check that your metadata includes full-text URLs so that Turnitin can quickly and easily locate and index your content. Full-text URLs need to be included in 90% of journal article metadata by 31st December 2016.\n2016 has seen some exciting new developments (And there are plenty more in store as we strive towards 2017). But first: in April we renamed the service from CrossCheck to Similarity Check and we now have a new service logo available to reference via our logo CDN using the following code.\n\u0026lt;img src=\u0026quot;https://0-assets-crossref-org.libus.csd.mu.edu/logo/crossref-similarity-check-logo-200.svg\u0026quot; width=\u0026quot;200\u0026quot; height=\u0026quot;98\u0026quot; alt=\u0026quot;Crossref Similarity Check logo\u0026quot;\u0026gt;\nEarlier this year Crossref also signed a new contract with Turnitin. 
As part of this, we negotiated the inclusion of dedicated development time each year from Turnitin’s engineering and product teams to focus on developments in the iThenticate tool that will specifically support Similarity Check users and their needs. Many of our members will have been contacted recently by Turnitin and asked to complete a survey regarding how they use the tool and what improvements they would like to see made in the future. The results of this survey are currently being analyzed and will be used by Turnitin to inform a development plan.\nFinally, throughout 2016 we have also been working with Turnitin to help them develop a new Content Intake System that provides a faster, more reliable and robust method for collecting data from Crossref and indexing users’ content into the Similarity Check full-text database. Previously Turnitin was only able to collect prefix data from Crossref’s system on a monthly basis whereas today, with the new Content Intake System up and running, they are able to pull full-text content links from deposited metadata on a daily basis. This means that if you are a Similarity Check user currently depositing full-text URLs with Crossref, your content is being indexed by Turnitin faster than ever before.\nThere are plenty of other benefits this new method provides. This is why we have agreed with Turnitin that from 1st January 2017 onwards, indexing via full-text URLs will be the only method supported for Similarity Check.\nNot convinced? Let me share my top four reasons for advocating Turnitin’s exclusive use of the full-text URL indexing method for Similarity Check:\n1. Reduced traffic to publisher servers. Indexing via full-text URLs means that the crawl is targeted specifically to the location of the full-text PDF or HTML content, thereby reducing the amount of traffic Turnitin puts through publisher’s servers. 2. Lower margin for error and simplified issue recovery. Turnitin will no longer need to make multiple fetches for any content item, meaning there are now fewer steps in the process. This means there will be fewer places for indexing errors to occur and also reduces the reliance on users setting meta tags or span tags correctly in their markup. Furthermore, if problems do arise, using the one method of indexing for all users will mean that Turnitin is able to pinpoint the issue faster and work with members to resolve it quickly. 3. Quicker turnaround on indexing with fewer delays. Turnitin will no longer need to investigate and set up bespoke indexing methods for different Similarity Check users and they will be able to access the location of full-text content from the one place (ie. within the specific resource tag in member’s metadata deposits). More accurate data from only one location will result in a quicker turnaround on indexing, meaning newly published content will be added into the Similarity Check content database sooner for all members to check other new manuscripts against. 4. Daily ingest is better than monthly! Full-text links can be collected daily from Crossref-rather than monthly for other methods-meaning a more regular ingest of content. The presence of full-text URLs within the metadata is critical to the functioning of Turnitin’s new indexing system. All new Similarly Check participants are now asked to ensure they have these links in place within their deposited metadata before they participate in the service.\nAlready a user of Similarity Check? 
If you’re an existing Similarity Check participant who joined the service before 2016, your content is likely to be currently indexed via different methods, such as following links contained in your page meta tags. If you’re not currently depositing full-text links with Crossref for Similarity Check, you will have received an email from us about this in August. If you’re unsure though, you can check your XML to see if you have included the full-text link in the field or you can send us an email at similaritycheck@crossref.org as we’d be happy to check for you. Help, don’t leave me behind! Us? Never! We’re here to help. But we really do need those full-text links… Everything existing Similarity Check publishers need to know about adding full-text links into new or existing metadata can be found on our help site. These URLs should be included as part of all standard metadata deposits going forward and can be easily added into existing files in bulk. So there’s no need to redeposit the full metadata, unless of course you would prefer to do so!\nThat’s a wrap Looking back, it really has been a busy year for Similarity Check and it will continue to be so as we persevere in laying the groundwork for a more streamlined, robust and scalable service for 2017 and beyond. Remember, we need Similarity Check users to ensure they have full-text URLs in at least 90% of their journal article metadata by 31st December 2016 in order to continue using Similarity Check from 2017 onwards.\nAnd please keep us updated! With over 1,200 publishers using Similarity Check, we’ll need a little nudge to know when metadata has been updated to include these links. So once updates have been deposited, please email similaritycheck@crossref.org to confirm. And of course, as always, if there are any questions or if some advice would help, we’re just an email away. ", "headings": ["New features, new indexing, new name - oh my!","2016 has seen some exciting new developments","Already a user of Similarity Check? ","Help, don’t leave me behind!","That’s a wrap"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/one-member-one-vote-crossref-board-election-opens-today-september-30th/", "title": "One member, one vote: Crossref Board Election opens today, September 30th", "subtitle":"", "rank": 1, "lastmod": "2016-09-30", "lastmod_ts": 1475193600, "section": "Blog", "tags": [], "description": "Watch for two important emails on September 30th – one with a voting link and material, and one with your username and password. Running Crossref well is a key part of our mission. It’s important that we be as neutral and fair as possible, and we are always striving for that balance. One of our stated principles is “One member, one vote”. And each year we encourage each of our members-standing at over 6000 today-to participate in the election of new board members.", "content": "Watch for two important emails on September 30th – one with a voting link and material, and one with your username and password. Running Crossref well is a key part of our mission. It’s important that we be as neutral and fair as possible, and we are always striving for that balance. One of our stated principles is “One member, one vote”. And each year we encourage each of our members-standing at over 6000 today-to participate in the election of new board members.\nIt is hard to believe that November 2nd will be Crossref’s 17th annual meeting and our 16th annual Board of Directors election. 
How time flies, and oh, how we have grown!\nCrossref’s Truths, taken from our forthcoming new website.\nI am hoping that we can rally the membership to participate in this important process!\nCandidates will be elected at Crossref LIVE16 for three-year terms to fill five of the 16 Board seats whose terms expire this year. The slate of candidates was recommended by the Nominating Committee, which consisted of three Board members not up for re-election, and two Crossref members that are not on the Board. This year, Jasper Simons, APA; Paul Peters, Hindawi; Jason Wilde, AIP; Chris Fell, Cambridge University Press; and Rebecca Lawrence, f1000 served on the Nominating Committee. The Committee met to discuss the process, criteria, and potential candidates, and put forward a slate which was required to be at least equal to the number of Board seats up for election. The slate may or may not consist of Board members up for re-election.\nCrossref members are welcome to run as independent candidates, as long as they have ten member endorsements sent to lhart@crossref.org with the intent to run. We sent a notification of the process in advance (this year on August 26th), so any nominations could be included in the voting materials that will be sent via email on September 30th.\nYou can access online voting from today at: https://eballot4.votenet.com/PILA/admin. Watch your inbox today for emails with your username and password! ", "headings": ["Watch for two important emails on September 30th – one with a voting link and material, and one with your username and password.","You can access online voting from today at:","https://eballot4.votenet.com/PILA/admin. Watch your inbox today for emails with your username and password!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/new-crossref-doi-display-guidelines-are-on-the-way/", "title": "New Crossref DOI display guidelines are on the way", "subtitle":"", "rank": 1, "lastmod": "2016-09-27", "lastmod_ts": 1474934400, "section": "Blog", "tags": [], "description": "TL;DR Crossref will be updating its DOI Display Guidelines within the next couple of weeks. This is a big deal. We last made a change in 2011 so it’s not something that happens often or that we take lightly. In short, the changes are to drop “dx” from DOI links and to use “https:” rather than “http:”. An example of the new best practice in displaying a Crossref DOI link is: https://0-doi-org.libus.csd.mu.edu/10.1629/22161\n", "content": "TL;DR Crossref will be updating its DOI Display Guidelines within the next couple of weeks. This is a big deal. We last made a change in 2011 so it’s not something that happens often or that we take lightly. In short, the changes are to drop “dx” from DOI links and to use “https:” rather than “http:”. An example of the new best practice in displaying a Crossref DOI link is: https://0-doi-org.libus.csd.mu.edu/10.1629/22161\nHey Ho, “doi:” and “dx” have got to go The updated Crossref DOI Display guidelines recommend that https://0-doi-org.libus.csd.mu.edu/ be used and not http://0-dx-doi-org.libus.csd.mu.edu/ in DOI links. Originally the “dx” separated the DOI resolver from the International DOI Foundation (IDF) website but this has changed and the IDF has already updated its recommendations so we are bringing ours in line with theirs.\nWe are also recommending the use of HTTPS because it makes for more secure browsing. When you use an HTTPS link, the connection between the person who clicks the DOI and the DOI resolver is secure. 
This means it can’t be tampered with or eavesdropped on. The DOI resolver will redirect to both HTTP and HTTPS URLs.\nTiming and backwards compatibility We are requesting all Crossref member publishers and anyone using Crossref DOIs to start following the updated guidelines as soon as possible. But realistically we are setting a goal of six months for implementation; we realize that updating systems and websites can take time. We at Crossref will also be updating our systems within six months - we already use HTTPS for some of our services and our new website (coming very soon!) will use HTTPS. An important point about backwards compatibility is that “http://0-dx-doi-org.libus.csd.mu.edu/” and “http://0-doi-org.libus.csd.mu.edu/” are valid and will continue to work forever-or as long as Crossref DOIs continue to work-and we plan to be around a long time.\nWe need to do better Reflecting on the 2011 update to the display guidelines it’s fair to say that we have been disappointed. It is still much too common to see unlinked DOIs in the form doi:10.1063/1.3599050 or DOI: 10.1629/22161 or even unlinked in this form: http://0-dx-doi-org.libus.csd.mu.edu/10.1002/poc.3551 What’s so wrong with this approach? To demonstrate, please click on this DOI doi:10.1063/1.3599050 - oh, you can’t click on it? How about I send you to a real example of a publisher page. What I’d like you to do is click the following link and then copy the DOI you find there and come back - http://0-dx-doi-org.libus.csd.mu.edu/10.1002/poc.3551. Are you back? I expect you had to carefully highlight the “10.1063/1.3599050” and then do “edit”, “copy”. That wasn’t too bad but the next step is to put the DOI into an email and send it to someone. But wait - what are they going to do with “10.1063/1.3599050”? It’s useless. If you want it to be useful you’ll have to add “http://doi.org” or https://0-doi-org.libus.csd.mu.edu/ in the front. When publishers follow the guidelines it makes things easier - if you go to https://0-doi-org.libus.csd.mu.edu/10.1063/1.3599050 you’ll note that you can just right click on the full DOI link on the page and get a full menu of options of what to do with it. One of which is to copy the link and then you can easily paste into an email or anywhere else.\nHowever-putting a positive spin on the spotty adherence to the 2011 update to the DOI display guidelines-everyone has another chance with the latest set of updates to make all the changes at once! More on HTTPS (future-proofing scholarly linking) We take providing the central linking infrastructure for scholarly publishing seriously. Because we form the link between publisher sites all over the web, it’s important that we do our bit to enable secure browsing from start to finish. In addition, HTTPS is now a ranking signal for Google who gives sites using HTTPS a small ranking boost.\nThe process of enabling HTTPS on publisher sites will be a long one and, given the number of members we have, it may a while before everyone’s made the transition. But by using HTTPS we are future-proofing scholarly linking on the web.\nSome years ago we started the process of making our new services available exclusively over HTTPS. The Crossref Metadata API is HTTPS enabled, and Crossmark and our Assets CDN use HTTPS exclusively. Last year we collaborated with Wikipedia to make all of their DOI links HTTPS. 
We hope that we’ll start to see more of the scholarly publishing industry doing the same.\nSo-it’s simple-always make the DOI a full link - https://0-doi-org.libus.csd.mu.edu/10.1006/jmbi.1995.0238 - even when it’s on the abstract or full text page of the content that the DOI identifies - and use “https://0-doi-org.libus.csd.mu.edu/”. ", "headings": ["TL;DR","Hey Ho, “doi:” and “dx” have got to go","Timing and backwards compatibility","We need to do better","More on HTTPS (future-proofing scholarly linking)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-membership-boom-why-metadata-isnt-like-beer/", "title": "The membership boom & why metadata isn’t like beer", "subtitle":"", "rank": 1, "lastmod": "2016-09-23", "lastmod_ts": 1474588800, "section": "Blog", "tags": [], "description": "You might recognize my name if you’ve ever applied for Crossref membership on behalf of your organization. It recently occurred to me that, since I’ve been working in our membership department for eight years, I’ve been a part of shepherding new members for half of our history. And my, how we’ve grown. ", "content": "You might recognize my name if you’ve ever applied for Crossref membership on behalf of your organization. It recently occurred to me that, since I’ve been working in our membership department for eight years, I’ve been a part of shepherding new members for half of our history. And my, how we’ve grown. Membership growth by country Though it may be easy to see our membership growth by looking at the numbers, I think it’s interesting to consider where we’ve grown. The top ten member countries have dramatically changed since Crossref began sixteen years ago. At the end of our first year of operations, our membership included 54 publishers and affiliated organizations. The majority were from the US and the UK, with a small number from Germany, the Netherlands, and Japan. In 2012, participation in our sponsors program began to increase. Sponsors are affiliated organizations that act on behalf of smaller publishers and societies who wish to register their content with Crossref. Several organizations from Turkey and South Korea were among the first sponsors to join and were very successful in representing a large number of publishers and societies from their regions. Soon to follow were sponsors from India, Ukraine, Russia and Brazil. In 2014, the Public Knowledge Project (PKP) became a sponsoring affiliate, focusing on smaller publishers with the aim of increasing the quality and global reach of scholarly publishing. With the introduction of our sponsor program, the past few years have seen a steady increase in the geographical diversity of our members. There are 194 countries in the world. It’s pretty amazing that organizations in 112 of the world’s countries are now represented in our membership. Do I think we’ll see members joining from the other 82 nations? I don’t know but I hope so.\nA look at our trending nations chart shows the diversity of our membership as we’ve grown, depicting the countries that produced the most new members over the last two years. There has been tremendous growth from South Korea! What I find just as interesting is that we have new members from so many different nations that they form their own special bloc, shown here as “Other.” Our growth has taken place at a remarkable rate. When I joined Crossref in 2008, we had over 1800 publishers and affiliates and we were adding about 300 new members per year. 
In 2015, nearly 1500 members joined and we are seeing even larger numbers so far in 2016. Counting all publishers, affiliates, libraries, sponsors and represented members, our new member total through the end of August is nearly 1200 and will most certainly overtake the 2015 figure. Member perceptions With such a range of new members each month it’s even more important that we help people understand the benefits of joining Crossref. That it’s not just registering metadata and DOIs but maintaining and improving records over time, and participating in reference linking. We are adding and improving some educational tools that will help everyone understand how our services can enhance the discoverability of content, and why sharing richer metadata supports their full participation in the scholarly community. We are in the process of developing a new, cleaner website with videos that better explain our services-to be released in the next few weeks,-a new onboarding experience, and new and improved query and deposit tools. Connected metadata isn’t like beer Sometimes inviting more people to a party means there is less beer to go around. Fortunately for everyone, metadata isn’t like beer. In fact, the more metadata you draw from the tap, the more useful it becomes. So inviting new members to join Crossref makes our community better and more valuable for everyone. Every member uses that metadata to link their content to every other member’s content. This makes all members’ content easier to find, link, and cite, not just at the moment it is published, but over time. Members from around the globe join Crossref everyday and help guide our growing community. If you are interested in joining please contact me at member@crossref.org.\n", "headings": ["Membership growth by country","Member perceptions","Connected metadata isn’t like beer "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossmark-2.0-grab-the-code-and-youre-ready-to-go/", "title": "Crossmark 2.0 - grab the code and you’re ready to go!", "subtitle":"", "rank": 1, "lastmod": "2016-09-15", "lastmod_ts": 1473897600, "section": "Blog", "tags": [], "description": "On September 1st we completed the final stage of the Crossmark v2.0 release and sent an email to all participating publishers containing instructions for upgrading. The first phase of v2.0 happened when we changed the design and layout of the Crossmark box back in May of this year. That allowed us to better display the growing set of additional metadata that our members are depositing, and saw the introduction of the Linked Clinical Trials feature.", "content": "On September 1st we completed the final stage of the Crossmark v2.0 release and sent an email to all participating publishers containing instructions for upgrading. The first phase of v2.0 happened when we changed the design and layout of the Crossmark box back in May of this year. That allowed us to better display the growing set of additional metadata that our members are depositing, and saw the introduction of the Linked Clinical Trials feature.\nNow all publishers have the opportunity to complete the upgrade by simply replacing the Crossmark button and the piece of code that calls the box. The new button designs are, we think, a much better fit for most websites, and are designed to look more like a button than a flat logo. 
The new buttons are also available\nas .eps files for placement in PDFs.\nCrossmark box on a mobile phone\nMost importantly, switching to 2.0 makes the Crossmark box responsive for better display on mobile devices.\nJust two weeks after the code release a number of publishers have already upgraded and are running Crossmark 2.0 on their content. Congrats to the Pan African Medical Journal who were the first member to upgrade just a couple of days after the release. Of course we realise that many members will need time to schedule the upgrade, and while we are keen to see as many early adopters as possible, we will support version 1.5 of Crossmark through to the end of March 2017.\nIf your content is running Crossmark 2.0 we would love to see it. Drop us a line or put a link in the comments below.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-the-crossref-metadata-api.-part-2-with-paperhive/", "title": "Using the Crossref Metadata API. Part 2 (with PaperHive)", "subtitle":"", "rank": 1, "lastmod": "2016-09-08", "lastmod_ts": 1473292800, "section": "Blog", "tags": [], "description": "We first met the team from PaperHive at SSP in June, pointed them in the direction of the Crossref Metadata API and let things progress from there. That’s the nice thing about having an API - because it’s a common and easy way for developers to access and use metadata, it makes it possible to use with lots of diverse systems and services.\nSo how are things going? Alexander Naydenov, PaperHive’s Co-founder gives us an update on how they’re working with the Crossref metadata: ", "content": "We first met the team from PaperHive at SSP in June, pointed them in the direction of the Crossref Metadata API and let things progress from there. That’s the nice thing about having an API - because it’s a common and easy way for developers to access and use metadata, it makes it possible to use with lots of diverse systems and services.\nSo how are things going? Alexander Naydenov, PaperHive’s Co-founder gives us an update on how they’re working with the Crossref metadata: PaperHive\nPaperHive is a web-platform for collaborative reading and a cross­-publisher layer of interaction on top of research documents. It lets researchers communicate in published documents in a productive and time-saving way. PaperHive thus puts academic literature, which is integrated with the platform, in the limelight and increases content usage and reader engagement.\nTransforming reading into a process of collaboration gives researchers a reason to return to the content and discover new enrichments they can benefit from. Functionality like hiving, deep linking, and the PaperHive browser extension embeds communication in the researcher’s workflow. PaperHive is free to use!\nHow is the Crossref API used within PaperHive?\nPaperHive extends the concept of a living document and offers an innovative way of displaying content without hosting it. Instead, academic documents are dynamically pulled from the publisher’s servers thus ensuring compliance with content licensing. It enables readers to stay in touch with the articles of interest beyond just saving them in an offline folder.\nCrossref is the common ground on which third party companies and initiatives can build valuable services for publishers and researchers. It facilitates the integration of content into PaperHive by providing the metadata of articles and books from numerous publishers independent of the technology behind their content platforms. 
Moreover, if the publishers provide ORCID identifiers of authors in the Crossref metadata, researchers can immediately interact with the readers of their works.\nWhat are the future plans for PaperHive?\nIn addition to integrating further publishers’ content and extending PaperHive’s feature set for readers, we also plan to extend our partnerships with other technology providers.\nAs far as our cooperation with Crossref is concerned, we are looking forward to the implementation of the Crossref Event Data API.\nWhat else would you like to see in Crossref metadata?\n- The quality of the existing metadata should be improved significantly. We noticed that important fields such as author or title are missing in the metadata of many documents. PaperHive ignores articles and books with incomplete metadata because it impairs the user experience. Publishers, authors and readers can only benefit from the wider and more active usage of content, so we hope that more publishers will improve the data their provide Crossref with.\n- Since researchers are working with full texts on PaperHive, it would be great if links to the full text are provided in the metadata of all articles and books. The metadata should also contain information about the format of the full text (e.g., PDF, EPUB, HTML).\nThanks Alex!\nJust getting started with the API or what to know more? Get in touch via feedback@crossref.org and pass on your questions and comments.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linking-publications-to-data-and-software/", "title": "Linking Publications to Data and Software", "subtitle":"", "rank": 1, "lastmod": "2016-09-07", "lastmod_ts": 1473206400, "section": "Blog", "tags": [], "description": "TL;DR Crossref and Datacite provide a service to link publications and data. The easiest way for Crossref members to participate in this is to cite data using DataCite DOIs and to include them in the references within the metadata deposit. These data citations are automatically detected. Alternatively and/or additionally, Crossref members can deposit data citations (regardless of identifier) as a relation type in the metadata. Data \u0026amp; software citations from both methods are freely propagated.", "content": "TL;DR Crossref and Datacite provide a service to link publications and data. The easiest way for Crossref members to participate in this is to cite data using DataCite DOIs and to include them in the references within the metadata deposit. These data citations are automatically detected. Alternatively and/or additionally, Crossref members can deposit data citations (regardless of identifier) as a relation type in the metadata. Data \u0026amp; software citations from both methods are freely propagated. This blog post also describes how to retrieve the links collected between publication and data \u0026amp; software.\nData \u0026amp; software citation is good research practice (DataCite-STM Joint Statement and FORCE11 Joint Declaration of Data Citation Principles) and is part of the scholarly ecosystem supporting research validation and reproducibility. Data \u0026amp; software citation is also instrumental in enabling the reuse and verification of these research outputs, tracking their impact, and creating a scholarly structure that recognises and rewards those involved in producing them.\nCrossref supports the propagation of data \u0026amp; software citations alongside a publisher’s standard bibliographic metadata. 
Members deposit the data citation link as part of the overall publication metadata when registering their content. Crossref partners with DataCite, and together we provide a clearinghouse for the citations collected. These are all made freely available to the community as open data.\nCitation practices are evolving across different communities of practice. Crossref’s offering is flexible and easily accommodates variations and changes, since it does not rely on a specific set of citation metadata elements, citation format, or manner of credit and attribution. Publishers deposit data \u0026amp; software citations in their metadata deposit via a) references and/or b) relation type.\nMethod A: Bibliographic references Crossref and DataCite have partnered to provide automatic linking between publications registered with Crossref and datasets bearing DataCite DOIs. This is the most efficient and effective way to ensure that data citations are fully integrated into the scholarly research information network with full and accurate metadata.\nAll data \u0026amp; software citations that include datasets bearing a DataCite DOI are eligible for auto-update linking with Crossref. In this method: authors cite the dataset or software containing the DataCite DOI per journal article submission guidelines and add it to the article citation list (cf. FORCE11 citation placement, FORCE11 Software Citation Principles). Publishers then deposit references as part of their standard practice when registering content. Crossref checks every reference deposited for a DOI. If the DOI is identified as DataCite’s, we automatically link it to the article. With this method, no additional action is needed when publishers register their content with Crossref.\nData citation links to non-DataCite DOIs can only be exposed in the references if the publisher makes references openly available. Even in the event that the data citation is shared, it remains undifferentiated from other references. Method B described below offers another approach.\nMethod B: Relation type Publishers can link their publication to a variety of associated research objects as part of the article metadata directly in the metadata deposited to Crossref, including data \u0026amp; software, protocols, videos, published peer reviews, preprints, conference papers, etc. Doing so not only groups digital objects together, but formally associates them with the publication. Each link is a relationship and the sum of all these relationships constitutes a ‘research article nexus.’ Data \u0026amp; software citations are a valuable part of this.\nTo tag the citation in the metadata deposit, we ask for: a description of the dataset or software (optional), the dataset or software identifier, the identifier type, and the relationship type. Crossref can accommodate research outputs with any identifier, though we currently only validate DOI relationships during metadata processing. Technical details are documented in the Data \u0026 Software Citations Deposit Guide. Combining methods increases total available citations The two methods are independent and can be used exclusively or jointly. Each caters to a different set of conditions and its practical considerations. See the comparison of benefits and limitations for each method in the deposit guide. We recommend that publishers use both methods where possible at this time for optimum specificity and coverage. 
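Either route ends up in metadata that anyone can retrieve. As a minimal sketch of how to see what a registered DOI already carries (Python assumed; the DOI is an example from elsewhere in this blog, and the reference and relation fields only appear when a publisher has deposited them):

```python
# Inspect a registered DOI for deposited references (Method A) and
# relation-type links (Method B) via the public Crossref REST API.
import requests

doi = "10.7717/peerj.1135"  # example DOI; substitute one of your own
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# References are present only if they were deposited (and made open).
for ref in work.get("reference", []):
    print("reference:", ref.get("DOI") or ref.get("unstructured"))

# Relation-type links (data, software, preprints, reviews, ...) sit under "relation".
for rel_type, targets in work.get("relation", {}).items():
    for target in targets:
        print("relation:", rel_type, target.get("id-type"), target.get("id"))
```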
How to access data \u0026amp; software citations Crossref and DataCite make the data \u0026amp; software citations deposited by Crossref members and DataCite data repositories openly available to a wide host of parties, including both Crossref and DataCite communities as well as the extended research ecosystem (funders, research organisations, technology and service providers, research data frameworks such as Scholix, etc.).\nData \u0026amp; software citations from references can be accessed via the Crossref Event Data API Citations included directly into the metadata by relation type can be accessed via Crossref’s APIs in a number of formats (REST, OAI-­PMH, OpenURL). (A single channel containing data \u0026amp; software citations across interfaces is in development and will be released next year.)\nPublishers, visit our detailed guide on how to deposit data and software citations. We welcome your questions and concerns at feedback@crossref.org.\nSpecial thanks to the following who provided valuable feedback in developing the guide: Martin Fenner (DataCite), Amye Kenall (Springer Nature), Brooks Hanson (AGU), Shelley Stall (AGU), and the FORCE11 Data Citation Implementation Pilot publisher’s subgroup.\n", "headings": ["TL;DR","Method A: Bibliographic references","Method B: Relation type","Combining methods increases total available citations","How to access data \u0026amp; software citations"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/software/", "title": "Software", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossrefs-annual-meeting-is-now-crossref-live16/", "title": "Crossref’s Annual Meeting is now Crossref LIVE16", "subtitle":"", "rank": 1, "lastmod": "2016-09-02", "lastmod_ts": 1472774400, "section": "Blog", "tags": [], "description": "Everyone is invited to our free annual event this 1-2 November in London. (Register here)!\nIn years past, only Crossref members typically attended the [Crossref Annual Meeting](/crossref-live-annual). This year, we looked at the event with new eyes. We realized that we’d have even richer conversations, more creative energy, and the meeting would be even better for our members if we could rally the entire community together. So we decided to re-develop our annual event from the ground-up.", "content": "Everyone is invited to our free annual event this 1-2 November in London. (Register here)!\nIn years past, only Crossref members typically attended the [Crossref Annual Meeting](/crossref-live-annual). This year, we looked at the event with new eyes. We realized that we’d have even richer conversations, more creative energy, and the meeting would be even better for our members if we could rally the entire community together. So we decided to re-develop our annual event from the ground-up. The result is Crossref LIVE16, an event with a new format and a new focus on the entirety of the scholarly communications community. We are opening doors for the whole community, welcoming publishers, librarians, researchers, funders, technology providers, and Crossref members alike. 
1st November - Mashup Day, from 12 noon: an afternoon of interactive activities including mingling with the Crossref team and special guests, trying out our services, live troubleshooting, and exclusive previews of some exciting things we’re working on. Plus entertainment and refreshments at an early evening reception. 2nd November - Conference Day: a full-day plenary session with distinguished keynote speakers including April Hathcock (NYU), Carly Strasser (Moore Foundation), Ian Calvert (Digital Science), and Dario Taraborelli (Wikimedia Foundation). We will provide the most important updates about our services, and share our vision and strategies for the future. Note: You are welcome to join us for both days or just one day, as you like. Location: The Royal Society, London, UK. We hope you will join us, and extend this invitation to your colleagues. This is going to be fun. Register here ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/announcing-pidapalooza-a-festival-of-identifiers/", "title": "Announcing PIDapalooza - a festival of identifiers", "subtitle":"", "rank": 1, "lastmod": "2016-08-30", "lastmod_ts": 1472515200, "section": "Blog", "tags": [], "description": "The buzz is building around PIDapalooza - the first open festival of scholarly research persistent identifiers (PID), to be held at the Radisson Blu Saga Hotel Reykjavik on November 9-10, 2016.\nPIDapalooza will bring together creators and users of PIDs from around the world to shape the future PID landscape through the development of tools and services for the research community. PIDs support proper attribution and credit, promote collaboration and reuse, enable reproducibility of findings, foster faster and more efficient progress, and facilitate effective sharing, dissemination, and linking of scholarly works.", "content": "The buzz is building around PIDapalooza - the first open festival of scholarly research persistent identifiers (PID), to be held at the Radisson Blu Saga Hotel Reykjavik on November 9-10, 2016.\nPIDapalooza will bring together creators and users of PIDs from around the world to shape the future PID landscape through the development of tools and services for the research community. PIDs support proper attribution and credit, promote collaboration and reuse, enable reproducibility of findings, foster faster and more efficient progress, and facilitate effective sharing, dissemination, and linking of scholarly works.\nWe believe that by bringing together everyone who’s working with PIDs for two days of discussions, demos, workshops, brainstorming, updates on the state of the art, and more, we can make this happen faster. And you can help by giving us your input on which sessions would be most valuable. Please send us your ideas, using this form by September 18. We will send session proposal notifications the first week of October with the festival lineup.\nRegister to attend Registration is now open — come join the festival with a crowd of like-minded innovators. And please help us spread the word about PIDapalooza in your community! Stay updated with the latest news on the PIDapalooza website and on Twitter (@PIDapalooza) in the coming weeks.\nLooking forward to seeing you in November! 
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/chuck-koscher/", "title": "Chuck Koscher", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/preprints-and-crossrefs-metadata-services/", "title": "Preprints and Crossref’s metadata services", "subtitle":"", "rank": 1, "lastmod": "2016-08-29", "lastmod_ts": 1472428800, "section": "Blog", "tags": [], "description": "We’re putting the final touches on the changes that will allow preprint publishers to register their metadata with Crossref and assign DOIs. These changes support Crossref’s CitedBy linking between the preprint and other scholarly publications (journal articles, books, conference proceedings). Full preprint support will be released over the next few weeks.\n", "content": "We’re putting the final touches on the changes that will allow preprint publishers to register their metadata with Crossref and assign DOIs. These changes support Crossref’s CitedBy linking between the preprint and other scholarly publications (journal articles, books, conference proceedings). Full preprint support will be released over the next few weeks.\nI’d like to mention one change that will be immediately visible to Crossref members who use our OAI based service to retrieve CitedBy links to their content.\nThis API, show in an example here, is intended to retrieve large quantities of data detailing all the CitedBy links to a given publication. The example request shows pulling the data for an IEEE conference proceeding.\nexample: http://0-oai-crossref-org.libus.csd.mu.edu/OAIHAndler?verb=ListRecords\u0026usr=*** pwd=****\u0026set=B:10.1109:1070762\u0026metadataPrefix=cr_citedby With the new change, results will now identify the type of content that is doing the citing. The example results below shows that the DOI 10.1109/CSMR.2012.14 is cited by five other items and displays the DOIs of those items and their record type.\nWhen preprint content that cites other scholarly work starts being registered with Crossref, members using this API will start seeing data like the following: For many users of Crossref metadata the introduction of preprints will be transparent until preprint content starts being registered. However, a few changes like the one above have benefits not limited to just preprints. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-article-nexus-linking-publications-to-associated-research-outputs/", "title": "The article nexus: linking publications to associated research outputs", "subtitle":"", "rank": 1, "lastmod": "2016-08-25", "lastmod_ts": 1472083200, "section": "Blog", "tags": [], "description": "Crossref began its service by linking publications to other publications via references. Today, this extends to relationships with associated entities. People (authors, reviewers, editors, other collaborators), funders, and research affiliations are important players in this story. Other metadata also figure prominently in it as well: references, licenses and access indicators, publication history (updates, revisions, corrections, retractions, publication dates), clinical trial and study information, etc. 
The list goes on.\nWhat is lesser known (and utilized) is that Crossref is increasingly linking publications to associated scholarly artifacts.", "content": "Crossref began its service by linking publications to other publications via references. Today, this extends to relationships with associated entities. People (authors, reviewers, editors, other collaborators), funders, and research affiliations are important players in this story. Other metadata also figure prominently in it as well: references, licenses and access indicators, publication history (updates, revisions, corrections, retractions, publication dates), clinical trial and study information, etc. The list goes on.\nWhat is lesser known (and utilized) is that Crossref is increasingly linking publications to associated scholarly artifacts. At the bottom of it all, these links can help researchers better understand, reproduce, and build off of the results in the paper. But associated research objects can enormously bolster the research enterprise in many ways (e.g., discovery, reporting, evaluation, etc.).\nWith all the relationships declared across all 80+ million Crossref metadata records, Crossref creates a global metadata graph across subject areas and disciplines that can be used by all.\nResearch article nexus As research increasingly goes digital, more research artifacts associated with the formal publication are stored or shared online. We see a plethora of materials closely connected to publications, including: versions, peer reviews, datasets generated or analysed in the research, software packages used in the analysis, protocols and related materials, preprints, conference posters, language translations, comments, etc. Occasionally, these resources are linked from the publication. But very rarely are these relationships made available beyond the publisher platform. Crossref will make these relationships available to the broader research ecosystem. When publishers register content for a publication, they can identify the associated scholarly artifacts directly in the article metadata. Doing so not only groups digital objects together, but formally associates with the publication. Each link is a relationship and the sum of all these relationships constitutes a “research article nexus.”\nAn assortment of connections already abound in the wild today. 
Examples include:\nF1000Research article http://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.2-198.v3 connected to initial version http://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.2-198.v1 OECD publication http://0-dx-doi-org.libus.csd.mu.edu/10.1787/empl_outlook-2014-en and its German translation http://0-dx-doi-org.libus.csd.mu.edu/10.1787/empl_outlook-2014-de PeerJ article http://0-doi-org.libus.csd.mu.edu/10.7717/peerj.1135 and its peer review http://0-doi-org.libus.csd.mu.edu/10.7287/peerj.1135v0.1/reviews/3 eLife article http://0-doi-org.libus.csd.mu.edu/10.7554/eLife.09771 and its BioArXiv preprint http://0-doi-org.libus.csd.mu.edu/10.1101/018317 PLOS ONE article http://0-doi-org.libus.csd.mu.edu/10.1371/journal.pone.0161541 with underlying data in Dryad http://0-doi-org.libus.csd.mu.edu/10.5061/dryad.d2vf8 Frontiers article http://0-doi-org.libus.csd.mu.edu/10.3389/fevo.2015.00015 with a figshare http://0-doi-org.libus.csd.mu.edu/10.6084/m9.figshare.1305089.v1 video Journal of Chemical Theory and Computation article http://0-doi-org.libus.csd.mu.edu/10.1021/ct400399x with software archived in Zenodo http://0-doi-org.libus.csd.mu.edu/10.5281/zenodo.60678 Nature Biotech article http://0-doi-org.libus.csd.mu.edu/10.1038/nbt.3481 with a Protocols.io protocol http://0-doi-org.libus.csd.mu.edu/10.17504/protocols.io.dm649d To date, almost all these relationships are not directly recorded in the article metadata (great job, PeerJ!). And as a result, they are more than likely “invisible” to the broader scholarly research ecosystem. Publishers can remedy these gaps by depositing associations when registering content with Crossref or updating the records after registration. That is how the article nexus is formed.\n(Associated datasets can also be identified in the reference list as per Joint Declaration of Data Citation Principles as with the FORCE11 Software Citation Principles. Stay tuned next week for a follow up blog post on Crossref’s support for publisher data and software citations through its metadata.)\nForming the nexus The mechanism of declaring these relationships is straightforward and a longstanding part of the standard deposit process. For each associated research object, simply provide the identifier and identifier type for the object, an optional description of it, as well as name the relationship into the metadata record. For the latter, Crossref and DataCite share a closed list of relationship types, which ensures interoperability between mappings. See Crossref technical documentation for more details. We maintain a list of the recommended relation types for a host of associated research objects to promote standardization across publishers. If you have relationships not specified, please contact us at feedback@crossref.org to identify a suitable one considered best practice. Common adoption of relation types will make relationship metadata useful to tool builders and systems. For example, programmatic queries on supporting materials require proper tagging of their respective relationship types.\nThis approach is highly extensible and accommodates the introduction of new research object forms as they emerge. It also supports associated research objects regardless of identifier type. 
When an associated entity has a DOI, however, we can validate the relationship during metadata processing as well as provide a more reliable representation of the article nexus.\nArticle nexus: a far richer scholarly map Bibliographic metadata is like a ship’s manifest that catalogs each item of cargo in a ship’s hold - crate, drum, sack, and barrel. It identifies the components that have an internal relation to the publication (contributor, funder, article update, license, etc.), each of which are well-understood points on the scholarly map. But when we integrate the article nexus into the graph, new territories become visible - not isolated islands, but places with highways connecting them to addresses already known.\nWhen a publication has its relationships clearly identified, the connections both go out as well as lead back to it. The more connections, the more visibility on the scholarly map, as the Art of Cartography goes. Numerous systems tap into this map: publishing, funders, research institutions, research councils, indexers \u0026amp; repositories, indexers, research information systems, lab \u0026amp; diagnostics systems, reference management and literature discovery, other PID suppliers. So publishers, you can provide the fullest value to your own publishing operation, your authors, their research communities, and the overall research enterprise by ensuring that all publications are fully linked both inside and out.\n", "headings": ["Research article nexus","Forming the nexus","Article nexus: a far richer scholarly map"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-metadata-api-part-1-authorea/", "title": "Using the Crossref Metadata API. Part 1 (with Authorea)", "subtitle":"", "rank": 1, "lastmod": "2016-08-18", "lastmod_ts": 1471478400, "section": "Blog", "tags": ["Metadata", "APIs", "Metadata retrieval", "REST API", "Research Nexus"], "description": "Did you know that we have a shiny, not so new, API kicking around? If you missed Geoffrey’s post in 2014 (or don’t want a Cyndi Lauper song stuck in your head all day), the short explanation is that the Crossref Metadata API exposes the information that publishers provide Crossref when they register their content with us. And it’s not just the bibliographic metadata either-funding and licensing information, full-text links (useful for text-mining), ORCID iDs and update information (via Crossmark)-are all available, if included in the publishers’ metadata.", "content": "Did you know that we have a shiny, not so new, API kicking around? If you missed Geoffrey’s post in 2014 (or don’t want a Cyndi Lauper song stuck in your head all day), the short explanation is that the Crossref Metadata API exposes the information that publishers provide Crossref when they register their content with us. And it’s not just the bibliographic metadata either-funding and licensing information, full-text links (useful for text-mining), ORCID iDs and update information (via Crossmark)-are all available, if included in the publishers’ metadata. Interested? This is the kickoff a series of case studies on the innovative and interesting things people are doing with the Metadata API. Welcome to Part 1.\nWhat can you do with the Metadata API?\nBuild search interfaces. We’ve built some ourselves. Check out Crossref Metadata Search to search the metadata of over 80 million journal articles, books, standards, datasets \u0026 more. 
Or Crossref Funder Search to search nearly 15,000 funders and the 982,162 records we have that contain funding data. Provide cross-publisher support for text and data mining applications. Get really interesting top-level reports on the metadata Crossref holds - or look at subsets of the information you’re interested in. Third parties are free to build their own products and tools that build off of the Metadata API (below are some of the many examples that we will highlight in this series). Importantly, there’s no sign-up required to use the Metadata API - the data are facts from members, therefore not subject to copyright and free to use for whatever purpose anyone chooses. To help, Scott Chamberlain of rOpenSci has built a set of robust libraries for accessing the Metadata API. These libraries are now available in the R, Python and Ruby languages. Scott’s blog post has some great information on those. For those using the libraries, there have been a few updates since Scott’s post - to serrano, and support for field queries has been added to habanero (coming to serrano and rCrossref soon). Any feedback/bug reports can be submitted via the GitHub repos serrano or habanero. There’s also a javascript library, authored by Robin Berjon. Who’s using the Crossref Metadata API?\nWe get around 30 million requests a month. We’d like to share a few case studies to showcase what they’re doing and how they’re using it. Look out for a series of posts over the next few months where we’ll open the floor to those using the API and let them explain how and why. We’ll let Authorea kick things off… Alberto Pepe, co-founder of Authorea explains:\nAuthorea is a word processor for researchers and scholars. It is a collaboration platform to write, share and openly research in real-time: write manuscripts and include rich media, such as data sets, software, source code and videos. The media-rich, data-driven capabilities of Authorea make it the perfect platform to create and disseminate a new generation of research articles, which are natively web-based, open, and reproducible. Authorea is free to use.\nHow is the Crossref Metadata API used within Authorea?\nAuthorea is specifically made for scholarly documents such as research articles, conference papers, grey literature, class notes, student papers, and problem sets. What makes scholarly documents so peculiar are their citations and references, mathematical notation, tables, and data. For citations and references, we built a citation tool which allows authors to search and cite scholarly papers with ease, without having to leave the editor. While in the middle of writing a sentence, authors can click the “cite” button and a citation tool opens up:\nWe currently use two engines for searching scholarly literature via their APIs: Crossref and Pubmed. Our authors love being able to search (by author name, paper title, topic, etc) and add references to their papers on the fly, in one click.\nWhat are the future plans for Authorea?\nAmong the many plans we have for the future, there is one which is also tied to Crossref: we are going to let authors assign DOIs to Authorea articles such as blog posts, preprints, “data papers”, “software papers” and other kinds of grey literature which does not fit in the traditional scholarly journals.\nWhat else would you like to see in our metadata?\nWell, since you ask: we would love to see unique BibTex IDs being served by the Metadata API (right now, you create the ID automatically using author name and year). 
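As a small illustration of that author-plus-year key building (a sketch only: it uses the habanero library mentioned above, the DOI is just an example, and call details can differ between habanero versions):

```python
# Build an author-plus-year citation key from Crossref metadata, with fallbacks
# for records where author or date fields are missing. Illustrative only.
from habanero import Crossref

cr = Crossref()
work = cr.works(ids="10.7717/peerj.1135")["message"]  # example DOI

authors = work.get("author", [])
family = authors[0].get("family", "anon") if authors else "anon"
date_parts = work.get("issued", {}).get("date-parts", [[None]])
year = date_parts[0][0] or "n.d."

bibtex_key = f"{family.lower()}{year}"
print(bibtex_key)  # e.g. "smith2015"; duplicate keys would still need disambiguating
```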
Also, in some cases, some important metadata fields are missing (even author or title). I think it is actually more important to fix existing metadata rather than add new fields! Keen to share what you’re doing with the Crossref Metadata API? Contact feedback@crossref.org and share your story.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/get-ready-for-crossmark-2.0/", "title": "Get ready for Crossmark 2.0!", "subtitle":"", "rank": 1, "lastmod": "2016-08-17", "lastmod_ts": 1471392000, "section": "Blog", "tags": ["Crossmark"], "description": "TL;DR… In a few weeks, publishers can upgrade to the new and improved Crossmark 2.0 including a mobile-friendly pop-up box and new button. We will provide a new snippet of code for your landing pages, and we’ll support version v1.5 until March 2017.\nWe recently revealed a new look for the Crossmark box, bringing it up-to-date in design and offering extra space for more metadata. The new box pulls all of a publication’s Crossmark metadata into the same space, so readers no longer have to click between tabs.", "content": "TL;DR… In a few weeks, publishers can upgrade to the new and improved Crossmark 2.0 including a mobile-friendly pop-up box and new button. We will provide a new snippet of code for your landing pages, and we’ll support version v1.5 until March 2017.\nWe recently revealed a new look for the Crossmark box, bringing it up-to-date in design and offering extra space for more metadata. The new box pulls all of a publication’s Crossmark metadata into the same space, so readers no longer have to click between tabs. Linked Clinical Trials and author names (including ORCID iDs) now have their own sections alongside funding information and licenses. Feedback so far tells us that the new box is a vast improvement.\nHowever, this was only phase one of the Crossmark makeover. We will soon complete the upgrade to display a fully responsive, mobile-friendly box. The Crossmark button has been given a facelift too, and we are excited to offer the first public preview today:\nThe new button brings the Crossmark icon up to date and is designed to be more “clickable” than the current button. It will be available in several different ratios and also in greyscale.\nThe first phase of the new design was rolled out in the existing Crossmark pop up window (Crossmark v1.5) without the need for changes within publisher systems. For the Crossmark v2.0 upgrade, publishers will need to update their landing pages with a new snippet of code, to ‘unlock’ the new button and functional enhancements.\nCrossmark 2.0 will be available to adopt in a few weeks, and each publisher can decide when to switch over. We encourage members to upgrade sooner rather than later to get the benefits of the new box, but we also understand there are planned development schedules and the need for a testing period so we will continue to support Crossmark v1.5 until March 2017.\nMany thanks to all of those who completed our surveys to help us shape the new button. 
And congratulations to Elizabeth Ramsey, a researcher from Trent University in Canada, who will be receiving a limited edition Crossref Moleskine notebook from the survey prize draw.\nOur User Experience Designer, Rakesh Masih, will be blogging soon with details about the research and testing for this project, as well as more about our new approach to user experience at Crossref.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/getting-ready-to-run-with-preprints-any-day-now/", "title": "Getting ready to run with preprints, any day now", "subtitle":"", "rank": 1, "lastmod": "2016-08-16", "lastmod_ts": 1471305600, "section": "Blog", "tags": [], "description": "While preprints have been a formal part of scholarly communications for decades in certain communities, they have not been fully adopted to date across most disciplines or systems. That may be changing very soon and quite rapidly, as new initiatives come thick and fast from researchers, funders, and publishers alike. This flurry of activity points to the realization from these parties of preprints’ potential benefits:\nAccelerating the sharing of results; Catalyzing research discovery; Establishing priority of discoveries and ideas; Facilitating career advancement; and Improving the culture of communication within the scholarly community.", "content": "\rWhile preprints have been a formal part of scholarly communications for decades in certain communities, they have not been fully adopted to date across most disciplines or systems. That may be changing very soon and quite rapidly, as new initiatives come thick and fast from researchers, funders, and publishers alike. This flurry of activity points to the realization from these parties of preprints’ potential benefits:\nAccelerating the sharing of results; Catalyzing research discovery; Establishing priority of discoveries and ideas; Facilitating career advancement; and Improving the culture of communication within the scholarly community. To acknowledge them as a legitimate part of the research story, we need to fully build preprints into the broader research infrastructure. Preprints need infrastructure support just like journal articles, monographs, and other formal research outputs. Otherwise, we (continue to) have a two-tiered scholarly communications system, unlinked and operating independently.\nInfrastructure for preprints For this reason, the team at Crossref is extending its infrastructure services to allow members to register preprints. This new development is designed to provide custom support for preprints. It will ensure that: links to these publications persist over time; they are connected to the full history of the shared research results; and the citation record is clear and up-to-date. We established this preprints service to fully integrate preprint publications into the formal scholarly record with features such as:\nCrossref membership for preprint repositories, joining the community of publishers who have made a commitment to maintain and connect scholarly publications. Persistent identifiers for preprints to ensure successful links to the scholarly record over the course of time via the DOI resolver. Content Registration for preprints with custom metadata that reflect researcher workflows from preprint to formal publication. Notification of links between preprints and formal publications that may follow (journal articles, monographs, etc.). 
Collection of “event data” that capture activities surrounding preprints (usage, social shares, mentions, discussions, recommendations, links to datasets and other research entities, etc.). Reference linking for preprints, connecting up the scholarly record to associated literature Auto-update of ORCID records to ensure that preprint contributors get credit for their work. Preprint and funder registration to automatically report research contributions based on funder and grant identification. Supporting utility \u0026amp; effectiveness of preprints for all To build the service, we are listening to the research community tell us their vision of what preprints will do. We solicited use cases from the community and have built a registry of preprint user stories with researchers, publishers, funding agencies, tenure and promotion committees in academic institutions, and technology providers. To realize the user stories, the research enterprise will no doubt need brand new tools and existing systems enhancements. Crossref’s preprints infrastructure will support the development of all needs currently registered. The community at large can focus on building effective solutions, instead of finding or securing access to data. All data are available without restriction to all so that participants as well the services and systems supporting them can access the data and reuse it for advancing early dissemination, literature discovery, research tracking, promotion and funding assessment, etc. These are exciting days for scholarly communications. Over time, we envision an even more vibrant ecosystem of research outputs that include existing artefacts linked up to preprints. And Crossref is committed to providing infrastructure for the dynamic enterprise all along the way.\nWe plan to announce the availability of the preprints infrastructure and further technical details within the next few weeks. If you’re interested in learning more about how these will be supported, get in touch! ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-aws-s3-as-a-large-key-value-store-for-chronograph/", "title": "Using AWS S3 as a large key-value store for Chronograph", "subtitle":"", "rank": 1, "lastmod": "2016-08-02", "lastmod_ts": 1470096000, "section": "Blog", "tags": [], "description": "One of the cool things about working in Crossref Labs is that interesting experiments come up from time to time. One experiment, entitled “what happens if you plot DOI referral domains on a chart?” turned into the Chronograph project. In case you missed it, Chronograph analyses our DOI resolution logs and shows how many times each DOI link was resolved per month, and also how many times a given domain referred traffic to DOI links per day.", "content": "One of the cool things about working in Crossref Labs is that interesting experiments come up from time to time. One experiment, entitled “what happens if you plot DOI referral domains on a chart?” turned into the Chronograph project. In case you missed it, Chronograph analyses our DOI resolution logs and shows how many times each DOI link was resolved per month, and also how many times a given domain referred traffic to DOI links per day.\nWe’ve released a new version of Chronograph. This post explains how it was put together. One for the programmers out there.\nBig enough to be annoying Chronograph sits on the boundary between normal-sized data and large-enough-to-be-annoying-size data. 
It doesn’t store data for all DOIs (it includes only those that are used on average once a day), but it has information on up to 1 million DOIs per month over about 5 years, and about 500 million data points in total.\nStoring 500 million data points is within the capabilities of a well-configured database. In the first iteration of Chronograph a MySQL database was used. But that kind of data starts to get tricky to back up, move around and index.\nEvery month or two new data comes in for processing, and it needs to be uploaded and merged into the database. Indexes need to be updated. Disk space needs to be monitored. This can be tedious.\nKey values Because the data for a DOI is all retrieved at once, it can be stored together. So instead of a table that looks like\n10.5555/12345678 | 2010-01-01 | 5\n10.5555/12345678 | 2010-02-01 | 7\n10.5555/12345678 | 2010-03-01 | 3\nInstead we can store\n10.5555/12345678 | {“2010-01-01”: 5, “2010-02-01”: 7, “2010-03-01”: 3}\nThis is much lighter on the indexes and takes much less space to store. However, it means that adding new data is expensive. Every time there’s new data for a month, the structure must be parsed, merged with the new data, serialised and stored again millions of times over.\nAfter trials with MySQL, MongoDB and MapDB, this approach was taken with MySQL in the original Chronograph.\nKeep it Simple Storage Service Stupid In the original version of Chronograph the data was processed using Apache Spark. There are various solutions for storing this kind of data, including Cassandra, time-series databases and so on.\nThe flip side of being able to do interesting experiments is wanting them to stick around without having to bother a sysadmin. The data is important to us, but we’d rather not have to worry about running another server and database if possible.\nChronograph fits into the category of ‘interesting’ rather than ‘mission-critical’ projects, so we’d rather not have to maintain expensive infrastructure if possible.\nI decided to look into using Amazon Web Services Simple Storage Service (AWS S3) to store the data. S3 itself is a key-value store, so it seems like a good fit. S3 is a great service because, as the name suggests, it’s a simple service for storing a large number of files. It’s cheap and its capabilities and cost scale well.\nHowever, storing and updating up to 80 million very small keys (one per DOI) isn’t very clever, and certainly isn’t practical. I looked at DynamoDB, but we still face the overhead of making a large number of small updates.\nIs it weird? In these days of plentiful databases with cheap indexes (and by ‘these days’ I mean the 1970s onward) it seems somehow wrong to use plain old text files. However, the whole Hadoop “Big Data” movement was predicated on a return to batch processing files. 
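A tiny sketch of the per-DOI consolidation step described under “Key values” above (Python assumed; the blob layout mirrors the example given there, and the function name is illustrative):

```python
import json

def merge_month(stored_blob: str, new_points: dict) -> str:
    """Merge a month of new date->count points into a DOI's stored JSON blob.

    This is the expensive step described above: the whole blob has to be
    parsed, merged and re-serialised every time new data arrives, repeated
    millions of times over when a new month of logs is processed.
    """
    counts = json.loads(stored_blob) if stored_blob else {}
    counts.update(new_points)
    return json.dumps(counts, sort_keys=True)

# Example: the blob for 10.5555/12345678 grows by one month of data.
blob = '{"2010-01-01": 5, "2010-02-01": 7}'
blob = merge_month(blob, {"2010-03-01": 3})
print(blob)  # {"2010-01-01": 5, "2010-02-01": 7, "2010-03-01": 3}
```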
Commoditisation of services like S3 and the shift to do more in the browser have precipitated a bit of a rethink. The movement to abandon LAMP stacks and use static site generators is picking up pace. The term ‘serverless architecture’ is hard to avoid if you read certain news sites.\nUsing Apache Spark (with its brilliant RDD concept) was useful for bootstrapping the data processing for Chronograph, but the new code has an entirely flat-file workflow. The simplicity of not having to unnecessarily maintain a Hadoop HDFS instance seems to be the right choice in this case.\nRepurposing the Wheel The solution was to use S3 as a big hash table to store the final data that’s served to users.\nThe processing pipeline uses flat files all the way through from input log files to projections to aggregations. At the penultimate stage of the pipeline blocks of CSV per DOI are produced that represent date-value pairs.\n10.5555/12345678 | 2010-01 | 2010-01-01,05 2010-02-01,02 2010-01-03,08 …\n10.5555/12345678 | 2010-02 | 2010-02-1,10 2010-02-01,7 2010-02-03,22 …\nAt the last stage, these are combined into blocks of all dates for a DOI\n10.5555/12345678 | 2010-01 | 2010-01-01,05 2010-02-01,02 2010-01-03,08 … 2010-02-1,10 2010-02-01,7 2010-02-03,22 …\nThe DOIs are then hashed into 12 bits and stored as chunks of CSV\nday-doi.csv-chunks_8841:\n10.1038/ng.3020 2014-06-24,4 2014-06-25,4 2014-06-26,3 ... 10.1007/978-94-007-2869-1_7 2012-06-01,12 2012-06-02,8 ... 10.1371/journal.pone.0145509 2016-02-01,13 2016-02-02,75 2016-02-03,30 ... There are 65,536 (0x000 to 0xFFFF) possible files, each with about a thousand DOIs worth of data in each.\nWhen the browser requests data for a DOI, it is hashed and then the request for the appropriate file in S3 is made. The browser then has to perform a linear scan of the file to find the DOI it is looking for.\nThis is the simplest possible form of hash table: simple addressing with separate linear chaining. The hash function is a 16-bit mask of MD5, chosen because of availability in the browser. It does a great job of evenly distributing the DOIs over all 65,536 possible files.\nStriking the balance In any data structure implementation, there are balances to be struck. Traditionally these concern memory layout, the shape of the data, practicalities of disk access and CPU cost.\nIn this instance, the factors in play included the number of buckets that need to be uploaded and the cost of the browser downloading an over-large bucket. 
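A minimal sketch of that bucket lookup (Python here rather than the browser-side JavaScript; the prose above mentions both a 12-bit and a 16-bit mask, so the width is a parameter, and the chunk naming and base URL are illustrative rather than Chronograph’s actual layout):

```python
# Hash a DOI with MD5, keep a short prefix of the digest to pick a chunk file,
# fetch that chunk, then linearly scan it for the DOI's date,count rows.
import hashlib
import requests

BASE_URL = "https://example-bucket.s3.amazonaws.com/day-doi.csv-chunks_"  # placeholder

def bucket_for(doi: str, bits: int = 16) -> str:
    """One plausible reading of the N-bit MD5 mask: the first N bits, as hex."""
    digest = hashlib.md5(doi.encode("utf-8")).hexdigest()
    return digest[: bits // 4]  # 16 bits -> 4 hex characters, 12 bits -> 3

def lookup(doi: str) -> list[str]:
    """Fetch the DOI's chunk and return the rows that belong to it."""
    chunk = requests.get(BASE_URL + bucket_for(doi), timeout=30).text
    keep, rows = False, []
    for line in chunk.splitlines():
        if not line.strip():
            continue
        if "," not in line:              # in this sketch, a bare line is a DOI header
            keep = (line.strip() == doi)
        elif keep:
            rows.append(line.strip())    # "2014-06-24,4"-style date,count pairs
    return rows

print(bucket_for("10.1038/ng.3020"))  # which chunk file the browser would fetch
```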
The size of the bucket doesn’t matter much for CPU (as far as the user is concerned it takes about the same time to scan 10 entries as it does 10,000), but it does make a difference asking user to download a 10kb bucket or a 10MB one.\nI struck the balance at 4096 buckets, resulting in files of around 100k, which is the size of a medium sized image.\nIt works The result is a simple system that allows people to look up data for millions of DOIs, without having to look after another server. It’s also portable to any other file storage service.\nThe approach isn’t groundbreaking, but it works.\n", "headings": ["Big enough to be annoying","Key values","Keep it Simple Storage Service Stupid","Is it weird?","Repurposing the Wheel","Striking the balance","It works"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-fairer-approach-to-waiting-for-deposits/", "title": "A fairer approach to waiting for deposits", "subtitle":"", "rank": 1, "lastmod": "2016-07-20", "lastmod_ts": 1468972800, "section": "Blog", "tags": [], "description": "If you ever see me in the checkout line at some store do not ever get in the line I’m in. It is always the absolute slowest.\nCrossref’s metadata system has a sort of checkout line, when members send in their data they got processed essentially in a first come first served basis. It’s called the deposit queue. We had controls to prevent anyone from monopolizing the queue and ways to jump forward in the queue but our primary goal was to give everyone a fair shot at getting processed as soon as possible.", "content": "If you ever see me in the checkout line at some store do not ever get in the line I’m in. It is always the absolute slowest.\nCrossref’s metadata system has a sort of checkout line, when members send in their data they got processed essentially in a first come first served basis. It’s called the deposit queue. We had controls to prevent anyone from monopolizing the queue and ways to jump forward in the queue but our primary goal was to give everyone a fair shot at getting processed as soon as possible. With many different behaviors by our members this could often be a challenge and at times some folks were not 100% happy.\nWe recently made a change where the queue now cycles through all waiting users and selects a job from each. This means that low-frequency users will always get a pretty fast service even if there are a lot of unique users waiting. Everyone gets one bite of the apple on each cycle through the waiting list. Of course, we still have some special controls to help deal with large quantities of files from a single user and ways to jump the queue under really special circumstances.\nWe believe this will, on average, yield a better experience and minimize the backups that formerly required administrator attention to resolve.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2016-upcoming-events-were-out-and-about/", "title": "2016 upcoming events - we’re out and about!", "subtitle":"", "rank": 1, "lastmod": "2016-07-06", "lastmod_ts": 1467763200, "section": "Blog", "tags": [], "description": "Check out the events below where Crossref will attend or present in 2016. We have been busy over the past few months, and we have more planned for the rest of year. If we will be at a place near you, please come see us (and support these organizations and events)! 
Upcoming Events\nSHARE Community Meeting, July 11-14, Charlottesville, VA, USA\nCrossref Outreach Day - July 19-21 - Seoul, South Korea", "content": " Check out the events below where Crossref will attend or present in 2016. We have been busy over the past few months, and we have more planned for the rest of year. If we will be at a place near you, please come see us (and support these organizations and events)! Upcoming Events\nSHARE Community Meeting, July 11-14, Charlottesville, VA, USA\nCrossref Outreach Day - July 19-21 - Seoul, South Korea\nCASE 2016 Conference - July 20-22 - Seoul, South Korea\nACSE Annual Meeting 2016 - August 10-11 - Dubai, UAE\nVivo 2016 Conference - August 17-19 - Denver CO, USA\nSciDataCon - September 11-17 - Denver CO, USA\nALPSP - September 14-16 - London, UK\nOASPA - September 21-22 - Arlington VA, USA\n3:AM Conference - September 26 - 28 - Bucharest, Romania\nORCID Outreach Conference - October 5-6 - Washington DC, USA\nFrankfurt Book Fair - October 19-23 - Frankfurt, Germany (Hall 4.2, Stand #4.2 M 85)\nCrossref Annual Community Meeting #Crossref16 - November 1-2 - London, UK**\nPIDapalooza - November 9-10 - Reykjavik, Iceland\nOpenCon 2016 - November 12-14 - Washington DC, USA\nSTM Digital Publishing Conference - December 6-8 - London, UK The Crossref outreach team will host a number of outreach events around the globe. Updates about events are shared through social media so please connect with us via @CrossrefOrg. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/doi-like-strings-and-fake-dois/", "title": "DOI-like strings and fake DOIs", "subtitle":"", "rank": 1, "lastmod": "2016-06-29", "lastmod_ts": 1467158400, "section": "Blog", "tags": [], "description": "TL;DR Crossref discourages our members from using DOI-like strings or fake DOIs.\nDetails Recently we have seen quite a bit of debate around the use of so-called “fake-DOIs.” We have also been quoted as saying that we discourage the use of “fake DOIs” or “DOI-like strings”. This post outlines some of the cases in which we’ve seen fake DOIs used and why we recommend against doing so.\nUsing DOI-like strings as internal identifiers Some of our members use DOI-like strings as internal identifiers for their manuscript tracking systems.", "content": "TL;DR Crossref discourages our members from using DOI-like strings or fake DOIs.\nDetails Recently we have seen quite a bit of debate around the use of so-called “fake-DOIs.” We have also been quoted as saying that we discourage the use of “fake DOIs” or “DOI-like strings”. This post outlines some of the cases in which we’ve seen fake DOIs used and why we recommend against doing so.\nUsing DOI-like strings as internal identifiers Some of our members use DOI-like strings as internal identifiers for their manuscript tracking systems. These only get registered as real DOIs with Crossref once an article is published. This seems relatively harmless, except that, frequently, the unregistered DOI-like strings for unpublished (e.g. under review or rejected manuscripts) content ‘escape’ into the public as well. People attempting to use these DOI-like strings get understandably confused and angry when they don’t resolve or otherwise work as DOIs. 
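A quick way to tell the two apart, sketched with Python against the public REST API (this checks Crossref registration specifically; the example strings are the ones discussed later in this post, and their status reflects the time of writing):

```python
# Ask the Crossref REST API for a DOI's metadata: registered DOIs return a
# record, unregistered DOI-like strings come back as 404. (DOIs registered with
# other agencies, e.g. DataCite, live behind their own APIs.)
import requests

def is_registered_with_crossref(doi: str) -> bool:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    return resp.status_code == 200

print(is_registered_with_crossref("10.5454/JPSv1i220161014"))  # the post's DOI-like string
print(is_registered_with_crossref("10.5555/12345678"))         # the post's registered test DOI
```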
After years of experiencing the frustration that these DOI-like things cause, we have taken to recommending that our members not use DOI-like strings as their internal identifiers.\nUsing DOI-like strings in access control compliance applications We’ve also had members use DOI-like strings as the basis for systems that they use to detect and block tools designed to bypass the member’s access control system and bulk-download content. The methods employed by our members have fallen into two broad categories:\nSpider (or robot) traps. Proxy bait. Spider traps A “spider trap” is essentially a tripwire that allows a site owner to detect when a spider/robot is crawling their site to download content. The technique involves embedding a special trigger URL in a public page on a web site. The URL is embedded such that a normal user should not be able see it or follow it, but an automated bot (aka “spider”) will detect it and follow it. The theory is that when one of these trap URLs is followed, the website owner can then conclude that the ip address from which it was followed harbours a bot and take action. Usually the action is to inform the organisation from which the bot is connecting and to ask them to block it. But sometimes triggering a spider trap has resulted in the IP address associated with it being instantly cut off. This, in turn, can affect an entire university’s access to said member’s content.\nWhen a spider/bot trap includes a DOI-like string, then we have seen some particularly pernicious problems as they can trip-up legitimate tools and activities as well. For example, a bibliographic management browser plugin might automatically extract DOIs and retrieve metadata on pages visited by a researcher. If the plugin were to pick up one of these spider traps DOI-like strings, it might inadvertently trigger the researcher being blocked- or worse- the researcher’s entire university being blocked. In the past, this has even been a problem for Crossref itself. We periodically run tools to test DOI resolution and to ensure that our members are properly displaying DOIs, Crossmarks, and metadata as per their member obligations. We’ve occasionally been blocked when we ran across the spider traps as well.\nProxy bait Using proxy bait is similar to using a spider trap, but it has an important difference. It does not involve embedding specially crafted DOI like strings on the member’s website itself. The DOI-like strings are instead fed directly to tools designed to subvert the member’s access control systems. These tools, in turn, use proxies on a subscriber’s network to retrieve the “bait” DOI-like string. When the member sees one of these special DOI-like strings being requested from a particular institution, they then know that said institution’s network harbours a proxy. In theory this technique never exposes the DOI-like strings to the public and automated tools should not be able to stumble upon them. However, recently one of our members had some of these DOI-like strings “escape” into the public and at least one of them was indexed by Google. The problem was compounded because people clicking on these DOI-like strings sometimes ended having their university’s IP address banned from the member’s web site. As you can imagine, there has been a lot of gnashing of teeth. We are convinced, in this case, that the member was doing their best to make sure the DOI-like strings never entered the public. But they did nonetheless. 
We think this just underscores how hard it is to ensure DOI-like strings remain private and why we recommend our members not use them.\nPedantry and terminology Notice that we have not used the phrase “fake DOI” yet. This is because, internally, at least, we have distinguished between “DOI-like strings” and “fake DOIs.” The terminology might be daft, but it is what we’ve used in the past and some of our members at least will be familiar with it. We don’t expect anybody outside of Crossref to know this.\nTo us, the following is not a DOI:\n10.5454/JPSv1i220161014\nIt is simply a string of alphanumeric characters that copy the DOI syntax. We call them “DOI-like strings.” It is not registered with any DOI registration agency and one cannot lookup metadata for it. If you try to “resolve” it, you will simply get an error. Here, you can try it. Don’t worry- clicking on it will not disable access for your university.\nhttp://doi.org/10.5454/JPSv1i220161014\nThe following is what we have sometimes called a “fake DOI”\n10.5555/12345678\nIt is registered with Crossref, resolves to a fake article in a fake journal called The Journal of Psychoceramics (the study of Cracked Pots) run by a fictitious author (Josiah Carberry) who has a fake ORCID (http://orcid.org/0000-0002-1825-0097) but who is affiliated with a real university (Brown University).\nAgain, you can try it.\nhttp://doi.org/10.5555/12345678\nAnd you can even look up metadata for it.\nhttps://api.crossref.org/v1/works/10.5555/12345678\nOur dirty little secret is that this “fake DOI” was registered and is controlled by Crossref.\nWhy does this exist? Aren’t we subverting the scholarly record? Isn’t this awful? Aren’t we at the very least hypocrites? And how does a real university feel about having this fake author and journal associated with them?\nWell- the DOI is using a prefix that we use for testing. It follows a long tradition of test identifiers starting with “5”. Fake phone numbers in the US start with “555”. Many credit card companies reserve fake numbers starting with “5”. For example, Mastercard’s are “5555555555554444” and “5105105105105100.”\nWe have created this fake DOI, the fake journal and the fake ORCID so that we can test our systems and demonstrate interoperable features and tools. The fake author, Josiah Carberry, is a long-running joke at Brown University. He even has a Wikipedia entry. There are also a lot of other DOIs under the test prefix “5555.”\nWe acknowledge that the term “fake DOI” might not be the best in this case- but it is a term we’ve used internally at least and it is worth distinguishing it from the case of DOI-like strings mentioned above.\nBut back to the important stuff….\nAs far as we know, none of our members has ever registered a “fake DOI” (as defined above) in order to detect and prevent the circumvention of their access control systems. If they had, we would consider it much more serious than the mere creation of DOI-like strings. The information associated with registered DOIs becomes part of the persistent scholarly citation record. Many, many third party systems and tools make use of our API and metadata including bibliographic management tools, TDM tools, CRIS systems, altmetrics services, etc. It would be a very bad thing if people started to worry that the legitimate use of registered DOIs could inadvertently block them from accessing content. 
Crossref DOIs are designed to encourage discovery and access- not block it.\nAnd again, we have absolutely no evidence that any of our members has registered fake DOIs.\nBut just in case, we will continue to discourage our members from using DOI-like strings and/or registering fake DOIs.\nThis has been a public service announcement from the identifier dweebs at Crossref.\nImage Credits Unless otherwise noted, included images purchased from The Noun Project\n", "headings": ["TL;DR","Details","Using DOI-like strings as internal identifiers","Using DOI-like strings in access control compliance applications","Spider traps","Proxy bait","Pedantry and terminology","Image Credits"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/outreach-day-dc.-next-up-you-tell-us/", "title": "Outreach Day DC. Next Up? You Tell Us", "subtitle":"", "rank": 1, "lastmod": "2016-06-28", "lastmod_ts": 1467072000, "section": "Blog", "tags": [], "description": "Rallying the community is a key Crossref role. Sometimes this means collaborating on new initiatives but it is also an ongoing process, a cornerstone of our outreach efforts. Part of rallying the community is bringing people together, literally, in a series of outreach days around the globe. It means we encourage dialog with us and among members and non-publisher affiliates. We want to hear from the community and we hope to facilitate conversations in it.", "content": "Rallying the community is a key Crossref role. Sometimes this means collaborating on new initiatives but it is also an ongoing process, a cornerstone of our outreach efforts. Part of rallying the community is bringing people together, literally, in a series of outreach days around the globe. It means we encourage dialog with us and among members and non-publisher affiliates. We want to hear from the community and we hope to facilitate conversations in it. Not just about Crossref, but larger issues of scholarly communications and your particular part in it. The Crossref outreach team is doing a number of events around the world to bring together the community for updates, feedback and discussion.\nOn 16 June, Crossref hosted an all-day session in Washington, DC, where we were joined by about 35 attendees from the region, mostly publishers. The size of the group made for lots of discussion, and we are grateful for the feedback. Here is what we took away from the event:\nWe all need a better understanding of who is using Crossref metadata and how Sure, we all know that, for example, submission systems, libraries and hosting platforms use Crossref metadata (‘metadata out’), but pinpointing where in workflows (often multiple instances) and the interplay between publishers and these systems? Not so much. Help us change that: take this short survey to tell us how publisher metadata quality affects your systems and workflows and we will, in turn, make use cases (anonymized if you wish) available as part of an ongoing effort to promote the value of more, better and enriched metadata.\nHere I must say a big thank you to our guest speaker for the day, Carly Robinson, who provided an excellent presentation on the work of OSTI, of the U.S. Department of Energy. Carly shared examples of how OSTI uses the Crossref metadata in their systems to aid compliance and complement the DOE public access model. 
A live use case is a welcome way to partner with our community!\nThe more things change, the more they emphasize core best practices A good part of the day was spent on new initiatives such as: DOIs for preprints, auto-update of ORCID records, \u0026lsquo;early content registration\u0026rsquo; , linked clinical trials and more. All good stuff-the industry evolves and workflows must keep pace-but none of which generated a great deal of questions or expressed concern.\nOne session that did spur a lot of discussion was a simple overview of where Crossref services sit in the publishing process (including pre- and post-). Perhaps this is because it was early in the day but the much-appreciated discussion underscored the need to make the case for enriched metadata in a well-understood workflow that reflects the roles of publishers and affiliate users of metadata.\nOutreach is an experiment in which we are all subjects Finally, it must be noted here that we actively seek feedback on our Community Outreach days! We are not a large team and we can’t do as many outreach days as we’d like, but we are very open to hearing from you: So, tell us in this quick survey: what should we discuss? And where should we head next?\n", "headings": ["We all need a better understanding of who is using Crossref metadata and how","The more things change, the more they emphasize core best practices","Outreach is an experiment in which we are all subjects"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/hello-preprints-whats-your-story/", "title": "Hello preprints, what’s your story?", "subtitle":"", "rank": 1, "lastmod": "2016-06-23", "lastmod_ts": 1466640000, "section": "Blog", "tags": [], "description": "The role of preprints Crossref provides infrastructure services and therefore we support scholarly communications as it evolves over time. Today, preprints are increasingly discussed as a valuable part of the research story (beyond physics, math, and a small set of sub-disciplines). Preprints might play a positive role in catalyzing research discovery, establishing priority of discoveries and ideas, facilitating career advancement, and improving the culture of communication within the scholarly community.", "content": "The role of preprints Crossref provides infrastructure services and therefore we support scholarly communications as it evolves over time. Today, preprints are increasingly discussed as a valuable part of the research story (beyond physics, math, and a small set of sub-disciplines). Preprints might play a positive role in catalyzing research discovery, establishing priority of discoveries and ideas, facilitating career advancement, and improving the culture of communication within the scholarly community.\nAs we shared in an earlier blog post last month, members will be able to register Crossref DOIs for preprints later this year. We will connect the full history of a research work, and ensure the citation record is clear and up-to-date. As we build out this new record/resource type, we’d love to hear how the research community envisions what preprints will do.\nWhat’s your story, preprint? So we can develop a service that supports the whole host of potential uses for all stakeholders, we ask the entire research community to contribute preprints user stories . User stories are concrete descriptions of a specific need, typically used in technology development: As a [x], I want to [y] to that I can [z]. 
User stories take the “end-user’s” perspective as they focus on a discrete result and its value. They are essential when implementing solutions that must meet a wide range of needs, across a diverse set of constituents. For example:\nAs an author, I want to share results before my paper is submitted to a journal so that I can get rapid feedback on it and make improvements before publication.\nAs a researcher who is part of a tenure and promotion committee or funder review panel, I want to know the reach of early results published from the candidate so that I can more quickly track the impact of results, rather than relying only on journal articles that take much longer to publish.\nAs a journal publisher, I want to know whether a preprint exists for a manuscript submitted to me so that I can decide whether I will accept the submission based on my editorial policy.\nWe aim to assemble a full catalog that cuts across research disciplines and stakeholder groups. We want to hear from you: researchers, publishers, funding agencies, scholarly societies, academic institutions, technology providers, other infrastructure providers , etc .\nTell us your story here To ensure that your needs are included, please send us your user stories via this user story “deposit” form . They will be added to the full registry of contributions from the community, which we hope will serve as a key resource for all those developing preprints into a core part of scholarly communications (e.g., ASAPbio, etc.).\n", "headings": ["The role of preprints","What’s your story, preprint?","Tell us your story here"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/clinical-trials/", "title": "Clinical Trials", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linked-clinical-trials-initiative-gathers-momentum/", "title": "Linked Clinical Trials initiative gathers momentum", "subtitle":"", "rank": 1, "lastmod": "2016-06-21", "lastmod_ts": 1466467200, "section": "Blog", "tags": [], "description": "We now have linked clinical trials deposits coming in from five publishers: BioMedCentral, BMJ, Elsevier, National Institute for Health Research and PLOS. It’s still a relatively small pool of metadata - around 4000 DOIs with associated clinical trial numbers - but we’re delighted to see that “threads” of publications are already starting to form.\nIf you look at this article in The Lancet and click on the Crossmark button you will see that in the Clinical Trials section there are links to three other articles reporting on the same trial: two from the American Heart Journal and one from BMJ’s Heart.", "content": "We now have linked clinical trials deposits coming in from five publishers: BioMedCentral, BMJ, Elsevier, National Institute for Health Research and PLOS. It’s still a relatively small pool of metadata - around 4000 DOIs with associated clinical trial numbers - but we’re delighted to see that “threads” of publications are already starting to form.\nIf you look at this article in The Lancet and click on the Crossmark button you will see that in the Clinical Trials section there are links to three other articles reporting on the same trial: two from the American Heart Journal and one from BMJ’s Heart. 
Readers can navigate between these four articles in three separate journals using the Crossmark functionality- a new set of links and routes for discovery have appeared.\nIn another example, three articles from PLOS ONE are threaded together around a trial for the treatment of Type 1 diabetes. And here another PLOS journal, Neglected Tropical Diseases links through to a PLOS ONE article about the same trial.\nIf you publish in the health sciences please do consider joining this exciting initiative so that we can expand these threads and build up the metadata. Read the tech specs here or drop me an email if you have questions.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/distributing-references-via-crossref/", "title": "Distributing references via Crossref", "subtitle":"", "rank": 1, "lastmod": "2016-06-17", "lastmod_ts": 1466121600, "section": "Blog", "tags": [], "description": "Known unknowns If you follow this blog, you are going to notice a theme over the coming months- Crossref supports the deposit and distribution of a lot more kinds of metadata than people usually realise.\nWe are in the process of completely revamping our web site, help documentation, and marketing to better promote our metadata distribution capabilities, but in the mean time we think it would be useful highlight one of our most under-promoted functions- the ability to distribute references via Crossref.", "content": "Known unknowns If you follow this blog, you are going to notice a theme over the coming months- Crossref supports the deposit and distribution of a lot more kinds of metadata than people usually realise.\nWe are in the process of completely revamping our web site, help documentation, and marketing to better promote our metadata distribution capabilities, but in the mean time we think it would be useful highlight one of our most under-promoted functions- the ability to distribute references via Crossref.\nOne of the questions we most often get from members is- “can we distribute references via Crossref?” The answer is an emphatic yes. But to do so, you have to take an extra and hitherto obscure step to enable reference distribution. [EDIT 6th June 2022 - all references are now open by default with the March 2022 board vote to remove any restrictions on reference distribution].\nHow? Many members deposit references to Crossref as part of their participation in Crossref’s CitedBy service. However - for historical reasons too tedious to go into- participation in CitedBy does not automatically make references available via Crossref’s standard APIs. In order for publishers to distribute references along with standard bibliographic metadata, publishers need to either:\nContact Crossref support and ask them to turn on reference distribution for all of the prefixes they manage. Set the reference_distribution_opt element to any for each content item registered where they want to make references openly available. Either of these steps will allow references for the affected member DOIs to be distributed without restriction through all of Crossrefs APIs and bulk metadata dumps.\nNote that by doing this, you are not enabling the open querying of your CitedBy data- you are simply allowing the references that you already deposit to be redistributed to interested parties via our public APIs.\nWho? So who does this now? Well, at the moment not many members have enabled this feature. How could they? They probably didn’t know it existed. 
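For anyone who would rather script the check than click around, the member queries given just below can be wrapped in a few lines. This is a minimal sketch, assuming the has-public-references member filter and the .xml works route behave as described in the rest of this post (Python, using the third-party requests library; the fields printed are illustrative):

```python
import requests

MEMBERS = "https://api.crossref.org/v1/members"
WORKS = "https://api.crossref.org/v1/works"

# How many members distribute references for at least some of their DOIs?
total = requests.get(
    MEMBERS,
    params={"filter": "has-public-references:true", "rows": 0},
    timeout=10,
).json()["message"]["total-results"]
print(total, "members currently have public references")

# Page through the first batch of those members and print their names.
items = requests.get(
    MEMBERS,
    params={"filter": "has-public-references:true", "rows": 20},
    timeout=10,
).json()["message"]["items"]
for member in items:
    print(member["primary-name"])

# At this point references appear only in the XML representation,
# retrieved by appending .xml to the works route (eLife DOI from this post).
xml = requests.get(WORKS + "/10.7554/eLife.10288.xml", timeout=10)
print(xml.text[:500])
```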
At the time of writing, 29 publishers have enabled reference distribution for at least some of their DOIs.\nBut that’s why we are writing this post. Given the interest expressed by our members, we expect the list to start growing quickly over the next few months. Particularly now that they know they can do it and have clear instructions on how to do it. 🙂\nIf you are of a geeky persuasion and want to see the list of publishers who are doing this, you can check via our API.\nThe following query will just show you the total number of members who are distributing references for at least some of their DOIs.\nhttps://api.crossref.org/v1/members?filter=has-public-references:true\u0026rows=0 And this query will allow you to page through the member records and see who is distributing references.\nhttps://api.crossref.org/v1/members?filter=has-public-references:true That’s cool, but can you see how many total DOIs have reference distribution enabled? No, but we will be adding that capability to our API soon.\nOMG! OMG! OMG! Does this mean I can get references from api.crossref.org? Yep. But before you get too excited- note above that not many of our members are doing this yet and that our API is still being updated to allow you to better query this information. At the moment references are not included in our JSON representation- they are only included in our XML representation. You can get the XML for a Crossref DOI either through content negotiation, or by using the following incantation on our API (using an eLife DOI as an example):\nhttps://api.crossref.org/v1/works/10.7554/eLife.10288.xml\nAs we update our API to better support querying DOIs that include references, you will see the new functionality reflected in our documentation at https://0-api-crossref-org.libus.csd.mu.edu\n", "headings": ["Known unknowns","How?","Who?","OMG! OMG! OMG! Does this mean I can get references from api.crossref.org?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/beyond-the-doi-to-richer-metadata/", "title": "Beyond the DOI to richer metadata", "subtitle":"", "rank": 1, "lastmod": "2016-06-15", "lastmod_ts": 1465948800, "section": "Blog", "tags": [], "description": "The act of registering a DOI (Digital Object Identifier) for scholarly content is sometimes conflated with the notion of conferring a seal of approval or other mark of good quality upon an item of content. This is a fundamental misunderstanding.\nA DOI is a tool, not a badge of honor. The presence of a Crossref DOI on content sends a signal that:\nThe owner of the content would like to be formally cited if the content is used in a scholarly context.", "content": "The act of registering a DOI (Digital Object Identifier) for scholarly content is sometimes conflated with the notion of conferring a seal of approval or other mark of good quality upon an item of content. This is a fundamental misunderstanding.\nA DOI is a tool, not a badge of honor. The presence of a Crossref DOI on content sends a signal that:\nThe owner of the content would like to be formally cited if the content is used in a scholarly context. The owner of the content considers that it is worthy of being made persistent. Beyond the DOI\nFor Crossref, a DOI is just one of several types of metadata we register, albeit an important one. Metadata about scholarly works extends beyond the DOI. 
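The richer fields described in the next paragraph (author ORCID iDs, license terms, funder details) travel in the same works metadata, so they can be read from the REST API response for any registered DOI. A minimal sketch in Python with the third-party requests library; the eLife DOI is reused from the post above purely as an example, and each field appears only where a member has deposited it:

```python
import requests

def richer_metadata(doi):
    """Print a few non-bibliographic fields from a work's Crossref metadata."""
    msg = requests.get(
        "https://api.crossref.org/v1/works/" + doi, timeout=10
    ).json()["message"]

    for author in msg.get("author", []):
        print(author.get("given", ""), author.get("family", ""),
              author.get("ORCID", "(no ORCID deposited)"))
    for lic in msg.get("license", []):
        print("license:", lic.get("URL"))
    for funder in msg.get("funder", []):
        print("funder:", funder.get("name"), funder.get("award", []))

richer_metadata("10.7554/eLife.10288")
```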
In addition to bibliographic details, layers of information accompanying published works may now extend to data that describes the research, such as the source of research funding. It may also include non-descriptive information that facilitates usage, such as copyright and access permissions.\nIn fact, this “richer” metadata can tell you more about the context of the content deposited for a published work than you might realize. For example:\nAuthor data - Crossref metadata may include information specifying the author’s unique ORCID, allowing you to find other works by the same person. Copyright and access indicators - You can view the license terms under which the full content may be available, which is very helpful for scholars who want to access the full content for research and teaching or for text and data mining. Funding data - Metadata may also include the identity of the grant-making institution that funded the research, so that the funder and, in the case of publicly funded research, the general public and other researchers, have visibility on the resulting research outputs. Clinical Trials data - Similarly, when research involves a clinical trial, (testing of medicines and treatments on human beings), Crossref metadata can enhance output visibility by displaying the clinical trial number and the related clinical trial registry. Like the full content they describe, these metadata have become research resources in their own right. Unfortunately, too much metadata is entered into Crossref with missing, incomplete, or duplicated fields. This “bad” metadata slows the pace of discovery, confounding attempts to find and understand scholarly content and its context. As a community, we really need to do something about that.\n“The Map is not the Territory”\nAnd the metadata is not the content. In Metadata (MIT Press), Jeffrey Pomerantz quotes Alfred Korzybski’s insight that a map is a simplified representation of a territory, a tool of abstraction that allows us to find our way. Jennifer Lin contributed the concept of the scholarly road map as a useful metaphor for the way we use metadata about scholarly works to find our way between and among them in the digital world. Metadata deposited with Crossref amounts to pieces of information-structured, descriptive, administrative, contextual-about published works that humans can read and machines can use to automate linking and retrieval. The systematic development of such metadata allows us to make sense of such complex information by finding, linking, citing, and assessing scholarly content. If you want to understand how Crossref acts as a map of scholarly metadata, try searching for content on search.crossref.org (our human API interface). Or simply talk with us @CrossrefOrg and via member@crossref.org. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/our-memories-of-ssp2016/", "title": "Our memories of #SSP2016", "subtitle":"", "rank": 1, "lastmod": "2016-06-08", "lastmod_ts": 1465344000, "section": "Blog", "tags": [], "description": "Last week a bunch of Crossref’s staff traveled to the 2016 Society for Scholarly Publishing Annual Meeting in Vancouver, BC. After we returned en masse, all nine of us put our heads together to share some of our personal memories of the event. Crossref’s Rosa and Susan at the Fun Walk/Run sponsored by High Wire. 
5K before breakfast!\nOn Cybersecurity and the Scholarly World —“The session described the many and complicated security threats that IT systems face and how threat detection and defense is a constantly ongoing activity.", "content": " Last week a bunch of Crossref’s staff traveled to the 2016 Society for Scholarly Publishing Annual Meeting in Vancouver, BC. After we returned en masse, all nine of us put our heads together to share some of our personal memories of the event. Crossref’s Rosa and Susan at the Fun Walk/Run sponsored by High Wire. 5K before breakfast!\nOn Cybersecurity and the Scholarly World —“The session described the many and complicated security threats that IT systems face and how threat detection and defense is a constantly ongoing activity. Certainly system administrators are challenged with the technology issues that build firewalls, block intrusions and divert disruptive activity. But perhaps even more important are the social issues that must be managed to develop an informed user community that is immune to the less technical but probably more effective hacks like phishing for user passwords.” On Persistent Identifiers in Scholarly Communications: What, Why, How, Where, and Who? “Everyone from Crossref loved this panel, which should come as no surprise (wink). Persistent identifiers such as DOIs and ORCID iDs enable machine and human readers to discover, cite, link, and correctly attribute works across different platforms. David Crotty of the Oxford University Press said it best with \u0026#8216;If you’re not actively building these persistent identifiers into your systems, get busy!’ Alice Meadows of ORCID represented the scholarly communications infrastructure with an image of shiny copper plumbing - don’t tell me we don’t have glamorous jobs! Laura Rueda of DataCite had particularly helpful diagrams to explain how persistent identifiers ease and speed the workflow of a research object as it travels from researcher to publisher to the greater community.” On Crossing Boundaries: Encouraging Diversity in Scientific Communication with Dr. Margaret-Ann Armour — “I decided to attend this keynote when I saw that men as well as women were in the audience. Dr. Armour had great anecdotes that supported formal data on women’s roles in STEM. It made me reflect on how the path to a career in scholarly publishing is often not direct, and relies on personal networking. She was very witty and deserved her standing ovation.” On Standards and Recommended Practices to Support Adoption of Altmetrics — “Todd Carpenter summed up the intent behind many altmetrics initiatives when he said that understanding how many people are using and reading scholarly content is important because \u0026#8216;we all want to know how we’re doing’ but \u0026#8216;this project should never become the number’ because the intent is about ‘trying to add flavor and nuance to the conversation in a meaningful way’. Stuart Maxwell of Scholarly IQ also made a really astute observation that “all assessment is in some way subjective - impact is relative to how you compare yourself to other researchers in your field.” What especially appealed to me about this session was learning that NISO extends its remit to include the data quality performance of altmetrics aggregators themselves. 
Asking each aggregator to self-report a publicly available, annual accounting of how they comply with the Altmetrics Data Quality Code of Conduct will likely increase consistency, transparency and trust.\u0026#8221; SSP receptions \u0026amp; evening events, where mashed potato sundaes were a thing Yes, the sessions are great, but some of the really interesting sights, sounds and discussions occur at the evening events. It’s impossible for one person to cover all of them (or is it?), but our idea of a few memorable highlights from this year’s SSP are, in no particular order: \u0026#8220;Tuesday’s reception—bar conveniently located just steps from the Crossref booth meant lots of good traffic! The convivial atmosphere made it easy to ignore that we were all tantalizingly close to the glorious view just outside the hotel doors. Wednesday’s reception was a chance to meet all the folks who didn’t make it in Tuesday. Though it seems most of us were delayed arriving in Vancouver, it was well worth the trip and arriving to find a few hundred colleagues all enjoying happy hour is a fine way to start a meeting.\u0026#8221; \u0026#8220;HighWire’s reception at the Vancouver Rowing Club provided a lovely walk on the way there, a great band at the party and a shrimp tower almost (but not quite) too good looking to eat. The pouring rain on the walk back made for a memorable bonding experience.\u0026#8221; \u0026#8220;Wildebeest was the atmospheric site of the Silverchair reception and great chance to see a bit of downtown before enjoying some good cheese and fine company. At least two of us attending made plans to save the world through better metadata. Over sparkling rose wine no less.\u0026#8221; \u0026#8220;Dolphins and sea otters made merry in a pool outside the Sheridan Group reception at the Vancouver Aquarium, while we noshed and drank with the fishes inside. But the food rivalled the undersea sights. A very nice gentleman with an ice cream scoop filled a parfait glass with a perfectly round dollop of mashed potatoes and told me to help myself to toppings. Shut the front door! I got the works. Delicious creamy mashed (whipped) potatoes of a perfect consistency, a ladle full of warm brown gravy topped with a generous sprinkle of finely sliced green onions (scallions), and a healthy spray of large, crispy bacon pieces!! It looked like a sundae … that you eat with a fork!!\u0026#8221;\n\u0026#8220;The President’s reception was in the world’s largest hotel suite (approximately), with some very photogenic desserts and a lot of happy people who know that it’s well worth sacrificing some sleep for the event.\u0026#8221; Of course, the hotel bar in the evenings had some memorable discussions too but what happens in the bar stays in the bar, right? And we should probably all be grateful for the early last call … ’Til next year! ", "headings": ["SSP receptions \u0026amp; evening events, where mashed potato sundaes were a thing"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/https-and-wikipedia/", "title": "HTTPS and Wikipedia", "subtitle":"", "rank": 1, "lastmod": "2016-05-31", "lastmod_ts": 1464652800, "section": "Blog", "tags": [], "description": "This is a joint blog post with Dario Taraborelli, coming from WikiCite 2016.\nIn 2014 we were taking our first steps along the path that would lead us to Crossref Event Data. At this time I started looking into the DOI resolution logs to see if we could get any interesting information out of them. 
This project, which became Chronograph, showed which domains were driving traffic to Crossref DOIs.\nYou can read about the latest results from this analysis in the “Where do DOI Clicks Come From” blog post.\nHaving this data tells us, amongst other things:\nwhere people are using DOIs in unexpected places where people are using DOIs in unexpected ways where we knew people were using DOIs but the links are more popular than we realised ", "content": "This is a joint blog post with Dario Taraborelli, coming from WikiCite 2016.\nIn 2014 we were taking our first steps along the path that would lead us to Crossref Event Data. At this time I started looking into the DOI resolution logs to see if we could get any interesting information out of them. This project, which became Chronograph, showed which domains were driving traffic to Crossref DOIs.\nYou can read about the latest results from this analysis in the “Where do DOI Clicks Come From” blog post.\nHaving this data tells us, amongst other things:\nwhere people are using DOIs in unexpected places where people are using DOIs in unexpected ways where we knew people were using DOIs but the links are more popular than we realised By the time the ALM Workshop 2014 rolled around there was some preliminary data and we realised that Wikipedia came into the third category. There are lots of DOIs in Wikipedia and people click them!\nI met with Dario Taraborelli, head of research at the Wikimedia Foundation, and shared the data. Dario — who co-authored in 2010 the Altmetrics Manifesto — has been interested in understanding how scholarly citations are used in Wikipedia. Over the years, Wikipedia contributors have made extensive use of references to the scientific literature using DOIs, and by doing so they have created a resource that represents today in many ways the “front matter to all research”. There is growing interest in the community in understanding how DOIs are being used in Wikipedia and in non traditional scholarship.\nDuring our discussions the subject of Wikipedia’s gradual transition to HTTPS was raised: we anticipated that this change would affect our data gathering.\nChanges When you’re reading webpage and click on a link to another page, your web browser will usually tell the server of that second page the last page you were on. This forms the basis of trackers like Google Analytics.\nIn the days before HTTPS, the next site would know the full URL that you were previously on. With the change to HTTPS, this was reduced to just sending the domain name and not the full URL, or no data at all if you click from an HTTPS page to HTTP.\nDOI hyperlinks are just like any other hyperlink, and are mostly HTTP not HTTPS.\nUp until 2015, Wikipedia was served over HTTP, only switching to HTTPS when users were logged in or if they requested it. The Wikimedia Foundation started planning to move to HTTPS and we knew that if they did that, and continued to use HTTP DOIs then we would lose valuable research data.\nA Plan We decided that the best course of action was to try and change the DOIs in Wikipedia to use HTTPS. Simple, right?\nAfter some further research, Dario posted a proposal on how to mitigate the impact of the HTTPS rollout, to make sure that Wikipedia can still signal its importance as a traffic source, while preserving the privacy of its users. Discussion followed and the conclusion was to change the format of every single DOI on Wikipedia, which fortunately could be done without having to edit millions of pages. 
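To make the point about domains concrete: everything an analysis like Chronograph can learn from a referral comes from the Referer value the browser sends, so the heart of the job is extracting and counting referring domains. A small, purely illustrative sketch in Python; the sample referrer values below are invented for the example, and the real resolution logs are not public:

```python
from collections import Counter
from urllib.parse import urlparse

# Invented sample of Referer values as a DOI resolver might see them:
# a full URL (HTTP-era behaviour), a bare origin (HTTPS with a
# domain-only referrer policy), and an empty value (referrer withheld).
referrers = [
    "http://en.wikipedia.org/wiki/Some_Article",
    "https://en.wikipedia.org/",
    "https://www.google.com/",
    "",
]

counts = Counter()
for ref in referrers:
    domain = urlparse(ref).netloc or "unknown"
    # Fold subdomains together so en.wikipedia.org and de.wikipedia.org
    # both count toward wikipedia.org, as Chronograph does.
    if domain != "unknown":
        domain = ".".join(domain.split(".")[-2:])
    counts[domain] += 1

for domain, n in counts.most_common():
    print(domain, n)
```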
You can read the full story in this post from a year ago.\nThe result of this effort was that well in advance of the HTTPS switchover, the DOI links were ready to continue reporting referral data.\nThe Switch In June 2015 the Wikimedia foundation made the announcement that they were finalising the switch, and that within a few weeks all traffic would be HTTPS.\nWe held our breath. Would it work? Would we lose all referral data from Wikipedia sites? In February 2016 the last piece of the puzzle fell into place as Wikipedia gained a ‘meta referrer’ tag to explicitly specify how they would like referrers to be sent: a detailed report on the effect of this change is coming up on the Wikimedia Foundation’s blog.\nThe results As detailed in the last blog post the traffic that we measured coming from Wikipedia doesn’t seem to have slowed down during 2015:\nI’d call that a success! Over the period covered in the graph, Wikipedia remained prominent as a non-publisher referral of traffic to DOIs.\nLooking at the balance of HTTP vs HTTPS traffic coming from wikipedia.org, the switchover was dramatic:\nThank you to Dario Taraborelli, Nemo (Federico Leva), Aaron Halfaker, Alex Stinson and everyone who put in this effort.\nI’ll leave the last word to Dario:\nIt’s great to see this data. It shows that the switchover happened successfully, which better protects the privacy of our users whilst still reporting the fact that Wikipedia is a prominent source of traffic. This is important validation of the increasing role that Wikipedia plays in the education and scientific community.\n", "headings": ["Changes","A Plan","The Switch","The results"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/where-do-doi-clicks-come-from/", "title": "Where do DOI clicks come from?", "subtitle":"", "rank": 1, "lastmod": "2016-05-19", "lastmod_ts": 1463616000, "section": "Blog", "tags": ["chronograph"], "description": "As part of our Event Data work we’ve been investigating where DOI resolutions come from. A resolution could be someone clicking a DOI hyperlink, or a search engine spider gathering data or a publisher’s system performing its duties. Our server logs tell us every time a DOI was resolved and, if it was by someone using a web browser, which website they were on when they clicked the DOI. This is called a referral.", "content": "As part of our Event Data work we’ve been investigating where DOI resolutions come from. A resolution could be someone clicking a DOI hyperlink, or a search engine spider gathering data or a publisher’s system performing its duties. Our server logs tell us every time a DOI was resolved and, if it was by someone using a web browser, which website they were on when they clicked the DOI. This is called a referral.\nThis information is interesting because it shows not only where DOI hyperlinks are found across the web, but also when they are actually followed. This data allows us a glimpse into scholarly citation beyond references in traditional literature.\nLast year Crossref Labs announced Chronograph, an experimental system for browsing some of this data. We’re working toward a new version, but in the meantime I’d like to share the results for 2015 and some of 2016. We have filtered out domains that belong to Crossref member publishers to highlight citations beyond traditional publications.\nTop 10 DOI referrals from websites in 2015 This chart shows the top 10 referring non-primary-publisher domains of DOIs per month. Note that if browsers don’t send the referrer (e.g. 
from an HTTPS page), we don’t get to find out. Because the top 10 can be different month to month, the total number of domains mentioned can be more than 10. Subdomains are combined, which means that, for example, the wikipedia.org entry covers all Wikipedia languages. This chart covers all of 2015 and the first two months of 2016.\nThe top 10 referring domains for the period:\nwebofknowledge.com baidu.com serialssolutions.com scopus.com exlibrisgroup.com wikipedia.org google.com uni-trier.de ebsco.com google.co.uk It’s not surprising to see some of these domains here: for example serialssolutions.com and exlibrisgroup.com are effectively proxies for link resolvers, Baidu and Google are incredibly popular search engines which would show up anywhere. But it is exciting to see Wikipedia ranked amongst these. For more detail look out for the new Chronograph.\nHTTP vs HTTPS in 2015 We’ve also seen a steady increase in HTTPS referral traffic, i.e. people clicking on DOIs from sites that are using HTTPS. While it is still dwarfed by HTTP, there was a steady uptick throughout 2015.\nThis chart shows HTTP vs HTTPS referrals per day, which shows up the weekly spikes. It doesn’t include resolutions where we don’t know the referrer.\nIncreasing numbers of people are moving to HTTPS for reasons of security, privacy and protection from tampering. Google has announced plans to take HTTPS into account when ranking search results. Wikipedia has moved exclusively to HTTPS, and I’ll be telling the story of how Crossref and Wikipedia collaborated in an upcoming blog post.\nChronograph Another version of Chronograph will be available soon. It will contain full data for all non-primary-publisher referring domains. Stay tuned!\n", "headings": ["Top 10 DOI referrals from websites in 2015","HTTP vs HTTPS in 2015","Chronograph"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linked-clinical-trials-are-here/", "title": "Clinical trial data and articles linked for the first time", "subtitle":"", "rank": 1, "lastmod": "2016-05-17", "lastmod_ts": 1463443200, "section": "Blog", "tags": [], "description": "It’s here. After years of hard work and with a huge cast of characters involved, I am delighted to announce that you will now be able to instantly link to all published articles related to an individual clinical trial through the Crossmark dialogue box. Linked Clinical Trials are here!\nIn practice, this means that anyone reading an article will be able to pull a list of both clinical trials relating to that article and all other articles related to those clinical trials – be it the protocol, statistical analysis plan, results articles or others – all at the click of a button.", "content": "It’s here. After years of hard work and with a huge cast of characters involved, I am delighted to announce that you will now be able to instantly link to all published articles related to an individual clinical trial through the Crossmark dialogue box. Linked Clinical Trials are here!\nIn practice, this means that anyone reading an article will be able to pull a list of both clinical trials relating to that article and all other articles related to those clinical trials – be it the protocol, statistical analysis plan, results articles or others – all at the click of a button. Linked Clinical Trials interface\nNow I’m sure you’ll agree that this sounds nifty. It’s definitely a ‘nice-to-have’. But why was it worth all the effort? 
Well, simply put: “to move a mountain, you begin by carrying away the small stones”.\nScience communication in its current form is an anachronism, or at the very least somewhat redundant.\nYou may have read about the ‘crisis in reproducibility’. Good science, at its heart, should be testable, falsifiable and reproducible, but an historical over-emphasis on results has led to a huge number of problems that seriously undermine the integrity of the scientific literature.\nIssues such as publication bias, selective reporting of outcome and analyses, hypothesising after the results are known (HARKing) and p-hacking are widespread, and can seriously distort the literature base (unless anyone seriously considers Nicholas Cage to be causally related to people drowning in swimming pools).\nThis is, of course, nothing new. Calls for prospective registration of clinical trials date back to the 1980s and it is now becoming increasingly commonplace, recognising that the quality of research lies in the questions it asks and the methods it uses, not the results observed.\nUptake of trial registration year-on-year since 2000\nBuilding on this, a number of journals and funders – starting with BioMed Central’s Trials over 10 years ago – have also pushed for the prospective publication of a study’s protocol and, more recently, statistical analysis plan. The idea that null and non-confirmatory results have value and should be published has also gained increasing support.\nOver the last ten years, there has been a general trend towards increasing transparency. So what is the problem? Well, to borrow an analogy from Jeremy Grimshaw, co-Editor-in-Chief of Trials – we’ve gone from Miró to Pollock.\nAlthough a results paper may reference a published study protocol, there is nothing to link that report to subsequent published articles; and no link from the protocol itself to the results article.\nA single clinical trial can result in multiple publications: the study protocol and traditional results paper or papers, as well as commentaries, secondary analyses and, eventually, systematic reviews, among others, many published in different journals, years apart. This situation is further complicated by an ever-growing body of literature.\nResearchers need access to all of these articles if they are to reliably evaluate bias or selective reporting in a research object, but – as any systematic reviewer can tell you – actually finding them all is like looking for a needle in a haystack. When you don’t know how many needles there are. With the haystack still growing.\nThat’s where we come in. The advent of trial registration means that there is a unique identifier associated with every clinical trial, at the study-level, rather than the article level. Building on this, the Linked Clinical Trials project set out to connect all articles relating to an individual trial together using its trial registration number (TRN).\nBy adapting the existing Crossmark standard, we have captured additional metadata about an article, namely the TRN and the trial registry, with this information then associated with the article’s DOI on publication. This means that you will be able to pull all articles related to an individual clinical trial from the Crossmark dialogue box on any relevant article. This obviously has huge implications for the way science is reported and used. 
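Because the thread is keyed on the TRN, it can also be pulled programmatically from the REST API mentioned later in this post. A minimal sketch, assuming the REST API offers a clinical-trial-number filter on the works route; both the filter name and the example trial identifier are assumptions made for illustration, so check the current API documentation before relying on them:

```python
import requests

def works_for_trial(trn):
    """List registered works whose metadata carries a given trial number.

    Assumes a clinical-trial-number works filter; the TRN below is a
    made-up ClinicalTrials.gov-style identifier, not a real trial.
    """
    msg = requests.get(
        "https://api.crossref.org/v1/works",
        params={"filter": "clinical-trial-number:" + trn, "rows": 50},
        timeout=10,
    ).json()["message"]

    for work in msg.get("items", []):
        title = work.get("title") or ["(no title)"]
        print(work["DOI"], "-", title[0])

works_for_trial("NCT00000000")  # hypothetical trial registration number
```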
By quickly and easily linking to related published articles, it will enable editors, reviewers and researchers to evaluate any selective reporting in the study, and help to provide far greater context for the results.\nAs all the metadata will be open access (CC0), with no copyright, it will also be possible to access this article ‘thread’ through the Crossref Metadata Search, or independently through an application programming interface (API). This provides a platform for others to build on, with many already looking to take the next step, such as Ben Goldacre’s new Open Trials initiative.\nHowever, in order for this to work, we must capture as many articles and trials as possible to create a truly comprehensive thread of publications. We currently have data from the NIHR Libraries, PLoS and, of course, BioMed Central, but need more publishers and journals to join us in depositing clinical trial metadata. After all, without metadata, this is all merely wishful thinking.\nLet’s hope we’re the pebble that starts the landslide.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/daniel-shanahan/", "title": "Daniel Shanahan", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/members-will-soon-be-able-to-assign-crossref-dois-to-preprints/", "title": "Members will soon be able to assign Crossref DOIs to preprints", "subtitle":"", "rank": 1, "lastmod": "2016-05-05", "lastmod_ts": 1462406400, "section": "Blog", "tags": [], "description": "TL;DR By August 2016, Crossref will enable its members to assign Crossref DOIs to preprints. Preprint DOIs will be assigned by the Crossref member responsible for the preprint and that DOI will be different from the DOI assigned by the publisher to the accepted manuscript and version of record. Crossref’s display guidelines, tools and APIs will be modified in order to enable researchers to easily identify and link to the best available version of a document (BAV). We are doing this in order to support the changing publishing models of our members and in order to clarify the scholarly citation record.\nBackground", "content": "TL;DR By August 2016, Crossref will enable its members to assign Crossref DOIs to preprints. Preprint DOIs will be assigned by the Crossref member responsible for the preprint and that DOI will be different from the DOI assigned by the publisher to the accepted manuscript and version of record. Crossref’s display guidelines, tools and APIs will be modified in order to enable researchers to easily identify and link to the best available version of a document (BAV). We are doing this in order to support the changing publishing models of our members and in order to clarify the scholarly citation record.\nBackground Why is this news? Well, to understand that you need to know a little Crossref history.\n(cue music and fade to sepia) When Crossref was founded, one of its major goals was to clarify the scholarly record by uniquely identifying formally published scholarly content on the web so that it could be cited precisely. At the time, our members had two primary concerns:\nThat a Crossref DOI should point to one intellectually discrete scholarly document. 
That is, they did not want one Crossref DOI to be assigned to two documents that appeared largely similar, but which might vary in intellectually significant ways.\nThat two DOIs should not point to the same intellectually discrete document. They wanted it to be easy for all to tell when the same discrete intellectual content was cited.\nAs such, when Crossref was founded, we developed a complex set of rules that were colloquially known by our members as Crossref’s rules “prohibiting the assignment of DOIs to duplicative content.”\n(cue music, show wavy lines, return to color)\nWell… as we gained experience in assigning DOIs, many of these rules have been amended or discarded when it became apparent that they didn’t actually support common scholarly citation practice and/or otherwise muddied the scholarly citation record.\nFor example, sometimes a document will be re-published in a special issue or an anthology. Before the advent of the DOI, it was common citation practice to always cite a document in the context in which it was read. The context of the document could, after all, affect the interpretation or crediting of the work. But it would be impossible to support this common citation practice if we were to assign the same Crossref DOI to the article in both its original context and in its re-published form. Our current recommendation in these situations is to assign separate DOIs to content that is republished in another context.\nAnother example occurs when one copy of two otherwise identical documents has been annotated. For example, though the Handbook to the Birds of Australia by John Gould has its own Crossref DOI (http://0-doi-org.libus.csd.mu.edu/10.5962/bhl.title.8367), another copy of the same book has been hand-annotated by Charles Darwin and also has its own, different Crossref DOI (http://0-doi-org.libus.csd.mu.edu/10.5962/bhl.title.50403). Historians of science quite reasonably may want to refer to and cite the particular annotated copy of this historic document.\n[So much for not assigning two separate Crossref DOIs to identical documents.]\nFinally, we should note a far more common practice in our industry. Our members often make content available online with a Crossref DOI before they consider it to be formally published. This practice goes by a number of names, including “publish ahead of print,” “article in progress,” “article in press,” “online ahead of print,” “online first”, etc.\nBut in each case, the process is the same- the publisher is assigning a Crossref DOI to the document soon after it has been accepted for publication and this same Crossref DOI is carried over to the finally published article. Again, this practice just reflects that the “intellectual” content of the accepted manuscript should not change between the point of acceptance and the point of publication, so for the purposes of “citation” they are largely interchangeable.\n[So much for not assigning one Crossref DOI to two versions of the same document.]\nNow, in the above cases it also helps to clarify the scholarly record to specify that the respective Crossref DOIs of the original and the “duplicative” work are related, and we encourage our members to make these connections explicit when they can. 
Nonetheless, it is paramount in both cases to allow the “duplicative works” to be cited precisely and independently.\nWhich brings us back to preprints.\nThe case for preprints First we should define what was meant by preprints because even this commonly used term sometimes means different things to different communities. We have historically considered preprints to be any version of a manuscript that is intended for publication but that has not yet been submitted to a publisher for formal review. Note that this definition does not include “accepted manuscripts” which -as we noted above- often already have Crossref DOIs assigned to them soon after acceptance.\nCrossref members originally worried that, by assigning DOIs to preprints, we would end up muddying the scholarly record. They worried that the very presence of a Crossref DOI would be interpreted to mean that the content to which it had been applied had gone through a formal publishing process. And unlike the case with “accepted manuscripts”, the difference between intellectual content of a preprint and the final published version can sometimes be substantial. At the time, it seemed that the scholarly record would be clarified by prohibiting the assignment of DOIs to preprints.\nBut again, changes in the scholarly communication landscape have led us to -as the youngsters say- pivot.\nA Koan When is a preprint a preprint?\nCrossref has always been catholic in its definition of “publisher.” Many of our members do not consider “publishing” to be their primary mission. The OECD and World Bank are two obvious cases here. But our membership also includes government departments, universities and archives. In these latter cases they have traditionally assigned Crossref DOIs to things like internal reports, grey literature, working papers, etc. This activity was clearly within the original rules set out by Crossref. And this is where our koan comes into play- “when is a preprint a preprint?”\nIt is often difficult to predict when something might eventually be formally published. How do you a priori know that working paper will never be submitted for publication? After all, everything could potentially be submitted for publication (Sometimes it seems everything is.)\nThis is the dilemma that was faced by a few of our members. For example, Cold Spring Harbor Laboratory, which runs bioRxiv has been a Crossref member since 2000 and has assigned over 35,000 Crossref DOIs. They have been assiduous in trying to stick to Crossref’s rules about preprints. Furthermore, they have taken equal care to ensure that preprints in bioRxiv are labeled as such and linked to the final publication (via a Crossref journal DOI) when it is available. This takes a lot of work. But often bioRxiv simply has no way of telling when the authors of a working paper or report might suddenly decide to submit their work for publication. So they have found themselves occasionally and inadvertently violating Crossref’s rules on preprints because they had no way of predicting when something would magically transform from being an innocuous working paper into a fraught preprint.\nIt is a testament to bioRxiv that they have persevered. We have other members who face the same problem. They have not given up. They have not gone elsewhere for their DOIs.\nWhich brings us to our next point.\nNot All DOIs Have you noticed how often we use the phrase “Crossref DOIs?” Were you wondering if this was an annoying affectation or an example of a marketing department gone mad? It’s neither. 
It is an essential distinction that we make because Crossref is just one of several DOI registration agencies. Although all DOIs are “compatible” in the minimal sense that you can “resolve” them to a location on the web, that does not mean that all DOIs work identically. Different DOI registration agencies have different constituencies, different services, different governance models and different rules covering what their members can assign their respective DOIs to.\nThis was not the case when Crossref was founded and our rules were first drafted. At the time, Crossref was the only registration agency and, as such, the rule which prohibited the assignment of Crossref DOIs to preprints kinda worked. But it was unworkable in the longer term.\nQuite naturally, new DOI registration agencies have been established for different communities with different primary use-cases. While Crossref could have a rule prohibiting the assignment of Crossref DOIs to preprints, there was nothing stopping another registration agency from allowing (indeed, encouraging) its members to assign DOIs to preprints.\nSo the simple fact is that DOIs could be assigned to preprints regardless of Crossref’s old rules. By continuing to prohibit the practice at Crossref we were just making life for some of our existing members more difficult.\nAnd it has become clear that the situation would only get worse as more of our members started to roll out new publishing and business models.\nBusiness model neutral Crossref has always been business model neutral. We need to adapt and change to support our members’ business models, not the other way around.\nA number of our members are starting to adopt publishing workflows that are more fluid and public than established publishing models. These new workflows make much of the submission and review process open, which, in turn, often blurs the historically hard distinctions between a draft manuscript, a preprint, a revised proof, an accepted manuscript, the “final” published version, and subsequent corrections and updates. Whereas in classic publishing models a document went through a series of discrete state changes (some in public, many in private), new publishing workflows treat document versions as a continuum, most of which are made available publicly and which consequently may be cited at almost any point in the publishing process.\nIn short, Crossref’s members increasingly need the flexibility to assign DOIs at different points in the publishing lifecycle. Rather than enforce rules that enshrined an existing publishing or business model, we need to work with our members to establish and adopt new DOI assignment practices which support evolving publishing models whilst maintaining a clear citation record and which let researchers easily identify the best available version (BAV) of a document or research object.\nSo you see, not all of our motivations for this change in policy are opportunistic or prosaic. Underneath our gruff and flinty exterior is a soft, idealistic center. There are principles at work here as well.\nWhat next\nSo this isn’t just a matter of changing our rules and display guidelines. We also have to make some schema changes, and adjust our services and APIs to clearly distinguish between preprints and accepted manuscripts/versions of record. Additionally, we will be building tools to make it much easier for our members to link preprints to the final published article (and vice versa). 
Finally, we need to update our documentation to help our members take advantage of the new functionality. We expect that everything will be in place by the end of August, 2016, at which point you will see another announcement from us.\n", "headings": ["TL;DR","Background","The case for preprints","A Koan","Not All DOIs","Business model neutral "] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/brand/", "title": "Brand", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-brand-update-new-names-logos-guidelines-and-video/", "title": "Crossref Brand update: new names, logos, guidelines, + video", "subtitle":"", "rank": 1, "lastmod": "2016-04-29", "lastmod_ts": 1461888000, "section": "Blog", "tags": [], "description": "It can be a pain when companies rebrand as it usually requires some coordinated updating of wording and logos on websites, handouts, and slides. Nevermind changing habits and remembering to use the new names verbally in presentations.\nWhy bother? As our infrastructure and services expanded, we sometimes branded services with no reference to Crossref. As explained in our The Logo Has Landed post last November, this has led to confusion, and it was not scalable nor sustainable.", "content": "It can be a pain when companies rebrand as it usually requires some coordinated updating of wording and logos on websites, handouts, and slides. Nevermind changing habits and remembering to use the new names verbally in presentations.\nWhy bother? As our infrastructure and services expanded, we sometimes branded services with no reference to Crossref. As explained in our The Logo Has Landed post last November, this has led to confusion, and it was not scalable nor sustainable. With a cohesive approach to naming and branding, the benefits of changing to (some) new names and logos should help everyone. Our aim is to stem confusion and be in a much better position to provide clear messages and useful resources so that people don’t have to try hard to understand what Crossref enables them to do. So while it may be a bit of a pain short-term, it will be worth it!\nWhat are the new names? As a handy reference, here is a slide-shaped image giving an overview of our services with their new names:\nOverview of brand name changes, April 2016\nIt’s a lowercase ‘r’ in Crossref That’s right, you’ve spent fifteen years learning to capitalize the second R in Crossref, and now we’re asking you to lowercase it! Please say hello to and start to embrace the more natural and contemporary Crossref.\nReference logos from our new CDN via assets.crossref.org I’m hoping we can count on our community to update logos and names on your end, keeping consistent with new brand guidelines. And I hope we can make it as easy as possible to do: This Content Delivery Network (CDN) at assets.crossref.org allows you to reference logos using a snippet of code. Please do not copy/download the logos.\nThis set of brand guidelines for members.\nWe also have a new website in development which will put support and resources front and center of the user experience. 
More on that in the next month or two.\nBy using the snippets of code provided via our new CDN at assets.crossref.org, these kinds of manual updates should never be a problem in the future if the logo changes again (no plans anytime soon!).\nOf course, we don’t expect people to update to the new logos and names immediately; there is always a period of transition. Please let us know if we can help you to update your sites and materials in the coming weeks.\nAlso, check out the launch video, which presents five key Crossref brand messages:\n", "headings": ["Why bother?","What are the new names?","It’s a lowercase ‘r’ in Crossref","Reference logos from our new CDN via assets.crossref.org"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/getting-started-with-crossref-dois-courtesy-of-scholastica/", "title": "Getting Started with Crossref DOIs, courtesy of Scholastica", "subtitle":"", "rank": 1, "lastmod": "2016-04-25", "lastmod_ts": 1461542400, "section": "Blog", "tags": [], "description": "I had a great chat with Danielle Padula of Scholastica, a journals platform with an integrated peer-review process that was founded in 2011. We talked about how journals get started with Crossref, and she turned our conversation into a blog post that describes the steps to begin registering content and depositing metadata with us. Since the result is a really useful description of our new member on-boarding process, I want to share it with you here as well.", "content": "I had a great chat with Danielle Padula of Scholastica, a journals platform with an integrated peer-review process that was founded in 2011. We talked about how journals get started with Crossref, and she turned our conversation into a blog post that describes the steps to begin registering content and depositing metadata with us. Since the result is a really useful description of our new member on-boarding process, I want to share it with you here as well. As always, comments and questions are welcome here, at member@Crossref.org, and @CrossrefOrg. - Anna\nThe internet is in a constant state of change, with new content being added to the web by the minute and old content sometimes getting moved around. While the benefit of publishing scholarly outputs online is that it’s possible to update them at any moment, moving or modifying content can also …\nRead more at: https://blog.scholasticahq.com/post/getting-started-with-dois-at-your-journal-interview-with-anna-tolwinska-crossref/\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-event-data-early-preview-now-available/", "title": "Crossref Event Data: early preview now available", "subtitle":"", "rank": 1, "lastmod": "2016-04-18", "lastmod_ts": 1460937600, "section": "Blog", "tags": [], "description": " Test out the early preview of Event Data while we continue to develop it. Share your thoughts. And be warned: we may break a few eggs from time to time!\nChicken by anbileru adaleru from The Noun Project\nWant to discover which research works are being shared, liked and commented on? What about the number of times a scholarly item is referenced? Starting today, you can whet your appetite with an early preview of the forthcoming Crossref Event Data service. We invite you to start exploring the activity of DOIs as they permeate and interact with the world after publication.\n", "content": " Test out the early preview of Event Data while we continue to develop it. Share your thoughts.
And be warned: we may break a few eggs from time to time!\nChicken by anbileru adaleru from The Noun Project\nWant to discover which research works are being shared, liked and commented on? What about the number of times a scholarly item is referenced? Starting today, you can whet your appetite with an early preview of the forthcoming Crossref Event Data service. We invite you to start exploring the activity of DOIs as they permeate and interact with the world after publication.\nBut first, a bit of background Discussion around scholarly research increasingly occurs online after publication, for example on blogs, sharing services, social media, and wikis. These ‘events’ occur across the web on numerous platforms and are a critical part of the scholarly enterprise. We are developing an infrastructure service (Crossref Event Data) that collects, stores, and delivers raw data of the events occurring with Crossref DOIs. We will store the data in an open, auditable and portable form for the community to access. Publishers, platforms, funders, bibliometricians and service providers may benefit from access to this raw data, and it can be used to feed into research records or proprietary tools and services that offer aggregation and analysis. For more information, see our pilot blog post and description of potential use cases.\nCollaborative, transparent development Developers Martin Fenner (DataCite) and Joe Wass (Crossref) enjoy a tofu break\nLagotto, the software originally developed at PLOS, has been extended and improved in a joint effort between DataCite and Crossref. The two DOI Registration Agencies have partnered to envision, build and release the service. On the 13th of April, after a year of collaboration, we jointly released Lagotto 5.0. You can read about the collaboration on the DataCite blog post.\nCrossref and DataCite will continue to work closely together to develop Lagotto and the Event Data service. Although Crossref Event Data has mostly Crossref DOIs at launch, you will be able to find DataCite DOIs if they are cited in Crossref or Wikipedia.\nAll of the software that runs Event Data, including Lagotto, is developed in the open and is open source. Please refer to the Crossref Event Data Technical User Guide for full details.\nPreview the data This service is currently under development, with a full launch expected in the second half of 2016. Before it is launched, however, we invite you to take a look around and preview a subset of the data sources we plan to include. You may experience occasional hiccups while we continue building the service.\nAt this stage, we are working with data from three sources, although we will greatly expand the variety of platforms from which we collect data as development progresses. For now, you can view Mendeley bookmarks, Wikipedia references, and Crossref to DataCite links.\nMendeley Mendeley is a reference manager and academic social network for scholars. View the number of social bookmarks from scholars or groups on Mendeley.\nFor example, doi.org/10.1016/J.JIP.2016.03.007 has 8 readers on Mendeley to date.\nWikipedia Wikipedia is an online encyclopaedia, the Internet’s largest and most popular general reference work. View references to Crossref publications in Wikipedia articles in all languages.\nFor example, doi.org/10.3897/ZOOKEYS.565.7185 was referenced in the Russian Wikipedia page on Oxyscelio.\nCrossref to DataCite links DataCite is a global consortium that assigns DOIs to research data.
This enables people to find, share, use, and cite data. You can view all the data citations to DataCite research outputs found in Crossref publications (work is underway to make the links found in DataCite metadata available in Event Data). For example, Global, Regional, and National Fossil-Fuel CO2 Emissions (doi.org/10.3334/CDIAC/00001) dataset has been referenced by six Crossref publications to date. Software links are also included. Another example is PGOPHER (doi.org/10.5523/bris.huflggvpcuc1zvliqed497r2), a general purpose software for simulating and fitting rotational, vibrational and electronic spectra, which has been referenced by seven Crossref publications to date.\nReady to take a spin? You can explore the Crossref Event Data early preview by visiting http://0-eventdata-crossref-org.libus.csd.mu.edu and following the links to featured examples within our interim application for inspecting the data, technical documentation, and our Quick Start guide.\nShare your thoughts This service is currently under development and as such we welcome your thoughts and feedback on the data we are collecting currently from our three active sources. As a reminder, we expect to include the following sources as part of our full service launch later this year (pending confirmation):\n[table id=1 /]\nWe’re also on the lookout for new data sources to investigate for future inclusion in the Event Data service so please do get in touch with requests and recommendations. As we continue to build the service throughout 2016, we will be committing to a model of continuous development so that we can make new sources available as they are completed.\nWatch this blog for regular updates on our progress, or subscribe to receive new blog posts by email (just add your details to the upper right side of this page).\n", "headings": ["But first, a bit of background","Collaborative, transparent development ","Preview the data","Mendeley","Wikipedia ","Crossref to DataCite links","Ready to take a spin?","Share your thoughts"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-are-there-80-million-of/", "title": "What are there 80 million of?", "subtitle":"", "rank": 1, "lastmod": "2016-04-08", "lastmod_ts": 1460073600, "section": "Blog", "tags": ["80 million", "80000000", "Crossref80mil"], "description": "As of this week, there are 80,000,000 scholarly items registered with Crossref!\nBy the way, we update these interesting Crossref stats regularly and you can search the metadata.\nThe 80 millionth scholarly item is [drumroll…] Management Approaches in Beihagi History from the journal Oman Chapter of Arabian Journal of Business and Management Review, published by Al Manhal in the United Arab Emirates.\nThere have been loads of changes since Wiley registered \u0026ldquo;Designer selves: Construction of technologically mediated identity within graphical, multiuser virtual environments\u0026rdquo; with the DOI http://dx.", "content": "As of this week, there are 80,000,000 scholarly items registered with Crossref!\nBy the way, we update these interesting Crossref stats regularly and you can search the metadata.\nThe 80 millionth scholarly item is [drumroll…] Management Approaches in Beihagi History from the journal Oman Chapter of Arabian Journal of Business and Management Review, published by Al Manhal in the United Arab Emirates.\nThere have been loads of changes since Wiley registered \u0026ldquo;Designer selves: Construction of technologically mediated identity within graphical, multiuser virtual 
environments\u0026rdquo; with the DOI http://0-dx-doi-org.libus.csd.mu.edu/10.1002/(SICI)1097-4571(1999)50:10\u0026lt;855::AID-ASI3\u0026gt;3.0.CO;2-6), which happens to have been Crossref’s first official DOI (after many prototype deposits).\nIn the beginning, most of our new members came from the United States and Europe. Now, lots of our members and affiliates come from other parts of the world.\nEd Pentz was Crossref’s first (and only) employee in February 2000. Now it takes 30 of us to manage the 80 million records and over 5,300 participating organizations and to work on projects like Crossref Event Data, \u0026lsquo;early content registration\u0026rsquo; , and all the new stuff you’ll be hearing about later this year.\nMaybe in the context of social media services (e.g. Facebook users) 80,000,000 does not seem like such a big number. But 80,000,000 is an important milestone. Just think — There are also 80 million microbes in a 10 second kiss [Microbiome, 2014, 2:41, Kort et al].\nAnd after 80 million years of extinction events, we’re all still here! What else is 80 million? Tell us in a tweet using #Crossref80mil. There may be a prize! Crossref has 80 million registered content items\n", "headings": ["What else is 80 million? Tell us in a tweet using #Crossref80mil. There may be a prize!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/doi-foundation/", "title": "DOI Foundation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dr-norman-paskin/", "title": "Dr Norman Paskin", "subtitle":"", "rank": 1, "lastmod": "2016-04-06", "lastmod_ts": 1459900800, "section": "Blog", "tags": [], "description": "Dr Norman Paskin It was with great sadness and shock that I learned that Dr Norman Paskin had passed away unexpectedly on the 27th March. This is a big loss to the DOI, Crossref and digital information communities. Norman was the driving force behind the DOI System and was a key supporter and ally of Crossref from the start. Norman founded the International DOI Foundation in 1998 and ran it successfully until the end of 2015 when he moved to a strategic role as an Independent Board Member.", "content": "Dr Norman Paskin It was with great sadness and shock that I learned that Dr Norman Paskin had passed away unexpectedly on the 27th March. This is a big loss to the DOI, Crossref and digital information communities. Norman was the driving force behind the DOI System and was a key supporter and ally of Crossref from the start. Norman founded the International DOI Foundation in 1998 and ran it successfully until the end of 2015 when he moved to a strategic role as an Independent Board Member.\nNorman was an early proponent of the value of persistent digital identifiers paired with standardised metadata and laid the groundwork for the system and infrastructure that has made Crossref and eight other Registration Agencies so successful. 
Norman was also a key adviser and participant in many standards organizations and initiatives where he regularly provided key intellectual input to help improve digital communications.\nPersonally, it was a great pleasure to work with Norman over the last twenty years and I greatly appreciated his intelligence, humour, advice, and particularly his help and generous support when I relocated to Oxford.\nThe International DOI Foundation has posted a notice, and has created condolences@doi.org for people to send messages. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-wikipedia-library-a-partnership-of-wikipedia-and-publishers-to-enhance-research-and-discovery/", "title": "The Wikipedia Library: A Partnership of Wikipedia and Publishers to Enhance Research and Discovery", "subtitle":"", "rank": 1, "lastmod": "2016-04-04", "lastmod_ts": 1459728000, "section": "Blog", "tags": [], "description": "Back in 2014, Geoffrey Bilder blogged about the kick-off of an initiative between Crossref and Wikimedia to better integrate scholarly literature into the world’s largest knowledge space, Wikipedia. Since then, Crossref has been working to coordinate activities with Wikimedia: Joe Wass has worked with them to create a live stream of content being cited in Wikipedia; and we’re including Wikipedia in Event Data, a new service to launch later this year.", "content": "Back in 2014, Geoffrey Bilder blogged about the kick-off of an initiative between Crossref and Wikimedia to better integrate scholarly literature into the world’s largest knowledge space, Wikipedia. Since then, Crossref has been working to coordinate activities with Wikimedia: Joe Wass has worked with them to create a live stream of content being cited in Wikipedia; and we’re including Wikipedia in Event Data, a new service to launch later this year. In that time, we’ve also seen Wikipedia’s importance grow in terms of the volume of DOI referrals.\nAlex Stinson, Project Manager for the Wikipedia Library, and our guest blogger! This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license (Source: Myleen Hollero Photography)\nHow can we keep this momentum going and continue to improve the way we link Wikipedia articles with the formal literature? We invited Alex Stinson, a project manager at The Wikipedia Library (and one of our first guest bloggers), to explain more:\nWikipedia provides the most public gateway to academic and scholarly research. With millions of citations to academic as well as non-academic but reliable sources, like those produced by newspapers, its ecosystem of 5 million English Wikipedia articles and 35 million articles in hundreds of languages provides the first stop for researchers in both scholarly and informal research situations. The practice of “checking Wikipedia” has become ubiquitous in a number of fields; for example, Wikipedia is the most visited source of medical information online, even providing the first stop for many medical students and medical practitioners when looking for medical literature.\nThe Wikipedia Library program helps Wikipedia’s volunteer editors access and use the best sources in their research and citations. Through partnerships with over fifty leading publishers and aggregators, like JSTOR, Project Muse, Elsevier, Newspapers.com, Highbeam, Oxford University Press and others, we have been able to give over 3000 of our most prolific volunteers access to over 5500 accounts. These are clear, win-win relationships where Wikipedia editors get to use these databases to improve Wikipedia, while in turn linking to authoritative resources and enhancing their discovery. JSTOR has been working with us since 2012, providing over 500 accounts to our editors. Kristen Garlock at JSTOR writes: “We’re very happy to collaborate with the Wikipedia Library to provide JSTOR access to Wikipedia editors. Supporting the initiative to increase editor access to scholarly resources and improve the quality of information and sources on Wikipedia has the potential to help all Wikipedia readers. In addition to providing more discoverability for our institutional subscribers, introducing new audiences to the scholarship on JSTOR can help them discover access opportunities like our Register \u0026amp; Read program.”\nThere are strong signals that Wikipedia’s role in the citation ecosystem helps ensure the best materials reach the public through its over 400 million monthly readers: The latest estimates by Crossref show that Wikipedia has risen from the 8th most prolific referrer to DOIs to the 5th. Two of our access partners have found that around half of the referrals arriving from Wikipedia were able to authenticate into their subscription resources, suggesting that a large portion of our readers can take advantage of subscriptions provided by scholarly institutions. Wikipedia is highly influential in the open access ecosystem as well, with a recent study showing higher citation rates for OA materials than those behind a paywall. Altmetrics tools (such as Altmetric.com, ImpactStory or Plum Analytics) are recognizing Wikipedia’s importance by including Wikipedia citations in their impact metrics. Despite these advances, we think this is only the beginning of Wikipedia’s impact on the landscape of scholarly research and discovery. Wikipedia can become a highly integrated research platform within the broader research ecosystem, where the best scholarship is summarized and discoverable, where Wikipedia effectively becomes the front matter to all research.\nHowever, there are some clear barriers to fulfilling this vision. Currently, most citations on Wikipedia are stored as free text and not readily available in machine-readable formats; our community is working to fix this. Wikipedia also has major systematic gaps in topics where either we lack volunteer interest or Wikipedia reflects larger systemic biases within society or scholarship. We need the help of volunteers, experts, industry partners, and information technologists to grow Wikipedia’s collection of citations, especially around key missing areas, and to transform existing citations into structured formats. Wikidata, Wikipedia’s sister project which crowdsources structured metadata, offers an excellent opportunity for improving the impact of Wikipedia in research.
Having Wikipedia citations stored in this structured ecosystem, connecting metadata with semantic meaning, would allow the citations in Wikipedia to become the backbone for discovery tools which emphasize the hand-curated interrelationships between authoritative sources and the knowledge collected by Wikipedia and Wikidata editors.\nWe need more collaborators to realize the full vision of Wikipedia supporting research in the most effective ways:\nWe need help from publishers with subscription databases, to help us give our editors access to the databases through The Wikipedia Library’s access partnership program. These high-quality source materials allow our editors to expose that research in a number of languages and for millions of readers. We need help from the open access community, to figure out how to better support increased citation and strategic use of open access materials within Wikipedia and other Wikimedia projects. Our community has some ideas, but we need your input and collaboration. We need your expertise to build our structured metadata ecosystem, by helping Wikidata map and collect citation data. We need the larger research community to promote Wikipedia as a scholarly communications tool and make contributing to Wikipedia an important part of the social responsibility of experts. Wider citation of sources in Wikipedia ensures widespread discovery and dissemination of that research. If you think you can help, we invite you to contact us at wikipedialibrary@wikimedia.org or via Twitter @WikiLibrary. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/site-search/", "title": "Search", "subtitle":"", "rank": 1, "lastmod": "2016-03-28", "lastmod_ts": 1459197187, "section": "", "tags": [], "description": "Search the Crossref site", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/python-and-ruby-libraries-for-accessing-the-crossref-api/", "title": "Python and Ruby Libraries for accessing the Crossref API", "subtitle":"", "rank": 1, "lastmod": "2016-03-04", "lastmod_ts": 1457049600, "section": "Blog", "tags": [], "description": "I’m a co-founder with rOpenSci, a non-profit that focuses on making software to facilitate reproducible and open science. Back in 2013 we started to make an R client working with various Crossref web services. I was lucky enough to attend last year’s Crossref annual meeting in Boston, and gave one talk on details of the programmatic clients, and another higher level talk on text mining and use of metadata for research.", "content": "I’m a co-founder with rOpenSci, a non-profit that focuses on making software to facilitate reproducible and open science. Back in 2013 we started to make an R client working with various Crossref web services. I was lucky enough to attend last year’s Crossref annual meeting in Boston, and gave one talk on details of the programmatic clients, and another higher level talk on text mining and use of metadata for research.\nCrossref has a newish API encompassing works, journals, members, funders and more (check out the API docs), as well as a few other services. Essential to making the Crossref APIs easily accessible—and facilitating easy tool/app creation and exploration—are programmatic clients for popular languages. 
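Before getting to the clients themselves, here is a minimal sketch of the kind of raw call they wrap: a single free-text query against the works route of the REST API, using only the third-party requests library. It is a sketch rather than a recommendation, and the printed values are illustrative.

import requests

# One raw call to the works route: free-text query, five rows back.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"query": "ecology", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["message"]

print(message["total-results"])                      # total matching works
print([item["DOI"] for item in message["items"]])    # DOIs of the five returned works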
I’ve maintained an R client for a while now, and have been working on Python and Ruby clients for the past four months or so.\nThe R client falls squarely into the analytics/research use cases, while the Python and Ruby clients are ideal for general data access and use in web applications (the Javascript library below as well).\nI’ve strived to make each client in idiomatic fashion according to the language. Due to this fact, there is not generally correspondence between the different clients with respect to data outputs. However, I’ve tried to make method names similar across Ruby and Python; although the R client is quite a bit older, so method names differ from the other clients and I’m resistant to changing them so as not to break current users’ projects. In addition, R users are likely to want a data.frame (i.e., table) of results, so we give back that - whereas with Python and Ruby we give back dictionaries and hashes, respectively.\nCrossref clients Python: Source: https://github.com/sckott/habanero Pypi: https://pypi.python.org/pypi/habanero Ruby: Source: https://github.com/sckott/serrano Rubygems: https://rubygems.org/gems/serrano serrano also comes with a command line tool of the same name that’s installed when you install serrano (examples below) R: Source: https://github.com/ropensci/rcrossref CRAN: https://cran.rstudio.com/web/packages/rcrossref/ Javascript: Source: https://github.com/scienceai/crossref NPM: https://www.npmjs.com/package/crossref I’ll cover the Python, Ruby, and R libraries below.\nInstallation Python\non the command line\npip install habanero Ruby\non the command line\ngem install serrano R\nin an R session\ninstall.packages(\"rcrossref\") Examples Output is indicated by the syntax #\u0026gt; in all examples below.\nPython\nin a Python REPL (e.g. iPython)\nImport the Crossref module from within habanero, and initialize a client\nfrom habanero import Crossref cr = Crossref() Query for the phrase “ecology”\nx = cr.works(query = \"ecology\", limit = 5) Index to various parts of the output\nx['message']['total-results'] #\u0026gt; 276188 Extract similar data items from each result. The records are in the “items” slot\n[ z['DOI'] for z in x['message']['items'] ] #\u0026gt; [u'10.1002/(issn)1939-9170', #\u0026gt; u'10.4996/fireecology', #\u0026gt; u'10.5402/ecology', #\u0026gt; u'10.1155/8641', #\u0026gt; u'10.1111/(issn)1439-0485'] In habanero for some methods we require you to instantiate a client.\nYou can set a base URL and API key. This is a future looking feature\nas Crossref API does not require an API key.\nNote: I’ve tried to make sure habanero is Python 2 and 3 compatible. 
Hopefully you’ll find that’s true.\nRuby\nin a Ruby repl (e.g., pry), load serrano\nrequire 'serrano' Query for “peerj” on the journals route\nx = Serrano.journals(query: \"peerj\") Collect just ISSN’s from each result\nx['message']['items'].collect { |z| z['ISSN'] } #\u0026gt; =\u0026gt; [[\"2376-5992\"], [\"2167-8359\"]] Shell\nThe serrano command line tool is quite powerful if you are used to doing things there.\nHere, search for one article; summary data is shown.\nserrano works 10.1371/journal.pone.0033693 #\u0026gt; DOI: 10.1371/journal.pone.0033693 #\u0026gt; type: journal-article #\u0026gt; title: Methylphenidate Exposure Induces Dopamine Neuron Loss and Activation of Microglia in the Basal Ganglia of Mice There’s also a -json flag to give back JSON data, which can be parsed with the command line tool jq.\nserrano works --filter=has_full_text:true --json --limit=5 | jq '.message.items[].link[].URL' #\u0026gt; \"http://0-api-wiley-com.libus.csd.mu.edu/onlinelibrary/tdm/v1/articles/10.1002%2F9781119208082.ch9\" #\u0026gt; \"http://0-api-wiley-com.libus.csd.mu.edu/onlinelibrary/tdm/v1/articles/10.1002%2F9781119208082.index\" #\u0026gt; \"http://0-api-wiley-com.libus.csd.mu.edu/onlinelibrary/tdm/v1/articles/10.1002%2F9781119208082.ch11\" #\u0026gt; \"http://0-api-wiley-com.libus.csd.mu.edu/onlinelibrary/tdm/v1/articles/10.1002%2F9781119208082.ch15\" #\u0026gt; \"http://0-api-wiley-com.libus.csd.mu.edu/onlinelibrary/tdm/v1/articles/10.1002%2F9781119208082.ch4\" R\nIn an R session, load rcrossref\nlibrary(\"rcrossref\") Search the works route for the phrase “science”\nres \u0026lt;- cr_works(query = \"science\", limit = 5) #\u0026gt; $meta #\u0026gt; total_results search_terms start_index items_per_page #\u0026gt; 1 4333827 science 0 5 #\u0026gt; #\u0026gt; $data #\u0026gt; Source: local data frame [5 x 23] #\u0026gt; #\u0026gt; alternative.id container.title created deposited DOI funder indexed #\u0026gt; (chr) (chr) (chr) (chr) (chr) (chr) (chr) #\u0026gt; 1 2013-11-21 2013-11-21 10.1126/science \u0026lt;NULL\u0026gt; 2015-12-27 #\u0026gt; 2 Science Askew 2004-11-26 2013-12-16 10.1887/0750307145/b426c18 \u0026lt;NULL\u0026gt; 2015-12-24 #\u0026gt; 3 2006-04-10 2010-07-30 10.1002/(issn)1557-6833 \u0026lt;NULL\u0026gt; 2015-12-25 #\u0026gt; 4 2013-08-27 2013-08-27 10.1002/(issn)1469-896x \u0026lt;NULL\u0026gt; 2015-12-27 #\u0026gt; 5 2013-12-19 2013-12-19 10.5152/bs. 
\u0026lt;NULL\u0026gt; 2015-12-28 #\u0026gt; Variables not shown: ISBN (chr), ISSN (chr), issued (chr), link (chr), member (chr), prefix (chr), publisher #\u0026gt; (chr), reference.count (chr), score (chr), source (chr), subject (chr), title (chr), type (chr), URL #\u0026gt; (chr), assertion (chr), author (chr) #\u0026gt; #\u0026gt; $facets #\u0026gt; NULL Index through to get the DOIs\nres$data$DOI #\u0026gt; [1] \"10.1126/science\" \"10.1887/0750307145/b426c18\" \"10.1002/(issn)1557-6833\" #\u0026gt; [4] \"10.1002/(issn)1469-896x\" \"10.5152/bs.\" rcrossref also has faster versions of most functions with an underscore at the end (_) which only do the http request and give back json (e.g., cr_works_())\nComparison of Crossref Client Methods After installation and loading the libraries above, the below methods are available:\nAPI route | Python | Ruby | R\nworks | cr.works() | Serrano.works() | cr_works()\nmembers | cr.members() | Serrano.members() | cr_members()\nfunders | cr.funders() | Serrano.funders() | cr_funders()\ntypes | cr.types() | Serrano.types() | cr_types()\nlicenses | cr.licenses() | Serrano.licenses() | cr_licenses()\njournals | cr.journals() | Serrano.journals() | cr_journals()\nregistration agency | cr.registration_agency() | Serrano.registration_agency() | cr_agency()\nrandom DOIs | cr.random_dois() | Serrano.random_dois() | cr_r()\nOther Crossref Services\nService | Python | Ruby | R\ncontent negotiation | cn.content_negotiation() [1] | Serrano.content_negotiation() | cr_cn()\nCSL styles | cn.csl_styles() [1] | Serrano.csl_styles() | get_styles()\ncitation count | counts.citation_count() [2] | Serrano.citation_count() | cr_citation_count()\n[1] from habanero import cn [2] from habanero import counts\nFeatures These are supported in all 3 libraries:\nFilters (see below)\nDeep paging (see below)\nPagination\nVerbose curl output\nFilters Filters (see API docs for details) are a powerful way to get closer to exactly what you want in your queries. In the Crossref API filters are passed as query parameters, and are comma-separated like filter=has-orcid:true,is-update:true. In the client libraries, filters are passed in idiomatic fashion according to the language.\nPython\nfrom habanero import Crossref cr = Crossref() cr.works(filter = {'award_number': 'CBET-0756451', 'award_funder': '10.13039/100000001'}) Ruby\nrequire 'serrano' Serrano.works(filter: {award_number: 'CBET-0756451', award_funder: '10.13039/100000001'}) R\nlibrary(\"rcrossref\") cr_works(filter=c(award_number=TRUE, award_funder='10.13039/100000001')) Note how syntax is quite similar among languages, though keys don’t have to be quoted in Ruby and R, and in R you pass in a vector or list instead of a hash as in the other two.\nAll 3 clients have helper functions to show you what filters are available and what the options are for each filter.\nAction | Python | Ruby | R\nFilter names | filters.filter_names [3] | Serrano::Filters.names | filter_names()\nFilter details | filters.filter_details [3] | Serrano::Filters.filters | filter_details()\n[3] from habanero import filters\nDeep paging Sometimes you want a lot of data. The Crossref API has parameters for paging (see rows and offset), but large values of either can lead to long response times and potentially timeouts (i.e., request failure). The API has a deep paging feature that can be used when large data volumes are desired. This is made possible via Solr’s cursor feature (e.g., blog post on it). Here’s a run down of how to use it:\ncursor: each method in each client library that allows deep paging has a cursor parameter that if you set to * will tell the Crossref API you want deep paging.
cursor_max: for boring reasons we need feedback from the user about when to stop; since each request comes back with a cursor value that we can make the next request with, an additional parameter cursor_max is used to indicate the number of results you want back.\nlimit: when not using deep paging, this parameter determines the number of results to get back. However, when deep paging, this parameter sets the chunk size (note that the max value for this parameter is 1000).\nFor example, cursor = \"*\" states that you want deep paging, cursor_max states the maximum results you want back, and limit determines how many results per request to fetch.\nPython\nfrom habanero import Crossref cr = Crossref() cr.works(query = \"widget\", cursor = \"*\", cursor_max = 500) Ruby\nrequire 'serrano' Serrano.works(query: \"widget\", cursor: \"*\", cursor_max: 500) R\nlibrary(\"rcrossref\") cr_works(query = \"widget\", cursor = \"*\", cursor_max = 500) Text mining clients Just a quick note that I’ve begun a few text-mining clients for Python and Ruby, focused on using the low level clients discussed above.\nPython: https://github.com/sckott/pyminer Ruby: https://github.com/sckott/textminer Do try them out!\n", "headings": ["Crossref clients","Installation","Examples","Comparison of Crossref Client Methods","Other Crossref Services","Features","Filters","Deep paging","Text mining clients"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/scott-chamberlain/", "title": "Scott Chamberlain", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/accepted-manuscripts/", "title": "Accepted Manuscripts", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/community-responses-to-our-proposal-for-early-content-registration/", "title": "Community responses to our proposal for early content registration", "subtitle":"", "rank": 1, "lastmod": "2016-03-01", "lastmod_ts": 1456790400, "section": "Blog", "tags": [], "description": "TL;DR: We will proceed with implementing the proposed support for registering content before online availability. Adopting the workflow will be optional and will involve no extra fees.\nBackground At the end of January, Crossref issued a “request for community comment” on a proposed new process to support the registration of content including DOIs before online availability. We promised that we would summarize the results of the survey once we had received and analyzed all the responses.", "content": "TL;DR: We will proceed with implementing the proposed support for registering content before online availability. Adopting the workflow will be optional and will involve no extra fees.\nBackground At the end of January, Crossref issued a “request for community comment” on a proposed new process to support the registration of content including DOIs before online availability.
We promised that we would summarize the results of the survey once we had received and analyzed all the responses.\nSupport for Crossref implementing the proposed new workflow was overwhelming. Of the 104 responses, 90 were positive, 7 were neutral and 7 were negative. As such we will proceed to make the necessary changes to better support registering content before online availability. We aim to enable this functionality in the second half of 2016.\nWe received survey responses varying in length from one or two sentences to multiple pages. A lot of the responses also interspersed questions and observations about entirely different issues that were of interest to respondents. As such, it has taken a while for us to analyze the results. We also found it was pretty much impossible for us to tabulate a summary of the responses to the direct questions. Instead we’ll summarize the responses at a high level and then drill down into some of the nuances in the answers and issues that were raised from the responses.\nThe positive responses By “positive” we mean the respondent understood the issues we were trying to address and thought what we were proposing was a reasonable way to address the problems. Here are a few (anonymized) excerpts from the responses:\n“[This] is very timely as we have been made aware of changes to manuscript deposit requirements for UK authors. Authors who partake in the REF system will have to deposit articles at their manuscript stage before publication. We need to set an embargo on the articles so that they only become discoverable at some point after the publication date. Ideally we would like this to happen with all articles regardless of where they are from as authors will put their own work up on open access sites.”\n“Your proposal and the associated workflow look good to us and will help with our media embargo timelines, as well as our authors’ institutional requirements.”\n“The workflows and solutions seem reasonable … The temporary landing page seems like a sustainable technical solution. Hosting by Crossref is key to this – there is no way that all publishers would otherwise take on maintaining temporary pages. And having a standard display for metadata consistency is crucial too.”\n“Early assignment and recording of DOIs from the point of acceptance forms a key step in [the university’s] proposed ‘Submit-accept-deposit’ workflow. We welcome the proposal by Crossref to enable early assignment of DOIs for publications.”\nNote that a positive response did not mean the respondents thought the problems necessarily applied to them or that they would necessarily be implementing the changes - just that what we were proposing seemed sound for those who needed to address the issues.\n“While not directly relevant to our business the proposal seems aimed to protect the integrity of DOIs and Crossref’s role and that is not a bad thing.”\n“I would consider it an irresponsible use of the system on the part of a publisher to circulate dois that don’t (yet) work. This is bound to lead to frustration with users encountering errors. However I appreciate that this situation may arise in some workflows and therefore your proposals to implement temporary landing pages make sense.”\n“I was not aware of these issues, but think that your solutions seem feasible. We are a small journal and generally don’t add doi’s or publish until the article is complete (i.e., we don’t post anything that’s just accepted - only finalized). 
So we would be unlikely to update our workflow.”\nAlso, though respondents might have been generally positive about the proposal - that didn’t always mean they were also sanguine about it. For example, several shared concerns about the potential costs of changing their workflows.\n“[we] would consider implementing this change into our workflow. Limiting factors would include the effort and additional cost to enable our paper management system vendor…”\n“My only comment is that the process needs to be streamlined as much as possible so small publishers without great technical capacity will not be burdened with twice the work or with additional expense. After reading through Crossref’s proposal, I believe you have taken such things into account and will implement an efficient and worthwhile system.”\n“The workflow makes sense as a solution to the problem you describe […] but will require extensive workflow changes on our end in order to implement. Speaking for a small publishing house I’m not sure it’s reasonable to expect this from us on any short term.”\nSeveral of the positive respondents also wondered about how we would handle particular edge cases (e.g. rescinding acceptances) and/or offered suggestions to improve the proposal. We will discuss these further at the end of this post.\nThe neutral responses The responses we categorised as “neutral” were generally too short to conclude much about. They consisted of one or two sentences that said something like “this doesn’t apply to me.” It wasn’t clear whether it didn’t apply to them because they didn’t have the problems we described or because they’d already solved the problems we described (e.g. by providing their own interim landing pages). They also didn’t comment on the applicability to other members or whether they thought the issues might eventually affect them.\nThe negative responses We categorised responses as “negative” when the member rejected that the issues we outlined were actual problems or they rejected the mechanisms we were proposing to address the problems.\n“…a formal letter of acceptance on a letter in PDF will be OK for authors. Why a DOI is better?”\n“…I am aware of funder and institutional requirements for authors to take action on acceptance of manuscripts for publication in journals but don’t think the time pressure is so high that it has to happen in short time between acceptance and published ahead of print online…”\n“Of all the accepted-but-not-yet-published papers in existence at any time, the number whose existence must be demonstrated to promotion and tenure review boards must be awfully small.”\nThere are a few common themes here. The first is that, historically, the industry has been content with acceptance letters as proof of publication and that it was relatively rare for authors to have to produce such proof.\nThe problem that has led us to propose support for a modified workflow is that now we have situations where all the researchers in a country require such letters on a regular basis - not just when they are up for promotion or tenure. This is the new reality faced by researchers and institutions who are subject to regular national evaluation schemes like the REF and ERA.\nOne of the negative respondents added:\n“This is very familiar territory. It’s definitely coming out of STEM.”\nIndeed, the initial pressure to support the earlier registration of DOIs is certainly falling on our members who focus on STEM publishing. 
Researchers in the STEM fields are generally under more pressure to publish articles frequently and they are primarily affected by emerging funder mandates. The relatively high research output in STEM fields combined with the need for regular compliance checks and regular evaluation schemes is creating an environment that requires more automated mechanisms to keep track of publications. Asking for and processing letters of acceptance in these situations just doesn’t scale.\nSome of the negative responses also questioned our assertions about the hazards of promulgating unregistered DOI-like strings and/or the problems associated with the delay between when content is made available online and when the content is registered.\n“I don’t buy the argument that people lose trust in DOIs in general because they once tried to resolve one and it didn’t lead to an article. By the same argument, URLs in general are similarly undermined.”\n“where authors ask me for their DOIs so they can accurately cite the paper in another publication or use it for grants and applications. I explain that it won’t work until the issue as a whole posts and I have never heard back about confusion or distrust of the system.”\nTo this all we can say is, we have the data. Next to typos, unregistered DOIs account for the second greatest category of failed resolutions on the Crossref system. Our help desk has to explain them to researchers constantly. We have promoted DOIs as being more robust, persistent identifiers than ordinary URLs. People are not surprised when URLs don’t work. They are surprised when DOIs don’t work. We’d like to keep it that way.\nWhat seems to be at the root of the few negative responses is that most assumed that Crossref was mandating that publishers change their workflow, even if they didn’t face any of the issues outlined in the proposal. There is very little that Crossref mandates as a condition of participation. This is by design. Our membership is just too diverse for us to have mandates that can be sensibly applied to all. Still, we should have made it clearer in the proposal that the proposed changes would not be mandated. We will certainly need to make this clearer when we roll out support for the new processes.\nOh yeah, one respondent called us out for using the phrase “advanced publication” instead of “advance publication”. For this we are truly sorry. The employee who made this mistake has been dragged out and shotted.\nIssues raised and questions asked Both the positive and negative respondents raised issues, asked questions and provided suggestions regarding the proposal. We will make sure that, when the proposal is implemented, we address all of these issues more clearly, but in the meantime, we thought it would be helpful if we answered some of them briefly here.\nQ: Would Crossref charge extra for the new workflow?\nA: No. We should have made this clear in the proposal. We should have also mentioned that, in the “Crossref-facilitated Early Registration” scenario, members will only be charged once they have replaced the “registered_content” metadata with metadata for the published item using one of the existing content schemas (e.g. article, book, confproc).\nQ: Would Crossref require that publishers adopt the new proposed workflows?\nA: No. But we will recommend them to members who need to address the issues outlined in the proposal. And in general, we will recommend that our members register DOIs as early in the process as practicable.\nQ: What does “acceptance” mean?
It was pointed out that there were lots of variations of “acceptance” including “acceptance pending revisions”, etc.\nA: We would expect that “contingent acceptance” does not constitute final acceptance and that in this case “acceptance” should mean that the publisher has a copy of the manuscript in which the author has made all of the changes asked of them.\nQ: Doesn’t “acceptance” work both ways? A researcher has to grant permission to publish to the publisher.\nA: This is a vital point: the publisher should only register content for which they have already secured the rights to publish.\nQ: Collecting and verifying the metadata associated with a paper is expensive and time-consuming. As such, some publishers only produce complete and robust metadata after a paper has been accepted. We face a Catch-22: if we deposit metadata immediately after acceptance, it will be sparse and unreliable. If we wait to collect and verify the metadata, then we risk violating some of the emerging mandates. How do we resolve this dilemma?\nA: This is clearly beyond our control, but we expect that those issuing the mandates will have to make some reasonable accommodations if they expect publishers to register content both early and with reasonably useful metadata.\nQ: How would publishers handle rescinded acceptances?\nA: Publishers can handle this the same way they handle retractions or withdrawals. Additionally, the registered record type and the “intent to publish” landing page will both support Crossmark for those members who use Crossmark to promulgate corrections to the literature. We will explore adding a new “acceptance rescinded” update type to the Crossmark schema.\nQ: The Crossref DOIs we generate contain embedded publication information such as volume and issue. We don’t know these details at acceptance, so how can we register DOIs early?\nA: Many of our members generate Crossref DOIs with embedded semantic information in them such as volume/issue, publication date or even author initials and title. After 16 years of experience, we have found that this tends to be a bad idea. Publication schedules slip. Metadata changes. We will soon be revising our best-practice guidelines on Crossref DOI generation to recommend against embedding such information into the DOI itself. Clearly, if you decide to assign Crossref DOIs at acceptance, you will need to adopt a DOI structure that accommodates this.\nQ: Our hosting provider manages DOI registrations for us. If we have to register DOIs earlier in the process, can we have one party (e.g. a manuscript tracking system vendor) register the initial “registered_content” metadata and then have a different party (e.g. hosting provider or typesetter) replace that record with the final metadata?\nA: Yes.\nQ: Will you be working with industry vendors to help them support this new workflow?\nA: Yes.\nQ: Will you support the pre-registration of DOIs in the deposit forms on the Crossref site?\nA: Yes.\nQ: If Crossref hosts the “intent to publish” landing page, how will publishers be able to account for visits to the page and incorporate that information into their metrics?\nA: While visitors to the Crossref-hosted page will not show up in the publisher’s own hosting platform logs, publishers will be able to easily see how many times their “intent to publish” landing page was accessed by looking at their standard Crossref DOI resolution logs.\nQ: Could the Crossref-hosted landing page also include the URL that the DOI will eventually be associated with?\nA: This is an interesting idea and was suggested by two separate respondents. The challenge will be in explaining to the user that the URL might or might not work. We are also concerned that this would reduce the incentive for publishers to replace the holding page in a timely manner. We’ll explore this option as we continue to work on implementation.\nQ: Would the Crossref-hosted landing page be open to indexing by Google and others? If so, wouldn’t this undermine articles under press embargoes?\nA: The idea behind the limited metadata required for registering content is that it allows the publisher to control the balance between discovery (needed to meet funder requirements) and discretion (needed to manage publicity). So yes, the Crossref-hosted landing pages would be open to indexing, but publishers can still control what gets indexed by withholding metadata as needed.\nQ: The table of required metadata elements for the “registered content” type does not include the author. How are such records supposed to be used as proof of acceptance if they do not include the author name?\nA: We made a mistake. The table should have included the contributor in the required element column. Update: We retract our retraction! We are trying to accommodate several different use cases for ‘early content registration’ and these different use cases often have contradictory metadata implications. So, for example, including the author is certainly important for monitoring mandate compliance. However, including the author might be problematic when the publisher is trying to manage publicity around an upcoming publication. Again, Crossref is not in a position to resolve this dilemma and we expect that those issuing the mandates will make some reasonable accommodations for publishers who need to manage publicity around publications. In short, “authors” will remain optional metadata.\nSummary and conclusions We were delighted with the response rate on the proposal. It is clear to us that a lot of the respondents really appreciated both being alerted to a set of issues that they were not yet aware of and that they valued the chance to comment on our proposed mechanisms for addressing said issues. We also learned some lessons on how to better structure any such future surveys in order to make them easier for us to summarise and respond to. The wide variety of responses and detailed descriptions of different workflows reconfirmed our sense that Crossref members vary widely in their working practices.
We need to continue to work directly with members and understand these different working practices so that we can provide appropriately flexible services to our membership and to the scholarly community in general.\nFinally, the feedback we received will be used by our product team and our communications \u0026amp; outreach teams to refine our rollout plans for registering content before online availability. We expect that we will roll out this functionality in the second half of 2016.\nThanks to those who responded to our RFC. Some of those responses included questions about other matters relating to Crossref. We have attempted to extract these and answer them directly, but if we have not yet answered one of your questions, please follow up with us at feedback@crossref.org\n", "headings": ["TL;DR:","Background","The positive responses","The neutral responses","The negative responses","Issues raised and questions asked","Summary and conclusions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/event-data-open-for-your-interpretation/", "title": "Event Data: open for your interpretation", "subtitle":"", "rank": 1, "lastmod": "2016-02-25", "lastmod_ts": 1456358400, "section": "Blog", "tags": [], "description": "What happens to a research work outside of the formal literature? That’s what Event Data will aim to answer when the service launches later this year. Following the successful DOI Event Tracker pilot in Spring 2014, development has been underway to build our new service, newly re-named Crossref Event Data. It’s an open data service that registers online activity (specifically, events) associated with Crossref metadata. Event Data will collect and store a record of any activity surrounding a research work from a defined set of web sources.", "content": "What happens to a research work outside of the formal literature? That’s what Event Data will aim to answer when the service launches later this year. Following the successful DOI Event Tracker pilot in Spring 2014, development has been underway to build our new service, newly re-named Crossref Event Data. It’s an open data service that registers online activity (specifically, events) associated with Crossref metadata. Event Data will collect and store a record of any activity surrounding a research work from a defined set of web sources. The data will be made available as part of our metadata search service or via our Metadata API and normalised across a diverse set of sources. Data will be open, audit-able and replicable.\nWe expect to include the following sources at the launch of the clearinghouse in Q3 (pending final confirmation):\n[table id=1 /]\nWhat could you achieve? Anyone interested in metrics and analytics will have direct and open access to a single collection of DOI activity data of events occurring outside of the formal literature. As Event Data records are time-stamped, you can be assured that the data you receive is both auditable and replicable. Collected and stored by Crossref in the one location, we invite researchers, publishers, funders and altmetrics providers to consider the possibilities Event Data offers to enrich and expand your work. With such a corpus of open, transferable and auditable raw data at your fingertips, what could you achieve? 
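As a rough illustration of what working with such a corpus might look like (the service has not yet launched, so the endpoint and field names below are assumptions rather than a published interface), a few lines of Python are enough to pull the events recorded against a single DOI:

import requests

# Assumed endpoint and query parameters - illustrative only, not a published interface.
EVENT_DATA_API = "https://api.eventdata.crossref.org/v1/events"

def events_for_doi(doi, rows=100):
    """Fetch the events recorded against one DOI (the query shape is an assumption)."""
    resp = requests.get(EVENT_DATA_API, params={"obj-id": doi, "rows": rows}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("message", {}).get("events", [])

for event in events_for_doi("10.5555/12345678"):  # placeholder DOI
    # Each event is expected to carry a source, a relation type and a timestamp,
    # which is what makes the data auditable and replicable.
    print(event.get("source_id"), event.get("relation_type_id"), event.get("occurred_at"))
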
General and altmetrics service providers Crossref Event Data is a centrally-managed resource, therefore as a third party vendor you will have the ability to collect real-time data from a central location to enrich, analyze, interpret and report via your own tools. Using our API, you will gain regular access to our collection of raw, auditable data to feed into your own tools and services ready for aggregation and analysis. Additionally, the optional benefit of an SLA with Crossref will ensure that your clients have access to a reliable and flexible source of event data.\nJournal editors Using the data collected in our service, as an editor you can attract authors by offering data on the audience’s research interest, track the full-scope of article dissemination and gain a better understanding of how the publications you manage compare to each other. By analysing the Event Data records, you can quickly find reviewers based on publication network analysis, identify new areas to grow author submissions and track the reach of submissions selected for publication. Funders As a Funder, you can use Event Data to isolate and track the dissemination and usage of the research you funded outside of the scholarly literature. As the data is portable, you can be assured that should a journal move, your ability to track its dissemination moves with it. Using the Event Data records collection, you can:\nEfficiently track progress of the research impact of grant awardees in an automated fashion, with the signals most relevant to your organization Develop measurements of research engagement at the article level which reflect your mission and current funding priorities Gain visibility into the potent success stories highlighting the impact of your work for your development campaigns Analyze trends of past and future funding programs More effectively pursue your funding strategy and manage your portfolio based on data-driven decision making. Publishers and publishing platforms By analyzing and interpreting the Event Data collection, as a publisher or content distributor you can use the records to undertake the following metric-lead analysis to help drive your business needs: Conduct more robust publication growth analysis across titles, subject areas, or all published literature Gain a balanced understanding of the engagement on your publications across subject areas, titles, or managing editors Enhance author services (personalization, content discovery, profile management, etc.) Focused and data-driven product development of tools and services to drive audience engagement Provide content distributors data on downstream reach of publications. Bibliometricians Event Data heavily supports Bibliometric research by facilitating the tracking of DOI-related research activity across different platforms and channels. As a Bibliometrician, use trusted raw data as the underlying data for your research, which you can easily obtain from Crossref in a single, normalized format across a variety of sources. Additionally, as Event Data data is replicable, portable and auditable, you will be assured of high quality results in your research projects.\nResearch institutions All of the stakeholders in your institution, from the research, development and marketing offices to the researchers themselves, will benefit from access to data about where and how your research is being discussed in mainstream and social media. 
As a research institution, Event Data can help you:\nTrack dissemination of publications (types of channels, rate of growth, etc.) by members of the institution Access up-to-date information on the research progress of faculty members, useful for tenure and promotion decisions View data on downstream impact of publications Roll up data for custom reporting of department’s research activities Stay tuned, testing begins soon! With development work on the MVP (Minimum Viable Product) scheduled to complete shortly, we will soon be releasing a small subset of data sources that are collecting event data as well as a testing environment for interested parties to explore a very preliminary version of the software as we continue to work towards implementation of the full Event Data clearinghouse release in Q3. Look out for our MVP announcement, with full technical specifications and confirmation of the selected initial pull and push sources, over the coming weeks. ", "headings": ["What happens to a research work outside of the formal literature? That’s what Event Data will aim to answer when the service launches later this year.","What could you achieve?","With such a corpus of open, transferable and auditable raw data at your fingertips, what could you achieve? ","General and altmetrics service providers","Journal editors","Funders","Publishers and publishing platforms ","Bibliometricians","Research institutions ","Stay tuned, testing begins soon!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/revived-crossref-books-interest-group/", "title": "Revived: Crossref Books Interest Group", "subtitle":"", "rank": 1, "lastmod": "2016-02-24", "lastmod_ts": 1456272000, "section": "Blog", "tags": [], "description": "We’re reviving the Books Interest Group, and inviting new members!\nAfter a hiatus, Crossref’s Books Interest Group is back. We’re excited to announce that Emily Ayubi of the American Psychological Association has agreed to chair the group.\nIn reviving the group, our intention is to create opportunities to talk about issues that are important to scholarly book publishers. For example, we hope to explore whether it is time to revise the Crossref best practices for depositing, versioning, and linking book content.", "content": "We’re reviving the Books Interest Group, and inviting new members!\nAfter a hiatus, Crossref’s Books Interest Group is back. We’re excited to announce that Emily Ayubi of the American Psychological Association has agreed to chair the group.\nIn reviving the group, our intention is to create opportunities to talk about issues that are important to scholarly book publishers. For example, we hope to explore whether it is time to revise the Crossref best practices for depositing, versioning, and linking book content. We are seeking interested members from the book publishing community, and want to hear your ideas for agenda items and topics for discussion. Our first meeting will be a teleconference held at 11:00 am Eastern time on Wednesday, March 23rd. You will receive dial-in details by email. 
** **\nIf you’d like to join—and we’re hoping you will—please email me at aondis@crossref.org.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rfc-registering-content-before-online/", "title": "Request for Community Comment: registering content before online availability", "subtitle":"", "rank": 1, "lastmod": "2016-01-21", "lastmod_ts": 1453334400, "section": "Blog", "tags": [], "description": "Crossref is proposing a process to support the registration of content—including DOIs and other metadata—prior to that content being made available, or published, online. We’ve drafted a paper providing background on the reasons we want to support this and highlighting the use cases. One of the main needs is in journal publishing to support registration of Accepted Manuscripts immediately on or shortly after acceptance, and dealing with press embargoes.\nProposal doc for community comment", "content": "Crossref is proposing a process to support the registration of content—including DOIs and other metadata—prior to that content being made available, or published, online. We’ve drafted a paper providing background on the reasons we want to support this and highlighting the use cases. One of the main needs is in journal publishing to support registration of Accepted Manuscripts immediately on or shortly after acceptance, and dealing with press embargoes.\nProposal doc for community comment\nWe request community comment on the __proposed approach as outlined in this report.\nSome examples of what we’d like to know:\nAre you aware of the issues outlined in this proposal? Are you aware of the funder and institutional requirements for authors to take action on acceptance of manuscripts for publication in journals? Do you think the proposed solution and workflows are reasonable? Are you likely to update your workflow to register content early? If you are likely to update your workflow, how long do you estimate it will take? Any other general comments, questions or feedback on anything raised in this document. Please send comments, feedback and questions to me, Ginny, at feedback@crossref.org. The deadline for comments is February 4th. Thanks!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linking-clinical-trials-enriched-metadata-and-increased-transparency/", "title": "Linking clinical trials = enriched metadata and increased transparency", "subtitle":"", "rank": 1, "lastmod": "2016-01-18", "lastmod_ts": 1453075200, "section": "Blog", "tags": [], "description": "We will shortly be adding a new feature to Crossmark. In a section called “Clinical Trials” we will be using new metadata fields to link together all of the publications we know about that reference a particular clinical trial.\nMost medical journals make clinical trial registration a prerequisite for publication. Trials should be registered with one of the fifteen WHO-approved public trial registries , or with clinicaltrials.gov which is run by the US National Library of Medicine.", "content": "We will shortly be adding a new feature to Crossmark. In a section called “Clinical Trials” we will be using new metadata fields to link together all of the publications we know about that reference a particular clinical trial.\nMost medical journals make clinical trial registration a prerequisite for publication. Trials should be registered with one of the fifteen WHO-approved public trial registries , or with clinicaltrials.gov which is run by the US National Library of Medicine. 
Once registered, a trial is assigned a clinical trial number (CTN) which is subsequently used to identify that trial in any publications that report on it.\nPublications that result from any one trial are likely to be released in multiple journals from different publishers and at different times, for example secondary analyses coming some time after the publication of the initial results. Cross-publisher collaboration is paramount to linking all of these publications together so that researchers, funders, and regulatory agencies can understand the whole set of results from clinical trials. With this in mind, a group of medical publishers, led by BioMedCentral, approached Crossref to establish a working group, and here, they designed an approach to address this problem: “thread” all the various documents together surrounding a clinical trial.\nUpdated upstream To implement threaded publications, publishers extract clinical trial numbers from papers, or ask authors to submit those numbers to them. Publishers add the CTNs to the Crossref DOI metadata via three new fields: clinical trial number, clinical trial registry where trial is registered, and trial stage (pre-results, results or post-results of the trial). Crossref has assigned unique IDs to each trial registry (much the same as we have done for funders in our Funder Registry and for the same reason - trial registry names and URIs can change over time and we need a persistent identifier). Using a combination of trial registry ID and clinical trial number, we can easily identify other content in the Crossref database that cites the same trial. Finally, Crossref displays the clinical trial metadata on the respective papers for all participating Crossmark publishers. Crossmark is a convenient place for readers to access the clinical trial information and is readily accessible directly from the journal article (online and PDF versions). And of course all of the data also goes into our open API so that anyone can make use of it.\nThe reporting of clinical trial results is notoriously inconsistent, something that the AllTrials initiative is also seeking to address. Publishers can help by collecting this information upstream and disseminating it using the existing Crossref infrastructure.\nWe ask all publishers to deposit the clinical trial data which is so critical to transparency in this area of research, and have already had the first data in from Crossref member the National Institute of Health Research. Once we launch the initial set of linked clinical trials, we will expand coverage of the threaded publications to include all content that reports on or references a clinical trial, from protocol to results to supporting data and systematic reviews.\nStay tuned and watch this space as threaded publications rolls out to journal articles across publishers!\n", "headings": ["Updated upstream"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-the-art-of-cartography-an-open-map-for-scholarly-communications/", "title": "Crossref & the Art of Cartography: an Open Map for Scholarly Communications", "subtitle":"", "rank": 1, "lastmod": "2016-01-08", "lastmod_ts": 1452211200, "section": "Blog", "tags": [], "description": " In the 2015 Crossref Annual Meeting, I introduced a metaphor for the work that we do at Crossref. 
I re-present it here for broader discussion as this narrative continues to play a guiding role in the development of products and services this year.\nMetadata enable connections At Crossref, we make research outputs easy to find, cite, link, and assess through DOIs. Publishers register their publications and deposit metadata through a variety of channels (XML, CSV, PDF, manual entry), which we process and transform into Crossref XML for inclusion into our corpus. This data infrastructure which makes possible scholarly communications without restrictions on publisher, subject area, geography, etc. is far more than a reference list, index or directory. ", "content": " In the 2015 Crossref Annual Meeting, I introduced a metaphor for the work that we do at Crossref. I re-present it here for broader discussion as this narrative continues to play a guiding role in the development of products and services this year.\nMetadata enable connections At Crossref, we make research outputs easy to find, cite, link, and assess through DOIs. Publishers register their publications and deposit metadata through a variety of channels (XML, CSV, PDF, manual entry), which we process and transform into Crossref XML for inclusion into our corpus. This data infrastructure which makes possible scholarly communications without restrictions on publisher, subject area, geography, etc. is far more than a reference list, index or directory. If research builds on what came before, one could claim that the process of knowledge production is partly the story of the very relationships between results disseminated (i.e., publications). So let’s consider each publication as a node in a graph where each has a coordinate and is connected by its citations to other publications (as well those that cite it). Additionally, each is associated with a set of people and places, along with a whole host of elements involved in the research and dissemination process.\nBut take a wider berth, and we begin to capture relationships between all such contributing agents and objects involved in the research process. Here we find an array of entities belonging to the scholarly graph, including different types of research artifacts, publisher and journal, funders, ORCIDs, peer reviews, publication status updates (corrections, retractions, etc.), citations, license information, additional URLs (machine destinations, hosting platforms, etc.), underlying data, software and protocols, materials, discussions and blog posts, recommendations, reference work mentions, etc. The entities on the graph multiply at an even higher rate as researchers share more outputs across more channels. And over time, the graph expands exponentially, producing a webbing that is far more dense and far more vast than we can currently imagine. Perhaps even to the point we realize Borges’ story where a cartographer builds a map so large it replicates the territory itself (On Exactitude in Science)!\nFrom graph to cartography At the heart of Borges’s poignant story is the map. Crossref’s graph of scholarly communications could be seen in the same light. It has a representational aspect, which is not purely abstract and can be visualized. Here, a map becomes an incredibly potent metaphor. Each link enabled by publisher-deposited metadata is a new street, bridge, or highway that takes us to a particular place (i.e., entity) of interest. These roads lead to articles, researchers, funders, institutions, etc., and in doing so, make them discoverable. 
They tell a story about the roles of each in the broader research in the landscape dotted with a plethora of places. The scholarly web has a growing corpus of more than 78 million publications at this very moment registered with Crossref. On average ten to fifteen thousand new objects appear every day. Maps are all the more essential for getting around in a bewildering environment of new and unfamiliar places, even for known ones in areas of exploding growth. They are critical for orienteering, discovering relationships, identifying sets of associated objects, naming new neighborhoods that emerge (i.e., new research specialties), etc. And if each connection on the map is seen as an event, maps can also represent micro-narratives about the research process and the agents involved. A multi-dimensional map containing all these entities, which serves as an evolving representation of spacetime that is constantly updated and always available, would finally begin to depict the process of scholarly activity as a dynamic, evolving, almost living system.\nAn open map for scholarly communication Crossref builds such a scholarly map of the research enterprise and makes it openly available for the entire research ecosystem. Call this a meta map or, more recently, call it metastructure. No matter what name it goes by we call it infrastructure at Crossref.\nCrossref’s open map for scholarly communications is a core part of the open information infrastructure for scholarly research. Crossref map data are open, portable, as well as licensed and provisioned for maximum reuse to serve the whole community. This open resource has two entrances: one for humans, another for machines. The Crossref REST API enables machines to traverse this environment and mine it in equal measure to the humans behind them. It is configured so that a robot can learn, a phone can access, and platforms can be built.\nOpenStreetMap and Google Maps, both widely used and mature infrastructure maps, are instructive examples when we consider a map of this kind for scholarly communications. Map data can be represented in unlimited ways, depending on any variety of needs and users. Third parties can add content via interactive layers that tell different stories such as health expenditure by country based on GDP and coral reefs at risk. They have a broad base of users across business models from philanthropic services aimed at disaster relief (Refugeemaps.eu) to commercial entities providing drivers with locations on open parking spaces (AppyParking on Google Map, PocketParker on OpenStreetMap). They power platforms and services that build maps for others (MapQuest, MapBox). They have applications far beyond the business of maps. For example, Place picker is a Google Maps widget that supports easy auto-complete the entry of any place or location on a mobile app where typing is a chore. And as far use cases close to home, the two have served as raw data for academic research (ex: workflow for generating multi-agent traffic simulation scenarios, automatic classification of GPS trajectories for transportation modes, etc.).\nIn kind, the Crossref infrastructure map also supports: the development of any variety of new maps which re-present the data, the makers of map platforms that power the research enterprise, tools that use map data, as well as academic research (bibliometrics). 
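To make the metaphor concrete: the map is only a few lines of code away for any machine. A minimal sketch against the public REST API (the DOI below is a placeholder, and references appear only where members have deposited them) fetches one node and follows its outbound links:

import requests

def work(doi):
    """Fetch one node on the map: the registered metadata for a single DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]

record = work("10.5555/12345678")
print(record.get("title"))
# Each deposited reference with a DOI is a road to another node we can visit in turn.
for ref in record.get("reference", []):
    if "DOI" in ref:
        print("links to", ref["DOI"])
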
We extract slices of data of common interest from the map and add them as additional layers by which anyone can access and create applications on or across these bands of data: Contributors (authors, editors, reviewers) Funding information (funding body, grant number) Trial \u0026amp; study information (clinical trials registry number, registered report, replication study) Publication history (versions, updates, revisions, corrections, retractions, dates received/accepted/published) Peer review (status, type, reviews) Access indicators (publication license for text \u0026amp; data mining, machine mining URLs) Resources \u0026amp; associated research artifacts (preprints, figures \u0026amp; tables, datasets, software, protocols, research resource IDs) Activity surrounding the publication (peer reviews, comments \u0026amp; discussions, bookmarks, social shares, recommendations). Today, the map powers a host of public and commercial organizations alike for a wide range of scholarly and non-scholarly purposes:\nPublishers Funders Research institutions Archives \u0026 repositories Research councils Data centres Professional networks Patent offices Registration Agencies Indexing services Publishing vendors Peer review systems Reference manager systems Lab \u0026 diagnostics suppliers Info management systems Educational tools Data analytics systems Literature discovery services\nWe will follow up this post to highlight a cross-section of these consumers in the Crossref map ecosystem and elaborate on what \u0026amp; how they have built from our data. An infrastructure map offers endless potential to third parties across publishers, funders, research institutions, and vendors working to serve the scholarly research enterprise.\nThe art of cartography In the Crossref Product Management team, we have ambitious plans for map enhancements this year. They focus on expanding information density and ease of access to the data. In the former case, we will introduce a new class of locations where activity surrounding the publications is occurring when we launch the DOI Event Tracker. We will also initiate an extensive publisher campaign to achieve full metadata deposit completeness across our membership. No one can keep pace with the sheer volume of research activity happening online nor wander the Lonely Web of research alone. The more metadata publishers provide for a publication, the more roads lead to its map location. After all, discoverability is closely associated with connectedness on a map. 
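Those layers are already queryable. As a small sketch (filter names as exposed by the public REST API; result handling kept deliberately simple), here is how one band of data - works that carry funding information - can be pulled out of the map:

import requests

def works_with_layer(filter_expr, rows=5):
    """Query one 'layer' of the map via a REST API filter, e.g. has-funder:true."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": filter_expr, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# Works carrying funding information; other layers use filters such as
# has-license:true or has-orcid:true.
for item in works_with_layer("has-funder:true"):
    funders = [f.get("name") for f in item.get("funder", [])]
    print(item["DOI"], funders)
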
And finally, in the latter case, we will refresh and enhance the user interface to make it more powerful for humans to traverse the ever-changing landscape (as easily as the REST API enables machines!).\nI gratefully acknowledge the feedback received from the following who served as generous and insightful sounding boards: Virginia Barbour, Theo Bloom, Martin Eve, Daniel S. Katz, Amye Kenall, Catriona MacCullum, Cameron Neylon, Mark Patterson, Kristen Ratan, Carly Strasser, and Kaitlin Thaney.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/xml/", "title": "XML", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/auto-update/", "title": "Auto-Update", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/orcid-tipping-point/", "title": "ORCID tipping point?", "subtitle":"", "rank": 1, "lastmod": "2016-01-07", "lastmod_ts": 1452124800, "section": "Blog", "tags": [], "description": " Today eight publishers have presented an open letter that sets out the rationale for making ORCID iDs a requirement for all corresponding authors, a move that is being backed by even more publishers and researchers as the news spreads on twitter with #publishORCID. Crossref is a founding organization of ORCID and an ongoing supporter so it’s great to see further uptake and even more benefit for the research community.", "content": " Today eight publishers have presented an open letter that sets out the rationale for making ORCID iDs a requirement for all corresponding authors, a move that is being backed by even more publishers and researchers as the news spreads on twitter with #publishORCID. Crossref is a founding organization of ORCID and an ongoing supporter so it’s great to see further uptake and even more benefit for the research community. We encourage all our members to strive for complete metadata and that should include ORCID iDs, whether their workflows are able to require them at submission or not. Since we launched the ORCID auto-update process a couple of months ago, over 10,000 authors have given Crossref permission to automatically update their ORCID records. The open letter—signed by eLife, PLOS, The Royal Society, AGU, EMBO, Hindawi, IEEE, and Science—also offers minimum implementation guidelines for the process: Require. ORCID iDs are required for corresponding authors of published papers, ideally at submission. Collect. The collection of ORCID iDs is done via the ORCID API, so authors are not asked to type in or search for their iD. Auto-update. Crossref metadata is updated to include ORCID iDs for authors, so this information can automatically populate ORCID records. Publish. Author/co-author ORCID iDs are embedded into article metadata. ORCID’s own announcement gives further background and describes the benefits for researchers, such as single sign-on across journals and ultimately, increased discovery of their works. Everybody wins. 
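One practical payoff of embedding iDs in the metadata: once a publisher deposits an authenticated ORCID iD with Crossref, anyone can pull that contributor’s registered works back out of the public REST API. A minimal sketch (the iD below is a placeholder):

import requests

def works_for_orcid(orcid, rows=20):
    """List works whose Crossref metadata includes the given ORCID iD."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"orcid:{orcid}", "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

for item in works_for_orcid("0000-0002-1825-0097"):
    print(item["DOI"], (item.get("title") or [""])[0])
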
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2015/", "title": "2015", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-healthy-infrastructure-needs-healthy-funding-data/", "title": "A healthy infrastructure needs healthy funding data", "subtitle":"", "rank": 1, "lastmod": "2015-12-16", "lastmod_ts": 1450224000, "section": "Blog", "tags": ["funding data", "Open Funder Registry"], "description": "We’ve been talking a lot about infrastructure here at Crossref, and how the metadata we gather and organize is the foundation for so many services - those we provide directly - and those services that use our APIs to access that metadata, such as Kudos and CHORUS, which in turn provide the wider world of researchers, administrators, and funders with tailored information and tools.\nThe initiative formerly known as FundRef Together Crossref’s funding data (previously known as FundRef – we simplified the name) and the Open Funder Registry, our taxonomy of grant-giving organizations, comprise a hub for gathering and querying metadata related to the questions: “Who funded this research?” and “Where has the research we funded been published?”\n", "content": "We’ve been talking a lot about infrastructure here at Crossref, and how the metadata we gather and organize is the foundation for so many services - those we provide directly - and those services that use our APIs to access that metadata, such as Kudos and CHORUS, which in turn provide the wider world of researchers, administrators, and funders with tailored information and tools.\nThe initiative formerly known as FundRef Together Crossref’s funding data (previously known as FundRef – we simplified the name) and the Open Funder Registry, our taxonomy of grant-giving organizations, comprise a hub for gathering and querying metadata related to the questions: “Who funded this research?” and “Where has the research we funded been published?”\nTo support the funding data initiative, three key pieces of metadata are needed from publishers:\nFunder ID Funder Name DOI Unfortunately only around half of the 950,000 Crossref DOIs with funding data contain funder IDs, the unique funder identifiers from the Open Funder Registry that are needed to link up all of the data. So, only half of the data is useful. (And 950,000 DOIs is only a fraction of the 77 million DOIs in our database, but more on that later).\nWhen we looked at the funding data that was coming in without funder IDs we were a little surprised. We had expected that most of these would be names that simply aren’t in the Open Funder Registry yet, and we thought there would be a certain amount of incorrect information that had been entered into the “funder_name” field. Instead, what we found was that many of the names were correct, and the funder IDs were just missing. Tidying the data\nTo help correct this, we decided to match incoming names to funder IDs where we could do so with the highest level of confidence. After much testing to minimize false positives, we switched this on at the end of August 2015. Throughout September and October, we inserted funder IDs for about 25% of the names that have been deposited without IDs. 
For October, the real numbers were 68,000 funder names with no IDs deposited, and 18,000 funder IDs inserted by Crossref. In the same period 42,000 funder IDs were deposited by publishers. With our matching on top of this, we are achieving a little over a 50% overall success rate of “good” funding data (funder names and funder IDs together). We have been very careful to distinguish the funder IDs that we have added from those deposited by publishers - provenance of data is an extremely important part of what we do. All funder IDs are tagged as provided either by the publisher or Crossref. Every time we insert an ID into a deposit, the publisher is notified in the deposit report. We have also now added these tags to our REST API so that publishers can query to find out exactly which DOIs we have amended*. The ideal scenario at this point is that the publisher checks that they are happy with the matching and then redeposits the funding data for those DOIs, over-writing the doi-asserted-by: “crossref” tag and claiming the metadata as their own. Setting some limits The second largest problem with funding data was incorrectly entered funder names – e.g. concatenation of several names or authors entering overly long or vague program names instead of the official funder name. To help weed this out, we have made a couple of changes to the funding data deposit system:\nFunder_name field can no longer contain a numerical string over 4 digits Funder_name field can no longer contain a text string over 200 characters Funder names that do not adhere to these two rules will now cause the funding data section of the metadata deposit (not the whole deposit) to fail and return an error message. Getting the growth we need\nAs of today, 198 publishers deposit funding data with Crossref. This amounts to about 3.5% of Crossref’s membership (although it’s a larger proportion of our total deposits). We need more publishers to deposit funding data so that funding data search can become a truly useful tool for the community. There’s no sign-up process or additional fee - read about how to get started, and take a look at our best practices for depositing funding data. Finally, we ask you: how can we get more and better funder metadata in 2016?\nThis is not a rhetorical question. Please tweet your thoughts @CrossrefOrg or email your replies to info@crossref.org. You will receive something special via snail mail if you reply to us – just Crossref’s way of saying thank you.\n*At the time of posting our database is re-indexing and the “asserted-by” tags are still filtering through to the API. Check back in a day or two for the full picture. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/distributed-usage-logging-a-private-channel-for-private-data/", "title": "Distributed Usage Logging: A private channel for private data", "subtitle":"", "rank": 1, "lastmod": "2015-12-04", "lastmod_ts": 1449187200, "section": "Blog", "tags": [], "description": " Forty wire telephone switchboard, 1907, Author unknown, Popular Science Monthly Vol 70, Wikimedia Commons.\nA few months ago Crossref announced that we will be launching a new service for the community in 2016 that tracks activities around DOIs, recording user content interactions. These “events” cover a broad spectrum of online activities including publication usage, links to datasets, social bookmarks, blog mentions, social shares, comments, recommendations, etc. 
The Event Data service collects the data and makes it available to all in an open clearinghouse so that data are open, comparable, audit-able, and portable. These data are all publicly available from external platform partners, and they meet the terms of distribution from each partner.\n", "content": " Forty wire telephone switchboard, 1907, Author unknown, Popular Science Monthly Vol 70, Wikimedia Commons.\nA few months ago Crossref announced that we will be launching a new service for the community in 2016 that tracks activities around DOIs, recording user content interactions. These “events” cover a broad spectrum of online activities including publication usage, links to datasets, social bookmarks, blog mentions, social shares, comments, recommendations, etc. The Event Data service collects the data and makes it available to all in an open clearinghouse so that data are open, comparable, audit-able, and portable. These data are all publicly available from external platform partners, and they meet the terms of distribution from each partner.\nBut Crossref and its members are also concerned about privacy. We recognise that not all data can be made open and public, particularly if it is sensitive, personally identifiable data about usage. With this in mind, we are also launching an affiliated service, Distributed Usage Logging (DUL), for external parties to transmit sensitive data on user content interactions directly to authorized end points. As researchers are increasingly using “alternative” (non-publisher) platforms to store, access and share literature, publishers are correspondingly interested in incorporating the activity on their publications into their COUNTER reports.\nInterested third-party sites might include the following:\nInstitutional and subject repositories Aggregator platforms (EBSCOhost, IngentaConnect) Researcher-oriented social-networking sites (e.g. Academia.edu, ResearchGate, Mendeley) Reading environments and tools (e.g. ReadCube, Utopia Documents) For publishers to process such events via their COUNTER-compliant usage reporting streams, they need private usage information and a secure channel by which to receive the data from the external platforms. Crossref will provide a switchboard that will enable these non-publisher platforms to safely transmit private data directly to the publisher.\nThe work ahead entails close collaboration between Crossref, COUNTER, and the partners who will be sending and receiving the private data. The cross-organizational team will be working towards the following before launch: technical infrastructure development for production service, semantic definition of the usage logging message, assignment and validation of credentials to participants in the scheme, participant integration of the DUL API, and incorporation of this data type into the COUNTER Code of Practice. We will also continue to consult with data privacy and security authorities to ensure that the scheme respects all governmental obligations and community best practice regarding the processing of personal data.\nWe will share more about the launch of the service as we make progress along the way. 
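As a very rough sketch of the kind of exchange the switchboard is meant to broker (the actual message format, endpoint and credential scheme are still being defined with COUNTER and the participants, so every field and URL below is a placeholder), a hosting platform might push a private usage event like this:

import requests

# Everything below is hypothetical: DUL's real message format, endpoint and
# credential scheme are still being defined with COUNTER and the participants.
DUL_ENDPOINT = "https://dul.example.org/usage"      # placeholder switchboard endpoint
PLATFORM_TOKEN = "issued-to-the-sending-platform"   # placeholder credential

usage_event = {
    "doi": "10.5555/12345678",               # which publication was used
    "occurred_at": "2015-12-04T10:15:00Z",
    "access_method": "regular",              # the sort of detail COUNTER reporting needs
    # No personally identifiable fields travel in the open: this is a private,
    # platform-to-publisher channel rather than part of the open clearinghouse.
}

resp = requests.post(
    DUL_ENDPOINT,
    json=usage_event,
    headers={"Authorization": f"Bearer {PLATFORM_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
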
Please contact Jennifer Lin for more information.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/usage/", "title": "Usage", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-labs-plays-with-the-raspberry-pi-zero/", "title": "Crossref Labs plays with the Raspberry Pi Zero", "subtitle":"", "rank": 1, "lastmod": "2015-12-02", "lastmod_ts": 1449014400, "section": "Blog", "tags": [], "description": "If you’re anything like us at Crossref Labs (and we know some of you are) you would have been very excited about the launch of the Raspberry Pi Zero a couple of days ago. In case you missed it, this is a new edition of the tiny low-priced Raspberry Pi computer. Very tiny and very low-priced. At $5 we just had to have one, and ordered one before we knew exactly what we want to do with it. You would have done the same. Bad luck if it was out of stock.\n", "content": "If you’re anything like us at Crossref Labs (and we know some of you are) you would have been very excited about the launch of the Raspberry Pi Zero a couple of days ago. In case you missed it, this is a new edition of the tiny low-priced Raspberry Pi computer. Very tiny and very low-priced. At $5 we just had to have one, and ordered one before we knew exactly what we want to do with it. You would have done the same. Bad luck if it was out of stock.\nWe love the way DOIs are being used in Wikipedia, but you probably already know that by now. Not only is it a brilliant source of information, mostly well cited, it’s also an organic living thing, with countless people and bots working together on countless articles. Our live stream of edits that cite (or uncite) DOIs shows new scholarly literature unfold, as it happens. From new articles to new references to improved citations to edit wars to bots cleaning up all the mess, it captivates everyone we show it to. The latest version has a live chart to show exactly how much activity is going on.\nCrossref works in five ways: Rally, Tag, Run, Play, and Make and this definitely comes under ‘Play’. By the time our Raspberry Pi Zero arrived it was clear what we had to do. We ordered a servo, a driver board and a wireless adapter and got to work.\nWe have some new neighbours in the basement. Oxford Hackspace is a community of people who want to work on projects from electronics to metalwork, hack things to improve them or find out how they work. A diverse bunch who at the last visit were working on squeezing unprecedented color capabilities from the 30 year old ZX Spectrum, a nixie tube display, a smartphone controlled doorbell and a robotic glockenspiel. They let us use their soldering iron to solder a few header pins.\nA bit of hacky Python, a pictureframe and lots of duck tape later, we have a live display of how many DOIs are cited and uncited per hour. It updates live every minute, fetches the latest numbers from the Wikipedia DOI citation stream and moves the hand.\n(For the worried engineers amongst you, rest assured that sufficient duck tape was added after this picture)\nIt’s extraordinary to think that a fully fledged computer with very capable specifications can be manufactured and sold for $5. 
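For the curious, the gist of that hacky Python is roughly this (a sketch rather than the code in the repository mentioned below; the stats URL, GPIO pin and scaling are assumptions):

import time
import requests
import RPi.GPIO as GPIO

SERVO_PIN = 18   # assumed GPIO pin for the servo signal
STATS_URL = "https://example.org/wikipedia-doi-stats"  # placeholder for the live stream's stats feed

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz servo signal
servo.start(7.5)                  # roughly centre position

def citations_last_hour():
    stats = requests.get(STATS_URL, timeout=10).json()
    return stats.get("cited-per-hour", 0)

try:
    while True:
        count = citations_last_hour()
        # Map 0-100 citations per hour onto a duty cycle of roughly 5-10 (one full sweep).
        duty = 5 + min(count, 100) / 100 * 5
        servo.ChangeDutyCycle(duty)
        time.sleep(60)            # refresh once a minute, as the display does
finally:
    servo.stop()
    GPIO.cleanup()
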
Within the space of a lunchtime we had it up and running, all connected and fetching data over the internet via wireless. A generation ago you would have had to use punched cards, send them by post and load them in by hand. The live stream would have been at least a month behind.\nIt now sits in our Oxford office reminding us that DOIs Aren’t Just for Traditional Bibliographies. Below Geoff Bilder’s reminder about what happens when you have too many standards (they’re telephone plugs from round the world).\nYou can find source code and instructions on the github repository so you can make your own if you want.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/watch-speaker-videos-from-the-2015-annual-meeting/", "title": "Watch Speaker Videos from the 2015 Annual Meeting", "subtitle":"", "rank": 1, "lastmod": "2015-11-24", "lastmod_ts": 1448323200, "section": "Blog", "tags": [], "description": "You might have missed it, but you haven’t missed out. If you want to watch – or savor re-watching – the presentations from last week’s 2015 Crossref Annual Meeting, we’ve embedded each video below in chronological order. Sit back, relax, and take it all in (again) just as though you were in an air-conditioned ballroom at the Taj.\n", "content": "You might have missed it, but you haven’t missed out. If you want to watch – or savor re-watching – the presentations from last week’s 2015 Crossref Annual Meeting, we’ve embedded each video below in chronological order. Sit back, relax, and take it all in (again) just as though you were in an air-conditioned ballroom at the Taj.\nNote: You can find the playlist containing all the videos on our YouTube channel.\nEd Pentz, Crossref Executive Director, focuses on the best practice of writing DOIs as actionable hyperlinks in his presentation, Crossref Best Practice: http://www.slideshare.net/Crossref/ed-pentz-crossref15-55435481 (slides only)\nMartin Paul Eve senior lecturer at Birkbeck University, London, delivers a trenchant criticism of the process small publishers must go through when getting and depositing their first Crossref DOI in his presentation, Crossref Deposit: A Scholar-Publisher Experience:\nAnne Coghill, Manager of Peer Review Operations for the American Chemical Society, detailed their process for deciding where in the manuscript workflow to insert CrossCheck plagiarism screening in her presentation, American Chemical Society Publications and CrossCheck: http://www.slideshare.net/Crossref/ann-coghill-crossref15 (slides only)\nBen Hogan, Regional Manager in Wiley’s Peer Review Management team, shares Wiley’s pain points as well as its positive experiences in using CrossCheck to detect plagiarism in his presentation, _CrossCheck Usage and Case Studies: _\nJure Triglav, Lead Developer for the PubSweet Publishing Framework at the Collaborative Knowledge Foundation, demonstrates how to mine data from the corpus of open science using Crossref’s metadata via its API and open source tools from the Collaborative Knowledge Foundation in his presentation, Making Science Writing Smarter:\nScott Chamberlain, open science researcher, shows the several advantages of using programmatic tools such as R, Python, and Ruby to mine text and data, including Crossref metadata, in his presentation, _Text and Data Mining: _\nHelen Duriez, ePublishing Manager at the Royal Society, describes the Royal Society’s experience with providing Crossmark data as a means of communicating document version information in her presentation, Crossmark – a 
journey through time (and space?) 2015\nJohn Chodacki, chair of Crossref’s DET committee, describes the future state of the DOI Event Tracker as an open hub for collecting and sharing data around web events that involve DOIs in his presentation, DOI Event Tracker 2015:\nMarc Abrahams, editor and co-founder of the Annals of Improbable Research, makes you LAUGH, then THINK with his keynote speech, Improbable Research, the Ig Nobel Prizes, and You:\nJuan Pablo Alperin describes the ways that Crossref and the Public Knowledge Project can work together to support common goals, in his presentation, _PKP and Crossref: Two P’s in a Cross\nEd Pentz, Crossref Executive Director, summarizes the organization’s expansion over the past year with his presentation, Crossref Growth and Change:\nGinny Hendricks, Director of Member \u0026amp; Community Outreach, details the findings of Crossref’s recent stakeholder research and the organization’s future plans to enhance member experience with her presentation, Member \u0026amp; Community Outreach:\nJennifer Lin, Director of Product Management, visualizes Crossref’s role as a map maker for the scholarly web in her presentation, Crossref: Building an Open Map for the Scholarly Enterprise:\nChuck Koscher, Director of Technology, gives us performance stats for the Crossref system, including aggregate uptimes and how long it takes to deposit metadata, in his presentation, Crossref System Performance:\nGeoffrey Bilder, Director of Strategic Initiatives, sheds light on the status of current and future research projects that are part of Crossref’s new product development process in his presentation, Strategic Initiatives Update:\nScott Chamberlain, open science researcher, proposes the use of programmatic tools, such as the R programming language working with the Crossref search API, to undertake scientific research in his presentation, Thinking Programmatically:\nMartin Paul Eve, senior lecturer at Birkbeck University, London, bears us back to the origins of the scholarly mission, considers the implications of the notion that researchers work within a symbolic economy, and looks at the practical challenges brought about by open access modes of publication for works in the Humanities in his wide-ranging presentation, Open Access \u0026amp; the Humanities: Digital Approaches:\nSlideshare, Too!\nFinally, each speaker has generously made their slides available here: http://www.slideshare.net/Crossref/tag/crossref15\n_ _\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-logo-has-landed/", "title": "The logo has landed", "subtitle":"", "rank": 1, "lastmod": "2015-11-11", "lastmod_ts": 1447200000, "section": "Blog", "tags": [], "description": "\rThe rebranding of Crossref was top priority when I joined in May in a new role called \u0026ldquo;Director of Member \u0026amp; Community Outreach\u0026rdquo;. Since then I’ve been working to understand the array of services, attributes, and audiences we have developed; to answer the questions \u0026ldquo;What do we do, for whom, and why?\u0026rdquo;\nAs Crossref prepares to celebrate turning fifteen at our annual meeting next week, I am thrilled to present our new brand identity with key messages and logo. And along with “thrilled” you may also detect “nervous excitement”.\n", "content": "\rThe rebranding of Crossref was top priority when I joined in May in a new role called \u0026ldquo;Director of Member \u0026amp; Community Outreach\u0026rdquo;. 
Since then I’ve been working to understand the array of services, attributes, and audiences we have developed; to answer the questions \u0026ldquo;What do we do, for whom, and why?\u0026rdquo;\nAs Crossref prepares to celebrate turning fifteen at our annual meeting next week, I am thrilled to present our new brand identity with key messages and logo. And along with “thrilled” you may also detect “nervous excitement”.\nOver the last few months we have reviewed earlier research and talked with a number of members, affiliates, and academics. Turns out we’re the plain talkers of the industry, the do-ers, the scrappy people who get stuff done, chivvy others along, and in some cases we are—dare I say it—the voice of reason!\nWhile balancing differing views within the scholarly community, we’re all about making connections – literally and figuratively. We help bring together people and metadata in pursuit of an excellent research communications system for all. And, to mirror one of Ed Pentz’s new catchphrases, we are \u0026ldquo;keeping it real\u0026rdquo;; with down-to-earth language.\nCrossref Key Messages\nNew logos and names for all our products will come soon (in some cases it’ll be a ‘de-brand’ rather than a re-brand!). We’ll gradually phase in the new identity over the next month or two, starting with our annual meeting, and with a complete website relaunch following in 2016. We will contact all of our members and partners in the coming weeks with information about using the new logo, using a content delivery network (CDN) so that sites can reference the correct file.\nWhy rebrand? We have not rebranded because we plan on doing something different but rather to better express the things we already do. Our ‘problem’ was that often people didn’t know Crossref was behind initiatives like CrossCheck, Crossmark and FundRef. Our products had become unlinked from the organisation. And since we’re all about linking things together, that just made no sense.\nWe needed an icon to give more flexibility across the web that a word mark cannot do alone. The icon is made up of two interlinked angle brackets familiar to those who work with metadata, and can also act as arrows depicting Metadata In and Metadata Out, two themes under which our services can generally be grouped. Sentence case helps to avoid splitting the word; we do not want to tempt the Cross and the Ref to divide again. So that lowercase R you see in the middle of our name is indeed an official change. (Hopefully we can change the habit!) The palette gives a nod to the history of Crossref with red \u0026amp; dark grey, but brings in contemporary colors for a fresh palette that is distinctive in our industry (we researched a lot - everyone has circles, and traditional shades abound). Our aesthetic embodies classic Swiss design principles and is minimalist in keeping with our straight-talking personality. So, in the words of Board Chair, Ian Bannerman, it’s time for Crossref to step forward.\nAbout Crossref\nI’m looking forward to revealing more of the story at our annual meeting next week!\n", "headings": ["Why rebrand?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/auto-update-has-arrived-orcid-records-move-to-the-next-level/", "title": "Auto-Update Has Arrived! 
ORCID Records Move to the Next Level", "subtitle":"", "rank": 1, "lastmod": "2015-10-26", "lastmod_ts": 1445817600, "section": "Blog", "tags": [], "description": " Crossref goes live in tandem with DataCite to push both publication and dataset information to ORCID profiles automatically. All organisations that deposit ORCID iDs with Crossref and/or DataCite will see this information going further, automatically updating author records. ", "content": " Crossref goes live in tandem with DataCite to push both publication and dataset information to ORCID profiles automatically. All organisations that deposit ORCID iDs with Crossref and/or DataCite will see this information going further, automatically updating author records. We’re cross-posting ORCID’s blog below with all the details:\nSince ORCID’s inception, our key goal has been to unambiguously identify researchers and provide tools to automate the connection between researchers and their creative works. We are taking a big step towards achieving this goal today, with the launch of Auto-Update functionality in collaboration with Crossref and [DataCite](https://www.datacite.org/. There’s already been a lot of excitement about Auto-Update: Crossref’s recent announcement about the imminent launch generated a flurry of discussion and celebration on social media. Our own tweet on the topic was viewed over 10,500 times and retweeted by 60 other accounts. So why all the fuss? We think Auto-Update will transform the way researchers manage their scholarly record. Until now, researchers have had to manually maintain their record, connecting new activities as they are made public. In ORCID, that meant using Search \u0026amp; Link tools developed by our member organizations to claim works manually. Researchers frequently ask, “Why, if I include my ORCID iD when I submit a manuscript or dataset, isn’t my ORCID record “automagically” updated when the work is published?”\nWith the launch of Auto-Update, that is just what will happen. It might seem like magic but there are a few steps to make it work:\nResearchers. You need to do two things: (1) use your ORCID iD when submitting a paper or dataset, and (2) authorize Crossref and DataCite to update your ORCID record. In keeping with our commitment to ensuring that researchers maintain full control of their ORCID record, you may revoke this permission at any time, and may also choose privacy settings for the information posted on your record.\nPublishers and data centers. These organizations also have two things to do: (1) collect ORCID identifiers during the submission workflow, using a process that involves authentication (not a type-in field!), and (2) embed the iD in the published paper and include the iD when submitting information to Crossref or DataCite. Crossref and DataCite. Upon receipt of data from a publisher or data center with a valid identifier, Crossref or DataCite can automatically push that information to the researcher’s ORCID record. More information about how to opt out of this service can be found here: the ORCID Inbox.\nWhy is this so revolutionary? A bit of background, first. Crossref and DataCite, both non-profit organizations, are leaders in minting DOIs (Digital Object Identifiers) for research publications and datasets. A DOI is a unique alphanumeric string assigned to a digital object – in this case, an electronic journal article, book chapter, or a dataset. 
Each DOI is associated with a set of basic metadata and a URL pointer to the full text, so that it uniquely identifies the content item and provides a persistent link to its location on the internet.\nCrossref, working with over a thousand scholarly publishers, has generated well over 75 million DOIs for journal articles and book chapters. DataCite works with nearly 600 data centers worldwide and has generated over 6.5 million DOIs to date. Between them, Crossref and DataCite have already received almost a half a million works from publishers and data centers that include an ORCID iD validated by the author/contributor. With Auto-Update functionality in place, information about these articles can transit (with the author’s permission) to the author’s ORCID record. Auto-Update doesn’t stop at a researcher’s ORCID record. Systems that have integrated ORCID APIs and have a researcher’s ORCID record connected to that system — their faculty profile system, library repository, webpage, funder reporting system — can receive alerts from ORCID. Information can move easily and unambiguously across systems. This is the beginning of the end for the endless rekeying of information that plagues researchers — and anyone involved in research reporting. Surely something to celebrate!\nQuestions you may have:\nQ. What do I need to do to sign up for auto-update?\nYou need to grant permission to Crossref and DataCite to post information to your ORCID record. You can do this today by using the Search and Link wizard for DataCite available through the ORCID Registry or the DataCite Metadata Search page. We also have added a new ORCID Inbox, so that you can receive a message from Crossref or DataCite if they receive a datafile with your iD, and you can grant permission directly. See More on the ORCID Inbox.\nQ. Will Crossref and DataCite be able to update my ORCID record with already published works for which I did not use my ORCID iD?\nNo. The auto-update process only applies to those works that these organizations receive that include your ORCID iD. For previous works that did not include your ORCID iD, you will need to use the DataCite and Crossref Search and Link wizards to connect information with your iD.\nQ. What information will be posted to my record?\nWith your permission, basic information about the article (such as title, list of contributors, journal or publisher) or dataset (such as data center name and date of publication) will be posted, along with a DOI that allows users to navigate to the source paper or dataset landing page.\nQ. What if my journal or data center doesn’t collect ORCID iDs?\nAsk them to! This simple step can be accomplished using either the Public or Member ORCID APIs. Information about integrating ORCID iDs in publishing and repository workflows is publicly available.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/nov-9th-new-webinar-crossref-for-open-access-publishers/", "title": "Nov 9th - New Webinar: Crossref for Open Access Publishers", "subtitle":"", "rank": 1, "lastmod": "2015-10-19", "lastmod_ts": 1445212800, "section": "Blog", "tags": ["Open Access"], "description": "Register for our webinar to learn best practices for depositing metadata and ways to help with the dissemination and discoverability of OA content.\nNew Crossref services are being developed that have particular application to OA publishers. 
Did you know that our upcoming DOI Event Tracker service was inspired by a group of OASPA publishers asking if there was a way to centrally support the gathering of data that could be analyzed as altmetrics?\n", "content": "Register for our webinar to learn best practices for depositing metadata and ways to help with the dissemination and discoverability of OA content.\nNew Crossref services are being developed that have particular application to OA publishers. Did you know that our upcoming DOI Event Tracker service was inspired by a group of OASPA publishers asking if there was a way to centrally support the gathering of data that could be analyzed as altmetrics?\nA large number of Crossref members classify their content as Open Access, and we’ve been thinking about how our infrastructure can support and communicate this. In many ways, it already does:\nCrossref supports the deposit of license and funding information in the DOI metadata. Crossref’s Crossmark Service is useful to OA publishers who need to have the means to update info about their content, no matter where it sits. Crossref’s APIs allow publishers to make it easier for researchers to mine full-text content. Register for the Crossref Open Access Webinar\nDate: November 9, 2015\nTime: 8:00 am (San Francisco), 11:00 am (New York), 4:00 pm (London) Register: https://attendee.gotowebinar.com/register/4198524003003451650\nPlease join us for this new webinar that gives an overview of Crossref and its network of member publishers, along with information on Crossref services that have specific relevance to OA scholarly content.\nCrossref will be joined by two guest speakers - Frontiers will talk about their OA workflows and how Crossref services integrate with these, and James MacGregor from PKP will show participants the Crossref Export/Registration Plugin which journals can enable to assign DOIs with Crossref and to help them participate in other Crossref services.\nThere will be time for questions and discussion during the webinar. The webinar will be recorded.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/open-access/", "title": "Open Access", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/webinars/", "title": "Webinars", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/2015-annual-meeting-speakers-announced/", "title": "2015 Annual Meeting: Speakers Announced", "subtitle":"", "rank": 1, "lastmod": "2015-10-13", "lastmod_ts": 1444694400, "section": "Blog", "tags": [], "description": "Curious about who will be speaking at Crossref’s Annual Meeting this year? We have a flock of scholarly communications talent gathering at the Taj Hotel in Boston from November 17-18, 2015. In addition to our line-up of keynote speeches and technical workshops, we will be celebrating Crossref’s 15th Anniversary with a quindecennial fête on Wednesday evening, November 18th. 
There’s still time to register, so please join us! ", "content": "Curious about who will be speaking at Crossref’s Annual Meeting this year? We have a flock of scholarly communications talent gathering at the Taj Hotel in Boston from November 17-18, 2015. In addition to our line-up of keynote speeches and technical workshops, we will be celebrating Crossref’s 15th Anniversary with a quindecennial fête on Wednesday evening, November 18th. There’s still time to register, so please join us! Distinguished Guest Speaker Bios:\nMarc Abrahams will be a keynote speaker at Crossref’s 2015 Annual Meeting. Marc writes about research that makes people LAUGH, then THINK. He is editor and co-founder of the magazine Annals of Improbable Research (AIR), host and main writer of the Improbable Research weekly podcast (distributed by CBS), and author of This is Improbable Too and other books. He edits and writes much of the web site and blog www.improbable.com, and for thirteen years wrote a column (called “Improbable Research”) for The Guardian newspaper. Marc is the father and Master of Ceremonies of the Ig Nobel Prize Ceremony, honoring achievements that make people LAUGH, then THINK. The Prizes are handed out by genuine Nobel Laureates at a gala ceremony held each autumn at Harvard University and broadcast on the internet and on National Public Radio. Marc is author of the books The Ig Nobel Prizes, The Man Who Cloned Himself, Why Chickens Prefer Beautiful Humans, This Is Improbable, This is Improbable Too, The Ig Nobel Cookbook, volume 1 (co-authored with Corky White and Gus Rancatore). He edited (and wrote much of) the science humor anthologies The Best of Annals of Improbable Research and Sex As a Heap of Malfunctioning Rubble (and other improbabilities). Marc has a degree in applied mathematics from Harvard College, spent several years developing optical character recognition computer systems (including a reading machine for the blind) at Kurzweil Computer Products, and later founded Wisdom Simulators, a creator of educational software. Juan Pablo Alperin will be a keynote speaker at Crossref’s 2015 Annual Meeting. Juan is an Assistant Professor and a Research Associate with the Public Knowledge Project (PKP) at Simon Fraser University. Juan started working with the PKP in 2007, and has continued to be involved as systems developer, project manager, and researcher. Juan leads and advises on several of PKP’s R\u0026amp;D and Scholarly Inquiry initiatives as a complement to his research and work on scholarly communications more broadly. He can be reached via @juancommander. ORCID iD: orcid.org/0000-0002-9344-7439.\nScott Chamberlain will be a keynote speaker as well as a presenter at Crossref’s 2015 Annual Meeting. Scott is a scientific programmer who contributes to the field of scholarly literature by developing software for accessing open data on the web. He co-founded a developer collective called rOpenSci to help connect open source data into the R environment, a free software environment for statistical computing and graphics that runs on all major platforms. Scott maintains a few clients to work with Crossref APIs, and a text mining client that leverages Crossref’s TDM service. In addition, Scott maintains clients in R, Ruby, and Python to interact with Legotto, a platform for collecting and delivering altmetric data. A former ecologist, Scott is currently working full time on rOpenSci at the University of California at Berkeley. He can be reached via @recology_/@opensci. 
ORCID iD: http://orcid.org/0000-0003-1444-9135.\nJohn Chodacki will be a presenter at Crossref’s 2015 tech workshops. John is Director of University of California Curation Center (UC3) at California Digital Library (CDL). At UC3, John works with UC campuses and the broader community to ensure that CDL’s digital curation services meet the emerging needs of the scholarly community, including digital preservation, data management, and reuse. Prior to joining UC3, John was Product Director at PLOS where he led cross-departmental strategic projects such as the Article-Level Metrics (ALM) initiative. He has served on the Crossref board and is currently the Committee Chair for DOI Event Tracker (DET). He can be reached via @chodacki. ORCID iD: orcid.org/0000-0002-7378-2408. Anne Coghill will be a presenter at Crossref’s 2015 Annual Meeting. Anne is Manager, Peer Review Operations, in the American Chemical Society Publications Division. She and her colleagues manage the manuscript submission and peer review environment for ACS’ scholarly journals and books publishing program. Anne holds a Bachelor of Science in chemistry from Illinois State University and a Master in Science in Management Studies from Northwestern University. She is also the co-editor of The ACS Style Guide, third edition. She can be reached via @AnneCoghill. ORCID iD: orcid.org/0000-0002-2773-2282. Helen Duriez will be a presenter at Crossref’s 2015 tech workshops. Helen is the ePublishing Manager at the Royal Society, responsible for developing the Society’s digital journals strategy as well as the day-to-day management of its journal websites. Since digital innovation transcends the traditional boundaries of scholarly publishing, she spends a lot of time pondering a variation of Freud’s musings, ‘what do researchers want?’ Helen can be contacted via @HDuriez and @RSocPublishing.\nMartin Paul Eve will be a keynote speaker as well as a presenter at Crossref’s 2015 Annual Meeting. Martin is Senior Lecturer in Literature, technology and Publishing at Birkbeck, University of London and a founder of the Open Library of Humanities. He is the author of three books: Pynchon and Philosophy: Wittgenstein, Foucault and Adorno (Palgrave, 2014); Open Access and the Humanities: Contexts, Controversies and the Future (Cambridge University Press, 2014); and Password [a cultural history (Bloomsbury, forthcoming 2016) and many journal articles. A strong advocate for open access to scholarly material, Martin has given evidence to the UK House of Commons Select Committee Inquiry into Open Access; served on the Jisc OAPEN-UK Advisory Board, the Jisc National Monograph Strategy Group, and the Jisc Scholarly Communications Advisory Board; been a member of the HEFCE Open Access Monographs Expert Reference Group; and is a member of the SCONUL Strategy Group on Academic Content and Communications. Martin is also a qualified computer programmer (Microsoft Professional in C# and the .NET Framework) and is the author of the digital publishing tools meTypeset and CaSSius. He can be reached via @martin_eve. ORCID iD: orcid.org/0000-0002-5589-8511.\nBen Hogan will be a presenter at Crossref’s 2015 tech workshops. Ben is a Regional Manager in Wiley’s Peer Review Management team, responsible for leading the North America and Open Access teams. He works with internal and external stakeholders to bring in new work and refine the peer review experience to be as efficient as possible for authors and editorial offices. 
Ben’s worked in publishing since 2007 in a variety of capacities, including books and journals production, training, and peer review. His interests include user experience and publication ethics.\nJure Triglav will be a presenter at Crossref’s 2015 tech workshops. His presentation, Using Crossref’s API to Make Smarter Science Writing, will explore how continuously talking to Crossref’s API can help us write better scientific content. Topics will include calling the API from JavaScript, combining Crossref data with modern web-based text editors, and more. Jure is an open science software developer. Jure graduated from medical school 4 years ago, but started working as a developer for Academia.edu shortly after. Now he focuses on technology issues present in open science and runs several projects in this space: @ScienceGist, @ScienceToolbox and @ScholarNinja. Jure also works with open science organizations like PLOS, working on software that will power the future of scientific publishing. He can be reached via @juretriglav.\nCrossref Staff Speaker Bios: Geoffrey Bilder is Director of Strategic Initiatives at Crossref, where he has led the technical development and launch of a number of industry initiatives including CrossCheck, Crossmark, ORCID and FundRef. He co-founded Brown University’s Scholarly Technology Group in 1993, providing the Brown academic community with advanced technology consulting in support of their research, teaching and scholarly communication. He was subsequently head of IT R\u0026amp;D at Monitor Group, a global management consulting firm. From 2002 to 2005, Geoffrey was Chief Technology Officer of scholarly publishing firm Ingenta, and just prior to joining Crossref, he was a Publishing Technology Consultant at Scholarly Information Strategies. He can be reached via @Geoffrey Bilder. ORCID iD: orcid.org/0000-0003-1315-5960.\nGinny Hendricks is Director of Member \u0026amp; Community Outreach for Crossref, and is responsible for Crossref’s communications, business development, member services, and product support initiatives. Before joining Crossref, she ran Ardent Marketing for nine years, where she consulted with publishers to craft multichannel marketing strategies, develop, brand, and launch online products, and build engaged communities. She previously managed Elsevier’s launch of Scopus, the abstract and citation database of peer-reviewed literature. While at Elsevier, she established advisory boards and outreach programs with library and scientific communities. In 1998, Ginny started an early e-resources help desk for Blackwell’s Information Services and later led training and communication programs for Swets’ digital portfolio in Asia Pacific, Middle East, and Africa. She’s lived and worked in many parts of the world, has managed globally dispersed creative, technical, and commercial teams, and co-hosts the Scholarly Social networking events in London. She can be reached via @GinnyLDN. ORCID iD: http://orcid.org/0000-0002-0353-2702.\nChuck Koscher has been the Director of Technology for Crossref since 2002. His primary responsibility has been the development and operation of Crossref’s core services and technical infrastructure. As a senior staff member he also contributes to the definition of Crossref’s mission and the expansion of its services such as the recent launch of Fundref. His role includes management of technical support and back-end business operations. 
Chuck and his team interface directly with members in dealing with issues effected by new or evolving industry practices, such as those involving non-journal content like books, standards and databases. Chuck has been active within the industry, having served 9 years on the NISO board of directors and participated in initiatives such as the NISO/NFAIS Best Practices in Journal Publishing and NISO’s Supplemental Material Working Group. Prior to Crossref, Chuck gained over 20 years of software engineering experience, primarily in the aerospace industry. ORCID iD: orcid.org/0000-0003-2181-9595.\nRachael Lammey is a Product Manager on Crossref’s Crosscheck plagiarism screening and Text and Data Mining API initiatives, among other tools that Crossref makes available for publishers to build upon. Rachael has been with Crossref since March 2012. She previously worked in journals publishing for Taylor \u0026amp; Francis for nearly six years, managing a team who worked with online submission and peer review systems. She has a degree in English Literature from St. Andrews University and an MA in Publishing Studies from the University of Stirling. She can be reached via @rachaellammey. ORCID iD: http://orcid.org/0000-0001-5800-1434.\nJennifer Lin is the Director of Product Management at Crossref. She has worked in product development, project management, community outreach, and change management within the scholarly communications, education, and public sectors since 2000. She spent four years at the Public Library of Science (PLOS) where she oversaw product strategy and development for their data program, article-level metrics initiative, and open assessment activities. Prior to PLOS, she was a consultant with Accenture, working with Fortune 500 companies as well as governments, to develop and deploy new products and services. Jennifer earned her PhD at Johns Hopkins University. Jennifer can be reached via @jenniferlin15. ORCID iD: http://orcid.org/0000-0002-9680-2328.\nEd Pentz is the Executive Director of Crossref, a not-for-profit membership association of publishers set up to provide a cross-publisher reference linking service, organise publisher metadata, run the infrastructure that makes Digital Object Identifier (DOI) links work, and rally multiple community stakeholders to develop tools and services that enable advancements in scholarly publishing. Ed was appointed as Crossref’s first Executive Director when the organization was created in 2000. Crossref is now the largest DOI registrar in the world with over 75,000,000 DOIs. Ed is also Chair of the Board of ORCID, a registry of unique identifiers for researchers established in 2010. Prior to joining Crossref, Ed held electronic publishing, editorial and sales positions at Harcourt Brace in the US and UK and managed the launch of Academic Press’ first online journal, the Journal of Molecular Biology, in 1995. Ed has a degree in English Literature from Princeton University and lives in Oxford, England. He can be reached via @epentz. ORCID iD: http://orcid.org/0000-0002-5993-8592.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-in-reddit/", "title": "DOIs in Reddit", "subtitle":"", "rank": 1, "lastmod": "2015-09-30", "lastmod_ts": 1443571200, "section": "Blog", "tags": [], "description": "Skimming the headlines on Hacker News yesterday morning, I noticed something exciting. A dump of all the submissions to Reddit since 2006. “How many of those are DOIs?”, I thought. 
Reddit is a very broad community, but has some very interesting parts, including some great science communication. How much are DOIs used in Reddit?\n(There has since been a discussion about this blog post on Hacker News)\nWe have a whole strategy for DOI Event Tracking, but nothing beats a quick hack or is more irresistible than a data dump.\n", "content": "Skimming the headlines on Hacker News yesterday morning, I noticed something exciting. A dump of all the submissions to Reddit since 2006. “How many of those are DOIs?”, I thought. Reddit is a very broad community, but has some very interesting parts, including some great science communication. How much are DOIs used in Reddit?\n(There has since been a discussion about this blog post on Hacker News)\nWe have a whole strategy for DOI Event Tracking, but nothing beats a quick hack or is more irresistible than a data dump.\nWhat is a DOI? If you know what a DOI is, skip this! The DOI system (Digital Object Identifier) is a link redirection service. When a publisher puts some content online they could just hand out the URL. But the URL can change, and within a very short space of time, link-rot happens. DOIs are designed to fight link rot. When a publisher mints a DOI to an article they just published, they can change the article’s URL and then update the DOI to point to the new place. DOIs are persistent. They are URLs. They’re also identifiers (kind of like ISBNs), and they’re used in scholarly publishing to make citations.\nCrossref is the DOI registration agency for scholarly publishing. That means mostly things like journal articles. There are other registration agencies, for example, DataCite, who do DOIs for research datasets. But at this point in time, most DOIs are Crossref’s.\nWhat does finding DOIs in Reddit mean? It means someone used a DOI to cite something! DOIs can be used for any kind of content, but because of the sheer volume of scientific publishing, lots of DOIs are for science. Having a DOI doesn’t say anything about quality or content. But it does indicate that the person who created the DOI probably intended it to be cited. We care because it means that every time a DOI is used a tiny bit of link-rot doesn’t have the opportunity to take hold. Every time something is discussed on Reddit and the DOI is used, it means that archaeologists using the data dump in 100 years will have identifiers to find the things being discussed, even if the web and URLs have long since crumbled to dust.\nOr, more likely, in five years’ time when a few URLs will have shuffled around.\nThe results DOIs have been used on Reddit since 2008 (the logs start in 2006). After a rocky start, we see hundreds being used per year.\nThat’s dozens per month.\nThe best subreddit to find DOIs is /r/Scholar, followed by /r/science. And then a lot of others with one or two per year.\nOpportunities It’s great to see DOIs being used in Reddit. But let’s be honest, it’s not a massive amount.\nWe have a list of domains that our DOIs point to. They mostly belong to publishers, so every time we see a link to a domain on the list, there’s a chance (not a certainty) that the link could have been made using a DOI. We found a large number of these, orders of magnitude more than DOIs. We’re still crunching the data.\nThe data The data is quite large. It’s a 40 Gigabyte download compressed, which comes to about 170 GB uncompressed. 
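For a flavour of what the quick hack looks like, here is a minimal sketch for illustration only; it is not the code in the repository linked under Reproducibility below, which used Apache Spark over the full dump. It assumes the decompressed dump is one JSON object per submission per line with url and subreddit fields (the filename is a placeholder), and uses a deliberately loose, unanchored DOI pattern for scraping:

import json
import re
from collections import Counter

# Loose, unanchored pattern for scraping DOI-like strings out of URLs.
DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+', re.IGNORECASE)

counts = Counter()
with open("reddit-submissions.jsonl") as dump:  # placeholder filename
    for line in dump:
        submission = json.loads(line)
        url = submission.get("url") or ""
        if DOI_PATTERN.search(url):
            counts[submission.get("subreddit", "unknown")] += 1

# Which subreddits link to DOIs most often?
for subreddit, n in counts.most_common(10):
    print(subreddit, n)

A note on what the dump actually holds: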
It contains the submissions to reddit between 2006 and 2015, not the comments, so each data point represents a thread of conversation about a DOI.\nReproducibility (updated) You can find the source code and reproduce the figures at http://github.com/crossref/reddit-dump-experiment. We use Apache Spark for this kind of thing.\nThe data and methodology are very experimental. You can download all results here:\nhttps://s3-eu-west-1.amazonaws.com/crossref-labs-data/2015-10-06/reddit-dump-experiment.zip\nIt includes all data for charts in this post, as well as the full list of DOIs, the full list of URLs that could possibly have DOIs, and the full JSON input line for each of these.\nMore info Read about our DOI Event Tracking strategy, including our live stream of Wikipedia citations.\n", "headings": ["What is a DOI?","What does finding DOIs in Reddit mean?","The results","Opportunities","The data","Reproducibility (updated)","More info"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/scheduled-booth-presentations-at-the-frankfurt-book-fair/", "title": "Scheduled Booth Presentations at the Frankfurt Book Fair", "subtitle":"", "rank": 1, "lastmod": "2015-09-29", "lastmod_ts": 1443484800, "section": "Blog", "tags": [], "description": "Oktoberfest is in full swing and that makes me think that it’s almost Frankfurt Book Fair time again!\nThis year in addition to individual meetings we’ll have scheduled flash presentations on our booth, M91 in Hall 4.2. These short (10-minute) presentations are great for anyone wanting a quick intro to what Crossref is all about. Running on Wednesday, Thursday, and Friday - at the following times each of those days:\n", "content": "Oktoberfest is in full swing and that makes me think that it’s almost Frankfurt Book Fair time again!\nThis year in addition to individual meetings we’ll have scheduled flash presentations on our booth, M91 in Hall 4.2. These short (10-minute) presentations are great for anyone wanting a quick intro to what Crossref is all about. 
Running on Wednesday, Thursday, and Friday - at the following times each of those days:\n10am - Small Publisher Tools 12pm - DOIs \u0026amp; Metadata Basics 3pm - Exploring through APIs If you’d like to meet with us (Ed Pentz, Ginny Hendricks, Rachael Lammey, or Anna Tolwinska) please contact Rosa Morais Clark to set up a meeting.\nWe look forward to seeing you there!", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/taxonomies/", "title": "Taxonomies", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/taxonomies-meet-up-at-fbf15/", "title": "Taxonomies Meet-up at #FBM15", "subtitle":"", "rank": 1, "lastmod": "2015-09-25", "lastmod_ts": 1443139200, "section": "Blog", "tags": [], "description": "The Taxonomies Interest Group would like to invite Crossref members to an informal drop-in at the Frankfurt Book Fair:\n4-5pm on Wednesday 14th October at the TEMIS booth H76\n", "content": "The Taxonomies Interest Group would like to invite Crossref members to an informal drop-in at the Frankfurt Book Fair:\n4-5pm on Wednesday 14th October at the TEMIS booth H76\nThe group would like to discuss how different publishers use their taxonomies for content enrichment and to explore the role that the Crossref interest group can play in promoting industry collaboration and emerging standards. TEMIS have kindly offered to host the event at their booth and provide refreshments: Please come by from 4pm at Booth H76.\nGraham McCann from IOP Publishing and Christian Kohl from De Gruyter will be coordinating the event. For background information on the work the group is doing, take a look at this webinar recording from March 2015.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-to-auto-update-orcid-records/", "title": "Crossref to Auto-Update ORCID Records", "subtitle":"", "rank": 1, "lastmod": "2015-09-24", "lastmod_ts": 1443052800, "section": "Blog", "tags": [], "description": "In the next few weeks, authors with an ORCID iD will be able to have Crossref automatically push information about their published work to their ORCID record. It’s something that ORCID users have been asking for and we’re pleased to be the first to develop the integration. 230 publishers already include ORCID iDs in their metadata deposits with us, and currently there are 248,000 DOIs that include ORCID iDs.\n", "content": "In the next few weeks, authors with an ORCID iD will be able to have Crossref automatically push information about their published work to their ORCID record. It’s something that ORCID users have been asking for and we’re pleased to be the first to develop the integration. 230 publishers already include ORCID iDs in their metadata deposits with us, and currently there are 248,000 DOIs that include ORCID iDs.\nWhat this means for researchers\nMore visibility for your work! Crossref represents over 5000 scholarly publishers and many of them ask authors for their ORCID iD and include it in the publication information they send us. 
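If you are curious which of your works already carry your iD in Crossref metadata, you can check via the public REST API. A minimal sketch, assuming Python with the requests library and the works endpoint’s orcid filter; the iD below is just a placeholder to swap for your own:

import requests

# Replace with your own iD; this one is only an illustrative placeholder.
orcid = "0000-0002-1825-0097"

resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": f"orcid:{orcid}", "rows": 20},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    # Each item is a work whose deposited metadata includes that ORCID iD.
    title = item.get("title") or ["(untitled)"]
    print(item["DOI"], "-", title[0])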
Also it will mean less manual searching and adding; you’ve always been able to search crossref metadata for your name and/or publications and manually add them to your ORCID record, this auto-update simply means that when your publishers include the info we can update and add work(s) to your ORCID record automatically for you. You can still choose to hide/show whatever works you choose, and, of course, you’ll have the opportunity to authorize or switch off the integration completely (though future publications may trigger a new request). Overall, you’ll benefit from a more complete and up-to-date ORCID record to showcase your work.\nWhat this means for publishers\nIf you’re one of the 230 Crossref publishers who already supply ORCID iDs along with the usual metadata submissions, then you’re all good. If you don’t offer this yet, you might want to think about starting - it’s beneficial for funders, publishers, other researchers, libraries, and universities to be able to integrate with complete researcher records. You can ask for ORCIDs upon manuscript submission or acceptance and tag it in your metadata deposits with Crossref. We’ll ensure the rest.\nVarious caveats and important details to be aware of\nApparently not all publishers are members of Crossref (we know, crazy), and in addition only a subset of Crossref publishers (230 in total) are asking authors for ORCID iDs and/or including them in their metadata deposits. Some publishers may choose to opt out of passing through the details to ORCID using the Crossref auto-update (perhaps they plan to send this directly at some point) but if you’ve included your ORCID with your submission and it isn’t automatically updated, then check with your publisher. We have a “backlog” of almost 250,000 DOIs that include ORCID iDs so that may mean we do some bulk updates at a later date where authors will receive an email with a long list of works to add. Even if the works have been listed before, it’s worth accepting as it will add the most up-to-date metadata to ensure the most accurate record. Any questions can be directed to our support team.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/annual-meeting-join-crossref-in-boston-this-november/", "title": "Annual Meeting: Join Crossref in Boston this November!", "subtitle":"", "rank": 1, "lastmod": "2015-09-18", "lastmod_ts": 1442534400, "section": "Blog", "tags": [], "description": "We’d like to invite the scholarly publishing community to get together in Boston this November with the Crossref Annual Meeting as a rally point. This is the event we hold just once a year to get the whole team under one roof, host a lively discussion with the leading voices in scholarly communications, present technical workshops, and offer you the chance to get hands’ on with our latest metadata services. Our free two-day event takes place from November 17-18, 2015 in Boston, MA.\n", "content": "We’d like to invite the scholarly publishing community to get together in Boston this November with the Crossref Annual Meeting as a rally point. This is the event we hold just once a year to get the whole team under one roof, host a lively discussion with the leading voices in scholarly communications, present technical workshops, and offer you the chance to get hands’ on with our latest metadata services. 
Our free two-day event takes place from November 17-18, 2015 in Boston, MA.\nAgenda:\nTuesday, November 17 - Tech Workshops: The morning is an opportunity to get into small groups and talk directly with our development and support teams. We will present best practices around using Crossref’s metadata. After lunch, we will feature member case studies with tips on implementation and lessons learned. If you’re on the technical production side of scholarly publishing, you’ll want to be there — and not just for the beer \u0026amp; pretzels afterwards.\nWednesday, November 18 - Member Meeting: A day to hear from thought leaders from the larger scholarly publishing community as well as from inside Crossref. Our keynote speaker will be Dr. Ben Goldacre (Bad Science), and our distinguished speakers include Dr. Scott Chamberlain (rOpenSci), Dr. Juan Pablo Alperin (Public Knowledge Project), and Dr. Martin Eve, (Open Library of Humanities). We will share details about the road map for Crossref Labs’ current and future initiatives, hear about the latest organizational developments from new members of our team, and see the debut of our new brand logo and communications strategy. Following the formal discussion, we’ll continue the conversation over cocktails as part of our celebration of Crossref’s milestone 15th Anniversary!\n✱ Tickets:\nReserve your free tickets here: https://www.eventbrite.com/e/crossref15-tech-workshops-member-meeting-tickets-17921679225\nWho Should Attend?\nScholarly publishers, technology providers, librarians, researchers, academic institutions, funders, journalists, and others who are keen to discuss tools and services to advance scholarly publishing are encouraged to attend.\n✱ Venue:\nHotel Taj Boston 15 Arlington Street Boston, MA 02116 USA About Crossref Crossref is a not-for profit membership organization that wants to improve research communication. We organize publisher metadata, run the infrastructure that makes DOI links work, and we rally multiple community stakeholders in order to develop tools and services to enable advancements in scholarly publishing.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/det-poised-for-launch/", "title": "DOI Event Tracker (DET): Pilot progresses and is poised for launch", "subtitle":"", "rank": 1, "lastmod": "2015-09-15", "lastmod_ts": 1442275200, "section": "Blog", "tags": [], "description": "\nPublishers, researchers, funders, institutions and technology providers are all interested in better understanding how scholarly research is used. Scholarly content has always been discussed by scholars outside the formal literature and by others beyond the academic community. We need a way to monitor and distribute this valuable information.\n", "content": "\nPublishers, researchers, funders, institutions and technology providers are all interested in better understanding how scholarly research is used. Scholarly content has always been discussed by scholars outside the formal literature and by others beyond the academic community. We need a way to monitor and distribute this valuable information.\nThe Crossref DOI Event Tracker (DET) To meet this need, Crossref will be introducing a new service that tracks activity surrounding a research work from potentially any web source where an event is associated with a DOI. Following a successful pilot run started Spring 2014, the service has been approved to move toward production and is expected to launch in 2016. 
Any party wishing to join this phase is welcome to contact Jennifer Lin. The DOI Event Tracker (DET) registers a wide variety of events such as bookmarks, comments, social shares, citations, and links to other research entities, from a growing list of online sources. DET aggregates them, and stores and delivers the data in many ways.\nOpen, portable, and licensed for maximum reuse\nCrossref has long served as the citation linking and metadata infrastructure provider for scholarly communication; the new DOI Event Tracker is a natural next step, providing a practical solution as a resource for the whole community. The tracker offers the following features:\nData on event activity across a common pool of online channels. Near real-time alerting for select sources with push notifications to the system. Cross-publisher monitoring to enable benchmarking and provide context to the data. Common format for normalizing data results across the diverse set of sources via modern REST API. Secure and regularly refreshed backups of critical data for long term data preservation. Transparency of data collection so as to ensure auditable, replicable, and trustworthy results. Query-initiated retrieval or real-time alerts when an event of interest occurs. CC-0 license for open and flexible propagation of data. A number of platforms are already confirmed and more parties are welcome at any stage. So far we have confirmation to track DOI events on the following platforms:\n[A table of the confirmed source platforms appeared here.]\nThis set of sources reflects our initial focus on parties willing to allow their data to be redistributed in the common pool. Efforts are underway to expand the source list to include Twitter and MyScienceWork, among others. Publishers can also act as sources by publishing and distributing DOI event data via the DET when an event occurs on their platforms (for example, when a PDF is downloaded, or when a comment mentions a DOI in a locally hosted discussion forum, etc.). This would make local DOI activity globally available to funders, researchers, institutions, etc.\nDET provides benefits of scale and ease of access as a central point for collecting and propagating data to the community. As a single point of access, it overcomes the business and technical hurdles that are a part of managing multiple online sources where scholarly activity occurs, in a rapidly changing landscape of online channels. This resource covers content across publishers and serves as a strong foundation to support the development of tools and services by any party. DET users will always be able to combine the DET data with those individually collected via negotiated or paid access. DET remains a utility separate from any value-added amenities, such as analytics, presentation, and reporting.\nDET Service-Level Agreement For those who seek the highest level of service and a more flexible range of access options, Crossref will provide a Service-Level Agreement (SLA) service for the DOI Event Tracker. The DET SLA includes the following additional features on top of the common data offering:\nAccess to the complete suite of sources, which includes restricted and/or paid sources in addition to common data, providing the fullest picture of DOI usage activity possible. Guaranteed uptime and response time to the latest raw data on the aggregate activity surrounding a DOI. Guaranteed support response time to questions and issues surrounding data and data delivery. 
Flexible data access options: on-demand real time data access and scheduled bulk downloads for processing batch analytics. Optimum retrieval rates and accelerated delivery speeds with the dedicated SLA API. Access to a webhook API for events of interest as an alternative to polling DET. Standardized and enhanced linkback service for the difficult-to-track, grey literature. The DET SLA service has a simple, value-based pricing model based on subscriber size. Register your interest in Crossref’s DOI Event Tracker and the DET SLA service if you would like stay informed of the upcoming launch. Please contact Jennifer Lin for more information.\nImage modified from “Radar” icon by Karsten Barnett from the Noun Project.\n", "headings": ["The Crossref DOI Event Tracker (DET)","DET Service-Level Agreement"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/best-practices-for-depositing-funding-data/", "title": "Best Practices for Depositing Funding Data", "subtitle":"", "rank": 1, "lastmod": "2015-09-01", "lastmod_ts": 1441065600, "section": "Blog", "tags": [], "description": "Crossref’s funding data initiative (FundRef) encourages publishers to deposit information about the funding sources of authors’ research as acknowledged in their papers. The funding data comprises funder name and identifier, and grant number or numbers. Funding data can be deposited on its own or with the rest of the metadata for an item of content.\n", "content": "Crossref’s funding data initiative (FundRef) encourages publishers to deposit information about the funding sources of authors’ research as acknowledged in their papers. The funding data comprises funder name and identifier, and grant number or numbers. Funding data can be deposited on its own or with the rest of the metadata for an item of content.\nThere are two ways that publishers can collect this funding information for any given piece of content: by asking authors to input the funder name(s) and award number(s) via their submission system, or extracting the funder names and award numbers from the acknowledgements in the paper.\nThe funding data is only useful if it is standardised, and so it is absolutely critical that funder names are deposited with their associated funder IDs from the Funder Registry.\nFor publishers considering or about to start collecting and depositing funding data, and for those already doing so, we have drawn up some guidelines that will help you to ensure good quality metadata.\nIf you are collecting funding information from authors via your submission system:\nProvide very clear instructions for your authors. Your submission system should prompt the author towards the canonical name from Crossref’s Funder Registry as they type, or guide them through a pick-list. Make it clear to authors that they should choose funder names from this list and not copy and paste from their manuscript. Work with your submission system vendor or adapt your in-house system to make it easy for authors to select from the Funder Registry, and more difficult to paste incorrect names or ignore the suggested names. Consider a warning message if an unknown name is entered, and offer a list of close matches. Instruct authors to look for the name of the funding body rather than a specific program or project. If you or one of your vendors is extracting funding information from papers:\nProvide the same clear instructions to your vendor(s). Stress the importance of matching the funder names in the acknowledgements to the names in the Funder Registry. 
Look for common text-extraction errors such as concatenated funder names, punctuation errors, and stop words such as “of/for” that are commonly used interchangeably, or the presence or absence of “the” at the start of a funder name. For both workflows:\nAdd QA into your workflow. Many of the names sent to Crossref without IDs are very obviously funders that are in the Registry, and a check by editorial or production staff could correct misspellings or fill in blanks. Check that grant numbers have been separated and are not being deposited as one long string. Be aware that funder names deposited without IDs are not valid funding data and will be hidden from Crossref’s search tools and APIs until such time as they are updated with a funder ID. The funding data section of a deposit (but not the rest of the deposit) will be rejected by the Crossref deposit system if: the funder_name field contains a numerical string longer than 4 digits; the funder_id field contains a number that is not an ID from the Funder Registry; or the funder_name contains text that exceeds 200 characters. Consider only depositing data that has funder IDs and holding the rest to re-poll against the Funder Registry at a later date when more funder names have been added. The Funder Registry is updated at approximately two-monthly intervals. You can sign up to be alerted to updates here. If there are funders that appear regularly in your particular subject or geographical area that are not in the Registry, send a list to funder.registry@crossref.org. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/andrew-gilmartin/", "title": "Andrew Gilmartin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-and-matching-regular-expressions/", "title": "DOIs and matching regular expressions", "subtitle":"", "rank": 1, "lastmod": "2015-08-11", "lastmod_ts": 1439251200, "section": "Blog", "tags": [], "description": "We regularly see developers using regular expressions to validate or scrape for DOIs. For modern Crossref DOIs the regular expression is short\n/^10.\\d{4,9}/[-._;()/:A-Z0-9]+$/i\nFor the 74.9M DOIs we have seen this matches 74.4M of them. If you need to use only one pattern then use this one.\n", "content": "We regularly see developers using regular expressions to validate or scrape for DOIs. For modern Crossref DOIs the regular expression is short\n/^10.\\d{4,9}/[-._;()/:A-Z0-9]+$/i\nFor the 74.9M DOIs we have seen this matches 74.4M of them. If you need to use only one pattern then use this one.\nThe other 500K are mostly from Crossref’s early days when the battle between “human-readable” identifiers and “opaque” identifiers was still being fought, the web was still new, and it was expected that “doi” would become as well supported a URI scheme name as “gopher”, “wais”, …. Ok, that didn’t go so well.\nAn early Crossref member was John Wiley \u0026amp; Sons. They faced the need to design DOIs without much prior work to lean on. Many of those early DOIs are not regular-expression friendly. Nevertheless, they are still valid and valuable permanent links to the work’s version of record. 
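To make that concrete, here is a minimal sketch (not from the original post) of the single recommended pattern in use in Python. The anchored form suits validating a candidate string; drop the ^ and $ when scraping longer text. The second candidate is a made-up string, included only to show the kind of early-era shape the pattern misses:

import re

# The recommended single pattern from above, transcribed as a Python regex.
MODERN_DOI = re.compile(r'^10.\d{4,9}/[-._;()/:A-Z0-9]+$', re.IGNORECASE)

candidates = [
    "10.1103/PhysRev.47.777",               # a typical modern-style DOI
    "10.1002/illustrative<angle>brackets",  # made-up string in the early Wiley style
    "not a DOI at all",
]

for candidate in candidates:
    print(candidate, "->", bool(MODERN_DOI.match(candidate)))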
You can catch 300K more DOIs with\n/^10.1002/[^\s]+$/i\nWhile the DOI caught is likely to be the DOI within the text, it may also contain trailing characters that, due to the lack of a space, are caught up with the DOI. Even the recommended expression catches DOIs ending with periods, colons, semicolons, hyphens, and underscores. Most DOIs found in the wild are presented within some visual design. While pleasant to look at, the visual design can misdirect machines. Is the period at the end of the line part of the DOI or part of the design? Is that endash actually a hyphen? These issues lead to DOI bycatch.\nAdding the following 3 expressions to the previous 2 leaves only 72K DOIs uncaught. To catch these 72K would require a dozen or more additional patterns. Each additional pattern, unfortunately, weakens the overall precision of the catch. More bycatch.\n/^10.\d{4}/\d+-\d+X?(\d+)\d+\u0026lt;[\d\w]+:[\d\w]*\u0026gt;\d+.\d+.\w+;\d$/i\n/^10.1021/\w\w\d++$/i\n/^10.1207/[\w\d]+\\\u0026amp;\d+_\d+$/i\nCrossref is not the only DOI Registration Agency; our members account for 65-75% of all registered DOIs, which means there are tens of millions of DOIs that we have not seen. Luckily, the newer RAs and their publishers can copy our successes and avoid our mistakes.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rehashing-pids-without-stabbing-myself-in-the-eyeball/", "title": "Rehashing PIDs without stabbing myself in the eyeball", "subtitle":"", "rank": 1, "lastmod": "2015-06-11", "lastmod_ts": 1433980800, "section": "Blog", "tags": [], "description": "Anybody who knows me or reads this blog is probably aware that I don’t exactly hold back when discussing problems with the DOI system. But just occasionally I find myself actually defending the thing…\n", "content": "Anybody who knows me or reads this blog is probably aware that I don’t exactly hold back when discussing problems with the DOI system. But just occasionally I find myself actually defending the thing…\nAbout once a year somebody suggests that we could replace existing persistent citation identifiers (e.g. DOIs) with some new technology that would fix some of the weaknesses of the current systems. Usually said person is unhappy that current systems like\nDOI, Handle, Ark, perma.cc, etc. depend largely on a social element to update the pointers between the identifier and the current location of the resource being identified. It just seems manifestly old-fashioned and ridiculous that we should still depend on bags of meat to keep our digital linking infrastructure from falling apart.\nIn the past, I’ve threatened to stab myself in the eyeball if I was forced to have the discussion again. But the dirty little secret is that I play this game myself sometimes. After all, the best thing a mission-driven membership organisation could do for its members would be to fulfil its mission and put itself out of business. If we could come up with a technical fix that didn’t require the social component, it would save our members a lot of money and effort.\nWhen one of these ideas is posed, there is a brief flurry of activity as another generation goes through the same thought processes and (so far) comes to the same conclusions.\nThe proposals I’ve seen generally fall into one of the following groups:\nReplace persistent identifiers (PIDs) with hashes, checksums, etc. Just use search (often, but not always coupled with 1 above) Automagically create PIDs out of metadata. 
Automagically redirect broken citations to archived versions of the content identified. And more recently… use the blockchain.\nI thought it might help advance the discussion and avoid a bunch of dead ends if I summarised (rehashed?) some of the issues that should be considered when exploring these options.\nWarning: Refers to FRBR terminology. Those of a sensitive disposition might want to turn away now.\nDOIs, PMIDs, etc. and other persistent identifiers are primarily used by our community as “citation identifiers”. We generally cite at the “expression” level. Consider the difference between how a “citation identifier”, a “work identifier” and a “content verification identifier” might function. How do you deal with “equivalent manifestations” of the same expression? For example, the ePub, PDF and HTML representations of the same article are intellectually equivalent and interchangeable when citing. The same applies to csv \u0026amp; tsv representations of the same dataset. So, for example, how do hashes work here as a citation identifier? Content can be changed in ways that typically don’t affect the interpretation or crediting of the work. For example, by reformatting, correcting spelling, etc. In these cases the copies should share the same citation identifier, but the hashes will be different. Content that is virtually identical (and shares the same hash) might be republished in different venues (e.g. a normal issue and a thematic issue). Context in citation is important. How do you point somebody at the copy in the correct context? Some copies of an article or dataset are stewarded by publishers. That is, if there is an update, errata, corrigenda, retraction/withdrawal, they can reflect that on the stewarded copy, not on copies they don’t host or control. Location is, in fact, important here. Some copies of content will be nearly identical, but will differ in ways that would affect the interpretation and/or crediting of the work. A corrected number in a table for example. How would you create a citation from a search that would differentiate the correct version from the incorrect version? Some content might be restricted, private or under embargo. For example private patient data, sensitive data about archaeological finds or the migratory patterns of endangered animals. Some content is behind paywalls (cue jeremiads). Content is increasingly composed of static and dynamic elements. How do you identify the parts that can be hashed? How do you create an identifier out of metadata and not have them look like this? This list is a starting point that should allow people to avoid a lot of blind alleys.\nIn the meantime, good luck to those seeking alternatives to the current crop of persistent citation identifier systems. I’m not convinced it is possible to replace them, but if it is- I hope I beat you to it. 🙂 And I hope I can avoid stabbing myself in the eye.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/coming-to-you-live-from-wikipedia/", "title": "Coming to you Live from Wikipedia", "subtitle":"", "rank": 1, "lastmod": "2015-05-20", "lastmod_ts": 1432080000, "section": "Blog", "tags": [], "description": "We’ve been collecting citation events from Wikipedia for some time. We’re now pleased to announce a live stream of citations, as they happen, when they happen. 
Project this on your wall and watch live DOI citations as people edit Wikipedia, round the world.\nView live stream » In the hours since this feature launched, there are events from Indonesian, Portugese, Ukrainian, Serbian and English Wikipedias (in that order).\n", "content": "We’ve been collecting citation events from Wikipedia for some time. We’re now pleased to announce a live stream of citations, as they happen, when they happen. Project this on your wall and watch live DOI citations as people edit Wikipedia, round the world.\nView live stream » In the hours since this feature launched, there are events from Indonesian, Portugese, Ukrainian, Serbian and English Wikipedias (in that order).\nThe usual weasel words apply. This is a labs project and so may not be 100% stable. If you experience any problems please email labs@crossref.org .\n", "headings": ["View live stream »"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/handle/", "title": "Handle", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/january-2015-doi-outage-followup-report/", "title": "January 2015 DOI Outage: Followup Report", "subtitle":"", "rank": 1, "lastmod": "2015-03-17", "lastmod_ts": 1426550400, "section": "Blog", "tags": [], "description": "Background On January 20th, 2015 the main DOI HTTP proxy at doi.org experienced a partial, rolling global outage. The system was never completely down, but for at least part of the subsequent 48 hours, up to 50% of DOI resolution traffic was effectively broken. This was true for almost all DOI registration agencies, including Crossref, DataCite and mEDRA.\nAt the time we kept people updated on what we knew via Twitter, mailing lists and our technical blog at CrossTech. We also promised that, once we’d done a thorough investigation, we’d report back. Well, we haven’t finished investigating all implications of the outage. There are both substantial technical and governance issues to investigate. But last week we provided a preliminary report to the Crossref board on the basic technical issues, and we thought we’d share that publicly now.\n", "content": "Background On January 20th, 2015 the main DOI HTTP proxy at doi.org experienced a partial, rolling global outage. The system was never completely down, but for at least part of the subsequent 48 hours, up to 50% of DOI resolution traffic was effectively broken. This was true for almost all DOI registration agencies, including Crossref, DataCite and mEDRA.\nAt the time we kept people updated on what we knew via Twitter, mailing lists and our technical blog at CrossTech. We also promised that, once we’d done a thorough investigation, we’d report back. Well, we haven’t finished investigating all implications of the outage. There are both substantial technical and governance issues to investigate. But last week we provided a preliminary report to the Crossref board on the basic technical issues, and we thought we’d share that publicly now.\nThe Gory Details First, the outage of January 20th was not caused by a software or hardware failure, but was instead due to an administrative error at the Corporation for National Research Initiatives (CNRI). 
The domain name “doi.org” is managed by CNRI on behalf of the International DOI Foundation (IDF). The domain name was not on “auto-renew” and CNRI staff simply forgot to manually renew the domain. Once the domain name was renewed, it took about 48 hours for the fix to propagate through the DNS system and for the DOI resolution service to return to normal. Working with CNRI we analysed traffic through the Handle HTTP proxy and here’s the graph:\nThe above graph shows traffic over a 24 hour period on each day from January 12, 2015 through February 10th, 2015. The heavy blue line for January 20th and the heavy red line for January 21st show how referrals declined as the doi.org domain was first deleted, and then added back to DNS.\nIt could have been much worse. The domain registrar (GoDaddy) at least had a “renewal grace and registry redemption period” which meant that even though CNRI forgot to pay its bill to renew the domain, the domain was simply “parked” and could easily be renewed by them. This is the standard setting for GoDaddy. Cheaper domain registrars might not include this kind of protection by default. Had there been no grace period, then it would have been possible for somebody other than CNRI to quickly buy the domain name as soon as it expired. There are many automated processes which search for and register recently expired domain names. Had this happened, at the very least it would have been expensive for CNRI to buy the domain back. The interruption to DOI resolutions during this period would have also been almost complete.\nSo we got off relatively easy. The domain name is now on auto-renew. The outage was not as bad as it could have been. It was addressed quickly and we can be reasonably confident that the same administrative error will not happen again. Crossref even managed to garner some public praise for the way in which we handled the outage. It is tempting to heave a sigh of relief and move on.\nWe also know that everybody involved at CNRI, the IDF and Crossref have felt truly dreadful about what happened. So it is also tempting to not re-open old wounds.\nBut it would be a mistake if we did not examine a fundamental strategic issue that this partial outage has raised: How can Crossref claim that its DOIs are ‘persistent’ if Crossref does not control some of the key infrastructure on which it depends? What can we do to address these dependencies?\nWhat do we mean by “persistent?” @kaythaney tweets on definition of “persistent”\nTo start with, we should probably explore what we mean by ‘persistent’. We use the word “persistent” or “persistence” about 470 times on the Crossref web site. The word “persistent” appears central to our image of ourselves and of the services that we provide. We describe our core, mandatory service as the “Crossref Persistent Citation Infrastructure.”\nThe primary sense of the word “persistent” in the New Oxford American Dictionary is:\nContinuing firmly or obstinately in a course of action in spite of difficulty or opposition.\nWe play on this sense of the word as a synonym for “stubborn” when we half-jokingly say that, “Crossref DOIs are as persistent as Crossref staff.” Underlying this joke is a truth, which is that persistence is primarily a social issue, not a technical issue.\nYet presumably we once chose to use the word “persistent” instead of “perpetual” or “permanent” for other reasons. 
“Persistence” implies longevity, without committing to “forever.” Scholarly publishers, perhaps more than most industries, understand the long term. After all, the scholarly record dates back to at least 1665 and we know that the scholarly community values even our oldest journal backfiles. By using the word “persistent” as opposed to the more emphatic “permanent” we are essentially acknowledging that we, as an industry, understand the complexity and expense of stewarding the content for even a few hundred years to say nothing of “forever.” Only the chronologically naïve would recklessly coin terms like “permalink” for standard HTTP links which have a documented half-life of well under a decade.\nSo “persistent” implies longevity- without committing to forever- but this still begs questions. What time span is long enough to qualify as “persistent?” What, in particular, do we mean by “persistent” when we talk about Crossref’s “Persistent Citation Infrastructure?” or of Crossref DOIs being “persistent identifiers?”\nWhat do we mean by “persistent identifiers?” ]5 @violetailik tweets on outage and implication for term “persistent identifier”\nFirst, we often make the mistake of talking about “persistent identifiers” as if there is some technical magic that makes them continue working when things like HTTP URIs break. The very term “persistent identifier” encourages this kind of magical thinking and, ideally, we would instead talk about “persist-able” identifiers. That is, those that have some form of indirection built into them. There are many technologies that do this- Handles, DOIs, Purls, ARKs and every URL shortener in existence. Each of them simply introduces a pointer mapping between an identifier and location where a resource or content resides. This mapping can be updated when the content moves, thus preserving the link. Of course, just because an identifier is persist-able doesn’t mean it is persistent. If Purls or DOIs are not updated when content moves, then they are no more persistent than normal URLs.\nAndrew Treloar points out that when we talk about “persistent identifiers,” we tend to conflate several things:\nThe persistence of the identifier- that is the token or string itself. The persistence of the thing being pointed at by the identifier. For example, the content. The persistence of the mapping of the identifier to the thing being identified. The persistence of the resolver that allows one to follow the mapping of the identifier to the thing being identified. The persistence of a mechanism for updating the mapping of the identifier to the thing being identified. If any of the above fails, then “persistence” fails. This is probably why we tend to conflate them in the first place.\nEach of these aspects of “persistence” is worthy of much closer scrutiny, however, in the most recent case of the January outage of “doi.org,” the problem specifically occurred with item “D”- the persistence of the resolver. When CNRI failed to renew the domain name for “doi.org” on time, the DOI resolver was rendered unavailable to a large percentage of people over a period of about 48 hours as global DNS servers first removed, and then added back the “doi.org” domain.\nTurtles all the way down* The initial public reaction to the outage was, almost unanimous in one respect- people assumed that the problem originated with Crossref.\n@iainh_z tweets to Crossref enquiring about failed DOI resoluton @LibSkrat tweets at Crossref about DOI outage\nThis is both surprising and unsurprising. 
It is surprising because we have fairly recent data indicating that lots of people recognise the DOI brand, but not the Crossref brand. Chances are, that this relatively superficial “brand” awareness does not correlate with understanding how the system works or how it relates to persistence. It is likely plenty of people clicked on DOIs at the time of the outage and, when they didn’t work, simply shrugged or cursed under their breath. They were aware of the term ‘DOI’ but not of the promise of “persistence”. Hence, they did not take to twitter to complain about it, and if they did, they probably wouldn’t have known who to complain to or even how to complain to them (neither CNRI or the IDF has a Twitter account).\nBut the focus on Crossref is also unsurprising. Crossref is by far the largest and most visible DOI Registration Agency. Many otherwise knowledgeable people in the industry simply don’t know that there are even other RAs.\nThey also generally didn’t know of the strategic dependencies that exist in the Crossref system. By “strategic dependencies” we are not talking about the vendors, equipment and services that virtually every online enterprise depends on. These kinds of services are largely fungible. Their failures may be inconvenient and even dramatic, but they are rarely existential.\nInstead we are talking about dependencies that underpin Crossref’s ability to deliver on its mission. Dependencies that not only affect Crossref’s operations, but also its ability to self-govern and meet the needs of its membership. In this case there are three major dependencies: Two of which are specific to Crossref and other DOI registration agencies and one which is shared by virtually all online enterprises today. The organizations are: The International DOI Foundation (IDF), Corporation for National Research Initiatives (CNRI) and the Internet Corporation for Assigned Names and Numbers (ICANN).\nDependency of RAs on IDF, CNRI and ICANN. Turtles all the way down.\nEach of these agencies has technology, governance and policy impacts on Crossref and the other DOI registration agencies, but here we will focus on the technological dependencies.\nAt the top of the diagram are a subset of the various DOI Registration Agencies. Each RA uses the DOI for a particular constituency (e.g. scholarly publishers) and application (e.g. citation). Sometimes these constituencies/applications overlap (as with mEDRA, Crossref and DataCite), but sometimes they are orthogonal to the other RAs, as is the case with EIDR. All, however, are members of the IDF.\nThe IDF sets technical policies and development agendas for the DOI infrastructure. This includes recommendations about how RAs should display and link DOIs. Of course all of these decisions have an impact on the RAs. However, the IDF provides little technical infrastructure of its own as it has no full-time staff. Instead it outsources the operation of the system to CNRI, this includes the management of the doi.org domain which the IDF owns.\nThe actual DOI infrastructure is hosted on a platform called the Handle System which was developed by and is currently run by CNRI. The Handle System is part of a quite complex and sophisticated platform for managing digital objects that was originally developed for DARPA. A subset of the Handle system is designated for use by DOIs and is identified by the “10” prefix (e.g. 10.5555/12345678). The Handle system itself is not based on HTTP (the web protocol). 
Indeed, one of the much touted features of the Handle System is that it isn’t based on any specific resolution technology. This was seen as a great virtue in the late 1990s when the DOI system was developed and the internet had just witnessed an explosion of seemingly transient, competing protocols (e.g. Gopher, WAIS, Archie, HyperWave/Hyper-G, HTTP, etc.). But what looked like a wild-west of protocols quickly settled into an HTTP hegemony. In practice, virtually all DOI interactions with the Handle system are via HTTP and so, in order to interact with the web, the Handle System employs a “Handle proxy” which translates back and forth between HTTP, and the native Handle system. This all may sound complicated, and the backend of the Handle system is really very sophisticated, but it turns out that the DOI really uses only a fraction of the Handle system’s features. In fact, the vast majority of DOI interactions merely use the Handle system as a giant lookup table which allows one to translate an identifier into a web location. For example, it will take a DOI Handle like this:\n10.5555/12345678 and redirect it to (as of this writing) the following URL:\nhttp://psychoceramics.labs.crossref.org/10.5555-12345678.html This whole transformation is normally never seen by a user. It is handled transparently by the web browser, which does the lookup and redirection in the background using HTTP and talking to the Handle Proxy. In the late 1990s, even doing this simple translation quickly, at scale with a robust distributed infrastructure, was not easy. These days however we see dozens if not hundreds of “URL Shorteners” doing exactly the same thing at far greater scale than the Handle System.\nIt may seem a shame that more of the Handle Systems features are not used, but the truth is the much touted platform independence of the Handle System rapidly became more of a liability and impediment to persistence than an aid. To be blunt, if in X years a new technology comes out that supersedes the web, what do we think the societal priority is going to be?\nTo provide a robust and transparent transition from the squillions of existing HTTP URI identifiers that the entire world depends on? To provide a robust and transparent transition from the tiny subset of Handle-based identifiers that are used by about a hundred million specialist resources? Quite simply, the more the Handle/DOI systems diverge from common web protocols and practice, then the more we will jeopardise the longevity of our so-called persistent identifiers.\nSo, in the end, DOI registration agencies really only use the Handle system for translating web addresses. All of the other services and features one might associate with DOIs (reference resolution, metadata lookup, content negotiation, OAI-PMH, REST APIs, Crossmark, CrossCheck, TDM Services, FundRef etc) are all provided at the RA level.\nBut this address resolution is still critical. And it is exactly what failed for many users on January 20th 2015. And to be clear, it wasn’t the robust and scaleable Handle System that failed. It wasn’t the Handle Proxy that failed. And it certainly wasn’t any RA-controlled technology that failed. These systems were all up and running. What happened was that the standard handle proxy that the IDF recommends RAs use, “dx.doi.org”, was effectively rendered invisible to wide portions the internet because the “doi.org” domain was not renewed. This underscores two important points.\nThe first is that it doesn’t much matter what precisely caused the outage. 
In this case it was an administrative error. But the effect would have been similar if the Handle proxies had failed of if the Handle system itself had somehow collapsed. In the end, Crossref and all DOI registration agencies are existentially dependent on the Handle system running and being accessible.\nThe second is that the entire chain of dependencies from the RAs down through CNRI are also dependent on the DNS system which, in turn, is governed by ICANN. We should really not be making too much of the purported technology independence of the DOI and Handle systems. To be fair, this limitation is inherent to all persistent identifier schemes that aim to work with the web. It really is “turtles all the way down.”\nWhat didn’t fail on January 19th/20th and why? You may have noticed a lot of hedging in our description of the outage of January 19th/20th. For one thing, we use the term “rolling outage.” Access to the Handle Proxy via “dx.doi.org” was never completely unavailable during the period. As we’ve explained, this is because the error was discovered very quickly and the domain was renewed hours after it expired. The nature of DNS propagation meant that even as some DNS servers were deleting the “doi.org” entry, others were adding it back to their tables. In some ways this was really confusing because it meant it was difficult to predict where the system was working and where it wasn’t. Ultimately it all stabilised after the standard 48-hour DNS propagation cycle.\nBut there were also some Handle-based services that simply were not affected at all by the outage. During the outage, a few people asked us if there was an alternative way to resolve DOIs. The answer was “yes,” there were several. It turns out that “doi.org” is not the only DNS name that points to the Handle Proxy. People could easily substitute “dx.doi.org” with “dx.crossref.org” or “dx.medra.org” or “hdl.handle.net” and “resolve” any DOI. Many of Crossref’s internal services use these internal names and so the services continued to work. This is partly why we only discovered the “doi.org” was down from people reporting it on Twitter.\nAnd, of course, there were other services that were not affected by the outage. Crossmark, the REST API, and Crossref Metadata Search all continued to work during the outage.\nProtecting ourselves So what can we do to reduce our dependencies and/or the risks intrinsic to those dependencies?\nObviously, the simplest way to have avoided the outage would have been to ensure that the “doi.org” domain was set to automatically renew. That’s been done. Is there anything else we should do? A few ideas have been floated that might allow us to provide even more resilience. They range greatly in complexity and involvement.\nProvide well-publicised public status dashboards that show what systems are up and which clearly map dependencies so that people could, for instance, see that the doi.org server was not visible to systems that depended on it. Of course, if such a dashboard had been hosted at doi.org, nobody would have been able to connect to it. Stoopid turtles. Encourage DOI RAs to have the members point to Handle proxies using domain names under the RA’s control. Simply put, if Crossref members had been using “dx.crossref.org” instead of “dx.doi.org”, then Crossref DOIs would have continued to work throughout the outage of “doi.org”. The same with mEDRA, and the other RAs. This way each RA would have control over another critical piece of their infrastructure. 
It would also mean that if any single RA made a similar domain name renewal mistake, the impact would be isolated to a particular constituency. Finally, using RA-specific domains for resolving DOIs might also make it clear that different DOIs are managed by different RAs and might have different services associated with them. Perhaps Crossref would spend less time supporting non-Crossref DOIs? Provide a parallel, backup resolution technology that could be pointed to in the event of a catastrophic Handle System failure. For example we could run a parallel system based on PURLs, ARKs or another persist-able identifier infrastructure. Explore working with ICANN to get the handle resolvers moved under the special “.arpa” top level domain (TLD). This TLD (RFC 3172) is reserved for services that are considered to be “critical to the operation of the internet.” This is an option that was first discussed at a meeting of persistent identifier providers in 2011. These are all tactical approaches to addressing the specific technical problem of the Handle System becoming unavailable, but they do not address deeper issues relating to our strategic dependence on several third parties. Even though the IDF and CNRI provide us with pretty simple and limited functionality, that functionality is critical to our operations and our claim to be providing persistent identifiers. Yet these technologies are not in our direct control. We had to scramble to get hold of people to fix the problem. For a while, we were not able to tell our users or members what was happening because we did not know ourselves.\nThe irony is that Crossref was held to account, and we were in the firing line the entire time. Again, this was almost unavoidable. In addition to being the largest DOI RA, we are also the only RA that has any significant social media presence and support resources. Still, it meant that we were the public face of the outage while the IDF and CNRI remained in the background.\nAnd this is partly why our board has encouraged us to investigate another option:\nExplore what it would take to remove Crossref dependencies on the IDF and CNRI. Crossref is just part of a chain of dependencies the goes from our members down through the IDF, CNRI and, ultimately, ICANN. Our claim to providing a persistent identifier structure depends entirely on the IDF and CNRI. Here we have explored some of the technical dependencies. But there are also complex governance and policy implications of these dependencies. Each organization has membership rules, guidelines and governance structures which can impact Crossref members. Indeed, the IDF and CNRI are themselves members of groups (ISO and DONA, respectively) which might ultimately have policy or governance impact for DOI registration agencies. We will need to understand the strategic implications of these non technical dependencies as well.\nNote that the Crossref board has merely asked us to “explore” what it would take to remove dependencies. They have not asked us to actually take any action. Crossref has been massively supportive of the IDF and CNRI, and they have been massively supportive of us. Still, over the years we have all grown and our respective circumstances have changed. It is important that occasionally we question what we might have once considered to be axioms. 
As we discussed above, we use the term “persistent” which, in turn, is a synonym for “stubborn.” At the very least we need to document the inter-dependencies that we have so that we can understand just how stubborn we can reasonably expect our identifiers to be.\nThe outage of January 20th was a humbling experience. But in a way we were lucky: Forgetting to renew the domain name was a silly and prosaic way to partially bring down a persistent identifier infrastructure, but it was also relatively easy to fix. Inevitably, there was a little snark and some pointed barbs directed at us during the outage, but we were truly overwhelmed by the support and constructive criticism we received as well. We have also been left with a clear message that, in order for this good-will to continue, we need to follow-up with a public, detailed and candid analysis of our infrastructure and its dependencies. Consider this to be the first section of a multi-part report.\n@kevingashley tweets asking for followup analysis @WilliamKilbride tweets asking for followup and lessons learned\nImage Credits Turtle image CC-BY “Unrecognised MJ” from the Noun Project\n", "headings": ["Background","The Gory Details","What do we mean by “persistent?”","What do we mean by “persistent identifiers?”","Turtles all the way down*","What didn’t fail on January 19th/20th and why?","Protecting ourselves","Image Credits"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/citation-formats/", "title": "Citation Formats", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/real-time-stream-of-dois-being-cited-in-wikipedia/", "title": "Real-time Stream of DOIs being cited in Wikipedia", "subtitle":"", "rank": 1, "lastmod": "2015-03-03", "lastmod_ts": 1425340800, "section": "Blog", "tags": ["chronograph"], "description": "TL;DR Watch a real-time stream of DOIs being cited (and “un-cited!” ) in Wikipedia articles across the world: https://0-live-eventdata-crossref-org.libus.csd.mu.edu/live.html\nBackground For years we’ve known that the Wikipedia was a major referrer of Crossref DOIs and about a year ago we confirmed that, in fact, the Wikipedia is the 8th largest refer of Crossref DOIs. We know that people follow the DOIs, too. This despite a fraction of Wikipedia citations to the scholarly literature even using DOIs. So back in August we decided to create a Wikimedia Ambassador programme. The goal of the programme was to promote the use of persistent identifiers in citation and attribution in Wikipedia articles. We would do this through outreach and through the development of better citation-related tools.\n", "content": "TL;DR Watch a real-time stream of DOIs being cited (and “un-cited!” ) in Wikipedia articles across the world: https://0-live-eventdata-crossref-org.libus.csd.mu.edu/live.html\nBackground For years we’ve known that the Wikipedia was a major referrer of Crossref DOIs and about a year ago we confirmed that, in fact, the Wikipedia is the 8th largest refer of Crossref DOIs. We know that people follow the DOIs, too. This despite a fraction of Wikipedia citations to the scholarly literature even using DOIs. So back in August we decided to create a Wikimedia Ambassador programme. 
The goal of the programme was to promote the use of persistent identifiers in citation and attribution in Wikipedia articles. We would do this through outreach and through the development of better citation-related tools.\nRemember when we originally wrote about our experiments with the PLOS ALM code and how that has transitioned into the DOI Event Tracking Pilot? In those posts we mentioned that one of the hurdles in gathering information about DOI events is the actual process of polling third party APIs for activity related to millions of DOIs. Most parties simply wouldn’t be willing to handle the load of 100K API calls an hour. Besides, polling is a tremendously inefficient process: only a fraction of DOIs are ever going to generate events, but we’d have to poll for each of them, repeatedly, forever, to get an accurate picture of DOI activity. We needed a better way. We needed to see if we could reverse this process and convince some parties to instead “push” us information whenever they saw DOI related events (e.g. citations, downloads, shares, etc). If only we could convince somebody to try this…\nWikipedia DOI Events In December 2014 we took the opportunity of the 2014 PLOS/Crossref ALM Workshop in San Francisco to meet with Max Klein and Anthony Di Franco, where we kicked off a very exciting project.\nThere’s always someone editing a Wikipedia somewhere in the world. In fact, you can see a dizzying live stream of edits. We thought that, given that there are so many DOIs in Wikipedia, that live stream may contain some diamonds (DOIs are made of diamond, that’s how they can be persistent). Max and Anthony went away and came back with a demo that contains a surprising amount of DOI activity.\nThat demo is evolving into a concrete service, called Cocytus. It is running at Wikimedia Labs monitoring live edits as you read this.\nFor now we’re feeding that data into the DOI Events Collection app (which is an off-shoot of the Chronograph project). We are in the process of modifying the Lagotto code so that we can instead push those events into the DOI Event Tracking Instance.\nThe first DOI event we noticed was delightfully prosaic: The DOI for “The polymath project” is cited by the Wikipedia page for “Polymath Project”. Prosaic perhaps, but the authors of that paper probably want to know. Maybe they can help edit the page.\nOr how about this. Someone wrote a paper about why people edit Wikipedia and then it was cited by Wikipedia. And then the citation was removed. The plot thickens…\nWe’re interested in seeing how DOIs are used outside of the formal scholarly literature. What does that mean? We don’t fully know, that’s the point. We have retractions in scholarly literature (and our Crossmark metadata and service allow publishers to record that), but it’s a bit different on Wikipedia. Edit wars are fought over … well you can see for yourself.\nCitations can slip in and out of articles. We saw the DOI 10.1001/archpediatrics.2011.832 deleted from “Bipolar disorder in children”. If we’d not been monitoring the live feed (we had considered analysing snapshots of the Wikipedia in bulk) we might never have seen that. This is part of what non-traditional citations mean, and it wasn’t obvious until we’d seen it.\nYou can see this activity on the Chronograph’s stream. Or check your favourite DOI. Please be aware that we’re only collecting newly added citations as of today.
We do intend to go back and back-fill, but that may take some time- as it * cough * requires polling again.\nSome Technical Things A few interesting things that happened as a result of all this:\nSecure URLs SSL and HTTPS were invented so you could do things like banking on the web without fear of interception or tampering. As the web becomes a more important part of life, many sites are upgrading from HTTP to HTTPS, the secure version. This is not only because your confidential details may be tampered with, but because certain governments might not like you reading certain materials.\nBecause of this, some time ago, Wikipedia decided to embark on an upgrade to HTTPS last year, and they are a certain way along the path. The IDF, who are responsible for running the DOI system, upgraded to HTTPS this Summer, although most DOIs are referred to by HTTP still.\nWe met with Dario Taraborelli at the ALM workshop and discussed the DOI referral data that is fed into the Chronograph. We put two and two together and realised that Wikipedia was linking to DOIs (which are mostly HTTP) from pages which might be served over HTTPS. New policies in HTML5 specify that referrer URL headers shouldn’t be sent from HTTPS to HTTP (in case there was something secret in them). The upshot of this is, if someone’s browsing Wikipedia via HTTPS and click on a normal DOI, we won’t know that the user came from Wikipedia. Not a huge problem today, but as Wikipedia switches over to entirely secure, we’re going to miss out on very useful information.\nFortunately, the HTML5 specification includes a way to fix this (without leaking sensitive information). We discussed this with Dario, and he did some research, and came up with a suggestion, which got discussed. It’s fascinating to watch a democratic process like this take place and take part in it.\nWe’re waiting to see how the discussion turns out, and hope that it all works out so we can continue to report on how amazing Wikipedia is at sending people to scholarly literature.\nHow shall I cite thee? Another discussion grew out of that process, and we started talking to a Wikipedian called Nemo (note to Latin scholars: we weren’t just talking to ourselves). Nemo (real name Federico Leva) had a few suggestions of his own. Another way to solve the referrer problem is by using HTTPS URLs (HTML5 allows browsers to send the referrer domain when going from HTTPS to HTTPS).\nThis means going back to all the articles that use DOIs and change them from HTTP to HTTPS. Not as simple as it sounds, and it doesn’t sound simple. We started looking into how DOIs were cited on Wikipedia.\nAfter some research we found that there are more ways that we expected to cite DOIs.\nFirst, there’s the URL. You can see it in action in this article. 
URLs can take various forms.\nhttp://dx.doi.org/10.5555/12345678 http://0-doi-org.libus.csd.mu.edu/10.5555/12345678 https://0-dx-doi-org.libus.csd.mu.edu/10.5555/12345678 https://0-doi-org.libus.csd.mu.edu/10.5555/12345678 http://0-doi-org.libus.csd.mu.edu/hvx https://0-doi-org.libus.csd.mu.edu/hvx Second there’s the official template tag, seen in action here:\n\u0026lt;ref name=\"SCI-20140731\"\u0026gt;{{cite journal |title=Sustained miniaturization and anatomical innovation in the dinosaurian ancestors of birds |url=http://0-www-sciencemag-org.libus.csd.mu.edu/content/345/6196/562 |date=1 August 2014 |journal=[[Science (journal)|Science]] |volume=345 |issue=6196 |pages=562–566 |doi=10.1126/science.1252243 |accessdate=2 August 2014 |last1=Lee |first1=Michael S. Y. |first2=Andrea|last2=Cau |first3=Darren|last3=Naish|first4=Gareth J.|last4=Dyke}}\u0026lt;/ref\u0026gt; There’s a DOI in there somewhere. This is the best way to cite DOIs, firstly as it’s actually a proper traditional citation and there’s nothing magic about DOIs, secondly because it’s a template tag and can be re-rendered to look slightly different if needed.\nThird there’s the old official DOI template tag that’s now discouraged:\n\u0026lt;ref name=\"Example2006\"\u0026gt;{{Cite doi|10.1146/annurev.earth.33.092203.122621}}\u0026lt;/ref\u0026gt; And then there’s another one.\n{{doi|10.5555/123456789}} Knowing all this helps us find DOIs. But if we want to convert DOIs links in Wikipedia to use HTTPS, it means that there are more template tags to modify and more pages to re-render.\nNemo also put DOIs on the Interwiki Map which should make automatically changing some of the URLs a lot easier.\nWe’re very grateful to Nemo for his suggestions and work on this. We’ll report back!\nThe elephant in the room Those of you who know how DOIs work will have spotted an unsecured elephant in the room. When you visit a DOI, you visit the URL, which hits the DOI resolver proxy server, which returns a message to your browser to redirect to the landing page on the publisher’s site.\nSecurely talking to the DOI resolver by using HTTPS instead of HTTP means that no-one can eavesdrop and see which DOI you are visiting, or tamper with the result and send you off to a different page. But the page you are sent to will be, in nearly all cases, still HTTP. Upgrading infrastructure isn’t trivial, and, with over 4000 members (mostly publishers), most Crossref DOIs will still redirect to standard HTTP pages for the foreseeable future.\nYou can keep as secure as possible by using HTTPS Everywhere.\nFin There’s lots going on, watch this space to see developments. Thanks for reading this, and all the links. We’d love to know what you think.\nBootnote Not long after this blog post was published we saw something very interesting.\nThat’s no DOI. We like interesting things, but they can panic us. This turned out to be a great example of why this kind of thing can be useful. A minute’s digging and we found the article edit:\nIt turns out that this was a typo: someone put a title when they should have put in a DOI. 
And, as the event shows, this was removed from the Wikipedia article.\n", "headings": ["TL;DR","Background","Wikipedia DOI Events","Some Technical Things","Secure URLs","How shall I cite thee?","The elephant in the room","Fin","Bootnote"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossrefs-doi-event-tracker-pilot/", "title": "Crossref’s DOI Event Tracker Pilot", "subtitle":"", "rank": 1, "lastmod": "2015-03-02", "lastmod_ts": 1425254400, "section": "Blog", "tags": [], "description": "TL;DR Crossref’s “DOI Event Tracker Pilot”- 11 million+ DOIs \u0026amp; 64 million+ events. You can play with it at: http://goo.gl/OxImJa\nTracking DOI Events So have you been wondering what we’ve been doing since we posted about the experiments we were conducting using PLOS’s open source ALM code? A lot, it turns out. About a week after our post, we were contacted by a group of our members from OASPA who expressed an interest in working with the system. Apparently they were all about to conduct similar experiments using the ALM code, and they thought that it might be more efficient and interesting if they did so together using our installation. Yippee. Publishers working together. That’s what we’re all about.\n", "content": "TL;DR Crossref’s “DOI Event Tracker Pilot”- 11 million+ DOIs \u0026amp; 64 million+ events. You can play with it at: http://goo.gl/OxImJa\nTracking DOI Events So have you been wondering what we’ve been doing since we posted about the experiments we were conducting using PLOS’s open source ALM code? A lot, it turns out. About a week after our post, we were contacted by a group of our members from OASPA who expressed an interest in working with the system. Apparently they were all about to conduct similar experiments using the ALM code, and they thought that it might be more efficient and interesting if they did so together using our installation. Yippee. Publishers working together. That’s what we’re all about.\nSo we convened the interested parties and had a meeting to discuss what problems they were trying to solve and how Crossref might be able to help them. That early meeting came to a consensus on a number of issues:\nThe group was interested in exploring the role Crossref could play in providing an open, common infrastructure to track activities around DOIs, they were not interested in having Crossref play a role in the value-add services of reporting on an interpreting the meaning of said activities. The working group needed representatives from multiple stakeholders in the industry. Not just open access publishers from OASPA, but from subscription based publishers, funders, researchers and third party service providers as well. That it was desirable to conduct a pilot to see if the proposed approach was both technically feasible and financially sustainable. And so after that meeting, the “experiment” graduated to becoming a “pilot.” This Crossref pilot is based on the premise that the infrastructure involved in tracking common information about “DOI events” can be usefully separated from the value-added services of analysing and presenting these events in the form of qualitative indicators. There are many forms of events and interactions which may be of interest. Service providers will wish to analyse, aggregate and present those in a range of different ways depending on the customer and their problem. 
The capture of the underlying events can be kept separate from those services.\nIn order to ensure that the Crossref pilot is not mistaken for some sub rosa attempt to establish new metrics for evaluating scholarly output, we also decided to eschew any moniker that includes the word “metrics” or synonyms. So the “ALM Experiment” is dead. Long live the “DOI Event Tracker” (DET) pilot. Similarly, PLOS’s open source “ALM software” has been resurrected under the name “Lagotto.”\nThe Technical Issues Crossref members are interested in knowing about “events” relating to the DOIs that identify their content. But our members face a now-classic problem. There are a large number of sources for scholarly publications (3k+ Crossref members) and that list is still growing. Similarly, there are an unbounded number of potential sources for usage information. For example:\nSupplemental and grey literature (e.g. data, software, working papers). Orthogonal professional literature (e.g. patents, legal documents, governmental/NGO/IGO reports, consultation reports, professional trade literature). Scholarly tools (e.g. citation management systems, text and data mining applications). Secondary outlets for scholarly literature (institutional and disciplinary repositories, A\u0026amp;I services). Mainstream media (e.g. BBC, New York Times). Social media (e.g. Wikipedia, Twitter, Facebook, Blogs, Yo). Finally, there is a broad and growing audience of stakeholders who are interested in seeing how the literature is being used. The audience includes publishers themselves as well as funders, researchers, institutions, policy makers and citizens.\nPublishers (or other stakeholders) could conceivably each choose to run their own system to collect this information and redistribute it to interested parties. Or they can work with a vendor to do the same. But in either case, they would face the following problems:\nThe N sources will change. New ones will emerge. Old ones will vanish. The N audiences will change. New ones will emerge. Old ones will vanish. Each publisher/vendor will need to deal with N sources’ different APIs, rate limits, T\u0026amp;Cs, data licenses, etc. This is a logistical headache for both the publishers/vendors and for the sources. Each audience will need to deal with N publisher/vendor APIs, rate limits, T\u0026amp;Cs, data licenses, etc. This is a logistical headache for both the audiences and for the publishers. If publishers/vendors use different systems which in turn look at different sources, it will be difficult to compare or audit results across publishers/vendors. If a journal moves from one publisher to another, then how are the metrics for that journal’s articles going to follow the journal? And then there is the simple issue of scale. Most parties will be interested in comparing the data that they collect for their own content with data about their competitors. Hence, if they all run their own system, they will each be querying much more than their own data. If, for example, just the commercial third-party providers were interested in collecting data covering the formal scholarly literature, they would each find themselves querying the same sources for the same 80 million DOIs. To put this into perspective, to refresh the data for 10 million DOIs once a month would require sources to support ~ 14K API calls an hour. 60 million DOIs would require 100K API calls an hour. Current standard API caps for many of the sources that people are interested in querying hover around 2K per hour.
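(A rough sanity check on those figures, using only the numbers above: refreshing 10,000,000 DOIs once every 30 days works out to 10,000,000 ÷ (30 × 24) ≈ 13,900 requests per hour, hence the ~14K figure; 60 million DOIs scale that to roughly 83K per hour, the order of magnitude quoted above. Either is far beyond a 2K-per-hour cap.)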
We may see these sources lift that cap for exceptional cases, but they are unlikely to do so for many different clients all of whom are querying essentially the same thing.\nThese issues typify the “multiple bilateral relationships” problem that Crossref was founded to try and ameliorate. When we have many organizations trying to access the exact same APIs to process the exact same data (albeit to different ends), then it seems likely that Crossref could help make the process more efficient.\nPiloting A Proposed Solution The Crossref DET pilot aims to show the feasibility of providing a hub for the collection, storage and propagation of DOI events from multiple sources to multiple audiences.\nData Collection Pull: DET will collect DOI event data from sources that are of common interest to the membership, but which are unlikely to make special efforts to accommodate the scholarly communications industry. Examples of this class of source include large, broadly popular services like FaceBook, Twitter, VK, Sina Weibo, etc. Push: DET will allow sources to send DOI event data directly to Crossref in one of three ways: Standard Linkback: Using standards that are widely used on the web. This will automatically enable linkback-aware systems like WordPress, Moveable Type, etc. to alert DET to DOI events. Scholarly Linkback: A to-be-defined augmented linkback-style API which will be optimized to work with scholarly resources and which will allow for more sophisticated payloads including other identifiers (e.g. ORCIDs, FundRefs), metadata, provenance information and authorization information. This system could be used by tools designed for scholarly communications. So, for example, it could be used by publisher platforms to distribute events related to downloads or comments within their discussion forums. It could also be used by third party scholarly apps like Zotero, Mendeley, Papers, Authorea, IRUS-UK, etc. in order to alert interested parties in events related to specific DOIs. Redirect: DET will also be able to serve as a service discovery layer that will allow sources to push DOI event data directly to an appropriate publisher-controlled endpoint using the above scholarly linkback mechanism. This can be used by sources like repositories in order to send sensitive usage data directly to the relevant publishers. Data Propagation Parties may want to use the DET in order to propagate information about DOI events. The system will support two broad data propagation patterns:\none-to-many: DOI events that are commonly harvested (pulled) by the DET system from a single source will be distributed freely to anybody who queries the DET API. Similarly, sources that push DOI events via the standard or scholarly linkback mechanisms, will also propagate their DOI events openly to anybody who queries the DET API. DOI events that are propagated in either of these cases will be kept and logged by the DET system along with appropriate provenance information. This will be the most common, default propagation model for the DET system. one-to-one: Sources of DOI events can also report (push) DOI event data directly to owner of the relevant DOI if the DOI owner provides \u0026amp; registers a suitable end-point with the DET system. In these cases, data sources seeking to report information relating to a DOI, will be redirected (with a suitable 30X HTTP status and relevant headers) to the end-point specified by the DOI owner. The DET system will not keep the request or provenance information. 
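As a purely illustrative sketch of that redirect exchange, here is what a source pushing an event might do in Python (assuming the requests library; the endpoint URL, payload fields and exact status code below are invented for illustration, since the pilot’s actual API is not specified in this post):
import requests

# Hypothetical event payload and DET hub endpoint, for illustration only.
event = {"doi": "10.5555/12345678", "type": "download", "source": "example-repository"}
response = requests.post("https://det.example.org/events", json=event, allow_redirects=False)

if response.status_code in (307, 308):
    # One-to-one case: the hub answers with a redirect to the end-point registered
    # by the DOI owner, and the source re-sends the payload there. The hub itself
    # keeps nothing about the request.
    publisher_endpoint = response.headers["Location"]
    requests.post(publisher_endpoint, json=event)
else:
    # One-to-many case: the event is accepted, logged and propagated openly by the hub.
    response.raise_for_status()
The point of the sketch is simply that the hub either accepts (and openly propagates) the event, or hands the source a publisher-controlled address and steps out of the way.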
The one-to-one propagation model is designed to handle use cases where the source of the DOI event has put restrictions on the data and will only share the DOI events with the owner (registrant) of the DOI. This use case may be used, for example, by aggregators or A\u0026amp;I services that want to report confidential data directly back to a publisher. The advantage of the redirect mechanism is that Crossref is not put into the position of having to secure sensitive data as said data will never reside on Crossref systems. Note that the two patterns can be combined. So, for example, a publisher might want to have public social media events reported to the DET and propagated accordingly, but also to have private third parties report confidential information directly to the publisher.\nSo Where Are We? So to start with, the DET Working Group has grown substantially since the early days and we have representatives from a wide variety of stakeholders. The group includes:\nCameron Neylon, PLOS Chris Shillum, Elsevier Dom Mitchell, Co-action Publishing Euan Adie, Altmetric Jennifer Lin, PLOS Juan Pablo Alperin, PKP Kevin Dolby, Wellcome Trust Liz Ferguson, Wiley Maciej Rymarz, Mendeley Mark Patterson, eLife Martin Fenner, PLOS Mike Thelwell, U Wolverhampton Rachel Craven, BMC Richard O’Beirne, OUP Ruth Ivimey-Cook, eLife Victoria Rao, Elsevier As well as the usual contingent of Crossref cat-herders including: Geoffrey Bilder, Rachael Lammey \u0026amp; Joe Wass.\nWhen we announced the then-DET experiment, we said that one of the biggest challenges would be to create something that scaled to industry levels. At launch, we only loaded in about 317,500+ Crossref DOIs representing publications from 2014 and we could see the system was going to struggle. Since then Martin Fenner and Jennifer Lin at PLOS have been focusing on making sure that the Lagotto code scales appropriately, and it is now humming along with just over 11.5 million DOIs for which we’ve gathered over 64 million “events.” We aren’t worried about scalability on that front any more.\nWe’ve also shown that third parties should be able to access the API to provide value-added reporting and metrics. As a demonstration of this, PLOS configured a copy of its reporting software “Parascope” to point at the Crossref DET instance. The next step we’re taking is to start testing the “push” API mechanism and the “point-to-point redirect” API mechanism. For the push API, we should have a really exciting demo available to show within the next few days. And on the point-to-point redirect, we have a sub-group exploring how the point-to-point redirect mechanism could potentially be used for reporting COUNTER stats as a complement to the Sushi initiative.\nThe other major outstanding task we have before us is to calculate what the costs will be of running the DET system as a production service. In this case we expect to have some pretty accurate data to go on as we will have had close to half a year of running the pilot with a non-trivial number of DOIs and sources. Note that the work group is concerned to ensure that the underlying data from the system remains open to all. Keeping this raw data open is seen as critical to establishing trust in the metrics and reporting systems that third parties build on the data. The group has also committed to leaving the creation of value-add services to third parties.
As such we have been focusing on exploring business models based around service-level-agreement backed versions of the API to complement the free version of the same API. The free API will come with no guarantees of uptime, performance characteristics or support. For those users that depend on the API in order to deliver their services, we will offer paid-for SLA-backed versions of the free APIs. We can then configure our systems so that we can independently scale these SLA-backed APIs in order to meet SLA agreements.\nOur goal is to have these calculations complete in time for the working group to make a recommendation to the Crossref board meeting in July 2015.\nUntil then, we’ll use CrossTech as a venue for notifying people when we’ve hit new milestones or added new capabilities to the DET Pilot system.\n", "headings": ["TL;DR","Tracking DOI Events","The Technical Issues","Piloting A Proposed Solution","Data Collection","Data Propagation","So Where Are We?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/problems-with-dx.doi.org-on-january-20th-2015-what-we-know./", "title": "Problems with dx.doi.org on January 20th 2015- what we know.", "subtitle":"", "rank": 1, "lastmod": "2015-01-21", "lastmod_ts": 1421798400, "section": "Blog", "tags": [], "description": "Hell’s teeth.\nSo today (January 20th, 2015) the DOI HTTP resolver at dx.doi.org started to fail intermittently around the world. The doi.org domain is managed by CNRI on behalf of the International DOI Foundation. This means that the problem affected all DOI registration agencies including Crossref, DataCite, mEDRA etc. This also means that more popularly known end-user services like FigShare and Zenodo were affected. The problem has been fixed, but the fix will take some time to propagate throughout the DNS system. You can monitor the progress here:\nhttps://www.whatsmydns.net/#A/doi.org\nNow for the embarrassing stuff…\n", "content": "Hell’s teeth.\nSo today (January 20th, 2015) the DOI HTTP resolver at dx.doi.org started to fail intermittently around the world. The doi.org domain is managed by CNRI on behalf of the International DOI Foundation. This means that the problem affected all DOI registration agencies including Crossref, DataCite, mEDRA etc. This also means that more popularly known end-user services like FigShare and Zenodo were affected. The problem has been fixed, but the fix will take some time to propagate throughout the DNS system. You can monitor the progress here:\nhttps://www.whatsmydns.net/#A/doi.org\nNow for the embarrassing stuff…\nAt first lots of people were speculating that the problem had to do with somebody forgetting to renew the dx.doi.org domain name. Our information from CNRI was that the problem had to do with a mistaken change to a DNS record and that the domain name wasn’t the issue. We corrected people who were reporting that domain name renewal as the cause, but eventually we learned that it was actually true. We have had it confirmed that the problem originated with CNRI manually renewing the domain name at the last minute. Ugh. CNRI will issue a statement soon. We’ll link to it as soon as they do. UPDATE (Jan 21st): CNRI has sent Crossref a statement. They do not have it on their site yet, so we have can included it below.\nIn the mean time, if you are having trouble resolving DOIs, a neat trick to know is that you can do so using the Handle system directly. 
For example:\nhttp://hdl.handle.net/10.5555/12345678\nCrossref will, of course, also analyse what occurred, and issue a public report as well. Obviously, this report will include an analysis of how the outage effected DOI referrals to our members.\nThe amazingly cool thing is that everybody online has been very supportive and has helped us to diagnose the problem. Some have even said that the event underscores a point we often make about so-called “persistent-identifiers”- which is that they are not magic technology; the “persistence” is the result of a social contract. We like to say that Crossref DOIs are as persistent as Crossref staff. Well, to that phrase we have to add “and IDF staff” and “CNRI staff” and “ICANN staff”. It is turtles all the way down.\nWe don’t want to dismiss this event as an inevitable consequence of interdependent systems.And we don’t want to pass the buck. We need to learn something practical from this. How can we guard against this type of problem in the future? Again, people following this issue on Twitter have already been helping with suggestions and ideas. Can we crowd-source the monitoring of persistent identifier SLAs? Could we leverage Wikipedia, Wikidata or something similar to monitor critical identifiers and other infrastructure like purls, DOIs, handles, PMIDs, perma.cc, etc? Should we be looking at designating special exceptions to the normal rules governing DNS names? Do we need to distribute the risk more? Or is it enough cough to simply ensure that somebody, somewhere in the dependency chain had enabled DNS protection and auto-renewal for critical infrastructure DNS names?\nTruly, we are humbled. For all the redundancy built into our systems (multiple servers, multiple hosting sites, Raid drives, redundant power), we were undone by a simple administrative task. Crossref, IDF and CNRI- we all feel a bit crap. But we’ll get back. We’ll fix things. And we’ll let you know how we do it.\nWe will update this space as we know more. We will also keep people updated on twitter on @CrossrefNews. And we will report back in detail as soon as we can.\nCNRI Statement \u0026quot;The doi.org domain name was inadvertently allowed to expire for a brief period this morning (Jan 20). It was reinstated shortly after 9am this morning as soon as the relevant CNRI employee learned of it. A reminder email sent earlier this month to renew the registration was apparently missed. We sincerely apologize for any difficulties this may have caused. The domain name has since been placed on automatic renewal, which should prevent any repeat of this event.\u0026quot;\n", "headings": ["CNRI Statement"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/introducing-chronograph/", "title": "Introducing the Crossref Labs DOI Chronograph", "subtitle":"", "rank": 1, "lastmod": "2015-01-12", "lastmod_ts": 1421020800, "section": "Blog", "tags": ["chronograph"], "description": "tl;dr http://0-chronograph-labs-crossref-org.libus.csd.mu.edu\nAt Crossref we mint DOIs for publications and send them out into the world, but we like to hear how they’re getting on out there. Obviously, DOIs are used heavily within the formal scholarly literature and for citations, but they’re increasingly being used outside of formal publications in places we didn’t expect. 
With our DOI Event Tracking / ALM pilot project we’re collecting information about how DOIs are mentioned on the open web to try and build a picture about new methods of citation.\n", "content": "tl;dr http://0-chronograph-labs-crossref-org.libus.csd.mu.edu\nAt Crossref we mint DOIs for publications and send them out into the world, but we like to hear how they’re getting on out there. Obviously, DOIs are used heavily within the formal scholarly literature and for citations, but they’re increasingly being used outside of formal publications in places we didn’t expect. With our DOI Event Tracking / ALM pilot project we’re collecting information about how DOIs are mentioned on the open web to try and build a picture about new methods of citation.\nAs part of the preparation for collaborating with Wikipedia, we looked at our statistics about when DOIs are clicked and discovered that Wikipedia was, over a two year period from 2012, the eighth largest referrer of DOIs. This means that not only does Wikipedia have a lot of DOIs, but people click them too. This bit of one-off data analysis (which surprised us) gave us enough of a prod to kickstart our collaboration with Wikipedia.\nAt the ALM Workshop 2014 in San Francisco we talked to some Wikipedians and bibliometricians and realised that we were sitting on a really interesting data-set and that it would be churlish not to share it. At the hackathon (read the report here) we started work on a service to gather information about DOIs and, a month later, we’re ready to unveil the DOI Chronograph.\nShow me the goods\nYou can see:\nDaily referrals (clicks) from top level domains, e.g. Wikipedia.org: http://0-chronograph-labs-crossref-org.libus.csd.mu.edu/domain.html?domain=wikipedia.org\nDaily referrals from specific subdomains, e.g. fr.wikipedia.org: http://0-chronograph-labs-crossref-org.libus.csd.mu.edu/domain.html?domain=fr.wikipedia.org\nDaily resolutions per DOI: http://0-chronograph-labs-crossref-org.libus.csd.mu.edu/doi.html?doi=10.1787%2F20752288\nAnd, the chart that kicked this all off: DOI referring domains league tables. This shows that Wikipedia is the 3rd or 4th non-traditional referrer of DOIs (i.e. excluding referrals from Publishers’ domains): http://0-chronograph-labs-crossref-org.libus.csd.mu.edu/top.html\nTry it out\nVisit the Chronograph and give it a try chronograph.labs.crossref.org on your favourite DOI (everyone has one).\nMore data\nTalking to a bibliometrician we also realised we can correlate other data for DOIs. We’re getting the issue date (approximately the publication date) from our own metadata, as well as the date that the Crossref metadata was updated. This gives interesting results, like the resolutions for 10.1038/ncomms2953, which peak after publication and then tails off. We are attempting to collect the following information:\ndaily resolution counts day on which resolution was first successful day on which it’s possible to resolve the DOI (we’ve got a bot running for new publications) day on which the publisher says the article was published day on which the metadata was most recently deposited with us day on which the metadata was first deposited with us We’re not there yet, but we’ve made a start and we’ve already got some pretty interesting data!\nWeasel words\nIt’s a labs project so the usual weasel words apply. Specifically, we currently have the logs for 2012 to 2014 (we’re working at digging out the rest), and the referral information for 50 million DOIs (out of 71 million). 
That number will be higher by the time you read this. If your page is slow to load, be patient, as it’s currently working hard crunching numbers.\nThis project is focused on exploring the use of DOIs outside of the formal literature. As such, we are only looking at referrals from domains that do not appear to belong to primary publishers (i.e. our members). If you try a domain and it doesn’t work, it could be that the domain belongs to one of our members. If you’ve noticed any mistakes, please email us at labs@crossref.org.\nFinally, these numbers contain all DOI resolutions. That’s human clicks, but also content negotiation to retrieve metadata, robots, etc. We might try to filter them in future, but for now be aware that not every visitor is a human.\nI’ll detail some of the technical stuff (it’s very interesting) and what happened next with Wikipedia in a future post. Watch this space.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/documents/", "title": "Documents", "subtitle":"", "rank": 1, "lastmod": "2014-11-03", "lastmod_ts": 1414972800, "section": "Labs", "tags": [], "description": "\rProjects Using Crossref REST API ", "content": "\rProjects Using Crossref REST API ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2014/", "title": "2014", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/linking-data-and-publications/", "title": "Linking data and publications", "subtitle":"", "rank": 1, "lastmod": "2014-09-21", "lastmod_ts": 1411257600, "section": "Blog", "tags": [], "description": "Do you want to see if a Crossref DOI (typically assigned to publications) refers to DataCite DOIs (typically assigned to data)? Here you go:\nhttps://web.archive.org/web/20150121025249/http://0-api-labs-crossref-org.libus.csd.mu.edu/graph/doi/10.4319/lo.1997.42.1.0001\nConversely, do you want to see if a DataCite DOI refers to Crossref DOIs? Voilà:\nhttps://web.archive.org/web/20150321190744/http://0-api-labs-crossref-org.libus.csd.mu.edu/graph/doi/10.1594/pangaea.185321\nBackground “How can we effectively integrate data into the scholarly record?” This is the question that has, for the past few years, generated an unprecedented amount of handwringing on the part of researchers, librarians, funders and publishers.", "content": "Do you want to see if a Crossref DOI (typically assigned to publications) refers to DataCite DOIs (typically assigned to data)? Here you go:\nhttps://web.archive.org/web/20150121025249/http://0-api-labs-crossref-org.libus.csd.mu.edu/graph/doi/10.4319/lo.1997.42.1.0001\nConversely, do you want to see if a DataCite DOI refers to Crossref DOIs? Voilà:\nhttps://web.archive.org/web/20150321190744/http://0-api-labs-crossref-org.libus.csd.mu.edu/graph/doi/10.1594/pangaea.185321\nBackground “How can we effectively integrate data into the scholarly record?” This is the question that has, for the past few years, generated an unprecedented amount of handwringing on the part of researchers, librarians, funders and publishers. 
Indeed, this week I am in Amsterdam to attend the 4th RDA plenary, at which this topic will no doubt again garner a lot of deserved attention.\nWe hope that the small example above will help push the RDA’s agenda a little further. Like the recent ODIN project, it illustrates how we can simply combine two existing scholarly infrastructure systems to build important new functionality for integrating research objects into the scholarly literature.\nDoes it solve all of the problems associated with citing and referring to data? Can the various workgroups at RDA just cancel their data citation sessions and spend the week riding bikes and gorging on croquettes? Of course not. But my guess is that by simply integrating DataCite and Crossref in this way, we can make a giant push in the right direction.\nThere are certainly going to be differences between traditional citation and data citation. Some even claim that citing data isn’t “as simple as citing traditional literature.” But this is a caricature of traditional citation. If you believe this, go off and peruse the MLA, Chicago, Harvard, NLM and APA citation guides. Then read Anthony Grafton’s The Footnote. Are you back yet? Good, so let’s continue…\nCitation of any sort is a complex issue, full of subtleties, edge cases, exceptions, disciplinary variations and kludges. Historically, the way to deal with these edge cases has been social, not technical. For traditional literature we have simply evolved and documented citation practices which generally make contextually-appropriate use of the same technical infrastructure (footnotes, endnotes, metadata, etc.). I suspect the same will be true in citing data. The solutions will not be technical; they will mostly be social. Researchers and publishers will evolve new, contextually appropriate mechanisms to use existing infrastructure to deal with the peculiarities of data citation.\nDoes this mean that we will never have to develop new systems to handle data citation? Possibly. But I don’t think we’ll know what those systems are or how they should work until we’ve actually had researchers attempting to use and adapt the tools we have.\nTechnical background About five years ago, Crossref and DataCite explored the possibility of exposing linkages between DataCite and Crossref DOIs. Accordingly, we spent some time trying to assemble an example corpus that would illustrate the power of interlinking these identifiers. We encountered a slight problem. We could hardly find any examples. At that time, virtually nobody cited data with DataCite DOIs and, if they did, the Crossref system did not handle them properly. We had to sit back and wait a while.\nAnd now the situation has changed.\nThis demonstrator harvests DataCite DOIs using their OAI-PMH API and links them in a graph database with Crossref DOIs. We have exposed this functionality on the “labs” (i.e. experimental) version of our REST API as a graph resource. So…\nNow you can get a list of Crossref DOIs that refer to DataCite DOIs using Event Data.\nCaveats and Weasel Words We have not finished indexing all the links. The API is currently a very early labs project. It is about as reliable as a devolution promise from Westminster. The API is run on a pair of Raspberry Pis connected to the internet via Bluetooth. It is not fast. The representation and the API are under active development. Things will change. 
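If you would rather script against the graph resource than click the archived links, the calls are plain HTTP GETs. Here is a minimal sketch in Python; the hostname and the response handling are assumptions based on the example URLs above (this was a short-lived labs endpoint, so treat it as illustrative rather than something you can rely on today):

```python
import json
import urllib.request

# Hypothetical helper: fetch the experimental "graph" resource for a DOI.
# The endpoint pattern is taken from the archived example URLs above
# (api.labs.crossref.org/graph/doi/<DOI>); the service was a labs experiment,
# so it may no longer respond.
def fetch_doi_graph(doi):
    url = "http://api.labs.crossref.org/graph/doi/" + doi
    with urllib.request.urlopen(url, timeout=30) as response:
        return json.load(response)

if __name__ == "__main__":
    # A Crossref DOI that, per the post, refers to DataCite DOIs.
    graph = fetch_doi_graph("10.4319/lo.1997.42.1.0001")
    # The response shape is not documented in the post, so just print whatever comes back.
    print(json.dumps(graph, indent=2))
```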
Watch the Crossref Labs site for updates on this collaboration with DataCite ", "headings": ["Background","Technical background","Caveats and Weasel Words"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/citation-needed/", "title": "Citation needed", "subtitle":"", "rank": 1, "lastmod": "2014-08-07", "lastmod_ts": 1407369600, "section": "Blog", "tags": [], "description": "Remember when I said that the Wikipedia was the 8th largest referrer of DOI links to published research? This despite only a fraction of eligible references in the free encyclopaedia using DOIs.\nWe aim to fix that. Crossref and Wikimedia are launching a new initiative to better integrate scholarly literature in the world’s largest public knowledge space, Wikipedia.\nThis work will help promote standard links to scholarly references within Wikipedia, which persist over time by ensuring consistent use of DOIs and other citation identifiers in Wikipedia references.", "content": "Remember when I said that the Wikipedia was the 8th largest referrer of DOI links to published research? This despite only a fraction of eligible references in the free encyclopaedia using DOIs.\nWe aim to fix that. Crossref and Wikimedia are launching a new initiative to better integrate scholarly literature in the world’s largest public knowledge space, Wikipedia.\nThis work will help promote standard links to scholarly references within Wikipedia, which persist over time by ensuring consistent use of DOIs and other citation identifiers in Wikipedia references. Crossref will support the development and maintenance of Wikipedia’s citation tools on Wikipedia. This work will include bug fixes and performance improvements for existing tools, extending the tools to enable Wikipedia contributors to more easily look up and insert DOIs, and providing a “linkback” mechanism that alerts relevant parties when a persistent identifier is used in a Wikipedia reference.\nIn addition, Crossref is creating the role of Wikimedia Ambassador (modeled after Wikimedian-in-Residence) to act as liaison with the Wikimedia community, promote use of scholarly references on Wikipedia, and educate about DOIs and other scholarly identifiers (ORCIDs, PubMed IDs, DataCite DOIs, etc) across Wikimedia projects.\nStarting today, Crossref will be working with Daniel Mietchen to coordinate Crossref’s Wikimedia-related activities. Daniel’s team will be composed of Max Klein and Matt Senate, who will work to enhance Wikimedia citation tools, and will share the role of Wikipedia ambassador with Dorothy Howard.\nSince the beginnings of Wikipedia, Daniel Mietchen has worked to integrate scholarly content into Wikimedia projects. He is part of an impressive community of active Wikipedians and developers who have worked extensively on linking Wikipedia articles to the formal literature and other scholarly resources. We’ve been talking to him about this project for nearly a year, and are happy to finally get it off the ground.\n-G\n]7 Matt, Max and Daniel at #wikimania2014. 
Photo by Dorothy.\nwikimania2014 ", "headings": ["wikimania2014"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/researchers-just-wanna-have-funds/", "title": "♫ Researchers just wanna have funds ♫", "subtitle":"", "rank": 1, "lastmod": "2014-04-10", "lastmod_ts": 1397088000, "section": "Blog", "tags": [], "description": "photo credit\nSummary You can use a new Crossref API to query all sorts of interesting things about who funded the research behind the content Crossref members publish.\nBackground Back in May 2013 we launched Crossref’s FundRef service. It can be summarized like this:\nCrossref keeps and manages a canonical list of Funder Names (ephemeral) and associated identifiers (persistent). We encourage our members (or anybody, really: the list is available under a CC-Zero license waiver) to use this list for collecting information on who funded the research behind the content that our members publish.", "content": "\nphoto credit\nSummary You can use a new Crossref API to query all sorts of interesting things about who funded the research behind the content Crossref members publish.\nBackground Back in May 2013 we launched Crossref’s FundRef service. It can be summarized like this:\nCrossref keeps and manages a canonical list of Funder Names (ephemeral) and associated identifiers (persistent). We encourage our members (or anybody, really: the list is available under a CC-Zero license waiver) to use this list for collecting information on who funded the research behind the content that our members publish. We then ask that our members deposit this data in their normal Crossref metadata deposits. And that was cool.\nBut then people started asking us awkward questions. Questions like “what can I do with the funder data?” and “how do I query it?”.\nStoopit people.\nCan’t you just let us bask for a few minutes in the sunny glow of actually conceiving of and launching a project within a year?\nBut seriously, funders were interested to see how they could use the funder metadata being collected in Crossref. In particular, some funding agencies were interested in being able to measure Key Performance Indicators (“KPIs” to management wonks) related to recent mandates such as the February 22nd 2013 OSTP memo, Public Access to the Results of Federally Funded Research. Two groups also approached us: CHORUS and SHARE. Both are interested in exploring how to build reporting tools for funders, institutions and researchers and each brought us a gigantic hairball of use-cases they were hoping we would be able to meet.\nConveniently, we were in the process of creating a revised, modern Crossref API that is entirely buzzword-compliant, and so we set to work…\nWe thought people might be interested in seeing what you can do with the Crossref REST API in relation to funding information and the expectations that are increasingly being attached to it. CHORUS is already using the Crossref REST API heavily and we expect that SHARE will soon start making use of it as well. The feedback from both groups has been very useful, but we are looking for broader feedback as well. The API is still in development, so now is your chance to help us shape it.\nBrief Examples Please note: the following are API calls. Although you can copy and paste the URIs into your browser, the data is returned in a machine-readable representation called JSON. 
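If you would rather script these calls than paste URIs into a browser, any HTTP client will do. Below is a minimal sketch in Python using only the standard library; the endpoints are the ones listed in this post, and the message / total-results / items fields reflect the REST API's usual JSON envelope (check the current documentation if that envelope has changed):

```python
import json
import urllib.parse
import urllib.request

API = "https://api.crossref.org/v1"

def get_json(path, **params):
    # Build a REST API URL like the ones listed in this post and parse the JSON reply.
    query = ("?" + urllib.parse.urlencode(params)) if params else ""
    with urllib.request.urlopen(API + path + query, timeout=30) as response:
        return json.load(response)

if __name__ == "__main__":
    # How many works currently carry FundRef (funder) metadata?
    works = get_json("/works", filter="has-funder:true", rows=0)
    print("Works with funder metadata:", works["message"]["total-results"])

    # Look up candidate funder records matching "NSF".
    funders = get_json("/funders", query="NSF")
    for funder in funders["message"]["items"][:5]:
        print(funder["id"], funder["name"])
```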
If you want the results to look a little more presentable, we advise you to install the JSONView plugin:\nFirefox Users: JSONView Chrome Users: JSONView Also note that publishers have only just started to deposit the metadata needed for these APIs to work, so the data is currently sparse. We know that many of our members are working feverishly to populate more of the needed metadata, but this requires updates to their manuscript tracking systems, production systems and hosting systems. It takes time.\nBut for now you can paste the relevant URIs below into your browser and see the results that we do have. Expect these numbers to increase sharply over the next few months.\nTo start with, you might want to know how many articles in Crossref have FundRef metadata:\nhttps://api.crossref.org/v1/works?filter=has-funder:true\u0026amp;rows=0 You could then be interested in knowing how many works in Crossref use FundRef to credit the United States’ National Science Foundation (NSF) for funding their research. First you need to find out what the FundRef identifier is for the NSF:\nhttps://api.crossref.org/v1/funders?query=NSF You can see that there are several entries that match “NSF”, and that the one we are looking for has the identifier http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000001. Remember, funding agency names can change frequently; the ID provides a persistent link to the funder even if their name changes.\nIf you are curious, you can see the details for the NSF entry, including its location, parent and child organizations:\nhttps://api.crossref.org/v1/funders/10.13039/100000001 Notice that the results also list the work-count. This is the number of works in the Crossref metadata that list the US NSF as having funded the research.\nSo perhaps you would like to see the list of works. The following will list the first twenty:\nhttps://api.crossref.org/v1/funders/10.13039/100000001/works You can page through the results with the offset argument:\nhttps://api.crossref.org/v1/funders/10.13039/100000001/works?offset=20 https://0-api-crossref-org.libus.csd.mu.edu/v1/funders/10.13039/100000001/works?offset=40 ... How many works that have listed the NSF as a funder have license information:\nhttps://api.crossref.org/v1/funders/10.13039/100000001/works?filter=has-license:true\u0026amp;rows=0 Let’s see the first batch that have license information:\nhttps://api.crossref.org/v1/funders/10.13039/100000001/works?filter=has-license:true Let’s look at the metadata for one of the DOIs returned:\nhttps://api.crossref.org/v1/works/10.1063/1.3593378 Interesting, the metadata shows an article published by AIP. It includes license information (CC-BY 3.0) as well as a link to the full text. If you follow the link to the full text, you can retrieve it:\nhttp://link.aip.org/link/applab/v98/i21/p216101/pdf/CHORUS Wow, a pretty short article. But you can see that it does credit the NSF and that the award number recorded in the text is the same as the award number recorded in the FundRef section of the Crossref metadata. Yay.\nYou can see in the brief examples above that there is a lot of other metadata you may want to query on and explore. It can include ORCIDs, information about archiving arrangements, even abstracts. It all depends on what the Crossref member has decided to provide.\nYou can get a simple overview of what a Crossref member has provided by looking at a member summary. 
Here is an example for Hindawi:\nhttps://api.crossref.org/v1/members?query=hindawi Note again that names are fickle, so the above query can also be accomplished using the member identifier like this:\nhttps://api.crossref.org/v1/members/98 Groovy, innit?\nIf you want more pointers on where you can learn how to use the API, read on…\nMore examples and documentation. We have a draft of the full documentation for the Crossref REST API. Note that this is undergoing active revision and we ask that you look at the updated documentation if things that once worked cease to. We would also love your feedback and suggestions. Send them to:\nWe often get asked “what metadata does a publisher need to provide in order to enable this kind of functionality?” To answer that, we have developed a document titled Crossref metadata best practice to support key performance indicators (KPIs) for funding agencies. Try saying that ten times very fast.\nThe Future of the Crossref REST API. Our aim is for the Crossref REST API to go into production this summer (2014). As with most of our newer APIs, there will be a free API for public use and a paid-for API for professional use. The only difference between the two will be that the professional version will come with a service level agreement (SLA) covering uptime, response time and support. Naturally, this also means that the professional one will be on dedicated hosting equipment so that we can meet these SLAs, whereas the performance of the free version will be subject to the vicissitudes inherent in using a shared, constrained resource (i.e. the server and network it is running on).\nAgain, the basics of the API are in place. It should be fairly stable, but we do reserve the right to make changes to it over the next few months. Please send us feedback.\n— The Weasel\n", "headings": ["Summary","Background","Brief Examples","More examples and documentation.","The Future of the Crossref REST API."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/many-metrics-such-data-wow/", "title": "Many Metrics. Such Data. Wow.", "subtitle":"", "rank": 1, "lastmod": "2014-02-24", "lastmod_ts": 1393200000, "section": "Blog", "tags": [], "description": "Crossref Labs loves to be the last to jump on an internet trend, so what better than to combine the Doge meme with altmetrics?\nNote: The API calls below have been superseded with the development of the Event Data project. See the latest API documentation for equivalent functionality.\nWant to know how many times a Crossref DOI is cited by the Wikipedia?\nhttp://det.labs.crossref.org/works/doi/10.1371/journal.pone.0086859 Or how many times one has been mentioned in Europe PubMed Central?", "content": "Crossref Labs loves to be the last to jump on an internet trend, so what better than to combine the Doge meme with altmetrics?\nNote: The API calls below have been superseded with the development of the Event Data project. See the latest API documentation for equivalent functionality.\nWant to know how many times a Crossref DOI is cited by the Wikipedia?\nhttp://det.labs.crossref.org/works/doi/10.1371/journal.pone.0086859 Or how many times one has been mentioned in Europe PubMed Central?\nhttp://det.labs.crossref.org/works/doi/10.1016/j.neuropsychologia.2013.10.021 Or DataCite?\nhttp://det.labs.crossref.org/works/doi/10.1111/jeb.12289 Background Back in 2011 PLOS released its awesome ALM system as open source software (OSS). 
At Crossref Labs, we thought it might be interesting to see what would happen if we ran our own instance of the system and loaded it up with a few Crossref DOIs. So we did. And the code fell over. Oops. Somehow it didn’t like dealing with 10 million DOIs. Funny that.\nBut the beauty of OSS is that we were able to work with PLOS to scale the code to handle our volume of data. Crossref contracted with Cottage Labs and we both worked with PLOS to make changes to the system. These eventually got fed back into the main ALM source on Github. Now everybody benefits from our work. Yay for OSS.\nSo if you want to know technical details, skip to Details for Propellerheads. But if you want to know why we did this, and what we plan to do with it, read on.\nWhy? There are (cough) some problems in our industry that we can best solve with shared infrastructure. When publishers first put scholarly content online, they used to make bilateral reference linking agreements. These agreements allowed them to link citations using each other’s proprietary reference linking APIs. But this system didn’t scale. It was too time-consuming to negotiate all the agreements needed to link to other publishers. And linking through many proprietary citation APIs was too complex and too fragile. So the industry founded Crossref to create a common, cross-publisher citation linking API. Crossref has since obviated the need for bilateral linking arrangements. So-called altmetrics look like they might have similar characteristics. You have ~4000 Crossref member publishers and N sources (e.g. Twitter, Mendeley, Facebook, CiteULike, etc.) where people use (e.g. discuss, bookmark, annotate, etc.) scholarly publications. Publishers could conceivably each choose to run their own system to collect this information. But if they did, they would face the following problems: The N sources will be volatile. New ones will emerge. Old ones will vanish. Each publisher will need to deal with each source’s different APIs, rate limits, T\u0026amp;Cs, data licenses, etc. This is a logistical headache for both the publishers and for the sources. If publishers use different systems which in turn look at different sources, it will be difficult to compare results across publishers. If a journal moves from one publisher to another, then how are the metrics for that journal’s articles going to follow the journal? This isn’t a complete list, but it shows that there might be some virtue in publishers sharing an infrastructure for collecting this data. But what about commercial providers? Couldn’t they provide these ALM services? Of course - and some of them currently do. But normally they look on the actual collection of this data as a means to an end. The real value they provide is in the analysis, reporting and tools that they build on top of the data. Crossref has no interest in building front-ends to this data. If there is a role for us to play here, it is simply in the collection and distribution of the data. No, really, WHY? Aren’t these altmetrics an ill-conceived and meretricious idea? By providing this kind of information, isn’t Crossref just encouraging feckless, neoliberal university administrators to hasten academia’s slide into a Stakhanovite dystopia? Can’t these systems be gamed? FOR THE LOVE OF FSM, WHY IS CROSSREF DABBLING IN SOMETHING OF SUCH QUESTIONABLE VALUE? takes deep breath. wipes spittle from beard These are all serious concerns. 
Goodhart’s Law and all that… If a university’s appointments and promotion committee is largely swayed by Impact Factor, it won’t improve a thing if they substitute or supplement Impact Factor with altmetrics. Amy Brand has repeatedly pointed out, the best institutions simply don’t use metrics this way at all (PowerPoint presentation). They know better. But yes, it is still likely that some powerful people will come to lazy conclusions based on altmetrics. And following that, other lazy, unscrupulous and opportunistic people will attempt to game said metrics. We may even see an industry emerge to exploit this mess and provide the scholarly equivalent of SEO. Feh. Now I’m depressed and I need a drink. So again, why is Crossref doing this? Though we have our doubts about how effective altmetrics will be in evaluating the quality of content, we do believe that they are a useful tool for understanding how scholarly content is used and interpreted. The most eloquent arguments against altmetrics for measuring quality, inadvertently make the case for altmetrics as a tool for monitoring attention. Critics of altmetrics point out that much of the attention that research receives outside of formal scholarly communications channels can be ascribed to: Puffery. Researchers and/or university/publisher “PR wonks” over-promoting research results. Innocent misinterpretation. A lay audience simply doesn’t understand the research results. Deliberate misinterpretation. Ideologues misrepresent research results to support their agendas. Salaciousness. The research appears to be about sex, drugs, crime, video games or other popular bogeymen. Neurobollocks. A category unto itself these days. In short, scholarly research might be misinterpreted. Shock horror. Ban all metrics. Whew. That won’t happen again. Scholarly research has always been discussed outside of formal scholarly venues. Both by scholars themselves and by interested laity. Sometimes these discussions advance the scientific cause. Sometimes they undermine it. The University of Utah didn’t depend on widespread Internet access or social networks to promote yet-to-be peer-reviewed claims about cold fusion. That was just old-fashioned analogue puffery. And the Internet played no role in the Laetrile or DMSO crazes of the 1980s. You see, there were once these things called “newspapers.” And another thing called “television.” And a sophisticated meatspace-based social network called a “town square.” But there are critical differences between then and now. As citizens get more access to the scholarly literature, it is far more likely that research is going to be discussed outside of formal scholarly venues. Now we can build tools to help researchers track these discussions. Now researchers can, if they need to, engage in the conversations as well. One would think that conscientious researchers would see it as their responsibility to remain engaged, to know how their research is being used. And especially to know when it is being misused. That isn’t to say that we expect researchers will welcome this task. We are no Pollyannas. Researchers are already famously overstretched. They barely have time to keep up with the formally published literature. It seems cruel to expect them to keep up with the firehose of the Internet as well. Which gets us back to the value of altmetrics tools. 
Our hope is that, as altmetrics tools evolve, they will provide publishers and researchers with an efficient mechanism for monitoring the use of their content in non-traditional venues. Just in the way that citations were used before they were distorted into proxies for credit and kudos. We don’t think altmetrics are there yet. Partly because some parties are still tantalized by the prospect of usurping one metric for another. But mostly because the entire field is still nascent. People don’t yet know how the information can be combined and used effectively. So we still make naive assumptions such as “link=like” and “more=better.” Surely it will eventually occur to somebody that, instead, there may be a connection between repeated headline-grabbing research and academic fraud. A neuroscientist might be interested in a tool that alerts them if the MRI scans in their research paper are being misinterpreted on the web to promote neurobollocks. An immunologist may want to know if their research is being misused by the anti-vaccination movement. Perhaps the real value in gathering this data will be seen when somebody builds tools to help researchers DETECT puffery, social-citation cabals, and misinterpretation of research results? But Crossref won’t be building those tools. What we might be able to do is help others overcome another hurdle that blocks the development of more sophisticated tools; getting hold of the needed data in the first place. This is why we are dabbling in altmetrics. Wikipedia is already the 8th largest referrer of Crossref DOIs. Note that this doesn’t just mean that the Wikipedia cites lots of Crossref DOIs, it means that people actually click on and follow those DOIs to the scholarly literature. As scholarly communication transcends traditional outlets and as the audience for scholarly research broadens, we think that it will be more important for publishers and researcher to be aware of how their research is being discussed and used. They may even need to engage more with non-scholarly audiences. In order to do this, they need to be aware of the conversations. Crossref is providing this experimental data source in the hope that we can spur the development of more sophisticated tools for detecting and analyzing these conversations. Thankfully, this is an inexpensive experiment to conduct - largely thanks to the decision on the part of PLOS to open source its ALM code. What Now? Crossref’s instance of PLOS’s ALM code is an experiment. We mentioned that we had encountered scalability problems and that we had resolved some of them. But there are still big scalability issues to address. For example, assuming a response time of 1 second, if we wanted to poll the English-language version of the Wikipedia to see what had cited each of the 65 million DOIs held in Crossref, the process would take years to complete. But this is how the system is designed to work at the moment. It polls various source APIs to see if a particular DOI is “mentioned”. Parallelizing the queries might reduce the amount of time it takes to poll the Wikipedia, but it doesn’t reduce the work. Another obvious way in which we could improve the scalability of the system is to add a push mechanism to supplement the pull mechanism. Instead of going out and polling the Wikipedia 65 million times, we could establish a \u0026#8220;scholarly linkback” mechanism that would allow third parties to alert us when DOIs and other scholarly identifiers are referenced (e.g. cited, bookmarked, shared). 
If the Wikipedia used this, then even in an extreme case scenario (i.e. everything in Wikipedia cites at least one Crossref DOI), this would mean that we would only need to process ~ 4 million trackbacks. The other significant advantage of adding a push API is that it would take the burden off of Crossref to know what sources we want to poll. At the moment, if a new source comes online, we’d need to know about it and build a custom plugin to poll their data. This needlessly disadvantages new tools and services as it means that their data will not be gathered until they are big enough for us to pay attention to. If the service in question addresses a niche of the scholarly ecosystem, they may never become big enough. But if we allow sources to push data to us using a common infrastructure, then new sources do not need to wait for us to take notice before they can participate in the system. Supporting (potentially) many new sources will raise another technical issue- tracking and maintaining the provenance of the data that we gather. The current ALM system does a pretty good job of keeping data, but if we ever want third parties to be able to rely on the system, we probably need to extend the provenance information so that the data is cheaply and easily auditable. Perhaps the most important thing we want to learn from running this experimental ALM instance is: what it would take to run the system as a production service? What technical resources would it require? How could they be supported? And from this we hope to gain enough information to decide whether the service is worth running and, if so, by whom. Crossref is just one of several organizations that could run such a service, but it is not clear if it would be the best one. We hope that as we work with PLOS, our members and the rest of the scholarly community, we’ll get a better idea of how such a service should be governed and sustained. Details for Propellerheads Warning, Caveats and Weasel Words The Crossref ALM instance is a Crossref Labs project. It is running on R\u0026D equipment in a non-production environment administered by an orangutang on a diet of Redbulls and vodka. So what is working? The system has been initially loaded with 317,500+ Crossref DOIs representing publications from 2014. We will load more DOIs in reverse chronological order until we get bored or until the system falls over again. We have activated the following sources: PubMed DataCite PubMedCentral Europe Citations and Usage We have data from the following sources but will need some work to achieve stability: Facebook Wikipedia CiteULike Twitter Reddit Some of them are faster than others. Some are more temperamental than others. WordPress, for example, seems to go into a sulk and shut itself off after approximately 1,300 API calls. In any case, we will be monitoring and tweaking the sources as we gather data. We will also add new sources as we get requested API keys. We will probably even create one or two new sources ourselves. Watch this blog and we’ll update you as we add/tweak sources. Dammit, shut up already and tell me how to query stuff. You can login to the Crossref ALM instance simply using a Mozilla Persona (yes, we’d eventually like to support ORCID too). Once logged-in, your account page will list an API key. 
Using the API key, you can do things like: http://0-det-labs-crossref-org.libus.csd.mu.edu/api/v5/articles?ids=10.1038/nature12990 And you will see that (as of this writing), said Nature article has been cited by the Wikipedia article here:\nhttps://en.wikipedia.org/wiki/HE0107-5240#cite_ref-Keller2014_4-0;\nPLOS has provided lovely detailed instructions for using the API- So, please, play with the API and see what you make of it. On our side we will be looking at how we can improve performance and expand coverage. We don’t promise much- the logistics here are formidable. As we said above, once you start working with millions of documents, the polling process starts to hit API walls quickly. But that is all part of the experiment. We appreciate your helping us and would like your feedback. We can be contacted at: ", "headings": ["Background","Why?","No, really, WHY?","What Now?","Details for Propellerheads"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2013/", "title": "2013", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-unambiguously-and-persistently-identify-published-trustworthy-citable-online-scholarly-literature-right/", "title": "DOIs unambiguously and persistently identify published, trustworthy, citable online scholarly literature. Right?", "subtitle":"", "rank": 1, "lastmod": "2013-09-20", "lastmod_ts": 1379635200, "section": "Blog", "tags": [], "description": "The South Park movie , “Bigger, Longer \u0026amp; Uncut” has a DOI:\na) http://0-dx-doi-org.libus.csd.mu.edu/10.5240/B1FA-0EEC-C316-3316-3A73-L\nSo does the pornographic movie, “Young Sex Crazed Nurses”:\nb) http://0-dx-doi-org.libus.csd.mu.edu/10.5240/4CF3-57AB-2481-651D-D53D-Q\nAnd the following DOI points to a fake article on a “Google-Based Alien Detector”:\nc) http://0-dx-doi-org.libus.csd.mu.edu/10.6084/m9.figshare.93964\nAnd the following DOI refers to an infamous fake article on literary theory:\nd) http://0-dx-doi-org.libus.csd.mu.edu/10.2307/466856\nThis scholarly article discusses the entirely fictitious Australian “Drop Bear”:\ne) http://0-dx-doi-org.libus.csd.mu.edu/10.1080/00049182.2012.731307\nThe following two DOIs point to the same article- the first DOI points to the final author version, and the second DOI points to the final published version:", "content": " The South Park movie , “Bigger, Longer \u0026amp; Uncut” has a DOI:\na) http://0-dx-doi-org.libus.csd.mu.edu/10.5240/B1FA-0EEC-C316-3316-3A73-L\nSo does the pornographic movie, “Young Sex Crazed Nurses”:\nb) http://0-dx-doi-org.libus.csd.mu.edu/10.5240/4CF3-57AB-2481-651D-D53D-Q\nAnd the following DOI points to a fake article on a “Google-Based Alien Detector”:\nc) http://0-dx-doi-org.libus.csd.mu.edu/10.6084/m9.figshare.93964\nAnd the following DOI refers to an infamous fake article on literary theory:\nd) http://0-dx-doi-org.libus.csd.mu.edu/10.2307/466856\nThis scholarly article discusses the entirely fictitious Australian “Drop Bear”:\ne) http://0-dx-doi-org.libus.csd.mu.edu/10.1080/00049182.2012.731307\nThe following two DOIs point to the same article- the first DOI points to the final author version, and the second DOI points to the final published version:\nf) 
https://web.archive.org/web/20160423204031/https://figshare.com/articles/Relating_ion_channel_expression,_bifurcation_structure,_and_diverse_firing_patterns_in_a_model_of_an_identified_motor_neuron/96546\ng) http://0-dx-doi-org.libus.csd.mu.edu/10.1007/s10827-012-0416-6\nThis following two DOIs point to the same article- there is no apparent difference between the two copies:\nh) http://0-dx-doi-org.libus.csd.mu.edu/10.6084/m9.figshare.91541\ni) http://0-dx-doi-org.libus.csd.mu.edu/10.1038/npre.2012.7151.1\nAnother example where two DOIs point to the same article and there is no apparent difference between the two copies:\nj) http://0-dx-doi-org.libus.csd.mu.edu/10.1364/AO.39.005477\nk) http://0-dx-doi-org.libus.csd.mu.edu/10.3929/ethz-a-005707391\nThese journals assigned DOIs, but not through Crossref:\nl) http://0-dx-doi-org.libus.csd.mu.edu/10.3233/BIR-2008-0496\nm) https://web.archive.org/web/20160423192452/https://figshare.com/articles/Role_of_brain_glutamic_acid_metabolism_changes_in_neurodegenerative_pathologies/95564\nn) http://0-dx-doi-org.libus.csd.mu.edu/10.3205/cto000081\nThese two DOIs are assigned to two different data sets by two different RAs:\no) http://0-dx-doi-org.libus.csd.mu.edu/10.1107/S0108767312019034/eo5016sup1.xls\np) http://0-dx-doi-org.libus.csd.mu.edu/10.1594/PANGAEA.726855\nThis DOI appears to have been published, but was not registered until well after it was published. There were 254 unsuccessful attempts to resolve it in September 2012 alone:\nq) http://0-dx-doi-org.libus.csd.mu.edu/10.4233/uuid:995dd18a-dc5d-4a9a-b9eb-a16a07bfcc6d\nThe owner of prefix, ‘10.4223,’ who is responsible for the above DOI had 378,790 attempted resolutions in September 2012 of which there were 377,001 failures. The top 10 DOI failures for this prefix each garnered over 200 attempted resolutions. As of November 2012 the prefix had only registered 349 DOIs.\nOf the above 16 example DOIs 11 cannot be used for CrossCheck or Crossmark. 3 cannot be used with content negotiation. To search metadata for the above examples, you need to visit four sites:\nhttps://web.archive.org/web/20131229210637/http://0-search-crossref-org.libus.csd.mu.edu/\nhttps://ui.eidr.org/search\nhttps://www.medra.org/en/search.htm\nhttps://search.datacite.org/\nThe 14 examples come from just 4 of the 8 existing DOI registration agencies (RAs) It is virtually impossible for somebody without specialized knowledge to tell which DOIs are Crossref DOIs and which ones are not.\nBackground So DOIs unambiguously and persistently identify published, trustworthy, citable online scholarly literature. Right? Wrong.\nThe examples above are useful because they help elucidate some misconceptions about the DOI itself, the nature of the DOI registration agencies and, in particular issues being raised by new RAs and new DOI allocation models.\nDOIs are just identifiers Crossref’s dominance as the primary DOI registration agency makes it easy to assume Crossref’s particular application of the DOI as a scholarly citation identifier is somehow intrinsic to the DOI. The truth is, the DOI has nothing specifically to do with citation or scholarly publishing. It is simply an identifier that can be used for virtually any application. DOIs could be used as serial numbers on car parts, as supply-chain management identifiers for videos and music or as cataloguing numbers for museum artifacts. The first two identifiers listed in the examples (a \u0026amp; b) illustrate this. 
They both belong to MovieLabs and are part of the EIDR (Entertainment Identifier Registry) effort to create a unique identifier for television and movie assets. At the moment, the DOIs that MoveLabs are assigning are B2B-focused and users are unlikely to see them in the wild. But we should recall that Crossref’s application of DOIs was also initially considered a B2B identifier- but it has since become widely recognized and depended on by researchers, librarians and third parties. The visibility of EIDR DOIs could change rapidly as they become more popular.\nMultiple DOIs can be assigned to the same object There is no International DOI Foundation (IDF) prohibition against assigning multiple DOIs to the same object. At most the IDF suggests that RAs might coordinate to avoid duplicate assignments, but it provides no guidelines on how such cross-RA checks would work.\nCrossref, in its particular application of the DOI, attempts to ensure that we don’t assign two different copies of the same article with different DOIs, but that is designed in order to avoid having publishers mistakenly making duplicate submissions. Even then, there are subtle exceptions to this rule- the same article, if legitimately published in two different issues (e.g. a regular issue and a thematic issue) will be assigned different DOIs. This is because, though the actual article content might be identical, the context in which it is cited is also important to record and distinguish. Finally, of course, we assign multiple DOIs to the same “object” when we assign book-level and chapter level DOIs. Or when we assign DOIs to components or reference work entries.\nThe likelihood of multiple DOIs being assigned to the same object increases as we have multiple RAs. In the future we might legitimately have a monograph that has different Bowker DOIs for different e-book platforms (Kindle, iPad, Kobo.) yet all three might share the same Crossref DOI for citation purposes.\nAgain, the examples show this already happening. The examples f \u0026amp; g are assigned by DataCite (via FigShare) and Crossref respectively. The first identifies the author version and was presumably assigned by said author. The second identifies the publisher version and was assigned by the publisher.\nAlthough Crossref, as a publisher-focused RA, might have historically proscribed the assignment of Crossref DOIs to archive or author versions, there has never been and could never be any such restrictions on other DOI RAs. These are legitimate applications of two citation identifiers to two versions of the same article.\nHowever, the next set of examples, h, i, j and k show what appears to be a slightly different problem. In these cases articles that appear to be in all aspects identical have been assigned two separate DOIs by different RAs. In one respect this is a logistical or technical problem- although Crossref can check for such potential duplicate assignments within its own system, there is no way for us to do this across different RAs. 
But this is also a marketing and education problem- how do RAs with similar constituencies (publishers, researchers, librarians) and application of the DOI (scholarly citation) educate and inform their members about best practice in applying DOIs in that particular RAs context?\nDOI registration agencies are not focused on record types, they are focused on constituencies and applications The examples f through k also illustrate another area of fuzzy thinking about RAs- that they are somehow built around particular record types. We routinely hear people mistakenly explain that difference between Crossref and DataCite is that “Crossref assigns DOIs to journal articles” and that “DataCite assigns DOIs to data.” Sometimes this is supplemented with “and Bowker assigns DOIs to books.” This is nonsense. Crossref assigns DOIs to data (example o) as well as conference proceedings, programs, images, tables, books, chapters, reference entries, etc. And DataCite covers a similar breadth of record types including articles (examples c, h, f, l, m ). The difference between Crossref, DataCite and Bowker is their constituencies and applications- not the record types they apply DOIs to. Crossref’s constituency is publishers. DataCite’s constituency is data repositories, archives and national libraries. But even though Crossref and DataCite have different constituencies, they share a similar application of the DOI- that is the use of DOI as citation identifiers. This is in contrast to MovieLabs whose application of the DOI is supply chain management.\nDOI registration agency constituencies and applications can overlap or be entirely separate Although Crossref’s constituency is “publishers”, we are catholic in our definition of “publisher” and have several members who run repositories that also “publish” content such as working papers and other grey literature (e.g. Woods Hole Oceanographic Institution, University of Michigan Library, University of Illinois Library). DataCite’s constituency is data repositories, archives and national libraries, but this doesn’t stop DataCite (through CDL/FigShare) from working with the publisher, PLoS, on their “Reproducibility Initiative” which requires the archiving of article-related datasets. PloS has announced that they will host all supplemental data sets on FigShare but will assign DOIs to those items through Crossref.\nCrossref’s constituency of publishers overlaps heavily with Airiti, JaLC, mEDRA, ISTIC and Bowker. In the case of all but Bowker we also overlap in our application of the DOI in the service of citation identification. Bowker, though it shares Crossref’s constituency, uses DOIs for supply chain management applications.\nMeanwhile, EIDR is an outlier, its constituency does not overlap with Crossref’s and its application of the DOI is different as well.\nThe relationship between RA constituency overlap (e.g. scholarly publishers vs television/movie studios) and application overlap (e.g. citation identification vs. supply chain management) can be visualized as such:\nThe differences (subtle or large) between the various RAs are not evident to anybody without a fairly sophisticated understanding of the identifier space and the constituencies represented by the various RAs. 
To the ordinary person these are all just DOIs, which in turn are described as simply being “persistent interoperable identifiers.”\nWhich of course begs the question, what do we mean by “persistent” and “interoperable?”\nDOIs only are as persistent as the registration agency’s application warrants. The word “persistent” does not mean “permanent.” Andrew Treloar is known to point out that the primary sense of the word “persistent” in the New Oxford American Dictionary is:\nContinuing firmly or obstinately in a course of action in spite of difficulty or opposition\nYet presumably the IDF once chose to use the word “persistent” instead of “perpetual” or “permanent” for other reasons. “Persistence” implies longevity, without committing to “forever.”\nIt may sound prissy, but it seems reasonable to expect that the useful life-expectancy for the identifier used for managing inventory of the the movie “Young Sex Crazed Nurses” might be different than the life expectancy for the identifier used to cite Henry Oldenburg’s “Epistle Dedicatory” in the first issue of the Philosophical Transactions. In other words, some RAs have a mandate to be more “obstinate” than others and so their definitions of “persistence” may vary. Different RAs have different service level agreements.\nThe problem is that ordinary users of the “persistent” DOI have no way of distinguishing between those DOIs that are expected to have a useful life of 5 years and those DOIs that are expected to have a useful lifespan of 300+ years. Unfortunately, if one of the more than 6 million non-Crossref DOIs breaks today, it will likely be blamed on Crossref.\nSimilarly, if a DOI doesn’t work with an existing Crossref service, like CrossCheck, Crossmark or Crossref Metadata Search, it will also be laid at the foot of Crossref. This scenario is likely to become even more complex as different RAs provide different specialized services for their constituencies.\nIronically, the converse doesn’t always apply. Crossref oftentimes does not get credit for services that we instigated at the IDF level. For instance, FigShare has been widely praised for implementing content negotiation for DOIs even though this initiative had nothing to do with FigShare, instead it was implemented by DataCite with the prodding and active help of Crossref (DataCite even used Crossref’s code for a while). To be clear, we don’t begrudge praise for FigShare. We think FigShare is very cool- this just serves as an example of the confusion that is already occurring.\nDOIs are only “interoperable” at a least common denominator level of functionality There is no question that use of Crossref DOIs has enabled the interoperability of citations across scholarly publisher sites. The extra level of indirection built into the DOI means that publishers do not have to worry about negotiating multiple bilateral linking agreements and proprietary APIs. Furthermore, at the mundane technical level of following HTTP links, publishers also don’t have to worry about whether the DOI was registered with mEDRA, DataCite or Crossref as long as the DOI in question was applied with citation linking in mind.\nHowever, what happens if somebody wants to use metadata to search for a particular DOI? What happens if they expect that DOI to work with content negotiation or to enable a CrossCheck analysis or show a Crossmark dialog or carry FundRef data? At this level, the purported interoperability of the DOI system falls apart. A publisher issuing DataCite DOIs cannot use CrossCheck. 
A user with a mEDRA DOI cannot use it with content negotiation. Somebody searching Crossref Metadata Search or using Crossref’s OpenURL API will not find DataCite records. Somebody depositing metadata in an RA other than Crossref or DataCite will not be able to deposit ORCIDs.\nThere are no easy or cheap technical solutions to fix this level of incompatibility barring the creation of a superset of all RA functionality at the IDF level. But even if we had a technical solution to this problem, it isn’t clear that such a high level of interoperability is warranted across all RAs. The degree of interoperability that is desirable between RAs is only in proportion to the degree that they serve overlapping constituencies (e.g. publishers) or use the DOI for overlapping applications (e.g. citation).\nDOI Interoperability matters more for some registration agencies than others This raises the question of what it even means to be “interoperable” between different RAs that share virtually no overlap in constituencies or applications. In what meaningful sense do you make a DOI used for inventory control “interoperable” with a DOI used for identifying citable scholarly works? Do we want to be able to check “Young Sex Crazed Nurses” for plagiarism? Or let somebody know when the South Park movie has been retracted or updated? Do we need to alert somebody when their inventory of citations falls below a certain threshold? Or let them know how many copies of a PDF are left in the warehouse?\nThe opposite, but equally vexing, issue arises for RAs that actually share constituencies and/or applications. Crossref, DataCite and mEDRA have all built separate metadata search capabilities, separate deposit APIs, separate OpenURL APIs, and separate stats packages, all geared at handling scholarly citation linking.\nFinally, it seems a shame that a third party, like ORCID, who wants to enable researchers to add any DOI and its associated metadata to their ORCID profile, will end up having to interface with 4-5 different RAs.\nSummary and closing thoughts Crossref was founded by publishers who were prescient in understanding that, as scholarly content moved online, there was the potential to add great value to publications by directly linking citations to the documents cited. However, publishers also realized that many of the architectural attributes that made the WWW so successful (decentralization, simple protocols for markup, linking and display, etc.) also made the web a fragile platform for persistent citation.\nThe Crossref solution to this dilemma was to introduce the use of the DOI identifier as a level of citation indirection in order to layer a persist-able citation infrastructure onto the web. The success of this mechanism has been evident at a number of levels. A first-order effect of the system is that it has allowed publishers to create reliable and persistent links between copies of publisher content. Indeed, uptake of the Crossref system by scholarly and professional publishers has been rapid and almost all serious scholarly publishers are now Crossref members.\nThe second-order effects of the Crossref system have also been remarkable. Firstly, just as researchers have long expected that any serious paper-based publication would include citations, now researchers expect that serious online scholarly publications will also support robust online citation linking. 
Secondly, some have adopted a cargo-cult practice of seeing the mere presence of a DOI on a publication as a putative sign of “citability” or “authority.” Thirdly, interest in use of the DOI as a linking mechanism has started to filter out to researchers themselves, thus potentially extending the use of Crossref DOIs beyond being primarily a B2B citation convention.\nThe irony is that although the DOI system was almost single-handedly popularized and promoted by Crossref, the DOI brand is better known than Crossref itself. We now find that new RAs like EIDR, DataCite and new services like FigShare are building on the DOI brand and taking it in new directions. As such the first and second order benefits of Crossref’s pioneering work with DOIs are likely to be effected by the increasing activity of the new DOI RAs as well as the introduction of new models for assigning and maintaining DOIs.\nHow can you trust that a DOI is persistent if different RAs have different conceptions of persistence? How can you expect the presence of a DOI to indicate “authority” or “scholarliness” if DOIs are being assigned to porn movies? How can you expect a DOI to point to the “published” version of an article when authors can upload and assign DOIs to their own copies of articles?\nIt is precisely because we think that some of the qualities traditionally (and wrongly) accorded to DOIs (e.g. scholarly, published, stewarded, citable, persistent) are going to be diluted in the long term that we have focused so much of our recent attention on new initiatives that have a more direct and unambiguous connection to assessing the trustworthiness of Crossref member’s content. CrossCheck and the CrossCheck logos are designed to highlight the role that publishers play in detecting and preventing academic fraud. The Crossmark identification service will serve as a signal to researchers that publishers are committed to maintaining their scholarly content as well as giving scholars the information they need to verify that they are using the most recent and reliable versions of a document. FundRef is designed to make the funding sources for research and articles transparent and easily accessible. And finally we have been both adjusting Crossref’s branding and display guidelines as well as working with the IDF to refine its branding and display guidelines so as to help clearly differentiate different DOI applications and constituencies.\nWhilst it might be worrying to some that DOIs are being applied in ways that Crossref has not expected and may not have historically endorsed, we should celebrate that the broader scholarly community is finally recognizing the importance of persist-able citation identifiers.\nThese developments also serve to reinforce a strong trend that we have encountered in several guises before. That is, the complete scholarly citation record is made up of more than citations to the formally published literature. Our work on ORCID underscored that researchers, funding agencies, institutions and publishers are interested in developing a more holistic view of the manifold contributions that are integral to research. The “C” in ORCID stands for “contributor” and ORCID profiles are designed to ultimately allow researchers to record “products” which include not only formal publications, but also data sets, patents, software, web pages and other research outputs. 
Similarly, Crossref’s analysis of the CitedBy references revealed that one in fifteen references in the scholarly literature published in 2012 included a plain, ordinary HTTP URI- clear evidence that researchers need to be able to cite informally published content on the web. If the trend in CitedBy data continues, then in two to three years one in ten citations will be of informally published literature.\nThe developments that we are seeing are a response to the need that users have to persistently identify and cite the full gamut of record types that make up the scholarly literature. If we cannot persistently cite these record types, the scholarly citation record will grow increasingly porous and structurally unsound. We can either stand back and let these gaps be filled by other players under their terms and deal reactively with the confusion that is likely to ensue- or we can start working in these areas too and help to make sure that what gets developed interacts with the existing online scholarly citation record in a responsible way.\n", "headings": [" ","Background","DOIs are just identifiers","Multiple DOIs can be assigned to the same object","DOI registration agencies are not focused on record types, they are focused on constituencies and applications","DOI registration agency constituencies and applications can overlap or be entirely separate","DOIs only are as persistent as the registration agency’s application warrants.","DOIs are only “interoperable” at a least common denominator level of functionality","DOI Interoperability matters more for some registration agencies than others","Summary and closing thoughts"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/fundref-widget/", "title": "Funding data widget", "subtitle":"", "rank": 1, "lastmod": "2013-07-31", "lastmod_ts": 1375228800, "section": "Labs", "tags": [], "description": "A number of Crossref members have told us that they are struggling with implementing a usable funding data collection user interface in their content management systems. To that end, we have worked with some user interface consultants to come up with a \u0026ldquo;funding data widget\u0026rdquo; that illustrates some of the considerations one has to take into account when collecting funding data.\nSpecifically the funding data widget illustrates how to:", "content": "\rA number of Crossref members have told us that they are struggling with implementing a usable funding data collection user interface in their content management systems. To that end, we have worked with some user interface consultants to come up with a \u0026ldquo;funding data widget\u0026rdquo; that illustrates some of the considerations one has to take into account when collecting funding data.\nSpecifically the funding data widget illustrates how to:\nAutocomplete on Funder Registry names Handle multiple funders Handle multiple award numbers Handle sub-organizations Handle funding agencies that have the same name, but are in different locations (e.g. \u0026ldquo;Academy of Sciences\u0026rdquo;) Handle sub-organizations that have the same name, but different parent organizations (e.g. \u0026ldquo;Division of Physics\u0026rdquo;) Query the funding data API to make sure your Funder Registry list is always current Allow users to manually enter a funding agency if it is not already included in the Funder Registry. The funding data widget source is posted on Github and is released under an open source license (MIT). 
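As a rough sketch of the kind of lookup that could back the autocomplete and keep a local copy of the Funder Registry current (this is our own illustration rather than code from the widget itself, and the funders route of the Crossref REST API with its query parameter is an assumption to be checked against the current API documentation), a name fragment can simply be sent to the funders endpoint:\n$ curl \u0026#34;https://api.crossref.org/funders?query=wellcome\u0026#34; Each item in the response should carry the funder’s primary name, any alternative names and its Funder Registry identifier- the identifier being the value that a widget like this ultimately needs to deposit. 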
We encourage anybody implementing funding data to either use the Widget as-is or base their own UX on the widget.\nAs usual, we welcome feedback and snark at:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/fundref-reconciliation-service/", "title": "Open Funder Reconciliation Service", "subtitle":"", "rank": 1, "lastmod": "2013-06-10", "lastmod_ts": 1370822400, "section": "Labs", "tags": [], "description": "What? The Open Funder Registry Reconciliation Service is designed to help members (or anybody) more easily clean-up their funder data and map it to the Open Funder Registry. It is built on Open Refine and Funding Data Search.\nIf you are impatient and want to see it working, then skip to the short 15-minute video tutorial.\nOr read on…\nWhy? Some of our members have been collecting data related to the funding of publications for years.", "content": "\rWhat? The Open Funder Registry Reconciliation Service is designed to help members (or anybody) more easily clean-up their funder data and map it to the Open Funder Registry. It is built on Open Refine and Funding Data Search.\nIf you are impatient and want to see it working, then skip to the short 15-minute video tutorial.\nOr read on…\nWhy? Some of our members have been collecting data related to the funding of publications for years. Ideally, we would like them to submit Open Funder Registry identifiers for their backfiles into the Crossref system. However, we realize that there are at least two major barriers to doing this:\nPublisher’s funder data may not have used a controlled vocabulary and may be inconsistent (e.g. they list “NASA” under both “NASA” and “National Aeronautics and Space Administration”) Publishers have no easy way to map their existing, home-grown funder identifiers to the new standard Open Funder identifiers. How? We have created the Open Funder Reconciliation Service to work with The Open Refine desktop application. The Open Refine application provides a number of powerful tools that allow one to clean-up messy metadata and the Open Funder Reconciliation Service will allow one to take that cleaned-up data and semi-automatically map it to the latest version of the Open Funder Registry.\nBasically, you start with a tab-delimited text file which includes every funder name you have recorded along with whatever internal identifier you use for that funder. For example:\npublisher_id\tname 1000001\tUniversity of Oxford 1000002\tOxford University 1000003\tWelcome Trust 1000004\tUS Department of Transportation 1000005\tNational Aeronautics and Space Administration 1000006\tNational Institutes of Health 1000007\tU.S. Department of Transportation 1000008\tUS Department of Transport 1000009\tNASA 1000010\tWellcome Trust 1000011\tSloan Foundation 1000012\tNational Institute of Health 1000013\tAll Souls College, University of Oxford 1000014\tRinky Dink Foundation Once you load that text file into Open Refine and process it, you will be left with a file that looks like this:\npublisher_id name fundref_funder_id 1000001 University Of Oxford http://0-dx-doi-org.libus.csd.mu.edu/10.13039/501100000769 1000002 University Of Oxford http://0-dx-doi-org.libus.csd.mu.edu/10.13039/501100000769 1000003 Wellcome Trust http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100004440 1000004 U.S. 
Department of Transportation http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000140 1000005 National Aeronautics and Space Administration http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000104 1000006 National Institutes of Health http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000002 1000007 U.S. Department of Transportation http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000140 1000008 U.S. Department of Transportation http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000140 1000009 National Aeronautics and Space Administration http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000104 1000010 Wellcome Trust http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100004440 1000011 Alfred P. Sloan Foundation http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000879 1000012 National Institutes of Health http://0-dx-doi-org.libus.csd.mu.edu/10.13039/100000002 1000013 All Souls College, University of Oxford http://0-dx-doi-org.libus.csd.mu.edu/10.13039/501100000524 1000014 Rinky Dink Foundation You can then use this latter file to convert all of your internal funder names and identifiers into Open Funder-compatible names and identifiers.\nYou can see a complete demo of how the system works by watching this 15-minute video tutorial. The Open Refine website also has some more general tutorials on using the tool that you may find helpful.\nOnce you have watched the video, you can practice with the tool yourself. Simply download the Open Refine application (runs on OSX, Windows and Linux) and our short, sample file of publisher funder data. Two key pieces of information you may not want to re-type are:\nThe URL for the Open Funder Reconciliation Service\nhttp://recon.labs.crossref.org/reconcile\nThe “Refine Expression Language” command for creating a new column of Open Funder Identifiers:\ncell.recon.match.id\nObviously, we welcome your feedback. Send comments/questions/gripes to:\n", "headings": ["What?","Why?","How?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/easily-add-publications-to-your-orcid-profile/", "title": "Easily add publications to your ORCID profile", "subtitle":"", "rank": 1, "lastmod": "2013-01-24", "lastmod_ts": 1358985600, "section": "Blog", "tags": [], "description": "You can now easily search for publications and add them to your ORCID profile in the new beta of Crossref Metadata Search (CRMDS). The user interface is pretty self-explanatory, but if you want to read about it before trying it, here is a summary of how it works.\nWhen you go to CRMDS, you will see that there is now a small ORCID sign-in button on the top right-hand side of the screen.", "content": "You can now easily search for publications and add them to your ORCID profile in the new beta of Crossref Metadata Search (CRMDS). The user interface is pretty self-explanatory, but if you want to read about it before trying it, here is a summary of how it works.\nWhen you go to CRMDS, you will see that there is now a small ORCID sign-in button on the top right-hand side of the screen.\nclick on thumbnail to see larger image\nClicking on this button allows you to connect CRMDS to your ORCID profile and authorises CRMDS to add publications to your profile. 
First, if you are not already logged into ORCID, CRMDS will ask ORCID to log you in:\nclick on thumbnail to see larger image\nOnce you have logged in, ORCID will ask you if you want to allow CRMDS to be able to view and update your ORCID profile:\nclick on thumbnail to see larger image\nAfter you authorise CRMDS to access your profile, you will be returned to the CRMDS screen and the top right corner of the CRMDS page will indicate that you have connected to your ORCID profile (note, you can always de-authorise CRMDS from accessing your ORCID profile in your ORCID settings):\nclick on thumbnail to see larger image\nOnce you are logged in, you can enter search terms that are likely to return records of your publications:\nclick on thumbnail to see larger image\nEach search result will show an icon telling you whether that particular item is visible in your ORCID profile. If the item is not in your ORCID profile, you see an icon like this:\nAnd if the item is already in your ORCID profile, you will see an icon like this:\nIn the following search results you can see that 1 item is already in Josiah Carberry’s profile, and 2 items are not: click on thumbnail to see larger image\nClicking on the “Add to Profile” button will confirm that you want to add the specified publication to your ORCID profile:\nclick on thumbnail to see larger image\nAfter clicking on \u0026#8220;Yes\u0026#8221; to add the publication to your profile, the search results will refresh to reflect that the item has been added. click on thumbnail to see larger image\nYou can then just continue searching for and adding any publications that are not in your ORCID profile. Note that, occasionally, you may see an orange icon that says that an item is \u0026#8220;Not Visible\u0026#8221; click on thumbnail to see larger image\nThis only occurs when you have previously added an item to your profile using CRMDS and then either: Set the ORCID privacy for that particular work item to “Private” in your ORCID profile. Deleted the work from your ORCID profile. Unfortunately, CRMDS has no way to determine which of these two events occurred However, If you click on the “Not Visible” icon, you will be prompted with two ways to resolve this issue. Either you can:\nReset the privacy settings on the specified work to “Public” or “Limited” Confirm to CRMDS that you have deleted the item from your profile. click on thumbnail to see larger image\nIf the issue was your privacy settings, then once you have changed the privacy settings to public/limited you can simply click on the \u0026#8220;Refresh\u0026#8221; button and CRMDS will reflect the correct status of the work. The best way to avoid this kind of confusion is to go to your ORCID settings and set the default privacy level for \u0026#8220;works\u0026#8221; to either \u0026#8220;limited\u0026#8221; or \u0026#8220;public.\u0026#8221; Crossref Metadata Search is still a \u0026#8220;Crossref Labs\u0026#8221; project and, as such, we are very interested to hear feedback on this new ORCID functionality for CRMDS. Please send comments, etc. to: ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/citation-formatting-service/", "title": "Citation Formatting Service", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has graduated. 
This description has been kept for reference, but many of the links and/or services that appear below no longer work.\nIts functionality is now a standard feature of DOI content negotiation.\nCrossref labs has added two new content types to dx.doi.org resolution for Crossref DOIs. These allow anyone to retrieve DOI bibliographic metadata as formatted bibliographic entries. To perform the formatting we’re using the citation style language processor, citeproc-js which supports a shed load of citation styles and locales.", "content": " Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work.\nIts functionality is now a standard feature of DOI content negotiation.\nCrossref labs has added two new content types to dx.doi.org resolution for Crossref DOIs. These allow anyone to retrieve DOI bibliographic metadata as formatted bibliographic entries. To perform the formatting we’re using the citation style language processor, citeproc-js which supports a shed load of citation styles and locales. In fact, all the styles and locales found in the CSL repositories, including many common styles such as bibtex, apa, ieee, harvard, vancouver and chicago are supported.\nFirst off, if you’d like to try citation formatting without using content negotiation, there\u0026rsquo;s a simple web UI that allows input of a DOI, style and locale selection.\nIf you’re more into accessing the web via your favorite programming language, have a look at these content negotiation curl examples. To make a request for the new “text/bibliography” content type:\n$ curl -LH \u0026#34;Accept: text/bibliography; style=bibtex\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842 And get:\n@article{Atkins_Gershell_2002, title={From the analyst\u0026#39;s couch: Selective anticancer drugs}, volume={1}, DOI={10.1038/nrd842}, number={7}, journal={Nature Reviews Drug Discovery}, author={Atkins, Joshua H. and Gershell, Leland J.}, year={2002}, month={Jul}, pages={491-492}} A locale can be specified with the “locale” content type parameter, like this:\n$ curl -LH \u0026#34;Accept: text/bibliography; style=mla; locale=fr-FR\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842 Which gets you:\nAtkins, Joshua H., et Leland J. Gershell. « From the analyst\u0026#39;s couch: Selective anticancer drugs ». Nature Reviews Drug Discovery 1.7 (2002): 491-492. You may want to process metadata through CSL yourself. 
For this use case, there’s another new content type, “application/citeproc+json” that returns metadata in a citeproc-friendly JSON form:\n$ curl -LH \u0026#34;Accept: application/citeproc+json\u0026#34; http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842 Returns the following\n{ \u0026#34;volume\u0026#34;:\u0026#34;1\u0026#34;, \u0026#34;issue\u0026#34;:\u0026#34;7\u0026#34;, \u0026#34;DOI\u0026#34;:\u0026#34;10.1038/nrd842\u0026#34;, \u0026#34;title\u0026#34;:\u0026#34;From the analyst\u0026#39;s couch: Selective anticancer drugs\u0026#34;, \u0026#34;container-title\u0026#34;:\u0026#34;Nature Reviews Drug Discovery\u0026#34;, \u0026#34;issued\u0026#34;:{\u0026#34;date-parts\u0026#34;:[[2002,7]]}, \u0026#34;author\u0026#34;:[{\u0026#34;family\u0026#34;:\u0026#34;Atkins\u0026#34;,\u0026#34;given\u0026#34;:\u0026#34;Joshua H.\u0026#34;},{\u0026#34;family\u0026#34;:\u0026#34;Gershell\u0026#34;,\u0026#34;given\u0026#34;:\u0026#34;Leland J.\u0026#34;}], \u0026#34;page\u0026#34;:\u0026#34;491-492\u0026#34;, \u0026#34;type\u0026#34;:\u0026#34;article-journal\u0026#34; } Finally, to retrieve lists of supported styles and locales, either hit these URLs:\nhttps://api.crossref.org/styles https://0-api-crossref-org.libus.csd.mu.edu/locales or check out the CSL style and locale repositories.\nThere’s one big caveat to all this. The CSL processor will do its best with Crossref metadata which can unfortunately be quite patchy at times. There may be pieces of metadata missing, inaccurate metadata or even metadata items stored under the wrong field, all resulting in odd-looking formatted citations. Most of the time, though, it works.\nIf you have any comments, suggestions or bug reports please fee free to send them to us here at:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/family-name-detector/", "title": "Family Name Detector", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Overview\nCrossref Labs has created a small web API that wraps a family name database here at Crossref R\u0026amp;D. The database, built from Crossref’s metadata, lists all unique family names that appear as contributors to articles, books, datasets and so on that are known to Crossref.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Overview\nCrossref Labs has created a small web API that wraps a family name database here at Crossref R\u0026amp;D. The database, built from Crossref’s metadata, lists all unique family names that appear as contributors to articles, books, datasets and so on that are known to Crossref. As such the database likely accounts for the majority of family names represented in the scholarly record.\nThe web API comes with two services: a family name detector that will pick out potential family names from chunks of text and a family name autocompletion system.\nVery brief documentation can be found here along with a jQuery example of autocompletion.\nThe Weasel Speaks…\nThe database is still in development so there may be some oddities and inaccuracies in there. Right now one obvious omission from the name list that I hope to address soon are double-worded names such as “von Neumann”. 
We’re not proposing this database as an authority but rather something that backs a practical service for family name detection and autocompletion.\nIf you have any comments, suggestions, bug reports or (hint, hint) patches, please feel free to send them to us here at:\nOr just snipe about it on Google+.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/linked-periodical-data/", "title": "Linked Periodical Data", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. We already make a list of Crossref members and titles available for free on our web site, including a downloadable comma-separated list of them (caution: this file is large ~5MB). But this format isn’t really amenable to easy integration with linked-data.\nWe worked with Talis on a project to create a linked periodical data resource on their Data Incubator platform.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. We already make a list of Crossref members and titles available for free on our web site, including a downloadable comma-separated list of them (caution: this file is large ~5MB). But this format isn’t really amenable to easy integration with linked-data.\nWe worked with Talis on a project to create a linked periodical data resource on their Data Incubator platform. This has since been superseded by our support for content negotiation.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/openurl/", "title": "OpenURL", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "First, Get an API Key. It takes no time at all. Just fill in the form and we will activate a key for you. Note that we do this because occasionally we need to get in touch with somebody if their scripts start misbehaving and are hammering our systems. The alternative is to simply block the script- and we wouldn’t want to do that, right?\nIn the following examples you need to substitute API_KEY with the key that you get from us.", "content": "\rFirst, Get an API Key. It takes no time at all. Just fill in the form and we will activate a key for you. Note that we do this because occasionally we need to get in touch with somebody if their scripts start misbehaving and are hammering our systems. The alternative is to simply block the script- and we wouldn’t want to do that, right?\nIn the following examples you need to substitute API_KEY with the key that you get from us.\nOften you just want to take an existing Crossref DOI and look up the metadata for it. For that you want to use our OpenURL query. So, if you have the Crossref DOI:\n10.3998/3336451.0009.101 You can simply construct a URL like this:\nhttp://www.crossref.org/openurl/?id=doi:10.3998/3336451.0009.101\u0026amp;noredirect=true\u0026amp;pid=API_KEY\u0026amp;format=unixref And you will get an XML representation of the metadata. Unless, of course, you forgot to substitute the API_KEY in the URL above with the one that you activated in the first step… 😉\nThe OpenURL query interface to Crossref is capable of much more than just DOI lookups. 
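Even this basic lookup is easy to script. For example, the same request can be made from the command line with curl (a sketch only; as above, API_KEY is a placeholder that you must replace with your own key):\n$ curl \u0026#34;http://www.crossref.org/openurl/?id=doi:10.3998/3336451.0009.101\u0026amp;noredirect=true\u0026amp;pid=API_KEY\u0026amp;format=unixref\u0026#34; This returns the same UNIXREF XML record described above. 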
You can visit our documentation to get more detailed examples of fielded searches, etc.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/patentcite/", "title": "PatentCite", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work.\nEquivalent functionality can now be found in Crossref Metadata Search.\nCrossref has been working with Cambia and the The Lens to explore how we can better link scholarly literature to and from the patent literature. The first step of our collaboration was to link Lens entries to the relevant Scholarly literature (For example)", "content": " Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work.\nEquivalent functionality can now be found in Crossref Metadata Search.\nCrossref has been working with Cambia and the The Lens to explore how we can better link scholarly literature to and from the patent literature. The first step of our collaboration was to link Lens entries to the relevant Scholarly literature (For example)\nCrossref has taken this matched data and has now released a Crossref Labs experimental service, called PatentCite, that allows you to take any Crossref DOI and see what Patents in the Lens system cite it.\nAs with all Crossref Labs services- this one is likely to be:\na) As stable as the global economy\nc) As reliable as a UK train\nii) Out-of-date. It is based on a snapshot of Crossref /Lens data.\nAs accurate as my list ordering Howzat for an SLA?\nAs we get feedback from Crossref’s membership and as we gain more experience linking Patents to and from the scholarly literature, we will explore including this functionality in our production CitedBY service. But until then, please send us your feedback on this experimental service to:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/pdfextract/", "title": "pdfextract", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Since the retirement of this project, we recommend that you use the excellent Cermine instead.\nPdf-extract is an open source set of tools and libraries for identifying and extracting semantically significant regions of a scholarly journal article (or conference proceeding) PDF.\nIn English, please…", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Since the retirement of this project, we recommend that you use the excellent Cermine instead.\nPdf-extract is an open source set of tools and libraries for identifying and extracting semantically significant regions of a scholarly journal article (or conference proceeding) PDF.\nIn English, please…\nThe pdf-extract tools allow you to identify and extract the individual references from a scholarly journal article. 
References extracted using pdf-extract can, in turn, be resolved to the appropriate Crossref DOI using Crossref’s citation resolution tools, Simple Text Query and the experimental Crossref Metadata Search.\nLimitations\nThe pdf-extract tools will only work with full text journal article PDFs. It will not work with PDFs which contain scanned bitmap images of pages. In practice, this means the pdf-extract tools are unlikely to work with older journal articles that were produced before the advent of computer typesetting.\nWhy have we done this?\nWe have built pdf-extract as part of an overall effort to make it easier for small and medium-sized publishers to meet Crossref\u0026rsquo;s linking requirements and contribute to Crossref\u0026rsquo;s Cited-by service.\nWhen members join Crossref and start registering DOIs and metadata for their content, they also make a commitment to link references in their content to the relevant sources using DOIs. For larger publishers with skilled production departments, this requirement to link their references is relatively easy to meet. For smaller publishers, it is much more difficult. Those who do meet the obligations often find themselves having to manually copy and resolve references for each article that they publish. Some members don\u0026rsquo;t even have the resources to do this. This inability to meet Crossref\u0026rsquo;s linking obligations affects all Crossref members, including our larger ones, because it means that fewer references are being followed online and because Cited-by information is incomplete.\nOver the next few months we also plan on extending PDF extract to identify other semantically meaningful sections of scholarly articles including abstracts, methods sections, figures, tables, captions, etc.\nThe pdf-extract tools are currently only designed for use by the technically savvy. To get them to work, you will need to know how to install and use software on a server running Linux.\nThe pdf-extract tool will eventually be incorporated into a user-friendly set of web tools that will allow our members to automatically deposit article references into the Crossref system by uploading PDFs using a simple form. We expect these more user-friendly tools to be available by Q1 2013.\nUntil then, we have created an experimental web form called “Extracto” that at least allows you to play with the pdf-extract tool without having to download and install the libraries.\nNote that Extracto is running on a very feeble server on a very slow internet connection and the only guarantee that we can make about it is that it will repeatedly fall over and annoy you. If those weasel words don’t put you off, you can have a play with it here. (extracto has been retired)\nBut your best bet is really to download and run the code locally. In order to do that, follow the instructions on github.\nHow does it work? You can see a brief presentation we did at the Crossref Annual meeting where we discuss, amongst other things, the pdf-extract tool.\nOtherwise, read on…\nMost tools that attempt to extract text from a PDF have the nasty habit of throwing away formatting information. Unfortunately, this formatting information generally provides significant semantic clues to the contents of each region of a document.\nFor example, if you look at the following redacted image, chances are you can immediately tell that this is an image of a scholarly article. 
Similarly, you can easily identify significant portions of the article, including the article’s title, the authors, the author affiliations and footnotes. What is important here is that you can do all of this without reading or understanding a single word of the article. Instead, you do this by identifying the significant “shapes” within the article page.\nSimilarly, in the following redacted image, it is easy to identify the references section, each individual reference, and even the acknowledgements section- all without being able to read a single word of the document.\nThe pdf-extract tool uses a similar “visual” technique to identify semantically important areas of a PDF. After identifying semantically significant regions of text, it uses a set of heuristics to analyse certain “traits” in each region which help the tool understand what that region is doing. For example, the reference section of a PDF tends to have a significantly higher ratio of proper names, initials, years and punctuation. This can be illustrated by comparing a normal paragraph within an article and the references section of the same article.\nUsing this combination of visual cues and content traits, the pdf-extract tool is able to detect semantically significant regions of the PDF without having to know the precise formatting conventions of any particular member or title.\nOnce a region like the “references section” is detected, the pdf-extract tool can again use visual cues to identify individual references. Basically, citation styles tend to break down into the following visual categories.\nPdf-extract can detect which category a particular PDF is using simply by analyzing the margin and spacing used within the references region.\nOnce individual references are identified within the PDF, then we can use any of Crossref’s resolution tools, such as our Simple Text Query system or Crossref Metadata Search to try to resolve the reference to a Crossref DOI.\nHow can you help? We have tested the pdf-extract tools extensively over sample sets of PDFs provided to us by our members. The tool works well, but it can also be tweaked significantly as we apply it to more test cases and understand new variations in publisher formatting conventions.\nIf you are a developer with the requisite skills, we encourage you to contribute patches and fixes to the open source pdf-extract project.\nIf you are in production and encounter specific classes of PDFs that pdf-extract does not handle well, we encourage you to send us samples of said PDFs, as well as any potentially pertinent production information (e.g. tools used to produce PDFs, etc.) to:\n", "headings": ["How does it work?","How can you help?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/pdfmark/", "title": "pdfmark", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Overview\n“pdfmark” is an experimental open source tool that allows you to add Crossref metadata to a PDF. You can add metadata to a PDF by passing the tool a pre-generated XMP file, or you can apply Crossref bibliographic metadata by passing the command a Crossref DOI as an argument. If you pass it a Crossref DOI, it will automatically look up the metadata for that DOI using the Crossref OpenURL API, generate XMP from said metadata and apply it to the PDF.", "content": "\rOverview\n“pdfmark” is an experimental open source tool that allows you to add Crossref metadata to a PDF. 
You can add metadata to a PDF by passing the tool a pre-generated XMP file, or you can apply Crossref bibliographic metadata by passing the command a Crossref DOI as an argument. If you pass it a Crossref DOI, it will automatically look up the metadata for that DOI using the Crossref OpenURL API, generate XMP from said metadata and apply it to the PDF.\nNote that pdfmark is non-destructive. It will always generate a new PDF with the XMP added to it. Having said this, pdfmark does not re-linearize the resulting file. To re-linearize the PDF you can simply use ghostscript’s pdfopt command or any similar tool (e.g. Acrobat Pro).\n“pdfmark” is open source. We have released it in order to encourage publishers and other content producers to start adding embedded bibliographic metadata to their PDFs.\nWhy Should I Care?\nPDF is widely regarded as a pretty “dumb” file format. That is, it sacrifices semantics in favour of display fidelity. But this doesn’t have to be the case. PDF has evolved over the past years and now supports the ability to embed semantic metadata into the PDF. This metadata can, in turn, be used by bibliographic management tools, search engines, text mining tools, etc. And, of course, the big advantage of embedding bibliographic metadata in the PDF is that the content and metadata are never separated. The PDF can be copied, emailed and archived and it will always have its metadata with it.\nFinally, Tony Hammond has written extensively on XMP in the CrossTech blog. Reading his various posts will give you a very solid background on the pros and cons of XMP.\nHow do I use it? We are assuming that you are at least technical enough to know whether you have a recent version of Java installed on your system and that you are comfortable with the command line. If this doesn’t describe you, then you had better stop here and get your resident geek to help you with this.\nSo, assuming you are of the geeky persuasion…\nIf you had a PDF of Allen Renear and Carole Palmer’s Science article, “Strategic Reading, Ontologies, and the Future of Scientific Publishing” and said PDF file was named “renear_palmer.pdf”, simply invoking the following would add the relevant metadata to the PDF:\njava -jar pdfmark.jar -d 10.1126/science.1157784 renear_palmer.pdf If the PDF is “linearized”, then the command will exit with a warning that the resulting PDF will be de-linearized. If you want to force it to generate the new PDF anyway, pass the command the “-f” option like this:\njava -jar pdfmark.jar -f -d 10.1126/science.1157784 renear_palmer.pdf pdfmark will automatically try to fill in the dc:copyright element with the name of the publisher of the PDF. To override this behaviour, use the “--no-copyright” flag.\nNaturally, we are hoping that people will give us feedback or, better yet- patch, debug and build on the source we have released. Send comments, etc. 
too:\n", "headings": ["How do I use it?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/pdfstamp/", "title": "pdfstamp", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Overview\nPdfstamp is an open source command-line tool that allows you to add an image or “stamp” to any location on a PDF and to link said image to a URL of your choice.\nThis means that, when a user opens a “stamped PDF” and clicks on the image, they will (assuming they are connected to the internet) have their browser open and take them to the URL pointed to.", "content": "\rOverview\nPdfstamp is an open source command-line tool that allows you to add an image or “stamp” to any location on a PDF and to link said image to a URL of your choice.\nThis means that, when a user opens a “stamped PDF” and clicks on the image, they will (assuming they are connected to the internet) have their browser open and take them to the URL pointed to.\nWhy?\nWell, we needed this functionality for a project that we are developing internally and we thought it might be useful to others in the community.\nSyntax\nThe pdfstamp tool gives you pretty granular control over the positioning and resolution of the placed image. It also allows you to specify how to name files for batch processes.\nUsage: pdfstamp [options] | -d N : Optional. Target DPI. Defaults to 300. -e EXT : Optional. Extension appended to the PDF filename. -i FILE : Required. Image file containing image of the stamp. -l X,Y : Required. Location on page to apply stamp. -o FILE : Optional. Output directory. -p P1,P2... : Optional. Page numbers to stamp. -1 is the last page. -r : Optional. Descend recursively into directories. -u URL : Optional. Target URL of the stamp. -v : Optional. Verbose output. Notes\nPdfstamp is non-destructive. It will always generate a new PDF with the stamp added to it. However, If you are using pdfstamp on a “linearized” (aka “optimised” or “fast web view enabled”) PDF, the resulting PDF will need to be re-linearized using ghostscript’s pdfwrite device, and specifying -dFastWebView or any similar tool (e.g. Acrobat Pro).\nPdfstamp is designed for batch-processing and, to give you an idea of performance, we were able to stamp, link and re-linearize (using pdfopt) 500 journal article PDFs in about eight seconds on a recent-vintage Macbook Pro. Performance very much depends on the resolution and size of the image that you are adding to the PDF and the size of the PDF itself.\nWhat next?\nIf you have any comments, suggestions, bug reports or (hint, hint) patches, please fee free to send them to us here at:\nOr just complain bitterly on Twitter.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/randoim/", "title": "Randoim", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work.\nEquivalent functionality can now be found using the sample function in the Crossref REST API.\nRandoim is a simple API that will return you a…\n(drum roll)\nRandom Crossref DOI.\nUm, have we gone nuts?\nWell, possibly, but this is actually a useful tool for Crossref internally as it allows us to more easily test our various new APIs and tools against representative samples of Crossref metadata.", "content": " Note: This experiment has graduated. 
This description has been kept for reference, but many of the links and/or services that appear below no longer work.\nEquivalent functionality can now be found using the sample function in the Crossref REST API.\nRandoim is a simple API that will return you a…\n(drum roll)\nRandom Crossref DOI.\nUm, have we gone nuts?\nWell, possibly, but this is actually a useful tool for Crossref internally as it allows us to more easily test our various new APIs and tools against representative samples of Crossref metadata.\nIt might also be useful to anybody doing research on scholarly publications.\nThe API, of course, also supports limiting your random DOI selection to particular subsets of Crossref metadata. So, for example, you can select a random DOI from a particular date range, ISSN, etc. Documentation for the APIas well as an explanation of how we “randomly” select DOIs can be found on the site itself.\nAlso note that, as with all things on the labs site, this tool is only guaranteed to break and make your life even more miserable and pathetic than it already is.\nSend complaints, hate mail and snide remarks to:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/reverse-domain-lookup/", "title": "Reverse Domain Lookup", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work. This service has graduated and metamorphised into Event Data Reverse. I know, we suck at naming things.\nTake a domain name and find out whether it belongs to a Crossref member. Why on earth would you want to do such a thing? Well, you might be interested in analysing your log files and determining whether links to your site are coming from scholarly publishers.", "content": " Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work. This service has graduated and metamorphised into Event Data Reverse. I know, we suck at naming things.\nTake a domain name and find out whether it belongs to a Crossref member. Why on earth would you want to do such a thing? Well, you might be interested in analysing your log files and determining whether links to your site are coming from scholarly publishers. Or you might be building some snazzy new “alt-metrics” tools and want to be able to detect when people link to a scholarly publisher. In either case, this now allows you to take a domain name (e.g. plosbiology.org) and quickly determine whether it belongs to a Crossref member (i.e. a scholarly publisher) as well as the name of said publisher.\nWe’ve developed an example service, which you can call dynamically, a browser button, which allows you to play with the service interactively, and we’ve also created a downloadable list of the domain names (hashed) which you can use locally if speed is of the essence.\nThe service page linked to above contains documentation and plenty of weasel words explaining the limitations of the system.\nIf you have any comments, suggestions or bug reports please fee free to send them to us here at:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/toi-doi-i-e-short-dois/", "title": "TOI DOI (i.e. 
Short DOIs)", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work. “TOI DOIs” (pronounced “toy doi”) were an illustration of what a DOI shortening service my look like .\nPlease note that this project has actually “graduated” from Crossref labs and has now been implemented by the IDF as shortDOI™", "content": " Note: This experiment has graduated. This description has been kept for reference, but many of the links and/or services that appear below no longer work. “TOI DOIs” (pronounced “toy doi”) were an illustration of what a DOI shortening service my look like .\nPlease note that this project has actually “graduated” from Crossref labs and has now been implemented by the IDF as shortDOI™\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/ubiquity-plugin/", "title": "Ubiquity Plugin", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Mozilla’s Ubiquity is an excellent proof-of-concept tool that allows one to easily create extensions to the FireFox browser. Crossref has created a sample command set that allows you to either:\nSelect a citation in a web page and automatically resolve the citation to the online version of the item cited.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Mozilla’s Ubiquity is an excellent proof-of-concept tool that allows one to easily create extensions to the FireFox browser. Crossref has created a sample command set that allows you to either:\nSelect a citation in a web page and automatically resolve the citation to the online version of the item cited. Select a citation in a web page and automatically show the Crossref metadata for the item cited. Obviously, this will only work if the cited item is actually online (duh) and if the item has been assigned a Crossref DOI. In practice this applies to the vast majority of scholarly and professional journals and conference proceedings.\nIn order to use the Crossref command you need to install Ubiquity into your copy of FireFox and then return to this page. Once you have returned to this page, you should see an option at the top of the page that will allow you to install the Crossref command. Note that when you install a Ubiquity command you will be given all sorts of dire security warnings. The Ubiquity site explains these warnings and what they mean. It is up to you as to whether you trust Crossref ;-).\nOnce the Crossref command is installed, you need to:\nSelect the citation that interests you (for practice, we have included a few samples below) Invoke Ubiquity (typically by typing control/alt + space) and type in the crossref command. Wait until you get a list of potential hits. Once you have a list of hits, pressing enter will take you to the first document cited. Otherwise, you can click on either the Resolve or the Metadata of any listed citation. This Ubiquity command makes use of the metadata search system which lies behind the WordPress and Moveable Type plugins that we released previously. 
The search system is very experimental and you should be careful to note that it can sometimes return the wrong item. Also note that the index only includes journal articles and conference proceedings. It does NOT yet contain books.\nFinally, be aware that Ubiquity itself is very much a proof-of-concept. “In flux”, as they say. The same applies to this Crossref command. This is a proof-of-concept and it is almost guaranteed to break as Ubiquity evolves. In fact, the few people who have tested this have had a variety of inconsistent results. It seems that you really need to make sure that you have current versions of FireFox and Ubiquity. It also seems to be temperamental about other FireFox extensions. Having said all that you can see the source (MPL) and try to fix and/or evolve it yourself. Any comments, questions, suggestions, fixes, etc. can be directed to: citation-plugin@crossref.org\nSample References To Play With\nFree Form Text\nAllen Renear's \"What is Text Really?\" Citation Which Already Includes An Unlinked DOI\nPavol Hell , Jaroslav Nesetril, The core of a graph, Discrete Mathematics, v.109 n.1-3, p.117-126, Nov. 12, 1992 [ doi:10.1016/0012-365X(92)90282-K ] Citation Which Contains No DOI\nMiklos Ajtai , Yuri Gurevich, Datalog vs. first-order logic, Journal of Computer and System Sciences, v.49 n.3, p.562-588, Dec. 1994 ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/wordpress-moveable-type-plugins/", "title": "WordPress & Moveable Type Plugins", "subtitle":"", "rank": 1, "lastmod": "2013-01-22", "lastmod_ts": 1358812800, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. We have created WordPress \u0026amp; Moveable Type plugins that allow you to search Crossref metadata using citations or partial citations. When you find the reference that you want, insert the formatted and DOI-linked citation into your blog posting along with supporting COinS metadata. The plugin supports both a long citation format and a short (op.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. We have created WordPress \u0026amp; Moveable Type plugins that allow you to search Crossref metadata using citations or partial citations. When you find the reference that you want, insert the formatted and DOI-linked citation into your blog posting along with supporting COinS metadata. The plugin supports both a long citation format and a short (op. cit.) format. The plugins can be found on SourceForge.\nPlease note the following about the plugins:\nTo install these, you will need \u0026ldquo;shell access\u0026rdquo; to the machine running your WordPress or Moveable Type instance. If you don’t know what this means, you should probably talk to your system administrator.\nWe are releasing these as a test. The back-end is running on R\u0026amp;D equipment in a non-production environment and so it may disappear without warning or perform erratically. If it isn’t working for some reason, come back later and try again. If it seems to be broken for a prolonged period of time, then please report the problem to us via SourceForge.\nThere is currently a 20 item limit on the number of hits returned per query. 
This might seem arbitrary and stingy, but please remember- we are not trying to create a fully blown search engine- we’re just trying to create a citation lookup service. Of course, if, after looking at how the service is used, it looks like we need to up this limit, we will.\nIf you look in the plugin options (or at the code), you will see that the system includes an API key. At the moment we have no restrictions on use of this service, but have included this in case we need to protect the system from abuse. We recommend that you set this API key to the email address of the administrator of the blog.\nThe bulk of the functionality we have developed is actually at the back-end. The plugins are just lightweight interfaces to that back-end. You can examine the guts of the plugins in order to easily figure out how to create similar functionality for your favourite blog platform, wiki, etc. If you do create something, please let us know. We’d love to see what people are building.\nWe are continuing to experiment with the metadata search function in order to increase its accuracy and flexibility. Again, this might result in seemingly inconsistent behaviour. Did we mention that this is a test?\nPlease note that this API is not meant for bulk harvesting of Crossref metadata. If you need such facilities, then please look at our web site for information about our metadata services.\nWe welcome your ideas for tools that we can provide to help researchers. Please, please, please send comments, requests, queries and ideas to us at:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/crossref-metadata-search/", "title": "Crossref Metadata Search", "subtitle":"", "rank": 1, "lastmod": "2013-01-20", "lastmod_ts": 1358640000, "section": "Labs", "tags": [], "description": "Crossref Metadata Search allows you to search across the almost 50 million Crossref Metadata records* for journal articles and conference proceedings. It supports the following features:\nORCID Support A completely new UI Faceted searches Copying of search results as formatted citations using CSL COinS, so that you can easily import results into Zotero and other document management tools An API, so that you can integrate Crossref Metadata Search into your own applications, plugins, etc.", "content": "\rCrossref Metadata Search allows you to search across the almost 50 million Crossref Metadata records* for journal articles and conference proceedings. It supports the following features:\nORCID Support A completely new UI Faceted searches Copying of search results as formatted citations using CSL COinS, so that you can easily import results into Zotero and other document management tools An API, so that you can integrate Crossref Metadata Search into your own applications, plugins, etc. Basic OpenSearch support- so that you can integrate Crossref Metadata Search into your browser’s search bar. Searching for a particular Crossref DOI Searching for a particular Crossref ShortDOI Searching for articles in a particular journal via the journal’s ISSN Links to any patents that cite a particular Crossref DOI At the moment, Crossref Metadata Search (CRMDS) is a “labs” project and, as such, should be used with some trepidation. Our goal is to release CRMS as a production service ASAP, but we wanted to get public feedback on the service before making the move to a production system.\nSpecifically, please be aware that not all record types available at Crossref have been included yet. 
We will be adding books and components to the index soon.\nPlease send us your feedback on Crossref Metadata Search to:\n*Please note that Crossref does not collect abstracts and, as such, the metadata does not include abstracts.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/inchi-lookup/", "title": "InChI Lookup", "subtitle":"", "rank": 1, "lastmod": "2013-01-20", "lastmod_ts": 1358640000, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. The idea is to create a mechanism that would allow Crossref publishers to record InChIs in their submitted Crossref metadata. This, in turn, would allow us to provide a service that would allow users to:\nLook up the published articles that mention a particular InChI.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. The idea is to create a mechanism that would allow Crossref publishers to record InChIs in their submitted Crossref metadata. This, in turn, would allow us to provide a service that would allow users to:\nLook up the published articles that mention a particular InChI. Look up the InChIs mentioned in a published article. Similar services could, conceivably, be provided for other types of semantic metadata.\nThe following is a demonstrator of what a DOI2InChI lookup service might look like. Please note that the XML representation of the results is very basic and is not best-practice for linked-data.\nThe demonstrator currently only holds DOIs and InChIs for a few publishers. A summary of the contents of the database can be found on the status page\nhttp://inchi.crossref.org/status A list of all the Crossref DOIs that contain InChIs can be seen here:\nhttp://inchi.crossref.org/dois A list of all the InChIs that have been registered with Crossref can be seen here:\nhttp://inchi.crossref.org/inchis The system provides the following API calls:\nReturn all the DOIs that have been registered with a given InChI\nhttp://inchi.crossref.org/dois/InChI=1S/C4H6O2/c1-3-6-4(2)5/h3H,1H2,2H3 Return all the InChIs that have been registered for a given DOI\nhttp://inchi.crossref.org/inchis/10.1038/nchem.215 ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/pmid2doi/", "title": "pmid2doi", "subtitle":"", "rank": 1, "lastmod": "2013-01-20", "lastmod_ts": 1358640000, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Crossref has worked with the Concept Web Alliance to use Crossref Metadata Search to attempt to locate the DOIs for the 15 million items in PubMed that do not contain one. Using this technique, we have located the DOIs for an additional 3 million items and have combined them with existing pmid to doi mapping for a total of 7.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. Crossref has worked with the Concept Web Alliance to use Crossref Metadata Search to attempt to locate the DOIs for the 15 million items in PubMed that do not contain one. 
Using this technique, we have located the DOIs for an additional 3 million items and have combined them with existing pmid to doi mapping for a total of 7.7 million pmid to doi mappings. Using this data we have, in turn, deployed a pmid2doi mapping service on the Crossref Labs site. The service can be used interactively, via a web form, as well as via content negotiation. So, for instance, using the ‘curl’ command line tool, one can issue the following content-negotiated query:\ncurl -H \u0026#34;Accept: application/json\u0026#34; \u0026#34;http://0-pmid2doi-labs-crossref-org.libus.csd.mu.edu/5\u0026#34; -D - -L and get the following response:\n{\u0026#34;mapping\u0026#34;:{\u0026#34;doi\u0026#34;:\u0026#34;10.1016/0006-291X(75)90508-2\u0026#34;,\u0026#34;pmid\u0026#34;:\u0026#34;5\u0026#34;}} or use the following query:\ncurl -H \u0026#34;Accept: application/pmiddoi+xml\u0026#34; \u0026#34;http://0-pmid2doi-labs-crossref-org.libus.csd.mu.edu/5\u0026#34; -D - -L and get the following response:\n10.1016/0006-291X(75)90508-25 The Weasel Speaks…\nThis service is based on a database snapshot taken in November, 2010. We are not yet updating the database. If there is enough interest in the service, we may work with the CWA to figure out how we can keep the database updated as efficiently as possible.\nIf you have any comments, suggestions, bug reports or (hint, hint) patches, please feel free to send them to us here at:\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/qr-code-generator/", "title": "QR Code Generator", "subtitle":"", "rank": 1, "lastmod": "2013-01-20", "lastmod_ts": 1358640000, "section": "Labs", "tags": [], "description": "Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. You might also be interested in this handy resource for determining if you should be using QR Codes.\nQR Codes are a form of barcode that can be easily scanned and used by mobile phones, web-cams, etc. A QR code can include encoded data, so, for instance, they are often used in Japan and other countries to encode URLs for display in magazines, on billboards, on packaging, etc.", "content": " Note: This experiment has been retired. This description has been kept for reference, but many of the links and/or services that appear below no longer work. You might also be interested in this handy resource for determining if you should be using QR Codes.\nQR Codes are a form of barcode that can be easily scanned and used by mobile phones, web-cams, etc. A QR code can include encoded data, so, for instance, they are often used in Japan and other countries to encode URLs for display in magazines, on billboards, on packaging, etc.\nFor example, the following is the QR Code for the article “Strategic Reading, Ontologies, and the Future of Scientific Publishing” by Allen Renear and Carole Palmer.\nAPI\nCrossref has created a sample API that will take a Crossref DOI and turn it into a QR Code. The QR Code will include the title of the document pointed to by the Crossref DOI as well as the “http://dx.doi.org” URL that will allow you to link to that item. 
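Before the API details that follow, here is what the whole interaction looks like from the command line. It is a minimal sketch only: the experiment has been retired, the endpoint form is the one quoted in the next paragraphs, and the output filename and .png extension are simply local guesses since the image format isn't specified.
# Fetch the QR Code for a Crossref DOI using the endpoint form described below;
# the output filename (and .png extension) is just a local choice.
curl -sL -o qrcode-10.1126-science.1157784.png "http://qrcode.labs.crossref.org/10.1126/science.1157784"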
Most mobile phone QR Code readers will allow you to bookmark, share or link to the URLs thus encoded.\nThe API is pretty straight-forward and is of the form:\nhttp://qrcode.labs.crossref.org/[doi] So the URL to generate the above QR Code was:\nhttp://qrcode.labs.crossref.org/10.1126/science.1157784 If you want to try out this technology, you might try installing the iPhone application QuickMark from the Apple app store.\nNote that the API doesn’t currently let you vary the size or encoding level of the QR Code. If there is enough interest in this functionality, we might add it.\nApplications\nSo what can the qr-encoded Crossref DOI be used for? Damn if we know. Minimally, they could be added to the cover pages of PDFs so that researchers could easily bookmark a link to any paper articles they are reading. Perhaps they could also be added to the posters on poster sessions at conferences? Heck, maybe we could persuade researchers to print the QR-codes of their publications on t-shirts that they could then wear to conferences\u0026hellip;\nOr maybe not.\nBut seriously, we did this because it was easy and pretty cool. We’re not sure of the applications yet, so that’s why we thought we should release it and see what people can think of.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/2013/01/hello-world/", "title": "Hello world!", "subtitle":"", "rank": 1, "lastmod": "2013-01-18", "lastmod_ts": 1358467200, "section": "Labs", "tags": [], "description": "\rWelcome to WordPress. This is your first post. Edit or delete it, then start blogging!\r", "content": "\rWelcome to WordPress. This is your first post. Edit or delete it, then start blogging!\r", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2012/", "title": "2012", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-metadata-search-plus-plus/", "title": "Crossref Metadata Search++", "subtitle":"", "rank": 1, "lastmod": "2012-10-11", "lastmod_ts": 1349913600, "section": "Blog", "tags": [], "description": "We have just released a bunch of new functionality for Crossref Metadata Search. The tool now supports the following features:\nA completely new UI Faceted\u0026nbsp;searches Copying of search results as formatted citations using\u0026nbsp;CSL COinS, so that you can easily import results into Zotero and other document management tools An API, so that you can integrate Crossref Metadata Search into your own applications, plugins, etc. Basic\u0026nbsp;OpenSearch\u0026nbsp;support- so that you can integrate Crossref Metadata Search into your browser’s search bar.", "content": "We have just released a bunch of new functionality for Crossref Metadata Search. The tool now supports the following features:\nA completely new UI Faceted\u0026nbsp;searches Copying of search results as formatted citations using\u0026nbsp;CSL COinS, so that you can easily import results into Zotero and other document management tools An API, so that you can integrate Crossref Metadata Search into your own applications, plugins, etc. Basic\u0026nbsp;OpenSearch\u0026nbsp;support- so that you can integrate Crossref Metadata Search into your browser’s search bar. 
Searching for a particular Crossref DOI Searching for a particular Crossref\u0026nbsp;ShortDOI Searching for articles in a particular journal via the journal’s ISSN At the moment, Crossref Metadata Search (CRMDS) is a Crossref Labs project and, as such, should be used with some trepidation. Our goal is to release CRMS as a production service ASAP, but we wanted to get public feedback on the service before making the move to a production system.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/patentcite/", "title": "PatentCite", "subtitle":"", "rank": 1, "lastmod": "2012-08-13", "lastmod_ts": 1344816000, "section": "Blog", "tags": [], "description": "If you’ve ever thought that scholarly citation practice was antediluvian and perverse- you should check-out patents some day.\nOver the past year of so Crossref has been working with Cambia and the The Lens to explore how we can better link scholarly literature to and from the patent literature. The first object of our collaboration was to attempt to link patents hosted on the new, beta version of The Lens to the Scholarly literature.", "content": "If you’ve ever thought that scholarly citation practice was antediluvian and perverse- you should check-out patents some day.\nOver the past year of so Crossref has been working with Cambia and the The Lens to explore how we can better link scholarly literature to and from the patent literature. The first object of our collaboration was to attempt to link patents hosted on the new, beta version of The Lens to the Scholarly literature. To do this, Crossref and Cambia been enhancing Crossref’s citation matching mechanisms in order to better resolve the wide variety of eclectic and terse patent citation styles to Crossref DOIs.\nYou can see the results of these ongoing attempts on the The Lens beta site where all of The Len’s 8 million+ 80 million+ patents and applications (obtained through subscriptions with WIPO, USPTO, EPO and IP Australia) are starting to be linked directly to the scholarly literature. See, for example:\nhttp://beta.lens.org/lens/patent/US\\_RE42150\\_E1/citations\n[Editor\u0026rsquo;s update: Link is broken. Removed January 2021]\nCrossref has taken this matched data and has now released a Crossref Labs *experimental* service , called PatentCite, that allows you to take any Crossref DOI and see what Patents in the The Lens system cite it.\nAs with all Crossref Labs services- this one is likely to be:\na) As stable as the global economy\nc) As reliable as a UK train\nii) Out-of-date. It is based on a snapshot of Crossref /Lens data.\nAs accurate as my list ordering Howzat for an SLA?\nAs we get feedback from Crossref’s membership and as we gain more experience linking Patents to and from the scholarly literature, we will explore including this functionality in our production CitedBY service. 
But until then- please send us your feedback on this experimental service.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/patents/", "title": "Patents", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/admin/", "title": "Admin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/content-negotiation/", "title": "Content Negotiation", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-and-datacite-unify-support-for-http-content-negotiation/", "title": "Crossref and DataCite unify support for HTTP content negotiation", "subtitle":"", "rank": 1, "lastmod": "2012-05-17", "lastmod_ts": 1337212800, "section": "Blog", "tags": [], "description": "Last year Crossref and DataCite announced support for HTTP content negotiation for DOI names. Today, we are pleased to report further collaboration on the topic. We think it is very important that the two largest DOI Registration Agencies work together in order to provide metadata services to DOI names.\nThe current implementation is documented in detail at http://citation.crosscite.org/\nThe documentation explains HTTP content negotiation as implemented by both Registration Agencies and provides a list of supported resource/content/record types.", "content": "Last year Crossref and DataCite announced support for HTTP content negotiation for DOI names. Today, we are pleased to report further collaboration on the topic. We think it is very important that the two largest DOI Registration Agencies work together in order to provide metadata services to DOI names.\nThe current implementation is documented in detail at http://citation.crosscite.org/\nThe documentation explains HTTP content negotiation as implemented by both Registration Agencies and provides a list of supported resource/content/record types.\nAn example application of HTTP content negotiation is a citation formatting service. You can try it at http://citation.crosscite.org/.\nThis service will accept DOIs from both Crossref and DataCite, unlike the previous formatting service which accepted only Crossref DOI names.\nThis is possible because Crossref and DataCite support a shared, common metadata format. When you input a DOI into the formatting service, it doesn’t know where the DOI was registered. The service will make an\nHTTP content negotiation request to the global DOI resolver specifying which format of the metadata should be returned in the HTTP Accept header. The global DOI resolver will notice (Accept header!) 
that this is not a regular DOI resolution request; it will turn to Crossref or DataCite accordingly for the relevant metadata instead of redirecting to a landing page. The format of metadata is shared between both registration agencies so the formatting service can interpret it without knowledge of the DOI origin.\nIn summary HTTP content negotiation lets you process a DOI’s metadata without knowledge of its origin or specifics of the registration agency.\nIf you have any problems, email us at tech@datacite.org or labs@crossref.org. For general discussion please kindly leave a comment below.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/pdf/", "title": "PDF", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/pdf-extract/", "title": "PDF-Extract", "subtitle":"", "rank": 1, "lastmod": "2012-04-17", "lastmod_ts": 1334620800, "section": "Blog", "tags": [], "description": "PDF-EXTRACT Crossref Labs is happy to announce the first public release of “pdf-extract” an open source set of tools and libraries for extracting citation references (and, eventually, other semantic metadata) from PDFs. We first demonstrated this tool to Crossref members at our annual meeting last year. See the pdf-extract labs page for a detailed introduction to this new set of tools.\nIf you are unable to download and install the tool, you can play with a experimental web interface called “Extracto.” Be warned, Extracto is running on very feeble server using an erratic and slow internet connection. The only guarantee that we can make about using it is that it will repeatedly fall over and annoy you. The weasel has spoken.\n", "content": "PDF-EXTRACT Crossref Labs is happy to announce the first public release of “pdf-extract” an open source set of tools and libraries for extracting citation references (and, eventually, other semantic metadata) from PDFs. We first demonstrated this tool to Crossref members at our annual meeting last year. See the pdf-extract labs page for a detailed introduction to this new set of tools.\nIf you are unable to download and install the tool, you can play with a experimental web interface called “Extracto.” Be warned, Extracto is running on very feeble server using an erratic and slow internet connection. The only guarantee that we can make about using it is that it will repeatedly fall over and annoy you. The weasel has spoken.\n", "headings": ["PDF-EXTRACT"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/comics/", "title": "Comics", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-for-phd-comics-valentines-day-reading-list/", "title": "DOIs for PHD Comics’ Valentine’s Day Reading List", "subtitle":"", "rank": 1, "lastmod": "2012-02-14", "lastmod_ts": 1329177600, "section": "Blog", "tags": [], "description": "PHD Comics has posted its Valentine’s Day Reading list. 
Without DOIs!\u0026nbsp; \u0026nbsp; So in order to preserve the scholarly citation record, we’ve resolved those that have DOIs\u0026#8230;. Title:\u0026nbsp; The St. Valentine’s Day Frontal Passage Citation:\u0026nbsp; Sassen, K, 1980, \u0026#8216;The St. Valentine’s Day Frontal Passage’, Bulletin of the American Meteorological Society, vol. 61, no. 2, p. 122. Crossref DOI:\u0026nbsp; http://0-dx-doi-org.libus.csd.mu.edu/10.1175/1520-0477(1980)061\u003c0122:TSVDFP\u003e2.0.CO;2 Title:\u0026nbsp; SUICIDE AND HOMICIDE ON ST. VALENTINE’S DAY Citation:\u0026nbsp; LESTER, D, 1990, \u0026#8216;SUICIDE AND HOMICIDE ON ST.", "content": " PHD Comics has posted its Valentine’s Day Reading list. Without DOIs!\u0026nbsp; \u0026nbsp; So in order to preserve the scholarly citation record, we’ve resolved those that have DOIs\u0026#8230;. Title:\u0026nbsp; The St. Valentine’s Day Frontal Passage Citation:\u0026nbsp; Sassen, K, 1980, \u0026#8216;The St. Valentine’s Day Frontal Passage’, Bulletin of the American Meteorological Society, vol. 61, no. 2, p. 122. Crossref DOI:\u0026nbsp; http://0-dx-doi-org.libus.csd.mu.edu/10.1175/1520-0477(1980)061\u003c0122:TSVDFP\u003e2.0.CO;2 Title:\u0026nbsp; SUICIDE AND HOMICIDE ON ST. VALENTINE’S DAY Citation:\u0026nbsp; LESTER, D, 1990, \u0026#8216;SUICIDE AND HOMICIDE ON ST. VALENTINE’S DAY’, Perceptual and Motor Skills, vol. 71, no. 7, p. 994. Crossref DOI:\u0026nbsp; http://0-dx-doi-org.libus.csd.mu.edu/10.2466/PMS.71.7.994-994 Title:\u0026nbsp; The St. Valentineʼs Day Massacre Citation:\u0026nbsp; Eckert, W, 1980, \u0026#8216;The St. Valentineʼs Day Massacre’, The American Journal of Forensic Medicine and Pathology, vol. 1, no. 1, pp. 67-70. Crossref DOI:\u0026nbsp; http://0-dx-doi-org.libus.csd.mu.edu/10.1097/00000433-198003000-00011 Title:\u0026nbsp; For Valentine’s Day Citation:\u0026nbsp; Kutzner, H, 2001, \u0026#8216;For Valentine’s Day’, Cancer, vol. 91, no. 4, pp. 804-805. Crossref DOI:\u0026nbsp; http://0-dx-doi-org.libus.csd.mu.edu/10.1002/1097-0142(20010215)91:4\u003c804::AID-CNCR1067\u003e3.3.CO;2-K ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2011/", "title": "2011", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/karl-ward/", "title": "Karl Ward", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/turning-dois-into-formatted-citations/", "title": "Turning DOIs into formatted citations", "subtitle":"", "rank": 1, "lastmod": "2011-11-28", "lastmod_ts": 1322438400, "section": "Blog", "tags": [], "description": "Today two new record types were added to dx.doi.org resolution for Crossref DOIs. These allow anyone to retrieve DOI bibliographic metadata as formatted bibliographic entries. To perform the formatting we’re using the citation style language processor, citeproc-js which supports a shed load of citation styles and locales. 
In fact, all the styles and locales found in the CSL repositories, including many common styles such as bibtex, apa, ieee, harvard, vancouver and chicago are supported.", "content": "Today two new record types were added to dx.doi.org resolution for Crossref DOIs. These allow anyone to retrieve DOI bibliographic metadata as formatted bibliographic entries. To perform the formatting we’re using the citation style language processor, citeproc-js which supports a shed load of citation styles and locales. In fact, all the styles and locales found in the CSL repositories, including many common styles such as bibtex, apa, ieee, harvard, vancouver and chicago are supported. First off, if you’d like to try citation formatting without using content negotiation, there’s a simple web UI that allows input of a DOI, style and locale selection. If you’re more into accessing the web via your favorite programming language, have a look at these content negotiation curl examples. To make a request for the new “text/bibliography” record type: $ curl -LH \u0026ldquo;Accept: text/bibliography; style=bibtex\u0026rdquo; http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842 @article{Atkins_Gershell_2002, title={From the analyst\u0026rsquo;s couch: Selective anticancer drugs}, volume={1}, DOI={10.1038/nrd842}, number={7}, journal={Nature Reviews Drug Discovery}, author={Atkins, Joshua H. and Gershell, Leland J.}, year={2002}, month={Jul}, pages={491-492}} A locale can be specified with the “locale” record type parameter, like this: $ curl -LH \u0026ldquo;Accept: text/bibliography; style=mla; locale=fr-FR\u0026rdquo; http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842 Atkins, Joshua H., et Leland J. Gershell. « From the analyst\u0026rsquo;s couch: Selective anticancer drugs ». Nature Reviews Drug Discovery 1.7 (2002): 491-492. You may want to process metadata through CSL yourself. For this use case, there’s another new record type, “application/citeproc+json” that returns metadata in a citeproc-friendly JSON form: $ curl -LH \u0026ldquo;Accept: application/citeproc+json\u0026rdquo; http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842 {\u0026ldquo;volume\u0026rdquo;:\u0026ldquo;1\u0026rdquo;,\u0026ldquo;issue\u0026rdquo;:\u0026ldquo;7\u0026rdquo;,\u0026ldquo;DOI\u0026rdquo;:\u0026ldquo;10.1038/nrd842\u0026rdquo;,\u0026ldquo;title\u0026rdquo;:\u0026ldquo;From the analyst\u0026rsquo;s couch: Selective anticancer drugs\u0026rdquo;,\u0026ldquo;container-title\u0026rdquo;:\u0026ldquo;Nature Reviews Drug Discovery\u0026rdquo;,\u0026ldquo;issued\u0026rdquo;:{\u0026ldquo;date-parts\u0026rdquo;:[[2002,7]]},\u0026ldquo;author\u0026rdquo;:[{\u0026ldquo;family\u0026rdquo;:\u0026ldquo;Atkins\u0026rdquo;,\u0026ldquo;given\u0026rdquo;:\u0026ldquo;Joshua H.\u0026rdquo;},{\u0026ldquo;family\u0026rdquo;:\u0026ldquo;Gershell\u0026rdquo;,\u0026ldquo;given\u0026rdquo;:\u0026ldquo;Leland J.\u0026rdquo;}],\u0026ldquo;page\u0026rdquo;:\u0026ldquo;491-492\u0026rdquo;,\u0026ldquo;type\u0026rdquo;:\u0026ldquo;article-journal\u0026rdquo;} Finally, to retrieve lists of supported styles and locales, see:\n* https://crosscite.org\nstyle and locale repositories. There’s one big caveat to all this. The CSL processor will do its best with Crossref metadata which can unfortunately be quite patchy at times. There may be pieces of metadata missing, inaccurate metadata or even metadata items stored under the wrong field, all resulting in odd-looking formatted citations. 
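One easy way to see how a given DOI fares is to request it in several of the styles named above and compare the results. Here is a small sketch using the same endpoint and record type as the examples above; the short style names are taken from the list in this post, so the exact CSL style identifiers may need adjusting against the style repository:
# Request the same DOI formatted in several CSL styles via content negotiation.
# Style identifiers come from the CSL repository; the short names below may differ
# slightly from the canonical ones.
for style in bibtex apa ieee harvard vancouver chicago; do
  echo "== $style =="
  curl -sLH "Accept: text/bibliography; style=$style" "http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrd842"
  echo
done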
Most of the time, though, it works.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/determining-the-crossref-membership-status-of-a-domain/", "title": "Determining the Crossref membership status of a domain", "subtitle":"", "rank": 1, "lastmod": "2011-11-22", "lastmod_ts": 1321920000, "section": "Blog", "tags": [], "description": "We’ve been asked a few times if it is possible to determine whether or not a particular domain name belongs to a Crossref member. To address this we’re launching another small service that performs something like a “reverse look-up” of URLs and domain names to DOIs and Crossref member status.\nThe service provides an API that will attempt to reverse look-up a URL to a DOI and return the membership status (member or non-member) of the root domain of the URL.", "content": "We’ve been asked a few times if it is possible to determine whether or not a particular domain name belongs to a Crossref member. To address this we’re launching another small service that performs something like a “reverse look-up” of URLs and domain names to DOIs and Crossref member status.\nThe service provides an API that will attempt to reverse look-up a URL to a DOI and return the membership status (member or non-member) of the root domain of the URL. In practice resolving URLs to DOIs has substantial limitations - many publishers redirect the resolution URL of DOIs to other online content and URLs become clogged up with session IDs and other cruft appearing in their query parameters. All of this means it is unlikely that the URLs that appear to be the end result of DOI resolution are actually the URLs pointed to.\nHowever, it’s also possible to provide only a host name, in which case, as with a URL, the Crossref membership status for the root domain will be returned.\nThere’s also a downloadable list of hashed domains that belong to Crossref members which will be useful to those who want to determine the membership status of a domain locally. Also, a bookmarklet allows anyone to easily check a web page they are looking at to see if the domain it is hosted on belongs to a Crossref member.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/domains/", "title": "Domains", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/reverse-look-up/", "title": "Reverse Look-Up", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/datacite-supporting-content-negotiation/", "title": "DataCite supporting content negotiation", "subtitle":"", "rank": 1, "lastmod": "2011-10-10", "lastmod_ts": 1318204800, "section": "Blog", "tags": [], "description": "In April In April for its DOIs. At the time I cheekily called-out DataCite to start supporting content negotiation as well.\nEdward Zukowski (DataCite’s resident propellor-head) took up the challenge with gusto and, as of September 22nd DataCite has also been supporting content negotiation for its DOIs. 
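The same kind of content-negotiated request shown elsewhere on this blog for Crossref DOIs can now be pointed at a DataCite-registered DOI. A sketch follows; the DOI below is only a stand-in for any DataCite DOI, and the set of media types honoured is DataCite's to define:
# Content-negotiate a DataCite DOI exactly as with a Crossref DOI.
# 10.5061/dryad.8290 is used purely as a stand-in for a DataCite-registered DOI.
curl -D - -L -H "Accept: application/rdf+xml" "http://dx.doi.org/10.5061/dryad.8290"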
This means that one million more DOIs are now linked-data friendly. Congratulations to Ed and the rest of the team at DataCite.\nWe hope this is a trend.", "content": "In April Crossref announced support for content negotiation for its DOIs. At the time I cheekily called-out DataCite to start supporting content negotiation as well.\nEdward Zukowski (DataCite’s resident propellor-head) took up the challenge with gusto and, as of September 22nd DataCite has also been supporting content negotiation for its DOIs. This means that one million more DOIs are now linked-data friendly. Congratulations to Ed and the rest of the team at DataCite.\nWe hope this is a trend. Back in June Knowledge Exchange organized a seminar on Persistent Object Identifiers. One of the outcomes of the meeting was the “Den Haag Manifesto”, a document outlining five relatively simple steps that different persistent identifier systems could take in order to increase interoperability. Most of these steps involved adopting linked data principles including support for content negotiation. We look forward to hearing about other persistent identifiers adopting these principles over the next year.\nHaving said that, this time I will refrain from calling-out anybody specifically…\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/family-names/", "title": "Family Names", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/family-names-service/", "title": "Family Names Service", "subtitle":"", "rank": 1, "lastmod": "2011-10-06", "lastmod_ts": 1317859200, "section": "Blog", "tags": [], "description": "Today I’m announcing a small web API that wraps a family name database here at Crossref R\u0026amp;D. The database, built from Crossref’s metadata, lists all unique family names that appear as contributors to articles, books, datasets and so on that are known to Crossref. As such the database likely accounts for the majority of family names represented in the scholarly record.\nThe web API comes with two services: a family name detector that will pick out potential family names from chunks of text and a family name autocompletion system.", "content": "Today I’m announcing a small web API that wraps a family name database here at Crossref R\u0026amp;D. The database, built from Crossref’s metadata, lists all unique family names that appear as contributors to articles, books, datasets and so on that are known to Crossref. As such the database likely accounts for the majority of family names represented in the scholarly record.\nThe web API comes with two services: a family name detector that will pick out potential family names from chunks of text and a family name autocompletion system.\nVery brief documentation can be found here along with a jQuery example of autocompletion.\nThe database is still in development so there may be some oddities and inaccuracies in there. Right now one obvious omission from the name list that I hope to address soon is double-worded names such as “von Neumann”. 
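Because the endpoints themselves aren't spelled out in this post, the following is a shape-of-the-thing sketch only: the host name, paths and parameter names are invented placeholders, and the brief documentation linked above is the place to find the real interface.
# Hypothetical calls against the family name service (host, paths and parameters
# are placeholders, not the documented interface).
# Autocompletion: suggest family names starting with a fragment.
curl -s "http://familynames.labs.crossref.org/autocomplete?q=Eins"
# Detection: pick out potential family names from a chunk of text.
curl -s --data-urlencode "text=Einstein and Bohr disagreed about quantum mechanics" "http://familynames.labs.crossref.org/detect"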
We’re not proposing this database as an authority but rather something that backs a practical service for family name detection and autocompletion.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/content-negotiation-for-crossref-dois/", "title": "Content Negotiation for Crossref DOIs", "subtitle":"", "rank": 1, "lastmod": "2011-04-19", "lastmod_ts": 1303171200, "section": "Blog", "tags": [], "description": "So does anybody remember the posting DOIs and Linked Data: Some Concrete Proposals?\nWell, we went with option “D.”\nFrom now on, DOIs, expressed as HTTP URIs, can be used with content-negotiation.\nLet’s get straight to the point. If you have curl installed, you can start playing with content-negotiation and Crossref DOIs right away:\ncurl -D - -L -H “Accept: application/rdf+xml” “http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784” curl -D - -L -H “Accept: text/turtle” “http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784”", "content": "So does anybody remember the posting DOIs and Linked Data: Some Concrete Proposals?\nWell, we went with option “D.”\nFrom now on, DOIs, expressed as HTTP URIs, can be used with content-negotiation.\nLet’s get straight to the point. If you have curl installed, you can start playing with content-negotiation and Crossref DOIs right away:\ncurl -D - -L -H “Accept: application/rdf+xml” “http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784” curl -D - -L -H “Accept: text/turtle” “http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784”\ncurl -D - -L -H “Accept: application/atom+xml” “http://dx.doi.org/10.1126/science.1157784” Or if you are already using Crossref’s “unixref” format:\ncurl -D - -L -H “Accept: application/unixref+xml” “http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784\u0026amp;#8221; This will work with over 46 million Crossref DOIs as of today, but the beauty of the setup is that from now on, any DOI registration agency can enable content negotiation for their constituencies as well. DataCite- we’re looking at you 😉 .\nIt also means that, as registration agency members (Crossref publishers, for instance) start providing more complete and richer representations of their content, we can simply redirect content-negotiated requests directly to them.\nWe expect that that this development will round-out Crossref’s efforts to support standard APIs including OpenURL and OAI_PMH and we look forward to seeing DOIs increasingly used in linked data applications.\nFinally, Crossref would just like to thank the IDF and CNRI for their hard work on this as well as Tony Hammond and Leigh Dodds for their valuable advice and persistent goading.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/monitoring-crossref-technical-developments/", "title": "Monitoring Crossref Technical Developments", "subtitle":"", "rank": 1, "lastmod": "2011-03-29", "lastmod_ts": 1301356800, "section": "Blog", "tags": [], "description": "Announcements regarding Crossref system status or changes are posted in an Announcements forum on our support portal (http://support.crossref.org). We recommend that someone from your organization monitor this forum to stay informed about Crossref system status, schema changes, or other issues affecting deposits and queries. 
Subscribe to this forum via RSS feed (https://0-support-crossref-org.libus.csd.mu.edu/hc/en-us) or select the ‘Subscribe’ option in the forum to subscribe by email.\nThe TWG Discussion forum replaces the TWG mailing list and can be accessed by members of the Crossref community who log in to our support portal.", "content": "Announcements regarding Crossref system status or changes are posted in an Announcements forum on our support portal (http://support.crossref.org). We recommend that someone from your organization monitor this forum to stay informed about Crossref system status, schema changes, or other issues affecting deposits and queries. Subscribe to this forum via RSS feed (https://0-support-crossref-org.libus.csd.mu.edu/hc/en-us) or select the ‘Subscribe’ option in the forum to subscribe by email.\nThe TWG Discussion forum replaces the TWG mailing list and can be accessed by members of the Crossref community who log in to our support portal. Intended topics include technical matters related to Crossref’s services, DOI issues and Crossref system operation.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2010/", "title": "2010", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/add-linked-images-to-pdfs/", "title": "Add linked images to PDFs", "subtitle":"", "rank": 1, "lastmod": "2010-08-16", "lastmod_ts": 1281916800, "section": "Blog", "tags": [], "description": "While working on an internal project, we developed “pdfstamp“, a command-line tool that allows one to easily apply linked images to PDFs. We thought some in our community might find it useful and have released it on github. Some more PDF-related tools will follow soon.", "content": "While working on an internal project, we developed “pdfstamp“, a command-line tool that allows one to easily apply linked images to PDFs. We thought some in our community might find it useful and have released it on github. Some more PDF-related tools will follow soon.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/inchi/", "title": "InChI", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/xmp/", "title": "XMP", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-in-rsc-pdfs/", "title": "XMP in RSC PDFs", "subtitle":"", "rank": 1, "lastmod": "2010-08-03", "lastmod_ts": 1280793600, "section": "Blog", "tags": [], "description": "Just a quick heads-up to say that we’ve had a go at incorporating InChIs and ontology terms into our PDFs with XMP. 
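As the post goes on to note, one nice side effect of standardizing on fixed-length InChIKeys is that they can be pulled back out with a simple regular expression. A minimal sketch, assuming the standard 14-10-1 uppercase InChIKey shape and that the XMP packet survives as plain text inside the PDF; article.pdf stands in for any of the example files mentioned below:
# List the distinct InChIKeys embedded in a PDF's XMP packet.
# InChIKeys are fourteen letters, a hyphen, ten letters, a hyphen and one letter.
strings article.pdf | grep -oE '[A-Z]{14}-[A-Z]{10}-[A-Z]' | sort -u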
There isn’t a lot of room in an XMP packet so we’ve had to be a bit particular about what we include.\nInChIs: the bigger the molecule the longer the InChI, so we’ve standardized on the fixed-length InChIKey. This doesn’t mean anything on its own, so we’ve gone the Semantic Web route of including an InChI resolver HTTP URI.", "content": "Just a quick heads-up to say that we’ve had a go at incorporating InChIs and ontology terms into our PDFs with XMP. There isn’t a lot of room in an XMP packet so we’ve had to be a bit particular about what we include.\nInChIs: the bigger the molecule the longer the InChI, so we’ve standardized on the fixed-length InChIKey. This doesn’t mean anything on its own, so we’ve gone the Semantic Web route of including an InChI resolver HTTP URI. Alternatively you can extract the InChIKeys with a regular expression. Ontology terms: we’re using HTTP URIs again and pointing to either Open Biomedical Ontology URIs (biology, biomedicine; slashy) or RSC ontology terms (chemistry; hashy). Often the OBO URIs resolve to a specific web page, but for the moment the RSC URIs just point to a large OWL file. Slashy URIs are quite a bit more involved so we’ll have to see what the demand is like. There’s only about 4K to play with, so it’s only ever going to be a best-of. More detailed article metadata has to go in either a sidecar file, as Tony has pointed out before, or ideally on the article landing page. The example files are here and I’ve posted something with a different slant on the RSC technical blog.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/opensearch/sru-integration-paper/", "title": "OpenSearch/SRU Integration Paper", "subtitle":"", "rank": 1, "lastmod": "2010-07-19", "lastmod_ts": 1279497600, "section": "Blog", "tags": [], "description": "Since I’ve already blogged about this a number of times before here, I thought I ought to include a link to a fuller writeup in this month’s D-Lib Magazine of our nature.com OpenSearch service which serves as a case study in OpenSearch and SRU integration:\ndoi:10.1045/july2010-hammond", "content": "Since I’ve already blogged about this a number of times before here, I thought I ought to include a link to a fuller writeup in this month’s D-Lib Magazine of our nature.com OpenSearch service which serves as a case study in OpenSearch and SRU integration:\ndoi:10.1045/july2010-hammond\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/tony-hammond/", "title": "Tony Hammond", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/search-an-evolution/", "title": "Search: An Evolution", "subtitle":"", "rank": 1, "lastmod": "2010-04-28", "lastmod_ts": 1272412800, "section": "Blog", "tags": [], "description": "(Click image for full size graphic.)\nI thought I could take this opportunity to demonstrate one evolution path from traditional record-based search to a more contemporary triple-based search. The aim is to show that these two modes of search do not have to be alternative approaches but can co-exist within a single workflow.\nLet me first mention a couple of terms I’m using here: ‘graphs’ and ‘properties’. 
I’m using ‘property’ loosely to refer to the individual RDF statement (or triple) containing a property, i.", "content": "\n(Click image for full size graphic.)\nI thought I could take this opportunity to demonstrate one evolution path from traditional record-based search to a more contemporary triple-based search. The aim is to show that these two modes of search do not have to be alternative approaches but can co-exist within a single workflow.\nLet me first mention a couple of terms I’m using here: ‘graphs’ and ‘properties’. I’m using ‘property’ loosely to refer to the individual RDF statement (or triple) containing a property, i.e. a triple is a ‘(subject, property, value)’ assertion. And a ‘graph’ is just a collection of ‘properties’ (or, more properly, triples). Oh, and I’ll also use the term ‘records’ when considering ‘graphs’ as pre-fabricated objects returned within a result set.\nSo, what do we have here? We have on the left a traditional means of disseminating search results which is typically record based. A new set of records may be generated by querying using the API provided, whether proprietary or public such as Lucene or SRU/CQL. We can thus consider this search service as a ‘record store’ – even though records tend to generated anew rather than retrieved. The individual records in the result set are collections or groupings of ‘properties’ about the subjects of the query. Note that this is somewhat similar to the way music is packaged for physical distribution with many tracks (‘properties’) combined onto a single album (‘record’ or ‘graph’) which contains a thematic coherence – either same artist or compilation around a given topic.\nDigital music distribution, on the other hand, allows for albums to be atomized so that individual tracks may be cherry-picked at will. This is not dissimilar from what happens in a ‘triple store’ where the basic properties (‘tracks’) that in a regular search engine were together combined in a ‘record’ (‘album’) to present a search result can now be plucked apart and recombined into newer bespoke ensembles. Note that this querying and recombination can be applied across the full triple store or even across this triple store and remote triple stores since the same data model is applied. Certainly, at the data model level federated searching thus becomes a non-issue.\nSuppose now that our search server (or record store) is an OpenSearch-type service, i.e. the result sets are distributed as some list-based format, typically RSS, and that the list-based format either provides an RDF graph or can be transformed to such a graph, we could then use that as a basis for feeding an RDF triple store.\nSo, now then at right we have a triple store which is a large database of triples (or properties) compiled from all the records in the record store. And since this is a triple store we can query it using SPARQL. For example, this trival SPARQL query:\nPREFIX dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; PREFIX prism: \u0026lt;http://prismstandard.org/namespaces/basic/2.0/\u0026gt; SELECT ?doi ?title WHERE { ?s prism:doi ?doi . ?s dc:title ?title . 
FILTER regex(?title, \"boson\", \"i\" ) } LIMIT 5 returns the first five articles (referenced by DOI) with title containing the word ‘boson’:\n-------------------------------------------------------------------------------------------------- | doi | title | ================================================================================================== | \"10.1038/nature05513\" | \"Comparison of the Hanbury Brown–Twiss effect for bosons and fermions\" | | \"10.1038/221999a0\" | \"Physics: The Intermediate Boson\" | | \"10.1038/313506b0\" | \"The nuts and bolts of bosons\" | | \"10.1038/301287a0\" | \"The search for bosons: A golden year for the weak force\" | | \"10.1038/424003a\" | \"Below-par performance hampers Fermilab quest for Higgs boson\" | -------------------------------------------------------------------------------------------------- Now let’s contrast this with a conventional record-based search, such as shown at left, to find the first five articles (referenced by DOI) with title containing the word ‘boson’ would use a query (here SRU/CQL, and CQL is bolded) such as:\n?query=dc.title=\"boson\"\u0026maximumRecords=5\u0026httpAccept=application/rss+xml and would receive a set of result records (here RSS) like so:\n... \u0026lt;item rdf:about=\"http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nature05513\"\u0026gt; \u0026lt;title\u0026gt;Comparison of the Hanbury Brown–Twiss effect for bosons and fermions\u0026lt;/title\u0026gt; \u0026lt;link\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nature05513\u0026lt;/link\u0026gt; \u0026lt;dc:identifier\u0026gt;doi:10.1038/nature05513\u0026lt;/dc:identifier\u0026gt; \u0026lt;dc:title\u0026gt;Comparison of the Hanbury Brown–Twiss effect for bosons and fermions\u0026lt;/dc:title\u0026gt; ... \u0026lt;/item\u0026gt; \u0026lt;item rdf:about=\"http://0-dx-doi-org.libus.csd.mu.edu/10.1038/221999a0\"\u0026gt; \u0026lt;title\u0026gt;Physics: The Intermediate Boson\u0026lt;/title\u0026gt; \u0026lt;link\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.1038/221999a0\u0026lt;/link\u0026gt; \u0026lt;dc:identifier\u0026gt;doi:10.1038/221999a0\u0026lt;/dc:identifier\u0026gt; \u0026lt;dc:title\u0026gt;Physics: The Intermediate Boson\u0026lt;/dc:title\u0026gt; ... \u0026lt;/item\u0026gt; ... Note also that there is an interesting halfway house as shown in the diagram, whereby a set of result records presenting a single RDF graph can be queried as its own (very) restricted triple store.\nIn general, because a triple store is so primitive and it can be queried alongside other triple stores the queries that can be put together can be highly complex and customized with arbitrary data. The result from such a query differs from a traditional ‘record’ where a fixed property set is bound together in a presentation. 
Such a result is user-determined as opposed to the server-determined nature of traditional result ‘records’.\nI hope that this post has been able to show in some degree that although there are some obvious differences there is nevertheless a synergy between these two modes of searching: prêt-à-porter and tailored.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-and-linked-data-some-concrete-proposals/", "title": "DOIs and Linked Data: Some Concrete Proposals", "subtitle":"", "rank": 1, "lastmod": "2010-03-25", "lastmod_ts": 1269475200, "section": "Blog", "tags": [], "description": "Since last month’s threads (here, here, here and here) talking about the issues involved in making the DOI a first-class identifier for linked data applications, I’ve had the chance to actually sit down with some of the thread’s participants (Tony Hammond, Leigh Dodds, Norman Paskin) and we’ve been able sketch-out some possible scenarios for migrating the DOI into a linked data world.\nI think that several of us were struck by how little actually needs to be done in order to fully address virtually all of the concerns that the linked data community has expressed about DOIs.", "content": "Since last month’s threads (here, here, here and here) talking about the issues involved in making the DOI a first-class identifier for linked data applications, I’ve had the chance to actually sit down with some of the thread’s participants (Tony Hammond, Leigh Dodds, Norman Paskin) and we’ve been able sketch-out some possible scenarios for migrating the DOI into a linked data world.\nI think that several of us were struck by how little actually needs to be done in order to fully address virtually all of the concerns that the linked data community has expressed about DOIs. Not only that- but in some of these scenarios we would put ourselves in a position to be able to semantically-enable over 40 million DOIs with what amounts to the flick of a switch.\nGiven the huge interest in linked data on the part of researchers and Crossref members- it seems like it would be a fantastic boon to both the IDF (International DOI Foundation) and Crossref if we were able to do something quickly here.\nAnyway- The following are notes outlining several concrete proposals for addressing the limitations of DOIs as identifiers in linked data applications. They range in complexity/effort involved- with the simplest scenario providing minimal (yet functional) LD capabilities for just one RA’s members (Crossref’s) and the most complex providing per-RA and per-RA-member configurability on how DOIs would behave for LD applications.\nWe’d appreciate comments, questions, suggestions, corrections, etc.\nA: Simplest Scenario What would need to be done? Crossref implements a linked data service. For example, hosted at rdf.crossref.org. Crossref recommends that any member publisher who wants to add rudimentary linked data capabilities to their site could simply insert some simple link elements into their landing Pages. 
So, for instance, for the article with the DOI 10.5555/1234567 in the Journal of Psychoceramics, the publisher would put the following in the landing page for the article: \u0026lt;link rel=\u0026quot;primarytopic\u0026quot; href=\u0026quot;http://0-doi-crossref-org.libus.csd.mu.edu/10.5555/1234567\u0026quot; /\u0026gt; \u0026lt;link rel=\u0026quot;alternate\u0026quot; type=\u0026quot;application/rdf+xml\u0026quot; href=\u0026quot;http://0-rdf-crossref-org.libus.csd.mu.edu/metadata/10.5555/1234567.rdf\u0026quot; title=\u0026quot;RDF/XML version of this document\u0026quot;/\u0026gt; \u0026lt;link rel=\u0026quot;alternate\u0026quot; type=\u0026quot;text/html\u0026quot; href=\u0026quot;http://www.journalofpsychoceramics.org/10.5555/1234567.html\u0026quot; title=\u0026quot;HTML version of this document\u0026quot;/\u0026gt; \u0026lt;link rel=\u0026quot;alternate\u0026quot; type=\u0026quot;application/json\u0026quot; href=\u0026quot;http://0-rdf-crossref-org.libus.csd.mu.edu/metadata/10.5555/1234567.json\u0026quot; title=\u0026quot;RDF/JSON version of this document\u0026quot;/\u0026gt; \u0026lt;link rel=\u0026quot;alternate\u0026quot; type=\u0026quot;text/turtle\u0026quot; href=\u0026quot;http://0-rdf-crossref-org.libus.csd.mu.edu/metadata/10.5555/1234567.ttl\u0026quot; title=\u0026quot;Turtle version of this document\u0026quot;/\u0026gt;\nIn the above snippet the HTML version of the document is the publisher’s existing landing page.\nHow it would work A sem-web-enabled browser would query dx.doi.org/10.5555/1234567 and get a normal 302 redirect to the publisher’s landing page. The sem-web-enabled browser would sniff the page for the link elements and retrieve the representations it wanted from rdf.crossref.org The returned document would contain an appropriate representation of the metadata that the publisher has deposited with Crossref. It would also assert that: doi.crossref.org/10.5555/12334567 owl:sameAs dx.doi.org/10.5555/1234567 . dx.doi.org/10.5555/12334567 owl:sameAs info:doi/10.5555/12334567\u0026lt;/p\u0026gt; info:doi/10.5555/12334567 owl:sameAs doi:10.5555/1234567\nAlternatively, the publisher could implement their own linked data support on their own domain using whatever appropriate method they want. So, for instance, a larger publisher could support content negotiation at their site and return different/enhanced metadata, etc. Pros Doesn’t require changes at DOI/Handle levels Is easy for publisher to opt-in or opt-out Requires minimal development on the part of Crossref. Cons Only applies to Crossref DOIs. It depends on publishers taking action. Might be a long time before publishers add the needed links to their landing pages or support content negotiation. DOI system is still not strictly LD compliant (e.g. it is returning 302 redirects. Naive sem-web browsers might ‘stop’ after getting a 302. Should ideally use 303s, content negotiation, etc.) Doesn’t work for DOIs that currently bypass landing pages and which go directly to content. B: Simple + IDF Global Semantic Compliance What would need to be done? Same as “Simplest Scenario” IDF globally changes dx.doi.org to return 303 redirect How would it work? Same as Simplest Scenario, except that, because sem-web-enabled browser had been told it was being redirected to a NIR (via the 303), it would presumably be more likely to continue. Pros All DOIs conform to expectations for LD identifiers Easy for publisher to opt-in or opt-out Requires minimal development on part of Crossref Requires minimal work (?) 
on part of IDF Cons Requires global change on part of IDF. Global change might conflict with requirements of other RAs. It depends on publishers taking action. Might be a long time before publishers add needed links to their landing pages or support content negotiation. Doesn’t work for DOIs that currently bypass landing pages (e.g. OECD spreadhseets, UICR datasets, etc.) C: Simple + IDF Global Semantic Compliance + RA CN Intercept What would need to be done? Same as “B: Simple + IDF Global Semantic Compliance” Scenario\nIDF changes dx.doi.org to redirect content-negotiated dx.doi.org queries to RA-controlled resolver depending on the preferences of the RA.\nRA implements DOI resolver (e.g. dx.crossref.org) that supports content negotiation. RA allows its members to specify to the RA that they want either: \u0026lt;ol type=a\u0026gt;\nRA to forward all requests to the member’s site.\nRA to “intercept” content-negotiations for non-HTML representations and direct them appropriately (e.g. return appropriate representation from rdf.crossref.org) How would it work? Pros All DOIs conform to expectations for LD identifiers Allows RA to potentially LD-enable its members very quickly. Easy for ra-members to opt-in or opt-out Requires minimal development on part of Crossref Would even work for DOIs that bypass landing pages Cons Requires global change on part of IDF. Global change might conflict with requirements of other RAs. Requires change to add decision logic implementation on part of IDF. Requires development of RA resolvers that implement per-member resolution logic (note- this would probably actually be done at DOI level) D: Simple + IDF Selective Semantic Compliance + RA CN Intercept What would need to be done? Same as Simplest Scenario\nIDF changes dx.doi.org to return either 302 or 303 redirect depending on the preferences of the RA.\nIDF changes dx.doi.org to redirect content-negotiated dx.doi.org queries to RA-controlled resolver depending on the preferences of the RA.\nRA implements DOI resolver (e.g. dx.crossref.org) that supports content negotiation. RA allows its members to specify to the RA that they want either: RA to forward all requests to the member’s site.\nRA to “intercept” content-negotiations for non-HTML representations and direct them appropriately (e.g. return appropriate representation from rdf.crossref.org) How would it work? Pros Allows RA to potentially LD-enable its members very quickly. Easy for ra-members to opt-in or opt-out Requires minimal development on part of Crossref Would even work for DOIs that bypass landing pages Cons Only some DOIs conform to expectations for LD identifiers Requires change to add decision logic implementation on part of IDF. 
Requires development of RA resolvers that implement per-member resolution logic (note- this would probably actually be done at DOI level) ", "headings": ["A: Simplest Scenario","What would need to be done?","How it would work","Pros","Cons","B: Simple + IDF Global Semantic Compliance","What would need to be done?","How would it work?","Pros","Cons","C: Simple + IDF Global Semantic Compliance + RA CN Intercept","What would need to be done?","How would it work?","Pros","Cons","D: Simple + IDF Selective Semantic Compliance + RA CN Intercept","What would need to be done?","How would it work?","Pros","Cons"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/is-frbr-the-osi-for-web-architecture/", "title": "Is FRBR the OSI for Web Architecture?", "subtitle":"", "rank": 1, "lastmod": "2010-02-13", "lastmod_ts": 1266019200, "section": "Blog", "tags": [], "description": "(This post is just a repost of a comment to Geoff’s last entry made because it’s already rather long, because it contains one original thought - FRBR as OSI - and because, well, it didn’t really want to wait for moderation.)\nHi Geoff:\nFirst off, there is no question but that Crossref was established to take on the reference linking challenge for scholarly literature. (Hell, it’s there, as you point out, in the organization name - PILA - as well as in the application name - Crossref.", "content": "(This post is just a repost of a comment to Geoff’s last entry made because it’s already rather long, because it contains one original thought - FRBR as OSI - and because, well, it didn’t really want to wait for moderation.)\nHi Geoff:\nFirst off, there is no question but that Crossref was established to take on the reference linking challenge for scholarly literature. (Hell, it’s there, as you point out, in the organization name - PILA - as well as in the application name - Crossref.)\nBut one should also remember that DOI as it was sold at the time was promising so much more. I disagree with you that the participants back then were as wholly innocent of the FRBR terms as you might suggest. Certainly there were ample presentations on DOI that sought to elucidate those relationships.\nNo matter. FRBR is a useful reference model to clarify some of these concepts. But not one that we are overly concerned with at this time. Nor even whether DOI maps one to one onto a given FRBR layer. What we are more concerned with on a pragmatic level is how DOI maps onto the Web architecture and especially how it plays along with Linked Data concepts.\n(Aside: A propos FRBR we might be in danger of repeating the OSI mistake for standardizing the network layer model. Ultimately that was maintained as a reference model but dropped as a concrete model in favour of the TCP/IP stack. Could be that FRBR is our OSI and Linked Data is our TCP/IP stack? That is, we might have to settle on the coarser data model in order to get a coherent story out the door where all can agree.)\nYou say:\n“we need a mechanism to distinguish between when we are getting the thing pointed to by the Crossref DOI (the PDF , HTML, etc.) as opposed to “something about the thing” (e.g. the landing page, metadata record, etc.)”\nBut that is exactly what we were chasing up in the earlier posts (both my DOI: What Do We Got? and John Erickson’s DOIs, URIs and Cool Resolution). You want to distinguish between a thing and a description about a thing. And Web architecture does just that: it distinguishes between Information Resources (i.e. 
the things) and Non-Information Resources (i.e. descriptions of the things).\nNow is this something that Crossref can truly distinguish and make apparent in its service architecture? If we retain the notion of landing page we are already essentially saying that a Crossref HTTP URI identifies a description of the resource, i.e. a Non-Information Resource, or Other Resource, and that is properly indicated within the architecture by returning a “303 See Other status” code.\nI think that’s all we’re saying at the moment as a first step.\nWeb architecture wants to know if the DOI HTTP URI is a thing or description of a thing. I say the latter. You seem to suggest in your comment the latter too. I wonder if we could get a vote on that.\nAnd btw, I am not suggesting that Crossref needs to dive into the business of “tracking compound documents in their entirety”. Far from it. Let’s just get a common resource architecture agreed publicly and then we can build on that.\nThis observation I received in a private email is something I fully support:\n“The real problem is what doi http uri identify on the web. Everything flows from the answer to that Q.”\nTony\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/does-a-crossref-doi-identify-a-work/", "title": "Does a Crossref DOI identify a “work?”", "subtitle":"", "rank": 1, "lastmod": "2010-02-11", "lastmod_ts": 1265846400, "section": "Blog", "tags": [], "description": "Tony’s recent thread on making DOIs play nicely in a linked data world has raised an issue I’ve meant to discuss here for some time- a lot of the thread is predicated on the idea that Crossref DOIs are applied at the abstract “work” level. Indeed, that is what it currently says in our guidelines. Unfortunately, this is a case where theory, practice and documentation all diverge.\nWhen the Crossref linking system was developed it was focused primarily on facilitating persistent linking amongst journals and conference proceedings.", "content": "Tony’s recent thread on making DOIs play nicely in a linked data world has raised an issue I’ve meant to discuss here for some time- a lot of the thread is predicated on the idea that Crossref DOIs are applied at the abstract “work” level. Indeed, that is what it currently says in our guidelines. Unfortunately, this is a case where theory, practice and documentation all diverge.\nWhen the Crossref linking system was developed it was focused primarily on facilitating persistent linking amongst journals and conference proceedings. The system was quickly adapted to handle books and more recently to handle working papers, technical reports, standards and “components”- a catchall term used to refer to everything from individual article images to database records.\nIn practice the content outside of the core journals and conference proceedings has accounted for relatively low volume. However, we expect that over the next few years this will change and that books and databases will increasingly drive the future growth in Crossref’s citation linking services. Interestingly, these record types all share characteristics that make them substantially different from the journals and conference proceedings that we have hitherto focused on.\nBoth books and databases introduce new challenges to the technology and policies of our citation linking service. 
The challenges revolve around two areas:\nStructure: Both books and databases can have complex structures and the publishers of this content are likely to require granular identification of these content substructures along with a mechanism for documenting the relationship between these substructures (e.g. this section is part of this chapter which is part of this monograph which is part of this series) Versioning: Unlike typical journals and conference proceedings, books and database records sometimes change over time. When confronted with the issues of structure and versioning, publishers are often tempted to take shortcuts and decide to simply assign DOIs at the highest-level structure and to the “work” instead of a particular “manifestation” or version of that work. Indeed, section 5.5 of Crossref’s [DOI Name Information and Guidelines][2] recommends this. But this approach could have a negative impact on the integrity of the scholarly citation record that Crossref is attempting to maintain.\nFundamentally, Crossref DOIs are aimed at providing a persistent online citation infrastructure for scholarly and professional publishers. Consequently, decisions about where to apply Crossref DOIs should be guided by common expectations about the way in which citations work. Citations are typically used to credit ideas or provide evidence. A reader follows a citation in order to obtain more detail or to verify that an author is accurately representing the item cited. A rule of thumb is that a reader has a reasonable expectation that when they follow a citation, they will be taken to what the author saw when creating the citation. Any divergent behavior could result in the reader concluding that the author was misrepresenting the item cited. A further implication of this is that any changes to content that are likely to affect the crediting or interpretation of the content should result in that changed content getting a new Crossref DOI.\nTypically, this means that Crossref DOIs should probably be assigned at the expression level and different expressions should be assigned different Crossref DOIs. This is because assigning a Crossref DOI at the higher “work” level is generally not granular enough to guarantee that a reader following the citation will see what the author saw when creating the citation. For example, one translation of a work might be substantially different from another translation of the same work. Similarly, a draft version of a work might be substantially different from the final published version of the work. In each case, resolving a citation to a different expression of the work than the expression that was originally cited might result in the reader interpreting the content differently than the citing author.\nIn general, different “equivalent manifestations” of the same work can safely be assigned the same Crossref DOI. So, for instance, the HTML formatted version of an article and the PDF formatted version of an article can almost always be assigned the same Crossref DOI. Any differences between the two are unlikely to affect the crediting of, or reader’s interpretation of, the work. But sometimes it is even possible that different manifestations of an expression will differ enough to merit different Crossref DOIs. For instance, a semantically enhanced version of an article might require new crediting (e.g. 
the parties responsible for adding the semantic information) and the resulting semantic enhancement may conceivably alter the reader’s interpretation of the article.\nUnfortunately, there is no hard and fast rule about where and when to assign new Crossref DOIs. Instead there is only a guideline, namely:\n“Assign new Crossref DOIs to content in a way that will ensure that a reader following the citation will see something as close to what the original author cited as is possible.”\nThe implications of this for publishers are important, especially when they are assigning DOIs to protean record types. For instance, it may mean that:\nBook publishers should be expected to keep old editions of books available for link resolution purposes. Publishers of content that can change rapidly (e.g. by the second) should provide facilities for creating frozen, archived snapshots of content for citation purposes. All publishers of protean content should issue guidelines instructing researchers on when it is appropriate to cite a work, manifestation or version. Crossref needs to actively consider these issues as publishers start assigning Crossref DOIs to more dynamic types of content. Minimally, we should be able to provide publishers with recommendations on how to make dynamic content citable. We may even want to consider enshrining certain types of behavior in our terms and conditions so as to ensure the future integrity of the scholarly citation record.\nIn short, we need to update our guidelines.\n[2]: Crossref DOI display guidelines\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-response-page/", "title": "The Response Page", "subtitle":"", "rank": 1, "lastmod": "2010-02-10", "lastmod_ts": 1265760000, "section": "Blog", "tags": [], "description": "(Update - 2010.02.10: I just saw that I posted here on this same topic over a year ago. Oh well, I guess this is a perennial.)\nI am opening a new entry to pick up one point that John Erickson made in his last comment to the previous entry:\n“I am suggesting that one “baby step” might be to introduce (e.g.) RDFa coding standards for embedding the doi:D syntax.”\nYea!", "content": "(Update - 2010.02.10: I just saw that I posted here on this same topic over a year ago. Oh well, I guess this is a perennial.)\nI am opening a new entry to pick up one point that John Erickson made in his last comment to the previous entry:\n“I am suggesting that one “baby step” might be to introduce (e.g.) RDFa coding standards for embedding the doi:D syntax.”\nYea!\nIt might be worth consulting the latest Crossref DOI Name Information and Guidelines to see what that has to say about this. Section 6.3 - The response page has these two specific requirements for publishers:\nWhen metadata and DOIs are deposited with Crossref, the publisher must have active response pages in place so that they can resolve incoming links. A minimal response page must contain a full bibliographic citation displayed to the user. A response page without bibliographic information should never be presented to a user. What is truly shocking about these requirements is that they are purely user focussed. There is no mention whatsoever of machines. One might have thought that with the Linked Data gospel in full swing there would at least be a nod to machine-readable metadata. But there’s none. I’m not saying that there should be any requirement, or even any recommendation. 
But a mention might have been useful to chivvy us all along.\nI agree with John that publishers could be encouraged (or even just reminded) that machine-readable metadata could be made available through various mechanisms: HTML META tags (such as we currently provide at Nature - and as blogged here earlier), COinS objects, RDF/XML comments, or best of all RDFa markup as John mentions.\nThe Web is getting semantic. It’s about time that Crossref members joined the wave. And it would be helpful if Crossref were there to help us with some new guidelines too!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/doi-what-do-we-got/", "title": "DOI: What Do We Got?", "subtitle":"", "rank": 1, "lastmod": "2010-02-09", "lastmod_ts": 1265673600, "section": "Blog", "tags": [], "description": "(Click image for full size graphic.)\nFollowing the JISC seminar last week on persistent identifiers (#jiscpid on Twitter) there was some discussion about DOI and its role within a Linked Data context. John Erickson has responded with a very thoughtful post DOIs, URIs and Cool Resolution, which ably summarizes the current problem with DOI: the way the DOI is implemented by the handle HTTP proxy may not have kept pace with actual HTTP developments.", "content": "\n(Click image for full size graphic.)\nFollowing the JISC seminar last week on persistent identifiers (#jiscpid on Twitter) there was some discussion about DOI and its role within a Linked Data context. John Erickson has responded with a very thoughtful post DOIs, URIs and Cool Resolution, which ably summarizes the current problem with DOI: the way the DOI is implemented by the handle HTTP proxy may not have kept pace with actual HTTP developments. (For example, John notes that the proxy is not capable of dealing with ‘Accept’ headers.) He has proposed a solution, and the post has attracted several comments.\nI just wanted to offer here the above diagram in an attempt to corral some of the various facets relating to DOI that I am aware of. I realize that this may seem like an open invitation to flame on - and this is a very preliminary draft - but … be kind!\nSo, this may be totally off the wall but it represents my best understanding of DOI as used by Crossref.\nI have distinguished three main contexts:\nGeneric Data - A generalized information context where an object is identified with a DOI, an identifier system that is currently being ratified through the ISO process. This is the raw DOI number. (This definitely is not a first class object on the Web as it has no URI.) Web Data - An online information context (here I use the term ‘Web’ in its widest sense) where resources are identified by URI (not necessarily an HTTP URI). Here DOI is represented under two URI schemes: ‘doi:’ (unregistered but preferred by Crossref), and ‘info:’ (registered and available for general URI use). Also it has a presence on the Web via an HTTP proxy (dx.doi.org) URL where it is used as a slug to create a permalink (as listed at ‘A’). A simple HTTP redirect is used (with status code 302) to turn this permalink into the publisher response page http://example/1. (Note that typically a second redirect will occur on the publisher platform, here shown by the redirect to http://example/2.) Linked Data - An online information context where resources are identified by HTTP URI and conform to Linked Data principles. 
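As a concrete aside - and only as a sketch, not a description of any Crossref or IDF tooling - the behaviour in question can be probed directly. The snippet below assumes Python with the third-party requests package and borrows one of the DOIs cited later in this index purely as an example; it asks the proxy for a DOI HTTP URI with an RDF ‘Accept’ header and reports the first redirect, i.e. whether the answer is 302 Found or 303 See Other.

```python
# Minimal sketch (not part of any Crossref or IDF tooling): ask the dx.doi.org
# proxy for a DOI HTTP URI with an RDF Accept header and look only at the first
# hop, to see whether it answers 302 Found (default representation) or
# 303 See Other (a description of the thing). The DOI is just an example value
# taken from a citation elsewhere in this index.
import requests

doi_uri = "http://dx.doi.org/10.1038/346822a0"

response = requests.get(
    doi_uri,
    headers={"Accept": "application/rdf+xml"},  # attempt at content negotiation
    allow_redirects=False,                       # inspect the first redirect only
)

print(response.status_code)               # 302 or 303
print(response.headers.get("Location"))   # where the proxy points next
```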
Now this is where a tension arises between the common publisher perspective and the strict semantic viewpoint. Implicit in the general Web context given above was the notion that the permalink (‘A’) was somehow related to the abstract object and the redirection service applied to it associated the abstract resource with concrete representations of the object. So how do we relate the DOI HTTP URI with the abstract (‘work’) identifier listed at ‘D’ in the diagram?\nWell the Architecture of the World Wide Web recognizes two distinct classes of resources: Information Resources (IR) and Non-Information Resources (NR). (Note: Only the term ‘information resource’ is used in AWWW.) IR are those that can be directly retrieved using HTTP, whereas NR are not directly retrievable but have an associated description which is retrievable and is itself a proxy for the real world object.\nSo either the HTTP URI denotes an IR (as listed at ‘B’) and is resolved (through HTTP status code ‘302 Found’) to a default representation, which is the view that the Linked Data community would currently have of DOI. But this is at odds with the Crossref position, which regards DOI as identifying the abstract work. Alternatively, to better fit the Crossref model of DOI, the HTTP URI would denote an NR (as listed at ‘A’) which would be resolved (through HTTP status code ‘303 See Other’) to an associated description - a publisher response page.\nThere will be those self-appointed URI czars who will bemoan the fact of there being multiple URIs. But frankly there is nothing inherently wrong with that. Just as in the real world there are many languages so in the online world there are multiple contexts and histories. We can attempt to make some sense of this by making use of the well-known semantic properties owl:sameAs and ore:similarTo and declare (as also shown in the diagram) the following assertions:\n``\nNote that ore:similarTo (stemming from the OAI-ORE work) is a weaker kind of relationship than owl:sameAs (which comes from OWL) and may be appropriate in this usage.\nIn sum, scenario ‘A’ is what we have currently implemented, scenario ‘B’ is what might be commonly perceived as being implemented, and scenario ‘C’ may be a more correct semantic position.\nYour comments (and not unkind comments, please;) are more than welcome.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2009/", "title": "2009", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-christmas-reading-list-with-dois/", "title": "A Christmas Reading List… with DOIs", "subtitle":"", "rank": 1, "lastmod": "2009-12-13", "lastmod_ts": 1260662400, "section": "Blog", "tags": [], "description": "Was outraged (outraged, I tell you) that one of my favorite online comics, PhD, didn’t include DOIs in their recent bibliography of Christmas-related citations. So I’ve compiled them below.\nWe care about these things so that you don’t have to. Bet you will sleep better at night knowing this.\nOr perhaps not…\nA Christmas Reading List… with DOIs. Citation: Biggs, R, Douglas, A, Macfarlane, R, Dacie, J, Pitney, W, Merskey, C \u0026amp; O’Brien, J, 1952, ‘Christmas Disease’, BMJ, vol.", "content": "Was outraged (outraged, I tell you) that one of my favorite online comics, PhD, didn’t include DOIs in their recent bibliography of Christmas-related citations. So I’ve compiled them below.\nWe care about these things so that you don’t have to. Bet you will sleep better at night knowing this.\nOr perhaps not…\nA Christmas Reading List… with DOIs. Citation: Biggs, R, Douglas, A, Macfarlane, R, Dacie, J, Pitney, W, Merskey, C \u0026amp; O’Brien, J, 1952, ‘Christmas Disease’, BMJ, vol. 2, no. 4799, pp. 1378-1382.\nCrossref DOI: .doi.org/10.1136/bmj.2.4799.1378\nTitle: More Than a Labor of Love: Gender Roles and Christmas Gift Shopping\nCitation: Fischer, E \u0026amp; Arnold, S, 1990, ‘More Than a Labor of Love: Gender Roles and Christmas Gift Shopping’, Journal of Consumer Research, vol. 17, no. 3, p. 333.\nCrossref DOI: http://0-dx-doi-org.libus.csd.mu.edu/10.1086/208561\nTitle: Looking at Christmas trees in the nucleolus\nCitation: Scheer, U, Xia, B, Merkert, H \u0026amp; Weisenberger, D, 1997, ‘Looking at Christmas trees in the nucleolus’, Chromosoma, vol. 105, no. 7-8, pp. 470-480.\nCrossref DOI: http://0-dx-doi-org.libus.csd.mu.edu/10.1007/s004120050209\nTitle: The Vela glitch of Christmas 1988\nCitation: McCulloch, P, Hamilton, P, McConnell, D \u0026amp; King, E, 1990, ‘The Vela glitch of Christmas 1988’, Nature, vol. 346, no. 6287, pp. 822-824.\nCrossref DOI: http://0-dx-doi-org.libus.csd.mu.edu/10.1038/346822a0\nTitle: Cardiac Mortality Is Higher Around Christmas and New Year’s Than at Any Other Time: The Holidays as a Risk Factor for Death\nCitation: Phillips, D, 2004, ‘Cardiac Mortality Is Higher Around Christmas and New Year’s Than at Any Other Time: The Holidays as a Risk Factor for Death’, Circulation, vol. 110, no. 
25, pp. 3781-3788.\nCrossref DOI: http://0-dx-doi-org.libus.csd.mu.edu/10.1161/01.CIR.0000151424.02045.F7\nTitle: Red Crabs in Rain Forest, Christmas Island: Biotic Resistance to Invasion by an Exotic Snail\nCitation: Lake, P \u0026amp; O’Dowd, D, 1991, ‘Red Crabs in Rain Forest, Christmas Island: Biotic Resistance to Invasion by an Exotic Snail’, Oikos, vol. 62, no. 1, p. 25.\nCrossref DOI: http://0-dx-doi-org.libus.csd.mu.edu/10.2307/3545442\nTitle: The Carvedilol Hibernation Reversible Ischaemia Trial, Marker of Success (CHRISTMAS) study Methodology of a randomised, placebo controlled, multicentre study of carvedilol in hibernation and heart failure\nCitation: Pennell, D, 2000, ‘The Carvedilol Hibernation Reversible Ischaemia Trial, Marker of Success (CHRISTMAS) study Methodology of a randomised, placebo controlled, multicentre study of carvedilol in hibernation and heart failure’, International Journal of Cardiology, vol. 72, no. 3, pp. 265-274.\nCrossref DOI: http://0-dx-doi-org.libus.csd.mu.edu/10.1016/S0167-5273(99)00198-9\n", "headings": ["A Christmas Reading List… with DOIs."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/add-crossref-metadata-to-pdfs-using-xmp/", "title": "Add Crossref metadata to PDFs using XMP", "subtitle":"", "rank": 1, "lastmod": "2009-12-09", "lastmod_ts": 1260316800, "section": "Blog", "tags": [], "description": "In order to encourage publishers and other content producers to embed metadata into their PDFs, we have released an experimental tool called “pdfmark”. This open source tool allows you to add XMP metadata to a PDF. What’s really cool is that if you give the tool a Crossref DOI, it will look up the metadata in Crossref and then apply said metadata to the PDF. More detail can be found on the pdfmark page on the Crossref Labs site.", "content": "In order to encourage publishers and other content producers to embed metadata into their PDFs, we have released an experimental tool called “pdfmark”. This open source tool allows you to add XMP metadata to a PDF. What’s really cool is that if you give the tool a Crossref DOI, it will look up the metadata in Crossref and then apply said metadata to the PDF. More detail can be found on the pdfmark page on the Crossref Labs site. The usual weasel words and excuses about “experiments” apply.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/qr-codes-and-dois/", "title": "QR Codes and DOIs", "subtitle":"", "rank": 1, "lastmod": "2009-12-08", "lastmod_ts": 1260230400, "section": "Blog", "tags": [], "description": "Inspired by Google’s recent promotion of QR Codes, I thought it might be fun to experiment with encoding a Crossref DOI and a bit of metadata into one of the critters. I’ve put a short write-up of the experiment on the Crossref Labs site, which includes a demonstration of how you can generate a QR Code for any given Crossref DOI. Put them on postcards and send them to your friends for the holidays.", "content": "Inspired by Google’s recent promotion of QR Codes, I thought it might be fun to experiment with encoding a Crossref DOI and a bit of metadata into one of the critters. I’ve put a short write-up of the experiment on the Crossref Labs site, which includes a demonstration of how you can generate a QR Code for any given Crossref DOI. Put them on postcards and send them to your friends for the holidays. Tattoo them on your pets. 
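If you want to try the idea yourself, a QR Code for a DOI is easy to produce. The rough sketch below assumes Python with the third-party qrcode package and a placeholder DOI; it has nothing to do with the actual Labs write-up.

```python
# Rough sketch only - not the Crossref Labs implementation. Encode a DOI
# permalink into a QR Code image using the third-party "qrcode" package
# (which needs Pillow for image output). The DOI below is a placeholder.
import qrcode

doi_url = "http://dx.doi.org/10.5555/example"  # placeholder DOI permalink
image = qrcode.make(doi_url)                   # returns a PIL image object
image.save("doi-qr.png")
```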
The possibilities are endless.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/got-search-if-you-want-it/", "title": "got SEARCH if you want it!", "subtitle":"", "rank": 1, "lastmod": "2009-11-24", "lastmod_ts": 1259020800, "section": "Blog", "tags": [], "description": "[See this link if you’re short on time: facets search client. Only tested on Firefox at this point. Caveat: At time of writing the Crossref Metadata Search was being very slow but was still functional. Previously it was just slow.]\nFollowing on from Geoff’s announcement last month of a prototype Crossref Metadata OpenSearch on labs.crossref.org, I wanted to show what typical OpenSearch responses might look like in a more mature implementation.", "content": "[See this link if you’re short on time: facets search client. Only tested on Firefox at this point. Caveat: At time of writing the Crossref Metadata Search was being very slow but was still functional. Previously it was just slow.]\nFollowing on from Geoff’s announcement last month of a prototype Crossref Metadata OpenSearch on labs.crossref.org, I wanted to show what typical OpenSearch responses might look like in a more mature implementation.\nI have taken the liberty of modelling these on the response formats that we are already providing in our nature.com OpenSearch service which in turn are based on the draft syndication formats that I blogged here earlier.\nI am therefore returning ATOM, JSON, JSONP and RSS responses from these four OpenSearch URL templates:\nhttp://nurture.nature.com/cgi-bin/opensearch?db=crossref\u0026#038;out=atom\u0026#038;q={searchTerms} http://0-nurture-nature-com.libus.csd.mu.edu/cgi-bin/opensearch?db=crossref\u0026#038;out=json\u0026#038;q={searchTerms} http://0-nurture-nature-com.libus.csd.mu.edu/cgi-bin/opensearch?db=crossref\u0026#038;out=jsonp\u0026#038;q={searchTerms} http://0-nurture-nature-com.libus.csd.mu.edu/cgi-bin/opensearch?db=crossref\u0026#038;out=rss\u0026#038;q={searchTerms} as this OpenSearch description file details. Note that the URL templates include no indexing or pagination parameters as the Crossref prototype does not currently support these features.\nAn example query (‘apple’) returning an ATOM feed from a Crossref Metadata OpenSearch would be the following:\nhttp://nurture.nature.com/cgi-bin/opensearch?db=crossref\u0026#038;out=atom\u0026#038;q=apple And the same query returning a JSON version of that ATOM feed would look as follows:\nhttp://nurture.nature.com/cgi-bin/opensearch?db=crossref\u0026#038;out=json\u0026#038;q=apple By the way, this is just for demonstration purposes and there are still issues to be resolved including character encoding.\nThis interface uses the existing Crossref OpenSearch response format and parses the COinS objects embedded in that response to provide a more standard OpenSearch syndication result set format. The prototype implemenatation also has some bugs which I needed to work around. (I will forward on details of these.) And there is also a more fundamental issue of response time from the experimental search server.\nBut still this should give some idea of what a Crossref Metadata OpenSearch service could look like.\nTo show this all in action I’ve worked up one of my demo OpenSearch clients for nature.com OpenSearch which displays a facetted search response for a Crossref search. 
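By way of illustration, a minimal client for one of the URL templates above might look like the sketch below. It assumes Python with the requests package; the nurture.nature.com endpoints were experimental 2009-era demos and may well no longer respond.

```python
# Client-side sketch of using one of the OpenSearch URL templates listed above:
# drop the search terms into the {searchTerms} slot, fetch, and parse the JSON
# rendering of the feed. Assumes the third-party requests package; the
# nurture.nature.com endpoints were 2009-era demos and may no longer respond.
from urllib.parse import quote
import requests

TEMPLATE = "http://nurture.nature.com/cgi-bin/opensearch?db=crossref&out=json&q={searchTerms}"

def crossref_opensearch(terms):
    url = TEMPLATE.replace("{searchTerms}", quote(terms))
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example query from the post:
# results = crossref_opensearch("apple")
```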
For good measure this also includes an OpenSearch interface for PubMed and the search client allows for simple selection between three journal databases: nature.com, Crossref and PubMed.\nOf course, with a reasonably uniform set of search result formats such as presented here it then becomes a simple exercise to reuse these search responses in additional search clients.\nAs can be anticipated it would be very straightforward to carry this over into a single metasearch service which could run across these multiple databases.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-cheatsheet-for-nature.com-opensearch/", "title": "A Cheatsheet for nature.com OpenSearch", "subtitle":"", "rank": 1, "lastmod": "2009-10-22", "lastmod_ts": 1256169600, "section": "Blog", "tags": [], "description": " Following on from my recent post about our shiny new nature.com OpenSearch service we just put up a cheatsheet for users. I’m posting about this here as this may also be of interest especially to those exploring how SRU and OpenSearch intersect.\nThe cheatsheet can be downloaded from our nature.com OpenSearch test page and is available in two forms:\nCheatsheet (PDF, 65K) Cheatsheet (PNG, 141K) Naturally, all comments welcome. ", "content": " Following on from my recent post about our shiny new nature.com OpenSearch service we just put up a cheatsheet for users. I’m posting about this here as this may also be of interest especially to those exploring how SRU and OpenSearch intersect.\nThe cheatsheet can be downloaded from our nature.com OpenSearch test page and is available in two forms:\nCheatsheet (PDF, 65K) Cheatsheet (PNG, 141K) Naturally, all comments welcome. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/recommendations-on-rss-feeds-for-scholarly-publishers/", "title": "Recommendations on RSS Feeds for Scholarly Publishers", "subtitle":"", "rank": 1, "lastmod": "2009-10-19", "lastmod_ts": 1255910400, "section": "Blog", "tags": [], "description": "We’re pleased to announce that a Crossref working group has released a set of best practice recommendations for scholarly publishers producing RSS feeds.\nVariations in practice amongst publisher feeds can be irritating for end-users, but they can be insurmountable for automated processes. RSS feeds are increasingly being consumed by knowledge discovery and data mining services. In these cases, variations in date formats, the practice of lumping all authors together in one \u0026lt;dc:creator\u0026gt; element, or generating invalid XML can render the RSS feed useless to the service accessing it.", "content": "We’re pleased to announce that a Crossref working group has released a set of best practice recommendations for scholarly publishers producing RSS feeds.\nVariations in practice amongst publisher feeds can be irritating for end-users, but they can be insurmountable for automated processes. RSS feeds are increasingly being consumed by knowledge discovery and data mining services. In these cases, variations in date formats, the practice of lumping all authors together in one \u0026lt;dc:creator\u0026gt; element, or generating invalid XML can render the RSS feed useless to the service accessing it.\nThe recommendations are intended to facilitate good practice in the production and provision of TOC RSS Feeds. The guidelines include general recommendations for good practice, specific recommendations on the use of RSS Modules and an example RSS TOC feed. 
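Purely as an illustration of the two habits called out above - and not a restatement of the working group's actual recommendations - the fragment below builds a single TOC item with one dc:creator element per author and an unambiguous date; the title, authors and DOI are placeholders.

```python
# Illustrative fragment only - the published working-group recommendations, not
# this sketch, are the reference. It shows the two problems named above done
# properly: one dc:creator element per author (not a lumped author string) and
# an unambiguous ISO-8601 date, emitted as well-formed XML. Title, authors and
# DOI are placeholders.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

item = ET.Element("item")
ET.SubElement(item, "title").text = "Example article title"
ET.SubElement(item, "link").text = "http://dx.doi.org/10.5555/example"
for author in ["Smith, J", "Jones, A"]:                    # one creator per author
    ET.SubElement(item, "{%s}creator" % DC).text = author
ET.SubElement(item, "{%s}date" % DC).text = "2009-10-19"   # unambiguous date

print(ET.tostring(item, encoding="unicode"))
```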
Ultimately, we expect that industry-wide adoption of these best practices will help drive more traffic to publisher web sites. Note that most of these recommendations can also be applied to non-TOC RSS feeds such as thematic feeds, automated search result feeds, etc.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/rss/", "title": "RSS", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-labs/", "title": "Crossref Labs", "subtitle":"", "rank": 1, "lastmod": "2009-10-13", "lastmod_ts": 1255392000, "section": "Blog", "tags": [], "description": "The other day Noel O’Boyle wrote to tell me that he had updated the Ubiquity plug-in that we had developed in order to make it work with the latest version of Firefox. The problem was, I had *also* updated the Ubiquity plug-in, but I hadn’t really indicated to anybody how they could find updates to the plug-in. /me=embarrassed. So it seemed time to provide a home for some of the prototypes and experiments that we’ve been developing at Crossref.", "content": "The other day Noel O’Boyle wrote to tell me that he had updated the Ubiquity plug-in that we had developed in order to make it work with the latest version of Firefox. The problem was, I had *also* updated the Ubiquity plug-in, but I hadn’t really indicated to anybody how they could find updates to the plug-in. /me=embarrassed. So it seemed time to provide a home for some of the prototypes and experiments that we’ve been developing at Crossref. To that end, we have created a Crossref Labs site. Here you can find links to various tools and services that either make it easier to use Crossref services (e.g. Blog/Ubiquity plugins and OpenSearch Description files) or that serve to illustrate a concept that has been of interest to our members (InChI lookup, TOI-DOIs). Oh, yeah- and when we update these experiments, you should be able to find the updates on their respective pages. Sorry about that Noel… Finally, I will quote from the Crossref Labs home page:\n“Most of the experiments linked to here are running on R\u0026amp;D equipment in a non-production environment. They may disappear without warning and/or perform erratically. If one of them isn’t working for some reason, come back later and try again.” Have fun.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/nature.com-opensearch-a-structured-search-service/", "title": "nature.com OpenSearch: A Structured Search Service", "subtitle":"", "rank": 1, "lastmod": "2009-10-05", "lastmod_ts": 1254700800, "section": "Blog", "tags": [], "description": "(Click panels in figure to read related posts.)\nFollowing up on my earlier posts here about the structured search technologies OpenSearch and SRU, I wanted to reference three recent posts on our web publishing blog Nascent which discuss our new nature.com OpenSearch service:\n1. Service Describes the new nature.com OpenSearch service which provides a structured resource discovery facility for content hosted on nature.com. 2. 
Clients Points to a small gallery of demo web clients for nature.", "content": " (Click panels in figure to read related posts.)\nFollowing up on my earlier posts here about the structured search technologies OpenSearch and SRU, I wanted to reference three recent posts on our web publishing blog Nascent which discuss our new nature.com OpenSearch service:\n1. Service Describes the new nature.com OpenSearch service which provides a structured resource discovery facility for content hosted on nature.com. 2. Clients Points to a small gallery of demo web clients for nature.com OpenSearch which all use the text-based JSON interface. 3. Widgets Introduces the new nature.com search desktop widgets which interface with the nature.com OpenSearch service via an RSS feed. (See also the screencast posted to YouTube.) We hope that this new search service will prove to be useful and may also provide a model for other implementations.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/please-join-us-for-the-2009-crossref-technical-meeting./", "title": "Please join us for the 2009 Crossref Technical Meeting.", "subtitle":"", "rank": 1, "lastmod": "2009-09-08", "lastmod_ts": 1252368000, "section": "Blog", "tags": [], "description": "Crossref Technical Meeting*\nThe Charles Hotel, Cambridge, MA\nMonday, November 9th, 2009\n2:00 pm - 5:00 pm\nPlease register today!\nWe also encourage you to register for our 10th Anniversary Celebration Dinner, which will take place Monday, November 9th, 2009 at 6:30 pm following the Crossref Technical Meeting at the Museum of Science in Boston, MA. Transportation from the Charles Hotel to the Museum of Science will be provided. Our 2009 Annual Meeting will take place on Tuesday, November 10th at 9:00 am in the Charles Hotel in Cambridge, MA and we urge you to register soon (if you haven’t already done so)", "content": "Crossref Technical Meeting*\nThe Charles Hotel, Cambridge, MA\nMonday, November 9th, 2009\n2:00 pm - 5:00 pm\nPlease register today!\nWe also encourage you to register for our 10th Anniversary Celebration Dinner, which will take place Monday, November 9th, 2009 at 6:30 pm following the Crossref Technical Meeting at the Museum of Science in Boston, MA. Transportation from the Charles Hotel to the Museum of Science will be provided. Our 2009 Annual Meeting will take place on Tuesday, November 10th at 9:00 am in the Charles Hotel in Cambridge, MA and we urge you to register soon (if you haven’t already done so)\nas space is limited. 
You may register for both events here.\n*Please note that this year’s Technical Meeting will be on Monday afternoon.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/ipub/", "title": "IPub", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prc-report-and-ipub-revisited/", "title": "PRC Report and “iPub” revisited", "subtitle":"", "rank": 1, "lastmod": "2009-09-07", "lastmod_ts": 1252281600, "section": "Blog", "tags": [], "description": "OK, so this has nothing to do with any Crossref projects- but there is an interesting new PRC report out by Mark Ware in which he explores how SMEs (small/medium-sized enterprises) make use of scholarly articles and whether the scholarly publishing industry is doing anything to make their lives easier. This is a topic that is close to my heart. For the past few years I’ve been saying (most recently at SSP09) that I think scholarly publishers are much too quick to dismiss the possibility of creating an iTunes-like service for scholarly publications (aka “iPub”).", "content": "OK, so this has nothing to do with any Crossref projects- but there is an interesting new PRC report out by Mark Ware in which he explores how SMEs (small/medium-sized enterprises) make use of scholarly articles and whether the scholarly publishing industry is doing anything to make their lives easier. This is a topic that is close to my heart. For the past few years I’ve been saying (most recently at SSP09) that I think scholarly publishers are much too quick to dismiss the possibility of creating an iTunes-like service for scholarly publications (aka “iPub”). The report certainly seems to indicate that there is an important audience that would benefit from such a service (SMEs) and even goes so far as to cite my occasional rants on the subject. The summary of my iPub argument has been that:\nA very large percentage of the web visits that hit publishers web sites come from sources that are unrecognised. That is, they don’t come from a subscribing institution and they don’t seem to come from a registered user or anybody who has visited the site previously. For many publishers the level of such unrecognised visitors can amount to over 90% of all the traffic that hits their sites. Most industries would look at this percentage and work hard to figure out how to monetize some of it. Our industry seems to treat it like “noise”, reasoning that only people in recognised academic and professional institutions are going to desire or understand the content on scholarly journal sites. Evidence from the NSF shows that significantly more than 50% of US students who graduate with an S\u0026amp;E degree end up employed outside of directly S\u0026amp;E related fields. This represents a large percentage of potential consumers of scholarly and professional publications who are not part of a recognised academic or professional institution. SMEs anybody? These potential consumers are faced with a bewildering variety of sources for their content. They have to deal with multiple publisher sites with different interfaces and different PPV checkout procedures. 
And they have to navigate all this without the aid of library finding tools or the professional researcher’s understanding of the scholarly journal environment. It is no wonder that they give up hope once they land on our abstract pages and face the gauntlet of another PPV checkout system. It seems to me that the industry could provide a single interface and PPV shopping cart targeted at allowing people who work outside of traditional subscribing institutions to easily purchase individual article downloads from scholarly publishers. The system would be modelled at least in part on Apple’s iTunes, a system that has been lauded (and denounced) for revolutionising the way in which consumers buy music online. The chief virtues of the iTunes system are often cited as being:\nIt contains a critical mass of content It provides a simple and consistent user interface It has a simple and inexpensive pricing model It disaggregated content (per song purchases) It interfaced transparently with the iPod. A scholarly publishing “iPub” system could seek to emulate many of these strengths but not all. Clearly such a system could not impose uniform pricing or dictate pricing, as that would be anti-competitive. The PRC report makes this same point.\nSome, including the PRC report, also claim that the publishing industry has no equivalent of the “iPod” and that this would be a weakness of the system. I don’t agree with this- I think that the “iPod” in this case is currently called “paper.” In the future we will almost certainly migrate to some iPod/Kindle-like device, but as far as fulfilling most of the iPod’s functionality (portable rendering of the content) right now, I suspect paper fits the bill.\nFinally, there is another oft-expressed concern that such a system might confuse channels for existing audiences and that librarians in particular would be very hostile to such a system. The truth is, I don’t know how librarians would react to such a system. The few I’ve mentioned it to certainly seemed amenable to the idea. Maybe this is where the PRC should do some follow-up research?\nIn any case, it seems to me that there is potentially much to be gained by simply providing an easy PPV experience where users don’t have to register with multiple sites and cope with multiple shopping cart applications. Publishers can’t seriously think that they gain competitive advantage through their shopping carts? If not, then why not standardise on a uniform interface that is easily purchased from? Perhaps it doesn’t have to look like iTunes but can instead look like PayPal (PayPub?). Providing a simple mechanism like this might enable the industry to meet the needs of important and often overlooked audiences. I keep wondering if CCC could help publishers do something here?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-is-hiring-an-rd-developer-in-oxford/", "title": "Crossref is hiring an R&D Developer in Oxford", "subtitle":"", "rank": 1, "lastmod": "2009-08-20", "lastmod_ts": 1250726400, "section": "Blog", "tags": [], "description": "We are looking to hire an R\u0026amp;D Developer in our Oxford offices. We are looking for somebody who:\nIs passionate about creating tools for online scholarly communication. Relishes working with metadata. Has experience delivering web-based applications using agile methodologies. Wants to learn new skills and work with a variety of programming languages. Enjoys working with a small, geographically dispersed team. 
Groks mixed-content model XML. Groks RDF. Groks REST. Has explored MapReduce-based database systems.", "content": "We are looking to hire an R\u0026amp;D Developer in our Oxford offices. We are looking for somebody who:\nIs passionate about creating tools for online scholarly communication. Relishes working with metadata. Has experience delivering web-based applications using agile methodologies. Wants to learn new skills and work with a variety of programming languages. Enjoys working with a small, geographically dispersed team. Groks mixed-content model XML. Groks RDF. Groks REST. Has explored MapReduce-based database systems. Is expert in one or more popular development languages (Java, C, C++, C#). Is expert in one or more popular scripting languages (Ruby, Python, Javascript). Has deployed and maintained Linux/BSD-based systems. Understands relational databases (MySQL, Postgres, Oracle). Tests first. If you are interested, please see the full job description. If you are not interested, but know somebody who might be, please let them know about this great opportunity.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/strategic-reading/", "title": "Strategic Reading", "subtitle":"", "rank": 1, "lastmod": "2009-08-14", "lastmod_ts": 1250208000, "section": "Blog", "tags": [], "description": "Allen Renear and Carole Palmer have just published an article titled “Strategic Reading, Ontologies, and the Future of Scientific Publishing” in the current issue of Science (http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784). I’m particularly happy to see this paper published because I actually got to witness the genesis of these ideas in my living room back in 2006. Since then, Allen and Carole’s ideas have profoundly influenced my thinking on the application of technology to scholarly communication.", "content": "Allen Renear and Carole Palmer have just published an article titled “Strategic Reading, Ontologies, and the Future of Scientific Publishing” in the current issue of Science (http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.1157784). I’m particularly happy to see this paper published because I actually got to witness the genesis of these ideas in my living room back in 2006. Since then, Allen and Carole’s ideas have profoundly influenced my thinking on the application of technology to scholarly communication.\nThose who have seen me speak at conferences recently will have heard me do an awful lot of ranting about how publishers and librarians need to help researchers practice the time-honored art of “reading avoidance” (or as Renear and Palmer politely put it- “strategic reading”). I even managed to squeeze this rant into a recent interview I did with Wiley-Blackwell.\nThe essence of my argument has been that our industries need not be bamboozled by the technical jargon and messianic hand-waving that typically accompany discussions of new technology trends like “web 2.0”, “text-mining”, “the semantic web”, “micro-blogging”, etc. 
This is because there is a fairly simple way for us to understand the relative import (or lack thereof) of new technologies to scholarly communication and that is to ask the following question:\n“Can the application of this technology in the realm of scholarly communication help researchers to read less?”\nIf the answer is “yes”, then you’d better pay very close attention to it.\nIn fact, I’d go so far as to say the history of scholarly publishing can be characterized by the successful adoption of conventions and tools that help researchers read strategically.\nNow I have something to cite when I rant.\nAnyway, congratulations to Allen \u0026amp; Carole.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/opensearch-formats-for-review/", "title": "OpenSearch Formats for Review", "subtitle":"", "rank": 1, "lastmod": "2009-07-23", "lastmod_ts": 1248307200, "section": "Blog", "tags": [], "description": "In an earlier post I talked about using the PAM (PRISM Aggregator Message) schema for an SRU result set. I have also noted in another post that a Search Web Service could support both SRU and OpenSearch interfaces. This does then beg the question of what a corresponding OpenSearch result set might look like for such a record.\nBased on the OpenSearch spec and also on a new Atom extension for SRU, I have contrived to show how a PAM record might be returned in a common OpenSearch format.", "content": "In an earlier post I talked about using the PAM (PRISM Aggregator Message) schema for an SRU result set. I have also noted in another post that a Search Web Service could support both SRU and OpenSearch interfaces. This does then beg the question of what a corresponding OpenSearch result set might look like for such a record.\nBased on the OpenSearch spec and also on a new Atom extension for SRU, I have contrived to show how a PAM record might be returned in a common OpenSearch format. Below I offer some mocked-up examples for each of the following formats for review purposes:\nRSS 1.0 ATOM JSON Just click the relevant figure for a text rendering of each result format for the following phrase search:\ncql.keywords adj “solar eclipse”\nIn this example we imagine that two records have been requested. (The example formats also include navigational links as per the OpenSearch spec examples.)\nNote that the JSON example closely follows the ATOM schema with a couple of main deviations:\nRepeated elements are gathered together in an array (e.g. “entry”, “dc:creator”) Attributes are broken out alongside their parent elements (e.g. 
“rel”, “href”) It would be interesting to hear what readers think of these examples - especially the JSON format.\n[Example result renderings - RSS 1.0 | ATOM | JSON; text versions: ATOM http://0-nurture-nature-com.libus.csd.mu.edu/opensearch/demo/solar2-atom.txt and JSON http://0-nurture-nature-com.libus.csd.mu.edu/opensearch/demo/solar2-json.txt]\n(Click image to get text format.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oasis-drafts-of-sru-2.0-and-cql-2.0/", "title": "OASIS Drafts of SRU 2.0 and CQL 2.0", "subtitle":"", "rank": 1, "lastmod": "2009-07-22", "lastmod_ts": 1248220800, "section": "Blog", "tags": [], "description": "As posted here on the SRU Implementors list, the OASIS Search Web Services Technical Committee has announced the release of drafts of SRU and CQL version 2.0:\nsru-2-0-draft.doc cql-2-0-draft.doc The Committee is soliciting feedback on these two documents. Comments should be posted to the SRU list by August 13. ", "content": "As posted here on the SRU Implementors list, the OASIS Search Web Services Technical Committee has announced the release of drafts of SRU and CQL version 2.0:\nsru-2-0-draft.doc cql-2-0-draft.doc The Committee is soliciting feedback on these two documents. Comments should be posted to the SRU list by August 13. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-openurl-resolver/", "title": "Crossref OpenURL resolver", "subtitle":"", "rank": 1, "lastmod": "2009-07-07", "lastmod_ts": 1246924800, "section": "Blog", "tags": [], "description": "A new version of our OpenURL resolver was deployed July 2 which should handle higher traffic (e.g. we have re-enabled the LibX plug-in). Unfortunately there were a few hiccups with the new version which I believe are now corrected (a character encoding bug and an XML structure translation problem).\nSorry for any inconvenience.", "content": "A new version of our OpenURL resolver was deployed July 2 which should handle higher traffic (e.g. 
we have re-enabled the LibX plug-in). Unfortunately there were a few hiccups with the new version which I believe are now corrected (a character encoding bug and an XML structure translation problem).\nSorry for any inconvenience.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/openurl/", "title": "OpenURL", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-primer/", "title": "XMP Primer", "subtitle":"", "rank": 1, "lastmod": "2009-06-10", "lastmod_ts": 1244592000, "section": "Blog", "tags": [], "description": "There’s a new XMP Primer (PDF) by Ron Roskiewicz (ed. Dianne Kennedy) available from XMP-Open. This is copyrighted 2008 but I only just saw this now. This is a 43 page document which provides a very gentle introduction to metadata and labelling of media and then introduces XMP into the content lifecycle and talks to the business case for using XMP. The primer covers the following areas:\nIntroduction to Metadata Introduction to XMP XMP and the Content Lifecycle XMP in Action; Use Cases Additional XMP Resources One small gripe would be that this seems to have been prepared for US letter-sized pages and although it is printable on A4 there is the slightest of clippings on the right-hand margin with no real loss of information but it does confer a sense of “incompleteness”.", "content": "There’s a new XMP Primer (PDF) by Ron Roskiewicz (ed. Dianne Kennedy) available from XMP-Open. This is copyrighted 2008 but I only just saw this now. This is a 43 page document which provides a very gentle introduction to metadata and labelling of media and then introduces XMP into the content lifecycle and talks to the business case for using XMP. The primer covers the following areas:\nIntroduction to Metadata Introduction to XMP XMP and the Content Lifecycle XMP in Action; Use Cases Additional XMP Resources One small gripe would be that this seems to have been prepared for US letter-sized pages and although it is printable on A4 there is the slightest of clippings on the right-hand margin with no real loss of information but it does confer a sense of “incompleteness”. Really there can be little excuse these days for this parochialism. Also, for a document talking up the benefits of using XMP, it’s decidedly odd that it doesn’t make use of XMP itself - or rather there is a default XMP packet in the PDF with no real useful properties such as title, author, or date. Could have been a nice little object lesson in using XMP. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/aligning-opensearch-and-sru/", "title": "Aligning OpenSearch and SRU", "subtitle":"", "rank": 1, "lastmod": "2009-06-05", "lastmod_ts": 1244160000, "section": "Blog", "tags": [], "description": "[Update - 2009.06.07: As pointed out by Todd Carpenter of NISO (see comments below) the phrase “SRU by contrast is an initiative to update Z39.50 for the Web” is inaccurate. I should have said “By contrast SRU is an initiative recognized by ZING (Z39.50 International Next Generation) to bring Z39.50 functionality into the mainstream Web”.]\n[Update - 2009.06.08: Bizarrely I find in mentioning query languages below that I omitted to mention SQL. I don’t know what that means. 
Probably just that there’s no Web-based API. And that again it’s tied to a particular technology - RDBMS.]\n(Click image to enlarge.)\nThere are two well-known public search APIs for generic Web-based search: OpenSearch and SRU. (Note that the key term here is “generic”, so neither Solr/Lucene nor XQuery really qualify for that slot. Also, I am concentrating here on “classic” query languages rather than on semantic query languages such as SPARQL.)\nOpenSearch was created by Amazon’s A9.com and is a cheap and cheerful means to interface to a search service by declaring a template URL and returning a structured XML format. It therefore allows for structured result sets while placing no constraints on the query string. As outlined in my earlier post Search Web Service, there is support for search operation control parameters (pagination, encoding, etc.), but no inroads are made into the query string itself which is regarded as opaque.\nSRU by contrast is an initiative to update Z39.50 for the Web and is firmly focussed on structured queries and responses. Specifically a query can be expressed in the high-level query language CQL which is independent of any underlying implementation. Result records are returned using any declared W3C XML Schema format and are transported within a defined XML wrapper format for SRU. (Note that the SRU 2.0 draft provides support for arbitrary result formats based on media type.)\nOne can summarize the respective OpenSearch and SRU functionalities as in this table:\nStructure \u0026lt;th width=\u0026quot;33%\u0026quot; align=\u0026quot;center\u0026quot;\u0026gt; OpenSearch \u0026lt;/th\u0026gt; \u0026lt;th width=\u0026quot;33%\u0026quot; align=\u0026quot;center\u0026quot;\u0026gt; SRU \u0026lt;/th\u0026gt; query \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; no \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; results \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; control \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; diagnostics \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; no \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; What I wanted to discuss here was the OpenSearch and SRU interfaces to a Search Web Service such as outlined in my previous post. The diagram at top of this post shows query forms for OpenSearch and SRU and associated result types. The Search Web Service is taken to be exposing an SRU interface. It might be simplest to walk through each of the cases.\n(Continues below.)\n", "content": "[Update - 2009.06.07: As pointed out by Todd Carpenter of NISO (see comments below) the phrase “SRU by contrast is an initiative to update Z39.50 for the Web” is inaccurate. I should have said “By contrast SRU is an initiative recognized by ZING (Z39.50 International Next Generation) to bring Z39.50 functionality into the mainstream Web“.]\n[Update - 2009.06.08: Bizarrely I find in mentioning query languages below that I omitted to mention SQL. I don’t know what that means. Probably just that there’s no Web-based API. And that again it’s tied to a particular technology - RDBMS.]\n(Click image to enlarge.)\nThere are two well-known public search APIs for generic Web-based search: OpenSearch and SRU. 
(Note that the key term here is “generic”, so neither Solr/Lucene nor XQuery really qualify for that slot. Also, I am concentrating here on “classic” query languages rather than on semantic query languages such as SPARQL.)\nOpenSearch was created by Amazon’s A9.com and is a cheap and cheerful means to interface to a search service by declaring a template URL and returning a structured XML format. It therefore allows for structured result sets while placing no constraints on the query string. As outlined in my earlier post Search Web Service, there is support for search operation control parameters (pagination, encoding, etc.), but no inroads are made into the query string itself which is regarded as opaque.\nSRU by contrast is an initiative to update Z39.50 for the Web and is firmly focussed on structured queries and responses. Specifically a query can be expressed in the high-level query language CQL which is independent of any underlying implementation. Result records are returned using any declared W3C XML Schema format and are transported within a defined XML wrapper format for SRU. (Note that the SRU 2.0 draft provides support for arbitrary result formats based on media type.)\nOne can summarize the respective OpenSearch and SRU functionalities as in this table:\nStructure \u0026lt;th width=\u0026quot;33%\u0026quot; align=\u0026quot;center\u0026quot;\u0026gt; OpenSearch \u0026lt;/th\u0026gt; \u0026lt;th width=\u0026quot;33%\u0026quot; align=\u0026quot;center\u0026quot;\u0026gt; SRU \u0026lt;/th\u0026gt; query \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; no \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; results \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; control \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; diagnostics \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; no \u0026lt;/td\u0026gt; \u0026lt;td align=\u0026quot;center\u0026quot;\u0026gt; yes \u0026lt;/td\u0026gt; What I wanted to discuss here was the OpenSearch and SRU interfaces to a Search Web Service such as outlined in my previous post. The diagram at top of this post shows query forms for OpenSearch and SRU and associated result types. The Search Web Service is taken to be exposing an SRU interface. It might be simplest to walk through each of the cases.\n(Continues below.)\nCase 1: OpenSearch (Native Client)\nAs noted, OpenSearch uses a URL template (declared in an OpenSearch description document) where recognized parameters are mapped to implementation-specific parameters. The bolded parameter “query” in the figure indicates an OpenSearch parameter “searchTerms” which has been mapped to the Search Web Service parameter “query“,\nAs also noted, SRU 2.0 offers support for alternate result formats (other than SRU XML) by allowing a media type (aka mime type) to be passed in an “http:accept” parameter. There is, however, no OpenSearch parameter corresponding to a format selector, so this must be hard coded directly into the URL template with a value of “application/rss+xml” - the standard media type for an RSS feed which is the common result format for OpenSearch.\n(In the diagram I have noted in parentheses that RSS in its RSS 1.0 form is RDF. And that format is a strong candidate for semantic interoperability. 
An alternate format would be Atom, which could be similarly selected with a value of “application/atom+xml”, but it is difficult to see at this time what advantage Atom confers. It does not conform to the RDF data model but may find better support in code libraries and applications.)\nThe third parameter shown for Case 1 is “queryType”, which is another new SRU 2.0 parameter. I had noted earlier that an OpenSearch query string could be passed directly through to the Search Web Service and its associated CQL parser. It turns out that this needs to be analyzed further. (And many thanks to Jonathan Rochkind for useful discussions on this.)\nI had naively assumed that an OpenSearch query string would either be packed as a CQL string or would be a simple text string which could be interpreted as CQL. The latter interpretation (text string) turns out to be true only for a single bare word or for a quoted string - both of which are recognized CQL query strings (i.e. a single search term which has a default index and relationship to that index). It fails, however, for the more general case of unquoted strings. See the following cases.\nQuery type / Query string:\nA. bare word: this\nB. quoted string: “this is a query”\nC. unquoted string: this is a query\nCase C would fail a CQL parser. So we need to signal to the Search Web Service that this is not a CQL string. And that’s where the “queryType” parameter comes in. If it’s set to “cql” then the query string is to be parsed as CQL, otherwise it must be handled in an alternate fashion. (As of now there is no value set for this parameter that I am aware of so I am using the terms “plain” and “cql” to differentiate.)\nHow this should be handled by a CQL-aware application is not immediately obvious. My first thought was to allow the application to silently quote such a string but that would change the semantics. It would be better to split the string into separate search clauses for each word and to join the search clauses by a default boolean operator, e.g. “AND“, so that case C above might be interpreted by the application as “this AND is AND a AND query”.
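In code, that default-AND interpretation might look something like the following sketch (Python; the function name is illustrative and not part of OpenSearch or SRU):

```python
def plain_to_cql(query: str, default_op: str = "and") -> str:
    """Turn a 'plain' (non-CQL) query string into a CQL query.

    Each whitespace-separated word becomes its own search clause (left to the
    default index and relation), and the clauses are joined with a default
    boolean operator, as described above for case C.
    """
    terms = query.split()
    # Quote each term so it remains a single, valid CQL search clause.
    clauses = ['"%s"' % t.replace('"', '\\"') for t in terms]
    return (" %s " % default_op).join(clauses)

print(plain_to_cql("this is a query"))
# -> "this" and "is" and "a" and "query"
```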
Now, of course, we must not expect that a typical OpenSearch implementation would be aware of CQL (or any of the SRU technologies). Instead we can simply indicate in the URL template that the “queryType” is non-CQL, by hard coding “queryType=plain”. The actual URL template which is declared in the OpenSearch description would thus be something like the following (with whitespace added for clarity):\n\u0026lt;!-- 1. queryType=\"plain\" --\u0026gt; \u0026lt;Url type=\"application/rss+xml\" \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;template=\"http://www.example/search? \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;query={searchTerms} \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026queryType=plain \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026http:accept=application/rss+xml \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\" /\u0026gt; This URL template uses one OpenSearch parameter (“searchTerms”) and that is mapped to the SRU parameter “query”. The SRU 2.0 parameters “queryType” and “http:accept” are wired in. This means that a Search Web Service would be aware of the query, would know that it was not CQL (so might invoke a handler), and would know that a result set in RSS was required.\nCase 2: OpenSearch (CQL-Aware Client)\nThe above case works for a general OpenSearch client but is now problematic for a CQL-aware client. 
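To make Case 1 concrete, here is a minimal sketch of how a client might expand that template (Python; the www.example endpoint is the placeholder from the template above and the function name is illustrative):

```python
from urllib.parse import quote

# The Case 1 template, with queryType and http:accept hard coded.
CASE1_TEMPLATE = ("http://www.example/search?"
                  "query={searchTerms}"
                  "&queryType=plain"
                  "&http:accept=application/rss+xml")

def expand_template(template: str, search_terms: str) -> str:
    """Substitute the user's terms for {searchTerms}; everything else is fixed."""
    return template.replace("{searchTerms}", quote(search_terms))

print(expand_template(CASE1_TEMPLATE, "this is a query"))
# -> http://www.example/search?query=this%20is%20a%20query&queryType=plain&http:accept=application/rss+xml
```

The same kind of substitution applies to the optional extension parameters discussed next.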
With the “queryType” set at “plain” there is no opportunity to indicate that a generic CQL string might be passed instead. We certainly wouldn’t want a non-CQL handler to operate on a valid CQL string. We need to vary the SRU 2.0 parameters and within the scope of OpenSearch this can only be done by recognizing the parameters as OpenSearch extensions. Basically, an extension is nothing more than a separately namespaced element or attribute. The recommendation is that the XML namespace would resolve to a specification document detailing the intention and format of the extension.\nThe URL template for a CQL-aware OpenSearch description could make use of the “queryType” and “http:accept” parameters as OpenSearch extensions (marked in bold italics in the figure) using a declaration like this:\n\u0026lt;!-- 2. queryType=\"cql\" --\u0026gt; \u0026lt;Url type=\"application/xml\" \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;xmlns:sru=\"http://opensearch.example/sru-extension\" \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;template=\"http://www.example/search? \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;query={searchTerms} \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026queryType={sru:queryType?} \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026http:accept={sru:httpAccept?} \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\" /\u0026gt; Note here that both parameters have been specified as being optional. Also the namespace here is pointed at a fictional OpenSearch extension document. (It doesn’t need to point to such a document - it could be anything - but it is recommended that there be a specification.)\nI’m not aware of any such OpenSearch extension document for SRU currently existing but would be prepared to contribute to drafting such a document. It seems to me that it would be very useful for general OpenSearch/SRU compatibility and probably should detail all the SRU 2.0 parameters for “searchRetrieve”. In fact, that document could be the SRU spec itself, once that was established at a fixed URL. (Whether there should be a specific OpenSearch extension document depends on whether it would be useful to provide OpenSearch implementation details.)\nCase 3: SRU (Native Client)\nThis is easy. We’re on home ground now. The query type is by default CQL, and the result format is SRU XML. The only thing that might be specified is “recordSchema” to require a schema for the result records, if there are alternate schemas supported by the Search Web Service. A default for the result records is supplied anyway.\nCase 4: SRU (Media-Typed Client)\nAgain, we’re on familiar ground. For a media-savvy SRU interface we would need to use the SRU 2.0 parameter “http:accept”. This could be used to override the default SRU XML with an alternate format, e.g. RSS.\nAnd that’s about it for this review of aligning the OpenSearch and SRU interfaces. It seems that using URL templates and OpenSearch extensions as indicated should allow for an easy OpenSearch interface onto an SRU-based Search Web Service. At a minimum we just need a permanent URL for the SRU 2.0 spec (when finalized). Alternatively a separate OpenSearch extension document could be drafted and registered. That would allow for details specific to OpenSearch to be provided, as well as bringing SRU closer into the OpenSearch realm. 
And such a document could be created now and updated with the URL for the SRU 2.0 spec as it progresses from draft to final.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/search-web-service/", "title": "Search Web Service", "subtitle":"", "rank": 1, "lastmod": "2009-05-30", "lastmod_ts": 1243641600, "section": "Blog", "tags": [], "description": "(Click image to enlarge graphic.)\nWhile the OASIS Search Web Services TC is currently working towards reconciling SRU and OpenSearch, I thought it would be useful to share here a simple graphic outlining how a search web service for structured search might be architected.\nBasically there are two views of this search web service (described in separate XML description files and discoverable through autodiscovery links added to HTML pages):", "content": "\n(Click image to enlarge graphic.)\nWhile the OASIS Search Web Services TC is currently working towards reconciling SRU and OpenSearch, I thought it would be useful to share here a simple graphic outlining how a search web service for structured search might be architected.\nBasically there are two views of this search web service (described in separate XML description files and discoverable through autodiscovery links added to HTML pages):\nOpenSearch SRU (Search and Retrieve by URL) One can see at a glance that there’s more happening down in the SRU layer. The SRU layer implements a heavyweight, robust service which provides a detailed listing of search indexes and index relations in the description document (‘SRU Explain’), is searchable using a standard query grammar - CQL (‘Contextual Query Language’), responds with result sets inside a standard XML wrapper and expressed as an XML record set (e.g. PAM) that is validatable using W3C XML Schema, and makes available a full roster of diagnostics.\nBy contrast the OpenSearch layer provides a lightweight view onto the search web service in which a simple opaque query string is sent to the server and a simple XML result set returned (usually RSS or Atom). Again a description document is made available (‘OpenSearch Description’) but this is much more coarse grained than the SRU description - e.g. it does not specify query components such as indexes or relations.\nIn practice, both views can be provided for by the same search web service. While OpenSearch does not specify any structured query it can make use of a CQL packaged query. That is, a single parameter value for the OpenSearch ‘query’ parameter can be unpacked by a CQL parser to yield a complex search query. The search query does not need to be splattered all over the URL querystring which is already using its parameter set to provide control information for the search (e.g. pagination, encoding and the like).\nAnd how would this relate to existing platform-hosted search services? Well, such services are usually bound to the host platform and are not intended to support remote applications. 
A search web service, on the other hand, would be ideally suited to offering direct support for running structured searches on platform-hosted content using off-platform apps.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/structured-search-using-prism-elements/", "title": "Structured Search Using PRISM Elements", "subtitle":"", "rank": 1, "lastmod": "2009-05-30", "lastmod_ts": 1243641600, "section": "Blog", "tags": [], "description": "We just registered in the SRU (Search and Retrieve by URL) search registry the following components:\nContext Sets\nPRISM Context Set version 2.0 PRISM Context Set version 2.1 Schemas\nPRISM Aggregator Message Record Schema Version 2.0 PRISM Aggregator Message Record Schema Version 2.1 This means that an SRU (Search and Retrieve by URL) search engine that supported one of the PRISM context sets registered above could accept CQL (Contextual Query Language) queries such as the following:", "content": "We just registered in the SRU (Search and Retrieve by URL) search registry the following components:\nContext Sets\nPRISM Context Set version 2.0 PRISM Context Set version 2.1 Schemas\nPRISM Aggregator Message Record Schema Version 2.0 PRISM Aggregator Message Record Schema Version 2.1 This means that an SRU (Search and Retrieve by URL) search engine that supported one of the PRISM context sets registered above could accept CQL (Contextual Query Language) queries such as the following:\nprism.doi = \u0026ldquo;10.1038/nature05398\u0026rdquo; prism.publicationName = \u0026ldquo;Nature\u0026rdquo; and prism.volume = \u0026ldquo;444\u0026rdquo; and prism.number = \u0026ldquo;7119\u0026rdquo; and prism.startingPage = \u0026ldquo;E9\u0026rdquo; dc.identifier = \u0026ldquo;doi:10.1038/nature05398\u0026rdquo; dc.creator = \u0026ldquo;Jones-Smith\u0026rdquo; and prism.publicationName = \u0026ldquo;Nature\u0026rdquo; and prism.publicationDate \u0026gt; \u0026ldquo;2006-01-01\u0026rdquo; dc.title any \u0026ldquo;fractal pollock\u0026rdquo; and prism.publicationName = \u0026ldquo;Nature\u0026rdquo; sortBy prism.publicationDate/sort.descending \u0026ldquo;fractal anlysis\u0026rdquo; and prism.publicationDate within \u0026ldquo;2005-01-01 2008-12-31\u0026rdquo; sortBy dc.creator/sort.ascending (Note that the quotes are only needed above for the DOI strings which contain a “/” character. Otherwise they are optional in the above examples.)\nAny query such as one of the above (here #1) could be sent to the server on a querystring like so:\n?version=1.1\u0026amp;operation=searchRetrieve\u0026amp;query=prism.doi=%2210.1038/nature05398%22\nand if the server were also equipped to respond with PAM (PRISM Aggregator Message) format for result records, a response might look like this:\nPAM was discussed here earlier.\nSuch a structured response would provide the metadata elements for applications to build various interfaces into the original article:\nWe think that these PRISM components (context sets and schemas) will be useful for structured search of scholarly publications.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oai-ore-workshop-slides/", "title": "OAI-ORE: Workshop Slides", "subtitle":"", "rank": 1, "lastmod": "2009-05-26", "lastmod_ts": 1243296000, "section": "Blog", "tags": [], "description": "An Overview of the OAI Object Reuse and Exchange Interoperability Framework\nView more Microsoft Word documents from hvdsomp. 
This is a very slick presentation by Herbert Van de Sompel on OAI-ORE which he’s due to give today for a workshop at the INFORUM 2009 15th Conference on Professional Information Resources in Prague. It’s on the long side at 167 slides but even if you just flip through or sample it selectively you’ll be bound to come away with something.", "content": " An Overview of the OAI Object Reuse and Exchange Interoperability Framework\nView more Microsoft Word documents from hvdsomp. This is a very slick presentation by Herbert Van de Sompel on OAI-ORE which he’s due to give today for a workshop at the INFORUM 2009 15th Conference on Professional Information Resources in Prague. It’s on the long side at 167 slides but even if you just flip through or sample it selectively you’ll be bound to come away with something.\nDescribing aggregations of resources is a subject that really has to be of interest to Crossref publishers.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prism-aggregator-message/", "title": "PRISM Aggregator Message", "subtitle":"", "rank": 1, "lastmod": "2009-05-08", "lastmod_ts": 1241740800, "section": "Blog", "tags": [], "description": "The new OAI-PMH interface to Nature.com sports one particular novelty which may well be of interest here: it makes use of the PRISM Aggregator Message. (For an announcement of this service see the post on our web publishing blog Nascent.)\nAs a protocol for the harvesting of metadata records within a digital repository, OAI-PMH records may be expressed in a variety of different metadata formats. For reasons of interoperability a base metadata format (‘Dublin Core’) is mandated for all OAI-PMH implementations. The expectation is that this base format would be augmented by community-specific vocabularies.\nOur natural inclination was to mirror the article descriptions which we already circulate in our RSS feeds and within our HTML pages (as META tags) and PDF files (as XMP packets). In these cases we have used open data models (e.g. RDF) with simple properties cherry-picked from the DC and PRISM namespaces. But OAI-PMH has a special ‘gotcha’ in this regard: any metadata format must allow for W3C XML Schema validation. That is, the properties need to be constrained by an XSD data model. Enter PRISM Aggregator Message (PAM).\n(Continues)\n", "content": "The new OAI-PMH interface to Nature.com sports one particular novelty which may well be of interest here: it makes use of the PRISM Aggregator Message. (For an announcement of this service see the post on our web publishing blog Nascent.)\nAs a protocol for the harvesting of metadata records within a digital repository, OAI-PMH records may be expressed in a variety of different metadata formats. For reasons of interoperability a base metadata format (‘Dublin Core’) is mandated for all OAI-PMH implementations. The expectation is that this base format would be augmented by community-specific vocabularies.\nOur natural inclination was to mirror the article descriptions which we already circulate in our RSS feeds and within our HTML pages (as META tags) and PDF files (as XMP packets). In these cases we have used open data models (e.g. RDF) with simple properties cherry-picked from the DC and PRISM namespaces. But OAI-PMH has a special ‘gotcha’ in this regard: any metadata format must allow for W3C XML Schema validation. That is, the properties need to be constrained by an XSD data model. 
Enter PRISM Aggregator Message (PAM).\n(Continues)\nFor the longest time I must confess I did not ‘get’ what PAM was about. PRISM was clearly a metadata vocabulary and yet with PAM there was all this wrangling with content, which as an academic publisher we frankly had no interest in as we already had our own journal article DTD and for interop we were beginning to look at NLM DTD. And then it dawned on me (albeit slowly) that the PAM DTD is the equivalent to NLM DTD but for trade magazine publishing, where there might not be such a strong practice of XML. And since the release of PRISM 2.0 (February 2008) there was now also an W3C XML Schema defined for PAM. (Note that the latest revision of PRISM 2.1 is about to be published, although the changes there do not have any bearing on this implementation.)\nSo, PAM defines PRISM elements to be used with XML content markup. Examining further reveals that within a PAM message there are one or more articles with metadata packaged into a head section, and content (if present) in a body section.\nSection 4.3 in the PAM 2.0 specification lists the allowable head elements by logical grouping, 11 in all: key elements, title, creative origin, publication, publication date, additional article ID, positional, topic, length, related content, rights \u0026amp; usage. Note that not all PRISM elements are supported; in fact only 43 of the 57 PRISM 2.0 elements are supported. Among the missing are ‘prism:endingPage‘. Also only 7 of the 15 DC elements are supported. Nevertheless we found that the bulk of the article descriptions could easily be accommodated within the PAM format. And because this is W3C XML Schema constrained there is an element ordering prescribed, and hence there is an interleaving of DC and PRISM elements.\nThe Nature.com OAI-PMH service has two access points:\nUser interface: http://0-www-nature-com.libus.csd.mu.edu/oai Service endpoint: http://0-www-nature-com.libus.csd.mu.edu/oai/request So, to work an example, if we want to get the record for doi:10.1038/nature01234 (which has an OAI-PMH identifier of oai:nature.com:10.1038/nature01234) we could use this call to get the description in PAM format:\nhttp://www.nature.com/oai/request?verb=GetRecord\u0026#038;identifier=10.1038/nature01234\u0026#038;metadataPrefix=pam\n(Note that as a convenience for the user we also allow a DOI to be used directly in place of the full OAI-PMH identifier as there is a one-to-one correspondence between the two within our repository. Simplifies cut and paste operations.)\nThis returns the following properties (shown in document order and by PAM logical grouping):\nWith PAM we are thus able to replicate in OAI-PMH the same journal article descriptions that we are currently disseminating through other service/content channels.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossrefs-openurl-query-interface/", "title": "Crossref’s OpenURL query interface", "subtitle":"", "rank": 1, "lastmod": "2009-05-06", "lastmod_ts": 1241568000, "section": "Blog", "tags": [], "description": "Over the past two weeks we’ve focused on our OpenURL query interface with the goal being to improve its reliability. I’d like to mention some things we’ve done.\nWe now require an OpenURL account to use this interface (see the registration page) . 
This account is still free, there are no fixed usage limits, and the terms of use have been greatly simplified.\nResources have been re-arranged dedicating more horse-power to the OpenURL function.", "content": "Over the past two weeks we’ve focused on our OpenURL query interface with the goal being to improve its reliability. I’d like to mention some things we’ve done.\nWe now require an OpenURL account to use this interface (see the registration page) . This account is still free, there are no fixed usage limits, and the terms of use have been greatly simplified.\nResources have been re-arranged dedicating more horse-power to the OpenURL function.\nThe OpenURL function is now in our advanced monitoring function which means some lucky staff member will be getting phone calls at 3AM (me included!).\nI should note that #1 has already reduced inappropriate usage. This also is not the end of planned changes. Crossref has undertaken a major rewrite of parts of our system and this will include the OpenURL interface.\nChuck\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oclc-defines-requirements-for-a-cooperative-identities-hub/", "title": "OCLC defines requirements for a “Cooperative Identities Hub”", "subtitle":"", "rank": 1, "lastmod": "2009-05-01", "lastmod_ts": 1241136000, "section": "Blog", "tags": [], "description": "OCLC has published a report (PDF) identifying some requirements for what they call a “Cooperative Identities Hub”. A quick glance through it seems to show that the use cases focus on what we are calling the “Knowledge Discovery” use cases. As I mentioned in my interview with Martin Fenner, there is also a category of “authentication” use cases that I think needs to be addressed by a contributor identifier system. Still, this is a good report that highlights many of the complexities that an identifier system needs to address.", "content": "OCLC has published a report (PDF) identifying some requirements for what they call a “Cooperative Identities Hub”. A quick glance through it seems to show that the use cases focus on what we are calling the “Knowledge Discovery” use cases. As I mentioned in my interview with Martin Fenner, there is also a category of “authentication” use cases that I think needs to be addressed by a contributor identifier system. Still, this is a good report that highlights many of the complexities that an identifier system needs to address.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/what-do-people-want-from-an-author-identifier/", "title": "What do people want from an author identifier?", "subtitle":"", "rank": 1, "lastmod": "2009-04-27", "lastmod_ts": 1240790400, "section": "Blog", "tags": [], "description": "Martin Fenner continues his interest in the subject of author identifiers. He recently posted an online poll asking people some specific questions about how they would like to see an author identifier implemented.*\nThe results of the poll are in and, though the sample was very small, the results are interesting. The responses are both gratifying -there seems to be a general belief that Crossref has a roll to play here- and perplexing -most think the identifier needs to identify other “contributors” to the scholarly communications process- yet there seems to be a preference for the moniker “digital author identifier”.", "content": "Martin Fenner continues his interest in the subject of author identifiers. 
He recently posted an online poll asking people some specific questions about how they would like to see an author identifier implemented.*\nThe results of the poll are in and, though the sample was very small, the results are interesting. The responses are both gratifying -there seems to be a general belief that Crossref has a role to play here- and perplexing -most think the identifier needs to identify other “contributors” to the scholarly communications process- yet there seems to be a preference for the moniker “digital author identifier”. This latter preference is certainly a surprise to us as we had been focusing our efforts on identifying analog authors. The only “digital authors” I know of are this one at MIT and possibly this one at Aberystwyth University. 😉\nAnyway, there are some additional reactions to Martin’s poll on FriendFeed.\nFinally, I should have blogged about this earlier, but the March issue of Science included a summary of the initiatives and discussions surrounding the creation of an industry “author identifier” in an article titled “Are You Ready to Become a Number” (http://0-dx-doi-org.libus.csd.mu.edu/10.1126/science.323.5922.1662).\nIn pointing people at this, I feel like I must make a clarification to the article. In short, I don’t think any of our members would “force” anybody to use an author identifier whether it came from Crossref or from anybody else. Though it is likely that in the interview I used the terms “carrot” and “stick”, in truth publishers would, instead of “a stick”, at most wield a Nerf bat. Having said that, the essential point remains- even if most major publishers strongly encouraged all of their authors to use the system, it would take several years before the system had a critical mass of data.\n*Note that I deliberately didn’t point CrossTech readers at this poll as it was being conducted because I thought doing so might introduce a Crossref bias.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/introductory-signals/", "title": "Introductory Signals", "subtitle":"", "rank": 1, "lastmod": "2009-03-23", "lastmod_ts": 1237766400, "section": "Blog", "tags": [], "description": "So while doing some background reading today I realized that legal citations already widely support a form of “citation typing” in the form of “Introductory Signals“. The 10 introductory signals break down as follows…\nIn support of an argument:\n1) [no signal]. (NB that, apparently, this is increasingly deprecated.)\n2) accord;\n3) see;\n4) see also;\n5) cf.;\nFor Comparisons:\n6) compare … with …;\nFor contradiction:\n7) but see;", "content": "So while doing some background reading today I realized that legal citations already widely support a form of “citation typing” in the form of “Introductory Signals“. The 10 introductory signals break down as follows…\nIn support of an argument:\n1) [no signal]. 
(NB that, apparently, this is increasingly deprecated.)\n2) accord;\n3) see;\n4) see also;\n5) cf.;\nFor Comparisons:\n6) compare … with …;\nFor contradiction:\n7) but see;\n8) but cf.;\nFor background:\n9) see generally;\nAnd for examples:\n10) e.g.\nClever lawyers.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/legal-citations/", "title": "Legal Citations", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/citation-typing-ontology/", "title": "Citation Typing Ontology", "subtitle":"", "rank": 1, "lastmod": "2009-03-20", "lastmod_ts": 1237507200, "section": "Blog", "tags": [], "description": "I was happy to read David Shotton’s recent Learned Publishing article, Semantic Publishing: The Coming Revolution in scientific journal publishing, and see that he and his team have drafted a Citation Typing Ontology.*\nAnybody who has seen me speak at conferences knows that I often like to proselytize about the concept of the “typed link”, a notion that hypertext pioneer, Randy Trigg, discussed extensively in his 1983 Ph.D. thesis. Basically, Trigg points out something that should be fairly obvious- a citation (i.", "content": "I was happy to read David Shotton’s recent Learned Publishing article, Semantic Publishing: The Coming Revolution in scientific journal publishing, and see that he and his team have drafted a Citation Typing Ontology.*\nAnybody who has seen me speak at conferences knows that I often like to proselytize about the concept of the “typed link”, a notion that hypertext pioneer, Randy Trigg, discussed extensively in his 1983 Ph.D. thesis. Basically, Trigg points out something that should be fairly obvious- a citation (i.e. “a link”) is not always a “vote” in favor of the thing being cited.\nIn fact, there are all sorts of reasons that an author might want to cite something. They might be elaborating on the item cited, they might be critiquing the item cited, they might even be trying to refute the item cited (For an exhaustive and entertaining survey of the use and abuse of citations in the humanities, Anthony Grafton‘s The Footnote: A Curious History is a rich source of examples.)\nUnfortunately, the naive assumption that a citation is tantamount to a vote of confidence has become enshrined in everything from the way in which we measure scholarly reputation, to the way in which we fund universities and the way in which search engines rank their results. The distorting effect of this assumption is profound. If nothing else, it leads to a perverse situation in which people will often discuss books, articles, and blog postings that they disagree with without actually citing the relevant content, just so that they can avoid inadvertently conferring “wuffie” on the item being discussed. This can’t be right.\nHaving said that, there has been a half-hearted attempt to introduce a gross level of link typology with the introduction of the “nofollow” link attribute- an initiative started by Google in order to try to address the increasing problem of “Spamdexing”. 
But this is a pretty ham-fisted form of link typing- particularly in the way it is implemented by the Wikipedia where Crossref DOI links to formally published scholarly literature have a “nofollow” attribute attached to them but, inexplicably, items with a PMID are not so hobbled (view the HTML source of this page, for example). Essentially, this means that the Wikipedia is a black-hole of reputation. That is, it absorbs reputation (through links to the Wikipedia), but it doesn’t let reputation back out again. Hell, I feel dirty for even linking to it here ;-).\nAnyway, scholarly publishers should certainly read Shotton’s article because it is full of good, and practical ideas about what can be done with today’s technology in order to help us move beyond the “digital incunabula” that the industry is currently churning out. The sample semantic article that Shotton’s team created is inspirational and I particularly encourage people to look at the source file for the ontology-enhanced bibliography which reveals just how much more useful metadata can be associated with the humble citation.\nAnd now I wonder whether CiteULike, Connotea, 2Collab or Zotero will consider adding support for the Citation Typing Ontology into their respective services?\n* Disclosure:\na) I am on the editorial board of Learned Publishing\nb) Crossref has consulted with David Shotton on the subject of semantically enhancing journal articles\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/researcher-identification-primer/", "title": "Researcher Identification Primer", "subtitle":"", "rank": 1, "lastmod": "2009-03-11", "lastmod_ts": 1236729600, "section": "Blog", "tags": [], "description": "Discussions around “contributor Ids” (aka “Author ID”, “Researcher ID”, etc.) seem to be becoming quite popular. In the interview that I pointed to in my last post, I mentioned that Crossref has been talking with a group of researchers who were very interested in creating some sort of authenticated contributor ID as a mechanism for controlling who gets trusted access to sensitive genome-wide aggregate genotype data.\nWell, I’m delighted to say that said group of researchers (at the GEN2PHEN project) have created a “Researcher Identification Primer” website in which they outline the many use-cases and issues around creating a mechanism for unambiguously identifying and/or authenticating researchers.", "content": "Discussions around “contributor Ids” (aka “Author ID”, “Researcher ID”, etc.) seem to be becoming quite popular. In the interview that I pointed to in my last post, I mentioned that Crossref has been talking with a group of researchers who were very interested in creating some sort of authenticated contributor ID as a mechanism for controlling who gets trusted access to sensitive genome-wide aggregate genotype data.\nWell, I’m delighted to say that said group of researchers (at the GEN2PHEN project) have created a “Researcher Identification Primer” website in which they outline the many use-cases and issues around creating a mechanism for unambiguously identifying and/or authenticating researchers. 
This looks like a great resource and I expect it will serve as a useful focus for further discussion around the issue.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/an-interview-about-author-ids/", "title": "An interview about “Author IDs”", "subtitle":"", "rank": 1, "lastmod": "2009-02-19", "lastmod_ts": 1235001600, "section": "Blog", "tags": [], "description": "Over the past few months there seems to have been a sharp upturn in general interest around implementing an “author identifier” system for the scholarly community. This, in turn, has meant that more people have been getting in touch with us about our nascent “Contributor ID” project. The other day, after seeing my comments in the above thread, Martin Fenner asked if he could interview me about the issue of author identifiers for his blog on Nature Networks, Gobbledygook.", "content": "Over the past few months there seems to have been a sharp upturn in general interest around implementing an “author identifier” system for the scholarly community. This, in turn, has meant that more people have been getting in touch with us about our nascent “Contributor ID” project. The other day, after seeing my comments in the above thread, Martin Fenner asked if he could interview me about the issue of author identifiers for his blog on Nature Networks, Gobbledygook. I agreed and he posted the interview the other day.\nI warn you ahead of time, I did ramble on a bit and the interview is long. There is a lot of stuff at the beginning about the DOI and it might seem off-topic, but I do think that there is a lot that we can learn from our DOI experiences which would apply to any author identifier. Just be thankful I didn’t start talking about the privacy issues that will inevitably arise from any author identifier system. If I had, the interview would have probably gone on for another six pages ;-).\nAnyway, as most of our membership knows, we have a pilot project underway to explore what it would take to launch a “Crossref Contributor ID” system. We still haven’t concluded whether it makes sense for us to do it, but one thing is clear from the recent discussions we’ve had and that is that, if we don’t do it, somebody else almost certainly will.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/real-prism-in-the-rss-wilds/", "title": "Real PRISM in the RSS Wilds", "subtitle":"", "rank": 1, "lastmod": "2009-02-19", "lastmod_ts": 1235001600, "section": "Blog", "tags": [], "description": "Alf Eaton just posted a real nice analysis of ticTOCs RSS feeds. Good to see that almost half of the feeds (46%) are now in RDF and that fully a third (34%) are using PRISM metadata to disclose bibliographic fields.\nThe one downside from a Crossref point of view is that these feeds are still using the old PRISM version (1.2) and not the new version (2.0) which was released a year ago and blogged here.", "content": "Alf Eaton just posted a real nice analysis of ticTOCs RSS feeds. Good to see that almost half of the feeds (46%) are now in RDF and that fully a third (34%) are using PRISM metadata to disclose bibliographic fields.\nThe one downside from a Crossref point of view is that these feeds are still using the old PRISM version (1.2) and not the new version (2.0) which was released a year ago and blogged here. 
That version supports the elements prism:doi for the bare DOI, as well as prism:url for the DOI proxy server URL.\nThere are still some improvements to be made in serving up these feeds (as Alf’s analysis shows for record type), but overall things are looking pretty good. 🙂\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-in-an-iphone-application/", "title": "DOIs in an iPhone application", "subtitle":"", "rank": 1, "lastmod": "2009-02-12", "lastmod_ts": 1234396800, "section": "Blog", "tags": [], "description": "Very cool to see Alexander Griekspoor releasing an iPhone version of his award-winning Papers application. A while ago Alex integrated DOI metadata lookup into the Mac version of Papers and now I can get a silly thrill from seeing Crossref DOIs integrated in an iPhone app. Alex has just posted a preview video of the iPhone application and it includes a cameo appearance by a DOI. Yay.", "content": "Very cool to see Alexander Griekspoor releasing an iPhone version of his award-winning Papers application. A while ago Alex integrated DOI metadata lookup into the Mac version of Papers and now I can get a silly thrill from seeing Crossref DOIs integrated in an iPhone app. Alex has just posted a preview video of the iPhone application and it includes a cameo appearance by a DOI. Yay.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/curie-syntax-1.0/", "title": "CURIE Syntax 1.0", "subtitle":"", "rank": 1, "lastmod": "2009-01-19", "lastmod_ts": 1232323200, "section": "Blog", "tags": [], "description": "The W3C has recently (Jan. 16) released CURIE Syntax 1.0 as a Candidate Recommendation and is inviting implementations.\n(Note that I made a fuller post here on CURIEs and erroneously confused the Editor’s Draft (Oct. 23, ’08) as being a Candidate Recommendation. Well, at least it’s got there now.)", "content": "The W3C has recently (Jan. 16) released CURIE Syntax 1.0 as a Candidate Recommendation and is inviting implementations.\n(Note that I made a fuller post here on CURIEs and erroneously confused the Editor’s Draft (Oct. 23, ’08) as being a Candidate Recommendation. Well, at least it’s got there now.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/standard-inchi-defined/", "title": "Standard InChI Defined", "subtitle":"", "rank": 1, "lastmod": "2009-01-17", "lastmod_ts": 1232150400, "section": "Blog", "tags": [], "description": "IUPAC has just released the final version (1.02) of its InChI software, which generates Standard InChIs and Standard InChIKeys. (InChI is the IUPAC International Chemical Identifier.)\nThe Standard InChI “removes options for properties such as tautomerism and stereoconfiguration”, so that a molecule will always generate the same stable identifier - a unique InChI - which facilitates “interoperability/compatibility between large databases/web searching and information exchange”. Note also that any “shortcomings in Standard InChI may be addressed using non-Standard InChI (currently obtainable using InChI version 1.", "content": "IUPAC has just released the final version (1.02) of its InChI software, which generates Standard InChIs and Standard InChIKeys. 
(InChI is the IUPAC International Chemical Identifier.)\nThe Standard InChI “removes options for properties such as tautomerism and stereoconfiguration”, so that a molecule will always generate the same stable identifier - a unique InChI - which facilitates “interoperability/compatibility between large databases/web searching and information exchange”. Note also that any “shortcomings in Standard InChI may be addressed using non-Standard InChI (currently obtainable using InChI version 1.02beta)”.\nOn a practical level this means that the 27-character length InChIKeys (a hashed form of the InChI), with the following generic form\nAAAAAAAAAAAAAA-BBBBBBBBFV-P\ncan now be readily and reliably generated and will start to be used in search indexing and linking applications.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-library-for-flash/", "title": "XMP Library for Flash", "subtitle":"", "rank": 1, "lastmod": "2009-01-16", "lastmod_ts": 1232064000, "section": "Blog", "tags": [], "description": "Update about new XMP Library from Adobe Labs:\n“The new Adobe XMP Library for ActionScript is now available for download on Adobe Labs. Adobe Extensible Metadata Platform (XMP) is a labeling technology that allows you to embed data about a file, known as metadata, into the file itself. XMP is an open technology based on RDF and RDF/XML. With this new library you can read existing XMP metadata from Flash based file formats via the Adobe Flash Player.", "content": "Update about new XMP Library from Adobe Labs:\n“The new Adobe XMP Library for ActionScript is now available for download on Adobe Labs. Adobe Extensible Metadata Platform (XMP) is a labeling technology that allows you to embed data about a file, known as metadata, into the file itself. XMP is an open technology based on RDF and RDF/XML. With this new library you can read existing XMP metadata from Flash based file formats via the Adobe Flash Player.“\nAny volunteers?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/poorboy-metadata-hack/", "title": "Poorboy Metadata Hack", "subtitle":"", "rank": 1, "lastmod": "2009-01-06", "lastmod_ts": 1231200000, "section": "Blog", "tags": [], "description": "I was playing around recently and ran across this little metadata hack. At first, I thought somebody was doing something new. But no, nothing so forward apparently. (Heh! 🙂\nI was attempting to grab the response headers from an HTTP request on an article page and was using by default the Perl LWP library. For some reason I was getting metadata elements being spewed out as response headers - at least from some of the sites I tested.", "content": "I was playing around recently and ran across this little metadata hack. At first, I thought somebody was doing something new. But no, nothing so forward apparently. (Heh! 🙂\nI was attempting to grab the response headers from an HTTP request on an article page and was using by default the Perl LWP library. For some reason I was getting metadata elements being spewed out as response headers - at least from some of the sites I tested. With some further investigation I tracked this back to LWP itself which parses HTML headers and generates HTTP pseudo-headers using an X-Meta- style header. 
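For anyone not using Perl, the same trick of reading a page's meta elements can be done in a few lines of Python (a rough, standard-library-only sketch, not the LWP mechanism itself):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaCollector(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from an HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if "name" in a and "content" in a:
                self.meta.setdefault(a["name"], []).append(a["content"])

def html_meta(url):
    # Fetch the page (redirects, e.g. from a DOI, are followed automatically)
    # and pull out whatever meta elements the publisher has added.
    parser = MetaCollector()
    with urlopen(url) as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    return parser.meta

# html_meta("https://doi.org/10.1087/095315108X288947") would list the page's
# metadata fields, much as LWP exposes them as X-Meta-* pseudo-headers.
```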
(This can be viewed either as a feature of LWP or a bug as this article bemoans.)\nWhat this means anyway is that I can issue a simple call like this to get the HTML metadata - shown here for doi:10.1087/095315108X288947:\n``\nThis shows a simple (read lazy) means of accessing metadata added as \u0026lt;meta\u0026gt; tags in HTML headers, such as those we added for Nature. (Of course, machine readable metadata is best added using RDFa as noted earlier, but does not preclude also adding in \u0026lt;meta\u0026gt; tags which are also usable with HTML as well as XHTML.)\n(Btw, wouldn’t it be fun if Crossref had a random DOI facility? That would be real handy for testing as well as giving users a feel for what real-life DOIs look like and what lies at the other end of them.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2008/", "title": "2008", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/and-the-doi-is/", "title": "And the DOI is …", "subtitle":"", "rank": 1, "lastmod": "2008-12-22", "lastmod_ts": 1229904000, "section": "Blog", "tags": [], "description": "Once structured metadata is added to a file then retrieving a given metadata element is usually a doddle. For example, for PDFs with embedded XMP one can use Phil Harvey’s excellent Exiftool utility.\nExiftool is a Perl library and application which I’ve blogged about here earlier which is available as a ‘.zip‘ file for Windows (no Perl required) or ‘.dmg‘ for MacOS. Note that Phil maintains this actively and has done so over the last five years.", "content": "Once structured metadata is added to a file then retrieving a given metadata element is usually a doddle. For example, for PDFs with embedded XMP one can use Phil Harvey’s excellent Exiftool utility.\nExiftool is a Perl library and application which I’ve blogged about here earlier which is available as a ‘.zip‘ file for Windows (no Perl required) or ‘.dmg‘ for MacOS. Note that Phil maintains this actively and has done so over the last five years. (And when I say actively I mean just that. I once made the mistake of printing out the change file.)\nIf Perl’s not your thing, then there’s a Ruby wrapper gem (MiniExiftool) to access the Exiftool command in trouper OO fashion. 
Here’s an example Ruby one-liner to get the DOI from a PDF (broken here to meet column width restriction):\n% ruby -rubygems -e 'require \u0026quot;mini_exiftool\u0026quot;;\u0026lt;br /\u0026gt; \u0026amp;nbsp;\u0026amp;nbsp;\u0026amp;nbsp;\u0026amp;nbsp;puts MiniExiftool.new(\u0026quot;test.pdf\u0026quot;)[\u0026quot;doi\u0026quot;]'\u0026lt;br /\u0026gt; 10.1038/nphoton.2008.200\nOf course, that could also have been run against an image, audio or video file with XMP packet.\n(Makes one wonder vaguely about the feasibility of having a Swiss Army knife type of utility that could read any file to get the DOI using the embedded XMP, RDFa, RDF, HTML headers, COiNS, etc. Possibly even as last resort fall back to scanning the raw text - if any.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmas-xmp/", "title": "Xmas XMP", "subtitle":"", "rank": 1, "lastmod": "2008-12-19", "lastmod_ts": 1229644800, "section": "Blog", "tags": [], "description": "Well, as I blogged on our web publishing blog Nascent we just went live with XMP labelling on Nature in yesterday’s double issue. We will be adding XMP to all new issues of Nature as well as rolling out across all our other titles in the next few weeks and months.\nThe screenshots below from Acrobat (File \u0026gt; Properties, CMD-D / CTL-D) show what the user might see both with (bottom-left) and without (top-right) semantic markup.", "content": "Well, as I blogged on our web publishing blog Nascent we just went live with XMP labelling on Nature in yesterday’s double issue. We will be adding XMP to all new issues of Nature as well as rolling out across all our other titles in the next few weeks and months.\nThe screenshots below from Acrobat (File \u0026gt; Properties, CMD-D / CTL-D) show what the user might see both with (bottom-left) and without (top-right) semantic markup.\nAs to the actual contents of the metadata record, see this sample I posted to the semantic web list.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ore-powder-remarks-on-ratings/", "title": "ORE/POWDER: Remarks on Ratings", "subtitle":"", "rank": 1, "lastmod": "2008-12-06", "lastmod_ts": 1228521600, "section": "Blog", "tags": [], "description": "I wanted to make some remarks about the “Ease of use” and “Learn curve” ratings which I gave in the ORE/POWDER comparison table that I blogged about here the other day. It may seem that I came out a little harsh on ORE and a little easy on POWDER. I just wanted to rationalize the justification for calling it that way. (By the way, the revised comparison table includes a qualification to those ratings.)\nMy primary interest was from the perspective of a data provider rather than a data consumer. What does it take to get a resource description document (“resource map”, “description resource” or “sitemap”) ready for publication?\n(Continues)\n", "content": "I wanted to make some remarks about the “Ease of use” and “Learn curve” ratings which I gave in the ORE/POWDER comparison table that I blogged about here the other day. It may seem that I came out a little harsh on ORE and a little easy on POWDER. I just wanted to rationalize the justification for calling it that way. (By the way, the revised comparison table includes a qualification to those ratings.)\nMy primary interest was from the perspective of a data provider rather than a data consumer. 
What does it take to get a resource description document (“resource map”, “description resource” or “sitemap”) ready for publication?\n(Continues)\nTo look at POWDER first, it defines two sets of semantics: an “operational semantics” which is embodied in the simple XML that is intended as the primary publication vehicle, and a “formal semantics” embodied in the RDF/OWL document that would typically be generated by a POWDER processor.\nThe operational semantics (XML) document requires minimal RDF understanding (and arguably none at all): it only requires that URI resources be organized into groups by pattern matching, and that metadata be attached to those groups using groups.\nURI patterns are specified using any of the following XML elements for inclusive patterns:\n\u0026gt; **\u0026lt;includeschemes\u0026gt;**, **\u0026lt;includehosts\u0026gt;**, **\u0026lt;includeexactpaths\u0026gt;**, **\u0026lt;includepathcontains\u0026gt;**, **\u0026lt;includepathstartswith\u0026gt;**, **\u0026lt;includepathendswith\u0026gt;**, **\u0026lt;includeports\u0026gt;**, **\u0026lt;includequerycontains\u0026gt;**, **\u0026lt;includeiripattern\u0026gt;**, **\u0026lt;includeregex\u0026gt;**, **\u0026lt;includeresources\u0026gt;** and their exclusive counterparts\n**\u0026lt;excludeschemes\u0026gt;**, \u0026amp;#8230;\nThese are turned into corresponding regular expressions by a POWDER processor which then emits RDF/OWL classes using those expressions as property restrictions on set membership. But a publisher is not required to understand this transformation nor the formal semantics generated from the simple XML document that was authored.\nNow, as to metadata. Resource group descriptors are either free text (tags) or properties from a published namespace. For example, the property name from a namespace ex: would be added in one of two ways, depending on whether it were a simple literal string (“value”, say) or a resource URI:\nhttp://example.org/value \u0026lt;ex:name rdf:resource=”http://example.org/value“/\u0026gt; While technically this is RDF/XML it hardly qualifies, I think, as requiring any great knowledge of RDF, more a knowledge of XML namespaces alone would be sufficient.\nAnd that’s about it – all that is required for publication of a POWDER “description resource” document. (The guidelines for discovery mechanisms of a POWDER document might also need to be consulted.)\nSo, on that basis I would judge POWDER to be at most “medium” on the “Learn curve”. However, as soon as the mapping to the formal semantics (POWDER-S) using RDF/OWL is considered, then that learn curve rating would automatically swing to “high”.\nNow, ORE on the other hand is a straightforward RDF application. What does make ORE a bit of a challenge are the following two aspects:\n1. concept of named aggregation * abstract data model - no fixed bindings\u0026lt;/ol\u0026gt; Well, the first aspect is what ORE is all about \u0026amp;ndash; its USP \u0026amp;ndash; and what it gives us beyond the simpler POWDER approach of merely describing resource bundles. Still, it’s a concept that needs to be grokked. All too easy to take it for granted. It is the second aspect that may make ORE appear to be \u0026amp;#8220;difficult\u0026amp;#8221;. It does not prescribe a single binding or set of bindings but provides an abstract data model. That means that a prospective user must endeavour to understand something of the model before deploying. But enough of that. Because who really reads instruction manuals anyway? 
So to deploy there are user guides available for one standalone document format (RDF/XML), and two carrier document formats (Atom, RDFa). That means right there that the publisher must either embrace RDF/XML or learn how to weave it into an existing document markup. (By the way, it should be remarked that there is an excellent [primer][3] available - as there is also for POWDER - and user guides for each of the formats.) So that I think warrants the \u0026amp;#8220;high\u0026amp;#8221; rating for ORE on the learn curve, and the corresponding \u0026amp;#8220;low\u0026amp;#8221; ease of use. But that is not to say that the two initiatives are in any competition and that one should be favoured over the other. They serve different purposes. Any yet they may also have compatibilities as the previous [mapping of ORE in POWDER][4] attempts to show. I’ll leave that task for other commentators. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/resource-maps-encoded-in-powder/", "title": "Resource Maps Encoded in POWDER", "subtitle":"", "rank": 1, "lastmod": "2008-12-05", "lastmod_ts": 1228435200, "section": "Blog", "tags": [], "description": "Following right on from yesterday’s post on ORE and POWDER, I’ve attempted to map the worked examples in the ORE User Guide for RDF/XML (specifically Sect. 3) to POWDER to show that POWDER can be used to model ORE, see\nResource Maps Encoded in POWDER\n(A full explanation for each example is given in the RDF/XML Guide, Sect. 3 which should be consulted.)\nThis could just all be sheer doolally or might possibly turn out to have a modicum of instructional value – I don’t know.", "content": "Following right on from yesterday’s post on ORE and POWDER, I’ve attempted to map the worked examples in the ORE User Guide for RDF/XML (specifically Sect. 3) to POWDER to show that POWDER can be used to model ORE, see\nResource Maps Encoded in POWDER\n(A full explanation for each example is given in the RDF/XML Guide, Sect. 3 which should be consulted.)\nThis could just all be sheer doolally or might possibly turn out to have a modicum of instructional value – I don’t know. (It would be real good to get some feedback here.) There are, however, a couple points to note in mapping ORE to POWDER:\nThe POWDER form is actually more long-winded because it splits the RDF triples into subject and predicate/object divisions, with the first listed within an iriset and the second within a descriptorset. The net effect, however, may be somewhat cleaner since POWDER uses a simple XML format rather than RDF/XML. POWDER only supports simple object types (literals or resources) so the blank nodes in the RDF/XML examples for dcterms:creator cannot be mapped as such. I have chosen here to use either foaf:name or foaf:page value. Likewise, and as far as I am aware, POWDER does not support datatyping but I could be wrong on this. I have thus dropped the datatypes on dcterms:created and dcterms:modified. This is a fairly naïve mapping. POWDER’s real strength comes in defining groups of resources with its powerful pattern matching capabilities, whereas here I am using a named single resource in each iriset through the includeresource element. I think, though, this does show how the abstract ORE data model can be serialized in yet another format. 
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/describing-resource-sets-ore-vs-powder/", "title": "Describing Resource Sets: ORE vs POWDER", "subtitle":"", "rank": 1, "lastmod": "2008-12-04", "lastmod_ts": 1228348800, "section": "Blog", "tags": [], "description": "I’ve been reading up on POWDER recently (the W3C Protocol for Web Description Resources) which is currently in last call status (with comments due in tomorrow). This is an effort to describe groups of Web resources and as such has clear similarities to the Open Archives Initiative ORE data model, which has been blogged about here before.\nIn an attempt to better understand the similarities (and differences) between the two data models, I’ve put up the table which directly compares the two heavyweight contendors OAI-ORE and POWDER and also (unfairly) places them alongside the featherweight Sitemaps Protocol for reference.", "content": "I’ve been reading up on POWDER recently (the W3C Protocol for Web Description Resources) which is currently in last call status (with comments due in tomorrow). This is an effort to describe groups of Web resources and as such has clear similarities to the Open Archives Initiative ORE data model, which has been blogged about here before.\nIn an attempt to better understand the similarities (and differences) between the two data models, I’ve put up the table which directly compares the two heavyweight contendors OAI-ORE and POWDER and also (unfairly) places them alongside the featherweight Sitemaps Protocol for reference.\nThis is very much a draft document and I will aim to update the table based on my own further reading and on any feedback that I may get (contributions gratefully received). I’m all too aware that my understanding of the respective data models is painfully limited and I, for one, hope to profit through this exercise. There will be certainly errors which I will aim to fix as soon as I get wind of them. 🙂\nBy the way, the ORE work especially is of interest to Crossref members and has obvious synergies with the multiple resolution potential that DOI has long promised but not quite delivered on.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/curies-a-cure-for-uris/", "title": "CURIEs - A Cure for URIs", "subtitle":"", "rank": 1, "lastmod": "2008-12-03", "lastmod_ts": 1228262400, "section": "Blog", "tags": [], "description": "A quick straw poll of a few folks at London Online yesterday revealed that they had not heard of CURIE’s. And there was I thinking that most everybody must have heard of them by now. 🙂 So anyway here’s something brief by way of explanation.\nCURIE stands for Compact URI and does the signal job or rendering long and difficult to read URI strings into something more manageable. (URIs do have the particular gift of being “human transcribable” but in practice their length and the actual characters used in the URI strings tend to muddy things for the reader.) So given that the Web is built upon a bedrock of URIs, anything that then makes URIs easier to handle is going to be an important contributor to our overall ease of interaction with the Web.\n(Continues)\n", "content": "A quick straw poll of a few folks at London Online yesterday revealed that they had not heard of CURIE’s. And there was I thinking that most everybody must have heard of them by now. 
🙂 So anyway here’s something brief by way of explanation.\nCURIE stands for Compact URI and does the signal job or rendering long and difficult to read URI strings into something more manageable. (URIs do have the particular gift of being “human transcribable” but in practice their length and the actual characters used in the URI strings tend to muddy things for the reader.) So given that the Web is built upon a bedrock of URIs, anything that then makes URIs easier to handle is going to be an important contributor to our overall ease of interaction with the Web.\n(Continues)\nTen years back (in February 1998) when XML was first introduced it presented a flat naming system for document markup. For purposes of modularity and markup reuse the XML Namespaces specification released the following year allowed for element and attribute names to be replaced by expanded names in which the hitherto simple names would be replaced by name pairs consisting of a namespace name and a local name. The use of URIs for the namespace name thus opened the doors to assigning globally unique names for XML element/attribute names. As a practical point (both to keep the names short and to deal with URI characters), the notion of a qualified name (or QName) was introduced, whereby the local name would be qualified by a prefix which stood in for the namespace name.\nThis was such a successful device that over time it was applied to URIs in general as a mechanism for abbreviation. Especially in RDF/XML schema elements were referenced by QName. And the practice has spilled over into non-XML syntaxes (e.g. the N3 and Turtle RDF grammars which use a “@prefix” directive). But there were problems since the device was grounded in XML the local names were constrained by allowable characters for XML elements and attributes (e.g. names cannot start with a digit character), as well as there being no specification for applying this same device to non-XML grammars.\nCURIE is an initiative to generalize this notion of qualified names for URIs beyond the immediate XML context for naming elements and attributes (which would also allow their use in attribute values), to a more general use in applications beyond XML. The development of CURIE is based upon work done in the definition of XHTML2, and upon work done by the RDF-in-HTML Task Force, a joint task force of the Semantic Web Best Practices and Deployment Working Group and XHTML 2 Working Group. The Editor’s draft CURIE Syntax 1.0 is currently a W3C Candidate Recommendation which is receiving comments through Jan 15, 2009, at which time it is intended to put it forward as a W3C Proposed Recommendation. Meantime, though, the new W3C Recommendation RDFa Syntax in XHTML (published Oct 14, 2008) has a normative section on CURIEs (see Sect. 7).\nSo, what do CURIEs look like? Taking a simple RDFa example for DOI we might have a fragment such as:\n\u0026lt;div xmlns:doi=\"https://0-doi-org.libus.csd.mu.edu/\" xmlns:dcterms=\"http://0-purl-org.libus.csd.mu.edu/dc/terms/\"\u0026gt; \u0026nbsp;\u0026nbsp;\u0026lt;div about=\"doi:10.1038/nature07184\"\u0026gt; \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026lt;span property=\"dcterms:hasPart\" resource=\"[doi:10.1038/nature07184]\"/\u0026gt; \u0026nbsp;\u0026nbsp;\u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; This would be processed by an RDFa processor to yield the RDF triple (in N3/Turtle):\n\u0026lt;doi:10.1038/nature07184\u0026gt; dcterms:hasPart \u0026lt;https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184\u0026gt; . 
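(Before unpacking that triple, here is a small hypothetical helper, purely illustrative and not part of any RDFa processor, showing how the prefix map plus the bracket rule for "about"/"resource" values produces the two different readings:)

```
// Hypothetical helper: only a bracketed "safe CURIE" is expanded through the
// prefix map; an unbracketed value is read as a full URI and left as-is.
const prefixes = {
  doi: "https://0-doi-org.libus.csd.mu.edu/",
  dcterms: "http://0-purl-org.libus.csd.mu.edu/dc/terms/",
};

function expandAboutOrResource(value) {
  const safe = value.startsWith("[") && value.endsWith("]");
  if (!safe) return value; // plain URI, e.g. "doi:10.1038/nature07184"
  const curie = value.slice(1, -1);
  const i = curie.indexOf(":");
  const prefix = curie.slice(0, i);
  return (prefixes[prefix] || prefix + ":") + curie.slice(i + 1);
}

console.log(expandAboutOrResource("doi:10.1038/nature07184"));
// -> "doi:10.1038/nature07184" (read as a full URI)
console.log(expandAboutOrResource("[doi:10.1038/nature07184]"));
// -> "https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184"
```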
This triple (or fact) says that the resource identified by DOI 10.1038/nature07184 has as a component part (cf. DCTERMS vocabulary) the resource identified by https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184. (The abstract work identified by the DOI has as a component part the splash page identified by the proxy URL.)\nOK, so what’s going on? The “property” attribute takes a CURIE as value where the prefix “dcterms” is standing in for the XML namespace URI. The “about” and “resource” attributes both take a URI or CURIE as value, but because of any potential confusion a (so-called) “Safe CURIE” must be used which is a CURIE wrapped in brackets. The above example does not use brackets for the “about” attribute and therefore an RDFa processor would read this as being a full URI, i.e. \u0026amp;lt’doi:10.1038/nature07184\u0026gt;, whereas it does use brackets for the “resource” attribute and therefore this would be read as being a (Safe) CURIE, i.e. https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184.\nWe can turn this around as follows:\n\u0026lt;div xmlns:doi=\"https://0-doi-org.libus.csd.mu.edu/\" xmlns:dcterms=\"http://0-purl-org.libus.csd.mu.edu/dc/terms/\"\u0026gt; \u0026nbsp;\u0026nbsp;\u0026lt;div about=\"[doi:10.1038/nature07184]\"\u0026gt; \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026lt;span property=\"dcterms:isPartOf\" resource=\"doi:10.1038/nature07184\"/\u0026gt; \u0026nbsp;\u0026nbsp;\u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; This would be processed by an RDFa processor to yield the RDF triple (in N3/Turtle):\n\u0026lt;https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184\u0026gt; dcterms:isPartOf \u0026lt;doi:10.1038/nature07184\u0026gt; . This triple (or fact) says that the resource identified by https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184 is a component part (cf. DCTERMS vocabulary) of the resource identified by DOI 10.1038/nature07184. (The splash page identified by the proxy URL is a component part of the abstract work identified by the DOI.)\nSo what do CURIEs give us? Nothing more than a generic means to be able to make human-friendly statements such as\n\u0026lt;doi:10.1038/nature07184\u0026gt; dcterms:hasPart doi:10.1038/nature07184 . instead of having to spell it out in full triples form using long-winded URIs:\n\u0026lt;doi:10.1038/nature07184\u0026gt; \u0026nbsp;\u0026nbsp;\u0026lt;http://http://purl.org/dc/terms/hasPart\u0026gt; \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026lt;https://0-doi-org.libus.csd.mu.edu/10.1038/nature07184\u0026gt; .", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ubiquity-commands-for-crossref-services/", "title": "Ubiquity commands for Crossref services", "subtitle":"", "rank": 1, "lastmod": "2008-12-03", "lastmod_ts": 1228262400, "section": "Blog", "tags": [], "description": "So the other day Noel O’Boyle made me feel guilty when he pinged me and asked about the possibility using one of the Crossref APIs for creating a Ubiquity extension. You see, I had played with the idea myself and had not gotten around to doing much about it. This seemed inexcusable- particularly given how easy it is to build such extensions using the API we developed for the WordPress and Moveable Type plugins that we announced earlier in the year.", "content": "So the other day Noel O’Boyle made me feel guilty when he pinged me and asked about the possibility using one of the Crossref APIs for creating a Ubiquity extension. 
You see, I had played with the idea myself and had not gotten around to doing much about it. This seemed inexcusable- particularly given how easy it is to build such extensions using the API we developed for the WordPress and Moveable Type plugins that we announced earlier in the year. So I dug up my half-finished code, cleaned it up a bit and have posted the results.\nNote that the back-end that supports the plugins has been moved to more stable machines and the index is now being automatically updated with journal and conference proceeding deposits (sorry, no books yet).\nAlso note that we are hoping that others will look at the code for the WordPress, Moveable Type and Ubiquity plugins and create more such extensions. If you do, please let us know about them at citation-plugin@crossref.org.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rss-good-practice-guidelines/", "title": "RSS Good Practice Guidelines", "subtitle":"", "rank": 1, "lastmod": "2008-11-24", "lastmod_ts": 1227484800, "section": "Blog", "tags": [], "description": "I just wanted to flag up here Lisa Rogers’ recent review article on RSS in FUMSI (the online magazine for information professionals published by Free Pint Ltd)\nRSS and Scholarly Journal Tables of Contents: the ticTOCs Project, and Good Practice Guidelines for Publishers\nEspecially of interest is the diagram in Fig. 2 which breaks out the metadata elements that might be encountered in a rich web feed. Worthwhile pointing out that this reflects current practice and that under the item elements one would soon hope to see publishers routinely adding in prism:doi (with the bare DOI as value) and prism:url (with DOI target URL as value) from the PRISM 2.", "content": "I just wanted to flag up here Lisa Rogers’ recent review article on RSS in FUMSI (the online magazine for information professionals published by Free Pint Ltd)\nRSS and Scholarly Journal Tables of Contents: the ticTOCs Project, and Good Practice Guidelines for Publishers\nEspecially of interest is the diagram in Fig. 2 which breaks out the metadata elements that might be encountered in a rich web feed. Worthwhile pointing out that this reflects current practice and that under the item elements one would soon hope to see publishers routinely adding in prism:doi (with the bare DOI as value) and prism:url (with DOI target URL as value) from the PRISM 2.0 vocabulary published earlier this year. Publishers should also be aware of the new PRISM Usage Rights vocabulary which is expected to be published some time in the new year.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/machine-readable-are-we-there-yet/", "title": "Machine Readable: Are We There Yet?", "subtitle":"", "rank": 1, "lastmod": "2008-11-19", "lastmod_ts": 1227052800, "section": "Blog", "tags": [], "description": "The guidelines for Crossref publishers (“DOI Name Information and Guidelines” - [PDF, 210K][1]) has this to say in “Sect. 6.3 The response page” regarding the response page for a DOI:\n“A minimal response page must contain a full bibliographic citation displayed to the user. A response page without bibliographic information should never be presented to a user.”\nwhich would seem to be all fine and dandy. 
But if that user is a machine (or an agent acting for a user) they’ll likely be out of luck as the metadata in the bibliographic citation is generally targeted at human users.\nSo here’s a quick and dirty implementation of what a machine readable page could look like using RDFa. (The demo uses Jeni Tennison’s wonderful [rdfQuery][2] plugin which I [blogged][3] about earlier.)\nClicking the DOI link below will bring up in a sub-window a bibliographic citation which might be found in a typical DOI repsonse page. If you now click the “Read Me” link you should see an alert message which presents the bibliographic metadata as a complete RDF document (in a simple N3 – or Notation3 – format). This document is assembled on the fly by rdfQuery using the RDFa markup embedded in the page.\nSee the “View Source” link to list the actual XHTML markup and the RDFa properties which have been added. And note also that some of the properties are partially “hidden” to the human reader, e.g. a publication date is given in year form only whereas the machine record has the date in full, and some of the properties are fully “hidden”: print and electronic ISSNs, issue number, ending page, etc.\n(Continues below.)\n", "content": "The guidelines for Crossref publishers (“DOI Name Information and Guidelines” - [PDF, 210K][1]) has this to say in “Sect. 6.3 The response page” regarding the response page for a DOI:\n“A minimal response page must contain a full bibliographic citation displayed to the user. A response page without bibliographic information should never be presented to a user.”\nwhich would seem to be all fine and dandy. But if that user is a machine (or an agent acting for a user) they’ll likely be out of luck as the metadata in the bibliographic citation is generally targeted at human users.\nSo here’s a quick and dirty implementation of what a machine readable page could look like using RDFa. (The demo uses Jeni Tennison’s wonderful [rdfQuery][2] plugin which I [blogged][3] about earlier.)\nClicking the DOI link below will bring up in a sub-window a bibliographic citation which might be found in a typical DOI repsonse page. If you now click the “Read Me” link you should see an alert message which presents the bibliographic metadata as a complete RDF document (in a simple N3 – or Notation3 – format). This document is assembled on the fly by rdfQuery using the RDFa markup embedded in the page.\nSee the “View Source” link to list the actual XHTML markup and the RDFa properties which have been added. And note also that some of the properties are partially “hidden” to the human reader, e.g. a publication date is given in year form only whereas the machine record has the date in full, and some of the properties are fully “hidden”: print and electronic ISSNs, issue number, ending page, etc.\n(Continues below.)\nSo, what’s new about this? There are already various means of adding metadata to pages using e.g. metadata tags (see [here][4] for an earlier post on this), or COinS objects, or even RDF/XML in comment sections. All of these have their various utilities but are still just early attempts at automation. 
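(For a concrete taste of the RDFa route discussed next, here is a hypothetical browser-side sketch, using plain DOM calls rather than the rdfQuery plugin the demo relies on, for harvesting property/value pairs from a marked-up response page:)

```
// Hypothetical sketch: collect RDFa property/value pairs with plain DOM calls.
// (The demo itself uses the rdfQuery plugin; this just shows the raw idea.)
function harvestRdfa(root = document) {
  const pairs = [];
  for (const el of root.querySelectorAll("[property]")) {
    pairs.push({
      property: el.getAttribute("property"), // e.g. "dcterms:title"
      value: el.getAttribute("content") || el.textContent.trim(),
    });
  }
  return pairs;
}

console.log(harvestRdfa()); // run in the page's console to dump the metadata
```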
What makes this new and compelling is that RDFa allows publishers to embed machine readable metadata that can be read as a complete machine description in RDF using pretty much off-the-shelf tools and that this markup is embedded unobtrusively into the content in the proper context.\nNote that there are some similarities here between embedding an XMP packet (which includes metadata) into an arbitrary binary object, e.g. a PDF file, and embedding RDF into a section of a web page – or perhaps “draping” the RDF over the document markup would be a better term – so that the metadata travels along with the actual content.\nBy the way, the RDFa can be processed to yield valid RDF (as is shown in the demo) and which can also be seen by running the web page through the [RDFa Distiller][5]. (You just need to cut and paste the link of the demo page given above into the Distiller form box.) This will produce RDF in various serializations (N3, XML, Triples) from the RDFa.\nSo, is there really any longer any reason not to have machine readable metadata at the end of the DOI? Are we there yet?\n[1]: Crossref DOI display guidelines [2]: http://code.google.com/p/rdfquery/wiki/RdfPlugin [3]: /blog/rdfquery [4]: /blog/natures-metadata-for-web-pages [5]: http://www.w3.org/2007/08/pyRdfa/\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rdfquery/", "title": "rdfQuery", "subtitle":"", "rank": 1, "lastmod": "2008-11-17", "lastmod_ts": 1226880000, "section": "Blog", "tags": [], "description": "Whaddya know? I was just on the point of blogging about the real nice demo given by Jeni Tennison at last week’s SWIG UK meeting at HP Labs in Bristol of rdfQuery (an RDF plugin for jQuery - the zip file is here). And there today on her blog I see that she has a full writeup on rdfQuery, so I’ll defer to the expert. :~)\nAll I can really add to that is that rdfQuery is a pretty darn cool way to add and manipulate RDFa using jQuery.", "content": "Whaddya know? I was just on the point of blogging about the real nice demo given by Jeni Tennison at last week’s SWIG UK meeting at HP Labs in Bristol of rdfQuery (an RDF plugin for jQuery - the zip file is here). And there today on her blog I see that she has a full writeup on rdfQuery, so I’ll defer to the expert. :~)\nAll I can really add to that is that rdfQuery is a pretty darn cool way to add and manipulate RDFa using jQuery. Does it get any better?\nAnd now that RDFa is a W3C Rec since last month (see Primer and Syntax) it will be interesting to see how Crossref members might begin to deploy it on their pages - especially on DOI landing pages.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prism-2.1/", "title": "PRISM 2.1", "subtitle":"", "rank": 1, "lastmod": "2008-10-24", "lastmod_ts": 1224806400, "section": "Blog", "tags": [], "description": "Yesterday a new PRISM spec (v2.1) was released for public comment. (Comment period lasts up to Dec. 3, ’08.)\nChanges are listed in pages 8 and 9 of the Introduction document. Some highlights:\nNew PRISM Usage Rights namespace Accordingly usage of prism:copyright, prism:embargoDate, and prism:expirationDate no longer recommended New element prism:isbn introduced for book serials An updated mod_prism RSS 1.0 module is available which lists all versions of PRISM specs including the forthcoming v2.", "content": "Yesterday a new PRISM spec (v2.1) was released for public comment. (Comment period lasts up to Dec. 
3, ’08.)\nChanges are listed in pages 8 and 9 of the Introduction document. Some highlights:\nNew PRISM Usage Rights namespace Accordingly usage of prism:copyright, prism:embargoDate, and prism:expirationDate no longer recommended New element prism:isbn introduced for book serials An updated mod_prism RSS 1.0 module is available which lists all versions of PRISM specs including the forthcoming v2.1 spec. I will see about getting this added now to a more permanent location. Current version of PRISM remains at v2.0. Versions 2.0 and 2.1 are especially of interest to users of Crossref because of their support for prism:doi and prism:url and users should consider upgrading their applications, e.g. RSS feeds. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-marches-on/", "title": "XMP Marches On", "subtitle":"", "rank": 1, "lastmod": "2008-10-20", "lastmod_ts": 1224460800, "section": "Blog", "tags": [], "description": "For those who may be interested in the progress of XMP, Adobe’s Gunar Penikis has just announced 1 two new releases of XMP SDKs: XMP Toolkit 4.4 (with support for new file formats), and FileInfo SDK (for customizing CS4 UIs). More importantly, though, may be the new edition of the XMP spec - see here, which is bumped from a modest 112 page document to a 3-parter at 199 pages.", "content": "For those who may be interested in the progress of XMP, Adobe’s Gunar Penikis has just announced 1 two new releases of XMP SDKs: XMP Toolkit 4.4 (with support for new file formats), and FileInfo SDK (for customizing CS4 UIs). More importantly, though, may be the new edition of the XMP spec - see here, which is bumped from a modest 112 page document to a 3-parter at 199 pages.\nLooks to be quite a thorough spec bar one telling particular: there is no version number and no date! The previous version was likewise unnumbered though at least dated as “September 2005”. Btw, I’m not sure of there is any archive of XMP specs being maintained by Adobe. At least, I’m not aware of any page with that information. Perhaps we can refer to our earlier call to have XMP turned over to a standards organization to formalize a public spec.\nUpdate Aug 2022: the announcement blog post mentioned above was previously at blogs.adobe.com/gunar/2008/10/new_xmp_sdks_released.html but is no longer live.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/yer-basic-one-liner/", "title": "Yer Basic One-Liner", "subtitle":"", "rank": 1, "lastmod": "2008-10-14", "lastmod_ts": 1223942400, "section": "Blog", "tags": [], "description": "Here\u0026rsquo;s your basic one-line handle client (all of it) for the browser:\nOpenHandle.Util().getHandleData(\u0026#34;10.1038/nature05826\u0026#34;, function(data) { alert(OpenHandle.Util().helloWorld(data)); }); Can\u0026rsquo;t see how to make that much shorter (bar tossing spaces). 
But here\u0026rsquo;s one attempt (shorter though now it\u0026rsquo;s not strictly a one-liner):\nvar u = OpenHandle.Util(); u.getHandleData(\u0026#34;10.1038/nature05826\u0026#34;, function(_) { alert(u.helloWorld(_)); }); Here I\u0026rsquo;ve used two utility convenience methods from the OpenHandle client library:\nOpenHandle.Util().getHandleData(handle, callback, [server]) OpenHandle.Util().helloWorld(JSON) You will though need to include a couple of libraries: openhandle.", "content": "\nHere\u0026rsquo;s your basic one-line handle client (all of it) for the browser:\nOpenHandle.Util().getHandleData(\u0026#34;10.1038/nature05826\u0026#34;, function(data) { alert(OpenHandle.Util().helloWorld(data)); }); Can\u0026rsquo;t see how to make that much shorter (bar tossing spaces). But here\u0026rsquo;s one attempt (shorter though now it\u0026rsquo;s not strictly a one-liner):\nvar u = OpenHandle.Util(); u.getHandleData(\u0026#34;10.1038/nature05826\u0026#34;, function(_) { alert(u.helloWorld(_)); }); Here I\u0026rsquo;ve used two utility convenience methods from the OpenHandle client library:\nOpenHandle.Util().getHandleData(handle, callback, [server]) OpenHandle.Util().helloWorld(JSON) You will though need to include a couple of libraries: openhandle.js and jquery.js. (Note that the getHandleData() method supplied in the openhandle.js library uses jQuery. Feel free to overwrite that.) A complete working document can thus be implemented as:\n\u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34; src=\u0026#34;http://jqueryjs.googlecode.com/files/jquery-1.2.6.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34; src=\u0026#34;http://openhandle.googlecode.com/files/openhandle-0.2.3.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; jQuery().ready(function() { /* action when body content is loaded */ var u = OpenHandle.Util(); u.getHandleData(\u0026#34;10.1038/nature05826\u0026#34;, function(_) { alert(u.helloWorld(_)); }); }); \u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; Boo! \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Let me know if this doesn\u0026rsquo;t work for you. I\u0026rsquo;ve tried to test this and seems to function OK but sure as the sun rises I ain\u0026rsquo;t no JS ninja.\nOf course, we normally want to do more than just dump the values. So, given that it\u0026rsquo;s pretty straightforward to grab and to manipulate a handle\u0026rsquo;s data values over the Web, how can we put this into practice?\nLet\u0026rsquo;s consider a couple of Crossref use cases.\n(Disclaimer: These examples are not intended as being in any way a replacement for the existing Crossref services but merely show how those services could be implemented on the client side. 
These illustrations will be useful for new bespoke services accessing other data elements that may be registered with the DOI.)\nSingle Resolution\nHere is how one could implement the regular URL redirect service from the client:\nvar handle = \u0026#34;10.1038/nature05826\u0026#34;; var callback = function(json) { var hv = new OpenHandle.Handle(json).getValuesByType(‘URL\u0026#39;)[0]; var url = new OpenHandle.HandleValue(hv).getData(); // alert(\u0026#34;Redirecting to \u0026#34; + url); window.location = url; }; OpenHandle.Util().getHandleData(handle, callback); The getValuesByType(‘URL')[0] call returns the first handle value of type \u0026lsquo;URL\u0026rsquo;. The next line just parses this value as a handle value object and gets the data field, i.e. the URL itself.\nNote here that this client can show the URL that the user will be redirected to. With normal DOI resolution the resolution takes place on the proxy server (dx.doi.org) and the URL is not available to the user - until they are so redirected. In fact, the user may never get to see the URL stored in the handle value if this is the head of a redirect chain.\nTo recap, Crossref DOIs are not resolved by the user to URLs - rather, they invoke a service on the server which returns a content page.\nMultiple Resolution\nLet\u0026rsquo;s now take a look at a case of Crossref multiple resolution. This code uses the getValues() method to return all values:\nvar handle = \u0026#34;10.1130/B25510.1\u0026#34;; var callback = function(json) { var s = \u0026#34;\u0026#34;; var hv = (new OpenHandle.Handle(json)).getValues(); for (var i = 0; i \u0026lt; hv.length; i++) { var v = new OpenHandle.HandleValue(hv[i]); s += v.getType() + \u0026#34;: \u0026#34; + v.getData(); } alert(s); }; OpenHandle.Util().getHandleData(handle, callback); which yields\n700050: 200508231619480000 HS_ADMIN: [object Object] URL.0: http://www.gsajournals.org/gsaonline/?request=get-abstract\u0026amp;doi=10%2E1130%2FB25510%2E1 URL.1: http://bulletin.geoscienceworld.org/cgi/doi/10.1130/B25510.1 CR-LR: \u0026lt;MR\u0026gt;\u0026lt;LI label=\u0026#34;GeoScienceWorld\u0026#34; resource=\u0026#34;URL.1\u0026#34; /\u0026gt;\u0026lt;LI label=\u0026#34;Geological Society of America\u0026#34; resource=\u0026#34;URL.0\u0026#34; /\u0026gt; Oops! Too much information. This includes types such as \u0026lsquo;700050\u0026rsquo; and \u0026lsquo;HS_ADMIN\u0026rsquo; which are used by the Crossref application, and not intended for the end user. Maybe we should just limit it to the URL types with getValuesByType('URL'):\ngetValuesByType(‘URL\u0026#39;): var handle = \u0026#34;10.1130/B25510.1\u0026#34;; var callback = function(json) { var s = \u0026#34;\u0026#34;; var hv = (new OpenHandle.Handle(json)).getValuesByType(‘URL\u0026#39;); for (var i = 0; i \u0026lt; hv.length; i++) { var v = new OpenHandle.HandleValue(hv[i]); s += v.getType() + \u0026#34;: \u0026#34; + v.getData(); } alert(s); }; OpenHandle.Util().getHandleData(handle, callback); which yields\nURL.0: http://www.gsajournals.org/gsaonline/?request=get-abstract\u0026amp;doi=10%2E1130%2FB25510%2E1 URL.1: http://bulletin.geoscienceworld.org/cgi/doi/10.1130/B25510.1 _(By the way, the previous example shows the unregulated state of handle types. 
We have everything but the kitchen sink in this one example:\nsimple types, both well-known (\u0026lsquo;URL\u0026rsquo;) and opaque (\u0026lsquo;700050\u0026rsquo;) compound, or namespaced, types with various hierarchy delimiters: dot (\u0026lsquo;URL.0\u0026rsquo;, \u0026lsquo;URL.1\u0026rsquo;), underscore (\u0026lsquo;HS_ADMIN\u0026rsquo;), and hyphen (\u0026lsquo;CR-LR\u0026rsquo;) Well, they\u0026rsquo;re all in there now so we gotta deal with that, but generally one would probably have preferred well-known types and where namespaces are used the usual dot notation as this is a) familiar to programmers, and b) supported by the handle client library code. The underscore is used in the handle RFCs for system types so that can be viewed as a sort of inline namespacing. Seems to be no obvious excuse for hyphens though.)\nBack to the example we can see that the first URL goes to a Crossref service which we can dispense with since this example is to be run client side. That leaves us with the two actual URL targets. But how to differentiate those for a user choice? That\u0026rsquo;s where that other type \u0026lsquo;CR-LR\u0026rsquo; comes in which provides an XML fragment that relates label to type. There are obviously many ways to support resource labelling - this is just the method used by Crossref.\nLet\u0026rsquo;s parse out the XML fragment for labels and resources and save those in an object keyed on resource:\nvar labels = {}; var hv_ = (new OpenHandle.Handle(json)).getValuesByType(‘CR-LR\u0026#39;)[0]; var v = new OpenHandle.HandleValue(hv_); var xml = v.getData(); var li = xml.match(/\u0026lt;li [^\\\u0026gt;]* \\/\u0026gt;/ig); for (var i = 0; i \u0026lt; li.length; i++) { var a = li[i].match(/label=\\\u0026#34;([^\\\u0026#34;]+)\\\u0026#34; resource=\\\u0026#34;([^\\\u0026#34;]+)\\\u0026#34;/i); labels[a[2]] = a[1]; } Now we\u0026rsquo;ll also need to build a similar object for the URLs:\nvar urls = {}; var hv = (new OpenHandle.Handle(json)).getValuesByType(‘URL\u0026#39;); for (var i = 0; i \u0026lt; hv.length; i++) { var v = new OpenHandle.HandleValue(hv[i]); urls[v.getType()] = v.getData(); } And now with both these objects we can build a set of labelled links as:\nvar s = \u0026#34;\u0026#34;; for (item in labels) { s += \u0026#34;\u0026lt;a href=\\\u0026#34;\u0026#34; + urls[item] + \u0026#34;\\\u0026#34;\u0026gt;\u0026#34; + labels[item] + \u0026#34;\u0026lt;/a\u0026gt;\u0026#34;; } alert(s); to yield\n\u0026lt;a href=\u0026#34;http://bulletin.geoscienceworld.org/cgi/doi/10.1130/B25510.1\u0026#34;\u0026gt;GeoScienceWorld\u0026lt;/a\u0026gt; \u0026lt;a href=\u0026#34;http://www.gsajournals.org/gsaonline/?request=get-abstract\u0026amp;doi=10%2E1130%2FB25510%2E1\u0026#34;gt;Geological Society of America\u0026lt;/a\u0026gt; How to build a page with those labelled links is now a simple exercise. (The actual Crossref service for doi:10.1130/B25510.1 returns a page with labelled links, logos, and metadata pulled from the Crossref database.)\nNext Steps\nThe aim of this work has been to show that getting access to handle data values and manipulating those values in the browser can be fairly straightforward. How additional values get to be added to DOIs (or other handles) and what those values refer to is another matter, but services to access such values do not need to be centralized. 
User-generated services are also a possibility.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/openhandle-javascript-api/", "title": "OpenHandle JavaScript API", "subtitle":"", "rank": 1, "lastmod": "2008-10-08", "lastmod_ts": 1223424000, "section": "Blog", "tags": [], "description": "(Click figure for PDF.)\nI just posted updated versions of the OpenHandle JavaScript Client Library (v0.2.2) and Utilities (v0.2.2) to the project site. Mainly this post is just by way of saying that there’s now a “cheat sheet” for the API (see figure above, click for PDF) which will give some idea of scope. The JavaScript API attempts to reflect the Java Client Library API for Handle data structures, and has in excess of 100 methods.", "content": " (Click figure for PDF.)\nI just posted updated versions of the OpenHandle JavaScript Client Library (v0.2.2) and Utilities (v0.2.2) to the project site. Mainly this post is just by way of saying that there’s now a “cheat sheet” for the API (see figure above, click for PDF) which will give some idea of scope. The JavaScript API attempts to reflect the Java Client Library API for Handle data structures, and has in excess of 100 methods. A change log is available.\nThe new API supports:\nSingle namespace Introspection Unit testing, see here Why is this of interest to Crossref? Well, if DOIs are ever to begin take advantage of their innate Multiple Resolution capabiities there needs to be nimbler means of accessing the data items stored with a DOI. A JavaScript API would allow the data to be manipulated in the browser down by the user and so enable bespoke services. That, at least, is the idea. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/handle-clients-1-2-3/", "title": "Handle Clients #1, #2, #3", "subtitle":"", "rank": 1, "lastmod": "2008-10-01", "lastmod_ts": 1222819200, "section": "Blog", "tags": [], "description": "Three alternate clients for viewing a Handle (or DOI): #1 (sky - text), #2 (black - tuples), #3 (white - cards) - the image above is clickable. When Handle clients become JavaScript-able, one really can have it one’s own way. (The JavaScript library is here, the demo service interface here - the code for setting up a new service interface can be got from the OpenHandle project.)\nNoted: As of February 2023, most of the links in this blog are not longer available.", "content": " Three alternate clients for viewing a Handle (or DOI): #1 (sky - text), #2 (black - tuples), #3 (white - cards) - the image above is clickable. When Handle clients become JavaScript-able, one really can have it one’s own way. (The JavaScript library is here, the demo service interface here - the code for setting up a new service interface can be got from the OpenHandle project.)\nNoted: As of February 2023, most of the links in this blog are not longer available.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-last-mile/", "title": "The Last Mile", "subtitle":"", "rank": 1, "lastmod": "2008-10-01", "lastmod_ts": 1222819200, "section": "Blog", "tags": [], "description": "The figure above (click to enlarge) is probably self-explanatory but a few words may be in order.\nWith no end-to-end delivery of data from the Handle System to the user’s application (browser or reader), getting data out of the Handle System has traditionally meant using the Web (ie. HTTP) as a courier - in effect, this is the “last mile” for Handle data. 
Typically an upstream (Handle) client provides services to the user.", "content": "\nThe figure above (click to enlarge) is probably self-explanatory but a few words may be in order.\nWith no end-to-end delivery of data from the Handle System to the user’s application (browser or reader), getting data out of the Handle System has traditionally meant using the Web (ie. HTTP) as a courier - in effect, this is the “last mile” for Handle data. Typically an upstream (Handle) client provides services to the user. The most well known of these services is the URL redirect service which underpins the Crossref reference linking service. Another hosted service is the web form which displays data stored in the Handle records in a simple HTML table for user browsing. See panel a) in the figure above.\nBy contrast, the OpenHandle proposal aims to move data in the Handle record in structured form (JSON or XML) over the Web for downstream processing - either in the user’s browser or on the desktop. See panel b). Advantages are that the Handle data and data structures are moved closer to the user and the services provided are capable of being better targeted and made more relevant. Data mobility as a whole is much improved. The data are accessible using standard Web description and scripting languages. One might almost say (to paraphrase the well-known Java slogan “write once, run anywhere“) that this is a case of “read once, write anywhere”.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/look-ma-no-plugins/", "title": "Look Ma, No Plugins!", "subtitle":"", "rank": 1, "lastmod": "2008-09-22", "lastmod_ts": 1222041600, "section": "Blog", "tags": [], "description": "var f = function (OpenHandleJson) {\nvar h = new OpenHandle(OpenHandleJson);\nvar hv = h.getHandleValues();\nfor (var i = 0; i \u0026lt; hv.length; i++) { var v = new HandleValue(hv[i]); if (v.hasType(\u0026lsquo;URL\u0026rsquo;)) { print(v.getData()); } else if (v.hasType(\u0026lsquo;HS_ADMIN\u0026rsquo;)) { var a = new AdminRecord(v.getData()); print(a.getAdminPermissionString()) } } }\n\"And that, gentlemen, is how we do that.\" - Apollo 13 Following on from my earlier Client Handle Demo post, this entry is just to mention the availability of a port of (part of) the Handle client library (in Java) to JavaScript: openhandle-0.", "content": "var f = function (OpenHandleJson) {\nvar h = new OpenHandle(OpenHandleJson);\nvar hv = h.getHandleValues();\nfor (var i = 0; i \u0026lt; hv.length; i++) { var v = new HandleValue(hv[i]); if (v.hasType(\u0026lsquo;URL\u0026rsquo;)) { print(v.getData()); } else if (v.hasType(\u0026lsquo;HS_ADMIN\u0026rsquo;)) { var a = new AdminRecord(v.getData()); print(a.getAdminPermissionString()) } } }\n\"And that, gentlemen, is how we do that.\" - Apollo 13 Following on from my earlier Client Handle Demo post, this entry is just to mention the availability of a port of (part of) the Handle client library (in Java) to JavaScript: openhandle-0.1.1.js which is being maintained on the OpenHandle site. The JavaScript module contains methods for three classes: OpenHandle, HandleValue and AdminRecord.\nWhat does all that mean? It means that Handles and their constituent values and value fields can be accessed directly within **any Web browser (using an OpenHandle service) which allows a dynamic Handle client to be generated and presented within a user context. No plugins required. 
The port mirrors the class methods in the standard Java client library for Handle.\nAs a demo of this JavaScript module in action, see this Inspector app for a card index view of Handle (and by implication DOI) records.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-is-hiring-an-rd-software-engineer/", "title": "Crossref is hiring an R&D software engineer", "subtitle":"", "rank": 1, "lastmod": "2008-09-18", "lastmod_ts": 1221696000, "section": "Blog", "tags": [], "description": "Crossref is hiring an R\u0026amp;D software engineer to work in our Oxford office. This is a fantastic opportunity to work on wide range of projects that promise to revolutionize scholarly publishing.", "content": "Crossref is hiring an R\u0026amp;D software engineer to work in our Oxford office. This is a fantastic opportunity to work on wide range of projects that promise to revolutionize scholarly publishing.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/multiple-resolution/", "title": "Multiple Resolution", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/multiple-resolution/", "title": "Multiple Resolution", "subtitle":"", "rank": 1, "lastmod": "2008-08-22", "lastmod_ts": 1219363200, "section": "Blog", "tags": [], "description": "I’ve been meaning for some time to write something about DOI and so-called “Multiple Resolution”, which to be honest is the only technology feature of any real interest as concerns DOI. (DOI as a business and social compact for guaranteeing name persistence of Web resources has been an extraordinarily successful venture in the academic publishing world with more than 32m items registered and maintained over eight years of operation but that may not have required any specialized technology. More a consensus to adopt a single location service in the DOI proxy.)\nMultiple resolution, though. Now, that’s something else. Seems like it should be able to offer a lot of general funkiness and yet it has not been much used up to now. And I have to wonder why.\n(Continues below.)\n", "content": "I’ve been meaning for some time to write something about DOI and so-called “Multiple Resolution”, which to be honest is the only technology feature of any real interest as concerns DOI. (DOI as a business and social compact for guaranteeing name persistence of Web resources has been an extraordinarily successful venture in the academic publishing world with more than 32m items registered and maintained over eight years of operation but that may not have required any specialized technology. More a consensus to adopt a single location service in the DOI proxy.)\nMultiple resolution, though. Now, that’s something else. Seems like it should be able to offer a lot of general funkiness and yet it has not been much used up to now. And I have to wonder why.\n(Continues below.)\nI guess we should start out with some definitions: the DOI Handbook, the (draft) ISO standard, and Crossref:\nDOI Handbook - From Sect. 3.3 Multiple resolution:\n“Multiple resolution allows one entity to be resolved to multiple other entities; it can be used to embody e.g a parent-children relationship, or any other relationship. 
… A DOI name can be resolved to an arbitrary number of different points on the Internet: multiple URLs, other DOI names, and other data types.”\nISO CD 26324 - I’ve blogged here before about the ISO standardization of DOI which is now available as a Committee Draft. Multiple resolution is specifically mentioned in Sects. 3.2 and 6.2 and discussed in Sect. 6.1. From Sect. 3 “Terms and definitions” we have this definition:\n“Multiple resolution is the simultaneous return as output of several pieces of current information related to the object, in defined data structures.”\nAnd then Section 6 “Resolution of DOI name” goes on to say this:\n_“DOI resolution records may include one or more URLs, where the object may be located, and other\ninformation provided about the entity to which a DOI name has been assigned, optionally including but not restricted to: names, identifiers, descriptions, types, classifications, locations, times, measurements, and relationships to other entities.”_\nCrossref - In the help page Multiple Resolution Intro there is this:\n“As of May 2008 the Crossref main system will support assigning more than one URL to a single DOI, a concept known as multiple resolution (MR). “\nThe intro goes on to talk about the two pilot forms of multiple resolution service that have been trialled: a) interim page, and b) menu pop-up. The pop-up service is no longer supported. Only the interim page is currently offered as a production service. The help page Interim Page multiple resolution overview leads off thus:\n“Crossref’s MR service provides an interim page solution which presents a list of link choices to the end user. Each choice represents a location at which the item may be obtained and are commonly services that are co-hosting the content under agreement with the content’s Copyright holder.”\nSo, there it is. That’s DOI multiple resolution. The real important thing to note is that the official DOI position (IDF, ISO) is invitingly open while both the Crossref implementation and the description of multiple resolution itself is unduly restrictive. Multiple resolution as described by Crossref is essentially the deposition of additional URLs (pointing to copies of the same resource) for alternate routing (for geographical reasons, co-hosting arrangements, etc.) with a service presentation of alternate locations for user selection.\nMultiple resolution proper (as per the DOI Handbook and ISO draft) is the deposition of arbitrary data values and return of same with no particular services implied. Use cases for multiple resolution include the addition of URLs for referencing different (but related) network objects, e.g. a metadata record, or other resources such as supplementary information, datasets, etc. Deposit of arbitrary data types is not yet catered for by Crossref. There could, I would suggest, at least be some rudimentary provision for depositing vanilla type/value pairs (subject to policy constraints). (There is currently some work under way in defining handle data types but this need not be any showstopper to depositing new data types as any type management system will likely need to evolve over time.)\nAn obvious use case for multiple resolution (to me, anyway) would be the registration of a second URL which would point not onto a copy of the resource but onto a public metadata record. 
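To make that use case concrete, here is a purely illustrative sketch of what a resolution record carrying more than a single redirect URL might look like. The handle, the field names, and the second and third values are invented for illustration; they do not describe an actual Crossref deposit or the exact Handle wire format.

```javascript
// Hypothetical multiple-resolution record (illustration only, simplified fields).
// A typical Crossref DOI today carries essentially just the first value.
var resolutionRecord = {
  handle: '10.1234/example', // made-up DOI
  values: [
    { index: 1, type: 'URL',  data: 'http://publisher.example.org/article/123' },          // splash page
    { index: 2, type: 'URL',  data: 'http://publisher.example.org/article/123/metadata' }, // public metadata record
    { index: 3, type: 'DESC', data: 'Public metadata record for the article' }             // arbitrary type/value pair
  ]
};

// A client could then return all values of one data type, as the ISO CD suggests.
function valuesOfType(record, type) {
  var out = [];
  for (var i = 0; i < record.values.length; i++) {
    if (record.values[i].type === type) {
      out.push(record.values[i].data);
    }
  }
  return out;
}

valuesOfType(resolutionRecord, 'URL'); // => both URLs
```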
(I have earlier posted here about architectures and options for exposing public data.)\nWith more than one data value in a resolution record, the process of resolving such a record is potentially complicated. As the ISO CD says in Sect. 62f:\n“Resolution requests should be capable of returning all associated values of current information, individual values, or all values of one data type.”\nThe DOI Handbook itself recognizes the problems that multiple resolution may present.\n“If the DOI name can point to many different possible “resolutions”, how is the choice made between different options? At its simplest, the user may be provided with a list from which to make a manual choice. However, this is not a scalable solution for an increasingly complex and automated environment. The DOI name will increasingly depend on automation of “service requests”, through which users (and, more importantly, users’ application software) can be passed seamlessly from a DOI name to the specific service that they require.”\nIndeed, services like OpenHandle will make it much easier to programmatically access data stored in the handle record associated with a DOI name. (I have blogged previously about the OpenHandle project and its languages support.) Note that presentation of data values to a human user may be a non-issue for mediated services.\nAnd talking of computer languages it may be amusing to ruminate briefly on their own built-in support for multiple return values. Perhaps unsurprisingly, of the dominant languages Java has no such support as this recent post addresses:\n“Today was one of those days when I wished Java would support multiple return values… but Java allows you to return only one value either an object or a primitive type.”\nBy contrast, languages such as Common Lisp do have support for multiple return values. See this post for some gory details and insights. Interesting also to reflect that as in the world of computing languages where there is a decided tilt towards a mainstream family of languages based on (or derived from) C, there may be dominant protocols at large on the Internet but no single “winner takes all”.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/mod_prism-updated/", "title": "mod_prism (Updated)", "subtitle":"", "rank": 1, "lastmod": "2008-08-21", "lastmod_ts": 1219276800, "section": "Blog", "tags": [], "description": "I’ve just put up for comment a revised mod_prism (0.3) of the existing mod_prism RSS 1.0 module. This is now updated to the current PRISM version (v2.0) which was released in February ’08 and reissued with Errata in July ’08. The current mod_prism draft is registered here.\nThe new draft charts all (five) versions of the PRISM specification (v1.0-v2.0) and maps PRISM terms to RSS 1.0 elements. Though not required as such for use of terms within an RSS 1.", "content": "I’ve just put up for comment a revised mod_prism (0.3) of the existing mod_prism RSS 1.0 module. This is now updated to the current PRISM version (v2.0) which was released in February ’08 and reissued with Errata in July ’08. The current mod_prism draft is registered here.\nThe new draft charts all (five) versions of the PRISM specification (v1.0-v2.0) and maps PRISM terms to RSS 1.0 elements. 
Though not required as such for use of terms within an RSS 1.0 feed, an RSS 1.0 module does allow for easy housekeeping as well as providing usage guidelines and examples for how to use PRISM terms within an RSS 1.0 feed.\nThe main interest for Crossref members will be the opportunity to update their current RSS 1.0 feeds to include the new PRISM terms prism:doi and prism:url. I blogged earlier here about prism:doi as it first appeared. The suggestions I put forward there were subsequently incorporated into the Errata for 2.0 which were published in July and are avaliable as a zip file here.\nI would be very interested in receiving any feedback. I guess I should add to the v1.2 example of an RSS item in the draft an example also of a v2.0 RSS item which makes use of both prism:doi and prism:url.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/search-web-services-new-committee-drafts/", "title": "Search Web Services - New Committee Drafts", "subtitle":"", "rank": 1, "lastmod": "2008-07-29", "lastmod_ts": 1217289600, "section": "Blog", "tags": [], "description": "As posted here on the SRU Implementors list, the OASIS Search Web Services Technical Committee has announced the release of five Committee Drafts, informally known as:\nAbstract Protocol Definition (APD) Binding for SRU 1.2 Auxiliary Binding for HTTP GET CQL 1.2 Binding for OpenSearch Links to specific document formats are given at the bottom of the mail. A list of the TC public documents is also available here.\nThe next phase of work for the TC will be the development of SRU/CQL 2.", "content": "As posted here on the SRU Implementors list, the OASIS Search Web Services Technical Committee has announced the release of five Committee Drafts, informally known as:\nAbstract Protocol Definition (APD) Binding for SRU 1.2 Auxiliary Binding for HTTP GET CQL 1.2 Binding for OpenSearch Links to specific document formats are given at the bottom of the mail. A list of the TC public documents is also available here.\nThe next phase of work for the TC will be the development of SRU/CQL 2.0, and the Description Language.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/does-size-matter/", "title": "Does Size Matter?", "subtitle":"", "rank": 1, "lastmod": "2008-07-28", "lastmod_ts": 1217203200, "section": "Blog", "tags": [], "description": "Interesting post from Google, in which they say:\n“Recently, even our search engineers stopped in awe about just how big the web is these days — when our systems that process links on the web to find new content hit a milestone: 1 trillion (as in 1,000,000,000,000) unique URLs on the web at once!”\nPuts Crossref’s 32,639,020 unique DOIs into some kind of perspective: 0.0033%. But nonetheless that trace percentage still seems to me to be reasonably large, especially in view of it forming a persistent and curated set.", "content": "Interesting post from Google, in which they say:\n“Recently, even our search engineers stopped in awe about just how big the web is these days — when our systems that process links on the web to find new content hit a milestone: 1 trillion (as in 1,000,000,000,000) unique URLs on the web at once!”\nPuts Crossref’s 32,639,020 unique DOIs into some kind of perspective: 0.0033%. 
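(For the record, the arithmetic: 32,639,020 / 1,000,000,000,000 ≈ 3.26 × 10⁻⁵, i.e. roughly 0.0033% of Google's trillion URLs.)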
But nonetheless that trace percentage still seems to me to be reasonably large, especially in view of it forming a persistent and curated set.\nUpdate: Talking of Google numbers, pingdom has a post “Map of all Google data center locations” with maps of US, Europe and World locations.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/five-years/", "title": "Five Years", "subtitle":"", "rank": 1, "lastmod": "2008-07-28", "lastmod_ts": 1217203200, "section": "Blog", "tags": [], "description": "Oh wow! A rather remarkable plea here from Dan Brickley on the public-lod mailing list which calls for the registrant of the dbpedia.org DNS entry to top it up with another 5+ years worth of clocktime. Some quotes:\n_“The idea of such a cool RDF namespace having only 6 months left on the DNS registration gives me the worries.”\n“If you could add another 5-10 years to the DNS registration I’d sleep easier at night.", "content": "Oh wow! A rather remarkable plea here from Dan Brickley on the public-lod mailing list which calls for the registrant of the dbpedia.org DNS entry to top it up with another 5+ years worth of clocktime. Some quotes:\n_“The idea of such a cool RDF namespace having only 6 months left on the DNS registration gives me the worries.”\n“If you could add another 5-10 years to the DNS registration I’d sleep easier at night.”\n“Let me stress I’m not suggesting that this domain is actually at risk. Just that the not-at-risk-ness isn’t readily evident from a quick look in the DNS.”\n“Those in the know are probably confident this is all in hand, but as the SW gets bigger I suspect we ought to establish practices such as “vocabularies that seek global adoption should always have 5+ years on their DNS registries”.”_\nYes, and maybe those cool URIs should have kite marks, too. 😉\n(Btw, for those who may not already know the maximum length of time that any DNS name may be leased out in a single registration is 10 years, see the FAQ put out by ICANN.)\nSo, pity the poor user of a given semantic web application who may not know what the expectancy is behind the nodes in an RDF graph of assertions. Shifting sands, indeed.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/knols-and-citations/", "title": "Knols and Citations", "subtitle":"", "rank": 1, "lastmod": "2008-07-24", "lastmod_ts": 1216857600, "section": "Blog", "tags": [], "description": "So, Google’s Knol is now live (see this announcement on Google’s Blog). There’ll be comment aplenty about the merits of this service and how it compares to other user contributed content sites. But one curious detail struck me. In terms of citeability, compare how a Knol contribution (or “knol”) may be linked to as may be a corresponding entry in Wikipedia (here I’ve chosen the subject “Eclipse”):\nKnol\nhttps://web.archive.org/web/20080730124803/http://knol.google.com/k/jay-pasachoff/eclipse/IDZ0Z-SC/wTLUGw\nWikipedia", "content": "So, Google’s Knol is now live (see this announcement on Google’s Blog). There’ll be comment aplenty about the merits of this service and how it compares to other user contributed content sites. But one curious detail struck me. 
In terms of citeability, compare how a Knol contribution (or “knol”) may be linked to as may be a corresponding entry in Wikipedia (here I’ve chosen the subject “Eclipse”):\nKnol\nhttps://web.archive.org/web/20080730124803/http://knol.google.com/k/jay-pasachoff/eclipse/IDZ0Z-SC/wTLUGw\nWikipedia\nhttp://en.wikipedia.org/wiki/Eclipse\nThe Knol link includes author name, subject, and service gunk, while the Wikipedia link includes only the subject. That makes the Wikipedia link both more readily citeable as well as being to some degree discoverable. I wonder what Google’s intentions, if any, are with respect to the citing of their pages (or “knols”) as authoritative sources of information. They don’t seem to be doing themselves many favours.\nI am minded of this post on Jeff Young’s Q6 which cites this passage from the HTTP spec (see RFC 2616, Sect. 3.2):\n“As far as HTTP is concerned, Uniform Resource Identifiers are simply formatted strings which identify-via name, location, or any other characteristic-a resource.”\nURIs bearing these so-called “characteristics” are what I would call a service URI in contrast to a name URI (something that I will elaborate on in a separate post). For now, however, I would just note that the Knol URI looks more like a service URI and the Wikipedia URI more like a name URI. I know which URI form I would prefer to cite.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/knols-and-citations-part-ii/", "title": "Knols and Citations Part II", "subtitle":"", "rank": 1, "lastmod": "2008-07-24", "lastmod_ts": 1216857600, "section": "Blog", "tags": [], "description": "Tony’s post highlights Knol’s “service” URIs. Another issue is that many Knol entries have nice long lists of unlinked references. The HTML code behind the references is very sparse.\nMight the DOI be of use in linking out from these references? I think so. Then, of course, there’s the issue of DOIs for Knols…", "content": "Tony’s post highlights Knol’s “service” URIs. Another issue is that many Knol entries have nice long lists of unlinked references. The HTML code behind the references is very sparse.\nMight the DOI be of use in linking out from these references? I think so. Then, of course, there’s the issue of DOIs for Knols…\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/blog/", "title": "Blog", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crosstech-by-numbers/", "title": "CrossTech By Numbers", "subtitle":"", "rank": 1, "lastmod": "2008-07-21", "lastmod_ts": 1216598400, "section": "Blog", "tags": [], "description": "CrossTech is two years old (less one month) and we have now seen some 145 posts. Breaking the posts down by poster we arrive at the following chart:\nNote this is not any real attempt at vainglory, more a simple excuse to play with the wonderful Google Chart API. Also, above I’ve taken the liberty of putting up an image (.png), although the chart could have been generated on the fly from this link (or tinyurl here).", "content": "CrossTech is two years old (less one month) and we have now seen some 145 posts. 
Breaking the posts down by poster we arrive at the following chart:\nNote this is not any real attempt at vainglory, more a simple excuse to play with the wonderful Google Chart API. Also, above I’ve taken the liberty of putting up an image (.png), although the chart could have been generated on the fly from this link (or tinyurl here).\nWhat is of interest in the chart is that approximately 3/4 of the posts are by Crossref members (TH, EN, RK) and 1/4 by Crossref staff (EP, GB, AT, CK). Certainly Crossref staffers are doing their bit for this blog. There’s also way too many posts from me. It would be really interesting to see some others’ views or observations per the CrossTech logo legend (“…, collaboration, …”).\nI guess the real impediment is that one needs to request an account before posting. (Certainly there’s no reason for any member to be shy about requesting an account and posting.) Note that I haven’t considered the number of commentators to the blog which is larger than the number of posters. Also a number of Crossref members are very active with their own blogs. Those blogs with a tech focus could (should?) be scooped up by a Planet style aggregator if there would be sufficient interest in maintaining a publishing technology hub.\nOne can only hope that the numbers will continue to grow (by direct posts or by aggregations) and that there will be a wider info share over the next couple of years.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/library-apis/", "title": "Library APIs", "subtitle":"", "rank": 1, "lastmod": "2008-07-21", "lastmod_ts": 1216598400, "section": "Blog", "tags": [], "description": "Roy Tennant in a post to XML4Lib announces a new list of library APIs hosted at\nhttps://web.archive.org/web/20080730080413/http://techessence.info/apis//\nA useful rough guide for us publishers to consider as we begin cultivating the multiple access routes into our own content platforms and tending to the “alphabet soup” that taken together comprises our public interfaces.", "content": "Roy Tennant in a post to XML4Lib announces a new list of library APIs hosted at\nhttps://web.archive.org/web/20080730080413/http://techessence.info/apis//\nA useful rough guide for us publishers to consider as we begin cultivating the multiple access routes into our own content platforms and tending to the “alphabet soup” that taken together comprises our public interfaces.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-matters/", "title": "Metadata Matters", "subtitle":"", "rank": 1, "lastmod": "2008-07-21", "lastmod_ts": 1216598400, "section": "Blog", "tags": [], "description": "Andy Powell has published on Slideshare this talk about metadata - see his eFoundations post for notes. It’s 130 slides long and aims\n“to cover a broad sweep of history from library cataloguing, thru the Dublin Core, Web search engines, IEEE LOM, the Semantic Web, arXiv, institutional repositories and more.”\nDon’t be fooled by the length though. This is a flip through and is a readily accessible overview on the importance of metadata.", "content": "Andy Powell has published on Slideshare this talk about metadata - see his eFoundations post for notes. It’s 130 slides long and aims\n“to cover a broad sweep of history from library cataloguing, thru the Dublin Core, Web search engines, IEEE LOM, the Semantic Web, arXiv, institutional repositories and more.”\nDon’t be fooled by the length though. 
This is a flip through and is a readily accessible overview on the importance of metadata. Slides 86-91 might be of interest here. 😉\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prism-press-release/", "title": "PRISM Press Release", "subtitle":"", "rank": 1, "lastmod": "2008-07-09", "lastmod_ts": 1215561600, "section": "Blog", "tags": [], "description": "The PRISM metadata standards group issued a press release yesterday which covered three points:\nPRISM Cookbook\nThe Cookbook provides “a set of practical implementation steps for a chosen set of use cases and provides insights into more sophisticated PRISM capabilities. While PRISM has 3 profiles, the cookbook only addresses the most commonly used profile #1, the well-formed XML profile. All recipes begin with a basic description of the business purpose it fulfills, followed by ingredients (typically a set of PRISM metadata fields or elements), and, closes with a step-by-step implementation method with sample XMLs and illustrative images.", "content": "The PRISM metadata standards group issued a press release yesterday which covered three points:\nPRISM Cookbook\nThe Cookbook provides “a set of practical implementation steps for a chosen set of use cases and provides insights into more sophisticated PRISM capabilities. While PRISM has 3 profiles, the cookbook only addresses the most commonly used profile #1, the well-formed XML profile. All recipes begin with a basic description of the business purpose it fulfills, followed by ingredients (typically a set of PRISM metadata fields or elements), and, closes with a step-by-step implementation method with sample XMLs and illustrative images.”\nPRISM 2.0 Errata The Errata “addresses a range of issues, from editorial to technical, that have been reported by the PRISM user community.”\nPRISM 2.1\nThe next version of the PRISM Specification, PRISM 2.1, is slated for release in late 2008. “This release will address complex rights for multi-platform and global distribution channels.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/now-what-about-xmp/", "title": "Now What About XMP?", "subtitle":"", "rank": 1, "lastmod": "2008-07-08", "lastmod_ts": 1215475200, "section": "Blog", "tags": [], "description": "With PDF now passed over to ISO as keeper of the format (as blogged here on CrossTech), Kas Thomas (on CMS Watch’s TrendWatch) blogs here that Adobe should now do the right thing by XMP and look to hand that over too in order to establish it as a truly open standard. As he says:\n“Let’s cut to the chase. If Adobe wants to demonstrate its commitment to openness, it should do for XMP what it has already done for PDF: Put it in the hands of a legitimate standards body.", "content": "With PDF now passed over to ISO as keeper of the format (as blogged here on CrossTech), Kas Thomas (on CMS Watch’s TrendWatch) blogs here that Adobe should now do the right thing by XMP and look to hand that over too in order to establish it as a truly open standard. As he says:\n“Let’s cut to the chase. If Adobe wants to demonstrate its commitment to openness, it should do for XMP what it has already done for PDF: Put it in the hands of a legitimate standards body. Right now it’s open in name only. “\nAnd this:\n“Adobe is pushing the XMP standard … at Adobe’s pace and in ways that benefit Adobe. (The parallels with PDF are numerous and obvious.) There are lingering technical issues waiting to be solved, however. 
Issues whose solutions shouldn’t have to be dependent on Adobe’s needs only.”\nHe’s absolutely bang on. With XMP on the threshold of finally shining through we really could do with Adobe cutting it loose. It’s time to leave home.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/iso-standard-for-pdf/", "title": "ISO Standard for PDF", "subtitle":"", "rank": 1, "lastmod": "2008-07-03", "lastmod_ts": 1215043200, "section": "Blog", "tags": [], "description": "I blogged here back in Jan. 2007 about Adobe submitting PDF 1.7 for standardization by ISO. From yesterday’s ISO press release this:\n“The new standard, ISO 32000-1, Document management – Portable document format – Part 1: PDF 1.7, is based on the PDF version 1.7 developed by Adobe. This International Standard supplies the essential information needed by developers of software that create PDF files (conforming writers), software that reads existing PDF files and interprets their contents for display and interaction (conforming readers), and PDF products that read and/or write PDF files for a variety of other purposes (conforming products).", "content": "I blogged here back in Jan. 2007 about Adobe submitting PDF 1.7 for standardization by ISO. From yesterday’s ISO press release this:\n“The new standard, ISO 32000-1, Document management – Portable document format – Part 1: PDF 1.7, is based on the PDF version 1.7 developed by Adobe. This International Standard supplies the essential information needed by developers of software that create PDF files (conforming writers), software that reads existing PDF files and interprets their contents for display and interaction (conforming readers), and PDF products that read and/or write PDF files for a variety of other purposes (conforming products).”\nCongrats to Adobe Systems!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/q6/", "title": "Q6", "subtitle":"", "rank": 1, "lastmod": "2008-07-03", "lastmod_ts": 1215043200, "section": "Blog", "tags": [], "description": "For anybody interested in the why’s and wherefore’s of OpenURL, Jeff Young at OCLC has started posting over on his blog Q6: 6 Questions - A simpler way to understand OpenURL 1.0: Who, What, Where, When, Why, and How (note: no longer available online). He’s already amassing quite a collection of thought provoking posts. His latest is The Potential of OpenURL (note: no longer available online), from which:\nOpenURL has effectively cornered the niche market where Referrers need to be decoupled from Resolvers.", "content": "For anybody interested in the why’s and wherefore’s of OpenURL, Jeff Young at OCLC has started posting over on his blog Q6: 6 Questions - A simpler way to understand OpenURL 1.0: Who, What, Where, When, Why, and How (note: no longer available online). He’s already amassing quite a collection of thought provoking posts. His latest is The Potential of OpenURL (note: no longer available online), from which:\nOpenURL has effectively cornered the niche market where Referrers need to be decoupled from Resolvers.\nBlog has UML diags, definitions, musings, etc. - something for everybody. 
Definitely worth checking out.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/client-handle-demo/", "title": "Client Handle Demo", "subtitle":"", "rank": 1, "lastmod": "2008-07-01", "lastmod_ts": 1214870400, "section": "Blog", "tags": [], "description": "This test form shows handle value data being processed by JavaScript in the browser using an OpenHandle service. This is different from the handle proxy server which processes the handle data on the server - the data here is processed by the client.\nEnter a handle and the standard OpenHandle “Hello World” document is printed. Other processing could equally be applied to the handle values. (Note that the form may not work in web-based feed readers.", "content": "This test form shows handle value data being processed by JavaScript in the browser using an OpenHandle service. This is different from the handle proxy server which processes the handle data on the server - the data here is processed by the client.\nEnter a handle and the standard OpenHandle “Hello World” document is printed. Other processing could equally be applied to the handle values. (Note that the form may not work in web-based feed readers.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/exposing-public-data-options/", "title": "Exposing Public Data: Options", "subtitle":"", "rank": 1, "lastmod": "2008-07-01", "lastmod_ts": 1214870400, "section": "Blog", "tags": [], "description": "This is a follow-on to an earlier post which set out the lie of the land as regards DOI services and data for DOIs registered with Crossref. That post differentiated between a native DOI resolution through a public DOI service which acts upon the “associated values held in the DOI resolution record” (per ISO CD 26324) and other related DOI protected and/or private services which merely use the DOI as a key into non-public database offering.\nFollowing the service architecture outlined in that post, options for exposing public data appear as follows:\nPrivate Service Publisher hosted – Publisher private service Protected Service Crossref hosted – Industry protected service Crossref routed – Publisher private service Public Service Handle System (DOI handle) – Global public service (native DOI service) Handle System (DOI ‘buddy’ handle) – Publisher public service (Continues below.) \u0026lt;p\u0026gt; ", "content": "This is a follow-on to an earlier post which set out the lie of the land as regards DOI services and data for DOIs registered with Crossref. That post differentiated between a native DOI resolution through a public DOI service which acts upon the “associated values held in the DOI resolution record” (per ISO CD 26324) and other related DOI protected and/or private services which merely use the DOI as a key into non-public database offering.\nFollowing the service architecture outlined in that post, options for exposing public data appear as follows:\nPrivate Service Publisher hosted – Publisher private service Protected Service Crossref hosted – Industry protected service Crossref routed – Publisher private service Public Service Handle System (DOI handle) – Global public service (native DOI service) Handle System (DOI ‘buddy’ handle) – Publisher public service (Continues below.) \u0026lt;p\u0026gt; Option #1 would make public data available through a private service at a publisher host based on the DOI. This places certain constraints on service discovery and persistence. 
Autodiscovery links can be placed into Web pages, but there is no opportunity to ‘embed’ the services into the DOI itself, and hence these cannot be considered native DOI services. Without a published API (and hence some degree of commitment from the publisher) the service access points (and possibly the services, too) are fragile.\nOption #2 would require Crossref to develop a service which would either a) deliver some public data on behalf of the publisher, or b) route requests through to a bespoke publisher service. Both options would require development at Crossref and an upload mechanism for the publisher to pass along data or service address. Both options would be offered as a new member service and would thus likely be subject to membership policy arrangements. One should consider that there would be some restrictions on service operation. One possible restriction might be that this would be a one-time service registration at Crossref and that any additional services would need to be added at the publisher end.\nOption #3 uses the existing Handle System infrastructure and provides a public read service. There are two possibilities: a) add a record (or records) to the DOI handle, or b) add records to a DOI ‘buddy’ handle under publisher control. Both require further explanation:\nOption #3a would require Crossref consent. Unless these records (handle values) were registered by Crossref there would be concerns over interoperability. That and security concerns would almost certainly require that Crossref writes the record. But this would then need to be developed as per Option #2 above. And if a mechanism were put in place it could be restrictive in practice, e.g. not allowing additional records to be inserted (as already noted in Option #2).\nOption #3b requires no prior Crossref consent. It is an option available to publishers who run a handle server. This can best be viewed as deploying a platform (a DOI ‘buddy’ handle) for hosting service access points with an intention to upload into the DOI handle (effectively Option #3a) as common public services are developed. In short, a public service incubator. Meantime the platform provides for an independent deployment and multiple services can be added as required. An uplink from a so-called DOI ‘buddy’ handle to the DOI handle would be maintained, and also as Crossref allows a down link from the DOI handle to the DOI ‘buddy’ handle (a ‘see also’ type link) could be established thus pairing off these two handles. (Of course, additional values whether held in the DOI resolution record or especially in an associated DOI ‘buddy’ record would be subject to common typing constraints for semantic interoperability.)\nMy personal feeling is that public data is best exposed via a public resolution record with no strings attached. That is the surest way to guarantee both data persistence and accessibility. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-thing-about-doi/", "title": "The Thing About DOI", "subtitle":"", "rank": 1, "lastmod": "2008-06-30", "lastmod_ts": 1214784000, "section": "Blog", "tags": [], "description": "With Library of Congress sometime back (Feb. ’08) announcing LCCN Permalinks and NLM also (Mar. ’08) introducing simplified web links with its PubMed identifier one might be forgiven for wondering what is the essential difference between a DOI name and these (and other) seemingly like-minded identifiers from a purely web point of view. 
Both these identifiers can be accessed through very simple URL structures:\nWith Library of Congress sometime back (Feb. ’08) announcing LCCN Permalinks and NLM also (Mar. ’08) introducing simplified web links with its PubMed identifier one might be forgiven for wondering what is the essential difference between a DOI name and these (and other) seemingly like-minded identifiers from a purely web point of view. Both these identifiers can be accessed through very simple URL structures:\nhttps://lccn.loc.gov/2003556443 http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/pubmed/16481614 (although https://web.archive.org/web/20090106151604/http://pubmed.com/1386390 also works as noted here) And the DOI itself can be resolved using an equally simple URL structure:\nhttp://dx.doi.org/10.1000/1 So, why does DOI not just present itself as a simple database number which is accessed through a simple web link and have done with it, e.g. a page for the object named by the DOI “10.1000/1” is retrieved from the DOI proxy server at http://0-dx-doi-org.libus.csd.mu.edu/?\nEssentially the typical DOI link presents an elementary web-based URL which performs a useful redirect service. What is different about this and, say a PURL, which offers a similar redirect service? What’s the big deal?\n(Continues below.)\n", "content": "With Library of Congress sometime back (Feb. ’08) announcing LCCN Permalinks and NLM also (Mar. ’08) introducing simplified web links with its PubMed identifier one might be forgiven for wondering what is the essential difference between a DOI name and these (and other) seemingly like-minded identifiers from a purely web point of view. Both these identifiers can be accessed through very simple URL structures:\nWith Library of Congress sometime back (Feb. ’08) announcing LCCN Permalinks and NLM also (Mar. ’08) introducing simplified web links with its PubMed identifier one might be forgiven for wondering what is the essential difference between a DOI name and these (and other) seemingly like-minded identifiers from a purely web point of view. Both these identifiers can be accessed through very simple URL structures:\nhttps://lccn.loc.gov/2003556443 http://0-www-ncbi-nlm-nih-gov.libus.csd.mu.edu/pubmed/16481614 (although https://web.archive.org/web/20090106151604/http://pubmed.com/1386390 also works as noted here) And the DOI itself can be resolved using an equally simple URL structure:\nhttp://dx.doi.org/10.1000/1 So, why does DOI not just present itself as a simple database number which is accessed through a simple web link and have done with it, e.g. a page for the object named by the DOI “10.1000/1” is retrieved from the DOI proxy server at http://0-dx-doi-org.libus.csd.mu.edu/?\nEssentially the typical DOI link presents an elementary web-based URL which performs a useful redirect service. What is different about this and, say a PURL, which offers a similar redirect service? What’s the big deal?\n(Continues below.)\nWell, the thing about DOI is that it is built upon a directory service - the Handle System - and can be accessed either through native directory calls or more likely through standard web interfaces. From a web point of view we are usually interested in the latter. Differently from a simple lookup and/or redirect service which has a fixed entry point on the Web, the DOI can be serviced at any DOI service access point on the Internet. 
There are potentially multiple entry points which can be hosted by different organizations with separate IP addresses and/or DNS names.
For example, the DOI proxy (described here) is just _one instance_ of such a service. Others could equally exist. And, in fact, they do. The following handle web services will also take the DOI and do the business:
* http://0-hdl-handle-net.libus.csd.mu.edu/10.1000/1
* http://0-hdl-nature-com.libus.csd.mu.edu/10.1000/1
With handle we have in essence a redirect to a redirect. Or in the case of a web service, a redirect (from HTTP to HDL) to a redirect (from HDL to HDL) to a redirect (from HDL to HTTP). That is, switch down from the web interface to the native handle layer, route the call from this local handle server (via the global handle server) to the DOI handle server, fetch the URL stored with the DOI, and switch back to the Web at that location. But there’s more. The standard URL redirect is just _one_ example of a DOI service. Multiple services can also be provided for the DOI. Currently the DOI travels light and is bound to the minimum of useful data, essentially just the URL for a splash page in the case of many Crossref DOIs. But it could also carry pointers to structured information or to relationships with other objects. As yet, the DOI is a fledgling in terms of realizing its true potential as a seasoned actor that can play out many roles - assume many guises. A queen bee, in effect, with a hive of worker bees servicing it. It is not joined at the hip with any particular web service, as might be commonly understood from the current simple redirect service. It offers much more. It is, however, true that, both for reasons of link persistency and in order to maintain link ranking with search crawlers, the preferred web entry point is via the DOI proxy. It just doesn’t have to be that way - that’s all. Hard linking is something we are beginning to unlearn; instead we are taking our first steps towards embracing the service-mediated links that OpenURL and DOI can both offer. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/handle-system-workshop/", "title": "Handle System Workshop", "subtitle":"", "rank": 1, "lastmod": "2008-06-20", "lastmod_ts": 1213920000, "section": "Blog", "tags": [], "description": "I was invited to speak at the Handle System Workshop which was run back to back with an IDF Open Meeting earlier this week in Brussels and hosted at the Office for Official Publications of the European Union. (Location was in the Charlemagne Building, at left in image, within the rather impressive meeting room Jean Durieux, at right.)\nMy talk (‘A Distributed Metadata Architecture’) was focussed on how OpenHandle and XMP could be leveraged to manage dispersed media assets.", "content": " I was invited to speak at the Handle System Workshop which was run back to back with an IDF Open Meeting earlier this week in Brussels and hosted at the Office for Official Publications of the European Union. (Location was in the Charlemagne Building, at left in image, within the rather impressive meeting room Jean Durieux, at right.)\nMy talk (‘A Distributed Metadata Architecture’) was focussed on how OpenHandle and XMP could be leveraged to manage dispersed media assets. 
(The OpenHandle work makes the Handle and DOI systems more readily acessible to applications.)\nOther speakers were Norman Paskin (IDF), Gordon Dunsire (Centre for Digital Library Research, University of Strathclyde), Brian Green (Editeur), Jill Cousins (European Digital Library Foundation), Jan Brase (TIB, Germany), Larry Lannom (CNRI), Ed Pentz (Crossref), Nigel Ward (Link Affiliates), and Dan Broeder (CLARIN/MPG).\nThe agendas for the two meetings are posted here (DOI) and here (Handle).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/pubmed-central-links-to-publisher-full-text/", "title": "PubMed Central Links to Publisher Full Text", "subtitle":"", "rank": 1, "lastmod": "2008-06-12", "lastmod_ts": 1213228800, "section": "Blog", "tags": [], "description": "A Crossref Member Briefing is available that explains how PubMed Central (PMC) links to publisher full text, how PMC uses DOIs and how PMC should be using DOIs. The briefing is entitled “Linking to Publisher Full Text from PubMed Central” (PDF 85k).\nCrossref considers it very important the PMC uses DOIs as the main means to link to the publisher version of record for an article and we are recommending that publishers try to convince PMC to use DOIs in an automated way.", "content": "A Crossref Member Briefing is available that explains how PubMed Central (PMC) links to publisher full text, how PMC uses DOIs and how PMC should be using DOIs. The briefing is entitled “Linking to Publisher Full Text from PubMed Central” (PDF 85k).\nCrossref considers it very important the PMC uses DOIs as the main means to link to the publisher version of record for an article and we are recommending that publishers try to convince PMC to use DOIs in an automated way. Almost all of the PMC articles contain DOIs but they aren’t linked. This seems like a waste considering that publishers have invested a lot in Crossref and DOIs as unique identifiers and persistent links.\nThis issue will be of interest to anyone who publishers journal articles that are the result of NIH funding and fall under the NIH Public Access Policy.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/robots-one-standard-fits-all/", "title": "Robots: One Standard Fits All", "subtitle":"", "rank": 1, "lastmod": "2008-06-04", "lastmod_ts": 1212537600, "section": "Blog", "tags": [], "description": "Interesting post from Yahoo! Search’s Director of Product Management, Priyank Garg, “One Standard Fits All: Robots Exclusion Protocol for Yahoo!, Google and Microsoft“. Interesting also for what it doesn’t talk about. No mention here of ACAP.", "content": "Interesting post from Yahoo! Search’s Director of Product Management, Priyank Garg, “One Standard Fits All: Robots Exclusion Protocol for Yahoo!, Google and Microsoft“. Interesting also for what it doesn’t talk about. No mention here of ACAP.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/exposing-public-data/", "title": "Exposing Public Data", "subtitle":"", "rank": 1, "lastmod": "2008-05-31", "lastmod_ts": 1212192000, "section": "Blog", "tags": [], "description": "As the range of public services (e.g. RSS) offered by publishers has matured this gives rise to the question: How can they expose their public data so that a user may discover them? Especially, with DOI there is now in place a persistence link infrastructure for accessing primary content. 
How can publishers leverage that infrastructure to advantage?\nAnyway, I offer this figure as to how I see the current lie of the land as regards DOI services and data.", "content": "As the range of public services (e.g. RSS) offered by publishers has matured this gives rise to the question: How can they expose their public data so that a user may discover them? Especially, with DOI there is now in place a persistence link infrastructure for accessing primary content. How can publishers leverage that infrastructure to advantage?\nAnyway, I offer this figure as to how I see the current lie of the land as regards DOI services and data.\nLegend - Current DOI service architecture showing data repositories, service access points, and open/closed data domains. The figure above shows the three data repositories and service access points in the current DOI services architecture. At right and bottom of the figure are the two types of service (public services and private services) that together are instrumental in getting a user from a DOI-based link (on a third-party site) to the correct page of content (from the primary content provider). (Note that a fourth, private data repository – the institutional repository – comes into play when OpenURL user context-sensitive linking is added.)\nAt left of the figure are services operated by Crossref on its own metadata database which support a) publisher lookups of DOI, and b) third-party metadata services (DOI-to-metadata and metadata-to-DOI conversions). These might best be labelled protected services since they are not freely available: the first is open to members at a cost, while the second is free but to associated organizations only – members, affiliates, etc.\nThe term open data is used here in the sense implied by the current W3C SWEO LOD (Linking Open Data) Project. Open data is public data unencumbered by any access restrictions. By contrast, closed data is data that has some access restrictions placed on it – even data that is open to affiliates. (This is not an issue that LOD addresses directly, although it is implied that data is globally ‘open’, i.e. public.)\nThe current DOI service architecture thus breaks down as:\nNative DOI services – resolving the DOI token Public – DOI Proxy Server (‘dx.doi.org’) Related DOI services – using the DOI token Protected – Crossref Private – Publisher Note that a DOI is ‘resolved’ into state data registered with it, or as ISO CD 26324 puts it: “Resolution is the process of submitting a specific DOI name to the DOI system and receiving in return the associated values held in the DOI resolution record for one or more types of data relating to the object identified by that DOI name.”\nSo, how might publishers best leverage this DOI service architecture to expose their public data?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dark-side-of-the-doi/", "title": "Dark Side of the DOI", "subtitle":"", "rank": 1, "lastmod": "2008-05-29", "lastmod_ts": 1212019200, "section": "Blog", "tags": [], "description": "(Click to enlarge.)\nFor infotainment only (and because it’s a pretty printing). Glimpse into the dark world of DOI. Here, the handle contents for doi:10.1038/nature06930 exposed as a standard OpenHandle ‘Hello World’ document. Browser image courtesy of Processing.js and Firefox 3 RC1.", "content": "\n(Click to enlarge.)\nFor infotainment only (and because it’s a pretty printing). Glimpse into the dark world of DOI. 
Here, the handle contents for doi:10.1038/nature06930 exposed as a standard OpenHandle ‘Hello World’ document. Browser image courtesy of Processing.js and Firefox 3 RC1.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/referencing-openurl/", "title": "Referencing OpenURL", "subtitle":"", "rank": 1, "lastmod": "2008-05-29", "lastmod_ts": 1212019200, "section": "Blog", "tags": [], "description": "So, why is it just so difficult to reference OpenURL?\nApart from the standard itself (hardly intended for human consumption - see abstract page here and PDF and don’t even think to look at those links - they weren’t meant to be cited!), seems that the best reference is to the Wikipedia page. There is the OpenURL Registry page at http://0-alcme-oclc-org.libus.csd.mu.edu/openurl/servlet/OAIHandler?verb=ListSets but this is just a workshop. Not much there beyond the OpenURL registered items.", "content": "So, why is it just so difficult to reference OpenURL?\nApart from the standard itself (hardly intended for human consumption - see abstract page here and PDF and don’t even think to look at those links - they weren’t meant to be cited!), seems that the best reference is to the Wikipedia page. There is the OpenURL Registry page at http://0-alcme-oclc-org.libus.csd.mu.edu/openurl/servlet/OAIHandler?verb=ListSets but this is just a workshop. Not much there beyond the OpenURL registered items. (And why does the page seem uncertain as to whether it’s a “repository” or a “registry”? Is there no difference between those terms?) The only other links are to a mix of HTML and PDF resources. (There really should be a health warning on links to PDFs - they are just not browser friendly documents.) And, I do have to wonder at this: the registry page has a link to the unofficial 0.1 version but not to the 1.0 standard. Er, why? And don’t even try this link: http://openurl.info/. Not much info there.\nWhere else to go? The NISO site allows a search on “openurl” which returns links to the standard and to other related documents.\nAnd then there’s the community site https://web.archive.org/web/20091027024029/http://openurl.code4lib.org/ targeted at developers and its Planet OpenURL which is a useful source for current awareness.\nMe, I’m sticking with the Wikipedia page as the best reference for OpenURL. How odd that OpenURL aimed at improving linking on the Web should not have it’s own simple access point. Thank heavens at least that DOI has a single reference point: http://0-doi-org.libus.csd.mu.edu/.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/tombstone/", "title": "Tombstone", "subtitle":"", "rank": 1, "lastmod": "2008-05-23", "lastmod_ts": 1211500800, "section": "Blog", "tags": [], "description": "So, the big guns have decided that XRI is out. In a message from the TAG yesterday, variously noted as being “categorical” (Andy Powell, eFoundations) and a “proclamation” (Edd Dumbill, XML.com), the co-chairs (Tim Berners-Lee and Stuart Williams) had this to say:\n“We are not satisfied that XRIs provide functionality not readily available from http: URIs. Accordingly the TAG recommends against taking the XRI specifications forward, or supporting the use of XRIs as identifiers in other specifications.", "content": "So, the big guns have decided that XRI is out. 
In a message from the TAG yesterday, variously noted as being “categorical” (Andy Powell, eFoundations) and a “proclamation” (Edd Dumbill, XML.com), the co-chairs (Tim Berners-Lee and Stuart Williams) had this to say:\n“We are not satisfied that XRIs provide functionality not readily available from http: URIs. Accordingly the TAG recommends against taking the XRI specifications forward, or supporting the use of XRIs as identifiers in other specifications.”\nAlas, poor XRI. But what might this also mean for other URI schemes (note the reference above to “http: URIs)? Well, the message starts out with this:\n“In The Architecture of the World Wide Web 1 the TAG sets out the reasons why http: URIs are the foundation of the value proposition for the Web, and should be used for naming on the Web. “\nNow I’m not sure that this is quite what AWWW actually says. I don’t find it to be that insistent that “http” URIs … should be used for naming on the Web” but I would need to read it more carefully. Certainly, “http: URIs” fit the bill and are top of the class. But there is also a general recognition that other schemes than “http:” do exist.\nInteresting times anyway with a “winner takes all” approach to identification. I wonder what this all means for DOI.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-reuse-policies/", "title": "Metadata Reuse Policies", "subtitle":"", "rank": 1, "lastmod": "2008-05-20", "lastmod_ts": 1211241600, "section": "Blog", "tags": [], "description": "Following on from yesterday’s post about making metadata available on our Web pages, I wanted to ask here about “metadata reuse policies”. Does anybody have a clue as to what might constitute a best practice in this area? I’m specifically interested in license terms, rather than how those terms would be encoded or carried. Increasingly we are finding more channels to distribute metadata (RSS, HTML, OAI-PMH, etc.) but don’t yet have any clear statement for our customers as to how they might reuse that data.", "content": "Following on from yesterday’s post about making metadata available on our Web pages, I wanted to ask here about “metadata reuse policies”. Does anybody have a clue as to what might constitute a best practice in this area? I’m specifically interested in license terms, rather than how those terms would be encoded or carried. Increasingly we are finding more channels to distribute metadata (RSS, HTML, OAI-PMH, etc.) but don’t yet have any clear statement for our customers as to how they might reuse that data.\nTime to put the caveats aside and focus on the actuals.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/natures-metadata-for-web-pages/", "title": "Nature’s Metadata for Web Pages", "subtitle":"", "rank": 1, "lastmod": "2008-05-19", "lastmod_ts": 1211155200, "section": "Blog", "tags": [], "description": "Well, we may not be the first but wanted anyway to report that Nature has now embedded metadata (HTML meta tags) into all its newly published pages including full text, abstracts and landing pages (all bar four titles which are currently being worked on). Metadata coverage extends back through the Nature archives (and depth of coverage varies depending on title). 
This conforms to the W3C’s Guideline 13.2 in the Web Content Accessibility Guidelines 1.0 which exhorts content publishers to “provide metadata to add semantic information to pages and sites”.\nMetadata is provided in both DC and PRISM formats as well as in Google’s own bespoke metadata format. This generally follows the DCMI recommendation “Expressing Dublin Core metadata using HTML/XHTML meta and link elements, and the earlier RFC 2731 “Encoding Dublin Core Metadata in HTML”. (Note that schema name is normalized to lowercase.) Some notes:\nThe DOI is included in the “dc.identifier” term in URI form which is the Crossref recommendation for citing DOI. We could consider adding also “prism.doi” for disclosing the native DOI form. This requires the PRISM namespace declaration to be bumped to v2.0. We might consider synchronizing this change with our RSS feeds which are currently pegged at v1.2, although note that the RSS module mod_prism currently applies only to PRISM v1.2. We could then also add in a “prism.url” term to link back (through the DOI proxy server) to the content site. The namespace issue listed above still holds. The “citation_” terms are not anchored in any published namespace which does make this term set problematic in application reuse. It would be useful to be able to reference a namespace (e.g. “rel=\u0026quot;schema.gs\u0026quot; href=\u0026quot;...\u0026quot;“) for these terms and to cite them as e.g. “gs.citation_title“. The HTML metadata sets from an example landing page are presented below. ", "content": "Well, we may not be the first but wanted anyway to report that Nature has now embedded metadata (HTML meta tags) into all its newly published pages including full text, abstracts and landing pages (all bar four titles which are currently being worked on). Metadata coverage extends back through the Nature archives (and depth of coverage varies depending on title). This conforms to the W3C’s Guideline 13.2 in the Web Content Accessibility Guidelines 1.0 which exhorts content publishers to “provide metadata to add semantic information to pages and sites”.\nMetadata is provided in both DC and PRISM formats as well as in Google’s own bespoke metadata format. This generally follows the DCMI recommendation “Expressing Dublin Core metadata using HTML/XHTML meta and link elements, and the earlier RFC 2731 “Encoding Dublin Core Metadata in HTML”. (Note that schema name is normalized to lowercase.) Some notes:\nThe DOI is included in the “dc.identifier” term in URI form which is the Crossref recommendation for citing DOI. We could consider adding also “prism.doi” for disclosing the native DOI form. This requires the PRISM namespace declaration to be bumped to v2.0. We might consider synchronizing this change with our RSS feeds which are currently pegged at v1.2, although note that the RSS module mod_prism currently applies only to PRISM v1.2. We could then also add in a “prism.url” term to link back (through the DOI proxy server) to the content site. The namespace issue listed above still holds. The “citation_” terms are not anchored in any published namespace which does make this term set problematic in application reuse. It would be useful to be able to reference a namespace (e.g. “rel=\u0026quot;schema.gs\u0026quot; href=\u0026quot;...\u0026quot;“) for these terms and to cite them as e.g. “gs.citation_title“. The HTML metadata sets from an example landing page are presented below. If you view the page source you should see something like the text below. 
(Note that you may have to scroll past whitespace which is emitted by the HTML template generator.)\n\u0026lt;pre\u0026gt;\u0026amp;lt;link title=\u0026quot;schema(DC)\u0026quot; rel=\u0026quot;schema.dc\u0026quot; href=\u0026quot;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026quot; /\u0026amp;gt; \u0026lt;meta name=\u0026ldquo;dc.publisher\u0026rdquo; content=\u0026ldquo;Nature Publishing Group\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;dc.language\u0026rdquo; content=\u0026ldquo;en\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;dc.rights\u0026rdquo; content=\u0026quot;© 2008 Nature Publishing Group\u0026quot; /\u0026gt; \u0026lt;meta name=\u0026ldquo;dc.title\u0026rdquo; content=\u0026ldquo;Crystal structure of squid rhodopsin\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;dc.creator\u0026rdquo; content=\u0026ldquo;Midori Murakami\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;dc.creator\u0026rdquo; content=\u0026ldquo;Tsutomu Kouyama\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;dc.identifier\u0026rdquo; content=\u0026ldquo;doi:10.1038/nature06925\u0026rdquo; /\u0026gt; \u0026lt;link title=\u0026ldquo;schema(PRISM)\u0026rdquo; rel=\u0026ldquo;schema.prism\u0026rdquo; href=\u0026ldquo;https://web.archive.org/web/20080516191035/http://www.prismstandard.org//namespaces/1.2/basic/\" /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.copyright\u0026rdquo; content=\u0026rdquo;© 2008 Nature Publishing Group\u0026quot; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.rightsAgent\u0026rdquo; content=\u0026ldquo;permissions@nature.com\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.publicationName\u0026rdquo; content=\u0026ldquo;Nature\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.issn\u0026rdquo; content=\u0026ldquo;0028-0836\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.eIssn\u0026rdquo; content=\u0026ldquo;1476-4687\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.volume\u0026rdquo; content=\u0026ldquo;453\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.number\u0026rdquo; content=\u0026ldquo;7193\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.startingPage\u0026rdquo; content=\u0026ldquo;363\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;prism.endingPage\u0026rdquo; content=\u0026ldquo;367\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_journal_title\u0026rdquo; content=\u0026ldquo;Nature\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_publisher\u0026rdquo; content=\u0026ldquo;Nature Publishing Group\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_authors\u0026rdquo; content=\u0026ldquo;Midori Murakami, Tsutomu Kouyama\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_title\u0026rdquo; content=\u0026ldquo;Crystal structure of squid rhodopsin\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_volume\u0026rdquo; content=\u0026ldquo;453\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_issue\u0026rdquo; content=\u0026ldquo;7193\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_firstpage\u0026rdquo; content=\u0026ldquo;363\u0026rdquo; /\u0026gt; \u0026lt;meta name=\u0026ldquo;citation_doi\u0026rdquo; content=\u0026ldquo;doi:10.1038/nature06925\u0026rdquo; /\u0026gt; While it is not expected that search engines will index these terms directly and that no direct SEO is intended, we think there is enough value for applications to make use of these terms. 
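As an aside on the point about applications making use of these terms: the sketch below is ours and purely illustrative (it is not code from Nature, Crossref, or RFC 2731). It shows how a simple script might harvest the dc.*, prism.* and citation_* meta terms from a landing page; the resolver URL in the example is illustrative only.

```python
# Minimal, illustrative sketch: harvest dc.*, prism.* and citation_* <meta>
# terms from a landing page. Not Nature's or Crossref's code.
from html.parser import HTMLParser
from urllib.request import urlopen


class MetaHarvester(HTMLParser):
    """Collect (name, content) pairs from <meta> elements."""

    def __init__(self):
        super().__init__()
        self.terms = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name, content = attrs.get("name"), attrs.get("content")
        if name and content:
            self.terms.append((name.lower(), content))


def harvest(url):
    """Return the metadata terms of interest from the page at `url`."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = MetaHarvester()
    parser.feed(html)
    return [(n, c) for n, c in parser.terms
            if n.startswith(("dc.", "prism.", "citation_"))]


if __name__ == "__main__":
    # Example DOI from the landing page discussed above; URL form is illustrative.
    for name, content in harvest("http://dx.doi.org/10.1038/nature06925"):
        print(f"{name}: {content}")
```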
The terms are reasonably accessible to simple scripts, etc. Note that even in [RFC 2731][4] (published in 1999) there is a Perl script listed in Section 9 which allows the metadata name/value pairs to be easily pulled out. Running this over the example page yields the following output: \u0026lt;pre\u0026gt;@(urc; @|MISSING ELEMENT NAME; text/css @|MISSING ELEMENT NAME; text/html; charset=iso-8859-1 @|robots; noarchive @|keywords; Nature, science, science news, biology, physics, genetics, astronomy, astrophysics, quantum physics, evolution, evolutionary biology, geophysics, climate change, earth science, materials science, interdisciplinary science, science policy, medicine, systems biology, genomics, transcriptomics, palaeobiology, ecology, molecular biology, cancer, immunology, pharmacology, development, developmental biology, structural biology, biochemistry, bioinformatics, computational biology, nanotechnology, proteomics, metabolomics, biotechnology, drug discovery, environmental science, life, marine biology, medical research, neuroscience, neurobiology, functional genomics, molecular interactions, RNA, DNA, cell cycle, signal transduction, cell signalling. @|description; Nature is the international weekly journal of science: a magazine style journal that publishes full-length research papers in all disciplines of science, as well as News and Views, reviews, news, features, commentaries, web focuses and more, covering all branches of science and how science impacts upon all aspects of society and life. @|dc.publisher; Nature Publishing Group @|dc.language; en @|dc.rights; #169; 2008 Nature Publishing Group @|dc.title; Crystal structure of squid rhodopsin @|dc.creator; Midori Murakami @|dc.creator; Tsutomu Kouyama @|dc.identifier; doi:10.1038/nature06925 @|prism.copyright; © 2008 Nature Publishing Group @|prism.rightsAgent; permissions@nature.com @|prism.publicationName; Nature @|prism.issn; 0028-0836 @|prism.eIssn; 1476-4687 @|prism.volume; 453 @|prism.number; 7193 @|prism.startingPage; 363 @|prism.endingPage; 367 @|citation_journal_title; Nature @|citation_publisher; Nature Publishing Group @|citation_authors; Midori Murakami, Tsutomu Kouyama @|citation_title; Crystal structure of squid rhodopsin @|citation_volume; 453 @|citation_issue; 7193 @|citation_firstpage; 363 @|citation_doi; doi:10.1038/nature06925 @)urc; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dois-and-pubmed-central-why-no-links/", "title": "DOIs and PubMed Central - why no links?", "subtitle":"", "rank": 1, "lastmod": "2008-05-14", "lastmod_ts": 1210723200, "section": "Blog", "tags": [], "description": "Further to my previous post “NIH Mandate and PMCIDs” we’ve been looking into linking to articles on publishers’ sites from PubMed Central (PMC). There are a couple of ways this happens currently (see details below) but these are complicated and will lead to broken links and more difficulty for PMC and publishers in managing the links. Crossref is going to be putting together a briefing note for its members on this soon.\nThe main issue we are raising with PMC, and that we will encourage publishers to raise too, is why doesn’t PMC just automatically link DOIs? 
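On the question just posed about PMC automatically linking DOIs, here is a very rough sketch of the idea. It is purely illustrative: the regular expression is a simplification rather than an official DOI grammar, and the proxy URL is simply the standard dx.doi.org form. The point is how little machinery is needed to turn displayed DOI strings into persistent links.

```python
# Illustrative only: wrap anything that looks like a DOI in a link to the DOI
# proxy. The regex is a simplification, not an official DOI grammar.
import re

DOI_PATTERN = re.compile(r'\b(10\.\d{4,9}/[^\s"<>]+)')


def link_dois(html_fragment):
    """Turn DOI strings in a fragment of page text into proxy links."""
    def repl(match):
        raw = match.group(1)
        doi = raw.rstrip('.,;)')   # trim trailing punctuation from the match
        trail = raw[len(doi):]     # keep trimmed punctuation outside the link
        return f'<a href="http://dx.doi.org/{doi}">{doi}</a>{trail}'
    return DOI_PATTERN.sub(repl, html_fragment)


print(link_dois("See doi:10.1038/nature06925 for details."))
# -> See doi:<a href="http://dx.doi.org/10.1038/nature06925">10.1038/nature06925</a> for details.
```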
Most of the articles in PMC have DOIs so this would require very little effort from PMC and no effort from publishers and would give readers a perisistent link to the publisher’s version of an article.\n", "content": "Further to my previous post “NIH Mandate and PMCIDs” we’ve been looking into linking to articles on publishers’ sites from PubMed Central (PMC). There are a couple of ways this happens currently (see details below) but these are complicated and will lead to broken links and more difficulty for PMC and publishers in managing the links. Crossref is going to be putting together a briefing note for its members on this soon.\nThe main issue we are raising with PMC, and that we will encourage publishers to raise too, is why doesn’t PMC just automatically link DOIs? Most of the articles in PMC have DOIs so this would require very little effort from PMC and no effort from publishers and would give readers a perisistent link to the publisher’s version of an article.\nCurrent PMC linking methods. 1) Links on Author Manuscripts in PMC are pulled in from PubMed’s LinkOut service which requires the publisher to register with PubMed and provide linking files. The DOI can be specified as the linking mechanism via LinkOut.\nFor final version of articles in PMC the journal image at the top of the page can be linked to the journal homepage or can have a “this article” link to the publisher’s site. The publisher has to sign up with PMC for specifying the header graphic and the links. https://web.archive.org/web/20080916065531/http://0-www-pubmedcentral-nih-gov.libus.csd.mu.edu/pmcdoc/pubsetup.doc (word doc) say “The static base (http://0-www-biomedcentral-com.libus.csd.mu.edu/) of the URLs for this link comes from the HTML template. PMC then dynamically completes the URL by adding an issn/vol/page. ” and then says that any item in the XML (such as the DOI) can be used. Both of the approaches outlined above require extra work and will be difficult for smaller publishers. In addition, the links will be fragile by not being based on DOIs. Publishers can specify that DOIs can be used but it isn’t easy. We’d like to leverage the resources that publishers have already put into the DOI system but automatically making the DOIs active links - it would be very easy.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/pubmed/", "title": "PubMed", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/openhandle-languages-support/", "title": "OpenHandle: Languages Support", "subtitle":"", "rank": 1, "lastmod": "2008-04-21", "lastmod_ts": 1208736000, "section": "Blog", "tags": [], "description": "Following up the earlier post on OpenHandle, there are now a number of language examples which have been contributed to the project. The diagram below shows the OpenHandle service in schematic with various languages support. Briefly, OpenHandle aims to provide a web services interface to the Handle System to simplify access to the data stored for a given Handle.\n(Note that the diagram is an HTML imagemap and all elements are “clickable”.", "content": "Following up the earlier post on OpenHandle, there are now a number of language examples which have been contributed to the project. 
The diagram below shows the OpenHandle service in schematic with various languages support. Briefly, OpenHandle aims to provide a web services interface to the Handle System to simplify access to the data stored for a given Handle.\n(Note that the diagram is an HTML imagemap and all elements are “clickable”.)\nNote: this diagram is no longer available online as of 2023. We show the code here for reference.\n\u0026lt;map name=\u0026#34;GraffleExport\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;302,133,273,117,244,133,266,149,261,157,274,150,302,133\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeLisp\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;359,93,330,77,302,93,324,109,318,117,332,110,359,93\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeFSharp\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;186,93,157,77,129,93,151,109,145,117,159,110,186,93\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeAppleScript\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;244,93,215,77,186,93,208,109,203,117,217,110,244,93\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeCSharp\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;244,174,215,157,186,174,208,189,203,197,217,190,244,174\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodePython\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;244,133,215,117,186,133,208,149,203,157,217,150,244,133\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeJavaScript\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;186,174,157,157,129,174,151,189,145,197,159,190,186,174\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodePhp\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;302,93,273,77,244,93,266,109,261,117,274,110,302,93\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeErlang\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;359,133,330,117,302,133,324,149,318,157,332,150,359,133\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodePerl\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;302,174,273,157,244,174,266,189,261,197,274,190,302,174\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeRuby\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;359,174,330,157,302,174,324,189,318,197,332,190,359,174\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeSmalltalk\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;186,133,157,117,129,133,151,149,145,157,159,150,186,133\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/wiki/OpenHandleCodeJava\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;255,237,255,260,266,260,244,272,222,260,233,260,233,237,222,237,244,225,266,237,255,237\u0026#34; href=\u0026#34;http://www.ietf.org/rfc/rfc2616.txt\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;174,527,174,495,196,491,217,495,217,527,196,531,174,527\u0026#34; href=\u0026#34;http://0-hdl-handle-net.libus.csd.mu.edu/\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;270,527,270,495,292,491,314,495,314,527,292,531,270,527\u0026#34; href=\u0026#34;http://0-hdl-handle-net.libus.csd.mu.edu/\u0026#34;\u0026gt; \u0026lt;area shape=poly 
coords=\u0026#34;152,268,210,268,210,300,152,304,152,268\u0026#34; href=\u0026#34;http://0-nascent-nature-com.libus.csd.mu.edu/openhandle/handle?id=10100/nature\u0026amp;#038;mimetype=text/plain\u0026amp;#038;format=rdf\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;201,307,258,307,258,339,201,343,201,307\u0026#34; href=\u0026#34;http://0-nascent-nature-com.libus.csd.mu.edu/openhandle/handle?id=10100/nature\u0026amp;#038;mimetype=text/plain\u0026amp;#038;format=n3\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;267,297,325,297,325,329,267,333,267,297\u0026#34; href=\u0026#34;http://0-nascent-nature-com.libus.csd.mu.edu/openhandle/handle?id=10100/nature\u0026amp;#038;mimetype=text/plain\u0026amp;#038;format=json\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;255,426,255,450,266,450,244,461,222,450,233,450,233,426,222,426,244,414,266,426,255,426\u0026#34; href=\u0026#34;http://www.ietf.org/rfc/rfc3651.txt\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;262,355,262,388,226,388,226,355,262,355\u0026#34; href=\u0026#34;http://0-nascent-nature-com.libus.csd.mu.edu/openhandle/handle?id=10100/nature\u0026amp;#038;mimetype=text/plain\u0026amp;#038;format=rdf\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;277,223,292,208,320,208,344,220,330,235,301,235,277,223\u0026#34; href=\u0026#34;http://0-nascent-nature-com.libus.csd.mu.edu/openhandle/handle?id=10.1000/1\u0026amp;#038;mimetype=text/plain\u0026amp;#038;format=json\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;148,244,162,229,191,229,215,241,201,256,172,256,148,244\u0026#34; href=\u0026#34;http://0-nascent-nature-com.libus.csd.mu.edu/openhandle/handle?id=10100/nature\u0026amp;#038;mimetype=text/plain\u0026amp;#038;format=json\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;222,507,222,475,244,471,266,475,266,507,244,511,222,507\u0026#34; href=\u0026#34;http://0-hdl-handle-net.libus.csd.mu.edu/\u0026#34;\u0026gt; \u0026lt;area shape=poly coords=\u0026#34;49,215,89,102,191,62,305,79,401,143,393,282,315,350,198,363,105,317,49,215,120,481,140,501,178,501,199,474,186,442,152,436,122,447,120,481,49,215,57,569,79,571,89,547,71,534,53,548,57,569,49,215,57,569\u0026#34; href=\u0026#34;http://code.google.com/p/openhandle/\u0026#34;\u0026gt; \u0026lt;/map\u0026gt; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/nih-mandate-and-pmcids/", "title": "NIH Mandate and PMCIDs", "subtitle":"", "rank": 1, "lastmod": "2008-04-15", "lastmod_ts": 1208217600, "section": "Blog", "tags": [], "description": "The NIH Public Access Policy says “When citing their NIH-funded articles in NIH applications, proposals or progress reports, authors must include the PubMed Central reference number for each article” and the FAQ provides some examples of this:\nExamples:\nVarmus H, Klausner R, Zerhouni E, Acharya T, Daar A, Singer P. 2003. PUBLIC HEALTH: Grand Challenges in Global Health. Science 302(5644): 398-399. PMCID: 243493\nZerhouni, EA. (2003) A New Vision for the National Institutes of Health.", "content": "The NIH Public Access Policy says “When citing their NIH-funded articles in NIH applications, proposals or progress reports, authors must include the PubMed Central reference number for each article” and the FAQ provides some examples of this:\nExamples:\nVarmus H, Klausner R, Zerhouni E, Acharya T, Daar A, Singer P. 2003. PUBLIC HEALTH: Grand Challenges in Global Health. Science 302(5644): 398-399. PMCID: 243493\nZerhouni, EA. 
(2003) A New Vision for the National Institutes of Health. Journal of Biomedicine and Biotechnology (3), 159-160. PMCID: 400215\nIt’s interesting to note that on PMC itself both the PMCID and the DOI are displayed - but the DOI isn’t linked. Two things occur to me - 1) should Crossref map DOIs to PMCIDs and vice versa and make PMCIDs available in its query interfaces and 2) shouldn’t publishers ask that the PMC copy of the article link back to the publisher version? It would be very easy with the DOI.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/word-add-in-for-scholarly-authoring-and-publishing/", "title": "Word Add-in for Scholarly Authoring and Publishing", "subtitle":"", "rank": 1, "lastmod": "2008-03-26", "lastmod_ts": 1206489600, "section": "Blog", "tags": [], "description": "Last week Pablo Fernicola sent me email announcing that Microsoft have finally released a beta of their Word plugin for marking-up manuscripts with the NLM DTD. I say “finally” because we’ve known this was on the way and have been pretty excited to see it. We once even hoped that MS might be able to show the plug-in at the ALPSP session on the NLM DTD, but we couldn’t quite manage it.", "content": "Last week Pablo Fernicola sent me email announcing that Microsoft have finally released a beta of their Word plugin for marking-up manuscripts with the NLM DTD. I say “finally” because we’ve known this was on the way and have been pretty excited to see it. We once even hoped that MS might be able to show the plug-in at the ALPSP session on the NLM DTD, but we couldn’t quite manage it.\nThe plugin is targeted at production/editorial staff, but, of course, it will be interesting to see if any of this work can be pushed back to the author. I won’t hold my breath on the latter score, but it will be fun to watch.\nOne thing I would note is that the NLM DTD can also be used in the humanities and social sciences, so, frankly, I think they should market it more broadly.\nAnyway- the plugin can be downloaded from the Microsoft site.\nAnd Pablo has set up a blog where testers can discuss the add-in.\nAnd there is also an entry for the project on the Microsoft Research site (an interesting place to peruse, if you have a moment).\nCongratulations to Pablo and his team.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/openhandle-google-code-project/", "title": "OpenHandle: Google Code Project", "subtitle":"", "rank": 1, "lastmod": "2008-03-07", "lastmod_ts": 1204848000, "section": "Blog", "tags": [], "description": "Just announced on the handle-info and semantic-web mailing lists is the OpenHandle project on Google Code. This may be of some interest to the DOI community as it allows the handle record underpinning the DOI to be exposed in various common text-based serializations to make the data stored within the records more accessible to Web applications.
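To make the OpenHandle idea concrete, here is a small sketch of the kind of client it enables. The nascent.nature.com endpoint and the id/mimetype/format parameters are taken from the example links in the imagemap above; the service was experimental and is no longer online, so treat this purely as an illustration rather than a supported Crossref or CNRI interface.

```python
# Historical / illustrative sketch of an OpenHandle client. The endpoint and
# its id/mimetype/format parameters come from the example links above and the
# service has since been retired.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

OPENHANDLE_ENDPOINT = "http://nascent.nature.com/openhandle/handle"


def get_handle_record(handle, fmt="json"):
    """Fetch the handle record for `handle` in the requested serialization
    (json, rdf or n3, per the examples above)."""
    query = urlencode(
        {"id": handle, "mimetype": "text/plain", "format": fmt}, safe="/"
    )
    with urlopen(f"{OPENHANDLE_ENDPOINT}?{query}") as resp:
        body = resp.read().decode("utf-8")
    return json.loads(body) if fmt == "json" else body


if __name__ == "__main__":
    # 10100/nature is the example handle used in the diagram links.
    record = get_handle_record("10100/nature")
    print(json.dumps(record, indent=2))
```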
Initial serializations include RDF/XML, RDF/N3, and JSON.\nWe’d be very interested in receiving feedback on this project - either on this blog or over on the project wiki.", "content": "Just announced on the handle-info and semantic-web mailing lists is the OpenHandle project on Google Code. This may be of some interest to the DOI community as it allows the handle record underpinning the DOI to be exposed in various common text-based serializations to make the data stored within the records more accessible to Web applications. Initial serializations include RDF/XML, RDF/N3, and JSON.\nWe’d be very interested in receiving feedback on this project - either on this blog or over on the project wiki.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/object-reuse-and-exchange/", "title": "Object Reuse and Exchange", "subtitle":"", "rank": 1, "lastmod": "2008-03-05", "lastmod_ts": 1204675200, "section": "Blog", "tags": [], "description": "On March 3rd the Open Archives Initiative held a roll out meeting of the first alpha release of the ORE specification (ives.org/ore/) . According to Herbert Van de Sompel a beta release is planned for late March / early April and a 1.0 release targeted for September. The presentations focused on the aggregation concepts behind ORE and described an ATOM based implementation. ORE is the second project from the OAI but unlike its sibling PMH it is not exclusively a repository technology.", "content": "On March 3rd the Open Archives Initiative held a roll out meeting of the first alpha release of the ORE specification (http://www.openarchives.org/ore/) . According to Herbert Van de Sompel a beta release is planned for late March / early April and a 1.0 release targeted for September. The presentations focused on the aggregation concepts behind ORE and described an ATOM based implementation. ORE is the second project from the OAI but unlike its sibling PMH it is not exclusively a repository technology. ORE provides machine readable manifests for related Web resources in any context. For instance, DOI landing pages (aka splash page) are human readable resources containing links to any number of resources related to the work identified by the DOI. An ORE instance for the DOI (called a Rem or resource map) would describe the same set of resources in a machine friendly format. A standardized form of redirection understood by the DOI proxy would yield the Rem instead of normal page e.g.\nhttp://dx.doi.org/10.5555/abcd?type=rem which could be useful for crawlers.\nA second roll out meeting is planned during the Sparc-08 workshops in early April.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/iso/cd-26324-doi/", "title": "ISO/CD 26324 (DOI)", "subtitle":"", "rank": 1, "lastmod": "2008-02-22", "lastmod_ts": 1203638400, "section": "Blog", "tags": [], "description": "Following on from my previous post about prism:doi I didn’t mention, or reference, the ongoing ISO work on DOI, Indeed I hadn’t realized that the DOI site now has a status update on the ISO work:\n_“The DOI® System is currently being standardised through ISO. It is expected that the process will be finalised during 2008. 
In December 2007 the Working Group for this project approved a final draft as a Committee Draft (standard for voting) which is now being processed by ISO.", "content": "Following on from my previous post about prism:doi I didn’t mention, or reference, the ongoing ISO work on DOI, Indeed I hadn’t realized that the DOI site now has a status update on the ISO work:\n_“The DOI® System is currently being standardised through ISO. It is expected that the process will be finalised during 2008. In December 2007 the Working Group for this project approved a final draft as a Committee Draft (standard for voting) which is now being processed by ISO. Copies of the Committee Draft (SC9N475) and an accompanying explanatory document detailing issues dealt with during the standards process (SC9N474) are provided here for information.\nCommittee Draft 26324 is subject to ISO’s copyright and is for information only to those interested in the project; it may not be re-distributed. This is currently undergoing the formal ISO voting process; the deadline for comments on CD 26324 from TC46/SC9’s national bodies is April 25, 2008: please contact your national member of ISO TC46/SC9 if you would like it contribute to comments on this draft standard. Other documents for the ISO DOI Working Group are available on a DOI Project Register.”_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prismdoi/", "title": "prism:doi", "subtitle":"", "rank": 1, "lastmod": "2008-02-22", "lastmod_ts": 1203638400, "section": "Blog", "tags": [], "description": "The new PRISM spec (v. 2.0) was published this week, see the press release. (Downloads are available here.)\nThis is a significant development as there is support for XMP profiles, to complement the existing XML and RDF/XML profiles. And, as PRISM is one of the major vocabularies being used by publishers, I would urge you all to go take a look at it and to consider upgrading your applications to using it.", "content": "The new PRISM spec (v. 2.0) was published this week, see the press release. (Downloads are available here.)\nThis is a significant development as there is support for XMP profiles, to complement the existing XML and RDF/XML profiles. And, as PRISM is one of the major vocabularies being used by publishers, I would urge you all to go take a look at it and to consider upgrading your applications to using it.\nOne caveat. There’s a new element \u0026lt;tt\u0026gt;prism:doi\u0026lt;/tt\u0026gt; (PRISM Namespace, 4.2.13) which sits alongside another new element \u0026lt;tt\u0026gt;prism:url\u0026lt;/tt\u0026gt; (PRISM Namespace, 4.2.55). Unfortunately the \u0026lt;tt\u0026gt;prism:doi\u0026lt;/tt\u0026gt; element is shown to take DOI proxy URL as its value - and not the DOI string itself, e.g.\nModel #1\n\u0026lt;prism:doi rdf:resource=”http://0-dx-doi-org.libus.csd.mu.edu/10.1030/03054”/\u0026gt; Model #2\n\u0026lt;prism:doi\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.1030/03054\u0026lt;/prism:doi\u0026gt; This seems to me to just plain wrong. The DOI in itself is not a URL (or URI) - although can, and should, be represented in URI form when used in Web contexts (i.e. pretty much most of the time). As a literal it should be used in its native form as specified in ANSI/NISO Z39.84 - 2005 Syntax for the Digital Object Identifier. This would only satisfy Model #2 above.\nTo satisfy Model #1 above a URI form for DOI would be required. And this is not the service URI denoted by the proxy. 
It would either have to be:\nModel #1 - Registered URI Form\n\u0026lt;prism:doi rdf:resource=”info:doi/10.1030/03054”/\u0026gt; * Model #1 - Unregistered URI Form\n\u0026lt;prism:doi rdf:resource=”doi:10.1030/03054”/\u0026gt; Any comments? Some guidelines from Crossref would be useful - although maybe further discussion is required. It is, of course, a constant bugbear that “doi:” remains an unregistered URI scheme.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/added-xml-format-parameter-to-crossrefs-openurl-resolver/", "title": "Added XML format parameter to Crossref’s OpenURL resolver", "subtitle":"", "rank": 1, "lastmod": "2008-02-13", "lastmod_ts": 1202860800, "section": "Blog", "tags": [], "description": "From the beginning our OpenURL resolver has had a non standard feature of returning metadata in response to a request instead of redirecting to the referrent. This feature returned one of our older XML formats which is a bit limited as to the fields it contains.\nSometime after our resolver was deployed we introduced a more verbose XML format for DOI metadata called ‘UNIXREF”. This was always available to regular queries against the Crossref system but was never introduced to the OpenURL resolver (for no particular reason).", "content": "From the beginning our OpenURL resolver has had a non standard feature of returning metadata in response to a request instead of redirecting to the referrent. This feature returned one of our older XML formats which is a bit limited as to the fields it contains.\nSometime after our resolver was deployed we introduced a more verbose XML format for DOI metadata called ‘UNIXREF”. This was always available to regular queries against the Crossref system but was never introduced to the OpenURL resolver (for no particular reason).\nWe’ve since learned that some user’s are relying on the OpenURL’s metadata feature to build proper references in situations where they have a DOI and that the older XML format is insufficient. Therefor I’ve added a ‘format’ parameter to our OpenURL resolver which allows one to request the more verbose UNIXREF. (see www.crossref.org/openurl)\nAs always please feel free to contact us regarding new features or changes to existing features that might be helpful.\nRegards,\nChuck\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-citation-plugin-for-wordpress/", "title": "Crossref Citation Plugin (for WordPress)", "subtitle":"", "rank": 1, "lastmod": "2008-02-09", "lastmod_ts": 1202515200, "section": "Blog", "tags": [], "description": "OK, after a number of delays due to everything from indexing slowness to router problems, I’m happy to say that the first public beta of our WordPress citation plugin is available for download via SourceForge. A Movable Type version is in the works.\nAnd congratulations to Trey at OpenHelix who became laudably impatient, found the SourceForge entry for the plugin back on February 8th and seems to have been testing it since. He has a nice description of how it works (along with screenshots), so I won’t repeat the effort here.\nHaving said that, I do include the text of the README after the jump. Please have a look at it before you install, because it might save you some mystification.\n", "content": "OK, after a number of delays due to everything from indexing slowness to router problems, I’m happy to say that the first public beta of our WordPress citation plugin is available for download via SourceForge. 
A Movable Type version is in the works.\nAnd congratulations to Trey at OpenHelix who became laudably impatient, found the SourceForge entry for the plugin back on February 8th and seems to have been testing it since. He has a nice description of how it works (along with screenshots), so I won’t repeat the effort here.\nHaving said that, I do include the text of the README after the jump. Please have a look at it before you install, because it might save you some mystification.\nDescription A WordPress plugin that allows you to search Crossref metadata using citations or partial citations. When you find the reference that you want, insert the formatted and DOI-linked citation into your blog posting along with supporting COINs metadata. The plugin supports both a long citation format and a short (op. cit.) format.\nWarnings, Caveats and Weasel Words Please note the following about this plugin:\nWe are releasing this as a test. It is running on R\u0026amp;D equipment in a non-production environment and so it may disappear without warning or perform erratically. If it isn’t working for some reason, come back later and try again. If it seems to be broken for a prolonged period of time, then please report the problem to us via sourceforge. There is currently a 20 item limit on the number of hits returned per query. This might seem arbitrary and stingy, but please remember- we are not trying to create a fully blown search engine- we’re just trying to create a citation lookup service. Of course, if, after looking at how the service is used, it looks like we need to up this limit, we will. If you look in the plugin options (or at the code), you will see that the system includes an API key. At the moment we have no restrictions on use of this service, but have included this in case we need to protect the system from abuse. The bulk of the functionality we have developed is actually at the back-end. This plugin is just a lightweight interface to that back-end. You can examine the guts of the plugin in order to easily figure out how to create similar functionality for your favorite blog platform, wiki, etc. If you do create something, please let us know. We’d love to see what people are building. We are continuing to experiment with the metadata search function in order to increase its accuracy and flexibility. Again, this might result in seemingly inconsistent behavior. Did we mention that this is a test? Please note that this API is not meant for bulk harvesting of Crossref metadata. If you need such facilities, then please look at our web site for information about our metadata services. The data currently behind the plugin is *just* a December 2007 snapshot of our our complete journal article metadata. We have not added books or proceedings yet. We will do so soon and we will start updating the metadata weekly. We welcome your ideas for tools that we can provide to help researchers. Please, please, please send comments, requests, queries and ideas to us at:\ncitation-plugin@crossref.org\n", "headings": ["Description","Warnings, Caveats and Weasel Words"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/claddier-final-report/", "title": "CLADDIER Final Report", "subtitle":"", "rank": 1, "lastmod": "2008-01-15", "lastmod_ts": 1200355200, "section": "Blog", "tags": [], "description": "I just ran across the final report from the CLADDIER project. CLADDIER comes from the JISC and stands for “CITATION, LOCATION, And DEPOSITION IN DISCIPLINE \u0026amp; INSTITUTIONAL REPOSITORIES”. 
I suspect JISC has an entire department dedicated to creating impossible acronyms (the JISC Acronym Preparation Executive?)\nAnyhoo- the report describes a distributed citation location and updating service based on the linkback mechanism that is widely used in the blogging community.\nI think this is an interesting approach and is one that I talked about briefly (PDF) at the UKSG’s Measure for Measure seminar last June.", "content": "I just ran across the final report from the CLADDIER project. CLADDIER comes from the JISC and stands for “CITATION, LOCATION, And DEPOSITION IN DISCIPLINE \u0026amp; INSTITUTIONAL REPOSITORIES”. I suspect JISC has an entire department dedicated to creating impossible acronyms (the JISC Acronym Preparation Executive?)\nAnyhoo- the report describes a distributed citation location and updating service based on the linkback mechanism that is widely used in the blogging community.\nI think this is an interesting approach and is one that I talked about briefly (PDF) at the UKSG’s Measure for Measure seminar last June. I think that, like most proponents of p2p distributed architectures, they massively underestimate the problem of trust in the network. They fully knowledge the problem of linkback spam, but their hand-wavy-solution(tm) of using whitelists just means the system effectively becomes semi-centralized again (you have to have trusted keepers of the whitelists).\nAnd of course I was mildly exasperated by the report’s characterization of one of the perceived “disadvantages” of the Crossref architectural model being a :\n“Centralised service hosting a large persistent store – with the need for a (possibly commercial) business model to justify providing the service.”\nThough DOI registries like Bowker and Nielsen Bookdata are commercial, Crossref, the organization that services the industry that the JISC is concerned with, is *not* a commercial service.\nAlso if you replaced the phrase “justify providing” with the word “sustain”, the sentence wouldn’t sound like such a “disadvantage.”\nBut aside from these quibbles, the report makes an interesting (if technical) read.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/bisg-paper-on-identifying-digital-book-content/", "title": "BISG Paper on Identifying Digital Book Content", "subtitle":"", "rank": 1, "lastmod": "2008-01-14", "lastmod_ts": 1200268800, "section": "Blog", "tags": [], "description": "BISG and BIC have published a discussion paper called “The identification of digital book content” - https://web.archive.org/web/20090920075334/http://www.bisg.org/docs/DigitalIdentifiers_07Jan08.pdf. The paper discusses ISBN, ISTC and DOI amongst other things and makes a series of recommendations which basically say to consider applying DOI, ISBN and ISTC to digital book content. The paper highlights in a positive way that DOI and ISBN are different but can work together (the idea of the “actionable ISBN” and aiding discovery of content).", "content": "BISG and BIC have published a discussion paper called “The identification of digital book content” - https://web.archive.org/web/20090920075334/http://www.bisg.org/docs/DigitalIdentifiers_07Jan08.pdf. The paper discusses ISBN, ISTC and DOI amongst other things and makes a series of recommendations which basically say to consider applying DOI, ISBN and ISTC to digital book content. The paper highlights in a positive way that DOI and ISBN are different but can work together (the idea of the “actionable ISBN” and aiding discovery of content). 
However, it doesn’t go into much depth on any of the issues or really explain how all these identifiers would work together and the critical role that metadata plays.\nNevertheless it’s great that the paper has been put forward as a discussion document - Crossref plans to respond and be part of the ongoing discussion in this area.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2007/", "title": "2007", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/on-google-knol/", "title": "On Google Knol", "subtitle":"", "rank": 1, "lastmod": "2007-12-14", "lastmod_ts": 1197590400, "section": "Blog", "tags": [], "description": "The recently discussed (announced?) Google Knol project could make Google Scholar look like a tiny blip in the the scholarly publishing landscape.\nI love the comment an authority:\n“Books have authors’ names right on the cover, news articles have bylines, scientific articles always have authors — but somehow the web evolved without a strong standard to keep authors names highlighted. We believe that knowing who wrote what will significantly help users make better use of web content.", "content": "The recently discussed (announced?) Google Knol project could make Google Scholar look like a tiny blip in the the scholarly publishing landscape.\nI love the comment an authority:\n“Books have authors’ names right on the cover, news articles have bylines, scientific articles always have authors — but somehow the web evolved without a strong standard to keep authors names highlighted. We believe that knowing who wrote what will significantly help users make better use of web content.”\nAnd so I suppose this means they are assigning author identifiers….\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/zotero-and-the-ia/", "title": "Zotero and the IA", "subtitle":"", "rank": 1, "lastmod": "2007-12-14", "lastmod_ts": 1197590400, "section": "Blog", "tags": [], "description": "Dan Cohen at Zotero reports (Zotero and the Internet Archive Join Forces) on a very interesting tie up that will allow researchers using Zotero to deposit content in the Internet Archive and have OCR done on scanned material for free under a two year Mellon grant. Each piece of content will be given a “permanent URI that includes a time and date stamp in addition to the URL” ( would Handle or DOI add value here?", "content": "Dan Cohen at Zotero reports (Zotero and the Internet Archive Join Forces) on a very interesting tie up that will allow researchers using Zotero to deposit content in the Internet Archive and have OCR done on scanned material for free under a two year Mellon grant. Each piece of content will be given a “permanent URI that includes a time and date stamp in addition to the URL” ( would Handle or DOI add value here?) 
and be part of Zotero Commons (things can also be kept private within a group).\nZotero Commons is related to but different from Nature Precedings and WebCite in that it’s intended focus is on public domain stuff on researchers hard drives rather than someone else’s material or website that is cited (WebCite) or preprints, datasets, technical reports that are given at least an initial screening (Nature Precedings).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/conference/", "title": "Conference", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/stm-innovations-2007/", "title": "STM Innovations 2007", "subtitle":"", "rank": 1, "lastmod": "2007-12-10", "lastmod_ts": 1197244800, "section": "Blog", "tags": [], "description": "After a busy Online Information conference, Friday was the STM Innovations Meeting in London (presentations not online yet). There was a very nice selection of tea which helped get the morning off to a good start.\nPatricia Seybold kicked off with a review of Web 2.0 that mentioned lots of sites and some good case studies:\nAlexander Street Press (https://alexanderstreet.com/) - user tags combined with a taxonomy.\nSlideshare (http://www.slideshare.net) - share presentations\nThreadless (http://www.threadless.com/) - design and vote on t-shirts\nThe most interesting parts of the talk were the case studies of how National Instruments and Staples have built a vibrant community of customers. Staples invited top purchasers on the their site to create product categories and sales went up 30% and now they use the categorization in physical stores and customer reviews from the web are used in stores.\n", "content": "After a busy Online Information conference, Friday was the STM Innovations Meeting in London (presentations not online yet). There was a very nice selection of tea which helped get the morning off to a good start.\nPatricia Seybold kicked off with a review of Web 2.0 that mentioned lots of sites and some good case studies:\nAlexander Street Press (https://alexanderstreet.com/) - user tags combined with a taxonomy.\nSlideshare (http://www.slideshare.net) - share presentations\nThreadless (http://www.threadless.com/) - design and vote on t-shirts\nThe most interesting parts of the talk were the case studies of how National Instruments and Staples have built a vibrant community of customers. Staples invited top purchasers on the their site to create product categories and sales went up 30% and now they use the categorization in physical stores and customer reviews from the web are used in stores.\nNI has a whole suite of tools that allow customers to build products and get their jobs done (using NI products and services).\nFive steps to Web 2.0 success –\nFocus on findability\nSolicit sutomers’ reviews, ratings and opinions\nEmpower users to classify and organize content\nNurture community, social networks, communities of practice\nGet lead users to strut their stuff, using your IP to build their IP\nThe most useful part came in the questions when Geoffrey Bilder asked about “astroturfing” - this is a problem for Web 2.0. 
Interestingly, the NI and Staples examples are closed communities and other sites have to have moderators to try and track this stuff down. Often you don’t hear about these types of issues amid the web 2.0 boosterism.\nJoris van Rossum gave an very good overview of Scirus’ wiki-based Topic Pages (https://web.archive.org/web/20071231210906/http://topics.scirus.com/). It’s interesting to see the creative way Elsevier is experimenting. Joris said that it is Elsevier’s vision that wiki forms a promising topic-centered platform for informal collaboration and the sharing of highly relevant info within STM in addition to the traditional peer-reviewed system. There is a critical issuem though - will researchers go to publishers for this type of thing or will they self-organize using inexpensive tools? The danger here is that publishers will do their own thing leading to a replay of the portal craze in the late 90s.\nGeoffrey Bilder gave a very good talk entitled “Anonymous Bosh: Attribution in a Mashed-up World” about trust and CrossReg (contributor ID).\nSimon Willison gave a very good explanation and update on OpenID. Some resources for more information - http://openid.net\nhttp://www.openidenabled.com\nhttps://web.archive.org/web/20070715235636/http://simonwillison.net/tags/openid/\nMark Bide wrapped things up with an update on ACAP (https://web.archive.org/web/20071019045302/http://www.the-acap.org/)- “an evolving, open, royalty-free standard for expression of permissions in machine readable form” - that was launched in November. Will the search engines pay any attention?\nOverall, the day was very thought provoking.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/search-web-services-document/", "title": "Search Web Services Document", "subtitle":"", "rank": 1, "lastmod": "2007-11-09", "lastmod_ts": 1194566400, "section": "Blog", "tags": [], "description": "The OASIS Search Web Services TC has just put out the following document for public review (Nov 7- Dec 7, 2007):\n_Search Web Services v1.0 Discussion Document\nEditable Source: http://docs.oasis-open.org/search-ws/v1.0/DiscussionDocument.doc PDF: http://docs.oasis-open.org/search-ws/v1.0/DiscussionDocument.pdf HTML: http://docs.oasis-open.org/search-ws/v1.0/DiscussionDocument.html From the OASIS announcement:\n“This document: “Search Web Services Version 1.0 - Discussion Document - 2 November 2007”, was prepared by the OASIS Search Web Services TC as a strawman proposal, for public review, intended to generate discussion and interest.", "content": "The OASIS Search Web Services TC has just put out the following document for public review (Nov 7- Dec 7, 2007):\n_Search Web Services v1.0 Discussion Document\nEditable Source: http://docs.oasis-open.org/search-ws/v1.0/DiscussionDocument.doc PDF: http://docs.oasis-open.org/search-ws/v1.0/DiscussionDocument.pdf HTML: http://docs.oasis-open.org/search-ws/v1.0/DiscussionDocument.html From the OASIS announcement:\n“This document: “Search Web Services Version 1.0 - Discussion Document - 2 November 2007”, was prepared by the OASIS Search Web Services TC as a strawman proposal, for public review, intended to generate discussion and interest. It has no official status; it is not a Committee Draft. The specification is based on the SRU (Search Retrieve via URL) specification which can be found at http://www.loc.gov/standards/sru/. It is expected that this standard, when published, will deviate from SRU. How much it will deviate cannot be predicted at this time. 
The fact that the SRU spec is used as a starting point for development should not be cause for concern that this might be an effort to rubberstamp or fasttrack SRU. The committee hopes to preserve the useful features of SRU, eliminate those that are not considered useful, and add features that are not in SRU but are considered useful. “\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dc-in-xhtml-meta/links/", "title": "DC in (X)HTML Meta/Links", "subtitle":"", "rank": 1, "lastmod": "2007-11-06", "lastmod_ts": 1194307200, "section": "Blog", "tags": [], "description": "This message posted out yesterday on the dc-general list (with following extract) may be of interest:\n_“Public Comment on encoding specifications for Dublin Core metadata in HTML and XHTML\n2007-11-05, Public Comment is being held from 5 November through 3 December 2007 on the DCMI Proposed Recommendation, “Expressing Dublin Core metadata using HTML/XHTML meta and link elements” \u0026laquo;http://dublincore.org/documents/2007/11/05/dc-html/\u0026raquo; by Pete Johnston and Andy Powell. Interested members of the public are invited to post comments to the DC-ARCHITECTURE mailing list \u0026laquo;http://www.", "content": "This message posted out yesterday on the dc-general list (with following extract) may be of interest:\n_“Public Comment on encoding specifications for Dublin Core metadata in HTML and XHTML\n2007-11-05, Public Comment is being held from 5 November through 3 December 2007 on the DCMI Proposed Recommendation, “Expressing Dublin Core metadata using HTML/XHTML meta and link elements” \u0026laquo;http://dublincore.org/documents/2007/11/05/dc-html/\u0026raquo; by Pete Johnston and Andy Powell. Interested members of the public are invited to post comments to the DC-ARCHITECTURE mailing list \u0026laquo;http://www.jiscmail.ac.uk/lists/dc-architecture.html\u0026raquo; , including “[DC-HTML Public Comment]” in the subject line. Depending on comments received, the specification may be finalized after the comment period as a DCMI Recommendation.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/stix-fonts-in-beta/", "title": "STIX Fonts in Beta", "subtitle":"", "rank": 1, "lastmod": "2007-11-06", "lastmod_ts": 1194307200, "section": "Blog", "tags": [], "description": "Well, Howard already blogged on Nascent last week about the STIX fonts (Scientific and Technical Information Exchange) being launched and now freely available in beta. And today the STM Association also have blogged this milestone mark. So, just for the record, I’m noting here on CrossTech those links for easy retrieval. As Howard says:\n“I recommend all publishers download the fonts from the STIX web site at www.stixfonts.org today.”\n(And for those who want to see more of Howard, he can be found in interview here on the SIIA Executive FaceTime Webcast Series.", "content": "Well, Howard already blogged on Nascent last week about the STIX fonts (Scientific and Technical Information Exchange) being launched and now freely available in beta. And today the STM Association also have blogged this milestone mark. So, just for the record, I’m noting here on CrossTech those links for easy retrieval. As Howard says:\n“I recommend all publishers download the fonts from the STIX web site at www.stixfonts.org today.”\n(And for those who want to see more of Howard, he can be found in interview here on the SIIA Executive FaceTime Webcast Series. 
🙂\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/dcmi-identifiers-community/", "title": "DCMI Identifiers Community", "subtitle":"", "rank": 1, "lastmod": "2007-10-17", "lastmod_ts": 1192579200, "section": "Blog", "tags": [], "description": "Another DCMI invitation. And a list. Lovely.\nSee this message (copied below) from Douglas Campbell, National Library of New Zealand, to the dc-general mailing list.\n(Continues)\n", "content": "Another DCMI invitation. And a list. Lovely.\nSee this message (copied below) from Douglas Campbell, National Library of New Zealand, to the dc-general mailing list.\n(Continues)\n_“Hi all,\nI would like to alert members of this list to the new DCMI Identifiers Community established at the recent Dublin Core Metadata Initiative (DCMI) Advisory Board meeting in Singapore. It is moderated by Douglas Campbell (National Library of New Zealand).\nThe community is a forum for individuals and organisations with an interest in the design and use of identifiers in metadata. It also serves as a liaison channel for those involved in identifier efforts in other domains.\nThere was a lot of interest in identifiers at the recent DCMI conference. Identifiers are fundamental to the Web and for managing digital content, but most of us don’t know where to begin in designing and assigning them. The level of confusion can be seen in the number of meetings and workshops held just about identifiers. DCMI is in a unique position to bring together the thinking (and doing) around identifiers from multiple domains.\nI would like to encourage you to share your identifier efforts and thinking amongst the DCMI community on our Identifiers wiki at:\nhttp://dublincore.org/identifierswiki\nYou can join the community by signing up to our JISCMAIL list, linked from our community homepage at:\nhttp://www.dublincore.org/groups/identifiers/\nor by going direct to jiscmail:\nhttp://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=dc-identifiers\u0026#038;A=1\nThanx,\nDouglas”_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/hybrid/", "title": "Hybrid", "subtitle":"", "rank": 1, "lastmod": "2007-10-17", "lastmod_ts": 1192579200, "section": "Blog", "tags": [], "description": "So, back on the old XMP tack. The simple vision from the XMP spec is that XMP packets are embedded in media files and transported along with them - and as such are relatively self-contained units, see Fig 1.\nFig. 1 - Media files with fully encapsulated descriptions.\nBut this is too simple. Some preliminary considerations lead us to to see why we might want to reference additional (i.e. external) sources of metadata from the original packet:\nPDFs PDFs are tightly structured and as such it can be difficult to write a new packet, or to update an existing packet. One solution proposed earlier is to embed a minimal packet which could then reference a more complete description in a standalone packet. (And in turn this standalone packet could reference additional sources of metadata.) Images While considerably simpler to write into web-delivery image formats (e.g. JPEG, GIF, PNG), it is the case that metadata pertinent to the image only is likely to be embedded. Also, of interest is the work from which the image is derived which is most likely to be presented externally to the image as a standalone document. (And in turn this standalone packet could reference additional sources of metadata.) (Continues)\n", "content": "So, back on the old XMP tack. 
The simple vision from the XMP spec is that XMP packets are embedded in media files and transported along with them - and as such are relatively self-contained units, see Fig 1.\nFig. 1 - Media files with fully encapsulated descriptions.\nBut this is too simple. Some preliminary considerations lead us to to see why we might want to reference additional (i.e. external) sources of metadata from the original packet:\nPDFs PDFs are tightly structured and as such it can be difficult to write a new packet, or to update an existing packet. One solution proposed earlier is to embed a minimal packet which could then reference a more complete description in a standalone packet. (And in turn this standalone packet could reference additional sources of metadata.) Images While considerably simpler to write into web-delivery image formats (e.g. JPEG, GIF, PNG), it is the case that metadata pertinent to the image only is likely to be embedded. Also, of interest is the work from which the image is derived which is most likely to be presented externally to the image as a standalone document. (And in turn this standalone packet could reference additional sources of metadata.) (Continues)\nThus the two cases - PDF documents and images - are not dissimilar. Fig. 2 shows a “wall-to-wall” XMP architecture whereby the standalone metadata documents for the work and for additional sources are expressed in XMP.\nFig. 2 - XMP “wall-to-wall” architecture.\nFig. 3 presents a variant on this theme whereby additional sources are presented as generic RDF/XML. (In the most general case only RDF need be assumed, the serialization being a matter of choice.)\nFig. 3 - XMP authority metadata with references to generic RDF/XML\nAnd finally, Fig. 4 shows the most extreme case whereby XMP is used merely to “bootstrap” RDF descriptions for media objects. The XMP is used to embed a minimal description into the media file with references to a fuller work description and to additional sources which are presented as generic RDF/XML. That is, the metadata descriptions use generic RDF/XML exclusively and only resort to the idiomatic RDF/XML employed by XMP for embedding descriptions into binary structures.\nFig. 4 - XMP “bootstrap” only - metadata descriptions proper are generic RDF/XML.\nIf I were to choose I might opt for the scenario presented in Fig. 3, but the scenarios in both Figs. 2 and 4 leave room for thought. Such a hybrid solution may be a means to bridge two different concerns:\nGeneric RDF/XML for unconstrained descriptions. Idiomatic RDF/XML (aka XMP) for embedding the head of a metadata trail. I’m not sure that I see the XMP spec loosening up any time soon to accommodate generic RDF/XML. Nor, likewise is XMP likely to be provided (or even tolerated) down the metadata trail. And the metadata is not going to be fully encapsulated within a media file. The media file will merely encapsulate the head of the metadata trail. 
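To make the Fig. 4 “bootstrap” scenario a little more concrete, here is a minimal sketch (in Python, simply to emit the packet) of the sort of thing being described: the embedded XMP carries only an identifier plus a pointer to a fuller external description. The rdfs:seeAlso property and the ?format=rdf proxy URL are hypothetical illustrations rather than any agreed convention, and whether a strict XMP processor would tolerate this generic RDF is exactly the open question raised above.

```python
# Sketch of the Fig. 4 "bootstrap" packet: the media file carries only a DOI
# plus a pointer to an external RDF/XML description. rdfs:seeAlso and the
# ?format=rdf proxy URL are hypothetical illustrations, not an agreed convention.
from textwrap import dedent

XMP_BOOTSTRAP_TEMPLATE = dedent("""\
    <?xpacket begin="\ufeff" id="W5M0MpCehiHzreSzNTczkc9d"?>
    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:dc="http://purl.org/dc/elements/1.1/"
               xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
        <rdf:Description rdf:about="">
          <dc:identifier>doi:{doi}</dc:identifier>
          <!-- head of the metadata trail: the fuller description lives elsewhere -->
          <rdfs:seeAlso rdf:resource="http://dx.doi.org/{doi}?format=rdf"/>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>
    <?xpacket end="w"?>
    """)


def bootstrap_packet(doi):
    """Return a minimal XMP packet that points at an external description."""
    return XMP_BOOTSTRAP_TEMPLATE.format(doi=doi)


if __name__ == "__main__":
    print(bootstrap_packet("10.1038/nature06925"))
```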
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/nlm-blog-citation-guidelines/", "title": "NLM Blog Citation Guidelines", "subtitle":"", "rank": 1, "lastmod": "2007-10-15", "lastmod_ts": 1192406400, "section": "Blog", "tags": [], "description": "I’ve just returned from Frankfurt Book fair and noticed that there has been some recent in the The NLM Style Guide for Authors, Editors and Publishers recommendations concerning citing blogs.\nWhich reminds me of an issue that has periodically been raised here at Crossref- should we be doing something to try and provide a service for reliably citing more ephemeral content such as blogs, wikis, etc.?\n", "content": "I’ve just returned from Frankfurt Book fair and noticed that there has been some recent in the The NLM Style Guide for Authors, Editors and Publishers recommendations concerning citing blogs.\nWhich reminds me of an issue that has periodically been raised here at Crossref- should we be doing something to try and provide a service for reliably citing more ephemeral content such as blogs, wikis, etc.?\nPersonally, I cringe when I see people include plain old URLs (POUs?) in citations. What’s the point? They are almost guaranteed to fail to resolve after a few years. In citing them, you are hardly helping to preserve the scholarly record. You might as well just record the metadata associated with the content.\nSo why don’t we simply allow individuals to assign DOIs to their content?\nAs Chuck Koscher says, “Crossref DOIs are only as persistent as Crossref staff.” Crossref depends on its ability to chase down and berate member publishers when they fail to update their DOI records. Its hard enough doing this with publishers, so just imagine what it would be like trying to chase down individuals. In short, it just wouldn’t scale.\nBut what if we provided a different service for more informal content? Recently we have been in talking with Gunther Eysenbach, the creator of the very cool WebCite service about whether Crossref could/should operate a citation caching service for ephemera.\nAs I said, I think WebCite is wonderful, but I do see a few problems with it in its current incarnation.\nThe first is that, the way it works now, it seems to effectively leech usage statistics away from the source of the content. If I have a blog entry that gets cited frequently, I certainly don’t want all the links (and their associated Google-juice) redirected away from my blog. As long as my blog is working, I want traffic coming to my copy of the content, not some cached copy of the content (gee- the same problem publishers face, no?). I would also, ideally, like that traffic to continue to come to to my blog if I move hosting providers, platforms (WordPress, Moveable Type) , blog conglomerates (Gawker, Weblogs, Inc.), etc.\nThe second issue I have with WebCite is simpler. 
I don’t really fancy having to actually recreate and run a web-caching infrastructure when there is already a formidable one in existence.\nSo what if we ran a service for individuals that worked like this:\nFor a fee, you can assign DOIs to your ephemeral, CC-licensed content.\nWhen you assign a DOI to an item of content (or update an existing DOI), we will immediately archive said content with the Internet Archive (who, incidentally, charges for this service)\nWe will direct those DOIs to your web site as long as you are both:\nPaying the fee\nUpdating your URLs to point to the correct content\nIf you fail in either “a” or “b”, we will then redirect said DOIs to the cached version of the content on the Internet Archive (after having warned you repeatedly via automated e-mail).\n(Note, as an aside, that we could in theory provide a similar dark-archive service for publishers with non free content using something like JStore as the archive)\nThis approach would help to ensure that a blogger’s version of content was always linked to as long it was available. It would also preserve the “persistence” of Crossref DOIs by making sure that we could always resolve the DOI even if we were not able to get the owner of said DOI to update it.\nSo back to the NLM guidelines… On the one hand, I’m delighted to see that the NLM has issued guidelines on citing blogs. It seems glaringly obvious that informal (and ephemeral) content such as blogs and wikis are increasingly becoming vital parts of the scholarly record. On the other hand, it also seems to me that recommending that somebody “cite” with a broken pointer (i.e. a URL) to content verges on tokenism. This isn’t the NLM’s fault- there just isn’t a reliable mechanism for citing informal content in a manner that ensures you can then retrieve and look at said content in the future.\nAnd this is no longer a problem confined to the Scholarly/Professional publishing space. As Jon Udell has occasionally pointed out, citation is increasingly an important currency for *any* professional writer on the web. It seems to me that a system for reliably citing blogs and wikis would benefit many communities. I could easily see commercial hosted Blog services (Blogger, WordPress) offering a “Cached-DOI” feature as a premium service to their clients.\nSo what do you think? What am I missing? is this something we should be looking at?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/opendocument-adds-rdf/", "title": "OpenDocument Adds RDF", "subtitle":"", "rank": 1, "lastmod": "2007-10-14", "lastmod_ts": 1192320000, "section": "Blog", "tags": [], "description": "Bruce D’Arcus left a comment here in which he linked to post of his: “OpenDocument’s New Metadata System“. Not everybody reads comments so I’m repeating it here. His post is worth reading on two counts:\nHe talks about the new metadata functionality for OpenDocument 1.2 which uses generic RDF. As he says: \u0026gt; _\u0026amp;#8220;Unlike Microsoft’s custom schema support, we provide this through the standard model of RDF. What this means is that implementors can provide a generic metadata API in their applications, based on an open standard, most likely just using off-the-shelf code libraries.", "content": "Bruce D’Arcus left a comment here in which he linked to post of his: “OpenDocument’s New Metadata System“. Not everybody reads comments so I’m repeating it here. 
His post is worth reading on two counts:\nHe talks about the new metadata functionality for OpenDocument 1.2 which uses generic RDF. As he says: \u0026gt; _\u0026amp;#8220;Unlike Microsoft’s custom schema support, we provide this through the standard model of RDF. What this means is that implementors can provide a generic metadata API in their applications, based on an open standard, most likely just using off-the-shelf code libraries.\u0026amp;#8221;_ This is great. It means that description is left up to the user rather than being restricted by any vendor limitation. (Ideally we would like to see the same for XMP. But Adobe is unlikely to budge because of the legacy code base and documents. It’s a wonder that Adobe still wants XMP to breathe.) * He cites a wonderful passage from Rob Weir of IBM (something which I had been considering to blog but too late now) about the changing shape of documents. Can only say, go read [Bruce’s post][2] and then [Rob’s post][3]. But anyway a spoiler here: \u0026gt; _\u0026amp;#8220;The concept of a document as being a single storage of data that lives in a single place, entire, self-contained and complete is nearing an end. A document is a stream, a thread in space and time, connected to other documents, containing other documents, contained in other documents, in multiple layers of meaning and in multiple dimensions.\u0026amp;#8221;_\u0026lt;/ol\u0026gt; I think the ODF initiative is fantastic and wish that Adobe could follow suit. However, I do still hold out something for XMP. After all, nobody else AFAICT is doing anything remotely similar for multimedia. Where’s the W3C and co. when you really need them? (Oh yeah, [faffing][4] about the new [Semantic Web logo][5]. 😉 ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/i-want-my-xmp/", "title": "I Want My XMP", "subtitle":"", "rank": 1, "lastmod": "2007-10-13", "lastmod_ts": 1192233600, "section": "Blog", "tags": [], "description": "Now, assuming XMP is a good idea - and I think on balance it is (as blogged earlier), why are we not seeing any metadata published in scholarly media files? The only drawbacks that occur to me are:\nHard to write - it’s too damn difficult, no tools support, etc. Hard to model - rigid, “simple” XMP data model, both complicates and constrains the RDF data model Well, I don’t really believe that 1) is too difficult to overcome. A little focus and ingenuity should do the trick. I do, however, think 2) is just a crazy straitjacket that Adobe is forcing us all to wear but if we have to live with that then so be it. Better in Bedlam than without. (RSS 1.0 wasn’t so much better but allowed us to do some useful things. And that came from the RDF community itself.) We could argue this till the cows come home but I don’t see any chance of any change any time soon.\n(Continues)\n", "content": "Now, assuming XMP is a good idea - and I think on balance it is (as blogged earlier), why are we not seeing any metadata published in scholarly media files? The only drawbacks that occur to me are:\nHard to write - it’s too damn difficult, no tools support, etc. Hard to model - rigid, “simple” XMP data model, both complicates and constrains the RDF data model Well, I don’t really believe that 1) is too difficult to overcome. A little focus and ingenuity should do the trick. I do, however, think 2) is just a crazy straitjacket that Adobe is forcing us all to wear but if we have to live with that then so be it. Better in Bedlam than without. 
(RSS 1.0 wasn’t so much better but allowed us to do some useful things. And that came from the RDF community itself.) We could argue this till the cows come home but I don’t see any chance of any change any time soon.\n(Continues)\nSo, putting the RDF issue aside for the moment (as if RDF didn’t have problems of its own - XML, URI, etc.) let’s just look at the options for writing the stuff. (Btw, I’m not referencing any tools or toolkits. This is just in the round.) There are various means of publishing metadata in XMP:\n**Sidecar** : XMP can be produced as standalone files - see [XMP Specification, (Sept. ’05)][3], p. 36. (These are called \u0026amp;#8220;sidecar\u0026amp;#8221; files if the file has the same name as the main document and is in the same directory.) The only things needed to produce these files are a text editor and a good grasp of the XMP serialization. A template will do for that. The main problem with a standalone file is that it does not travel with the media file and so risks being left behind. Worth a note here. Not standalone as such but the [Mars][4] format (the draft XML formalization for PDF) discloses its metadata in an independent XMP file \u0026amp;#8220;metadata.xml\u0026amp;#8221; under the \u0026amp;#8220;META-INF/\u0026amp;#8221; directory. For distribution the whole directory structure is packaged up as a zip file and so the XMP is embedded in a \u0026amp;#8220;.mars\u0026amp;#8221; file, but accessed directly from the zip file or from the unpackaged directory the XMP can be manipulated just like any other XML document. **Embedded** : This is the normal means of distributing XMP - embedded within the media file. Some graphics formats are essentially linear (JPEG, PNG, GIF) and it is relatively straightforward to add in an XMP packet. Other formats (PDF, TIFF) have internal cross-referencing and are more difficult to deal with. **Embedded + Sidecar** : One possible method for dealing with the difficulty of writing XMP is to note that some media (especially PDFs) already have embedded XMP packets. As noted earlier, much if not all of the metadata in these XMP packets will be workflow-related and thus dispensible for final-form products where authority work-related metadata is desired. These packets may, or may not, be writeable and thus include additional padding whitespace. Even for read-only packets there is much (if not all) that can be discarded and also sometimes unnecesary bulk (e.g. default namespace declarations which are never used). _The bottom line is that any legacy XMP packet may typically be 2-3K in size and, just as in transplanting a cell nucleus, the XMP packet innards can be deftly substituted with a minimal XMP packet content, say 1K in size, which would be guaranteed to fit with suitable padding._ A packet that size would be sufficient to provide at minimum for a DOI and for a reference to additional metadata, e.g. a more complete standalone XMP packet. The two forms can coexist. The third way option here allows embedding a minimal XMP packet into \u0026amp;#8220;difficult\u0026amp;#8221; packaging structures while pointing out to a fully-formed XMP packet. The \u0026amp;#8220;simple\u0026amp;#8221; packaging structures may both include a fully-formed XMP packet while also possibly referencing extended metadata sources as per my previous post [here][4]. 
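As a rough illustration of the embedded + sidecar option (again a sketch only, with a made-up DOI, filename and property selection), the transplanted minimal packet need carry little more than:
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<> dc:identifier "doi:10.9999/example.1" ;
   rdfs:seeAlso <article.xmp> .   # sidecar sitting alongside article.pdf
while the sidecar carries the fuller work description, e.g.:
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<> dc:identifier "doi:10.9999/example.1" ;
   dc:title "An Example Article" ;
   dc:creator "A. N. Author" ;
   dc:date "2007" .
Serialized as RDF/XML inside an XMP packet wrapper, the first description fits comfortably within the 1K budget mentioned above.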
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-for-the-record/", "title": "Metadata - For the Record", "subtitle":"", "rank": 1, "lastmod": "2007-10-13", "lastmod_ts": 1192233600, "section": "Blog", "tags": [], "description": "Interesting post from Gunar Penikis of Adobe entitled “Permanent Metadata” Oct. ’04). 1.\nHe talks about the the issues of embedding metadata in media and comes up with this:\n“It may be the case that metadata in the file evolves to become a “cache of convenience” with the authoritative information living on a web service. The web service model is designed to provide the authentication and permissions needed. The link between the two provided by unique IDs.", "content": "Interesting post from Gunar Penikis of Adobe entitled “Permanent Metadata” Oct. ’04). 1.\nHe talks about the the issues of embedding metadata in media and comes up with this:\n“It may be the case that metadata in the file evolves to become a “cache of convenience” with the authoritative information living on a web service. The web service model is designed to provide the authentication and permissions needed. The link between the two provided by unique IDs. In fact, unique IDs are already created by Adobe applications and stored in the XMP - that is what the XMP Media Management properties are all about.”\nAn intriguing idea. Of course, Gunar’s (and Adobe’s) preoccupations with metadata revolve mainly around document workflow whereas, at least as things stand currently, scholarly publisher concerns are mainly with the dissemination of media in final form. Hence some differences in thinking:\nSubject As just noted Adobe are more interested in workflow than in work. Scholarly articles are rich in descriptive metadata about the work itself and have a well-developed ctation model. Academic interest is in the intellectual content rather than the vehicle used to carry and preserve that content - the file format. Unique IDs Workflow IDs are UUIDs which identify specific instances and expressions, but do not identify the abstract work. UUIDs provide a unique identifier but there is no central registry for such identifiers, hence they cannot be “looked up”. Crossref publishers should be concerned to associate closely the DOI for the underlying work with a given media file. That’s the identifier that this community is actively promoting.\nRead/Write Because of the focus on workflow, the XMP specification recommends that XMP packets be “writeable”, that is that they be marked as “writeable” and that they include padding whitespace which can accommodate updates without changing packet size. Publishers distributing final form documents are more likely to want to distribute “read-only” metadata which is authoritative and which describes the work, rather than the document format and workflow. Of course, this should not preclude additional sources of metadata which may be added “by reference” rather than “by value”. That is, a pointer to a web page (or service) may be sufficient to relate additional publisher terms and user annotations instead of embedding them directly in the file for various reasons: a) file integrity, b) limiting growth of file size, c) term authority, d) dynamic production (in forward time), and e) multiple sources. 
Update Aug 2022: the blog post mentioned below was previously at blogs.adobe.com/gunar/2007/10/permanent_metadata.html but is no longer live.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/datanet/", "title": "DataNet", "subtitle":"", "rank": 1, "lastmod": "2007-10-12", "lastmod_ts": 1192147200, "section": "Blog", "tags": [], "description": "Last week, my colleague Ian Mulvany posted on Nascent an entry about NSF’s recent call for proposals on DataNet (aka “A Sustainable Digital Data Preservation and Access Network”). Peter Brantley, of DLF, has set up a public group DataNet on Nature Network where all are welcome to join in the discussion on what NSF effectively are viewing as the challenge of dealing with “big data”. As Ian notes in a mail to me:", "content": "Last week, my colleague Ian Mulvany posted on Nascent an entry about NSF’s recent call for proposals on DataNet (aka “A Sustainable Digital Data Preservation and Access Network”). Peter Brantley, of DLF, has set up a public group DataNet on Nature Network where all are welcome to join in the discussion on what NSF effectively are viewing as the challenge of dealing with “big data”. As Ian notes in a mail to me:\n“It seems that for a fully integrated flow of data then publisher involvement is going to be required, and it is clear from the proposal that the NSF are also interested in rights management or at negotiating that issue.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/categories/otmi/", "title": "OTMI", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Categories", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/otmi-applied-means-more-search-hits/", "title": "OTMI Applied - Means More Search Hits", "subtitle":"", "rank": 1, "lastmod": "2007-10-09", "lastmod_ts": 1191888000, "section": "Blog", "tags": [], "description": "(Click image to enlarge.)\nFollowing up on previous posts on OTMI (the proposal from NPG for scholarly publishers to syndicate their full text to drive text-mining applications), Fabien Campagne from Cornell, a long-time OTMI supporter, has created an OTMI-driven search engine (based on his Twease work). This may be the first publicly accessible OTMI-based service. It currently only contains NPG content from the OTMI archive online - some 2 years worth of Nature and four other titles.", "content": "\n(Click image to enlarge.)\nFollowing up on previous posts on OTMI (the proposal from NPG for scholarly publishers to syndicate their full text to drive text-mining applications), Fabien Campagne from Cornell, a long-time OTMI supporter, has created an OTMI-driven search engine (based on his Twease work). This may be the first publicly accessible OTMI-based service. It currently only contains NPG content from the OTMI archive online - some 2 years worth of Nature and four other titles. (When will we begin to see other publishers on board?)\nWhat’s happening here? Well, Twease is a web-based front-end to searching Medline abstracts. As such, a search will retrieve a set of results labeled by PMID and list all lines in the abstract where a match occurs. 
By contrast, with Twease-OTMI a search is run over the article full text and a will retrieve all text “snippets” (for Nature we use sentences, although other units of text are possible) which match. See the figure above where the top three results are all labeled by the same DOI and show text matches from various points within the document.\nThis shows that a far superior search match rate is possible using the article full text (as distributed in OTMI format) where text integrity as publishable asset is not compromised.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/mars-bar/", "title": "Mars Bar", "subtitle":"", "rank": 1, "lastmod": "2007-10-08", "lastmod_ts": 1191801600, "section": "Blog", "tags": [], "description": "Just noticed that there is now (as of last month) a blog for Mars (“Mars: Comments on PDF, Acrobat, XML, and the Mars file format”). See this from the initial post:\n“The Mars Project at Adobe is aimed at creating an XML representation for PDF documents. We use a component-based model for representing different aspects of the document and we use the Universal Container Format (a Zip-based packaging format) to hold the pieces.", "content": "Just noticed that there is now (as of last month) a blog for Mars (“Mars: Comments on PDF, Acrobat, XML, and the Mars file format”). See this from the initial post:\n“The Mars Project at Adobe is aimed at creating an XML representation for PDF documents. We use a component-based model for representing different aspects of the document and we use the Universal Container Format (a Zip-based packaging format) to hold the pieces. Mars uses XML to represent the individual components where that makes sense, but otherwise uses industry standard formats to represent other components. Examples of these include Fonts (we use OpenType), Images (PNG, GIF, JPEG, JPEG2000), Color (ICC Color Profiles), etc.. We use SVG to represent page content, which fits as both an XML format and an industry standard.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/scholarly-dc/", "title": "Scholarly DC", "subtitle":"", "rank": 1, "lastmod": "2007-10-05", "lastmod_ts": 1191542400, "section": "Blog", "tags": [], "description": "This This was just sent out to the DC-GENERAL mailing list about the new DCMI Community for Scholarly Communications. As Julie Allinson says:\n“The aim of the group is to provide a central place for individuals and organisations to exchange information, knowledge and general discussion on issues relating to using Dublin Core for describing items of ‘scholarly communications’, be they research papers, conference presentations, images, data objects. With digital repositories of scholarly materials increasingly being established across the world, this group would like to offer a home for exploring the metadata issues faced.", "content": "This This was just sent out to the DC-GENERAL mailing list about the new DCMI Community for Scholarly Communications. As Julie Allinson says:\n“The aim of the group is to provide a central place for individuals and organisations to exchange information, knowledge and general discussion on issues relating to using Dublin Core for describing items of ‘scholarly communications’, be they research papers, conference presentations, images, data objects. 
With digital repositories of scholarly materials increasingly being established across the world, this group would like to offer a home for exploring the metadata issues faced.”\nThere’s also a DC-SCHOLAR mailing list (subscribe here). Not too much there yet, but it may be useful to track - or even to participate. 🙂\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-names-project/", "title": "The Names Project", "subtitle":"", "rank": 1, "lastmod": "2007-10-05", "lastmod_ts": 1191542400, "section": "Blog", "tags": [], "description": "Was reminded to blog about this after reading Lorcan’s post on the Names Project being run by JISC. From the blurb:\n_“The project is going to scope the requirements of UK institutional and subject repositories for a service that will reliably and uniquely identify names of individuals and institutions.\nIt will then go on to develop a prototype service which will test the various processes involved. This will include determining the data format, setting up an appropriate database, mapping data from different sources, populating the database with records and testing the use of the data.", "content": "Was reminded to blog about this after reading Lorcan’s post on the Names Project being run by JISC. From the blurb:\n_“The project is going to scope the requirements of UK institutional and subject repositories for a service that will reliably and uniquely identify names of individuals and institutions.\nIt will then go on to develop a prototype service which will test the various processes involved. This will include determining the data format, setting up an appropriate database, mapping data from different sources, populating the database with records and testing the use of the data.”_\nOne immediate project tangible is the landscape report (‘A review of the current landscape in relation to a proposed Name Authority Service for UK repositories of research outputs’) which summarizes some current initiatives in author identification from a UK perspective, including inter alia Elsevier’s Scopus Author Identifier.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/inchikey/", "title": "InChIKey", "subtitle":"", "rank": 1, "lastmod": "2007-10-02", "lastmod_ts": 1191283200, "section": "Blog", "tags": [], "description": "The InChI (International Chemical Identifier from IUPAC) has been blogged earlier here. RSC have especially taken this on board in their Project Prospect and now routinely syndicate InChI identifiers in their RSS feeds as blogged here.\nAs reported variously last month (see here for one such review) IUPAC have now released a new (1.02beta) version of their software which allows hashed versions (fixed length 25-character) of the InChI, so-called InChIKey’s, to be generated which are much more search engine friendly.", "content": "The InChI (International Chemical Identifier from IUPAC) has been blogged earlier here. RSC have especially taken this on board in their Project Prospect and now routinely syndicate InChI identifiers in their RSS feeds as blogged here.\nAs reported variously last month (see here for one such review) IUPAC have now released a new (1.02beta) version of their software which allows hashed versions (fixed length 25-character) of the InChI, so-called InChIKey’s, to be generated which are much more search engine friendly. 
Compare a regular InChI identifier:\nInChI=1/C49H70N14O11/c1-26(2)39(61-42(67)33(12-8-18-55\n-49(52)53)57-41(66)32(50)23-38(51)65)45(70)58-34(20-29-1\n4-16-31(64)17-15-29)43(68)62-40(27(3)4)46(71)59-35(22-30\n-24-54-25-56-30)47(72)63-19-9-13-37(63)44(69)60-36(48(7\n3)74)21-28-10-6-5-7-11-28/h5-7,10-11,14-17,24-27,32-3\n7,39-40,64H,8-9,12-13,18-23,50H2,1-4H3,(H2,51,65)(H,54,56\n)(H,57,66)(H,58,70)(H,59,71)(H,60,69)(H,61,67)(H,62,68)(H,73,74)\n(H4,52,53,55)/f/h56-62,73H,51-53H2\nwith its InChIKey counterpart:\nInChIKey=JYPVVOOBQVVUQV-UHFFFAOYAR\nThat’s some saving.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oh-no-not-you-again/", "title": "Oh No, Not You Again!", "subtitle":"", "rank": 1, "lastmod": "2007-10-02", "lastmod_ts": 1191283200, "section": "Blog", "tags": [], "description": "Oh dear. Yesterday’s post “Using ISO URNs” was way off the mark. I don’t know. I thought that walk after lunch had cleared my mind. But apparently not. I guess I was fixing on eyeballing the result in RDF/N3 rather than the logic to arrive at that result.\n(Continues.)\n", "content": "Oh dear. Yesterday’s post “Using ISO URNs” was way off the mark. I don’t know. I thought that walk after lunch had cleared my mind. But apparently not. I guess I was fixing on eyeballing the result in RDF/N3 rather than the logic to arrive at that result.\n(Continues.)\nThere are three namespace cases (and I was only wrong in two out of the three, I think):\n“pdf:” I was originally going to suggest the use of “data:” for the PDF information dictionary terms here but then lunged at using an HTTP URI (the URI of the page for the PDF Reference manual on the Adobe site) for regular orthodox conformancy and good churchgoing:\n@prefix pdf: \u0026lt;http://www.adobe.com/devnet/pdf/pdf_reference.html\u0026gt; . This was wrong on two counts:\na) Afaik no such use for this URI as a namespace has ever been made by Adobe. And it is in the gift of the DNS tenant (elsewhere called “owner”) to mint URIs under that namespace and to ascribe meanings to those URIs.\nb) Also the URI is not best suited to a role as namespace URI since RDF namespaces typically end in “/” or “#” to make the division between namespace and term clearer. (In XML it doesn’t make a blind bit of difference as XML namespaces are just a scoping mechanism.) So to have a property URI as\nhttp://www.adobe.com/devnet/pdf/pdf_reference.htmlAuthor does the job but looks pretty rough and more importantly precludes (at least, complicates) the possibility of dereferencing the URI to return a page with human or machine readable semantics. Better in RDF terms is one of the following:\na) http://www.adobe.com/devnet/pdf/pdf_reference/Author b) http://www.adobe.com/devnet/pdf/pdf_reference#Author c) http://www.adobe.com/devnet/pdf/pdf_reference.html#Author In the absence of any published namespace from Adobe for these terms, I think it would have been more prudent to fall back on “data:” URIs. So\n@prefix pdf: \u0026lt;data:,\u0026gt; . leading to\ndata:,Author data:,CreationDate data:,Creator etc. This is correct (afaict) and merely provides a URI representation for bare strings.\nHad we wanted to relate those terms to the PDF Reference we might have tried something like:\ndata:,PDF%20Reference:Author data:,PDF%20Reference:CreationDate data:,PDF%20Reference:Creator etc. 
And if we had wanted to make those truly secondary RDF resources related to a primary RDF resource for the “namespace” we could have attempted something like:\ndata:,PDF%20Reference#Author data:,PDF%20Reference#CreationDate data:,PDF%20Reference#Creator etc. Note though that the “data:” specification is not clear about the implications of using “#”. (Is it allowed, or isn;t it?) We must suspect that it is not allowed, but see this mail from Chris Lilley (W3C) which is most insightful.\n“pdfx:” The example was just for demo purposes, but (as per 1a above) it is incumbent on the namespace authority (here ISO) to publish a URI for the term to be used. Anyhow, the namespace URI I cited\n@prefix pdfx: \u0026lt;urn:iso:std:iso-iec:15930:-1:2001\u0026gt; . would not have been correct and would have led to these mangled URIs:\nurn:iso:std:iso-iec:15930:-1:2001GTS_PDFXVersion urn:iso:std:iso-iec:15930:-1:2001GTS_PDFXConformance It should have been something closer to\n@prefix pdfx: \u0026lt;urn:iso:std:iso-iec:15930:-1:2001:\u0026gt; . leading to\nurn:iso:std:iso-iec:15930:-1:2001:GTS_PDFXVersion urn:iso:std:iso-iec:15930:-1:2001:GTS_PDFXConformance “_usr:” This was the one correct call in yesterday’s post.\n@prefix _usr: \u0026lt;data:,\u0026gt; . The only problem here would be to differentiate these terms from the terms listed in the PDF Reference manual, although the PDF information dictionary makes no such distinction itself.\nTo sum up, perhaps the best way of rendering the PDF information dictionary keys in RDF would be to use “data:” URIs for all (i.e. a methodology for URI-ifying strings) and to bear in mind that at some point ISO might publish URNs for the PDF/X mandated keys: ‘GTS_PDFXVersion‘ and ‘GTS_PDFXConformance‘. So,\n# document infodict (object 58: 476983): @prefix: pdfx: \u0026lt;data:,\u0026gt; . @prefix: pdf: \u0026lt;data:,\u0026gt; . @prefix: _usr: \u0026lt;data:,\u0026gt; . \u0026lt;\u003e _usr:Apag_PDFX_Checkup \"1.3\"; pdf:Author \"Scott B. Tully\"; pdf:CreationDate \"D:20020320135641Z\"; pdf:Creator \"Unknown\"; pdfx:GTS_PDFXConformance \"PDF/X-1a:2001\"; pdfx:GTS_PDFXVersion \"PDF/X-1:2001\"; pdf:Keywords \"PDF/X-1\"; pdf:ModDate \"D:20041014121049+10'00'\"; pdf:Producer \"Acrobat Distiller 4.05 for Macintosh\"; pdf:Subject \"A document from our PDF archive. \"; pdf:Title \"Tully Talk November 2001\"; pdf:Trapped \"False\" . ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/using-iso-urns/", "title": "Using ISO URNs", "subtitle":"", "rank": 1, "lastmod": "2007-10-01", "lastmod_ts": 1191196800, "section": "Blog", "tags": [], "description": "(Update - 2007.10.02: Just realized that there were some serious flaws in the post below regarding publication and form of namespace URIs which I’ve now addressed in a subsequent post here.)\nBy way of experimenting with a use case for ISO URNs, below is a listing of the document metadata for an arbitrary PDF. (You can judge for yourselves whether the metadata disclosed here is sufficient to describe the document.) Here, the metadata is taken from the information dictionary and from the document metadata stream (XMP packet).\nThe metadata is expressed in RDF/N3. That may not be a surprise for the XMP packet which is serialized in RDF/XML, as it’s just a hop, skip and a jump to render it as RDF/N3 with properties taken from schema whose namespaces are identified by URI. 
What may be more unusual is to see the document information dictionary metadata (the “normal” metadata in a PDF) rendered as RDF/N3 since the information dictionary is not nodelled on RDF, not expressed in XML, and not namespaced. Here, in addition to the trusty HTTP URI scheme, I’ve made use of two particular URI schemes: “iso:” URN namespaces, and “data:” URIs.\n(Continues.)\n", "content": "(Update - 2007.10.02: Just realized that there were some serious flaws in the post below regarding publication and form of namespace URIs which I’ve now addressed in a subsequent post here.)\nBy way of experimenting with a use case for ISO URNs, below is a listing of the document metadata for an arbitrary PDF. (You can judge for yourselves whether the metadata disclosed here is sufficient to describe the document.) Here, the metadata is taken from the information dictionary and from the document metadata stream (XMP packet).\nThe metadata is expressed in RDF/N3. That may not be a surprise for the XMP packet which is serialized in RDF/XML, as it’s just a hop, skip and a jump to render it as RDF/N3 with properties taken from schema whose namespaces are identified by URI. What may be more unusual is to see the document information dictionary metadata (the “normal” metadata in a PDF) rendered as RDF/N3 since the information dictionary is not nodelled on RDF, not expressed in XML, and not namespaced. Here, in addition to the trusty HTTP URI scheme, I’ve made use of two particular URI schemes: “iso:” URN namespaces, and “data:” URIs.\n(Continues.)\nAs far as I am aware, there is no formal identifier for entries in the document information dictionary as specified by the PDF Reference from Adobe Systems, so it may be appropriate to use the HTTP URI for the Adobe homepage for the PDF Reference manual, from which specific editions are available.\nFor the PDF/X keys which are specified in the ISO standard ISO 15930-1 2001, I have used an ISO URN. (I don’t expect this to be correct in all details but it should give some idea of how it might be used. It may be that the URI should express the term itself, rather than the document from which the term was defined.) And finally, for the one additional user-supplied key here I have made use of a “data:” URI with no body (i.e. I’m speechless). One could have provided some text within the body of the “data:” URI if one wanted to differentiate between alternate user keys or to otherwise annotate these keys.\nNote that the prefixes used in the information dictionary and in the metadata stream are unrelated, as are the mappings of property elements to schemas.\nWell, that’s all really just for fun but it may show two things: 1) how a general description might be described with RDF and how general properties can be mapped to URIs (with possibly limited machine utility), and 2) how an ISO URN might be used.\n# document infodict (object 58: 476983): @prefix: pdfx: \u0026lt;urn:iso:std:iso-iec:15930:-1:2001\u0026gt; . @prefix: pdf: \u0026lt;http://www.adobe.com/devnet/pdf/pdf_reference.html\u0026gt; . @prefix: _usr: \u0026lt;data:,\u0026gt; . \u0026lt;\u003e _usr:Apag_PDFX_Checkup \"1.3\"; pdf:Author \"Scott B. Tully\"; pdf:CreationDate \"D:20020320135641Z\"; pdf:Creator \"Unknown\"; pdfx:GTS_PDFXConformance \"PDF/X-1a:2001\"; pdfx:GTS_PDFXVersion \"PDF/X-1:2001\"; pdf:Keywords \"PDF/X-1\"; pdf:ModDate \"D:20041014121049+10'00'\"; pdf:Producer \"Acrobat Distiller 4.05 for Macintosh\"; pdf:Subject \"A document from our PDF archive. 
\"; pdf:Title \"Tully Talk November 2001\"; pdf:Trapped \"False\" . # document metadata stream (object 41: 472418): @prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . @prefix pdf: \u0026lt;http://ns.adobe.com/pdf/1.3/\u0026gt; . @prefix pdfx: \u0026lt;http://ns.adobe.com/pdfx/1.3/\u0026gt; . @prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . @prefix xmpMM: \u0026lt;http://ns.adobe.com/xap/1.0/mm/\u0026gt; . \u0026lt;\u003e pdf:Keywords \"PDF/X-1\"; pdf:Producer \"Acrobat Distiller 4.05 for Macintosh\"; pdfx:Apag_PDFX_Checkup \"1.3\"; pdfx:GTS_PDFXConformance \"PDF/X-1a:2001\"; pdfx:GTS_PDFXVersion \"PDF/X-1:2001\"; xmp:CreateDate \"2002-03-20T13:56:41Z\"; xmp:CreatorTool \"Unknown\"; xmp:MetadataDate \"2004-10-14T12:10:49+10:00\"; xmp:ModifyDate \"2004-10-14T12:10:49+10:00\"; xmpMM:DocumentID \"uuid:bd7ae9a1-1110-43c0-8e84-632f2dbb55ab\"; dc:creator [ a rdf:Seq; rdf:_1 \"Scott B. Tully\" ]; dc:description [ a rdf:Alt; rdf:_1 \"A document from our PDF archive. \"@x-default ]; dc:format \"application/pdf\"; dc:title [ a rdf:Alt; rdf:_1 \"Tully Talk November 2001\"@x-default ] . ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/whole-lotta-id/", "title": "Whole Lotta ID", "subtitle":"", "rank": 1, "lastmod": "2007-10-01", "lastmod_ts": 1191196800, "section": "Blog", "tags": [], "description": "ISO has registered with the IANA a URN namespace identifier (“iso:”) for ISO persistent resources. From the Internet-Draft:\n“This URN NID is intended for use for the identification of persistent resources published by the ISO standards body (including documents, document metadata, extracted resources such as standard schemata and standard value sets, and other resources).”\nThe toplevel grammar rules (ABNF) give some indication of scope:\nNSS = std-nss std-nss = \u0026ldquo;std:\u0026rdquo; docidentifier *supplement *docelement [addition]", "content": "ISO has registered with the IANA a URN namespace identifier (“iso:”) for ISO persistent resources. From the Internet-Draft:\n“This URN NID is intended for use for the identification of persistent resources published by the ISO standards body (including documents, document metadata, extracted resources such as standard schemata and standard value sets, and other resources).”\nThe toplevel grammar rules (ABNF) give some indication of scope:\nNSS = std-nss std-nss = \u0026ldquo;std:\u0026rdquo; docidentifier *supplement *docelement [addition]\nJust wanted to quote here one of the funkier examples cited in the document:\nurn:iso:std:iso:9999:-1:ed-1:v1-amd1.v1:en,fr:amd:2:v2:en:clause:3.1,a.2-b.9\n“refers to (sub)clauses 3.1 and A.2 to B.9 in the corrected version of Amendment 2, in English, which amends the document comprising the 1st version of edition 1 of ISO 9999-1 incorporating the 1st version of Amendment 1, in English/French (bilingual document)”\nWow! That’s some ID. That’s something else.\nAs far as DOI is concerned there is nothing obvious to be learned. It is interesting to see such a level of granularity supported though. And since all these documents issue from a central publisher they can be prescriptive about the identifier syntax. Something which cannot be mandated for the many Crossref publishers with their own commercial arrangements. Hence DOI is generally agnostic about suffix strings.\nSeems to be a little confusion about the registration though. The NID was approved Jan. 
15, ’07 by the IESG and the IANA Registry of URN Namespaces (last updated Aug. 22, ’07) lists the namespace “iso” with the provisional (unnumbered) RFC labelled “RFC-goodwin-iso-urn-01.txt” (being the -01 draft). However, the IETF I-D Tracker reports this status for draft-goodwin-iso-urn, which shows that a new I-D (an -02 draft) was submitted in Sept. 7, ’07:\n“A Uniform Resource Name (URN) Namespace for the International Organization for Standardization (ISO), draft-goodwin-iso-urn-02.txt“\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/authors-in-context/", "title": "Authors in Context?", "subtitle":"", "rank": 1, "lastmod": "2007-09-30", "lastmod_ts": 1191110400, "section": "Blog", "tags": [], "description": "On the subject of author IDs (a subject Crossref is interested in and on which held a meeting earlier this year, as blogged about here), this post by Karen Coyle “Name authority control, aka name identification” may be worth a read. She starts off with this:\n“Libraries do something they call “name authority control”. For most people in IT, this would be called “assigning unique identifiers to names.” Identifying authors is considered one of the essential aspects of library cataloging, and it isn’t done in any other bibliographic environment, as far as I know.", "content": "On the subject of author IDs (a subject Crossref is interested in and on which held a meeting earlier this year, as blogged about here), this post by Karen Coyle “Name authority control, aka name identification” may be worth a read. She starts off with this:\n“Libraries do something they call “name authority control”. For most people in IT, this would be called “assigning unique identifiers to names.” Identifying authors is considered one of the essential aspects of library cataloging, and it isn’t done in any other bibliographic environment, as far as I know.”\nand concludes thus:\n“Perhaps the days of looking at lists of authors’ names is over. Maybe users need to see a cloud of authors connected to topic areas in which they have published, or related to books titles or institutional affiliations. In this time of author abundance, names are not meaningful without some context.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-ville/", "title": "XMP-Ville", "subtitle":"", "rank": 1, "lastmod": "2007-09-25", "lastmod_ts": 1190678400, "section": "Blog", "tags": [], "description": "Been so busy looking into the technical details of XMP that I almost forgot to check out the current landcsape. Luckily I chanced on these articles by Ron Roszkiewicz for The Seybold Report (and apologies for lifting the title of this post from his last). The articles about XMP are well worth reading and chart the painful progress made to date:\nThe Brief Tortured Life of XMP (July ’05) [Thought Leaders Hammer out Metadata Standard] (April ’07) [Metadata Persistence and “Save for Web…”] (July ’07) From the earlier characterization of XMP as “underachieving teenager” Roszkiewicz is cautiously optimistic that IDEAlliance’s XMP Open initiative (an initiative to advance XMP as an open industry specification) will help outreach and foster adoption of this fledgling technology.\n(Continues.)\n", "content": "Been so busy looking into the technical details of XMP that I almost forgot to check out the current landcsape. Luckily I chanced on these articles by Ron Roszkiewicz for The Seybold Report (and apologies for lifting the title of this post from his last). 
The articles about XMP are well worth reading and chart the painful progress made to date:\nThe Brief Tortured Life of XMP (July ’05) [Thought Leaders Hammer out Metadata Standard] (April ’07) [Metadata Persistence and “Save for Web…”] (July ’07) From the earlier characterization of XMP as “underachieving teenager” Roszkiewicz is cautiously optimistic that IDEAlliance’s XMP Open initiative (an initiative to advance XMP as an open industry specification) will help outreach and foster adoption of this fledgling technology.\n(Continues.)\nThere has been some activity here. Following on from an industry open day event last year:\n* [IDEAlliance XMP Open Day][5], New York, March ’06\u0026lt;/ul\u0026gt; there have been two metadata summits earlier this year co-sponsored by Adobe Systems and IDEAlliance: * [Metadata Directions in Advertising and Branding][6], San Francisco, January ’07 * [Content Metadata Summit 1.1][7] New York, March ’07\u0026lt;/ul\u0026gt; Promising bestirrings. (And also with the recent public airing of the PRISM 2.0 draft with its support for XMP which was reviewed at the PRISM WG F2F last week for publication as a standard.) But generally the state of XMP-Ville at this time is rather sleepy. There’s not much by way of news on the [XMP Open][8] website. At least promise, if no promises. Back to the articles. The really interesting thing of note (to me at any rate) in Roszkiewicz’s review of the last summit is the almost total absence of any mention of the Web. It is as if XMP users (both consumers and providers) would be content to play within the walled garden of the CS3 product portfolio. I don’t get that. The Web changes everything. Although XMP maps its native data model to RDF (and RDF is an inherently open technology allowing arbitrary schemas to be mixed at will), XMP betrays its application roots by seeming to want to impose some kind of veto on the schemas to be used. Or rather, how they are to be used. It also seems to be all fussed up by centralized notions such as a cross-mapping schema registry. (As if that were part of its remit.) As Roszkiewicz notes: \u0026gt; _\u0026amp;#8220;The consortia [IDEAlliance and the stakeholders] will have ownership responsibility for name space registry, cross-map definition and support, standards group outreach and coordination, compliance certification and logo and the “XMP Open” brand.\u0026amp;#8221;_ And elsewhere: \u0026gt; _\u0026amp;#8220;So while the standard for XMP might be defined, the data that will be fed into files is not, for want of an IDEAlliance-like standards management body to filter and rationalize the many [schema] into a few.\u0026amp;#8221;_ And then more worryingly, this: \u0026gt; _\u0026amp;#8220;That schema should be managed by a government agency such as the Library of Congress which could manage the dictionaries and schema, certify them, register the namespace and provide a centralized location to distribute them.\u0026amp;#8221;_ Well, I don’t see what this matters to the core technology of XMP which is just a specification for the sneaking in of an XML document into arbitrary media files. And the use of RDF/XML would seem to be a further indication that XMP is to be independent of the schema used. The use of both RDF plus XML technologies should allow XMP to present itself as a framework or \u0026amp;#8220;platform\u0026amp;#8221; for metadata exchange and to get out of the way of what is actually carried by the XMP packets. App neutrality, if you will. 
Again the notion of Web as just an alternate channel is apparent in the third of the articles where Roszkiewicz talks about the Device Central tool which allows a user of a CS3 product to \u0026amp;#8220;Save for Web or Devices\u0026amp;#8230;\u0026amp;#8221;. This article talks about the clumsy handling of metadata in such device saves, whereby the packet may be abbreviated - and metadata terms dropped - when printing to small footprint devices. Not a feature to be retained for too long, I would hope. So, where are we currently with XMP? According to Roszkiewicz: \u0026gt; _\u0026amp;#8220;As the developer of a suite of applications that relies on XMP as the vehicle for managing metadata, Adobe has too much invested in its development to allow any substantive changes by outsiders. So “open” primarily will mean open to suggestions, with an official channel in place to process them.\u0026amp;#8221;_ And as to that channel? \u0026gt; _\u0026amp;#8220;As the principal conduit to Adobe for changes to XMP, IDEAlliance will act as a gateway and support organization to the user community - a role for which it is well-suited. \u0026amp;#8230; As a sponsor-supported, not-for-profit organization, IDEAlliance can serve as a credible buffer for Adobe to the user community and synchronize and standardize third-party development efforts.\u0026amp;#8221;_ And goes on: \u0026gt; _\u0026amp;#8220;The principal unanswered questions at this point are: Will the stakeholders represent all of the key industries; will Adobe provide timely support for considering user input and updating the XMP Toolkit; and can Adobe, IDEAlliance and IDEAlliance workgroups manage all of the responsibilities that will fall upon them when the deal is struck. The hand-over doesn’t seem to have taken place yet, and we are still examining the scope and feasibility of the proposal.\u0026amp;#8221;_ It seems to me that Adobe is the party girl, IDEAlliance is the special guest, and Crossref publishers are the neighbourly gatecrashers who want to play with the toys. And not perhaps too nicely neither. I just hope that the toys aren’t taken away from us. They’re too much fun. Ironic really that we’re on the outside of this since scholarly publishers have a very clearcut grasp of what to do with metadata and a ready application in terms of citation linking. XMP is worth it. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-names-the-thing/", "title": "The Name’s The Thing", "subtitle":"", "rank": 1, "lastmod": "2007-09-20", "lastmod_ts": 1190246400, "section": "Blog", "tags": [], "description": "I’m always curious about names and where they come from and what they mean. Hence, my interest was aroused with the constant references to “XAP” in XMP. As the XMP Specification (Sept. 2005) says:\n“NOTE: The string “XAP” or “xap” appears in some namespaces, keywords, and related names in this document and in stored XMP data. It reflects an early internal code name for XMP; the names have been preserved for compatibility purposes.”\nActually, it occurs in most of the core namespaces: XAP, rather than XMP.\n(Continues.)\n", "content": "I’m always curious about names and where they come from and what they mean. Hence, my interest was aroused with the constant references to “XAP” in XMP. As the XMP Specification (Sept. 2005) says:\n“NOTE: The string “XAP” or “xap” appears in some namespaces, keywords, and related names in this document and in stored XMP data. 
It reflects an early internal code name for XMP; the names have been preserved for compatibility purposes.”\nActually, it occurs in most of the core namespaces: XAP, rather than XMP.\n(Continues.)\nAn earlier XMP Specification from 2001 (v. 1.5 - and see here for an earlier post of mine about XMP’s missing version numbers, and here about Adobe’s lack of archiving for XMP specifications) says almost the same thing:\n“NOTE: Many namespaces, keywords, and related names in this document are prefaced with the string “XAP”, which was an early internal code name for XMP metadata. Because the Acrobat 5.0 product shipped using those names and keywords, they were retained for compatibility purposes.”\nSo, there’s no indication in either of these specifications as to what the original name signified.\nBut then I turned up this issue in the Adobe Developer Knowledgebase:\n_“Known Issue: The metadate framework name was changed from XAP to XMP\nSummary\nXAP (Extensible Authoring Publishing) was an early internal code name for XMP (Extensible Metadata Platform).\nIssue\nWhy are many namespaces, keywords, data structures, and related names in the documents and XMP toolkit code prefaced with the string “XAP” rather than “XMP”?\nSolution\nXAP (Extensible Authoring and Publishing) was an early internal code name for XMP (Extensible Metadata Platform) metadata. Because Acrobat 5.0 used those names, they were retained for compatibility purposes. XMP is the formal name used the framework specification.”_\nAha! Now it’s all clear. And now I’m also wondering if this original name still reflects Adobe’s thinking on the purpose of XMP that it be primarily an authoring utility rather than a workflow utility. That is, is Adobe’s XMP more geared to individual authors of Adobe’s Creative Suite products entering in metadata by hand as part of the authoring act, rather than as a batch entry process within an automated publishing workflow? The emphasis that Adobe put on Custom File Info panels for their CS products would seem to foster the view that Adobe see XMP as an interactive authoring device for adding metadata. But what about the publishers and their workflows? The SDK is a rather poor effort at garnering any widespread support of XMP within the publishing industry.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/acap-any-chance-of-success/", "title": "ACAP - Any chance of success?", "subtitle":"", "rank": 1, "lastmod": "2007-09-19", "lastmod_ts": 1190160000, "section": "Blog", "tags": [], "description": "ACAP has released some documents outlining the use cases they will be testing and some proposed changes to the Robots Exclusion Protocol (REP) - both robots.txt and META tags. There are some very practical proposals here to improve search engine indexing. However, the only search engine publicly participating in the project is http://www.exalead.com/ (which according to Alexa attracted 0.0043% of global internet visits over the last three months). The main docs are “ACAP pilot Summary use cases being tested”, “ACAP Technical Framework - Robots Exclusion Protocol - strawman proposals Part 1”, “ACAP Technical Framework - Robots Exclusion Protocol - strawman proposals Part 2”, “ACAP Technical Framework - Usage Definitions - draft for pilot testing”.", "content": "ACAP has released some documents outlining the use cases they will be testing and some proposed changes to the Robots Exclusion Protocol (REP) - both robots.txt and META tags. 
There are some very practical proposals here to improve search engine indexing. However, the only search engine publicly participating in the project is http://www.exalead.com/ (which according to Alexa attracted 0.0043% of global internet visits over the last three months). The main docs are “ACAP pilot Summary use cases being tested”, “ACAP Technical Framework - Robots Exclusion Protocol - strawman proposals Part 1”, “ACAP Technical Framework - Robots Exclusion Protocol - strawman proposals Part 2”, “ACAP Technical Framework - Usage Definitions - draft for pilot testing”.\nWhat would cause other search engines to recognize the ACAP protocols rather than ignore them? A lot of publishers implementing this and requiring search engines to recognize it to index content could put pressure on the engines. Maybe.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/style-guides-recommend-doi-strings/", "title": "Style Guides Recommend DOI strings", "subtitle":"", "rank": 1, "lastmod": "2007-09-19", "lastmod_ts": 1190160000, "section": "Blog", "tags": [], "description": "A couple of recent posts - from Jefferson University and IFST at Univ of Delaware - note that the AMA and APA style guides now recommend using a DOI, if one is assigned, in a journal article citation.\nA citation in the APA style with a DOI would be:\nConley, D., Pfeiffera, K. M., \u0026amp; Velez, M. (2007). Explaining sibling differences in achievement and behavioral outcomes: The importance of within- and between-family factors. Social Science Research, 36(3), 1087-1104. doi:10.1016/j.ssresearch.2006.09.002\nIn the AMA style a reference would be:\nKitajima TS, Kawashima SA, Watanabe Y. The conserved kinetochore protein shugoshin protects centromeric cohesion during meiosis. Nature. 2004;427(6974):510-517. doi:10.1038/nature02312\nThis is great news. I haven’t looked at the full style guides but it’s not clear if information is given about linking DOIs via http://0-dx-doi-org.libus.csd.mu.edu/\n", "content": "A couple of recent posts - from Jefferson University and IFST at Univ of Delaware - note that the AMA and APA style guides now recommend using a DOI, if one is assigned, in a journal article citation.\nA citation in the APA style with a DOI would be:\nConley, D., Pfeiffera, K. M., \u0026amp; Velez, M. (2007). Explaining sibling differences in achievement and behavioral outcomes: The importance of within- and between-family factors. Social Science Research, 36(3), 1087-1104. doi:10.1016/j.ssresearch.2006.09.002\nIn the AMA style a reference would be:\nKitajima TS, Kawashima SA, Watanabe Y. The conserved kinetochore protein shugoshin protects centromeric cohesion during meiosis. Nature. 2004;427(6974):510-517. doi:10.1038/nature02312\nThis is great news. I haven’t looked at the full style guides but it’s not clear if information is given about linking DOIs via http://0-dx-doi-org.libus.csd.mu.edu/\nInformation on the APA Style Guide is available - http://0-apastyle-apa-org.libus.csd.mu.edu/ - with specific info on electronic references, URLs and DOIs, and here is the AMA info.\nThis raises the existential question of a DOI as a URI. Is\nConley, D., Pfeiffera, K. M., \u0026amp; Velez, M. (2007). Explaining sibling differences in achievement and behavioral outcomes: The importance of within- and between-family factors. Social Science Research, 36(3), 1087-1104.
doi:10.1016/j.ssresearch.2006.09.002 http://0-dx-doi-org.libus.csd.mu.edu/10.1016/j.ssresearch.2006.09.002\nunnecessary or redundant?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/chapter-9-the-closed-book/", "title": "Chapter 9 - The Closed Book", "subtitle":"", "rank": 1, "lastmod": "2007-09-15", "lastmod_ts": 1189814400, "section": "Blog", "tags": [], "description": "Hadn’t really noticed before but was fairly gobsmacked by this notice I just saw on the DOI® Handbook:\n**Please note that Chapter 9, Operating Procedures is for Registration Agency personnel only.**\nDOI® Handbook\ndoi:10.1000/182\nhttp://www.doi.org/hb.html\nAnd, indeed, the Handbook’s TOC only reconfirms this:\n9 Operating procedures*\n*The RA password is required for viewing Chapter 9.\n9.1 Registering a DOI name with associated metadata\n9.2 Prefix assignment\n9.3 Transferring DOI names from one Registrant to another", "content": "Hadn’t really noticed before but was fairly gobsmacked by this notice I just saw on the DOI® Handbook:\n**Please note that Chapter 9, Operating Procedures is for Registration Agency personnel only.**\nDOI® Handbook\ndoi:10.1000/182\nhttp://www.doi.org/hb.html\nAnd, indeed, the Handbook’s TOC only reconfirms this:\n9 Operating procedures*\n*The RA password is required for viewing Chapter 9.\n9.1 Registering a DOI name with associated metadata\n9.2 Prefix assignment\n9.3 Transferring DOI names from one Registrant to another\n9.4 Handle System® policies and procedures\n9.4.1 Overview\n9.4.2 Policies and Procedures\n9.4.3 Requirements for Administrators of Resolution Services\n9.4.4 Protocols and Interfaces\n9.5 DOI® System error messages\nThat’s spooky. A book with a hidden chapter. I really don’t like that at all. Especially on a book aiming to provide general information and guidance. Seems to be that if that information needs to be kept private to RA’s then it has no business rubbing shoulders with public information. I would suggest that the material be opened up or else moved out. Makes me feel so second class.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/custom-panel-for-cc/", "title": "Custom Panel for CC", "subtitle":"", "rank": 1, "lastmod": "2007-09-15", "lastmod_ts": 1189814400, "section": "Blog", "tags": [], "description": "Creative Commons now have a custom panel for adding CC licenses using Adobe apps - see here.\nInteresting on two counts:\nMachine readable licenses XMP metadata But I still think that batch solutions for adding XMP metadata are really required for publishing workflows. And ideally there should be support for adding arbitrary XMP packets if we’re going to have truly rich metadata. I rather fear the constraints that custom panels place upon the publisher.", "content": "Creative Commons now have a custom panel for adding CC licenses using Adobe apps - see here.\nInteresting on two counts:\nMachine readable licenses XMP metadata But I still think that batch solutions for adding XMP metadata are really required for publishing workflows. And ideally there should be support for adding arbitrary XMP packets if we’re going to have truly rich metadata. I rather fear the constraints that custom panels place upon the publisher. 
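\nJust to make the batch point concrete, here is a rough sketch of the sort of thing I have in mind - hand-rolled and illustrative only (the DOI/licence pairing and the file handling are invented, the property choices are merely plausible ones rather than what the CC panel actually writes, and the embedding step would still be left to the XMP SDK, ExifTool or similar):\n
# Build one XMP packet per item in a batch run, rather than typing values into a panel.\n
# dc:identifier carries the DOI; xapRights:WebStatement points at the licence URL.\n
XMP_TEMPLATE = '''<?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d'?>\n
<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'\n
  xmlns:dc='http://purl.org/dc/elements/1.1/'\n
  xmlns:xapRights='http://ns.adobe.com/xap/1.0/rights/'>\n
  <rdf:Description rdf:about=''>\n
    <dc:identifier>doi:{doi}</dc:identifier>\n
    <xapRights:WebStatement>{licence}</xapRights:WebStatement>\n
  </rdf:Description>\n
</rdf:RDF>\n
<?xpacket end='w'?>'''\n
\n
batch = {'10.1038/nrg2158': 'http://creativecommons.org/licenses/by/3.0/'}  # invented pairing\n
for doi, licence in batch.items():\n
    packet = XMP_TEMPLATE.format(doi=doi, licence=licence)\n
    print(packet)  # in a real workflow, write this packet into the media file here\n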
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/last-orders-please/", "title": "Last Orders Please!", "subtitle":"", "rank": 1, "lastmod": "2007-09-13", "lastmod_ts": 1189641600, "section": "Blog", "tags": [], "description": "Public comment period on the PRISM 2.0 draft ends Saturday (Sept. 15) ahead of next week’s WG meeting to review feedback and finalize the spec.\n(I put in some comments about XMP already. Hope they got that.)", "content": "Public comment period on the PRISM 2.0 draft ends Saturday (Sept. 15) ahead of next week’s WG meeting to review feedback and finalize the spec.\n(I put in some comments about XMP already. Hope they got that.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/marking-up-doi/", "title": "Marking up DOI", "subtitle":"", "rank": 1, "lastmod": "2007-09-11", "lastmod_ts": 1189468800, "section": "Blog", "tags": [], "description": "(Update - 2007.09.15: Clean forgot to add in the rdf: namespace to the examples for xmp:Identifier in this post. I’ve now added in that namespace to the markup fragments listed. Also added in a comment here which shows the example in RDF/XML for those who may prefer that over RDF/N3.)\nSo, as a preliminary to reviewing how a fuller metadata description of a Crossref resource may best be fitted into an XMP packet for embedding into a PDF, let’s just consider how a DOI can be embedded into XMP. And since it’s so much clearer to read let’s just conduct this analysis using RDF/N3. (Life is too short to be spent reading RDF/XML or C++ code. :~)\n(And further to Chris Shillum’s comment [(Update - 2007.09.15: Clean forgot to add in the rdf: namespace to the examples for xmp:Identifier in this post. I’ve now added in that namespace to the markup fragments listed. Also added in a comment here which shows the example in RDF/XML for those who may prefer that over RDF/N3.)\nSo, as a preliminary to reviewing how a fuller metadata description of a Crossref resource may best be fitted into an XMP packet for embedding into a PDF, let’s just consider how a DOI can be embedded into XMP. And since it’s so much clearer to read let’s just conduct this analysis using RDF/N3. (Life is too short to be spent reading RDF/XML or C++ code. :~)\n(And further to Chris Shillum’s comment]2 on my earlier post Metadata in PDF: 2. Use Cases where he notes that Elsevier are looking to upgrade their markup of DOI in PDF to use XMP, I’m really hoping that Elsevier may have something to bring to the party and share with us. A consensus rendering of DOI within XMP is going to be of benefit to all.)\n(Continues.)\n", "content": "(Update - 2007.09.15: Clean forgot to add in the rdf: namespace to the examples for xmp:Identifier in this post. I’ve now added in that namespace to the markup fragments listed. Also added in a comment here which shows the example in RDF/XML for those who may prefer that over RDF/N3.)\nSo, as a preliminary to reviewing how a fuller metadata description of a Crossref resource may best be fitted into an XMP packet for embedding into a PDF, let’s just consider how a DOI can be embedded into XMP. And since it’s so much clearer to read let’s just conduct this analysis using RDF/N3. (Life is too short to be spent reading RDF/XML or C++ code. :~)\n(And further to Chris Shillum’s comment [(Update - 2007.09.15: Clean forgot to add in the rdf: namespace to the examples for xmp:Identifier in this post. I’ve now added in that namespace to the markup fragments listed. 
Also added in a comment here which shows the example in RDF/XML for those who may prefer that over RDF/N3.)\nSo, as a preliminary to reviewing how a fuller metadata description of a Crossref resource may best be fitted into an XMP packet for embedding into a PDF, let’s just consider how a DOI can be embedded into XMP. And since it’s so much clearer to read let’s just conduct this analysis using RDF/N3. (Life is too short to be spent reading RDF/XML or C++ code. :~)\n(And further to Chris Shillum’s comment]2 on my earlier post Metadata in PDF: 2. Use Cases where he notes that Elsevier are looking to upgrade their markup of DOI in PDF to use XMP, I’m really hoping that Elsevier may have something to bring to the party and share with us. A consensus rendering of DOI within XMP is going to be of benefit to all.)\n(Continues.)\nWithin an XMP packet our first idea might be to include the DOI using the Dublin Core (DC) schema element dc:identifier in minimalist fashion:\n@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . \u0026lt;\u0026gt; dc:identifier \"10.1038/nrg2158\" . This simply says that the current document (denoted by the empty URI “\u0026lt;\u0026gt;“) has a string property \u0026ldquo;10.1038/nrg2158\u0026rdquo; which is of type identifier from the dc (or Dublin Core) schema which is identified by the URI http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/.\nNow, since this is just a DOI and the wider public cannot be expected to know about DOIs, it would surely be better to present the DOI in URI form (doi:) as\n@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . \u0026lt;\u0026gt; dc:identifier \"doi:10.1038/nrg2158\" . or, using a registered URI form (info:) as\n@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . \u0026lt;\u0026gt; dc:identifier \"info:doi/10.1038/nrg2158\" . Aside: This shows up a limitation of XMP where the DC schema property value for dc:identifier is fixed as type Text. The natural way to express the above in RDF/N3 would be as:\n@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . \u0026lt;\u0026gt; dc:identifier \u0026lt;info:doi/10.1038/nrg2158\u0026gt; . which says that the value is a URI (type URI in XMP terms), not a string (type Text in XMP terms). We either have to flout the XMP specification or else live with this restriction. We’ll opt for the latter for now.\nBut, the XMP Spec deprecates the use of dc:identifier since the context is not specific. (Note that that’s what was just discussed above. The limitation is built into XMP which builds on RDF but does not fully endorse the RDF world view.) Instead the XMP Spec recommends using xmp:Identifier since the context can be set using a qualified property as:\n@prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . @prefix xmpidq: \u0026lt;http://ns.adobe.com/xmp/Identifier/qual/1.0/\u0026gt; . \u0026lt;\u0026gt; xmp:Identifier [ a rdf:Bag; rdf:_1 [ xmpidq:Scheme \"DOI\"; rdf:value \"10.1038/nrg2158\" ] ] . This says the string \u0026ldquo;10.1038/nrg2158\u0026rdquo; belongs to the scheme \u0026ldquo;DOI\u0026rdquo;.\nHere we have used the scheme “DOI” and, as noted above, for wider recognition it would be better to employ one of the URI forms, e.g.\n@prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . 
@prefix xmpidq: \u0026lt;http://ns.adobe.com/xmp/Identifier/qual/1.0/\u0026gt; . \u0026lt;\u0026gt; xmp:Identifier [ a rdf:Bag; rdf:_1 [ xmpidq:Scheme \"URI\"; rdf:value \"doi:10.1038/nrg2158\" ] ] . This says the string \u0026ldquo;doi:10.1038/nrg2158\u0026rdquo; belongs to the scheme \u0026ldquo;URI\u0026rdquo;.\nBut this is the unregistered URI form (doi:), so should we be using instead the registered form (info:)? Well, turns out that this construct for xmp:Identifier is an rdf:Bag so we can include more than one term. How about using this construct then:\n@prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . @prefix xmpidq: \u0026lt;http://ns.adobe.com/xmp/Identifier/qual/1.0/\u0026gt; . \u0026lt;\u0026gt; xmp:Identifier [ a rdf:Bag; rdf:_1 [ xmpidq:Scheme \"URI\"; rdf:value \"info:doi/10.1038/nrg2158\" ]; rdf:_2 [ xmpidq:Scheme \"URI\"; rdf:value \"doi:10.1038/nrg2158\" ] ] . Now we’ve got both forms, which is fair enough since these are equivalent. In RDF terms we can make the statement that:\ndoi:10.1038/nrg2158 owl:sameAs info:doi/10.1038/nrg2158 . which asserts that the two URIs are equivalent and that they reference the same resource.\nSo, what if we want to include a native DOI without the URI garb? We can easily do that:\n@prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . @prefix xmpidq: \u0026lt;http://ns.adobe.com/xmp/Identifier/qual/1.0/\u0026gt; . \u0026lt;\u0026gt; xmp:Identifier [ a rdf:Bag; rdf:_1 [ xmpidq:Scheme \"URI\"; rdf:value \"info:doi/10.1038/nrg2158\" ]; rdf:_2 [ xmpidq:Scheme \"URI\"; rdf:value \"doi:10.1038/nrg2158\" ]; rdf:_3 [ xmpidq:Scheme \"DOI\"; rdf:value \"10.1038/nrg2158\" ] ] . OK, that takes care of the XMP direction to use xmp:Identifier, but, while deprecated by XMP, we note that back in the real world folks will be looking at the DC elements, which is the schema with the greatest purchase. So, why not also add in a dc:identifier element such as would be used typically for DOI in citations? How about this:\n@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . @prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . @prefix xmpidq: \u0026lt;http://ns.adobe.com/xmp/Identifier/qual/1.0/\u0026gt; . \u0026lt;\u0026gt; xmp:Identifier [ a rdf:Bag; rdf:_1 [ xmpidq:Scheme \"URI\"; rdf:value \"info:doi/10.1038/nrg2158\" ]; rdf:_2 [ xmpidq:Scheme \"URI\"; rdf:value \"doi:10.1038/nrg2158\" ]; rdf:_3 [ xmpidq:Scheme \"DOI\"; rdf:value \"10.1038/nrg2158\" ] ]; dc:identifier \"doi:10.1038/nrg2158\" . Right, so we’ve taken care of the identifiers. But maybe there’s something missing? There’s no link to the DOI proxy. For widest applicability we should not assume prior knowledge of the DOI system. Perhaps we could include this link using the property dc:relation? Seems feasible though I would really like to get some feedback on this. Any ideas?\nSo here, then, is a fairly full and complete expression of DOI within the XMP packet.\n@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; . @prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; . @prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; . @prefix xmpidq: \u0026lt;http://ns.adobe.com/xmp/Identifier/qual/1.0/\u0026gt; .
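\n# dc: is Dublin Core, rdf: the RDF syntax vocabulary, xmp: the XMP basic schema, and\n# xmpidq: the qualifier schema used to scope each identifier.\n# The empty URI \u0026lt;\u0026gt; below (i.e. this document) gets the three identifier forms in an rdf:Bag,\n# a plain dc:identifier for citation-style use, and a dc:relation pointing at the DOI proxy.\n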
\u0026lt;\u0026gt; xmp:Identifier [ a rdf:Bag; rdf:_1 [ xmpidq:Scheme \"URI\"; rdf:value \"info:doi/10.1038/nrg2158\" ]; rdf:_2 [ xmpidq:Scheme \"URI\"; rdf:value \"doi:10.1038/nrg2158\" ]; rdf:_3 [ xmpidq:Scheme \"DOI\"; rdf:value \"10.1038/nrg2158\" ] ]; dc:identifier \"doi:10.1038/nrg2158\"; dc:relation \"http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nrg2158\" . Ta-da!\n(Of course, this is all premised on having freedom in writing out the XMP packet. If one is dependent on commercial applications to write out the packet then things may be different. Actually, they will be very different. They may not even be workable.)\nFeedback would be very welcome.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/the-second-wave/", "title": "The Second Wave", "subtitle":"", "rank": 1, "lastmod": "2007-09-11", "lastmod_ts": 1189468800, "section": "Blog", "tags": [], "description": "You might have been wondering why I’ve been banging on about XMP here. Why the emphasis on one vendor technology on a blog focussed on an industry linking solution? Well, this post is an attempt to answer that.\nFour years ago we at Nature Publishing Group, along with a select few early adopters, started up our RSS news feeds. We chose to use RSS 1.0 as the platform of choice which allowed us to embed a rich metadata term set using multiple schemas - especially Dublin Core and PRISM. We evangelized this much at the time and published documents on XML.com (Jul. ’03) and in D-Lib Magazine (Dec. ’04) as well as speaking about this at various meetings and blogging about it. Since that time many more publishers have come on board and now provide RSS routinely, many of them choosing to enrich their feeds with metadata.\nWell, RSS can be seen in hindsight as being the First Wave of projecting a web presence beyond the content platform using standard markup formats. With this embedded metadata a publisher can expand their web footprint and allow users to link back to their content server.\nNow, XMP with its potential for embedding metadata in rich media can be seen as a Second Wave. Media assets distributed over the network can now carry along their own metadata and identity which can be leveraged by third-party applications to provide interesting new functionalities and link-back capability. Again a projection of web presence.\n(Continues.)\n", "content": "You might have been wondering why I’ve been banging on about XMP here. Why the emphasis on one vendor technology on a blog focussed on an industry linking solution? Well, this post is an attempt to answer that.\nFour years ago we at Nature Publishing Group, along with a select few early adopters, started up our RSS news feeds. We chose to use RSS 1.0 as the platform of choice which allowed us to embed a rich metadata term set using multiple schemas - especially Dublin Core and PRISM. We evangelized this much at the time and published documents on XML.com (Jul. ’03) and in D-Lib Magazine (Dec. ’04) as well as speaking about this at various meetings and blogging about it. Since that time many more publishers have come on board and now provide RSS routinely, many of them choosing to enrich their feeds with metadata.\nWell, RSS can be seen in hindsight as being the First Wave of projecting a web presence beyond the content platform using standard markup formats. 
With this embedded metadata a publisher can expand their web footprint and allow users to link back to their content server.\nNow, XMP with its potential for embedding metadata in rich media can be seen as a Second Wave. Media assets distributed over the network can now carry along their own metadata and identity which can be leveraged by third-party applications to provide interesting new functionalities and link-back capability. Again a projection of web presence.\n(Continues.)\nXMP has much in common with RSS 1.0. They are both profiles of RDF/XML. They are both flawed in certain respects because of self-imposed limitations. But they both build on a robust and open data model for the web (RDF) and are reasonably open, at least they are extensible. One (RSS 1.0) was defined in an open process by committee, the other is an open (i.e published) specification provided by a vendor.\nFrom our point of view both specifications are sufficiently advanced to be immediately useful. I’m not sure how one could interact with the further development of either specification. RSS 1.0 is essentially frozen with Atom being posed as a successor technology, although Atom does not conform to the RDF model. (The upshot is that an RSS 1.0 feed can be consumed completely by an RDF-aware application, while an Atom feed would need to be pre-processed before any RDF “goodness” could be gleaned from it.) By contrast, XMP is a vendor-defined technology and alive, if not perhaps kicking. I am unaware of any process to formally contribute to the XMP development apart from shouting from the terraces. None the less, both technologies are usable as is.\nIt is curious that no consistent packaging (and delivery) of metadata has yet been achieved with HTML, the original web interface. The HTML \u003c/tt\u003e and \u003ctt\u003e\u003cmeta\u003e\u003c/tt\u003e elements are employed by publishers with various degrees of consistency. There are also RDF islands that can be embedded within HTML comments (as used e.g. by \u003ca href=\"http://creativecommons.org/\" target=\"_blank\"\u003eCC licenses\u003c/a\u003e). And then there are \u003ca href=\"https://web.archive.org/web/20090927174724/http://ocoins.info/\" target=\"_blank\"\u003eCOinS\u003c/a\u003e objects. But it’s all a bit of a mish-mash to date. Certainly, I don’t recall seeing any guidelines from Crossref as to how machine readable metadata (even markup for the DOI itself) may be embedded within HTML pages, rather than on HTML pages for human readers.\n\u003cp\u003eThis lack of uniform metadata deployment for HTML pages could be something to do with context. With RSS and XMP we are dealing with remote objects, whereas with HTML we are generally accessing this directly on the content server and so have a semantic context. It could be though that metadata delivery from HTML pages will finally be more uniformly available with the further development of standards such as \u003ca href=\"http://microformats.org/\" target=\"_blank\"\u003emicroformats\u003c/a\u003e and especially \u003ca href=\"ww.w3.org/2006/07/SWD/RDFa/syntax/\" target=\"_blank\"\u003eRDFa\u003c/a\u003e, \u003ca href=\"http://www.w3.org/2004/01/rdxh/spec\" target=\"_blank\"\u003eGRDDL\u003c/a\u003e, etc. 
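\n(To be fair, the consuming side of the meta-tag route is trivial - a rough sketch, with an invented page and tag names following the common DC.xxx convention, of how little code it takes to pick such terms up:)\n
from html.parser import HTMLParser\n
\n
class DCMetaReader(HTMLParser):\n
    # Collects <meta name='DC.xxx' content='...'> pairs from the head of a page.\n
    def __init__(self):\n
        super().__init__()\n
        self.terms = {}\n
\n
    def handle_starttag(self, tag, attrs):\n
        if tag == 'meta':\n
            a = dict(attrs)\n
            if a.get('name', '').startswith('DC.'):\n
                self.terms[a['name']] = a.get('content', '')\n
\n
page = '''<html><head>\n
<meta name='DC.identifier' content='doi:10.1038/nrg2158'>\n
<meta name='DC.title' content='An invented article title'>\n
</head><body>...</body></html>'''  # hypothetical abstract page, not from any real publisher\n
\n
reader = DCMetaReader()\n
reader.feed(page)\n
print(reader.terms)\n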
It is also interesting to note that an XMP packet could just as easily be embedded within the HTML page, and if this technology were to be adopted more widely for embedding in other media assets then why not consider the same technology for ordinary web pages?\n\u003cp\u003eI can’t help feeling though that XMP has a lot of promise and is very timely. There are only three real obstacles: creating XMP packets, writing them and reading them. To my mind, once one has a good grasp of XMP then creating the packets can be done with common tools. The same, more or less, for reading the packets. I have shown earlier that this is readily achievable. The only major block is writing the packets into media files although there is support for create/write (if patchy) by open source libraries, as well as there being support (perhaps limited) from products for create/write. But, anyway, it’s certainly do-able.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/w5m0mpcehihzreszntczkc9d/", "title": "W5M0MpCehiHzreSzNTczkc9d", "subtitle":"", "rank": 1, "lastmod": "2007-09-10", "lastmod_ts": 1189382400, "section": "Blog", "tags": [], "description": "What on earth can this string mean: ‘W5M0MpCehiHzreSzNTczkc9d’? This occurs in the XMP packet header:\n\u003c?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d'?\u003e\nWell from the XMP Specification (September 2005) which is available here (PDF) there is this text:\n“The required id attribute must follow begin. For all packets defined by this version of the syntax, the value of id is the following string: W5M0MpCehiHzreSzNTczkc9d”\n(See: 3 XMP Storage Model / XMP Packet Wrapper / Header / Attribute: id)\nOK, so it’s no big deal to cut and paste that string, it’s just mighty curious why this cryptic key is needed in an open specification, especially since (contrary to what might be implied by the text) it doesn’t seem to vary with version. (Or hasn’t yet, at any rate - more below.)\n", "content": "What on earth can this string mean: ‘W5M0MpCehiHzreSzNTczkc9d’? This occurs in the XMP packet header:\n\u003c?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d'?\u003e\nWell from the XMP Specification (September 2005) which is available here (PDF) there is this text:\n“The required id attribute must follow begin. For all packets defined by this version of the syntax, the value of id is the following string: W5M0MpCehiHzreSzNTczkc9d”\n(See: 3 XMP Storage Model / XMP Packet Wrapper / Header / Attribute: id)\nOK, so it’s no big deal to cut and paste that string, it’s just mighty curious why this cryptic key is needed in an open specification, especially since (contrary to what might be implied by the text) it doesn’t seem to vary with version. (Or hasn’t yet, at any rate - more below.)\nRight, so now we get down to it. Just what is the version number of the current XMP Specification anyways? I couldn’t for the life of me find one. (Note that I am talking about the XMP Specification itself and not the XMP Toolkit which is versioned at 4.1.1.) I am assuming that I have the latest version, else I really don’t know where else to look. This link\nhttp://www.adobe.com/products/xmp/\nleads me to\nhttp://www.adobe.com/devnet/xmp/\nwhich leads me to\nhttps://web.archive.org/web/20210811233806/https://www.adobe.com/devnet/xmp.html\nwhich by the way is also the same version that ships with the SDK.\nI do know that there was a Version 1.5 published in September 14, 2001. 
(You can see that this is a fairly slow changing technology - the published spec is from 2 years back, and an earlier - the earlier? - version is from 6 years back). Note that this version has a version number (1.5) but still uses the same XMP packer header ‘id’ attribute.\nNo good, by the way, peeking inside the XMP of the XMP Spec either. Here’s a dump (using the DumpMainXMP utility with the SDK):\n% xmpd xmp_spec.xmp\r// -----------------------------------\r// Dumping main XMP for xmp_spec.xmp :\rFile info : format = \" \", handler flags = 00000260\rPacket info : offset = 0, length = 4051\rInitial XMP from xmp_spec.xmp\rDumping XMPMeta object \"\" (0x0)\rhttp://ns.adobe.com/pdf/1.3/ pdf: (0x80000000 : schema)\rpdf:Producer = \"Acrobat Distiller 7.0 (Windows)\"\rpdf:Copyright = \"2005 Adobe Systems Inc.\"\rpdf:Keywords = \"XMP metadata schema XML RDF\"\rhttp://ns.adobe.com/xap/1.0/ xap: (0x80000000 : schema)\rxap:CreateDate = \"2005-09-23T15:19:07Z\"\rxap:ModifyDate = \"2005-09-23T15:19:07Z\"\rxap:CreatorTool = \"FrameMaker 7.1\"\rhttp://purl.org/dc/elements/1.1/ dc: (0x80000000 : schema)\rdc:description (0x1E00 : isLangAlt isAlt isOrdered isArray)\r[1] = \"XMP metadata specification\" (0x50 : hasLang hasQual)\r? xml:lang = \"x-default\" (0x20 : isQual)\rdc:creator (0x600 : isOrdered isArray)\r[1] = \"Adobe Developer Technologies\"\rdc:title (0x1E00 : isLangAlt isAlt isOrdered isArray)\r[1] = \"Extensible Metadata Platform (XMP) Specification\" (0x50 : hasLang hasQual)\r? xml:lang = \"x-default\" (0x20 : isQual)\rdc:format = \"application/pdf\"\rhttp://ns.adobe.com/pdfx/1.3/ pdfx: (0x80000000 : schema)\rpdfx:Copyright = \"2005 Adobe Systems Inc.\"\rhttp://ns.adobe.com/xap/1.0/mm/ xapMM: (0x80000000 : schema)\rxapMM:InstanceID = \"uuid:99b91701-a78b-4652-84e5-6bccaeb7534e\"\rxapMM:DocumentID = \"uuid:374ea24b-3931-4b83-944d-5b9daa42277e\"\ror in more readable form (courtesy of ‘cwm‘):\n% xmp2n3q docs/XMP-Specification.pdf\r#Processed by Id: cwm.py,v 1.164 2004/10/28 17:41:59 timbl Exp\r# using base file:/Users/tony/Sources/Build/XMP-SDK/\r# Notation3 generation by\r# notation3.py,v 1.166 2004/10/28 17:41:59 timbl Exp\r# Base was: file:/Users/tony/Sources/Build/XMP-SDK/\r@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; .\r@prefix pdf: \u0026lt;http://ns.adobe.com/pdf/1.3/\u0026gt; .\r@prefix pdfx: \u0026lt;http://ns.adobe.com/pdfx/1.3/\u0026gt; .\r@prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; .\r@prefix xmpMM: \u0026lt;http://ns.adobe.com/xap/1.0/mm/\u0026gt; .\r\u0026lt;\u0026gt; pdf:Copyright \"2005 Adobe Systems Inc.\";\rpdf:Keywords \"XMP metadata schema XML RDF\";\rpdf:Producer \"Acrobat Distiller 7.0 (Windows)\";\rpdfx:Copyright \"2005 Adobe Systems Inc.\";\rxmp:CreateDate \"2005-09-23T15:19:07Z\";\rxmp:CreatorTool \"FrameMaker 7.1\";\rxmp:ModifyDate \"2005-09-23T15:19:07Z\";\rxmpMM:DocumentID \"uuid:374ea24b-3931-4b83-944d-5b9daa42277e\";\rxmpMM:InstanceID \"uuid:99b91701-a78b-4652-84e5-6bccaeb7534e\";\rdc:creator [\ra rdf:Seq;\rrdf:_1 \"Adobe Developer Technologies\" ];\rdc:description [\ra rdf:Alt;\rrdf:_1 \"XMP metadata specification\"@x-default ];\rdc:format \"application/pdf\";\rdc:title [\ra rdf:Alt;\rrdf:_1 \"Extensible Metadata Platform (XMP) Specification\"@x-default ] .\r#ENDS\rSo, just what then is the version number of the XMP Specification which the id string ‘W5M0MpCehiHzreSzNTczkc9d’ is marking?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-some-other-gripes/", "title": "XMP - Some 
Other Gripes", "subtitle":"", "rank": 1, "lastmod": "2007-09-10", "lastmod_ts": 1189382400, "section": "Blog", "tags": [], "description": "Following on from the missing XMP Specification version number discussed in the previous post here below are listed some miscellaneous gripes I’ve got with XMP (on what otherwise is a very promising technology). I would be more than happy to be proved wrong on any of these points.\n", "content": "Following on from the missing XMP Specification version number discussed in the previous post here below are listed some miscellaneous gripes I’ve got with XMP (on what otherwise is a very promising technology). I would be more than happy to be proved wrong on any of these points.\n1. XMP version history and archive\nThere doesn’t appear to be any XMP version history or archive hosted by Adobe as far as I can tell.\n2. Unpublished schemas\nAlso there is nothing published - outside the XMP Spec itself - on the core schemas used by XMP. There’s nothing to be gleaned from the namespace URIs used. The Adobe namespaces, e.g.\nhttps://web.archive.org/web/20070929102516/http://www.adobe.com/products/xmp/ (listed in XMP Spec)\nhttps://web.archive.org/web/20070929102516/http://www.adobe.com/products/xmp/ (not listed in XMP Spec)\nseem to all resolve to this page\nhttp://www.adobe.com/products/xmp/.\nSo, that can leave us with undocumented terms (e.g. ‘xmpMM:Manifest‘ used by Adobe InDesign CS2 4.0.5) from documented schemas and also undocumented schemas (e.g. ‘pdfx‘).\n3. UUID\nNote also that many Adobe apps do not use the URN syntax for ‘uuid:‘. The XMP Spec also has this to say:\n_“There is no formal standard for URIs that are based on an abstract UUID. The following proposal may be relevant:\nhttps://datatracker.ietf.org/doc/rfc4122/;\n(see: 3 XMP Storage Model / Serializing XMP / rdf:Description elements / rdf:about attribute)”\nI guess the XMP Spec (Sept. ’05) had just been bedded down more or less when the URN namespace for ‘uuid:‘ was published as RFC 4122 in July ’05.\n4. RDF/XML serialization\nThe biggie.\nXMP schemas specify fixed property value types in RDF/XML, i.e. they specify a fixed profile of RDF/XML instead of generic RDF/XML. This has been commented on recently by myself on the semantic-web list, and also here by Bruce D’Arcus speaking about OpenDocument, and here by Mike Linksvayer speaking for CC.\nThis profiling of RDF/XML leads to real problems. For example, Adobe have defined a Dublin Core (DC) schema which lists the property value types that DC values can assume in an XMP serialization. Meantime, the PRISM 2.0 draft spec defines an incompatible mapping of DC terms to XMP property values. Since both schemas would make use of the same DC namespace (though PRISM haven’t actually specified a DC namespace for use with XMP but do use elsewhere the regular DC namespace) this isn’t going to work. I did supply some feedback on this to the PRISM WG but have heard nothing back from them. So, PRISM XMP looks uncertain at this time. Which, for us, must be a shame.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/connecting-things-bioguid-ispiders-and-doi/", "title": "connecting things: bioGUID, iSpiders and DOI", "subtitle":"", "rank": 1, "lastmod": "2007-09-07", "lastmod_ts": 1189123200, "section": "Blog", "tags": [], "description": "David Shorthouse and Rod Page have developed some great tools for linking references by tying together a number of services and using the Crossref OpenURL interface amongst other things. 
See David’s post - Gimme That Scientific Paper Part III and Rod’s post on OpenURL and using ParaTools - “OpenURL and Spiders“.\nUnfortunately our planned changes to the Crossref OpenURL interface (the 100 queries per day limit in particular) caused some concern for David (“Crossref Takes a Step Back“) - but make sure you read the comments to see my response!\nWe decided to drop the 100 per day query limit for the OpenURL interface and there will be no charges for non-commercial use of the interface - https://0-apps-crossref-org.libus.csd.mu.edu/requestaccount/\n", "content": "David Shorthouse and Rod Page have developed some great tools for linking references by tying together a number of services and using the Crossref OpenURL interface amongst other things. See David’s post - Gimme That Scientific Paper Part III and Rod’s post on OpenURL and using ParaTools - “OpenURL and Spiders“.\nUnfortunately our planned changes to the Crossref OpenURL interface (the 100 queries per day limit in particular) caused some concern for David (“Crossref Takes a Step Back“) - but make sure you read the comments to see my response!\nWe decided to drop the 100 per day query limit for the OpenURL interface and there will be no charges for non-commercial use of the interface - https://0-apps-crossref-org.libus.csd.mu.edu/requestaccount/\nWe want to encourage innovative uses of Crossref services and disseminate DOIs as effectively as possible so we appreciate feedback and encourage the type of development David and Rod are doing. It will be interesting to see if what they are doing has wider applicability. Maybe Crossref could host a webpage to point to tools like this and encourage more development?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/stop-press/", "title": "Stop Press", "subtitle":"", "rank": 1, "lastmod": "2007-08-28", "lastmod_ts": 1188259200, "section": "Blog", "tags": [], "description": "Boy, was I ever so wrong! Contrary to what I said in yesterday’s post, the new PRISM 2.0 spec does support XMP value type mappings for its terms. See the table below which lists the PRISM basic vocabulary terms and the XMP value types.\nMany thanks to Dianne Kennedy and the rest of the PRISM Working Group for having added this support to PRISM 2.0.\n", "content": "Boy, was I ever so wrong! Contrary to what I said in yesterday’s post, the new PRISM 2.0 spec does support XMP value type mappings for its terms. 
See the table below which lists the PRISM basic vocabulary terms and the XMP value types.\nMany thanks to Dianne Kennedy and the rest of the PRISM Working Group for having added this support to PRISM 2.0.\nSection | PRISM Term | XMP Value Type\n4.2.1 | prism:alternateTitle | bag Text\n4.2.2 | prism:byteCount | Integer\n4.2.3 | prism:channel | Text\n4.2.4 | prism:complianceProfile | Choice: “one”, “two”, “three”\n4.2.5 | prism:copyright | Text\n4.2.6 | prism:corporateEntity | bag Text\n4.2.7 | prism:coverDate | Date\n4.2.8 | prism:coverDisplayDate | Text\n4.2.9 | prism:creationDate | Date\n4.2.10 | prism:distributor | Text\n4.2.11 | prism:edition | Text\n4.2.12 | prism:eIssn | Text\n4.2.13 | prism:embargoDate | bag Date\n4.2.14 | prism:endingPage | Text\n4.2.15 | prism:event | bag Text\n4.2.16 | prism:expirationDate | bag Date\n4.2.17 | prism:hasAlternative | bag Text\n4.2.18 | prism:hasCorrection | Text\n4.2.19 | prism:hasTranslation | bag Text\n4.2.20 | prism:industry | bag Text\n4.2.21 | prism:isCorrectionOf | bag Text\n4.2.22 | prism:issn | Text\n4.2.23 | prism:issueIdentifier | Text\n4.2.24 | prism:issueName | Text\n4.2.25 | prism:isTranslationOf | Text\n4.2.26 | prism:killDate | Date\n4.2.27 | prism:location | bag Text\n4.2.28 | prism:modificationDate | Date\n4.2.29 | prism:number | Text\n4.2.30 | prism:object | bag Text\n4.2.31 | prism:origin | Choice: “email”, “mobile”, “broadcast”, “web”, “print”, “recordableMedia”, “other”\n4.2.32 | prism:organization | bag Text\n4.2.33 | prism:pageRange | Text\n4.2.34 | prism:person | bag Text\n4.2.35 | prism:postDate | Date\n4.2.36 | prism:publicationDate | Date\n4.2.37 | prism:publicationName | Text\n4.2.38 | prism:receptionDate | Date\n4.2.39 | prism:rightsAgent | Text\n4.2.40 | prism:section | bag Text\n4.2.41 | prism:startingPage | Text\n4.2.42 | prism:subsection1 | bag Text\n4.2.43 | prism:subsection2 | bag Text\n4.2.44 | prism:subsection3 | bag Text\n4.2.45 | prism:subsection4 | bag Text\n4.2.46 | prism:teaser | Text\n4.2.47 | prism:versionIdentifier | Text\n4.2.48 | prism:volume | Text\n4.2.49 | prism:wordCount | Integer\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/exiftool/", "title": "ExifTool", "subtitle":"", "rank": 1, "lastmod": "2007-08-27", "lastmod_ts": 1188172800, "section": "Blog", "tags": [], "description": "(Update - 2007.08.28: I inadvertently missed out the term names in the last example of XMP as RDF/N3 with QNames and have now added these in. Also - a biggie - I said that PRISM had no XMP schema defined. This is actually wrong and as I blogged here today, the new PRISM 2.0 spec does indeed have a mapping of PRISM terms to XMP value types. Should actually have read the spec instead of just blogging about it earlier here. :~)\nHaving previously stooped to an extremely crass hack for pulling out a document information dictionary from PDFs (for which no apologies are sufficient but it does often work) I feel I should make some kind of amends and mention the wonderful ExifTool by Phil Harvey for reading and writing metadata to media files. This is both a Perl library and command-line application (so it’s cross-platform - a Windows .exe and Mac OS .dmg are also provided.) Besides handling EXIF tags in image files this veritable swissknife of metadata inspectors can also read PDFs for the information dictionary and the document XMP packet.
And moreover, intriguingly, can dump the raw (document) XMP packet.\nI’m still experimenting with it. There’s quite a number of features to explore. But some preliminary finds are listed below.\n", "content": "(Update - 2007.08.28: I inadvertently missed out the term names in the last example of XMP as RDF/N3 with QNames and have now added these in. Also - a biggie - I said that PRISM had no XMP schema defined. This is actually wrong and as I blogged here today, the new PRISM 2.0 spec does indeed have a mapping of PRISM terms to XMP value types. Should actually have read the spec instead of just blogging about it earlier here. :~)\nHaving previously stooped to an extremely crass hack for pulling out a document information dictionary from PDFs (for which no apologies are sufficient but it does often work) I feel I should make some kind of amends and mention the wonderful ExifTool by Phil Harvey for reading and writing metadata to media files. This is both a Perl library and command-line application (so it’s cross-platform - a Windows .exe and Mac OS .dmg are also provided.) Besides handling EXIF tags in image files this veritable swissknife of metadata inspectors can also read PDFs for the information dictionary and the document XMP packet. And moreover, intriguingly, can dump the raw (document) XMP packet.\nI’m still experimenting with it. There’s quite a number of features to explore. But some preliminary finds are listed below.\nTaking one of our standard (metadata poor) PDFs we get this dump:\n% exiftool nature05428.pdf\rExifTool Version Number : 6.95\rFile Name : nature05428.pdf\rDirectory : .\rFile Size : 367 kB\rFile Modification Date/Time : 2007:07:26 14:01:23\rFile Type : PDF\rMIME Type : application/pdf\rPage Count : 3\rProducer : Acrobat Distiller 6.0.1 (Windows)\rMod Date : 2006:12:19 15:03:23+08:00\rCreation Date : 2006:12:18 16:57:58+08:00\rCreator : 3B2 Total Publishing System 7.51n/W\rCreator Tool : 3B2 Total Publishing System 7.51n/W\rModify Date : 2006:12:19 15:03:23+08:00\rCreate Date : 2006:12:18 16:57:58+08:00\rMetadata Date : 2006:12:19 15:03:23+08:00\rDocument ID : uuid:f598740b-ad11-41c5-a49e-7caffea783f0\rFormat : application/pdf\rTitle : untitled\rBy way of comparison, if we take a demo (metadata rich) PDF with added descriptive DC and PRISM metadata terms, we then get this dump:\n% exiftool 445037a.pdf\rExifTool Version Number : 6.95\rFile Name : 445037a.pdf\rDirectory : .\rFile Size : 265 kB\rFile Modification Date/Time : 2007:07:26 16:18:17\rFile Type : PDF\rMIME Type : application/pdf\rPage Count : 1\rCreator Tool : InDesign: pictwpstops filter 1.0\rMetadata Date : 2006:12:22 12:10:07Z\rDocument ID : uuid:4cd39128-2c8e-41c0-9cad-eea2a1fdb64f\rIdentifier : doi:10.1038/445037a\rDescription : doi:10.1038/445037a\rSource : Nature 445, 37 (2007)\rDate : 2007:01:04\rFormat : application/pdf\rPublisher : Nature Publishing Group\rLanguage : en\rRights : © 2007 Nature Publishing Group\rPublication Name : Nature\rIssn : 0028-0836\rE Issn : 1476-4679\rPublication Date : 2007-01-04\rCopyright : © 2007 Nature Publishing Group\rRights Agent : permissions@nature.com\rVolume : 445\rNumber : 7123\rStarting Page : 37\rEnding Page : 37\rSection : News and Views\rModify Date : 2006:12:22 12:10:07Z\rCreate Date : 2006:12:22 11:46:18Z\rTitle : 4.1 N\u0026V NS NEW.indd\rTrapped : False\rCreator : InDesign: pictwpstops filter 1.0\rGTS PDFX Version : PDF/X-1:2001\rGTS PDFX Conformance : PDF/X-1a:2001\rAuthor : x\rProducer : Acrobat Distiller 6.0.1 for Macintosh\rNote that the DC and 
PRISM terms are encoded as my earlier examples and do not take account of a) how DC is defined as an XMP schema (i.e. the XMP value types for the seperate terms), or b) how PRISM might (because it isn’t yet) be defined as an XMP schema. Nor are identifier considerations fully taken into account. Nonetheless this gives more than an idea of what things could look like.\nNow, with ExifTool it is also possible to list out the terms by group, e.g.\n% exiftool -g1 445037a.pdf\r---- ExifTool ----\rExifTool Version Number : 6.95\r---- File ----\rFile Name : 445037a.pdf\rDirectory : .\rFile Size : 265 kB\rFile Modification Date/Time : 2007:07:26 16:18:17\rFile Type : PDF\rMIME Type : application/pdf\r---- PDF ----\rPage Count : 1\rModify Date : 2006:12:22 12:10:07Z\rCreate Date : 2006:12:22 11:46:18Z\rTitle : 4.1 N\u0026V NS NEW.indd\rTrapped : False\rCreator : InDesign: pictwpstops filter 1.0\rGTS PDFX Version : PDF/X-1:2001\rGTS PDFX Conformance : PDF/X-1a:2001\rAuthor : x\rProducer : Acrobat Distiller 6.0.1 for Macintosh\r---- XMP-xmp ----\rCreator Tool : InDesign: pictwpstops filter 1.0\rMetadata Date : 2006:12:22 12:10:07Z\r---- XMP-xmpMM ----\rDocument ID : uuid:4cd39128-2c8e-41c0-9cad-eea2a1fdb64f\r---- XMP-dc ----\rIdentifier : doi:10.1038/445037a\rDescription : doi:10.1038/445037a\rSource : Nature 445, 37 (2007)\rDate : 2007:01:04\rFormat : application/pdf\rPublisher : Nature Publishing Group\rLanguage : en\rRights : © 2007 Nature Publishing Group\r---- XMP-prism ----\rPublication Name : Nature\rIssn : 0028-0836\rE Issn : 1476-4679\rPublication Date : 2007-01-04\rCopyright : © 2007 Nature Publishing Group\rRights Agent : permissions@nature.com\rVolume : 445\rNumber : 7123\rStarting Page : 37\rEnding Page : 37\rSection : News and Views\rGoing back to the first example we can extract the (document) XMP packet as:\n% exiftool -xmp -b nature05428.pdf\r\u0026lt;?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d' bytes='1753'?\u0026gt;\r\u0026lt;rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'\rxmlns:iX='http://ns.adobe.com/iX/1.0/'\u0026gt;\r\u0026lt;rdf:Description about='uuid:3d686cee-18e6-483c-b1c9-e128e9f0d009'\rxmlns='http://ns.adobe.com/pdf/1.3/'\rxmlns:pdf='http://ns.adobe.com/pdf/1.3/'\u0026gt;\r\u0026lt;pdf:Producer\u0026gt;Acrobat Distiller 6.0.1 (Windows)\u0026lt;/pdf:Producer\u0026gt;\r\u0026lt;pdf:ModDate\u0026gt;2006-12-19T15:03:23+08:00\u0026lt;/pdf:ModDate\u0026gt;\r\u0026lt;pdf:CreationDate\u0026gt;2006-12-18T16:57:58+08:00\u0026lt;/pdf:CreationDate\u0026gt;\r\u0026lt;pdf:Title\u0026gt;untitled\u0026lt;/pdf:Title\u0026gt;\r\u0026lt;pdf:Creator\u0026gt;3B2 Total Publishing System 7.51n/W\u0026lt;/pdf:Creator\u0026gt;\r\u0026lt;/rdf:Description\u0026gt;\r\u0026lt;rdf:Description about='uuid:3d686cee-18e6-483c-b1c9-e128e9f0d009'\rxmlns='http://ns.adobe.com/xap/1.0/'\rxmlns:xap='http://ns.adobe.com/xap/1.0/'\u0026gt;\r\u0026lt;xap:CreatorTool\u0026gt;3B2 Total Publishing System 7.51n/W\u0026lt;/xap:CreatorTool\u0026gt;\r\u0026lt;xap:ModifyDate\u0026gt;2006-12-19T15:03:23+08:00\u0026lt;/xap:ModifyDate\u0026gt;\r\u0026lt;xap:CreateDate\u0026gt;2006-12-18T16:57:58+08:00\u0026lt;/xap:CreateDate\u0026gt;\r\u0026lt;xap:Format\u0026gt;application/pdf\u0026lt;/xap:Format\u0026gt;\r\u0026lt;xap:Title\u0026gt;\r\u0026lt;rdf:Alt\u0026gt;\r\u0026lt;rdf:li 
xml:lang='x-default'\u0026gt;untitled\u0026lt;/rdf:li\u0026gt;\r\u0026lt;/rdf:Alt\u0026gt;\r\u0026lt;/xap:Title\u0026gt;\r\u0026lt;xap:MetadataDate\u0026gt;2006-12-19T15:03:23+08:00\u0026lt;/xap:MetadataDate\u0026gt;\r\u0026lt;/rdf:Description\u0026gt;\r\u0026lt;rdf:Description about='uuid:3d686cee-18e6-483c-b1c9-e128e9f0d009'\rxmlns='http://ns.adobe.com/xap/1.0/mm/'\rxmlns:xapMM='http://ns.adobe.com/xap/1.0/mm/'\u0026gt;\r\u0026lt;xapMM:DocumentID\u0026gt;uuid:f598740b-ad11-41c5-a49e-7caffea783f0\u0026lt;/xapMM:DocumentID\u0026gt;\r\u0026lt;/rdf:Description\u0026gt;\r\u0026lt;rdf:Description about='uuid:3d686cee-18e6-483c-b1c9-e128e9f0d009'\rxmlns='http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/'\rxmlns:dc='http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/'\u0026gt;\r\u0026lt;dc:format\u0026gt;application/pdf\u0026lt;/dc:format\u0026gt;\r\u0026lt;dc:title\u0026gt;untitled\u0026lt;/dc:title\u0026gt;\r\u0026lt;/rdf:Description\u0026gt;\r\u0026lt;/rdf:RDF\u0026gt;\r\u0026lt;?xpacket end='r'?\u0026gt;%\rNote that this PDF also included XMP packets for illustrations but the tool extracted the main, or document, XMP packet.\nAnd now that it’s easier to extract the metadata one can look to do something more interesting. For example, if one has cwm installed (Tim BL’s Closed World Machine for semweb dabblings - a Python application, so again cross-platform) one can pipe the XMP packet into cwm as RDF/XML, verify it as valid RDF and read out in another format, e.g. RDF/N3. For the above example we can so this as follows.\nBut let me first define a pipeline to extract the XMP, a couple filters to strip out processing instructions (includes the open and close bracketing \u0026lt;?xpacket\u0026gt; XMP PI’s as well as an undocumented - legacy? - \u0026lt;?adobe\u0026gt; Adobe PI), and then fed into cwm as RDF/XML and read out as RDF/N3. (Note that instead of ExifTool to extract the XMP another tool could have been used, e.g. 
something based on the sample apps shipped with the Adobe XMP SDK, or something bespoke.)\n% alias get_n3\rexiftool -xmp -b !$ | grep -v \"\u0026lt;?\" | grep -v xmpmeta | cwm --rdf --n3\rWe can then simply request to get the metadata from this PDF in RDF/N3 format:\n% get_n3 nature05428.pdf\r#Processed by Id: cwm.py,v 1.164 2004/10/28 17:41:59 timbl Exp\r# using base file:/Users/tony/Xcode/xmp/dev/\r# Notation3 generation by\r# notation3.py,v 1.166 2004/10/28 17:41:59 timbl Exp\r# Base was: file:/Users/tony/Xcode/xmp/dev/\r@prefix rdf: \u0026lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026gt; .\r\u0026lt;uuid:3d686cee-18e6-483c-b1c9-e128e9f0d009\u0026gt; \u0026lt;http://ns.adobe.com/pdf/1.3/CreationDate\u0026gt; \"2006-12-18T16:57:58+08:00\";\r\u0026lt;http://ns.adobe.com/pdf/1.3/Creator\u0026gt; \"3B2 Total Publishing System 7.51n/W\";\r\u0026lt;http://ns.adobe.com/pdf/1.3/ModDate\u0026gt; \"2006-12-19T15:03:23+08:00\";\r\u0026lt;http://ns.adobe.com/pdf/1.3/Producer\u0026gt; \"Acrobat Distiller 6.0.1 (Windows)\";\r\u0026lt;http://ns.adobe.com/pdf/1.3/Title\u0026gt; \"untitled\";\r\u0026lt;http://ns.adobe.com/xap/1.0/CreateDate\u0026gt; \"2006-12-18T16:57:58+08:00\";\r\u0026lt;http://ns.adobe.com/xap/1.0/CreatorTool\u0026gt; \"3B2 Total Publishing System 7.51n/W\";\r\u0026lt;http://ns.adobe.com/xap/1.0/Format\u0026gt; \"application/pdf\";\r\u0026lt;http://ns.adobe.com/xap/1.0/MetadataDate\u0026gt; \"2006-12-19T15:03:23+08:00\";\r\u0026lt;http://ns.adobe.com/xap/1.0/ModifyDate\u0026gt; \"2006-12-19T15:03:23+08:00\";\r\u0026lt;http://ns.adobe.com/xap/1.0/Title\u0026gt; [\ra rdf:Alt;\rrdf:_1 \"untitled\"@x-default ];\r\u0026lt;http://ns.adobe.com/xap/1.0/mm/DocumentID\u0026gt; \"uuid:f598740b-ad11-41c5-a49e-7caffea783f0\";\r\u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/format\u0026gt; \"application/pdf\";\r\u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/title\u0026gt; \"untitled\" .\r#ENDS\rOr writing that out again with QNames for readability (and dropping the UUID as RDF subject as recommended by latest XMP spec) we have:\n#Processed by Id: cwm.py,v 1.164 2004/10/28 17:41:59 timbl Exp\r# using base file:/Users/tony/Xcode/xmp/dev/\r# Notation3 generation by\r# notation3.py,v 1.166 2004/10/28 17:41:59 timbl Exp\r# Base was: file:/Users/tony/Xcode/xmp/dev/\r@prefix dc: \u0026lt;http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; .\r@prefix pdf: \u0026lt;http://ns.adobe.com/pdf/1.3/\u0026gt; .\r@prefix xmp: \u0026lt;http://ns.adobe.com/xap/1.0/\u0026gt; .\r@prefix xmpMM: \u0026lt;http://ns.adobe.com/xap/1.0/mm/\u0026gt; .\r\u0026lt;\u0026gt; pdf:CreationDate \"2006-12-18T16:57:58+08:00\";\rpdf:Creator \"3B2 Total Publishing System 7.51n/W\";\rpdf:ModDate \"2006-12-19T15:03:23+08:00\";\rpdf:Producer \"Acrobat Distiller 6.0.1 (Windows)\";\rpdf:Title \"untitled\";\rxmp:CreateDate \"2006-12-18T16:57:58+08:00\";\rxmp:CreatorTool \"3B2 Total Publishing System 7.51n/W\";\rxmp:Format \"application/pdf\";\rxmp:MetadataDate \"2006-12-19T15:03:23+08:00\";\rxmp:ModifyDate \"2006-12-19T15:03:23+08:00\";\rxmp:Title [\ra rdf:Alt;\rrdf:_1 \"untitled\"@x-default ];\rxmpMM:DocumentID \"uuid:f598740b-ad11-41c5-a49e-7caffea783f0\";\rdc:format \"application/pdf\";\rdc:title \"untitled\" .\r#ENDS\rNow just imagine that there were something a little more interesting in there. Like a DOI. Like descriptive metadata, perhaps. 
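\n(And if Perl and cwm aren’t to hand, the same trick is straightforward from Python - a rough sketch, assuming the packet is stored uncompressed in the file as it is here, and leaning on the rdflib package for the RDF end; it naively grabs the first packet it finds, which may not always be the document one:)\n
from rdflib import Graph\n
\n
def xmp_to_n3(path):\n
    # Scan the raw bytes for an RDF/XML island rather than parsing the PDF structure.\n
    data = open(path, 'rb').read()\n
    start = data.find(b'<rdf:RDF')\n
    if start == -1:\n
        raise ValueError('no XMP packet found')\n
    end = data.find(b'</rdf:RDF>', start) + len(b'</rdf:RDF>')\n
    g = Graph()\n
    g.parse(data=data[start:end].decode('utf-8', 'replace'), format='xml')\n
    return g.serialize(format='n3')\n
\n
print(xmp_to_n3('nature05428.pdf'))  # the same metadata-poor PDF as above\n
Much the same graph as the cwm route, just serialized by a different library.\n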
🙂\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/pdfa.org/", "title": "pdfa.org", "subtitle":"", "rank": 1, "lastmod": "2007-08-23", "lastmod_ts": 1187827200, "section": "Blog", "tags": [], "description": "Following on from yesterday’s post I just came across this very useful source of information on PDF/A: the PDF/A Conformance Center. This provides links to resources such as this whitepaper PDF/A - A new Standard for Long-Term Archiving, and a number of technical notes, especially Metadata and PDF/A-1(also available as a PDF). (This latter corrects some errors in the ISO standard which are to be redressed in a forthcoming Technical Corrigendum later this year.", "content": "Following on from yesterday’s post I just came across this very useful source of information on PDF/A: the PDF/A Conformance Center. This provides links to resources such as this whitepaper PDF/A - A new Standard for Long-Term Archiving, and a number of technical notes, especially Metadata and PDF/A-1(also available as a PDF). (This latter corrects some errors in the ISO standard which are to be redressed in a forthcoming Technical Corrigendum later this year.)\nThe site also links to the standard, to a FAQ, to PDF/A products and to news and events. There’s also an RSS feed and a discussion forum.\nStill difficult to find examples of PDF/A though (the discussion forum doesn’t throw up too much on that score) although at least the Technical Note linked to above is a PDF/A-1 document as can be seen from this XMP description:\n\u0026lt;rdf:Description rdf:about=\"\" xmlns:pdfaid=\"http://www.aiim.org/pdfa/ns/id/\"\u0026gt; \u0026lt;pdfaid:part\u0026gt;1\u0026lt;/pdfaid:part\u0026gt; \u0026lt;pdfaid:conformance\u0026gt;A\u0026lt;/pdfaid:conformance\u0026gt; \u0026lt;/rdf:Description\u0026gt; As noted before, PDF/A may be more (and less) than Crossref publishers require at this time, but nonetheless it is certainly a useful yardstick as regards embedding metadata within a PDF and is anyway a technology worth tracking in its own right.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/weird-scenes-inside-the-gold-mine/", "title": "Weird Scenes Inside the Gold Mine", "subtitle":"", "rank": 1, "lastmod": "2007-08-22", "lastmod_ts": 1187740800, "section": "Blog", "tags": [], "description": "So, following up on my recent posts here on Metadata in PDFs (Strategies, Use Cases, Deployment), I finally came across PDF/A and PDF/X, two ISO standardized subsets of PDF. the former (ISO 19005-1:2005) for archiving and the latter (ISO 15929:2002, ISO 15930-1:2001, etc.) for prepress digital data exchange.\nBoth formats share some common ground such as minimizing surprises between producer and consumer and keeping things open and predictable. But my interest here is specifically in metadata and to see what guidance these standards might provide us. Not unsurprisingly, metadata is a key issue for PDF/A, less so for PDF/X. I’ll discuss PDF/X briefly but the bulk of this post is focussed on PDF/A. See below.\n", "content": "So, following up on my recent posts here on Metadata in PDFs (Strategies, Use Cases, Deployment), I finally came across PDF/A and PDF/X, two ISO standardized subsets of PDF. the former (ISO 19005-1:2005) for archiving and the latter (ISO 15929:2002, ISO 15930-1:2001, etc.) for prepress digital data exchange.\nBoth formats share some common ground such as minimizing surprises between producer and consumer and keeping things open and predictable. 
But my interest here is specifically in metadata and to see what guidance these standards might provide us. Not unsurprisingly, metadata is a key issue for PDF/A, less so for PDF/X. I’ll discuss PDF/X briefly but the bulk of this post is focussed on PDF/A. See below.\nPDF/X\nThe main reference I am using here is the “Application Notes for PDF/X Standards” cited below [PDF/X 2]. There are two key sections which deal with metadata in PDF/X: “2.3 Identification and conformance”, and “2.20 Document identification and metadata”.\nSection 2.3 states that a conforming PDF/X file has the key “/GTS_PDFXVersion” in the document information dictionary, and (depending on version) may or may not have the key “/GTS_PDFXConformance“.\nSection 2.20 then talks about inclusion of a document ID within the document trailer to ensure correct identification of the file. It then goes on specifically to say:\n“Additionally, the use of the PDF version 1.4 Metadata key is allowed. Note that although information placed using this mechanism may be beneficial to production processes, any reader that is not PDF version 1.4 compliant may ignore this information.”\nThat is, PDF/X requires the use of a document information dictionary with the key “/GTS_PDFXVersion” (and as version demands also the key “/GTS_PDFXConformance“) to signal conformance. It is lukewarm, though with regard to the inclusion of XMP metadata (as would be indicated by the “/Metadata” key in the document catalog).\nPDF/A\nThe main reference I’m using here is the “ISO DIS 19005-1:2005” draft cited below [PDF/A, 1].\nCompletely differently from PDF/X, PDF/A puts all its attention on the XMP metadata, while at the same time acknowledging that the document information dictionary may be used. Note 1 in Section 6.7.3 notes that:\n“Since a document information dictionary is allowed within a conforming file, it is possible for a single file to be both PDF/A-1 and PDF/X [12, 13] conformant.”\nThe non-normative Annex B also has this to say:\n“Use of non-XMP metadata at the file level is strongly discouraged as there is no assurance that such metadata can be preserved in accordance with this specification. In cases where non-XMP metadata is present, the preference is to convert it to XMP, embed it in the file, and describe the conversion in the xmpMM:History property.”\nIt’s not fully clear here whether “file level” is intended to be the same as “document level”. But note that this anyway is from a non-normative section and does not reflect the actual normative wording used in the standard (Section 6.7.3) which allows the use of the document information dictionary.\nThe key section for our purposes in the standard is “6.7 Metadata”.\nSection “6.7.2 Properties” says:\n“The document catalog dictionary of a conforming file shall contain the Metadata key. The metadata stream that forms the value of that key shall conform to XMP Specification. All metadata properties pertaining to a file that are embedded in that file, except for document information dictionary entries that have no analogue in predefined XMP schemas as defined in 6.7.3, shall be in the form of one or more XMP packets as defined by XMP Specification, 3. Metadata properties shall be specified in predefined XMP schemas or in one or more extension schemas that comply with XMP requirements. Metadata object stream dictionaries shall not contain the Filter key.”\nThis is quite something. 
Not only is PDF/A fully supportive of XMP (even if Adobe sometimes appear to be less than enthusiastic), it actually requires it. Further, it says that the XMP packets shall be human readable (well, apart from the small matter of XML, that is :).\nSection “6.7.3 Document information dictionary” then goes on to say:\n“A document information dictionary may appear within a conforming file. If it does appear, then all of its entries that have analogous properties in predefined XMP schemas, as defined by Table 1, shall also be embedded in the file in XMP form with equivalent values. Any document information dictionary entry not listed in Table 1 shall not be embedded using a predefined XMP schema property.”\nThis says that the primary source of metadata will be the XMP packet and that, as far as possible, metadata properties in the document information dictionary will be mapped directly to the XMP packet as specified and will not cause any conflict.\nI’m not quite sure how to read the last sentence. Does that mean that if one were to use an “/Identifier” key in the document information dictionary then one couldn’t map it as “dc:identifier”, say, in the XMP? I think that would be OK. My read is that it precludes the use of a predefined term within the information dictionary, so one couldn’t have something like “dc:identifier” in the information dictionary.\nNote also that the one quirky mapping in Table 1 which arises from the need to sync the information dictionary entries with the XMP properties is this:\n“If the dc:creator property is present in XMP metadata then it shall be represented by an ordered Text array of length one whose single entry shall consist of one or more names. The value of dc:creator and the document information dictionary Author entry shall be equivalent.”\nThis means that:\n“The document information dictionary entry:\n/Author (Peter, Paul, and Mary) is equivalent to the XMP property:\n\u0026lt;dc:creator\u0026gt; \u0026lt;rdf:Seq\u0026gt; \u0026lt;rdf:li\u0026gt;Peter, Paul, and Mary\u0026lt;/rdf:li\u0026gt; \u0026lt;/rdf:Seq\u0026gt; \u0026lt;/dc:creator\u0026gt; ”\nWeird, or what? Well, of course, I see the rationale, but …\nThe remaining sections of interest here start with “6.7.6 File identifiers”, which says that:\n“A conforming file should have one or more metadata properties to characterize, categorize, and otherwise identify the file. This part of ISO 19005 does not mandate any specific identification scheme. Identifiers may be externally based, such as an International Standard Book Number (ISBN) or a Digital Object Identifier (DOI), or internally based, such as a Globally Unique Identifier/Universally Unique Identifier (GUID/UUID) or another designation assigned during workflow operations.”\nHmm, not that DOI is a file identifier necessarily. 
And certainly not in the Crossref usage where it denotes a work rather than a manifestation.\nSection “6.7.8 Extension schemas” talks about the need to rigorously declare any extension (undefined) schema with the following PDF/A extension schema description schema properties:\npdfaSchema:schema\npdfaSchema:namespaceURI\npdfaSchema:prefix\npdfaSchema:property\npdfaSchema:valueType\nI think this means that, were PRISM terms to be used, the extension schema terms would need to be defined.\nAnd finally, the section “6.7.11 Version and conformance level identification” says that:\n“The PDF/A version and conformance level of a file shall be specified using the PDF/A Identification extension schema defined in this clause.”\nThis uses the PDF/A identification schema properties:\npdfaid:part\npdfaid:amd\npdfaid:conformance\nSummary\nWhat does this all mean? Main lessons are to be learned from PDF/A which endorses (well, actually mandates) the use of XMP. Moreover, it requires that the document information dictionary and the XMP packet be in sync. Why it signals conformance through the XMP packet rather than through the information dictionary (as does PDF/X) is a mystery. Or, at least, why it does not also specify a means to signal conformance through the information dictionary. The latter is readily get-at-able. A very crude hack to extract a PDF information dictionary can be as simple as\n% strings \u0026lt;filename.pdf\u0026gt; | grep \"/Producer\" \u0026lt;span \u0026gt;or some other likely key. That will usually pull a line containing the full dictionary. The XMP packet is much harder to extract and then you’re still left with XML to parse.\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;My gut feeling is that both mechanisms should be required (and sync’ed). And it’s hard not to see the DOI being required in both sections. This leads to considerations on which schemas/terms to use and how to render the DOI. I am biased and would prefer to see it rendered in URI form, i.e. in an inclusive rather than an exclusive representation. DOI is special - but not that special. Other identifiers are also useful.\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;As per my \u0026lt;a href=\u0026quot;/blog/metadata-in-pdf-1.-strategies/\u0026quot;\u0026gt;earlier post\u0026lt;/a\u0026gt;, I could imagine that both DC and PRISM terms could be added to an XMP packet. I’m not sure whether there is any real interest at this time to follow the PDF/A specification or rather to be informed by it. There seems to be a lot of overhead and I’m still looking to meet up with some examples (either in the wild or fabricated) to see what it might look like in practice.\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;Interested as always in others’ views.\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;\u0026lt;b\u0026gt;\u0026lt;i\u0026gt;References\u0026lt;/i\u0026gt;\u0026lt;/b\u0026gt;\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;So, note that these are ISO documents and as such are available for purchase from the \u0026lt;a href=\u0026quot;https://web.archive.org/web/20070614003151/http://www.iso.org/iso/en/prods-services/ISOstore/store.html\u0026quot;\u0026gt;ISO Store\u0026lt;/a\u0026gt;. 
(The citations above are linked to the relevant ISO Store pages.)\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;See also this recent post (August 1, 2007) by Rick Jelliffe on XML.com: \u0026lt;a href=\u0026quot;http://www.oreillynet.com/xml/blog/2007/08/where_to_get_iso_standards_on.html\u0026quot;\u0026gt;Where to get ISO Standards on the Internet free\u0026lt;/a\u0026gt;.\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;There appear to be three main sources of information for these technologies: the ISO standards, application notes and FAQs. NPES (The Association for Suppliers of Printing, Publishing and Converting Technologies) hosts pages with relevant links - see \u0026lt;a href=\u0026quot;https://web.archive.org/web/20050504132522/http://www.npes.org/standards/\u0026quot;\u0026gt;here\u0026lt;/a\u0026gt;.\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;Below are listed specific links to freely available documentation that may be useful. Note that I have not purchased the ISO standards but have made use of an ISO DIS (draft international standard) for PDF/A and Application Notes for PDF/X by CGATS. (As yet there are no links to Application Notes for PDF/A.)\u0026lt;/span\u0026gt; \u0026lt;span \u0026gt;\u0026lt;a href=\u0026quot;http://www.npes.org/standards/toolspdfx.html\u0026quot;\u0026gt;PDF/X\u0026lt;/a\u0026gt;\u0026lt;/span\u0026gt; 1. \u0026lt;span \u0026gt;(No Draft International Standard found.)\u0026lt;/span\u0026gt; Application Notes for PDF/X Standards Version 3, September 2002, CGATS Application Notes for PDF/X Standards Version 4 (PDF/X-1a:2003, PDF/X-2:2003 \u0026amp; PDF/X-3:2003), September 2006 , CGATS Frequently Asked Questions, November 2005, Martin Bailey, Chair, ISO/TC130/WG2/TF2 (PDF/X)PDF/A Draft International Standard ISO/DIS 19005-1, ISO/TC171/SC2, Document management— Electronic document file format for long-term preservation — Part 1: Use of PDF 1.4 (PDF/A-1) (No Application Notes for PDF/A available yet.) Frequently Asked Questions (FAQs), ISO 19005-1:2005, PDF/A-1, July 2006, PDF/A Joint Working Group ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/new-sru-1.2-website/", "title": "New SRU (1.2) Website", "subtitle":"", "rank": 1, "lastmod": "2007-08-08", "lastmod_ts": 1186531200, "section": "Blog", "tags": [], "description": "From Ray Denenberg’s post to the SRU Listserv yesterday:\n_“The new SRU web site is now up: http://www.loc.gov/sru/\nIt is completely reorganized and reflects the version 1.2 specifications.\n(It also includes version 1.1 specifications, but is oriented to version\n1.2.)\n…\nThere is an official 1.1 archive under the new site,\nhttps://web.archive.org/web/20080724063403/http://www.loc.gov/sru/sru1-1archive/. And note also, that the new spec incorporates both version 1.1 and 1.2 (anything specific to version 1.1 is annotated as such).", "content": "From Ray Denenberg’s post to the SRU Listserv yesterday:\n_“The new SRU web site is now up: http://www.loc.gov/sru/\nIt is completely reorganized and reflects the version 1.2 specifications.\n(It also includes version 1.1 specifications, but is oriented to version\n1.2.)\n…\nThere is an official 1.1 archive under the new site,\nhttps://web.archive.org/web/20080724063403/http://www.loc.gov/sru/sru1-1archive/. 
And note also, that the new spec incorporates both version 1.1 and 1.2 (anything specific to version 1.1 is annotated as such).”_\nInterested to learn if any Crossref publishers are currently implementing SRU.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/handle-plugin-some-notes/", "title": "Handle Plugin: Some Notes", "subtitle":"", "rank": 1, "lastmod": "2007-08-02", "lastmod_ts": 1186012800, "section": "Blog", "tags": [], "description": "The first thing to note is that this demo (the Acrobat plugin) is an application. And that comes with its own baggage, i.e. this is a Windows only plugin and is targeted at Acrobat Reader 8. On a wider purview the application merely bridges an identifier embedded in the media file and the handle record filed against that identifier and delivers some relevant functionality. The data (or metadata) declared in the PDF and in the associated handle if rich enough and structured openly can also be used by other applications. I think this is a key point worth bearing in mind, that the demo besides showing off new functionalities is also demonstrating how data (or metadata) can be embedded at the respective endpoints (PDF, handle).\nSome initial observations follow below.\n", "content": "The first thing to note is that this demo (the Acrobat plugin) is an application. And that comes with its own baggage, i.e. this is a Windows only plugin and is targeted at Acrobat Reader 8. On a wider purview the application merely bridges an identifier embedded in the media file and the handle record filed against that identifier and delivers some relevant functionality. The data (or metadata) declared in the PDF and in the associated handle if rich enough and structured openly can also be used by other applications. I think this is a key point worth bearing in mind, that the demo besides showing off new functionalities is also demonstrating how data (or metadata) can be embedded at the respective endpoints (PDF, handle).\nSome initial observations follow below.\nInstall problems\nAs noted in my previous post I had to haul out the old HP laptop and engage in a dialog with our IT folks to get both Acrobat Reader 8 and the plugin installed as I did not have admin privileges on my own machine. Wasn’t pretty but they were kind.\nUseability\nI don’t know what’s happening here but from our network it seems as if the first attempts to contact the handle server are timing out and the handle client in the plugin is failing over to an alternate route (HTTP?). So, the plugin doesn’t work as expected since the user has to wait an untenably long time (somewhere between 60s and 90s). Of course, if a certain network access policy is required that would need to be specified and implemented by institutions for their users.\nI used both Firefox and Internet Explorer browsers and ran into occasional Acrobat plugin crashes which would lock up the browser. Due to the severe network access problems noted above I wasn’t able to rigorously test this further apart from to note that it was “buggy”.\nFunctionality\nI tested most of the demo cases, but was hampered by the useability restrictions noted above. I didn’t see the “Related Links” or get the “Collections” to work but did see all the other cases and tried the buttons provided.\nOne thing of note is that the Crossref metadata record was spoofed and returned from a stored data file rather than an active query to Crossref. 
A real query would have been interesting to gauge the impact of network latency, although the lookup point is made by hardwiring a response.\nPDF Metadata\nOK, so the document DOI is embedded in the PDF both in the document information dictionary and in the (document) metadata stream within an XMP packet. This is great, although I do have some specific comments about how the DOI is actually disclosed. See my Metadata in PDF: 2. Use Cases post for details.\nHandle Data\nHandle types are generally a matter for the handle administrators to oversee, although the unregulated use of new types is not going to help foster interoperability between handle applications. In passing I note that the handles used in this demo\n10.5555/pdftest-collection 10.5555/pdftest-collection-item1 10.5555/pdftest-collection-item2 10.5555/pdftest-collection-item3 10.5555/pdftest-crossref 10.5555/pdftest-kernelmetadata 10.5555/pdftest-multires 10.5555/pdftest-rights 10.5555/pdftest-version make use of the following handle types (periods and underscores used as below)\nCOLLECTION COLLECTION_ITEM HS_ADMIN HS_MODIFIED HDL_MD HDL.RIGHTS HDL.XREF There is some degree of variability here which presumably will be managed better with a central handle type registry.\nDOI/Handle\nAnd lastly, this demo raises questions again about DOI and handle boundaries. From a handle viewpoint a DOI is nothing more than a branded handle, whereas from a DOI viewpoint a DOI is a specific handle profile with governance and policies, and its own service portfolio. The two terms should not be used interchangeably, which I fear is where some of the demo details would lead us. As a very crude analogy (and with apologies to Bob Kahn), I would see the relationship between DOI and handle as not being dissimilar from that between TCP and IP.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-in-pdf-3.-deployment/", "title": "Metadata in PDF: 3. Deployment", "subtitle":"", "rank": 1, "lastmod": "2007-08-02", "lastmod_ts": 1186012800, "section": "Blog", "tags": [], "description": "So, assuming we know the form of the metadata we wish to add to our PDFs (or else to comply with if there is already a set of guidelines, or some industry initiative in effect) how can we realize this? And, on the flip side, how can we make it easier for consumers to extract metadata we have embedded in our PDFs?\nBelow are some considerations on deploying metadata in PDFs and consumer access.\n", "content": "So, assuming we know the form of the metadata we wish to add to our PDFs (or else to comply with if there is already a set of guidelines, or some industry initiative in effect) how can we realize this? And, on the flip side, how can we make it easier for consumers to extract metadata we have embedded in our PDFs?\nBelow are some considerations on deploying metadata in PDFs and consumer access.\nWrite New\nObviously the best option would be to speak to one’s suppliers and to get metadata added to the PDF at create time. This leads to questions such as:\nWhat metadata do we have available in the workflow process? Do we have the full set we wish to write, or just a subset? Do we include metadata in the document information dictionary, or in the document metadata stream, or both? OK, so we’ve decided to (also) include an XMP packet. So, now do we make that XMP packet read only or write? That is, do we allow the possibility of further edits by adding in trailing whitespace and marking it as “write”? 
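To make that last option a little more concrete, here is a minimal Python sketch (the helper name and the 2 KB padding figure are my own choices) of wrapping serialized RDF/XML in an xpacket that is left open for later in-place edits:

def make_xmp_packet(rdf_xml, padding=2048):
    # Wrap serialized RDF/XML in an XMP packet. The trailing whitespace and
    # the end="w" marker leave room for later in-place edits; a read-only
    # packet would use end="r" and could drop the padding.
    header = '<?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>\n'
    body = '<x:xmpmeta xmlns:x="adobe:ns:meta/">\n' + rdf_xml + '\n</x:xmpmeta>\n'
    pad = (" " * 99 + "\n") * (padding // 100)
    return header + body + pad + '<?xpacket end="w"?>'

Dropping such a packet into a new PDF is then a job for the production toolchain; the point here is just what the read-only-or-write question amounts to at the byte level.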
Write Update\nWhat possibilities exist for updating legacy PDF archives?\nThe cleanest means of updating a PDF is in-place edits. This maintains the number of PDF objects together with their lengths and byte offsets. Specifically we are interested in metadata objects. There isn’t too much one can do with the document information dictionary apart from overwriting a field value or substituting a field. This is something that may be possible on a “one off” basis only. On the other hand, XMP packets are ripe for updating if they are set in “write” mode and have trailing whitespace. This can be used to supplement the metadata already contained in the packet.\nThere is some “wiggle” room, however, even in read-only XMP packets which have no trailing whitespace. Some XMP packets may include unused default namespace declarations and/or empty elements. These could be safely stripped and used for more positive purposes. This may not be enough to write in a full metadata set, but could be enough to squeeze in the DOI.\nThe usual way to update a PDF file is to append new objects. This means that a replacement document information dictionary and (document) metadata stream can be provided without worrying about shoe-horning the data into any leftover space in the original objects.\nAnd this would be just fine, but for the small matter of Linearized PDFs. These are widely deployed as web friendly PDFs ready for byte serving and are written out in a strictly determined ordering. (See Appendix F, “Linearized PDF” in the PDF Reference Manual.) The manual does, however, say (Section F.4.6, “Accessing an Updated File”) this about updating a Linearized PDF:\n“As stated earlier, if a Linearized PDF file subsequently has an incremental update appended to it, the linearization and hints are no longer valid. Actually, this is not necessarily true, but the viewer application must do some additional work to validate the information.\n…\nFor a PDF file that has received only a small update, this approach may be worthwhile. Accessing the file this way is quicker than accessing it without hints or retrieving the entire file before displaying any of it.”\nThis may warrant some further investigation.\nRead\nNow for consumers, how can publishers help users to read the metadata embedded in a file? The document information dictionary is reasonably accessible and is in the clear. It probably would not provide for much in terms of metadata but should anyway hopefully contain the DOI.\nThe XMP SDK is still far too unwieldy for wide use. Things would be much improved if there were even some SWIG wrappers for more popular languages such as Perl, Python, Ruby, etc. around the C++ code. The other thing to bear in mind is that the XMP SDK is dealing with generalities such as constructing and parsing XMP objects for reading and updating in a range of binary files. A consumer metadata app would only be interested in extracting the RDF/XML from the PDF. This can then be dealt with as appropriate to the application. Another problem concerns multiple XMP packets occurring in the same PDF, only one of them being the main (or document) XMP packet. This may be a non-problem in that all the RDF/XML could be extracted and the main XMP packet would be identifiable through the metadata it provided (a rough sketch of that idea follows below).\nI suggest the best way to really help consumers is to go ahead and embed metadata in the first place, then there would be a clear impetus for extracting it. 
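Picking up the multiple-packets point above, a rough Python sketch of that idea (no XMP SDK involved; the heuristic of spotting the document-level packet by its Dublin Core or PRISM markup is my own):

import re

def all_xmp_packets(path):
    # Return every XMP packet found in the raw bytes of the file.
    with open(path, "rb") as f:
        data = f.read()
    return re.findall(rb"<\?xpacket begin=.*?\?>.*?<\?xpacket end=.*?\?>",
                      data, re.DOTALL)

def main_xmp_packet(path):
    # Treat the packet that carries descriptive terms (here: Dublin Core or
    # PRISM namespaces) as the main, document-level packet.
    for packet in all_xmp_packets(path):
        if b"dc/elements/1.1" in packet or b"prismstandard.org" in packet:
            return packet.decode("utf-8", "replace")
    return None

Crude, certainly, but it reflects the point that once descriptive metadata is in there, the main packet more or less identifies itself.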
Even if a fuller metadata set is not being considered at this time, then at least the DOI should be considered for embedding in the PDF as a “hook” for further services. The handle plugin is a really good example of just such a downstream application.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prism-2.0/", "title": "PRISM 2.0", "subtitle":"", "rank": 1, "lastmod": "2007-08-02", "lastmod_ts": 1186012800, "section": "Blog", "tags": [], "description": "Only just caught up with this but the PRISM 2.0 draft is now available (since July 12) for public comment. See this posted by Dianne Kennedy:\n_“Just a note to let you know that PRISM 2.0 has just been posted at www.prismstandard.org . This is the first major revision to PRISM. We have incorporated new elements to support online content and have expanded and revised our controlled vocabularies. In addition we have added a profile to support PRISM in an XMP environment.", "content": "Only just caught up with this but the PRISM 2.0 draft is now available (since July 12) for public comment. See this posted by Dianne Kennedy:\n_“Just a note to let you know that PRISM 2.0 has just been posted at www.prismstandard.org . This is the first major revision to PRISM. We have incorporated new elements to support online content and have expanded and revised our controlled vocabularies. In addition we have added a profile to support PRISM in an XMP environment.\nWe invite you to review the new specification (in 6 documents organized by namespace) and provide your comments before September 15. Please just email comments and questions to me, dkennedy@idealliance.org. “\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-in-pdf-1.-strategies/", "title": "Metadata in PDF: 1. Strategies", "subtitle":"", "rank": 1, "lastmod": "2007-08-01", "lastmod_ts": 1185926400, "section": "Blog", "tags": [], "description": "Emboldened by my own researches, by the recent handle plugin announcement from CNRI (on which, more in a follow-on post), and by Alexander Griekspoor’s comment to my earlier post, I thought I’d write a more extensive piece about embedding metadata in PDF with a view to the following:\nDiscover what other publishers are currently doing Stimulate discussions between content providers and/or consumers Lay groundwork for a Crossref best practice guidelines Why should Crossref be interested?", "content": "Emboldened by my own researches, by the recent handle plugin announcement from CNRI (on which, more in a follow-on post), and by Alexander Griekspoor’s comment to my earlier post, I thought I’d write a more extensive piece about embedding metadata in PDF with a view to the following:\nDiscover what other publishers are currently doing Stimulate discussions between content providers and/or consumers Lay groundwork for a Crossref best practice guidelines Why should Crossref be interested? Well, at minimum to embed the DOI along with the digital asset would seem to be inherently “a good thing”. (And, in fact, this is precisely the approach that CNRI have taken for their plugin demos. I’ll look later at what they actually did and consider whether that is a model that Crossref publishers might usefully follow.)\nWhy include the DOI as an explicit piece of metadata rather than have it included by virtue of its appearance in a content section? The main reason is that it is then unambiguously accessible. 
Content sections in PDFs are typically filtered and sometimes encrypted), whereas metadata is usually plain text and moreover is marked up as to field type.\nAnother question concerns whether to add in the identifier alone, or to embed a full metadata set. Why not just embed the identifier and visit Crossref for the metadata? This is feasible in some cases although it does involve an extra network trip, requires an application to service the identifier and is obviously not workable in offline contexts. Seems like a “no-brainer” to include a fuller description from the outset. Note that publishers frequently make some of this information available anyway in other metadata delivery channels, e.g. RSS feeds.\nThere are two (complementary) approaches to embedding document-level metadata in a PDF:\nA - Document Information Dictionary This is an optional object (a dictionary) referenced from the PDF trailer dictionary. Example: \u0026lt;\u0026lt; /Title ( PostScript Language Reference, Third Edition ) /Author ( Adobe Systems Incorporated ) /Creator ( Adobe FrameMaker 5.5.3 for Power Macintosh\u0026amp;reg; ) /Producer ( Acrobat Distiller 3.01 for Power Macintosh ) /CreationDate ( D:19970915110347-08\u0026#39;00\u0026#39; ) /ModDate ( D:19990209153925-08\u0026#39;00\u0026#39; ) \u0026gt;\u0026gt; endobj B - (Document) Metadata Stream This is an optional object (a stream) referenced from the document catalog, itself referenced from the PDF trailer dictionary. Example: 2 0 obj \u0026lt;\u0026lt; /Type /Metadata /Subtype /XML /Length 1706 \u0026gt;\u0026gt; stream \u0026lt;?xpacket begin=\u0026#39;\u0026#39; id=\u0026#39;W5M0MpCehiHzreSzNTczkc9d\u0026#39;?\u0026gt; \u0026lt;!-- RDF/XML goes here --\u0026gt; \u0026lt;?xpacket end=\u0026#39;w\u0026#39;? \u0026gt; endstream endobj Both approaches usually make the embedded metadata in the PDF available in the clear, whereas content is frequently filtered and sometimes encrypted. (Note that the information dictionary is always in the clear, while the metadata stream can be filtered and rendered unreadable although in practice this tends not to be filtered.)\nBelow I examine both approaches and see how they can be used to encode the kind of metadata that scholarly publishers are accustomed to.\nA - Document Information Dictionary\nNote that keys in the document information dictionary divide equally between the logical document description (non-asterisked keys) and the physical asset description (asterisked keys):\nTitle Author Subject Keywords \u0026amp;nbsp; * Creator * Producer * CreationDate * ModDate * Trapped This is the complete listing of keys in the PDF specification, although foreign keys are allowed (and ignored).\nWhat is missing here is any document identifier and/or any other descriptive metadata. From a Crossref point of view the identifier (the DOI) is a “hook” into the metadata record and so at minimum this could usefully be added. The question then is how? Either the identifier can be squeezed into one of the existing fields (“Title”, “Author”, “Subject”, “Keywords”) or else a new foreign key could be created.\nIMO if an existing keyword is used then I would opt for “Subject” or “Keywords”, and probably the former. 
If, on the other hand, a new foreign key were to be created, I would choose something generic and (in keeping with the other terms) use something like “Identifier” (rather than, say, “DOI”).\nOf preference, I think I would go for the latter (“Identifier”) but if one wanted to make this more robust one could think of also adding in a known term (e.g. “Subject” or “Keywords”). So, to include metadata for the news article “Cosmology: Ripples of early starlight” printed in Nature magazine, Nature 445, 37 (2007): doi:10.1038/445037a, we might include the following terms in the document information dictionary as:\n\u0026lt;\u0026lt; /Title ( Cosmology: Ripples of early starlight ) /Author ( Craig J. Hogan ) /Subject ( doi:10.1038/445037a ) /Keywords ( cosmology infrared protogalaxy starlight ) \u0026lt;b\u0026gt;/Identifier ( doi:10.1038/445037a )\u0026lt;/b\u0026gt; /Creator ( ... ) /Producer ( ... ) /CreationDate ( ... ) /ModDate ( ... ) \u0026gt;\u0026gt; endobj where the bolded term represents a foreign key/value pair.\nNote: This (including the DOI in the “Subject” field) is a fix intended to get the DOI listed by Adobe apps which would not otherwise recognize the foreign key “Identifier”.\nSince it is not really feasible to include separate enumerated fields within the information dictionary (although it could be done), one might also consider including a descriptive citation field as a foreign key, e.g., something like:\n/Source (Nature 445, 37 \\(2007\\)) Alternatively that might better be presented as the “Subject” along with the DOI. Which would then limit the number of foreign keys to one (“Identifier”).\nB - (Document) Metadata Stream\nThe metadata stream with its use of XMP packets (wrapping RDF/XML instances) is a much more flexible approach to embedding metadata and allows multiple schemas to be used. As noted in my previous post here on XMP, PDFs with XMP packets mostly use media-specific terms and schemas, although there is also a token showing of DC. 
From a descriptive metadata point of view we would more likely make use of DC and PRISM for our schemas.\nReprising the example from the previous post (and again using citation example listed above) this would mean we may be inclined to include the following terms for a scholarly work (here in RDF/N3 for readability):\ndc:title \u0026#34;Cosmology: Ripples of early starlight\u0026#34; ; dc:identifier \u0026#34;doi:10.1038/445037a\u0026#34; ; dc:source \u0026#34;Nature 445, 37 (2007)\u0026#34; ; dc:date \u0026#34;2007-01-04\u0026#34; ; dc:format \u0026#34;application/pdf\u0026#34; ; dc:publisher \u0026#34;Nature Publishing Group\u0026#34; ; dc:language \u0026#34;en\u0026#34; ; dc:rights \u0026#34;© 2007 Nature Publishing Group\u0026#34; ; \u0026amp;nbsp; prism:publicationName \u0026#34;Nature\u0026#34; ; prism:issn \u0026#34;0028-0836\u0026#34; ; prism:eIssn \u0026#34;1476-4679\u0026#34; ; prism:publicationDate \u0026#34;2007-01-04\u0026#34; ; prism:copyright \u0026#34;© 2007 Nature Publishing Group\u0026#34; ; prism:rightsAgent \u0026#34;permissions@nature.com\u0026#34; ; prism:volume \u0026#34;445\u0026#34; ; prism:number \u0026#34;7123\u0026#34; ; prism:startingPage \u0026#34;37\u0026#34; ; prism:endingPage \u0026#34;37\u0026#34; ; prism:section \u0026#34;News and Views\u0026#34; ; This would look something like the following as an XMP packet within a PDF metadata stream (the RDF now being serialized as RDF/XML):\n\u0026lt;\u0026lt; /Type /Metadata /Subtype /XML /Length 1706 \u0026gt;\u0026gt; stream \u0026lt;?xpacket begin=\u0026#39;\u0026#39; id=\u0026#39;W5M0MpCehiHzreSzNTczkc9d\u0026#39;?\u0026gt; \u0026lt;rdf:RDF xmlns:rdf=\u0026#34;http://www.w3.org/1999/02/22-rdf-syntax-ns#\u0026#34;\u0026gt; \u0026lt;rdf:Description rdf:about=\u0026#34;\u0026#34; xmlns:dc=http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\u0026gt; \u0026lt;dc:creator\u0026gt;Craig J. 
Hogan\u0026lt;/dc:creator\u0026gt; \u0026lt;dc:title\u0026gt;Cosmology: Ripples of early starlight\u0026lt;/dc:title\u0026gt; \u0026lt;dc:identifier\u0026gt;doi:10.1038/445037a\u0026lt;/dc:identifier\u0026gt; \u0026lt;dc:source\u0026gt;Nature 445, 37 (2007)\u0026lt;/dc:source\u0026gt; \u0026lt;dc:date\u0026gt;2007-01-04\u0026lt;/dc:date\u0026gt; \u0026lt;dc:format\u0026gt;application/pdf\u0026lt;/dc:format\u0026gt; \u0026lt;dc:publisher\u0026gt;Nature Publishing Group\u0026lt;/dc:publisher\u0026gt; \u0026lt;dc:language\u0026gt;en\u0026lt;dc:language\u0026gt; \u0026lt;dc:rights\u0026gt;© 2007 Nature Publishing Group\u0026lt;/dc:rights\u0026gt; \u0026lt;/rdf:Description\u0026gt; \u0026amp;nbsp; \u0026lt;rdf:Description rdf:about=\u0026#34;\u0026#34; xmlns:prism=[https://web.archive.org/web/20140228105237/http://prismstandard.org/namespaces/1.2/basic/](https://web.archive.org/web/20140228105237/http://prismstandard.org/namespaces/1.2/basic/)\u0026gt; \u0026lt;prism:publicationName\u0026gt;Nature\u0026lt;/prism:publicationName\u0026gt; \u0026lt;prism:issn\u0026gt;0028-0836\u0026lt;/prism:issn\u0026gt; \u0026lt;prism:eIssn\u0026gt;1476-4679\u0026lt;/prism:eIssn\u0026gt; \u0026lt;prism:publicationDate\u0026gt;2007-01-04\u0026lt;/prism:publicationDate\u0026gt; \u0026lt;prism:copyright\u0026gt;© 2007 Nature Publishing Group\u0026lt;/prism:copyright\u0026gt; \u0026lt;prism:rightsAgent\u0026gt;permissions@nature.com\u0026lt;/prism:rightsAgent\u0026gt; \u0026lt;prism:volume\u0026gt;445\u0026lt;/prism:volume\u0026gt; \u0026lt;prism:number\u0026gt;7123\u0026lt;/prism:number\u0026gt; \u0026lt;prism:startingPage\u0026gt;37\u0026lt;/prism:startingPage\u0026gt; \u0026lt;prism:endingPage\u0026gt;37\u0026lt;/prism:endingPage\u0026amp; \u0026lt;prism:section\u0026gt;News and Views\u0026lt;/prism:section\u0026gt; \u0026lt;/rdf:Description\u0026gt; \u0026lt;?xpacket end=\u0026#39;w\u0026#39;?\u0026gt; endstream endobj References\nSome useful references are:\nAdobe® Portable Document Format, Version 1.7, November 2006 (see http://www.adobe.com/devnet/pdf/pdf_reference.html).\nAdobe® XMP Specification, September 2005 (see http://partners.adobe.com/public/developer/en/xmp/sdk/XMPspecification.pdf\nEmbedding XMP Metadata in Application Files, September 2001 (see http://xml.coverpages.org/XMP-Embedding.pdf\nNote a): See Section 10.2, “Metadata” in Ref. 1.\nNote b): Ref. [3] is a fairly brief draft which covers both the Information Dictionary and Metadata Dictionary (XMP) approaches. There is an Adobe-hosted update to this document from June 2002 but that only discusses the XMP approach.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/metadata-in-pdf-2.-use-cases/", "title": "Metadata in PDF: 2. Use Cases", "subtitle":"", "rank": 1, "lastmod": "2007-08-01", "lastmod_ts": 1185926400, "section": "Blog", "tags": [], "description": "Well, this is likely to be a fairly brief post as I’m not aware of many use cases of metadata in PDFs from scholarly publishers. Certainly, I can say for Nature that we haven’t done much in this direction yet although are now beginning to look into this.\nI’ll discuss a couple cases found in the wild but invite comment as to others’ practices. Let me start though with the CNRI handle plugin demo for Acrobat which I blogged here.\n", "content": "Well, this is likely to be a fairly brief post as I’m not aware of many use cases of metadata in PDFs from scholarly publishers. 
Certainly, I can say for Nature that we haven’t done much in this direction yet although are now beginning to look into this.\nI’ll discuss a couple cases found in the wild but invite comment as to others’ practices. Let me start though with the CNRI handle plugin demo for Acrobat which I blogged here.\nHandle Plugin\nFirst off, the handle plugin PDF samples do include an embedded (test) DOI in both the document information dictionary\n5 0 obj \u0026lt;\u0026lt; /CreationDate (D:20070614140125-04'00') /Author (Simon) /Creator (PScript5.dll Version 5.2.2) /Producer (Acrobat Distiller 8.1.0 \\(Windows\\)) /ModDate (D:20070614140240-04'00') /HDL (10.5555/pdftest-crossref) /Title (Microsoft Word - crossref-rev.doc) \u0026gt;\u0026gt; endobj and in the (document) metadata stream\n\u0026lt;rdf:Description rdf:about=\"\" xmlns:pdfx=\"http://ns.adobe.com/pdfx/1.3/\"\u0026gt; \u0026lt;pdfx:HDL\u0026gt;10.5555/pdftest-crossref\u0026lt;/pdfx:HDL\u0026gt; \u0026lt;/rdf:Description\u0026gt; Bar any fuller disclosure of metadata terms at large (and one of the demo cases makes use of DOI to retrieve metadata form Crossref) this is excellent. I would, however, quibble with the use of “HDL” as a foreign key for the information dictionary. I realize this is just a test but the term “HDL” (or “DOI”, for that’s what it really is) is somewhat specific and a more general term such as “Identifier” would probably have more mileage, e.g.\n5 0 obj \u0026lt;\u0026lt; ... /Identifier (doi:10.5555/pdftest-crossref) ... \u0026gt;\u0026gt; endobj In the second example from the metadata dictionary I don’t think the term “HDL” from the PDF extension schema “pdfx” is very helpful. (Is that namespace actually defined anywhere?) From a descriptive metadata viewpoint a more usual schema such as DC would have wider coverage. So again the second example would be better rendered as\n\u0026lt;rdf:Description rdf:about=\"\" xmlns:dc=\"http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\"\u0026gt; \u0026lt;dc:identifier\u0026gt;doi:10.5555/pdftest-crossref\u0026lt;/dc:identifier\u0026gt; \u0026lt;/rdf:Description\u0026gt; or, alternately,\n\u0026lt;rdf:Description rdf:about=\"\" xmlns:dc=\"http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/\"\u0026gt; \u0026lt;dc:identifier\u0026gt;info:hdl/10.5555/pdftest-crossref\u0026lt;/dc:identifier\u0026gt; \u0026lt;/rdf:Description\u0026gt; Elsevier\nWell, we have Alexander Griekspoor’s comment earlier that Elsevier are including the DOI in their PDFs. 
I don’t know how consistently this is being done but I’ve checked a couple sample articles and it would seem that they have embedded the DOI (here from Cancer Cell, doi:0.1016/j.ccr.2007.06.004) in the title element which shows up in the information dictionary as\n361 0 obj \u0026lt;\u0026lt; /Producer (Adobe LiveCycle PDFG 7.2) /Creator (Elsevier) /Author () /Keywords () /Title (doi:10.1016/j.ccr.2007.06.004) /ModDate (D:20070630031637+05'30') /Subject () /CreationDate (D:00000101000000Z) \u0026gt;\u0026gt; endobj and in the (document) metadata dictionary as\n365 0 obj \u0026lt;\u0026lt; /Type /Metadata /Subtype /XML /Length 1526 \u0026gt;\u0026gt; stream \u0026lt;?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d' bytes='1526'?\u0026gt; \u0026nbsp; \u0026lt;rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#' xmlns:iX='http://ns.adobe.com/iX/1.0/'\u0026gt; \u0026nbsp; \u0026lt;rdf:Description about='' xmlns='http://ns.adobe.com/pdf/1.3/' xmlns:pdf='http://ns.adobe.com/pdf/1.3/'\u0026gt; \u0026lt;pdf:Producer\u0026gt;Adobe LiveCycle PDFG 7.2\u0026lt;/pdf:Producer\u0026gt; \u0026lt;pdf:ModDate\u0026gt;2007-06-30T03:16:37+05:30\u0026lt;/pdf:ModDate\u0026gt; \u0026lt;pdf:Title\u0026gt;doi:10.1016/j.ccr.2007.06.004\u0026lt;/pdf:Title\u0026gt; \u0026lt;pdf:Creator\u0026gt;Elsevier\u0026lt;/pdf:Creator\u0026gt; \u0026lt;pdf:Author\u0026gt;\u0026lt;/pdf:Author\u0026gt; \u0026lt;pdf:Keywords\u0026gt;\u0026lt;/pdf:Keywords\u0026gt; \u0026lt;pdf:Subject\u0026gt;\u0026lt;/pdf:Subject\u0026gt; \u0026lt;pdf:CreationDate\u0026gt;0-01-01T00:00:00Z\u0026lt;/pdf:CreationDate\u0026gt; \u0026lt;/rdf:Description\u0026gt; \u0026nbsp; \u0026lt;rdf:Description about='' xmlns='http://ns.adobe.com/xap/1.0/' xmlns:xap='http://ns.adobe.com/xap/1.0/'\u0026gt; \u0026lt;xap:CreatorTool\u003eElsevier\u0026lt;/xap:CreatorTool\u0026gt; \u0026lt;xap:ModifyDate\u003e2007-06-30T03:16:37+05:30\u0026lt;/xap:ModifyDate\u0026gt; \u0026lt;xap:Title\u0026gt; \u0026lt;rdf:Alt\u0026gt; \u0026lt;rdf:li xml:lang='x-default'\u003edoi:10.1016/j.ccr.2007.06.004\u0026lt;/rdf:li\u0026gt; \u0026lt;/rdf:Alt\u0026gt; \u0026lt;/xap:Title\u0026gt; \u0026lt;xap:Author\u0026gt;\u0026lt;/xap:Author\u0026gt; \u0026lt;xap:Description\u0026gt; \u0026lt;rdf:Alt\u0026gt; \u0026lt;rdf:li xml:lang='x-default'/\u0026gt; \u0026lt;/rdf:Alt\u0026gt; \u0026lt;/xap:Description\u0026gt; \u0026lt;xap:CreateDate\u0026gt;0-01-01T00:00:00Z\u0026lt;/xap:CreateDate\u0026gt; \u0026lt;xap:MetadataDate\u003e2007-06-30T03:16:37+05:30\u0026lt;/xap:MetadataDate\u0026gt; \u0026lt;/rdf:Description\u0026gt; \u0026nbsp; \u0026lt;rdf:Description about='' xmlns='http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/' xmlns:dc='http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/'\u0026gt; \u0026lt;dc:title\u003edoi:10.1016/j.ccr.2007.06.004\u0026lt;/dc:title\u0026gt; \u0026lt;dc:creator/\u0026gt; \u0026lt;dc:description/\u0026gt; \u0026lt;/rdf:Description\u0026gt; \u0026nbsp; \u0026lt;/rdf:RDF\u003e \u0026lt;?xpacket end='r'?\u0026gt; endstream endobj Kudos anyway to Elsevier for emebedding this piece of information in their PDFs (if indeed it is a general practice). This has the merit of being picked up by Adobe apps and displayed in e.g. Reader. Also third party apps can pull this and use this to retrieve the metadata record from Crossref.\nThe only downside is that technically this seems to be a kludge to satisfy Adobe apps and is not the correct field for filing this information. I would have thought that some other information dictionary field (e.g. 
“Subject”) would be a better kludge, and then reserve the “Title” and “Author” fields for their proper purposes. The RDF/XML title fields would appear to be inherited from the “Title” field in the information dictionary. It’s a bit of a shame really because the DOI is embedded - it’s just in the wrong place(s). (OK, so that’s still way better, maybe, than not providing this information at all.)\nHopefully, with more examples to mull over and experiences to learn from we can arrive at a much better and more systematic way of including the DOI, and other key metadata fields, within a PDF so that this information can be gleaned easily and unambiguously by third party apps.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/handle-acrobat-reader-plugin/", "title": "Handle Acrobat Reader Plugin", "subtitle":"", "rank": 1, "lastmod": "2007-07-31", "lastmod_ts": 1185840000, "section": "Blog", "tags": [], "description": "Just announced on the handle-info list is a new plugin from CNRI for Acrobat Reader - see here. The announcement says:\n_“It is intended to demonstrate the utility of embedding a identifying\nhandle in a PDF document.\n…\nA set of demonstration documents, each with an embedded identifying\nhandle, is packaged with the plug-in to show potential uses. To make\nproductive use of this technology, a given industry or community of", "content": "Just announced on the handle-info list is a new plugin from CNRI for Acrobat Reader - see here. The announcement says:\n_“It is intended to demonstrate the utility of embedding a identifying\nhandle in a PDF document.\n…\nA set of demonstration documents, each with an embedded identifying\nhandle, is packaged with the plug-in to show potential uses. To make\nproductive use of this technology, a given industry or community of\nusers would have to agree on one or more specific applications and\npopulate the relevant handle records accordingly.”_\nTwo immediate comments:\nThis is a Windows-only plugin (realized that right after hitting the download button and seeing the ‘.exe’ file) and also needs admin rights to install. (So I solved the first hurdle and am trying to clear the second hurdle. Lockdown is not an uncommon practice for enterprise or institutional computers.)\n(Update: Actually, I think I got this wrong. I need admin privileges to install Adobe Acrobat 8. Still scuppered, though. Can’t even see the sample PDF files.) The plugin seems to be aimed at the user rather than at the user agent and thus is necessarily limited in scope, i.e. it needs a human driver. (Ideally content providers would embed metadata within media files using structured markup techniques which would be readily accessible to any downstream app which could leverage this data transparently to provide enhanced user services.) Anyway, I’ll add something more when I can get it installed. I think this tool could be a useful addition to publishing toolkits but also that content providers could do much more for consumers by disclosing metadata for their digital assets in a neutral, structured form. 
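(To put that last point another way: even before any community agreement on types and applications, an identifier embedded in the clear can be picked up by the most throwaway of downstream scripts. A rough Python sketch, with an only approximate DOI pattern of my own:

import re, sys

DOI_PATTERN = re.compile(rb'10\.\d{4,9}/[^\s)"<>]+')

def find_identifiers(path):
    # Crude scan of a PDF's uncompressed regions (information dictionary,
    # unfiltered XMP packet) for anything that looks like a DOI or handle.
    with open(path, "rb") as f:
        data = f.read()
    return sorted({m.group(0).decode("ascii", "replace")
                   for m in DOI_PATTERN.finditer(data)})

if __name__ == "__main__":
    print(find_identifiers(sys.argv[1]) or "no identifier found")

Structured disclosure, with a known key in the information dictionary and a dc:identifier in the XMP packet, would of course make that lookup reliable rather than lucky.)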
", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/uri-template-republished/", "title": "URI Template Republished", "subtitle":"", "rank": 1, "lastmod": "2007-07-28", "lastmod_ts": 1185580800, "section": "Blog", "tags": [], "description": "Well, it all went very quiet for a while but glad to see that the URI Template Internet-Draft has just been republished:\n_“A New Internet-Draft is available from the on-line Internet-Drafts\ndirectories.\nTitle : URI Template\nAuthor(s) : J. Gregorio, et al.\nFilename : draft-gregorio-uritemplate-01.txt\nPages : 9\nDate : 2007-7-23\nURI Templates are strings that can be transformed into URIs after\nembedded variables are substituted. This document defines the", "content": "Well, it all went very quiet for a while but glad to see that the URI Template Internet-Draft has just been republished:\n_“A New Internet-Draft is available from the on-line Internet-Drafts\ndirectories.\nTitle : URI Template\nAuthor(s) : J. Gregorio, et al.\nFilename : draft-gregorio-uritemplate-01.txt\nPages : 9\nDate : 2007-7-23\nURI Templates are strings that can be transformed into URIs after\nembedded variables are substituted. This document defines the\nsyntax and processing of URI Templates.\nA URL for this Internet-Draft is:\nhttps://github.com/jcgregorio/uri-templates/blob/master/draft-gregorio-uritemplate-01.txt”\n_\nURI templates should be a very useful publishing tool. Templates are already used by technologies such as OpenSearch - see here.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-first-hacks/", "title": "XMP: First Hacks", "subtitle":"", "rank": 1, "lastmod": "2007-07-27", "lastmod_ts": 1185494400, "section": "Blog", "tags": [], "description": "\u0026lt;span \u0026gt;(\u0026lt;b\u0026gt;Update - 2007.07.28:\u0026lt;/b\u0026gt; I meant to reference in this entry Pierre Lindenbaum’s post back in May \u0026lt;a href=\u0026quot;http://plindenbaum.blogspot.com/2007/05/is-there-any-xmp-in-scientific-pdf-no.html\u0026quot;\u0026gt;Is there any XMP in scientific pdf ? (No)\u0026lt;/a\u0026gt;, which btw also references Roderic Page’s post on \u0026lt;a href=\u0026quot;http://iphylo.blogspot.com/2007/05/xmp.html\u0026quot;\u0026gt;XMP\u0026lt;/a\u0026gt; but forgot to add in the links in my haste to scoot off. Well, truth is we still can’t answer Pierre in the affirmative but at least we can take the first steps towards rectifying this.)\n\u0026lt;span \u0026gt;I’ve been revisiting Adobe’s \u0026lt;a href=\u0026quot;http://www.adobe.com/products/xmp/\u0026quot;\u0026gt;XMP\u0026lt;/a\u0026gt; just recently. (I blogged \u0026lt;a href=\u0026quot;/blog/xmp-capabilities-extended//\u0026quot;\u0026gt;here\u0026lt;/a\u0026gt; about the new \u0026lt;a href=\u0026quot;http://www.adobe.com/devnet/xmp/\u0026quot;\u0026gt;XMP Toolkit 4.1\u0026lt;/a\u0026gt; back in March.)\n\u0026lt;span \u0026gt;I wanted to share some of my early experiences. First off, after a couple of previous attempts which got pushed aside due to other projects, I managed to compile the libraries and the sample apps that ship with the C++ SDK under Xcode on the Mac. I also needed to compile \u0026lt;a href=\u0026quot;https://libexpat.github.io/\u0026quot;\u0026gt;Expat\u0026lt;/a\u0026gt; first which doesn’t ship with the distribution.\n\u0026lt;span \u0026gt;OK, so far, so good. 
What this basically leaves one with is a couple of XMP dump utilities (\u0026lt;i\u0026gt;DumpMainXMP\u0026lt;/i\u0026gt; and \u0026lt;i\u0026gt;DumpScannedXMP\u0026lt;/i\u0026gt;) and two others (\u0026lt;i\u0026gt;XMPCoreCoverage\u0026lt;/i\u0026gt; and \u0026lt;i\u0026gt;XMPFilesCoverage\u0026lt;/i\u0026gt;) which is a good start anyways for exploring. And turns out that our PDFs already have some workflow metadata in them. This is encouraging because the SDK allows apps to read and update existing XMP packets from files, though not to write new packets into files (as far as I understand).\n\u0026lt;span \u0026gt;I thought I would take this opportunity anyway to:\n\u0026lt;span \u0026gt;See what XMP metadata terms we might consider adding \u0026lt;span \u0026gt;Try and add these to existing XMP packets\u0026lt;span \u0026gt;Ugly details are presented below, but by updating the XMP packet metadata in one of our PDFs (\u0026lt;i\u0026gt;Nature 445, 37 (2007), C.J. Hogan\u0026lt;/i\u0026gt;) we can teach Acrobat Reader to read - see the “before” (\u0026lt;a href=\u0026quot;https://web.archive.org/web/20130815224916/http://0-nurture-nature-com.libus.csd.mu.edu/\u0026quot;\u0026gt;PDF here\u0026lt;/a\u0026gt;) and “after” (\u0026lt;a href=\u0026quot;https://web.archive.org/web/20130815224916/http://0-nurture-nature-com.libus.csd.mu.edu/\u0026quot;\u0026gt;PDF here\u0026lt;/a\u0026gt;) screenshots in the figure.\n\u0026lt;span \u0026gt;\u0026lt;img src=\u0026quot;/wp/blog/images/acrobats.png\u0026quot; alt=\u0026quot;acrobats.png\u0026quot; width=\u0026quot;583\u0026quot; height=\u0026quot;466\u0026quot; /\u0026gt;\n\u0026lt;span \u0026gt;Of course, this is really about much more than getting Adobe apps to read/write metadata. It’s about using XMP as a standard platform for embedding metadata in digital assets for \u0026lt;i\u0026gt;third-party apps\u0026lt;/i\u0026gt; to read/write. If we can put ID3 tags into our podcasts then why not XMP packets into other media?\u0026lt;/p\u0026gt;\n", "content": "\u0026lt;span \u0026gt;(\u0026lt;b\u0026gt;Update - 2007.07.28:\u0026lt;/b\u0026gt; I meant to reference in this entry Pierre Lindenbaum’s post back in May \u0026lt;a href=\u0026quot;http://plindenbaum.blogspot.com/2007/05/is-there-any-xmp-in-scientific-pdf-no.html\u0026quot;\u0026gt;Is there any XMP in scientific pdf ? (No)\u0026lt;/a\u0026gt;, which btw also references Roderic Page’s post on \u0026lt;a href=\u0026quot;http://iphylo.blogspot.com/2007/05/xmp.html\u0026quot;\u0026gt;XMP\u0026lt;/a\u0026gt; but forgot to add in the links in my haste to scoot off. Well, truth is we still can’t answer Pierre in the affirmative but at least we can take the first steps towards rectifying this.)\n\u0026lt;span \u0026gt;I’ve been revisiting Adobe’s \u0026lt;a href=\u0026quot;http://www.adobe.com/products/xmp/\u0026quot;\u0026gt;XMP\u0026lt;/a\u0026gt; just recently. (I blogged \u0026lt;a href=\u0026quot;/blog/xmp-capabilities-extended//\u0026quot;\u0026gt;here\u0026lt;/a\u0026gt; about the new \u0026lt;a href=\u0026quot;http://www.adobe.com/devnet/xmp/\u0026quot;\u0026gt;XMP Toolkit 4.1\u0026lt;/a\u0026gt; back in March.)\n\u0026lt;span \u0026gt;I wanted to share some of my early experiences. First off, after a couple of previous attempts which got pushed aside due to other projects, I managed to compile the libraries and the sample apps that ship with the C++ SDK under Xcode on the Mac. 
I also needed to compile \u0026lt;a href=\u0026quot;https://libexpat.github.io/\u0026quot;\u0026gt;Expat\u0026lt;/a\u0026gt; first which doesn’t ship with the distribution.\n\u0026lt;span \u0026gt;OK, so far, so good. What this basically leaves one with is a couple of XMP dump utilities (\u0026lt;i\u0026gt;DumpMainXMP\u0026lt;/i\u0026gt; and \u0026lt;i\u0026gt;DumpScannedXMP\u0026lt;/i\u0026gt;) and two others (\u0026lt;i\u0026gt;XMPCoreCoverage\u0026lt;/i\u0026gt; and \u0026lt;i\u0026gt;XMPFilesCoverage\u0026lt;/i\u0026gt;) which is a good start anyways for exploring. And turns out that our PDFs already have some workflow metadata in them. This is encouraging because the SDK allows apps to read and update existing XMP packets from files, though not to write new packets into files (as far as I understand).\n\u0026lt;span \u0026gt;I thought I would take this opportunity anyway to:\n\u0026lt;span \u0026gt;See what XMP metadata terms we might consider adding \u0026lt;span \u0026gt;Try and add these to existing XMP packets\u0026lt;span \u0026gt;Ugly details are presented below, but by updating the XMP packet metadata in one of our PDFs (\u0026lt;i\u0026gt;Nature 445, 37 (2007), C.J. Hogan\u0026lt;/i\u0026gt;) we can teach Acrobat Reader to read - see the “before” (\u0026lt;a href=\u0026quot;https://web.archive.org/web/20130815224916/http://0-nurture-nature-com.libus.csd.mu.edu/\u0026quot;\u0026gt;PDF here\u0026lt;/a\u0026gt;) and “after” (\u0026lt;a href=\u0026quot;https://web.archive.org/web/20130815224916/http://0-nurture-nature-com.libus.csd.mu.edu/\u0026quot;\u0026gt;PDF here\u0026lt;/a\u0026gt;) screenshots in the figure.\n\u0026lt;span \u0026gt;\u0026lt;img src=\u0026quot;/wp/blog/images/acrobats.png\u0026quot; alt=\u0026quot;acrobats.png\u0026quot; width=\u0026quot;583\u0026quot; height=\u0026quot;466\u0026quot; /\u0026gt;\n\u0026lt;span \u0026gt;Of course, this is really about much more than getting Adobe apps to read/write metadata. It’s about using XMP as a standard platform for embedding metadata in digital assets for \u0026lt;i\u0026gt;third-party apps\u0026lt;/i\u0026gt; to read/write. If we can put ID3 tags into our podcasts then why not XMP packets into other media?\u0026lt;/p\u0026gt;\n\u0026lt;span \u0026gt;First a brief digression on XMP packets, which look essentially like this:\n\u003c?xpacket begin=\"...\" id=\"...\"?\u003e\r\u0026lt;x:xmpmeta xmlns:x=\"adobe:ns:meta/\"\u0026gt;\r\u0026lt;rdf:RDF xmlns:rdf=\"...\" xmlns:...\u0026gt;\r...\r\u0026lt;/rdf:RDF\u0026gt;\r\u0026lt;/x:xmpmeta\u0026gt;\r... XML whitespace as padding ...\r\u0026lt;?xpacket end=\"w\"?\u0026gt;\r\u0026lt;rdf:RDF\u0026gt;\" element which is optionally wrapped by an \"``\u0026lt;x:xmpmeta\u0026gt;``\" element. This XML fragment with trailing XML whitespace is topped and tailed by \"``\u0026lt;?xpacket\u0026gt;``\" processing instructions with \"``begin``\" and \"``end``\" attributes, respectively.\rThe RDF supported is a simple profile of RDF with only certain constructs recognized: scalars, arrays, structures. It is not a means to embed arbitrary RDF/XML structures. But I'll pass on that for now. 
At first blush it's at least suitable to get a simple dictionary of key/value terms written in, and more besides.\rThe XMP metadata from the ``PDF file`` listed above looks as follows in RDF/N3 (which is a more chipper serialization of RDF than is RDF/XML):``\r`\u0026lt;pre\u0026gt;`\u0026amp;lt;uuid:...\u0026amp;gt;\rdc:creator \u0026ldquo;x\u0026rdquo; ; dc:format \u0026ldquo;application/pdf\u0026rdquo; ; dc:title \u0026ldquo;19.7 N\u0026amp;V.indd NEW.indd\u0026rdquo;@x-default ; pdf:GTS_PDFXConformance \u0026ldquo;PDF/X-1a:2001\u0026rdquo; ; pdf:GTS_PDFXVersion \u0026ldquo;PDF/X-1:2001\u0026rdquo; ; pdf:Producer \u0026ldquo;Acrobat Distiller 6.0.1 for Macintosh\u0026rdquo; ; pdf:Trapped \u0026ldquo;False\u0026rdquo; ; pdfx:GTS_PDFXConformance \u0026ldquo;PDF/X-1a:2001\u0026rdquo; ; pdfx:GTS_PDFXVersion \u0026ldquo;PDF/X-1:2001\u0026rdquo; ; xap:CreateDate \u0026ldquo;2007-07-16T09:25:20+01:00\u0026rdquo; ; xap:CreatorTool \u0026ldquo;InDesign: pictwpstops filter 1.0\u0026rdquo; ; xap:MetadataDate \u0026ldquo;2007-07-16T11:40:21+01:00\u0026rdquo; ; xap:ModifyDate \u0026ldquo;2007-07-16T11:40:21+01:00\u0026rdquo; ; xapMM:DocumentID \u0026ldquo;uuid:be3a9be5-4e3a-4b66-a50b-26f0a0bfc89d\u0026rdquo; ; xapMM:InstanceID \u0026ldquo;uuid:73dcd021-d40a-4cb7-a99b-44f8e90624f4\u0026rdquo; . \u0026lt;/pre\u0026gt;\n`\u0026lt;span \u0026gt;`(`\u0026lt;b\u0026gt;`Note:`\u0026lt;/b\u0026gt;` I’ve omitted namespaces here and dropped some of the structuring info that was present on the \u0026amp;#8220;`\u0026lt;tt\u0026gt;`dc:creator`\u0026lt;/tt\u0026gt;`\u0026amp;#8221; and \u0026amp;#8220;`\u0026lt;tt\u0026gt;`dc:title`\u0026lt;/tt\u0026gt;`\u0026amp;#8221; elements thus leaving all values as simple strings. Back to that in a bit. )\r`\u0026lt;span \u0026gt;`What this says is simply that all these properies expressed in key/value pairs apply to the current document denoted by the resource identifier \u0026amp;#8220;`\u0026lt;tt\u0026gt;`[uuid:...](uuid:...)`\u0026lt;/tt\u0026gt;`\u0026amp;#8220;, and terms are taken from the schemas indicated by the prefixes. So, for example, the term \u0026amp;#8220;`\u0026lt;tt\u0026gt;`creator`\u0026lt;/tt\u0026gt;`\u0026amp;#8221; from the schema referenced by the placeholder \u0026amp;#8220;`\u0026lt;tt\u0026gt;`dc`\u0026lt;/tt\u0026gt;`\u0026amp;#8221; (there is a namespace URI for this but I haven’t shown it here) has the value \u0026amp;#8220;`\u0026lt;tt\u0026gt;`x`\u0026lt;/tt\u0026gt;`\u0026amp;#8221; for this document, and so on.\r`\u0026lt;span \u0026gt;`So, salting away the media- and XMP-specific metadata, we are left with the following work metadata in our main XMP packet.\r`\u0026lt;pre\u0026gt;\u0026lt;span \u0026gt;`\u0026amp;lt;uuid:...\u0026amp;gt;\rdc:creator \u0026ldquo;x\u0026rdquo; ; dc:format \u0026ldquo;application/pdf\u0026rdquo; ; dc:title \u0026ldquo;19.7 N\u0026amp;V.indd NEW.indd\u0026rdquo;@x-default ; \u0026lt;/pre\u0026gt;\n`\u0026lt;span \u0026gt;`Not wildly impressive, i must admit. Ideally we would like to pump this up with a fuller descriptive and rights metadata set such as we routinely syndicate with our web feeds. This would make use of both DC and PRISM vocabularies. In RDF/N3 we might expect to see something like:\r`\u0026lt;pre\u0026gt;\u0026lt;span \u0026gt;`\u0026amp;lt;uuid:...\u0026amp;gt;\rdc:creator \u0026ldquo;Craig J. 
Hogan\u0026rdquo; ; dc:title \u0026ldquo;Cosmology: Ripples of early starlight\u0026rdquo; ; dc:identifier \u0026ldquo;doi:10.1038/445037a\u0026rdquo; ; dc:description \u0026ldquo;doi:10.1038/445037a\u0026rdquo; ; dc:source \u0026ldquo;Nature 445, 37 (2007)\u0026rdquo; ; dc:date \u0026ldquo;2007-01-04\u0026rdquo; ; dc:format \u0026ldquo;application/pdf\u0026rdquo; ; dc:publisher \u0026ldquo;Nature Publishing Group\u0026rdquo; ; dc:language \u0026ldquo;en\u0026rdquo; ; dc:rights \u0026ldquo;© 2007 Nature Publishing Group\u0026rdquo; ; prism:publicationName \u0026ldquo;Nature\u0026rdquo; ; prism:issn \u0026ldquo;0028-0836\u0026rdquo; ; prism:eIssn \u0026ldquo;1476-4679\u0026rdquo; ; prism:publicationDate \u0026ldquo;2007-01-04\u0026rdquo; ; prism:copyright \u0026ldquo;© 2007 Nature Publishing Group\u0026rdquo; ; prism:rightsAgent \u0026ldquo;permissions@nature.com\u0026rdquo; ; prism:volume \u0026ldquo;445\u0026rdquo; ; prism:number \u0026ldquo;7123\u0026rdquo; ; prism:startingPage \u0026ldquo;37\u0026rdquo; ; prism:endingPage \u0026ldquo;37\u0026rdquo; ; prism:section \u0026ldquo;News and Views\u0026rdquo; ; \u0026lt;/pre\u0026gt;\n`\u0026lt;span \u0026gt;`So, taking this RDF and doing a quick and dirty substitution of it for the existing DC description in the PDF XMP packet (i.e. more or less \u0026amp;#8220;lobotomizing\u0026amp;#8221; the PDF) we then get an updated XMP packet which can be dumped with the `\u0026lt;i\u0026gt;`DumpMainXMP`\u0026lt;/i\u0026gt;` utility as (with some schemas removed):\r`\u0026lt;pre\u0026gt;\u0026lt;span \u0026gt;`// ----------------------------------\r// Dumping main XMP for 445037a.pdf : File info : format = \u0026quot; \u0026ldquo;, handler flags = 00000260 Packet info : offset = 267225, length = 3651 Initial XMP from 445037a.pdf Dumping XMPMeta object \u0026quot;\u0026rdquo; (0x0) \u0026hellip; http://0-purl-org.libus.csd.mu.edu/dc/elements/1.1/ dc: (0x80000000 : schema) dc:rights (0x1E00 : isLangAlt isAlt isOrdered isArray) [1] = \u0026quot; 2007 Nature Publishing Group\u0026quot; (0x50 : hasLang hasQual) ? xml:lang = \u0026ldquo;x-default\u0026rdquo; (0x20 : isQual) dc:language (0x200 : isArray) [1] = \u0026ldquo;en\u0026rdquo; dc:publisher (0x200 : isArray) [1] = \u0026ldquo;Nature Publishing Group\u0026rdquo; dc:format = \u0026ldquo;application/pdf\u0026rdquo; dc:date (0x600 : isOrdered isArray) [1] = \u0026ldquo;2007-01-04\u0026rdquo; dc:source = \u0026ldquo;Nature 445, 37 (2007)\u0026rdquo; dc:description (0x1E00 : isLangAlt isAlt isOrdered isArray) [1] = \u0026ldquo;doi:10.1038/445037a\u0026rdquo; (0x50 : hasLang hasQual) ? xml:lang = \u0026ldquo;x-default\u0026rdquo; (0x20 : isQual) dc:identifier = \u0026ldquo;doi:10.1038/445037a\u0026rdquo; dc:title (0x1E00 : isLangAlt isAlt isOrdered isArray) [1] = \u0026ldquo;Cosmology: Ripples of early starlight\u0026rdquo; (0x50 : hasLang hasQual) ? xml:lang = \u0026ldquo;x-default\u0026rdquo; (0x20 : isQual) dc:creator (0x600 : isOrdered isArray) [1] = \u0026ldquo;Craig J. 
Hogan\u0026rdquo; https://web.archive.org/web/20211021092941/http://prismstandard.org/namespaces/1.2/basic/ prism: (0x80000000 : schema) prism:section = \u0026ldquo;News and Views\u0026rdquo; prism:endingPage = \u0026ldquo;37\u0026rdquo; prism:startingPage = \u0026ldquo;37\u0026rdquo; prism:number = \u0026ldquo;7123\u0026rdquo; prism:volume = \u0026ldquo;445\u0026rdquo; prism:rightsAgent = \u0026ldquo;permissions@nature.com\u0026rdquo; prism:copyright = \u0026quot; 2007 Nature Publishing Group\u0026quot; prism:publicationDate = \u0026ldquo;2007-01-04\u0026rdquo; prism:eIssn = \u0026ldquo;1476-4679\u0026rdquo; prism:issn = \u0026ldquo;0028-0836\u0026rdquo; prism:publicationName = \u0026ldquo;Nature\u0026rdquo; \u0026lt;/pre\u0026gt;\n`\u0026lt;span \u0026gt;`Full dumps of the \u0026amp;#8220;before\u0026amp;#8221; and \u0026amp;#8220;after\u0026amp;#8221; PDFs are available here:\r*`\u0026lt;span \u0026gt;\u0026lt;i\u0026gt;`DumpMainXMP`\u0026lt;/i\u0026gt;` - `\u0026lt;a href=\u0026quot;https://web.archive.org/web/20080821103510/http://0-nurture-nature-com.libus.csd.mu.edu/tony/xmp/445037a.xmpp.0\u0026quot;\u0026gt;`before`\u0026lt;/a\u0026gt;` and `\u0026lt;a href=\u0026quot;https://web.archive.org/web/20080821103719/http://0-nurture-nature-com.libus.csd.mu.edu/tony/xmp/445037a.xmpp.1\u0026quot;\u0026gt;`after`\u0026lt;/a\u0026gt;`\r`\u0026lt;pre\u0026gt;\u0026lt;/pre\u0026gt;`\r\u0026lt;span \u0026gt;\u0026lt;i\u0026gt;DumpScannedXMP\u0026lt;/i\u0026gt; - \u0026lt;a href=\u0026quot;https://web.archive.org/web/20080821103510/http://0-nurture-nature-com.libus.csd.mu.edu/tony/xmp/445037a.xmpp.0\u0026quot;\u0026gt;before\u0026lt;/a\u0026gt; and \u0026lt;a href=\u0026quot;https://web.archive.org/web/20080821103719/http://0-nurture-nature-com.libus.csd.mu.edu/tony/xmp/445037a.xmpp.1\u0026quot;\u0026gt;after\u0026lt;/a\u0026gt;\u0026lt;span \u0026gt;Note also that in the dump above some of the DC terms are interpreted by the XMP toolkit to have structured formats, i.e. are recognized as array members, and have language and ordering attributes. This seems to be an artefact of the toolkit as the RDF did not specify these structurings. Note also that the PRISM values were not similarly interpreted as the PRISM schema is not registered with the toolkit.\n\u0026lt;span \u0026gt;Obviously, there’s much more to be learned yet. I’ll post an update to this later, but meantime it would be very interesting to get feedback from others on experiences they may have with XMP or any opinions they may want to share. I think it all looks very promising although tools are somewhat restricted.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/publishing-linked-data/", "title": "Publishing Linked Data", "subtitle":"", "rank": 1, "lastmod": "2007-07-19", "lastmod_ts": 1184803200, "section": "Blog", "tags": [], "description": "With these words:\n_“There was quite some interest in Linked Data at this year’s World Wide\nWeb Conference (WWW2007). Therefore, Richard Cyganiak, Tom Heath and I\ndecided to write a tutorial about how to publish Linked Data on the\nWeb, so that interested people can find all relevant information, best\npractices and references in a single place.”_\nChris Bizer announces this draft How to Publish Linked Data on the Web. It’s a bright and breezy tutorial and useful (to me, anyway) for disclosing a couple of links:", "content": "With these words:\n_“There was quite some interest in Linked Data at this year’s World Wide\nWeb Conference (WWW2007). 
Therefore, Richard Cyganiak, Tom Heath and I\ndecided to write a tutorial about how to publish Linked Data on the\nWeb, so that interested people can find all relevant information, best\npractices and references in a single place.”_\nChris Bizer announces this draft How to Publish Linked Data on the Web. It’s a bright and breezy tutorial and useful (to me, anyway) for disclosing a couple of links:\nFindings of the W3C TAG Linked Data - Design Issues The tutorial is unsurprisingly orthodox in its advocacy for all things HTTP and goes on to say:\n“In the context of Linked Data, we restrict ourselves to using HTTP URIs only and avoid other URI schemes such as URNs and DOIs.”\nBut this only relates back to Berners-Lee’s piece on Linked Data referenced above in which he says:\n“The second rule, to use HTTP URIs, is also widely understood. The only deviation has been, since the web started, a constant tendency for people to invent new URI schemes (and sub-schemes within the urn: scheme) such as LSIDs and handles and XRIs and DOIs and so on, for various reasons. Typically, these involve not wanting to commit to the established Domain Name System (DNS) for delegation of authority but to construct something under separate control. Sometimes it has to do with not understanding that HTTP URIs are names (not addresses) and that HTTP name lookup is a complex, powerful and evolving set of standards. This issue discussed at length elsewhere, and time does not allow us to delve into it here.”\nHmm. Does make one wonder where the concept of URI ever arose. Surely the nascent WWW application should have mandated the exclusive use of HTTP identifiers? Seems that this concept snuck up on us somehow and we now have to put it back into the box. Pandora, indeed!\nBack to the tutorial there are some unorthodox terms or at least I had not heard of them before. Contrasted with the defined term information resources (from AWWW) is the undefined term “non-information resources”. Further on, there’s a distinction made between two types of RDF triple: “literal triples” and “RDF links”. I hadn’t heard of either of these terms before although they are presented as if they were in common usage. The tutorial then goes on to deprecate the use of certain RDF features because it makes it “easier for clients”. So, I guess that the full expressivity of RDF is either not required or the world of “linked data” is not quite so large as it would like to be.\nAnd later on, there’s this puzzling injunction:\n“You should only define terms that are not already defined within well-known vocabularies. In particular this means not defining completely new vocabularies from scratch, but instead extending existing vocabularies to represent your data as required.”\nAm I wrong, or is there something of a Catch 22 there? To extend an arbitrary vocabulary I would need to be the namespace authority - to be the “URI owner” in W3C speak. But I can’t be the authority for all namespaces/vocabularies because by the intent of the above they would likely be just the one (true?) vocabulary which I may or may not be the authority for. I thought the intent of the RDF model and XML namespaces was that terms could be applied from disparate vocabularies to the description at hand.\nAnyways, I am not trying to knock the draft. 
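For what it's worth, mixing terms from several existing vocabularies in a single description is exactly what the RDF tooling already makes easy. A minimal sketch using Python's rdflib (my choice of library, not the tutorial's; the person URI under example.org is made up purely for illustration), showing both a "literal triple" and an "RDF link" in the tutorial's sense:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, FOAF

PRISM = Namespace("http://prismstandard.org/namespaces/1.2/basic/")

g = Graph()
g.bind("dc", DC)
g.bind("prism", PRISM)
g.bind("foaf", FOAF)

article = URIRef("http://dx.doi.org/10.1038/445037a")
g.add((article, DC.title, Literal("Cosmology: Ripples of early starlight")))  # literal triple
g.add((article, PRISM.publicationName, Literal("Nature")))                    # literal triple
g.add((article, FOAF.maker, URIRef("http://example.org/person/c-j-hogan")))   # RDF link (made-up URI)

print(g.serialize(format="n3"))

The graph doesn't care how many namespaces are in play; you draw on whichever vocabularies already cover the terms you need and coin only what's missing.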
It’s something of a curate’s egg, that’s true, but I am genuinely looking forward to reading it through and would encourage others to have a look at it too.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/purl-redux/", "title": "PURL Redux", "subtitle":"", "rank": 1, "lastmod": "2007-07-12", "lastmod_ts": 1184198400, "section": "Blog", "tags": [], "description": "Seems that there’s life in the old dog yet. :~) See this post about PURL from Thom Hickey, OCLC, This extract:\nOCLC has contracted with Zepheira to reimplement the PURL code which has become a bit out of date over the years. The new code will be in written in Java and released under the Apache 2.0 license.", "content": "Seems that there’s life in the old dog yet. :~) See this post about PURL from Thom Hickey, OCLC, This extract:\nOCLC has contracted with Zepheira to reimplement the PURL code which has become a bit out of date over the years. The new code will be in written in Java and released under the Apache 2.0 license.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/bionlp-2007/", "title": "BioNLP 2007", "subtitle":"", "rank": 1, "lastmod": "2007-07-10", "lastmod_ts": 1184025600, "section": "Blog", "tags": [], "description": "Just posted on Nascent a brief account of a presentation I gave recently on OTMI at BioNLP 2007. The post lists some of the feedback I received. We are very interested to get further comments so do feel free to contribute comments either directly to the post, privately to otmi@nature.com, or publicly to otmi-discuss@crossref.org. And then there’s always the OTMI wiki available for comment at http://opentextmining.org/.\nIt is important to note that OTMI is not a universal panacea but rather an attempt at bridging the gap between publisher and researcher.", "content": "Just posted on Nascent a brief account of a presentation I gave recently on OTMI at BioNLP 2007. The post lists some of the feedback I received. We are very interested to get further comments so do feel free to contribute comments either directly to the post, privately to otmi@nature.com, or publicly to otmi-discuss@crossref.org. And then there’s always the OTMI wiki available for comment at http://opentextmining.org/.\nIt is important to note that OTMI is not a universal panacea but rather an attempt at bridging the gap between publisher and researcher. We are attempting to provide a framework to enable scholarly publishers to disclose full text for machine processing purposes without compromising their normal publishing obligations.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ibm-article-on-prism/", "title": "IBM Article on PRISM", "subtitle":"", "rank": 1, "lastmod": "2007-07-10", "lastmod_ts": 1184025600, "section": "Blog", "tags": [], "description": "Nice entry article on PRISM here by Uche Ogbuji, Fourthought Inc. on IBM’s DeveloperWorks.", "content": "Nice entry article on PRISM here by Uche Ogbuji, Fourthought Inc. on IBM’s DeveloperWorks.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oh-shiny/", "title": "Oh, shiny!", "subtitle":"", "rank": 1, "lastmod": "2007-07-02", "lastmod_ts": 1183334400, "section": "Blog", "tags": [], "description": "The other day Ed and I visited the OECD to talk about all things e-publishig. 
At the end of our our meeting, Toby Green, the OECD’s head of publishing, handed all 30+ meeting attendees a copy of their well-known OECD Factbook- on a USB stick.\nBefore you dismiss this as a gimick- note that organizations like the OECD get a lot of political and marketing mileage with “leave behinds”- print copies of their key reports, conference proceedings and reference works.", "content": "The other day Ed and I visited the OECD to talk about all things e-publishig. At the end of our our meeting, Toby Green, the OECD’s head of publishing, handed all 30+ meeting attendees a copy of their well-known OECD Factbook- on a USB stick.\nBefore you dismiss this as a gimick- note that organizations like the OECD get a lot of political and marketing mileage with “leave behinds”- print copies of their key reports, conference proceedings and reference works. While researchers might prefer electronic versions of the publications for their day-to-day work, print versions of the same publications seemed to continue to play a critical role as an “awareness tool.” I know that, for this very reason, several NGO/IGOs that I’ve spoken to have despaired of ever ramping down their print operations.\nI think that the OECD might have figured out a solution to this dilemma. It’s difficult to describe how viscerally satisfying it was to receive one of these Factbook USB-sticks. From the way in which the other meeting attendees swarmed around Toby as he was handing them out, I think that they might have had the same reaction.\nAs we headed back to London on the Eurostar, I almost immediately popped the USB stick into my laptop and started browsing through the Factbook, much as I would have thumbed through a print version of the same (although -truth be told- I would have been tempted to conveniently “forget” the print version in order to not have to shlep it from Paris back to Oxford).\nIn short, I think the system works. Kudos to the OECD for a simple, inexpensive and creative experiment in e-publishing.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oasis-announces-search-web-services-tc/", "title": "OASIS Announces Search Web Services TC", "subtitle":"", "rank": 1, "lastmod": "2007-06-15", "lastmod_ts": 1181865600, "section": "Blog", "tags": [], "description": "OASIS has just announced a technical committee for standardising search services. This from the Call for Participation:\n_\nb. Purpose\nTo define Search and Retrieval Web Services, combining various current and\nongoing web service activities.\nWithin recent years there has been a growth in activity in the development of\nweb service definitions for search and retrieval applications. These include\nSRU, a web service based in part on the NISO/ISO Search and Retrieval standards;", "content": "OASIS has just announced a technical committee for standardising search services. This from the Call for Participation:\n_\nb. Purpose\nTo define Search and Retrieval Web Services, combining various current and\nongoing web service activities.\nWithin recent years there has been a growth in activity in the development of\nweb service definitions for search and retrieval applications. These include\nSRU, a web service based in part on the NISO/ISO Search and Retrieval standards;\nthe Amazon OpenSearch, which defines a means of describing and automating search\nweb forms; as well as many proprietary definitions (e.g. the Google and MSN\nSearch APIs). 
There are also a number of activities for defining abstract search\nAPIs that can be mapped onto multiple implementations either within native code\nor onto remote procedural calls and web services, such as ZOOM (Z39.50 Object\nOriented Model); SQI (Simple Query Interface), an IEEE standard developed for\nsearching and retrieval in the IMS (Instructional Management Systems) space; and\nOSIDs (Open Service Interface Definitions from the Open Knowledge Initiative.\nWhile abstract APIs would be out of scope, these would inform the work to\nincrease interoperability and compatibility.\n_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/idf-open-meeting-innovative-uses-of-the-doi-system/", "title": "IDF Open Meeting: Innovative uses of the DOI system", "subtitle":"", "rank": 1, "lastmod": "2007-06-08", "lastmod_ts": 1181260800, "section": "Blog", "tags": [], "description": "Please see the details of the IDF Annual Meeting and a related Handle System Workshop in Washington, DC on June 21 which may be of interest - http://0-www-crossref-org.libus.csd.mu.edu/crweblog/2007/06/international_doi_foundation_a.html", "content": "Please see the details of the IDF Annual Meeting and a related Handle System Workshop in Washington, DC on June 21 which may be of interest - http://0-www-crossref-org.libus.csd.mu.edu/crweblog/2007/06/international_doi_foundation_a.html\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/resource-maps/", "title": "Resource Maps", "subtitle":"", "rank": 1, "lastmod": "2007-06-05", "lastmod_ts": 1181001600, "section": "Blog", "tags": [], "description": "Last week we had a second face-to-face of the OAI-ORE (Open Archives Initiative – Object Reuse and Exchange) Technical Committee in New York, the meeting being hosted courtesy of Google. (Hence the snap here taken from the terrace of Google’s canteen with its gorgeous view of midtown Manhattan. And the food’s not too shabby either. ;~)\nThe main input to the meeting was this discussion document: Compound Information Objects: The OAI-ORE Perspective.", "content": " Last week we had a second face-to-face of the OAI-ORE (Open Archives Initiative – Object Reuse and Exchange) Technical Committee in New York, the meeting being hosted courtesy of Google. (Hence the snap here taken from the terrace of Google’s canteen with its gorgeous view of midtown Manhattan. And the food’s not too shabby either. ;~)\nThe main input to the meeting was this discussion document: Compound Information Objects: The OAI-ORE Perspective. This document we feel has now reached a level of maturity that we wanted to share with a wider audience. We invite feedback either directly at ore@openarchives.org or indirectly via yours truly.\nThe document attempts to describe the problem domain - that of describing a scholarly publication as an aggregation of resources on the Web - and to put that squarely into the Web architecture context. What the initiative is seeking to provide is machine descriptions of those resources and their relationships, something that we are inclining to call “resource maps” and as underpinning we are making use of the notion of “named graphs” from ongoing semantic web research. Essentially these resource maps are machine-readable descriptions of participating resources (in a scholarly object - both core resources and related resources) and the relationships between those resources, the whole set of assertions about those resources being named (i.e. 
having a URI as identifier) and having provenance information attached, e.g. publisher, date of publication, version information (still under discussion). It is envisaged that these compound object descriptions may be available in a variety of serializations from a published, object-specific URL (i.e. a good old-fashioned Web address) but some honest-to-goodness XML serialization is a likely to be one of the candidates. No surprises here, then.\nBelow is a schematic from the paper which shows the publication of a resource map (or named graph) corresponding to the compound object which logically represents a scholarly publication. For those objects of immediate interest to Crossref these would likely be identified with DOI’s although there is no restriction in OAI-ORE on the identifier to be used - other than it be a URI.\nUpdate: For a couple posts from some other members of the ORE TC see here (Peter Murray, OhioLINK) and here (Pete Johnston, Eduserv).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/rkidd/", "title": "Rkidd", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rscs-project-prospect-v1.1/", "title": "RSC’s Project Prospect v1.1", "subtitle":"", "rank": 1, "lastmod": "2007-05-31", "lastmod_ts": 1180569600, "section": "Blog", "tags": [], "description": "We updated our Project Prospect articles today to release v1.1, with a pile of look \u0026amp; feel improvements to the HTML views and links. The most interesting technical addition is the launch of our enhanced RSS feeds, where we have updated our existing feeds for enhanced articles. These now include ontology terms and primary compounds both visually (as text terms and 2D images) and within the RDF - using the OBO in OWL representation and the info:inchi specification mentioned here by Tony only a few weeks ago.\nThe enhanced entries will soon become more common as we concentrate our enhancements on our Advance Articles, but the current example below from our Photochemical and Photobiological Sciences feed is lovely. RDF code after the jump - just as beautiful to the parents…\n", "content": "We updated our Project Prospect articles today to release v1.1, with a pile of look \u0026amp; feel improvements to the HTML views and links. The most interesting technical addition is the launch of our enhanced RSS feeds, where we have updated our existing feeds for enhanced articles. These now include ontology terms and primary compounds both visually (as text terms and 2D images) and within the RDF - using the OBO in OWL representation and the info:inchi specification mentioned here by Tony only a few weeks ago.\nThe enhanced entries will soon become more common as we concentrate our enhancements on our Advance Articles, but the current example below from our Photochemical and Photobiological Sciences feed is lovely. 
RDF code after the jump - just as beautiful to the parents…\nSo the RDF code for the OBO terms and InChIs looks like this:\n\u0026lt;tt\u0026gt;\u0026lt;br /\u0026gt; \u0026lt;rdf:li\u0026gt;\u0026lt;br /\u0026gt; \u0026lt;content:item rdf:about=\u0026#34;info:inchi/InChI=1/C20H28O/c1-16(8-6-9-17(2)13-15-21)11-12-19-18(3)10-7-14-20(19,4)5/h6,8-9,11-13,15H,7,10,14H2,1-5H3/b9-6-,12-11+,16-8+,17-13+\u0026#34;/\u0026gt;\u0026lt;br /\u0026gt; \u0026lt;/rdf:li\u0026gt;\u0026lt;br /\u0026gt; \u0026lt;rdf:li\u0026gt;\u0026lt;content:item rdf:about=\u0026#34;http://0-purl-org.libus.csd.mu.edu/obo/owl/CL#CL:0000210\u0026#34;/\u0026gt;\u0026lt;br /\u0026gt; \u0026lt;/rdf:li\u0026gt;\u0026lt;br /\u0026gt; \u0026lt;/tt\u0026gt; We now have over five hundred 2007 articles enhanced, so we’ve brought the majority back into controlled access. There are always examples from each journal freely available.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/oai-ore-presentation-at-oai5/", "title": "OAI-ORE Presentation at OAI5", "subtitle":"", "rank": 1, "lastmod": "2007-05-02", "lastmod_ts": 1178064000, "section": "Blog", "tags": [], "description": "I posted here about an initial meeting of the OAI-ORE Technical WG back in January. ORE is the “Object Reuse and Exchange” initiative which is aiming to provide a formalism for describing scholarly works as complete units (or packages) of information on the Web using resource maps which would be available from public access points. From a DOI perspective this work is intimately connected with multiple resolution. For further updates on this work, see here for a presentation by Herbert Van de Sompel on OAI-ORE at the OAI5 Workshop (5th Workshop on Innovations in Scholarly Communication) held a couple weeks back at CERN, Geneva, Switzerland.", "content": " I posted here about an initial meeting of the OAI-ORE Technical WG back in January. ORE is the “Object Reuse and Exchange” initiative which is aiming to provide a formalism for describing scholarly works as complete units (or packages) of information on the Web using resource maps which would be available from public access points. From a DOI perspective this work is intimately connected with multiple resolution. For further updates on this work, see here for a presentation by Herbert Van de Sompel on OAI-ORE at the OAI5 Workshop (5th Workshop on Innovations in Scholarly Communication) held a couple weeks back at CERN, Geneva, Switzerland.\nThe presentation gives an insight regarding the problem domain in which ORE operates, and in the evolving thinking regarding potential solutions. The presentation was recorded on video and is available for both streaming and download (slides, streaming video, video download).\nNote that Michael Nelson of Old Dominion University also presented on behalf of the ORE effort at the recent CNI Task Force Meeting and at the DLF Forum.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/a-modest-proposal/", "title": "A Modest Proposal", "subtitle":"", "rank": 1, "lastmod": "2007-04-11", "lastmod_ts": 1176249600, "section": "Blog", "tags": [], "description": "Was just reminded (thanks, Tim) of the possibility of using a special tag in bookmarking services to tag links to documents of interest to a given community. I think this is a fairly well-established practice. Note that e.g. 
the OAI-ORE project is using Connotea to bookmark pages of interest and tagging them “oaiore” which can then be easily retrieved using the link http://web.archive.org/web/20160402182544/http://www.connotea.org/.\nI would suggest that Crossref members might like to consider using the tag “crosstech” in bookmarking pages about publishing technology, so that the following links might be used to retrieve documents of interest to this readership:", "content": "Was just reminded (thanks, Tim) of the possibility of using a special tag in bookmarking services to tag links to documents of interest to a given community. I think this is a fairly well-established practice. Note that e.g. the OAI-ORE project is using Connotea to bookmark pages of interest and tagging them “oaiore” which can then be easily retrieved using the link http://web.archive.org/web/20160402182544/http://www.connotea.org/.\nI would suggest that Crossref members might like to consider using the tag “crosstech” in bookmarking pages about publishing technology, so that the following links might be used to retrieve documents of interest to this readership:\ndel.icio.us - \u0026lt;https://web.archive.org/web/20071206033322/https://del.icio.us/ CiteULike - http://www.citeulike.org/tag/crosstech Connotea - http://web.archive.org/web/20160402182544/http://www.connotea.org/ etc. ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/citing-data-sets/", "title": "Citing Data Sets", "subtitle":"", "rank": 1, "lastmod": "2007-03-30", "lastmod_ts": 1175212800, "section": "Blog", "tags": [], "description": "This D-Lib paper by Altman and King looks interesting: “A Proposed Standard for the Scholarly Citation of Quantitative Data”. (And thanks to Herbert Van de Sompel for drawing attention to the paper.) Gist of it (Sect. 3) is\n_“We propose that citations to numerical data include, at a minimum, six required components. The first three components are traditional, directly paralleling print documents. … Thus, we add three components using modern technology, each of which is designed to persist even when the technology changes: a unique global identifier, a universal numeric fingerprint, and a bridge service. They are also designed to take advantage of the digital form of quantitative data.\nAn example of a complete citation, using this minimal version of the proposed standards, is as follows:\n**Micah Altman; Karin MacDonald; Michael P. McDonald, 2005, “Computer Use in Redistricting”,\nhdl:1902.1/AMXGCNKCLU UNF:3:J0PkMygLPfIyT1E/8xO/EA==\nhttp://id.thedata.org/hdl%3A1902.1%2FAMXGCNKCLU\n“_\n", "content": "This D-Lib paper by Altman and King looks interesting: “A Proposed Standard for the Scholarly Citation of Quantitative Data”. (And thanks to Herbert Van de Sompel for drawing attention to the paper.) Gist of it (Sect. 3) is\n_“We propose that citations to numerical data include, at a minimum, six required components. The first three components are traditional, directly paralleling print documents. … Thus, we add three components using modern technology, each of which is designed to persist even when the technology changes: a unique global identifier, a universal numeric fingerprint, and a bridge service. They are also designed to take advantage of the digital form of quantitative data.\nAn example of a complete citation, using this minimal version of the proposed standards, is as follows:\n**Micah Altman; Karin MacDonald; Michael P. 
McDonald, 2005, “Computer Use in Redistricting”,\nhdl:1902.1/AMXGCNKCLU UNF:3:J0PkMygLPfIyT1E/8xO/EA==\nhttp://id.thedata.org/hdl%3A1902.1%2FAMXGCNKCLU\n“_\nSo the abbreviated citation (author, date, title, unique ID) is supplemented by a UNF which fingerprints the data. UNFs would appear to be a sort of super MD5 in providing a signature of the data content independent of the data serialization to a filestore.\n_“Thus, we add as the fifth component a Universal Numeric Fingerprint or UNF. The UNF is a short, fixed-length string of numbers and characters that summarize all the content in the data set, such that a change in any part of the data would produce a completely different UNF. A UNF works by first translating the data into a canonical form with fixed degrees of numerical precision and then applies a cryptographic hash function to produce the short string. The advantage of canonicalization is that UNFs (but not raw hash functions) are format-independent: they keep the same value even if the data set is moved between software programs, file storage systems, compression schemes, operating systems, or hardware platforms.\n…\nFinally, since most web browsers do not currently recognize global unique identifiers directly (i.e., without typing them into a web form), we add as the sixth and final component of the citation standard a bridge service, which is designed to make this task easier in the medium term.”_\nCertainly looks promising. I’m not sure if there’s any other contestants in this arena.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-forward-linking-webinar/", "title": "Crossref Forward Linking Webinar", "subtitle":"", "rank": 1, "lastmod": "2007-03-29", "lastmod_ts": 1175126400, "section": "Blog", "tags": [], "description": "The next Crossref Forward Linking Webinar is coming on Monday April 30th , 2007 at 12:00pm.\nRegistration is now available: [The next Crossref Forward Linking Webinar is coming on Monday April 30th , 2007 at 12:00pm.\nRegistration is now available:]1\nAgenda is coming soon.", "content": "The next Crossref Forward Linking Webinar is coming on Monday April 30th , 2007 at 12:00pm.\nRegistration is now available: [The next Crossref Forward Linking Webinar is coming on Monday April 30th , 2007 at 12:00pm.\nRegistration is now available:]1\nAgenda is coming soon.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/markup-for-dois/", "title": "Markup for DOIs", "subtitle":"", "rank": 1, "lastmod": "2007-03-29", "lastmod_ts": 1175126400, "section": "Blog", "tags": [], "description": "Following up on his earlier post (which was also blogged to CrossTech here), Leigh Dodds is now [Following up on his earlier post (which was also blogged to CrossTech here), Leigh Dodds is now]3 the possibility of using machine-readable auto-discovery type links for DOIs of the form\nThese LINK tags are placed in the document HEAD section and could be used by crawlers and agents to recognize the work represented by the current document.", "content": "Following up on his earlier post (which was also blogged to CrossTech here), Leigh Dodds is now [Following up on his earlier post (which was also blogged to CrossTech here), Leigh Dodds is now]3 the possibility of using machine-readable auto-discovery type links for DOIs of the form\nThese LINK tags are placed in the document HEAD section and could be used by crawlers and agents to recognize the work represented by the current document. 
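Purely as an illustration (the rel value and the sample page below are my own guesses, not necessarily the markup being proposed), here is how little it would take for a crawler to pick such a link out of the HEAD using Python's standard library:

from html.parser import HTMLParser

class DOILinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.dois = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        href = a.get("href") or ""
        # Treat any <link> whose href carries a DOI as a candidate.
        if tag == "link" and "doi" in href:
            self.dois.append(href)

page = ('<html><head>'
        '<link rel="meta" href="info:doi/10.1038/nprot.2007.43">'   # assumed rel value
        '</head><body>...</body></html>')
finder = DOILinkFinder()
finder.feed(page)
print(finder.dois)        # ['info:doi/10.1038/nprot.2007.43']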
This sounds like a great idea and we’d like to hear feedback on it.\nConcurrently at Nature we have also been considering how best to mark up in a machine-readable way DOIs appearing within a document page BODY. Current thinking is to do something along the following lines:\ndoi:\n10.1038/nprot.2007.43\nwhich allows the DOI to be presented in the preferred Crossref citation format (doi:10.1038/nprot.2007.43), to be hyperlinked to the handle proxy server (\u0026lt;a href=\u0026quot;http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nprot.2007.43\u0026quot;\u0026gt;http://0-dx-doi-org.libus.csd.mu.edu/10.1038/nprot.2007.43\u0026lt;/a\u0026gt;), and to refer to a validly registered URI form for the DOI (info:doi/10.1038/nprot.2007.43). Again, we would be real interested to hear any opinions on this proposal for inline DOI markup as well as on Leigh’s proposal for document-level DOI markup.\n(Oh, and btw many congrats to Leigh on his recent promotion to CTO, Ingenta.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/publishing-2.0/", "title": "Publishing 2.0", "subtitle":"", "rank": 1, "lastmod": "2007-03-29", "lastmod_ts": 1175126400, "section": "Blog", "tags": [], "description": "XML:UK is holding a one-day conference entitled titled “Publishing 2.0” at Bletchley Park on Wednesday 25th April 2007. Bletchley Park was the location of the United Kingdom’s main codebreaking establishment during the Second World War and is now a museum (and has a train station!). The event will examine some of the more cutting-edge applications of XML technology to publishing. With keynotes by Sean McGrath and Kate Warlock and a series of must-see presentations, this will be the place to be on the last Wednesday in April.", "content": "XML:UK is holding a one-day conference entitled titled “Publishing 2.0” at Bletchley Park on Wednesday 25th April 2007. Bletchley Park was the location of the United Kingdom’s main codebreaking establishment during the Second World War and is now a museum (and has a train station!). The event will examine some of the more cutting-edge applications of XML technology to publishing. With keynotes by Sean McGrath and Kate Warlock and a series of must-see presentations, this will be the place to be on the last Wednesday in April.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/welcome-to-otmi-discuss/", "title": "Welcome to “Otmi-discuss”", "subtitle":"", "rank": 1, "lastmod": "2007-03-23", "lastmod_ts": 1174608000, "section": "Blog", "tags": [], "description": "Just a quick note to mention that we’ve now set up a new mailing list otmi-discuss@crossref.org for public discussion of OTMI - the Open Text Mining Interface proposed by Nature. See the list information page here for details on subscribing to the list and to access the mail archives.\nAnd many thanks to the Crossref folks for hosting this for us!", "content": "Just a quick note to mention that we’ve now set up a new mailing list otmi-discuss@crossref.org for public discussion of OTMI - the Open Text Mining Interface proposed by Nature. 
See the list information page here for details on subscribing to the list and to access the mail archives.\nAnd many thanks to the Crossref folks for hosting this for us!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/xmp-capabilities-extended/", "title": "XMP Capabilities Extended", "subtitle":"", "rank": 1, "lastmod": "2007-03-22", "lastmod_ts": 1174521600, "section": "Blog", "tags": [], "description": "This post on Adobe’s Creative Solutions PR blog may be worth a gander:\n_“This new update, the Adobe XMP 4.1, provides new libraries for developers to read, write and update XMP in popular image, document and video file formats including: JPEG, PSD, TIFF, AVI, WAV, MPEG, MP3, MOV, INDD, PS, EPS and PNG. In addition, the rewritten XMP 4.1 libraries have been optimized into two major components, the XMP Core and the XMP Files.", "content": "This post on Adobe’s Creative Solutions PR blog may be worth a gander:\n_“This new update, the Adobe XMP 4.1, provides new libraries for developers to read, write and update XMP in popular image, document and video file formats including: JPEG, PSD, TIFF, AVI, WAV, MPEG, MP3, MOV, INDD, PS, EPS and PNG. In addition, the rewritten XMP 4.1 libraries have been optimized into two major components, the XMP Core and the XMP Files.\nThe XMP Core enables the parsing, manipulating and serializing of XMP data, and the XMP Files enables the reading, rewriting, and injecting serialized XMP into the multiple file formats. The XMP Files can be thought of as a “file I/O” component for reading and writing the metadata that is manipulated by the XMP Core component.\nSupported development environments for Adobe’s XMP 4.1 are: XCode 2.3 for Macintosh universal binaries, Visual Studio 2005 (VC8) for Windows, and Eclipse 3.x on any available platform. The XMP Core is available as C++ and Java sources with project files for the Macintosh, Windows and Linux platform. A Java version of XMP Files is under consideration for a future update.”_\nAnd now I just read that last sentence again: “A Java version of XMP Files is under consideration for a future update.” So, how hard do they really want to make uptake of XMP be? 
Am surprised they’re even still considering offering full Java support, and not offering also anything in the way of support for glue languages such as Perl, Python, or Ruby.\nWhich leads to the question: Is anybody here using XMP and had any success to relate or lessons for the rest of us?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/siia-executive-facetime-webcast-series/", "title": "SIIA Executive FaceTime Webcast Series", "subtitle":"", "rank": 1, "lastmod": "2007-03-21", "lastmod_ts": 1174435200, "section": "Blog", "tags": [], "description": "We thought that this program might interest our CrossTech bloggers.\nHoward Ratner, Chief Technology Officer, Executive Vice-President at Nature Publishing Group is on the agenda.\nMore information is available at: https://web.archive.org/web/20070322234448/http://www.siia.net/.\n", "content": "We thought that this program might interest our CrossTech bloggers.\nHoward Ratner, Chief Technology Officer, Executive Vice-President at Nature Publishing Group is on the agenda.\nMore information is available at: https://web.archive.org/web/20070322234448/http://www.siia.net/.\nSIIA Executive FaceTime Webcast Series\nHoward Ratner, EVP/CTO, Nature Publishing Group\nWednesday, March 28, 2007\n12:00PM – 1:30PM EST\nThe SIIA is pleased to announce that Howard Ratner of Nature Publishing Group will be our guest for the upcoming Executive FaceTime. This live webcast series features one-on-one conversations between leading industry executives and host Hal Espo. Participation is encouraged, the web audience is invited to submit questions posed through the host. Past guests include Tad Smith, CEO of Reed Business Information and L. Gordon Crovitz, EVP of Dow Jones \u0026amp; Company. Registration is free to SIIA members and non-members alike; to participate, you must register by the end of the day on Tuesday, March 27th.\nHoward Ratner\nHoward Ratner is Chief Technology Officer, Executive Vice-President, for the Nature Publishing Group. Based in New York, Howard is in charge of NY operations and has global responsibilities for Production and Manufacturing, Web Development, Web Services, Content Services, and Information Technology across all NPG products. Howard’s prior positions include Director, Electronic Publishing \u0026amp; Production for Springer-Verlag New York, as well as the North American Manager for LINK, and a member of the production staff at John Wiley \u0026amp; Sons. He also serves on the Crossref board, PubMed Central, CORDS and LOCKSS advisory committees, and is a former chair for both the AAP/PSP DOI subcommittee and the DOI-X project.\nHal Espo\nHal Espo is President of Contextual Connections, LLC, a NYC-based consultancy which focuses exclusively in the digital services arena, including digital content, distribution, and applications. Hal has more than 25 years experience as an operating executive as well as a business and product development professional in the electronic information industry. 
He served as Chief Operating Officer of Index Stock Imagery, Inc., a web-based commercial stock photography and illustration vendor, and previously was the Chief Operating Officer at CORSEARCH, Inc., a trademark research firm serving Fortune 500 companies and law firms.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/agile-descriptions/", "title": "Agile Descriptions", "subtitle":"", "rank": 1, "lastmod": "2007-03-20", "lastmod_ts": 1174348800, "section": "Blog", "tags": [], "description": "Apologies to blog yet another of my posts to Nascent, this time on Agile Descriptions - a talk I gave the week before last before the LC Future of Bibliographic Control WG. (Don’t worry - I shan’t be making it a habit of this.) But certain aspects of the talk (powerpoint is here) may be interesting to this readership, in particular the slides on microformats and how these are tentatively being deployed on Nature Network, and also a detailed anatomy of OTMI files.", "content": "Apologies to blog yet another of my posts to Nascent, this time on Agile Descriptions - a talk I gave the week before last before the LC Future of Bibliographic Control WG. (Don’t worry - I shan’t be making it a habit of this.) But certain aspects of the talk (powerpoint is here) may be interesting to this readership, in particular the slides on microformats and how these are tentatively being deployed on Nature Network, and also a detailed anatomy of OTMI files.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/new-look-web-feeds-from-nature/", "title": "New-Look Web Feeds from Nature", "subtitle":"", "rank": 1, "lastmod": "2007-03-15", "lastmod_ts": 1173916800, "section": "Blog", "tags": [], "description": "I just posted this entry on Nascent, Nature’s Web Publishing blog, about Nature’s new look for web feeds which essentially boils down to our using the RSS 1.0 ‘mod_content’ module to add in a rich content description for human consumption to complement our long-standing commitment to machine-readable descriptions. We are thus able to deliver full citation details in our RSS feeds as XHTML in CDATA sections for humans and as DC/PRISM properties for machines, the whole encoded in our feed format of choice - RSS 1.", "content": "I just posted this entry on Nascent, Nature’s Web Publishing blog, about Nature’s new look for web feeds which essentially boils down to our using the RSS 1.0 ‘mod_content’ module to add in a rich content description for human consumption to complement our long-standing commitment to machine-readable descriptions. We are thus able to deliver full citation details in our RSS feeds as XHTML in CDATA sections for humans and as DC/PRISM properties for machines, the whole encoded in our feed format of choice - RSS 1.0. Note also that we declared our intention to publish parallel feeds in Atom which again will carry both human- and machine-readable citations. 
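To make the machine-readable half concrete: a consumer can pull the DC/PRISM citation fields, DOI included, straight out of an item with any namespace-aware XML parser. A minimal sketch in Python; the cut-down item below is illustrative only, not the actual feed markup:

import xml.etree.ElementTree as ET

item = """<item xmlns:dc="http://purl.org/dc/elements/1.1/"
               xmlns:prism="http://prismstandard.org/namespaces/1.2/basic/">
  <dc:title>Cosmology: Ripples of early starlight</dc:title>
  <dc:identifier>doi:10.1038/445037a</dc:identifier>
  <prism:publicationName>Nature</prism:publicationName>
  <prism:startingPage>37</prism:startingPage>
</item>"""

ns = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "prism": "http://prismstandard.org/namespaces/1.2/basic/",
}
elem = ET.fromstring(item)
citation = {
    "title": elem.findtext("dc:title", namespaces=ns),
    "doi": elem.findtext("dc:identifier", namespaces=ns),
    "journal": elem.findtext("prism:publicationName", namespaces=ns),
    "page": elem.findtext("prism:startingPage", namespaces=ns),
}
print(citation)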
Further details on the RSS 1.0/Atom paired feeds will be posted here in the near future.\nPerhaps of special note we have added in the DOI in our descriptions in standard Crossref citation format and linked it to the DX resolver.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/indexing-urls/", "title": "Indexing URLs", "subtitle":"", "rank": 1, "lastmod": "2007-03-08", "lastmod_ts": 1173312000, "section": "Blog", "tags": [], "description": "Leigh Dodds proposes in this post some solutions to persistent linking using web crawlers and social bookmarking.\n“When I use del.icio.us, CiteULike, or Connotea or other social bookmarking service, I end up bookmarking the URL of the site I’m currently using. Its this specific URL that goes into their database and associated with user-assigned tags, etc.\n…\nA more generally applicable approach to addressing this issue, one that is not specific to academic publishing, would be to include, in each article page, embedded metadata that indicates the preferred bookmark link.", "content": "Leigh Dodds proposes in this post some solutions to persistent linking using web crawlers and social bookmarking.\n“When I use del.icio.us, CiteULike, or Connotea or other social bookmarking service, I end up bookmarking the URL of the site I’m currently using. Its this specific URL that goes into their database and associated with user-assigned tags, etc.\n…\nA more generally applicable approach to addressing this issue, one that is not specific to academic publishing, would be to include, in each article page, embedded metadata that indicates the preferred bookmark link. The DOI could again be pressed into service as the preferred bookmarking link.”\nHe’s inviting feedback. I’d certainly like to hear what others may think of these suggestions.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/eprintweb.org/", "title": "eprintweb.org", "subtitle":"", "rank": 1, "lastmod": "2007-03-02", "lastmod_ts": 1172793600, "section": "Blog", "tags": [], "description": "IOP has created an instance of the arXiv repository called eprintweb.org at https://web.archive.org/web/20130803071935/http://eprintweb.org/S/. What’s the difference from arXiv? From the eprinteweb.org site - “We have focused on your experience as a user, and have addressed issues of navigation, searching, personalization and presentation, in order to enhance that experience. We have also introduced reference linking across the entire content, and enhanced searching on all key fields, including institutional address.”\nThe site looks very good and it’s interesting to see a publisher developing a service directly engaging with a repository.\n", "content": "IOP has created an instance of the arXiv repository called eprintweb.org at https://web.archive.org/web/20130803071935/http://eprintweb.org/S/. What’s the difference from arXiv? From the eprinteweb.org site - “We have focused on your experience as a user, and have addressed issues of navigation, searching, personalization and presentation, in order to enhance that experience. We have also introduced reference linking across the entire content, and enhanced searching on all key fields, including institutional address.”\nThe site looks very good and it’s interesting to see a publisher developing a service directly engaging with a repository.\nSome interesting points to note: There are DOI links to published articles - http://www.eprintweb.org/S/article/astro-ph/0603001 - which IOP gets from Crossref. 
References in the preprints are also linked - http://www.eprintweb.org/S/article/astro-ph/0603001/refs\nCrossref will soon be making available an author/title only query for repositories to use to find DOIs for published papers when the preprint doesn’t have the full citation. Many authors don’t go back to their preprints to update the reference to the published version but the new Crossref query will enable the repositories to do this automatically.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/open-content/", "title": "Open Content", "subtitle":"", "rank": 1, "lastmod": "2007-03-02", "lastmod_ts": 1172793600, "section": "Blog", "tags": [], "description": "In light of my earlier post on OTMI, the mail copied below from Sebastian Hammer at Index Data about open content may be of interest. They are looking to compile a listing of web sources of open content - see this page for further details.\n(Via XML4lib and other lists.)\n", "content": "In light of my earlier post on OTMI, the mail copied below from Sebastian Hammer at Index Data about open content may be of interest. They are looking to compile a listing of web sources of open content - see this page for further details.\n(Via XML4lib and other lists.)\n_“Hi All,\n(apologies for any cross-posting)\nAt Index Data, we have long felt that there were really interesting\nsources of open content out there that was not being utilized as well as\nit could be because it was hidden away in websites. We’re a software\ncompany specializing in information retrieval applications, so\neventually we asked ourselves, ‘what could we all do with this stuff if\nit were exposed using our favorite open standards’.\nWe thought it was worth finding out, so we have set up processes to\nregularly retrieve indexes of major open content resources, and make\nthem available using SRU and Z39.50. We’ve started with the Open Content\nAlliance and Project Gutenberg (two quite different approaches to\nproducing free eBooks), Wikipedia, the Open Directory Project, and\nOAIster. More is on the way.\nConnection information and more details are available at\nhttps://web.archive.org/web/20070325152849/http://indexdata.com//opencontent/.\nThe kind of metadata you can get from these sources varies. The Open\nContent Alliance captures MARC records along with the scanned books,\nwhich makes for excellent metadata. Many of the others produce some\nvariation of DublinCore. Our service, through either Z39.50 or SRU/W,\nexposes both MARC (or MARCXML) and DublinCore in XML for all sources.\nWe’ve created a new mailing list to help inform people of changes to the\nservices, new resources available, etc. Signup at\nhttp://lists.indexdata.dk/cgi-bin/mailman/listinfo/oclist/ .\nWe sincerely hope you will find these resources exciting and useful.\nFeel free to get in touch if you have questions or input.\n-Sebastian\n—\nSebastian Hammer, Index Data\nquinn@indexdata.com www.indexdata.com\nPh: (603) 209-6853 Fax: (866) 383-4485”_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/otmi-an-update/", "title": "OTMI - An Update", "subtitle":"", "rank": 1, "lastmod": "2007-03-02", "lastmod_ts": 1172793600, "section": "Blog", "tags": [], "description": "We’ve just posted an update about OTMI (the Open Text Mining Interface) on our Web Publishing blog Nascent. 
This post details the following changes:\nContact email - otmi@nature.com Wiki - http://opentextmining.org/ Repository - https://web.archive.org/web/20090706181310/http://0-www-nature-com.libus.csd.mu.edu/otmi/journals.opml The OTMI content repository currently provides two years’ worth of full text across five of our titles:\nNature Nature Genetics Nature Reviews Drug Discovery Nature Structural \u0026amp; Molecular Biology The Pharmacogenomics Journal See the wiki for draft technical specs and for a sample script to generate the OTMI files.", "content": "We’ve just posted an update about OTMI (the Open Text Mining Interface) on our Web Publishing blog Nascent. This post details the following changes:\nContact email - otmi@nature.com Wiki - http://opentextmining.org/ Repository - https://web.archive.org/web/20090706181310/http://0-www-nature-com.libus.csd.mu.edu/otmi/journals.opml The OTMI content repository currently provides two years’ worth of full text across five of our titles:\nNature Nature Genetics Nature Reviews Drug Discovery Nature Structural \u0026amp; Molecular Biology The Pharmacogenomics Journal See the wiki for draft technical specs and for a sample script to generate the OTMI files. And feel free to add to the wiki on existing pages or create new pages as required.\nWe’re very much looking forward to any feedback you may have on what we consider to be a very exciting new initiative for scholarly publishers.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/sir-timbls-testimony/", "title": "Sir TimBL’s Testimony", "subtitle":"", "rank": 1, "lastmod": "2007-03-02", "lastmod_ts": 1172793600, "section": "Blog", "tags": [], "description": "Just in case anybody may not have seen this, here‘s the testimony of Sir Tim Berners-Lee yesterday before a House of Representatives Subcommittee on Telecommunications and the Internet. Required reading.\n(Via this post yesterday in the Save the Internet blog.)", "content": "Just in case anybody may not have seen this, here‘s the testimony of Sir Tim Berners-Lee yesterday before a House of Representatives Subcommittee on Telecommunications and the Internet. Required reading.\n(Via this post yesterday in the Save the Internet blog.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/spinning-around/", "title": "“Spinning Around”", "subtitle":"", "rank": 1, "lastmod": "2007-02-23", "lastmod_ts": 1172188800, "section": "Blog", "tags": [], "description": "There’s a great exposition of FRBR (the Functional Requirements for Bibliographic Records model “work -\u0026gt; expression -\u0026gt; manifestation -\u0026gt; item“) in this post from The FRBR Blog on De Revolutionibus as described in The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus by Owen Gingerich. See post for the background and here (103 KB PNG) for a map of the FRBR relationships.\n(Yes, and a twinkly star in the title too.", "content": "There’s a great exposition of FRBR (the Functional Requirements for Bibliographic Records model “work -\u0026gt; expression -\u0026gt; manifestation -\u0026gt; item“) in this post from The FRBR Blog on De Revolutionibus as described in The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus by Owen Gingerich. See post for the background and here (103 KB PNG) for a map of the FRBR relationships.\n(Yes, and a twinkly star in the title too. 
;~)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/kay-sera-sera/", "title": "Kay Sera Sera", "subtitle":"", "rank": 1, "lastmod": "2007-02-20", "lastmod_ts": 1171929600, "section": "Blog", "tags": [], "description": "Not specifically publishing-related, but here is a fun rant interview with Alan Kay titled The PC Must Be Revamped—Now.\nMy favorite bit…\n“…in the last few years I’ve been asking computer scientists and programmers whether they’ve ever typed E-N-G-E-L-B-A-R-T into Google-and none of them have. I don’t think you could find a physicist who has not gone back and tried to find out what Newton actually did. It’s unimaginable. Yet the computing profession acts as if there isn’t anything to learn from the past, so most people haven’t gone back and referenced what Engelbart thought.", "content": "Not specifically publishing-related, but here is a fun rant interview with Alan Kay titled The PC Must Be Revamped—Now.\nMy favorite bit…\n“…in the last few years I’ve been asking computer scientists and programmers whether they’ve ever typed E-N-G-E-L-B-A-R-T into Google-and none of them have. I don’t think you could find a physicist who has not gone back and tried to find out what Newton actually did. It’s unimaginable. Yet the computing profession acts as if there isn’t anything to learn from the past, so most people haven’t gone back and referenced what Engelbart thought. ”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/%238220were-sorry%238230%238221/", "title": "“We’re sorry…”", "subtitle":"", "rank": 1, "lastmod": "2007-02-19", "lastmod_ts": 1171843200, "section": "Blog", "tags": [], "description": "Update: All apologies to Google. Apparently this was a problem at our end which our IT folks are currently investigating. (And I thought it was just me. 🙂\nJust managed to get this page:\n_“Google Error\nWe’re sorry…\n… but your query looks similar to automated requests from a computer virus or spyware application. To protect our users, we can’t process your request right now.\nWe’ll restore your access as quickly as possible, so try again soon.", "content": "Update: All apologies to Google. Apparently this was a problem at our end which our IT folks are currently investigating. (And I thought it was just me. 🙂\nJust managed to get this page:\n_“Google Error\nWe’re sorry…\n… but your query looks similar to automated requests from a computer virus or spyware application. To protect our users, we can’t process your request right now.\nWe’ll restore your access as quickly as possible, so try again soon. In the meantime, if you suspect that your computer or network has been infected, you might want to run a virus checker or spyware remover to make sure that your systems are free of viruses and other spurious software.\nWe apologize for the inconvenience, and hope we’ll see you again on Google.\nTo continue searching, please type the characters you see below:”_\nAnd my search request?\nark\n(Actual query is here as argument to the continue parameter.)\nWas hoping to find results related to the The ARK Persistent Identifier Scheme. Maybe I missed something but I’m not impressed.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/at-last-uris-for-inchi/", "title": "At Last! 
URIs for InChI", "subtitle":"", "rank": 1, "lastmod": "2007-02-19", "lastmod_ts": 1171843200, "section": "Blog", "tags": [], "description": "The info registry has now added in the InChI namespace (see registry entry here) which now means that chemical compounds identified by InChIs (IUPAC‘s International Chemical Identifiers) are expressible in URI form and thus amenable to many Web-based description technologies that use URI as the means to identify objects, e.g. XLink, RDF, etc. As an example, the InChI identifier for naphthalene is\nInChI=1/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H\nand can now be legitimately expressed in URI form as", "content": "The info registry has now added in the InChI namespace (see registry entry here) which now means that chemical compounds identified by InChIs (IUPAC‘s International Chemical Identifiers) are expressible in URI form and thus amenable to many Web-based description technologies that use URI as the means to identify objects, e.g. XLink, RDF, etc. As an example, the InChI identifier for naphthalene is\nInChI=1/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H\nand can now be legitimately expressed in URI form as\ninfo:inchi/InChI=1/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H\nThe info URI scheme exists to support legacy namespaces get a leg up onto the Web. Registered namespaces include PubMed identifiers, DOIs, handles, ADS bibcodes, etc. Increasingly we’ll be expecting to see identifiers (both new and old) represented in a common form - URI.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/stick-this-in-your-pipe/", "title": "Stick this in your pipe…", "subtitle":"", "rank": 1, "lastmod": "2007-02-19", "lastmod_ts": 1171843200, "section": "Blog", "tags": [], "description": "Rob Cornelius has a practical little demo of using Yahoo! pipes against some Ingenta feeds.\nLike Tony, I keep experiencing speed/stability problems while accessing pipes so I haven’t yet become a crack-pipes-head.", "content": "Rob Cornelius has a practical little demo of using Yahoo! pipes against some Ingenta feeds.\nLike Tony, I keep experiencing speed/stability problems while accessing pipes so I haven’t yet become a crack-pipes-head.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/openurl-podcast/", "title": "OpenURL Podcast", "subtitle":"", "rank": 1, "lastmod": "2007-02-17", "lastmod_ts": 1171670400, "section": "Blog", "tags": [], "description": "Jon Udell interviews Dan Chudnov about OpenURL, see his blog entry: “A conversation with Dan Chudnov about OpenURL, context-sensitive linking, and digital archiving”. The podcast of the interview is available here.\nInteresting to see these kind of subjects beginning to be covered by a respected technology writer like Jon. As he says in his post:\n“I have ventured into this confusing landscape because I think that the issues that libraries and academic publishers are wrestling with — persistent long-term storage, permanent URLs, reliable citation indexing and analysis — are ones that will matter to many businesses and individuals.", "content": "Jon Udell interviews Dan Chudnov about OpenURL, see his blog entry: “A conversation with Dan Chudnov about OpenURL, context-sensitive linking, and digital archiving”. The podcast of the interview is available here.\nInteresting to see these kind of subjects beginning to be covered by a respected technology writer like Jon. 
As he says in his post:\n“I have ventured into this confusing landscape because I think that the issues that libraries and academic publishers are wrestling with — persistent long-term storage, permanent URLs, reliable citation indexing and analysis — are ones that will matter to many businesses and individuals. As we project our corporate, professional, and personal identities onto the web, we’ll start to see that the long-term stability of those projections is valuable and worth paying for.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/opendocument-1.1-is-oasis-standard/", "title": "OpenDocument 1.1 is OASIS Standard", "subtitle":"", "rank": 1, "lastmod": "2007-02-15", "lastmod_ts": 1171497600, "section": "Blog", "tags": [], "description": "From the OASIS Press Release:\n“Boston, MA, USA; 13 February 2007 — OASIS, the international standards consortium, today announced that its members have approved version 1.1 of the Open Document Format for Office Applications (OpenDocument) as an OASIS Standard, a status that signifies the highest level of ratification.”", "content": "From the OASIS Press Release:\n“Boston, MA, USA; 13 February 2007 — OASIS, the international standards consortium, today announced that its members have approved version 1.1 of the Open Document Format for Office Applications (OpenDocument) as an OASIS Standard, a status that signifies the highest level of ratification.”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/authors/amy-brand/", "title": "Amy Brand", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Authors", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crossref-author-id-meeting/", "title": "Crossref Author ID meeting", "subtitle":"", "rank": 1, "lastmod": "2007-02-14", "lastmod_ts": 1171411200, "section": "Blog", "tags": [], "description": "February 5, 2007, Washington DC Crossref invited a number of people to attend an information gathering session on the topic of Author IDs. The purpose of the meeting was to determine:\nAbout whether there is an industry need for a central or federated contributor id registry;\nwhether Crossref should have a role in creating such a registry;\nhow to proceed in a way that builds upon existing systems and standards.", "content": "February 5, 2007, Washington DC Crossref invited a number of people to attend an information gathering session on the topic of Author IDs. 
The purpose of the meeting was to determine:\nAbout whether there is an industry need for a central or federated contributor id registry;\nwhether Crossref should have a role in creating such a registry;\nhow to proceed in a way that builds upon existing systems and standards.\nIn attendance: Jeff Baer, CSA; Judith Barnsby, IOPP; Geoff Bilder, Crossref; Amy Brand, Crossref; David Brown, British Library; Richard Cave, PLoS (remote); Bill Carden, ScholarOne; Gregg Gordon, SSRN; Gerry Grenier, IEEE; Michael Healy, BISG (remote); Helen Henderson, Ringgold; Thomas Hickey, OCLC (remote); Terry Hulburt, IOPP; Tim Ingoldsby, AIP; Ruth Jones, Britsh Library; Marl Land, Parity; Dave Martinson, ACS; Georgios Papadapoulos, Atypon (with two colleagues); Jim Pringle, Thomson; Chris Rosin, Parity; Tim Ryan, Wiley; Philippa Scoones, Blackwell; Chris Shillum, Elsevier; Neil Smalheiser, UIC (remote); Barbara Tillett, LoC; Vetle Torvik, UIC (remote); Charles Trowbridge, ACS; Amanda Ward, Nature (remote); Stu Weibel, OCLC (remote); David Williamson, LoC;\nNotes Amy Brand opened the meeting and welcomed attendees. She said the goal of the meeting was really nothing more than to launch a discussion on a topic of author identifiers and hear from participants re their views and experiences on unique identifiers for individuals — be they authors, contributors, or otherwise. We went around the table and everyone introduced themselves. Amy then introduced Geoff Bilder as moderator of the meeting. Geoffrey Bilder said that Crossref’s members had indicated that they would like Crossref to explore whether it could play a role in creating an author identification system. The members feel that an “author DOI” scheme would help them with production and editorial issues. They also recognize that such a scheme could fuel numerous downstream applications. Geoff apologized for sounding like Rumsfeld and said, we know that there is a lot that we don’t know, but we don’t know exactly what we don’t know. We have just started this project and we wanted to get some feedback from various groups concerned with scholarly publishing in order to understand what people would like to see in regards to author identification schemes and what initiatives/efforts we need to be aware of. He commented that the currently assembled group failed to include the open web community, and their input would be important too as this project develops. The meeting then turned to short project summaries from others.\nProject Summaries Jim Pringle gave a short PPT presentation (attached) and reported that Thomson first started creating its own author ids in 2000, in relation to the launch of its Highly Cited service. The focus for Thomson in this area has been on author disambiguation. Jim said that the focus for Crossref in this area would be a system that could respond to the question “who are you and what have you written”; he also raised concern about matters of author privacy.\nMichael Healy then discussed the International Standard Party Identifier (ISPI). ISO TC 46/SC 9 is developing ISPI as a new international identification system for the parties (persons and corporate bodies) involved in the creation and production of content entities. Work on the ISPI project began in August 2006 when the New Work Item proposal was approved by the member bodies of ISO TC 46/SC 9. The first meeting of the ISPI project group was held at CISAC’s offices in Paris on September 12, 2006. 
This project has strong representation the library sector, RRO’s, booksellers, music and film/TV industries represented as well. Mr. René Lloret Linares from CISAC (International Confederation of Societies of Authors and Composers) chairs the group; until now CISAC has been using a proprietary id scheme and would like to move to use of an open standard to identify all contributors and creators. Michael was asked whether membership in the project group was open, and he replied that anyone can attend meetings as observers but that voting is restricted to those nominated by their own national standards organization. Chris Shillum then asked the group to think about developed use cases for the publishing industry, and how they differ from potential ISPI applications.\nHelen Henderson reported on the Journals Supply Chain project, a pilot that aims to discover whether the creation of a standard, commonly used identifier for Institutions (customer ids) will be beneficial to parties involved in the journal supply chain. The pilot models interactions between each party — library, publisher, agent. 35 publishers are participating thus far. Helen also said there is a clear need for sub-institutional level ids. Helen also pointed out the value of associating author and institutional ids. On the topic of institutions, Tim Ingoldsby pointed out that both academic and corporate institutions are important. Chris Rosin said Parity is working on author merger and disambiguation as core use cases of author ids for its publisher clients. In particular, they have developed automated merging of instances into profiles, proceeding with conservative bias on what constitutes a match/merge. Parity is also looking at applying author cv’s onto profiles. This will require contributors to participate, and they will need to make it as easy as possible for contributors. Chris said that authentication, trust, and privacy are key considerations; even collecting public information in one place raises privacy issues. Judith Barnsby pointed out that the UK has stronger data protection rules than the US, re privacy. Discussion among the group at this point in the meeting resulted in identifying two different areas in author id assignment — (1) ongoing assignment, (2) retroactive assignment. Geoff said this distinction was useful for Crossref, who could more easily address ongoing assignment via publishers working directly with authors.\nNeil Smalheiser, a neuroscientist at UIC, reported on the Arrowsmith Project, a statistical model based on multiple features of the Medline database. The goal of the model is to predict the probability that any two papers are written by the same person. The project’s “Authority” tool weighs criteria such as researcher affiliation, co-author names, journal title, and medical subject headings to identify the papers most likely written by a target author. For details: arrowsmith.psych.uic.edu/arrowsmith_uic/index.html http://arrowsmith.psych.uic.edu/arrowsmith_uic/index.html\nDavid Williamson of LoC said he was working on name authority files, using ONIX metadata. Barbara Tillet of LoC spoke about authority files and related efforts in library world, which uses the control number, one type of unique id. She reported that IFLA (International Federation of Library Associations) has a group working on how to share authority numbers, which has actually been in discussion since the 1970s; there is to be an IFLA-IPA meeting in April 2007. 
The library community is eager to share what it knows and what it has developed this far. Barbara suggested that use of Dublin Core format here may be the best way to go. Different communities will no doubt need different ids. What is needed in the library community is an international, multi-lingual solution, based on unicode, connecting regional authority files. Publishers will want to take advantage of library author-ity files for retrospective identifications.\nThomas Hickey of OCLC mentioned the WorldCat Identity service, which summarizes information for 20 million authors searchable in WorldCat. Gerry Grenier reported that IEEE was about to implement its own author disambiguation and id system, and he offered that this metadata could be fed into a Crossref system. Different participants had different views on whether the goal here should be a “light and non-centralized” (or federated) approach versus a centralized registry with one place to link authors across all publishers, versus a hybrid — centralized source to handout unique id, but publisher data could be distributed. There could also be a network of registration agencies working in a federated system. Different participants also had different views on Crossref’s role. Several publishers at the meeting supported Crossref’s role, especially in the STM space, whereas there was concern raised among some parties about whether Crossref was an appropriate choice for a system that will need to be “available everywhere to everybody”, and others re-iterated the importance of giving the academic community a voice in the development of such a service Discussion then turned to use cases — the question being, what problems would having an author id help you solve in your organization?\nUSE CASES ARTICULATED AT MEETING:\nFor RROs, known use case is to facilitate distribution of monies owed to authors;;\nfor booksellers, disambiguation in search;;\nto understand the provenance of documents;\nsearch — to find works for particular person; self presentation — how can I effectively present myself and my work to the world?;\ncross-walks — associating various life sciences ids, such as PubChem;\nidentity of society members;\nidentity of research funding institutions;\ndisambiguation and attribution;\nlinking authors and institutions;\nfor enhancing peer review system — need unique ids to share information with various departments;\nto better know the value of our authors — for activities such as peer review, tracking stats on authors, article downloads, and individualized or personalized services;\nwith a central registry, author only has one place they have to update their information;\nauthors will want the information to be portable when they move from inst to another — “where is Jeff Smith now?” is one such question;\nto associate connected authors with one another;\nto aggregate info on where (what institution) research is being done on a particular topic;\nprivacy can be enhanced with author DOIs;\nsharing info from library to library;\ncluster all the works of a particular person for search purposes;\nstats about authors — “how many times has this author tried and been rejected from Nature?” for instance.\n**NEXT STEPS: Please watch the CrossTech blog for ongoing discussion **\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/microsoft-to-support-openid/", "title": "Microsoft to Support OpenID", "subtitle":"", "rank": 1, "lastmod": "2007-02-08", "lastmod_ts": 1170892800, "section": "Blog", "tags": [], 
"description": "Kim Cameron, Microsoft’s Identity Czar and member of the Identity Gang, comments on Microsoft’s announcement that they will support OpenID. Another sign that federated identity schemes are gaining traction and OpenID is likely to emerge as a standard the publishers are going to want to grapple with soon.\nThis follows Doc Searl’s comments on the notion of “Creator Relationship Management” where he speculates that the techniques being used in federated identity schemes and the Creative Commons can be combined to create a new “silo-free” value chain amongst creators, producers and distributors.", "content": "Kim Cameron, Microsoft’s Identity Czar and member of the Identity Gang, comments on Microsoft’s announcement that they will support OpenID. Another sign that federated identity schemes are gaining traction and OpenID is likely to emerge as a standard the publishers are going to want to grapple with soon.\nThis follows Doc Searl’s comments on the notion of “Creator Relationship Management” where he speculates that the techniques being used in federated identity schemes and the Creative Commons can be combined to create a new “silo-free” value chain amongst creators, producers and distributors.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/remixing-rss/", "title": "Remixing RSS", "subtitle":"", "rank": 1, "lastmod": "2007-02-08", "lastmod_ts": 1170892800, "section": "Blog", "tags": [], "description": "Niall Kennedy has a post about the newly released Yahoo! Pipes. As he says:\n“Yahoo! Pipes lets any Yahoo! registered user enter a set of data inputs and filter their results. You might splice a feed of your latest bookmarks on del.icio.us with the latest posts from your blog and your latest photographs posted to Flickr.”\nHe also warns about possible implications for web publishers:\n“Yahoo! Pipes makes it easy to remove advertising from feeds or otherwise reformat your content.", "content": "Niall Kennedy has a post about the newly released Yahoo! Pipes. As he says:\n“Yahoo! Pipes lets any Yahoo! registered user enter a set of data inputs and filter their results. You might splice a feed of your latest bookmarks on del.icio.us with the latest posts from your blog and your latest photographs posted to Flickr.”\nHe also warns about possible implications for web publishers:\n“Yahoo! Pipes makes it easy to remove advertising from feeds or otherwise reformat your content.”\nNote: As yet, I have not been able to access the site. Interested to learn if anybody else has and what their experiences have been.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rss-validator-in-the-spotlight/", "title": "RSS Validator in the Spotlight", "subtitle":"", "rank": 1, "lastmod": "2007-02-08", "lastmod_ts": 1170892800, "section": "Blog", "tags": [], "description": "Sam Ruby responds to Brian Kelly’s post about the RSS Validator and its treatment of RSS 1.0, or rather, RSS 1.0 modules. As Ruby notes:\n“There is no question that RSS 1.0 is widely deployed. RSS 1.0 has a minimal core. The validation for that core is pretty solid.”\nNot sure if I’d seen that RSS comparison table before, but it is reassuring. (Oh, and see the really simple case off to the right.", "content": "Sam Ruby responds to Brian Kelly’s post about the RSS Validator and its treatment of RSS 1.0, or rather, RSS 1.0 modules. As Ruby notes:\n“There is no question that RSS 1.0 is widely deployed. RSS 1.0 has a minimal core. 
The validation for that core is pretty solid.”\nNot sure if I’d seen that RSS comparison table before, but it is reassuring. (Oh, and see the really simple case off to the right. 😉\nGood point, anyway about contributing test cases. I guess we should really submit a PRISM test case. And yes, the Validator is somewhat buggy as some recent testing confirms. On which more later.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/searchulike/", "title": "SearchULike", "subtitle":"", "rank": 1, "lastmod": "2007-02-05", "lastmod_ts": 1170633600, "section": "Blog", "tags": [], "description": "Nelson Minar has a short post on Google’s Search History ‘feature’ and how it can be used to enhance your search experience. I guess that should be SearchULike.", "content": "Nelson Minar has a short post on Google’s Search History ‘feature’ and how it can be used to enhance your search experience. I guess that should be SearchULike.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/whats-my-link/", "title": "What’s My Link?", "subtitle":"", "rank": 1, "lastmod": "2007-02-05", "lastmod_ts": 1170633600, "section": "Blog", "tags": [], "description": "Simon Willison has a great piece here about disambiguating URLs. Best practice on creating and publishing URLs is obviously something of interest to any publisher. See this excerpt from Simon’s post:\n_“Here’s a random example, plucked from today’s del.icio.us popular. convinceme.net is a new online debating site (tag clouds, gradient fills, rounded corners). It’s listed in del.icio.us a total of four times!\nhttps://web.archive.org/web/20070203050251/http://www.convinceme.net/ has 36 saves\nhttps://web.archive.org/web/20070202182238/http://www.convinceme.net/index.php has 148 saves", "content": "Simon Willison has a great piece here about disambiguating URLs. Best practice on creating and publishing URLs is obviously something of interest to any publisher. See this excerpt from Simon’s post:\n_“Here’s a random example, plucked from today’s del.icio.us popular. convinceme.net is a new online debating site (tag clouds, gradient fills, rounded corners). It’s listed in del.icio.us a total of four times!\nhttps://web.archive.org/web/20070203050251/http://www.convinceme.net/ has 36 saves\nhttps://web.archive.org/web/20070202182238/http://www.convinceme.net/index.php has 148 saves\nhttps://web.archive.org/web/20070203050251/http://www.convinceme.net/ has 211 saves\nhttps://web.archive.org/web/20070202182238/http://www.convinceme.net/index.php has 38 saves\nCombined that’s 433 saves; much more impressive, and more likely to end up at the top of a social sharing sites.”_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/comments-and-trackbacks/", "title": "comments and trackbacks", "subtitle":"", "rank": 1, "lastmod": "2007-02-02", "lastmod_ts": 1170374400, "section": "Blog", "tags": [], "description": "Due to spam the comments and trackbacks were turned off on the blog since last week. Comments can be moderated so they have now been turned back on. Glad to see postings picking up.", "content": "Due to spam the comments and trackbacks were turned off on the blog since last week. Comments can be moderated so they have now been turned back on. 
Glad to see postings picking up.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/hooray/", "title": "Hooray!", "subtitle":"", "rank": 1, "lastmod": "2007-02-02", "lastmod_ts": 1170374400, "section": "Blog", "tags": [], "description": "Somebody is both reading (and recommending) this blog - see Lorcan’s post here. Just my opinion but would be really good to see more librarians following this in order to arrive at better consensus.", "content": "Somebody is both reading (and recommending) this blog - see Lorcan’s post here. Just my opinion but would be really good to see more librarians following this in order to arrive at better consensus.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/rsc-launches-semantic-enrichment-of-journal-articles/", "title": "RSC launches semantic enrichment of journal articles", "subtitle":"", "rank": 1, "lastmod": "2007-02-01", "lastmod_ts": 1170288000, "section": "Blog", "tags": [], "description": "The RSC has gone live today with the results of Project Prospect, introducing semantic enrichment of journal articles across all our titles. I’m pretty sure we’re the first primary research publisher to do anything of this scope.\nWe’re identifying chemical compounds and providing synonyms, InChIs (IUPAC’s Chemical Identifier), downloadable CML (Chemical Markup Language), SMILES strings and 2D images for these compounds. In terms of subject area we’re marking up terms from the IUPAC Gold Book, and also Open Biomedical Ontology terms from the Gene, Cell, and Sequence Ontologies.", "content": "The RSC has gone live today with the results of Project Prospect, introducing semantic enrichment of journal articles across all our titles. I’m pretty sure we’re the first primary research publisher to do anything of this scope.\nWe’re identifying chemical compounds and providing synonyms, InChIs (IUPAC’s Chemical Identifier), downloadable CML (Chemical Markup Language), SMILES strings and 2D images for these compounds. In terms of subject area we’re marking up terms from the IUPAC Gold Book, and also Open Biomedical Ontology terms from the Gene, Cell, and Sequence Ontologies. All this stuff is currently available from an enhanced HTML view, with the additional information and links to related articles accessed via highlights in the article and popups.\nThe mark-up tools have been developed together with UK academics based at the Unilever Centre of Molecular Informatics and the Computing Laboratory at Cambridge University.\nAt launch we have about 100 articles from our 2007 publications, with the enhanced views currently free-to-air. Feel free to take a look.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/digital-objects/", "title": "Digital Objects", "subtitle":"", "rank": 1, "lastmod": "2007-01-30", "lastmod_ts": 1170115200, "section": "Blog", "tags": [], "description": "A couple weeks back there was a meeting of the Open Archive Initiative‘s Object Reuse and Exchange (OAI-ORE) Technical Committee hosted in the Butler Library at Columbia University, New York.\nLorcan Dempsey of OCLC blogs here on the report (PDF format) that was generated from that meeting. 
As does Pete Johnston of Eduserv here.\n", "content": "A couple weeks back there was a meeting of the Open Archive Initiative‘s Object Reuse and Exchange (OAI-ORE) Technical Committee hosted in the Butler Library at Columbia University, New York.\nLorcan Dempsey of OCLC blogs here on the report (PDF format) that was generated from that meeting. As does Pete Johnston of Eduserv here.\nBackground:\nOAI-ORE is being positioned as a companion activity to the more familiar OAI-PMH protocol for metadata harvesting. OAI-ORE relates to the expression and exchange of digital objects across repositories rather than just the exchange of metadata about those objects.\nThe basic problem is that scholarly communication deals in units which are compound resulting from a complex of documents and/or datasets expressed in multiple formats, versions, relationships, etc. The underlying web architecture provides a fairly simple model of resources (identified with URIs) which are interconnected and can be interacted with by retrieving representations of those resources. In practice, this usually results in unique URIs (and thus resources) for each representation - think of one URI for an HTML document, another for a PDF document of the same work, and yet new URIs for those same document formats for a new version of the work. Clearly, all these representations (or documents) are related, and more importantly relate to a single underlying “work”. Web architecture as generally practiced does not provide ready mechanisms to aggregate (and compartmentalize) related documents and datasets.\nMy fairly simple mental picture is that the web landscape is rather like the early universe in which energy (and matter) is distributed uniformly and there is little local “intelligence” which is gradually built up through time by matter formation and aggregations of this matter leading to the more familiar “clumpy” universe with its recognizable galaxies, stars and other objects. This “clumpiness” is precisely what we are missing in the scholarly web.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/an-open-pdf/", "title": "An Open PDF?", "subtitle":"", "rank": 1, "lastmod": "2007-01-29", "lastmod_ts": 1170028800, "section": "Blog", "tags": [], "description": "Adobe announces today the following:\n“SAN JOSE, Calif. — Jan. 29, 2007 — Adobe Systems Incorporated (Nasdaq:ADBE) today announced that it intends to release the full Portable Document Format (PDF) 1.7 specification to AIIM, the Enterprise Content Management Association, for the purpose of publication by the International Organization for Standardization (ISO).”\nThe full press release is here.\n(Via Oleg Tkachenko’s Blog.)", "content": "Adobe announces today the following:\n“SAN JOSE, Calif. — Jan. 
29, 2007 — Adobe Systems Incorporated (Nasdaq:ADBE) today announced that it intends to release the full Portable Document Format (PDF) 1.7 specification to AIIM, the Enterprise Content Management Association, for the purpose of publication by the International Organization for Standardization (ISO).”\nThe full press release is here.\n(Via Oleg Tkachenko’s Blog.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/jon-udell-and-dois/", "title": "Jon Udell and DOIs", "subtitle":"", "rank": 1, "lastmod": "2007-01-29", "lastmod_ts": 1170028800, "section": "Blog", "tags": [], "description": "Not to get too self-referential here, but it was very cool to see that Tony Hammond has managed to get Jon Udell writing about DOIs. This was based on a podcast interview with Tony posted on January 26th.", "content": "Not to get too self-referential here, but it was very cool to see that Tony Hammond has managed to get Jon Udell writing about DOIs. This was based on a podcast interview with Tony posted on January 26th.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/w3c-recs-for-xml-eight-of-em/", "title": "W3C Recs for XML - Eight of ‘Em!", "subtitle":"", "rank": 1, "lastmod": "2007-01-25", "lastmod_ts": 1169683200, "section": "Blog", "tags": [], "description": "Although most folks will already know about this it still seems significant enough to blog the arrival of XQuery 1.0, XSLT 2.0, and XPath 2.0. See the W3C Press Release.", "content": "Although most folks will already know about this it still seems significant enough to blog the arrival of XQuery 1.0, XSLT 2.0, and XPath 2.0. See the W3C Press Release.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/use-of-prism-in-rss/", "title": "Use of PRISM in RSS", "subtitle":"", "rank": 1, "lastmod": "2007-01-23", "lastmod_ts": 1169510400, "section": "Blog", "tags": [], "description": "Was rooting around for some information and stumbled across this page which may be of interest:\nhttp://googlereader.blogspot.com/2006/08/namespaced-extensions-in-feeds.html\nNamespaced Extensions in Feeds\nThursday, August 03, 2006\nposted by Mihai Parparita\n“I wrote a small MapReduce program to go over our BigTable and get the top 50 namespaces based on the number of feeds that use them.”\nSeems quite an impressive percentage for PRISM.", "content": "Was rooting around for some information and stumbled across this page which may be of interest:\nhttp://googlereader.blogspot.com/2006/08/namespaced-extensions-in-feeds.html\nNamespaced Extensions in Feeds\nThursday, August 03, 2006\nposted by Mihai Parparita\n“I wrote a small MapReduce program to go over our BigTable and get the top 50 namespaces based on the number of feeds that use them.”\nSeems quite an impressive percentage for PRISM.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/whats-in-a-uri/", "title": "What’s in a URI?", "subtitle":"", "rank": 1, "lastmod": "2007-01-08", "lastmod_ts": 1168214400, "section": "Blog", "tags": [], "description": "First off, a Happy New Year to all!\nA post of mine to the OpenURL list may possibly be of interest. 
Following up the recent W3C TAG (Technical Architecture Group) Finding on “The Use of Metadata in URIs” I pointed out that the TAG do not seem to be aware of OpenURL: which is both a standard prescription for including metadata in URI strings and a US information standard to boot.", "content": "First off, a Happy New Year to all!\nA post of mine to the OpenURL list may possibly be of interest. Following up the recent W3C TAG (Technical Architecture Group) Finding on “The Use of Metadata in URIs” I pointed out that the TAG do not seem to be aware of OpenURL: which is both a standard prescription for including metadata in URI strings and a US information standard to boot.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/archives/2006/", "title": "2006", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Archives", "tags": [], "description": "We are a not-for-profit membership organization for scholarly publishing working to make content easy to find, cite, link, assess, and reuse.", "content": "", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/google-offer-on-journal-archiv-1/", "title": "Google offer on journal archives…", "subtitle":"", "rank": 1, "lastmod": "2006-12-18", "lastmod_ts": 1166400000, "section": "Blog", "tags": [], "description": "Peter Suber reports on his Open Access News that Google is offering to digitize journal backfiles. The full text articles are available as images and for free hosted by Google. The deal is non-exclusive and publishers retain copyright (but many backfiles will be out of copyright) but Google will not supply the publisher with the electronic files - so non-exclusive means that the publisher or someone else could digitize the backfile too (but how to recover the costs when it’s all free in Google?", "content": "Peter Suber reports on his Open Access News that Google is offering to digitize journal backfiles. The full text articles are available as images and for free hosted by Google. The deal is non-exclusive and publishers retain copyright (but many backfiles will be out of copyright) but Google will not supply the publisher with the electronic files - so non-exclusive means that the publisher or someone else could digitize the backfile too (but how to recover the costs when it’s all free in Google?).\nDorothea Salo (recent STM Innovations speaker) over at Caveat Lector provides an excellent review of the Google offer with some good advice for publishers (“always control your bits”).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/exhibit-a/", "title": "Exhibit A", "subtitle":"", "rank": 1, "lastmod": "2006-12-12", "lastmod_ts": 1165881600, "section": "Blog", "tags": [], "description": "MIT’s Simile project has just released Exhibit, a ” lightweight structured data publishing framework.” Read that as “an easy-to-use mashup creation tool.” I have heard that Leigh has already started experimenting with it. I look forward to a writeup soon…", "content": "MIT’s Simile project has just released Exhibit, a ” lightweight structured data publishing framework.” Read that as “an easy-to-use mashup creation tool.” I have heard that Leigh has already started experimenting with it. 
I look forward to a writeup soon…\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/speaking-of-stm-innovations/", "title": "Speaking of STM Innovations", "subtitle":"", "rank": 1, "lastmod": "2006-12-12", "lastmod_ts": 1165881600, "section": "Blog", "tags": [], "description": "The STM Innovations meeting on December 7th in London was excellent. Leigh Dodds has a short summary of the day on his blog. Interestingly, I can’t find anything about the conference on the STM website.", "content": "The STM Innovations meeting on December 7th in London was excellent. Leigh Dodds has a short summary of the day on his blog. Interestingly, I can’t find anything about the conference on the STM website.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/zotero-next-generation-research-tool/", "title": "Zotero - next generation research tool?", "subtitle":"", "rank": 1, "lastmod": "2006-12-12", "lastmod_ts": 1165881600, "section": "Blog", "tags": [], "description": "Zotero was mentioned at the STM Innovations talk in London and it’s worth taking a look. It’s billed as the next generation of bibliographic management software - EndNote but with a lot more included. DOIs should be incorporated into this tool - I couldn’t find any mention of Crossref or DOIs.", "content": "Zotero was mentioned at the STM Innovations talk in London and it’s worth taking a look. It’s billed as the next generation of bibliographic management software - EndNote but with a lot more included. DOIs should be incorporated into this tool - I couldn’t find any mention of Crossref or DOIs.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/and-just-relax/", "title": "And Just Relax", "subtitle":"", "rank": 1, "lastmod": "2006-11-28", "lastmod_ts": 1164672000, "section": "Blog", "tags": [], "description": "Nice piece of advocacy here by Tim Bray for RELAX. High time to see someone standing up for RELAX - a much friendlier XML schema language.", "content": "Nice piece of advocacy here by Tim Bray for RELAX. High time to see someone standing up for RELAX - a much friendlier XML schema language.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/journal-supply-chain-efficienc/", "title": "Journal Supply Chain Efficiency Improvement Pilot", "subtitle":"", "rank": 1, "lastmod": "2006-10-12", "lastmod_ts": 1160611200, "section": "Blog", "tags": [], "description": "This project - https://web.archive.org/web/20061004011422/www.journalsupplychain.com/ - (which needs a new name or clever acronym) has released a Mid Year Report. The pilot is being extended into 2007 and there is clearly value for publishers in having a unique ID for institutions at the licensing unit level. Ringgold, one of the project partners, has a great database with a validated hierarchy of institutions from consortia down to departments - I had a demo at Frankfurt. The report has some info on benefits for publishers and on possible business models. I think a central, neutral registry of unique IDs would be a real benefit to the industry.\n", "content": "This project - https://web.archive.org/web/20061004011422/www.journalsupplychain.com/ - (which needs a new name or clever acronym) has released a Mid Year Report. The pilot is being extended into 2007 and there is clearly value for publishers in having a unique ID for institutions at the licensing unit level. 
Ringgold, one of the project partners, has a great database with a validated hierarchy of institutions from consortia down to departments - I had a demo at Frankfurt. The report has some info on benefits for publishers and on possible business models. I think a central, neutral registry of unique IDs would be a real benefit to the industry.\nFrom the report:\n“Publishers\nCertainly publishers are already using an institutional identifier internally with major\nmarketing and customer communication benefits. The main areas where the proposed\nidentifier could add value to the communication between the publisher and customer\nshould be in areas such as:\n• accurate COUNTER usage reports\n• institutional renewals being unrecognized as such and therefore appearing as\nnew subscriptions\n• easier ability to track institutional end-users of consolidated subscriptions\n(especially those where the agent does not deliver orders via ICEDIS\nstructured FTP with Type 2 addresses incorporated in the complete record)”\nOn business models:\n“A sensible business model would have those that receive the most economic benefit\nfrom a respective service providing a respective level of funding to support costs. It is\nclear that publishers are the primary beneficiaries of the institutional identifier, with\nclear benefits, thereby suggesting they should bear the proportionate cost. Ultimately\nthe subscriber pays anyway; economies are reflected in reduced cost to the subscriber\nin a competitive market.\nOther participants would see service improvements, but not the same clear benefits. It\nwould therefore be reasonable to ask the publishers to bear the major cost of the\nestablishment of such an identifier, and to a certain extent they have already done so\nby subscribing selectively to Ringgold’s existing auditing and database services.\nThe various and relevant business revenue streams might be reflected as follows:\n• Free service: limited search only, with number of searches per day restricted,\npossibility of searchers to edit or input information using a “response form”\ndesigned for such purposes\n• Basic subscription: unlimited search access to the database\n• Database license for hosting services: download of standard selected metadata\n• Database license for publishers: access for download of selected metadata, and\nautomatic receipt of alerts for changes”\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/ruby-makes-alist/", "title": "Ruby Makes A-List", "subtitle":"", "rank": 1, "lastmod": "2006-10-12", "lastmod_ts": 1160611200, "section": "Blog", "tags": [], "description": "Um, well. Seems according to O’Reilly Ruby that Ruby is now a mainstream language.\n“The Ruby programming language just made the A-list on the TIOBE Programming Community Index, and Ruby is now listed as a mainstream programming language. For the past three or four years Ruby has consistently placed in the high 20’s in this index, but is now placed as the 13th most popular programming language!”\n(No language wars, but I am, I will confess, a big admirer - for some time.", "content": "Um, well. Seems according to O’Reilly Ruby that Ruby is now a mainstream language.\n“The Ruby programming language just made the A-list on the TIOBE Programming Community Index, and Ruby is now listed as a mainstream programming language. 
For the past three or four years Ruby has consistently placed in the high 20’s in this index, but is now placed as the 13th most popular programming language!”\n(No language wars, but I am, I will confess, a big admirer - for some time.)\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/stix-and-stones/", "title": "STIX and Stones", "subtitle":"", "rank": 1, "lastmod": "2006-10-05", "lastmod_ts": 1160006400, "section": "Blog", "tags": [], "description": "The STIX Fonts project funded by six major publishers to develop a comprehensive font set for STM publishing has completed its development phase and is about to move into beta testing (planned to commence in late October). Participation is open to all publishers - so now is the time to get involved to ensure your needs are met by this significant activity.", "content": "The STIX Fonts project funded by six major publishers to develop a comprehensive font set for STM publishing has completed its development phase and is about to move into beta testing (planned to commence in late October). Participation is open to all publishers - so now is the time to get involved to ensure your needs are met by this significant activity.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/adsml/", "title": "AdsML", "subtitle":"", "rank": 1, "lastmod": "2006-10-03", "lastmod_ts": 1159833600, "section": "Blog", "tags": [], "description": "A new version of the AdsML Framework 2.0, Release 8 from the AdsML Consortium is now available for download from http://www.adsml.org/2006/announcements/adsml-framework-2-0-release-8-issued/.\nBelow is an extract from the “Vision” document which outlines the broad goals of AdsML.\n", "content": "A new version of the AdsML Framework 2.0, Release 8 from the AdsML Consortium is now available for download from http://www.adsml.org/2006/announcements/adsml-framework-2-0-release-8-issued/.\nBelow is an extract from the “Vision” document which outlines the broad goals of AdsML.\n_“2 The Vision of AdsML\nAccording to its Charter document, the mission of the AdsML Consortium is 3-\nfold:\n• to create an internationally-adopted set of specifications and associated\nbusiness processes for the electronic exchange of business information and\ncontent for advertising\n• to simplify and accelerate business interactions\n• to facilitate use across multiple media in both current and future\nenvironments.\nThis dry, somewhat technical statement masks the simplicity and power of what\nthe AdsML Consortium aims to do. Stated informally, AdsML’s vision is to tie\ntogether all of the parties involved in producing, booking, distributing\nand publishing an ad as if they all used the same software system – but\nwithout actually requiring everyone to switch to a different software system or\nvendor.”_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/blogs-well-duh/", "title": "Blogs, Well Duh!", "subtitle":"", "rank": 1, "lastmod": "2006-10-03", "lastmod_ts": 1159833600, "section": "Blog", "tags": [], "description": "Steve Rubel has a response here to Lexis-Nexis’ survey on consumers’ preferred outlets for breaking news and their rubbishing of blogs as a credible publishing forum. It’s something called, er, the Long Tail by Chris Anderson at Wired Magazine.", "content": "Steve Rubel has a response here to Lexis-Nexis’ survey on consumers’ preferred outlets for breaking news and their rubbishing of blogs as a credible publishing forum. 
It’s something called, er, the Long Tail by Chris Anderson at Wired Magazine.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/couple-web-feeds-to-note/", "title": "Couple Web Feeds to Note", "subtitle":"", "rank": 1, "lastmod": "2006-10-03", "lastmod_ts": 1159833600, "section": "Blog", "tags": [], "description": "Sorry to be somewhat backwards, but just in case any folks didn’t already know there’s a couple new feeds set up recently (or at least they’re newish to me 🙂\nNews from STM (from the STM Association) eFoundations (from Andy Powell and Pete Johnston at Eduserv Foundation in the UK) ", "content": "Sorry to be somewhat backwards, but just in case any folks didn’t already know there’s a couple new feeds set up recently (or at least they’re newish to me 🙂\nNews from STM (from the STM Association) eFoundations (from Andy Powell and Pete Johnston at Eduserv Foundation in the UK) ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/towards-a-science-commons/", "title": "Science Commons", "subtitle":"", "rank": 1, "lastmod": "2006-10-03", "lastmod_ts": 1159833600, "section": "Blog", "tags": [], "description": "Peter Murray-Rust posts on the SPARC-OpenData mailing list about a Commons for Science Conference (Oct. 3/4 in DC). The meeting is invitation-only but the papers are online (see here) and there should be public reports. The meeting underlines the importance of Open Data. There’s a brief abstract below.\n", "content": "Peter Murray-Rust posts on the SPARC-OpenData mailing list about a Commons for Science Conference (Oct. 3/4 in DC). The meeting is invitation-only but the papers are online (see here) and there should be public reports. The meeting underlines the importance of Open Data. There’s a brief abstract below.\n_“The sciences depend on access to and use of factual data. Powered by\ndevelopments in electronic storage and computational capability,\nscientific inquiry today is becoming more data-intensive in almost\nevery discipline. Whether the field is meteorology, genomics,\nmedicine, ecology, or high-energy physics, modern research depends on\nthe availability of multiple databases, drawn from multiple public\nand private sources; and the ability of those diverse databases to be\nsearched, recombined, and processed.”_\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/crosstech/", "title": "CrossTech", "subtitle":"", "rank": 1, "lastmod": "2006-10-02", "lastmod_ts": 1159747200, "section": "Blog", "tags": [], "description": "\u0026lt;span \u0026gt;Just a couple comments about CrossTech:\n\u0026lt;span \u0026gt;1. Shouldn’t it (or couldn’t it) be linked to from the Crossref home page? (This is a public read list after all and so should be made more widely available.) Maybe at some point could be announced on some lists of interest.\n\u0026lt;span \u0026gt;2. Would be very nice to (at least) have a count of membership. I would also like to canvas opinions about making names of the membership public.", "content": "\u0026lt;span \u0026gt;Just a couple comments about CrossTech:\n\u0026lt;span \u0026gt;1. Shouldn’t it (or couldn’t it) be linked to from the Crossref home page? (This is a public read list after all and so should be made more widely available.) Maybe at some point could be announced on some lists of interest.\n\u0026lt;span \u0026gt;2. Would be very nice to (at least) have a count of membership. I would also like to canvas opinions about making names of the membership public. 
What do others think about this?\n\u0026lt;span \u0026gt;At the end of the day though this facility needs to be driven, otherwise it will end up being just another pier over the water (i.e. a ‘disappointed bridge’ And sorry for cribbing again from JAJ).\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/wiley-does-rss-too/", "title": "Wiley Does RSS, Too!", "subtitle":"", "rank": 1, "lastmod": "2006-10-02", "lastmod_ts": 1159747200, "section": "Blog", "tags": [], "description": "This post blogged by Rafael Sidi at EEI. Wiley are now dishing out RSS feeds. And moreover from a cursory inspection (see e.g. here for the American Journal of Human Biology) it seems like they are putting out RSS 1.0 (RDF) and DC/PRISM metadata. Don’t know if there’s anyone from Wiley who can comment on this. But this really is the best news. (Now, who else can we get to join the party.", "content": "This post blogged by Rafael Sidi at EEI. Wiley are now dishing out RSS feeds. And moreover from a cursory inspection (see e.g. here for the American Journal of Human Biology) it seems like they are putting out RSS 1.0 (RDF) and DC/PRISM metadata. Don’t know if there’s anyone from Wiley who can comment on this. But this really is the best news. (Now, who else can we get to join the party. 😉\n[Editor\u0026rsquo;s update: Link to Wiley was broken and removed. January 2021]\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/acap-automated-content-access/", "title": "ACAP - (Automated Content Access Protocol)", "subtitle":"", "rank": 1, "lastmod": "2006-09-29", "lastmod_ts": 1159488000, "section": "Blog", "tags": [], "description": "The World Association of Newspapers is developing ACAP - see the press release which will be machine readable rights information that search engines would read and act on in an automated way. Rightscom is working on the project and the IPA and EPC (European Publishers Council) are involved.\nPublishers presenting a united front to search engines is a good thing but I’m somewhat skeptical about how such a system would work without being overly complicated.", "content": "The World Association of Newspapers is developing ACAP - see the press release which will be machine readable rights information that search engines would read and act on in an automated way. Rightscom is working on the project and the IPA and EPC (European Publishers Council) are involved.\nPublishers presenting a united front to search engines is a good thing but I’m somewhat skeptical about how such a system would work without being overly complicated. However, the idea of getting more information to the search engines when they are crawling sites is a good idea but what will the publishers say to the search engines? If you get much above crawl/don’t crawl then you need a bilateral agreement that has to be negotiated.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/prism-use-cases/", "title": "PRISM Use Cases", "subtitle":"", "rank": 1, "lastmod": "2006-09-25", "lastmod_ts": 1159142400, "section": "Blog", "tags": [], "description": "At last week’s PRISM Face to Face meeting at Time Inc. (NY), Linda Burman raised the question of how (STM) publishers were using PRISM beyond RSS. 
I gave a brief presentation of how we at Nature were using PRISM: RSS (well you all know about that), Connotea (our social bookmarking tool), SRU (Search/Retrieve by URL), and OTMI (Open Text Mining Interface - which we’ll shortly be making available for wider comment).", "content": "At last week’s PRISM Face to Face meeting at Time Inc. (NY), Linda Burman raised the question of how (STM) publishers were using PRISM beyond RSS. I gave a brief presentation of how we at Nature were using PRISM: RSS (well you all know about that), Connotea (our social bookmarking tool), SRU (Search/Retrieve by URL), and OTMI (Open Text Mining Interface - which we’ll shortly be making available for wider comment). Be interested to learn if anyone else is using PRISM in other ways.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/password-control-1/", "title": "password control", "subtitle":"", "rank": 1, "lastmod": "2006-09-11", "lastmod_ts": 1157932800, "section": "Blog", "tags": [], "description": "We’ve taken the top level access control off the site. This means that anyone can read the blog but posting will be limited to those with an account (Crossref members and invited participants). This will make it possible to include the CrossTech feed in your regular RSS reader/aggregator. We’ll soon be posting some general terms and conditions for this blog and also sending a message to all Crossref members about joining so we should see membership (and activity) pick up.", "content": "We’ve taken the top level access control off the site. This means that anyone can read the blog but posting will be limited to those with an account (Crossref members and invited participants). This will make it possible to include the CrossTech feed in your regular RSS reader/aggregator. We’ll soon be posting some general terms and conditions for this blog and also sending a message to all Crossref members about joining so we should see membership (and activity) pick up.\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/post/", "title": "Embedding standardized metadata in HTML", "subtitle":"", "rank": 1, "lastmod": "2006-09-05", "lastmod_ts": 1157414400, "section": "Blog", "tags": [], "description": "On the iSpecies blog Rod Page describes how he extracts DOIs from Google Scholar results - he does use the Crossref OpenURL interface and Connotea to get DOIs too. He also says “DOIs are pretty cool” which is good!\nOn another blog post to SemAnt Page describes how he uses LSIDs and DOIs for Ant literature.\nIt seems that there is more and more of this type of use of the DOI so its great we have the OpenURL interface. Could the type of stuff that Page is doing be helped by publishers embedding metadata in their HTML pages? This could include licensing info and information for search engine crawlers.\n", "content": "On the iSpecies blog Rod Page describes how he extracts DOIs from Google Scholar results - he does use the Crossref OpenURL interface and Connotea to get DOIs too. He also says “DOIs are pretty cool” which is good!\nOn another blog post to SemAnt Page describes how he uses LSIDs and DOIs for Ant literature.\nIt seems that there is more and more of this type of use of the DOI so its great we have the OpenURL interface. Could the type of stuff that Page is doing be helped by publishers embedding metadata in their HTML pages? This could include licensing info and information for search engine crawlers.\nIngenta and BMC embed metadata (are there others?) 
- here is a snippet from a BMC article -\n\u0026lt;cc:Work rdf:about=\"http://0-www-biomedcentral-com.libus.csd.mu.edu/1471-2148/3/16\"\u0026gt; \u0026lt;cc:license rdf:resource=\"http://creativecommons.org/licenses/by/2.0/\"/\u0026gt; \u0026lt;/cc:Work\u0026gt; \u0026lt;cc:License rdf:about=\"http://creativecommons.org/licenses/by/2.0/\"\u0026gt; \u0026lt;cc:permits rdf:resource=\"http://web.resource.org/cc/Reproduction\"/\u0026gt; \u0026lt;cc:permits rdf:resource=\"http://web.resource.org/cc/Distribution\"/\u0026gt; \u0026lt;cc:requires rdf:resource=\"http://web.resource.org/cc/Notice\"/\u0026gt; \u0026lt;cc:requires rdf:resource=\"http://web.resource.org/cc/Attribution\"/\u0026gt; \u0026lt;cc:permits rdf:resource=\"http://web.resource.org/cc/DerivativeWorks\"/\u0026gt; \u0026lt;/cc:License\u0026gt; \u0026lt;item rdf:about=\"http://0-www-biomedcentral-com.libus.csd.mu.edu/1471-2148/3/16\"\u0026gt; \u0026lt;title\u0026gt;Inter-familial relationships of the shorebirds (Aves: Charadriiformes) based on nuclear DNA sequence data\u0026lt;/title\u0026gt; \u0026lt;dc:title\u0026gt;Inter-familial relationships of the shorebirds (Aves: Charadriiformes) based on nuclear DNA sequence data\u0026lt;/dc:title\u0026gt; \u0026lt;dc:creator\u0026gt;Ericson, Per GP\u0026lt;/dc:creator\u0026gt; \u0026lt;dc:creator\u0026gt;Envall, Ida\u0026lt;/dc:creator\u0026gt; \u0026lt;dc:creator\u0026gt;Irestedt, Martin\u0026lt;/dc:creator\u0026gt; \u0026lt;dc:creator\u0026gt;Norman, Janette A\u0026lt;/dc:creator\u0026gt; \u0026lt;dc:identifier\u0026gt;info:doi/10.1186/1471-2148-3-16\u0026lt;/dc:identifier\u0026gt; \u0026lt;dc:identifier\u0026gt;info:pmid/12875664\u0026lt;/dc:identifier\u0026gt; \u0026lt;dc:source\u0026gt;BMC Evolutionary Biology 2003, 3:16\u0026lt;/dc:source\u0026gt; \u0026lt;dc:date\u0026gt;2003-07-23\u0026lt;/dc:date\u0026gt; \u0026lt;prism:publicationName\u0026gt;BMC Evolutionary Biology\u0026lt;/prism:publicationName\u0026gt; \u0026lt;prism:publicationDate\u0026gt;2003-07-23\u0026lt;/prism:publicationDate\u0026gt; \u0026lt;prism:volume\u0026gt;3\u0026lt;/prism:volume\u0026gt; \u0026lt;prism:number\u0026gt;1\u0026lt;/prism:number\u0026gt; \u0026lt;prism:section\u0026gt;Research article\u0026lt;/prism:section\u0026gt; \u0026lt;prism:startingPage\u0026gt;16\u0026lt;/prism:startingPage\u0026gt; \u0026lt;/item\u0026gt; \u0026lt;/rdf:RDF\u0026gt; ", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/password-control/", "title": "password control", "subtitle":"", "rank": 1, "lastmod": "2006-08-29", "lastmod_ts": 1156809600, "section": "Blog", "tags": [], "description": "Hi,\nAt the moment a username and password is needed to read the CrossTech blog in addition to needing an account to post entries. However, it may be better to take off the access control to read the blog - this would mean that services like Technorati and Google could index the blog, which they can’t do at the moment and posting to the blog would be public.\nAs people come on to the list maybe the first thing to comment on is whether we should take off the access control to read the blog.", "content": "Hi,\nAt the moment a username and password is needed to read the CrossTech blog in addition to needing an account to post entries. 
However, it may be better to take off the access control to read the blog - this would mean that services like Technorati and Google could index the blog, which they can’t do at the moment and posting to the blog would be public.\nAs people come on to the list maybe the first thing to comment on is whether we should take off the access control to read the blog. What to people think?\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/semantic-web-google-has-the-an-1/", "title": "SEMANTIC WEB: GOOGLE HAS THE ANSWERS, BUT NOT THE QUESTIONS", "subtitle":"", "rank": 1, "lastmod": "2006-08-22", "lastmod_ts": 1156204800, "section": "Blog", "tags": [], "description": "Posted by special permission from EPS EPS INSIGHTS :: 01/08/2006\nSEMANTIC WEB: GOOGLE HAS THE ANSWERS, BUT NOT THE QUESTIONS\nThe Google v. Semantic Web discussion at the AAAI (American Association for Artificial Intelligence) featured plenty of confrontation and even some rational argument, but it may chiefly be remembered as the day when Google responded to the challenge of semantic web thinking by saying that the semantic web movement did not matter - thereby demonstrating that it did. by David Worlock, Chairman\n", "content": "Posted by special permission from EPS EPS INSIGHTS :: 01/08/2006\nSEMANTIC WEB: GOOGLE HAS THE ANSWERS, BUT NOT THE QUESTIONS\nThe Google v. Semantic Web discussion at the AAAI (American Association for Artificial Intelligence) featured plenty of confrontation and even some rational argument, but it may chiefly be remembered as the day when Google responded to the challenge of semantic web thinking by saying that the semantic web movement did not matter - thereby demonstrating that it did. by David Worlock, Chairman\nAnd we thought that the real battle this year was between net neutrality and the network owners. Or between those who think that click fraud crucially undermines Google, and those who think it doesn’t matter. We were wrong. July’s “Thrilla in Manila” was the discussion between Tim Berners-Lee and the Google Director of Search, Peter Norvig, at the Boston AAAI meeting. And it is an important moment because Berners-Lee’s assertion that the last semantic web building blocks are moving into place comes at exactly the time when Google seems anxious to diminish semantic web searching. It is a good guess that the latter results from a stimulus dictated by threat. A world where keyword searching was reduced to ground floor in a building of many storeys where it may even be an advantage to be a new market entrant with no history is a world where Google would have to progressively re-invent itself. And what is more difficult, in the recent history of these things, than a company created by a technology re-inventing itself in terms of a new technology?\nSo Google’s Boston blows were first of all aimed at the reality test. Like STM publishers pointing to the unlikelihood of academic researchers adding metadata to articles for repository filing, Google pointed to user and webmaster incompetence as the chief reason why semantic interoperability was doomed to a long, slow and painful generative process. If users cannot configure a server or write HTML, how can they understand all this stuff? And then suppliers would slow it down by trying to make it proprietary. And then, machine to machine interoperability would encourage deception (obviously the click fraud business is hurting). 
The answer to the Semantic Web, from a Google stance, thus appears to be: very interesting, but not very soon.\nDancing like a bee and stinging like a butterfly, Tim Berners-Lee clearly had the answers to this. The reason why the semantic web appears threatening to those who have entrenched tenancies in search is probably because it is going quicker than expected. His original ‘layer cake’ diagram, a feature on the conference circuit for five years, could now be completed at all levels. RDF as a data language is now well-established (think of RSS). Ontologies, mostly in narrow vertical domains, are moving into place, though there may be issues about relating them to each other. Query and rules languages now populate the other layers, with one of the former, SPARQL, emerging this year as a W3C candidate recommendation (6 April 2006). In a real sense this is the missing link which makes the Semantic Web a viable proposition, and at the same time joins it to the popular hubbub around Web 2.0. If part of the latter dream is data sourcing from a wide variety of service entities to create new web environments from composite content, then SPARQL sitting on top of RDF looks closest to realising that idea. In an important note in O’Reilly XML.com (SPARQL: Web 2.0 Meet the Semantic Web; 16 September 2005), Kendall Clark wrote “Imagine having one query language, and one client, that lets you arbitrarily slice the data of Flickr, del.icio.us, Google, and your three other favourite Web 2.0 sites, all FOAF files, all of the RSS 1.0 feeds (and, eventually, I suspect, all Atom 1.0 feeds) plus MusicBrainz etc”.\nImagining that might well impel you into the ring with Tim Berners-Lee. If Google has to be re-invented, the process of recognition of change has to be slowed. Denying the speedy reality of the semantic web becomes essential while furious R\u0026amp;D takes place. And content and information service providers are not just spectators of this, but participants too.\n© Electronic Publishing Services\nRelated links\n————————————-\nGoogle :: http://www.google.com\nAAAI :: http://0-www-aaai-org.libus.csd.mu.edu\nKendall Clark - SPARQL: Web 2.0 Meet the Semantic Web ::\n[http://www.oreillynet.com/xml/blog/2005/09/sparql_web_20_meet_the_semanti.ht\nml]1\nW3C :: http://www.w3.org\nFlickr :: http://www.flickr.com\nFOAF :: http://www.foaf-project.org/\nMusicBrainz :: http://musicbrainz.org/\n————————————-\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/blog/welcome-to-crosstech/", "title": "Welcome to CrossTech", "subtitle":"", "rank": 1, "lastmod": "2006-08-22", "lastmod_ts": 1156204800, "section": "Blog", "tags": [], "description": "Welcome to CrossTech, a new access-controlled blog to discuss developments in the online scholarly publishing world. Crossref’s mission is to foster dialogue and information sharing among publishers to enable innovation and collaboration. In order to do things collaboratively, publishers need to share information and communicate in an appropriate manner that takes into account anti-trust and competitive issues. The online publishing world changes quickly and many developments are driven by organizations outside of scholarly publishing so CrossTech provides publishers a “protected” space to discuss issues.\n", "content": "Welcome to CrossTech, a new access-controlled blog to discuss developments in the online scholarly publishing world. 
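Kendall Clark's point about SPARQL as the "one query language" over many data sources is easier to see with a toy example. The sketch below assumes the third-party rdflib package; the two FOAF statements and the example.org URIs are made up purely to show the shape of a query, and are not drawn from any of the services named above.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

# A tiny in-memory RDF graph standing in for data merged from many sources.
g = Graph()
g.add((URIRef("http://example.org/alice"), FOAF.name, Literal("Alice")))
g.add((URIRef("http://example.org/alice"), FOAF.knows, URIRef("http://example.org/bob")))
g.add((URIRef("http://example.org/bob"), FOAF.name, Literal("Bob")))

# One SPARQL query works over the merged graph, wherever the triples came from.
query = """
SELECT ?name ?friendName WHERE {
    ?person foaf:name ?name .
    ?person foaf:knows ?friend .
    ?friend foaf:name ?friendName .
}
"""

for row in g.query(query, initNs={"foaf": FOAF}):
    print(f"{row.name} knows {row.friendName}")  # Alice knows Bob
```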
Crossref’s mission is to foster dialogue and information sharing among publishers to enable innovation and collaboration. In order to do things collaboratively, publishers need to share information and communicate in an appropriate manner that takes into account anti-trust and competitive issues. The online publishing world changes quickly and many developments are driven by organizations outside of scholarly publishing so CrossTech provides publishers a “protected” space to discuss issues.\nNature Publishing Group’s Xanadu blog is the model for CrossTech. Our hope is that CrossTech will build on the idea of Xanadu.\nCrossTech Objectives: To provide a neutral forum where participants can post and discuss technical issues, link to relevant items on the Internet, make others aware of important developments and share and learn from each others’ experiences. CrossTech will promote collaboration and innovation among publishers in an appropriate manner taking account of anti-trust and competitive issues.\nThe main goals of CrossTech are:\nTo provide a common forum for discussing new publishing technologies\nTo develop a publisher technology community\nTo determine common directions for key publishing technologies\nTo foster best practices - and decide the best route to codify or standardize those practices\nTo share experiences\nTo act as an alerting mechanism for publishers to learn of relevant, new technology developments\nPlease let us know if you would like to participate. A username and password will be needed to read, post and comment. To obtain a username and password to post and comment, please email Anna Tolwinska annat@crossref.org.\nWe look forward to having you participate!\n", "headings": [] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/labs/retraction-watch/", "title": "", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Labs", "tags": [], "description": "title: Retraction Watch data in Labs author: Crossref date: 2023-11-22\nNote: We\u0026rsquo;re currently supporting open Retraction Watch data via our Labs API and a .csv file. We plan to model and support it via our REST API in future, but it\u0026rsquo;s supported via Labs while we do that work. Thanks, Labs! How can I find the Retraction Watch data now? The full dataset has been released through Crossref’s Labs API, initially as a .", "content": "title: Retraction Watch data in Labs author: Crossref date: 2023-11-22\nNote: We\u0026rsquo;re currently supporting open Retraction Watch data via our Labs API and a .csv file. We plan to model and support it via our REST API in future, but it\u0026rsquo;s supported via Labs while we do that work. Thanks, Labs! How can I find the Retraction Watch data now? 
The full dataset has been released through Crossref’s Labs API, initially as a .csv file to download directly: https://0-api-labs-crossref-org.libus.csd.mu.edu/data/retractionwatch?name@email.org (add your ‘mailto’).\nThe Crossref Labs API also displays information about retractions in the /works/ route when metadata is available, such as https://0-api-labs-crossref-org.libus.csd.mu.edu/works/10.2147/CMAR.S324920?name@email.org (add your ‘mailto’).\nWe welcome feedback on how we\u0026rsquo;ve rendered the data in the Labs API to help inform how we model it alongside member-provided metadata in future.\n", "headings": ["How can I find the Retraction Watch data now?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/abigail-ijave/", "title": "Abigail Ijave", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Abigail Ijave Member Support Contractor Biography Abigail works with our membership team helping to support our members.", "content": "\rAbigail Ijave Member Support Contractor Biography Abigail works with our membership team helping to support our members.\n", "headings": ["Abigail Ijave","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/adam-buttrick/", "title": "Adam Buttrick", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Adam Buttrick Metadata Manager Biography Adam is a librarian and developer based in Los Angeles, USA. He previously worked as a data developer for the Getty Conservation Institute, as an implementation manager for OCLC’s Metadata Services, and for the University of Michigan’s Art, Architecture, and Engineering Library. As metadata curation lead for the ROR project, Adam coordinates ongoing updates and improvements to the registry and works closely with ROR’s community curation advisory board.", "content": "\rAdam Buttrick Metadata Manager Biography Adam is a librarian and developer based in Los Angeles, USA. He previously worked as a data developer for the Getty Conservation Institute, as an implementation manager for OCLC’s Metadata Services, and for the University of Michigan’s Art, Architecture, and Engineering Library. As metadata curation lead for the ROR project, Adam coordinates ongoing updates and improvements to the registry and works closely with ROR’s community curation advisory board.\nAdam Buttrick's Latest Blog Posts\rMetadata matching: beyond correctness\rDominika Tkaczyk, Wednesday, Jan 8, 2025\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/axeer1ee In our previous entry, we explained that thorough evaluation is key to understanding a matching strategy\u0026rsquo;s performance. While evaluation is what allows us to assess the correctness of matching, choosing the best matching strategy is, unfortunately, not as simple as selecting the one that yields the best matches. Instead, these decisions usually depend on weighing multiple factors based on your particular circumstances. This is true not only for metadata matching, but for many technical choices that require navigating trade-offs.\nHow good is your matching?\rDominika Tkaczyk, Wednesday, Nov 6, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/ief7aibi In our previous blog post in this series, we explained why no metadata matching strategy can return perfect results. 
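Going back to the Retraction Watch data described above, here is a minimal sketch, using only the Python standard library, of fetching the retraction information that the Labs API exposes on the /works/ route. The URL is the one quoted above; replace the placeholder with your own 'mailto' address. The shape of the JSON response is not documented here, so the sketch simply prints what comes back rather than assuming particular field names.

```python
import json
import urllib.request

# The /works/ route quoted above; swap name@email.org for your own 'mailto'.
url = ("https://0-api-labs-crossref-org.libus.csd.mu.edu/works/"
       "10.2147/CMAR.S324920?name@email.org")

with urllib.request.urlopen(url) as response:
    record = json.load(response)

# The response structure isn't described above, so just inspect what is returned.
print(json.dumps(record, indent=2)[:500])
```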
Thankfully, however, this does not mean that it\u0026rsquo;s impossible to know anything about the quality of matching. Indeed, we can (and should!) measure how close (or far) we are from achieving perfection with our matching. Read on to learn how this can be done! How about we start with a quiz?\nThe myth of perfect metadata matching\rDominika Tkaczyk, Wednesday, Aug 28, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/pied3tho In our previous instalments of the blog series about matching (see part 1 and part 2), we explained what metadata matching is, why it is important and described its basic terminology. In this entry, we will discuss a few common beliefs about metadata matching that are often encountered when interacting with users, developers, integrators, and other stakeholders. Spoiler alert: we are calling them myths because these beliefs are not true!\nThe anatomy of metadata matching\rDominika Tkaczyk, Thursday, Jun 27, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/zie7reeg In our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions. Basic terminology Metadata matching is a high-level concept, with many different problems falling into this category.\nMetadata matching 101: what is it and why do we need it?\rDominika Tkaczyk, Thursday, May 16, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/aewi1cai At Crossref and ROR, we develop and run processes that match metadata at scale, creating relationships between millions of entities in the scholarly record. Over the last few years, we\u0026rsquo;ve spent a lot of time diving into details about metadata matching strategies, evaluation, and integration. It is quite possibly our favourite thing to talk and write about! But sometimes it is good to step back and look at the problem from a wider perspective.\nRead all of Adam Buttrick's posts \u0026raquo;\r", "headings": ["Adam Buttrick","Biography","Adam Buttrick's Latest Blog Posts","Metadata matching: beyond correctness","How good is your matching?","The myth of perfect metadata matching","The anatomy of metadata matching","Metadata matching 101: what is it and why do we need it?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/amanda-bartell/", "title": "Amanda Bartell", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Amanda Bartell Director of Membership Biography Amanda joined Crossref in late 2017 to help make our member and user experience as smooth and delightful as possible. She previously worked in educational publishing for over 18 years at the intersection between digital marketing and digital product support and has also tutored in marketing for the Oxford College of Marketing. Outside of work she volunteers as a sighted guide for the visually impaired, and is currently learning to Flamenco dance, because why not?", "content": "\rAmanda Bartell Director of Membership Biography Amanda joined Crossref in late 2017 to help make our member and user experience as smooth and delightful as possible. 
She previously worked in educational publishing for over 18 years at the intersection between digital marketing and digital product support and has also tutored in marketing for the Oxford College of Marketing. Outside of work she volunteers as a sighted guide for the visually impaired, and is currently learning to Flamenco dance, because why not? ¡Olé!\nX @abartell Amanda Bartell's Latest Blog Posts\rUpdate on the Resourcing Crossref for Future Sustainability research\rKornelia Korzec, Monday, Oct 28, 2024\nIn StrategyFees Leave a comment\nWe’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes. Crossref is in a good financial position with our current fees, which haven’t increased in 20 years. This project is seeking to future-proof our fees by: Making fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above.\nISR part four: Working together as a community to preserve the integrity of the scholarly record\rAmanda Bartell, Wednesday, Apr 26, 2023\nIn Research IntegrityTrustworthinessStrategy Leave a comment\nWe\u0026rsquo;ve been spending some time speaking to the community about our role in research integrity, and particularly the integrity of the scholarly record. In this blog, we\u0026rsquo;ll be sharing what we\u0026rsquo;ve discovered, and what we\u0026rsquo;ve been up to in this area. We’ve discussed in our previous posts in the “Integrity of the Scholarly Record (ISR)” series that the infrastructure Crossref builds and operates (together with our partners and integrators) captures and preserves the scholarly record, making it openly available for humans and machines through metadata and relationships about all research activity.\nISR part two: How our membership approach helps to preserve the integrity of the scholarly record\rAmanda Bartell, Monday, Oct 10, 2022\nIn Research IntegrityTrustworthinessMembershipOperationsGovernance Leave a comment\nIn part one of our series on the Integrity of the Scholarly Record (ISR), we talked about how the metadata that our members register with us helps to preserve the integrity of the record, and in particular how \u0026rsquo;trust signals\u0026rsquo; in the metadata, combined with relationships and context, can help the community assess the work. In this second blog, we describe membership eligibility and what you can and cannot tell simply from the fact that an organisation is a Crossref member; why increasing participation and reducing barriers actually helps to enhance the integrity of the scholarly record; and how we handle the very small number of cases where there may be a question mark.\nISR part one: What is our role in preserving the integrity of the scholarly record?\rAmanda Bartell, Thursday, Sep 22, 2022\nIn Research IntegrityTrustworthinessStrategy Leave a comment\nThe integrity of the scholarly record is an essential aspect of research integrity. Every initiative and service that we have launched since our founding has been focused on documenting and clarifying the scholarly record in an open, machine-actionable and scalable form. All of this has been done to make it easier for the community to assess the trustworthiness of scholarly outputs. 
Now that the scholarly record itself has evolved beyond the published outputs at the end of the research process – to include both the elements of that process and its aftermath – preserving its integrity poses new challenges that we strive to meet\u0026hellip; we are reaching out to the community to help inform these efforts.\nFlies in your metadata (ointment)\rIsaac Farley, Monday, Jul 25, 2022\nIn MetadataContent RegistrationResearch Nexus Leave a comment\nQuality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, these create problems that get compounded down the line. No wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages we receive into the technical support team (we encourage you to continue to report these metadata errors). We make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services - thus, we all want it to be accurate.\nRead all of Amanda Bartell's posts \u0026raquo;\r", "headings": ["Amanda Bartell","Biography","X","Amanda Bartell's Latest Blog Posts","Update on the Resourcing Crossref for Future Sustainability research","ISR part four: Working together as a community to preserve the integrity of the scholarly record","ISR part two: How our membership approach helps to preserve the integrity of the scholarly record","ISR part one: What is our role in preserving the integrity of the scholarly record?","Flies in your metadata (ointment)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/amanda-french/", "title": "Amanda French", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Amanda French Technical Community Manager Biography Amanda French, Technical Community Manager for ROR, is a well-known community manager and project director in the digital humanities and scholarly communication sphere. Most recently, she served as Community Lead at The COVID Tracking Project at The Atlantic, where she helped build and nurture a community of more than 800 volunteers dedicated to collecting and publishing key COVID-19 data. Prior to that, she directed the Mellon-funded project \u0026lsquo;Resilient Networks for Inclusive Digital Humanities\u0026rsquo; at GWU Libraries, directed the Digital Research Services unit at Virginia Tech Libraries, led the THATCamp unconference initiative at GMU\u0026rsquo;s Roy Rosenzweig Center for History and New Media, and was a member of the first cohort of CLIR Postdoctoral Fellows.", "content": "\rAmanda French Technical Community Manager Biography Amanda French, Technical Community Manager for ROR, is a well-known community manager and project director in the digital humanities and scholarly communication sphere. Most recently, she served as Community Lead at The COVID Tracking Project at The Atlantic, where she helped build and nurture a community of more than 800 volunteers dedicated to collecting and publishing key COVID-19 data. Prior to that, she directed the Mellon-funded project \u0026lsquo;Resilient Networks for Inclusive Digital Humanities\u0026rsquo; at GWU Libraries, directed the Digital Research Services unit at Virginia Tech Libraries, led the THATCamp unconference initiative at GMU\u0026rsquo;s Roy Rosenzweig Center for History and New Media, and was a member of the first cohort of CLIR Postdoctoral Fellows. 
She often speaks and sometimes writes about openness in scholarly publishing, crowdsourcing, Agile, digital humanities, and related topics. In her free time she plays guitar, plants pollinator-friendly flowers, and enjoys the company of one dog and three cats.\nX @amandafrench Amanda French's Latest Blog Posts\rOpen Funder Registry to transition into Research Organization Registry (ROR)\rAmanda French, Thursday, Sep 7, 2023\nIn Open Funder RegistryRORIdentifiersMetadata Leave a comment\nToday, we are announcing a long-term plan to deprecate the Open Funder Registry. For some time, we have understood that there is significant overlap between the Funder Registry and the Research Organization Registry (ROR), and funders and publishers have been asking us whether they should use Funder IDs or ROR IDs to identify funders. It has therefore become clear that merging the two registries will make workflows more efficient and less confusing for all concerned.\nHow I think about ROR as infrastructure\rAmanda French, Friday, Jul 8, 2022\nIn RORStaffResearch Nexus Leave a comment\nThe other day I was out and about and got into a conversation with someone who asked me about my doctoral work in English literature. I\u0026rsquo;ve had the same conversation many times: I tell someone (only if they ask!) that my dissertation was a history of the villanelle, and then they cheerfully admit that they don\u0026rsquo;t know what a villanelle is, and then I ask them if they\u0026rsquo;re familiar with Dylan Thomas\u0026rsquo;s poem \u0026ldquo;Do not go gentle into that good night.\nRead all of Amanda French's posts \u0026raquo;\r", "headings": ["Amanda French","Biography","X","Amanda French's Latest Blog Posts","Open Funder Registry to transition into Research Organization Registry (ROR)","How I think about ROR as infrastructure"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/amy-bosworth/", "title": "Amy Bosworth", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Amy Bosworth Accounts Receivable Manager Biography Amy Bosworth has been with Crossref since 2011 and is the Assistant Accounts Receivable Manager. She is responsible for entering all receivables and assisting our members with their billing questions. When Amy isn’t working, she can most likely be found at a hockey rink or a soccer field cheering on her two children.\nX @AmyB0219 Amy Bosworth's Latest Blog Posts\rIt’s not about the money, money, money.", "content": "\rAmy Bosworth Accounts Receivable Manager Biography Amy Bosworth has been with Crossref since 2011 and is the Assistant Accounts Receivable Manager. She is responsible for entering all receivables and assisting our members with their billing questions. When Amy isn’t working, she can most likely be found at a hockey rink or a soccer field cheering on her two children.\nX @AmyB0219 Amy Bosworth's Latest Blog Posts\rIt’s not about the money, money, money.\rAmy Bosworth, Thursday, Oct 18, 2018\nIn MembershipContent RegistrationCommunity Leave a comment\nBut actually, sometimes it is about the money. As a not-for-profit membership organization that is obsessed with persistence, we have a duty to remain sustainable and manage our finances in a responsible way. Our annual audit is incredibly thorough, and our outside auditors and Board-based Audit committee consistently report that we’re in good shape. 
Our Membership \u0026amp; Fees committee regularly reviews both membership fees and Content Registration fees for a growing range of research outputs.\nRead all of Amy Bosworth's posts \u0026raquo;\r", "headings": ["Amy Bosworth","Biography","X","Amy Bosworth's Latest Blog Posts","It’s not about the money, money, money."] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/andrew-gilmartin/", "title": "Andrew Gilmartin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Andrew Gilmartin Technical Staff Biography Andrew has moved on from Crossref. Andrew Gilmartin was part of the US team responsible for implementing and overseeing the query and deposit systems. As a senior member of the technical staff, he helped plan the design and implementation of Crossref\u0026rsquo;s ever-evolving and growing services.\nAndrew Gilmartin's Latest Blog Posts\rDOIs and matching regular expressions\rAndrew Gilmartin, Tuesday, Aug 11, 2015\nIn IdentifiersProgramming Leave a comment", "content": "\rAndrew Gilmartin Technical Staff Biography Andrew has moved on from Crossref. Andrew Gilmartin was part of the US team responsible for implementing and overseeing the query and deposit systems. As a senior member of the technical staff, he helped plan the design and implementation of Crossref\u0026rsquo;s ever-evolving and growing services.\nAndrew Gilmartin's Latest Blog Posts\rDOIs and matching regular expressions\rAndrew Gilmartin, Tuesday, Aug 11, 2015\nIn IdentifiersProgramming Leave a comment\nWe regularly see developers using regular expressions to validate or scrape for DOIs. For modern Crossref DOIs the regular expression is short\n/^10.\\d{4,9}/[-._;()/:A-Z0-9]+$/i\nFor the 74.9M DOIs we have seen this matches 74.4M of them. If you need to use only one pattern then use this one.\nRead all of Andrew Gilmartin's posts \u0026raquo;\r", "headings": ["Andrew Gilmartin","Biography","Andrew Gilmartin's Latest Blog Posts","DOIs and matching regular expressions"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/anna-tolwinska/", "title": "Anna Tolwinska", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Anna Tolwinska Member Experience Manager Biography Anna has moved on from Crossref. Anna Tolwinska was responsible for helping members understand their participation and opportunities with Crossref. She had been with Crossref for over ten years in both marketing and member outreach roles. When she wasn\u0026rsquo;t reading or answering questions about membership and metadata, Anna enjoyed going camping with her family and watching a good film.\nX @atolwinska ORCID iD 0000-0001-5088-8915 Anna Tolwinska's Latest Blog Posts\rCrossref Conversations: audio blog about helping open science\rRosa Morais Clark, Friday, Aug 20, 2021", "content": "\rAnna Tolwinska Member Experience Manager Biography Anna has moved on from Crossref. Anna Tolwinska was responsible for helping members understand their participation and opportunities with Crossref. She had been with Crossref for over ten years in both marketing and member outreach roles. 
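The teaser above quotes the regular expression Andrew recommends for modern Crossref DOIs. A minimal sketch of applying that same pattern in Python follows; the first two sample strings are DOIs that appear elsewhere on this page, the others are invented, and, as the post notes, the pattern matches most but not all registered DOIs.

```python
import re

# The pattern quoted in the post above, /^10.\d{4,9}/[-._;()/:A-Z0-9]+$/i, in Python syntax.
MODERN_DOI = re.compile(r"^10.\d{4,9}/[-._;()/:A-Z0-9]+$", re.IGNORECASE)

samples = [
    "10.1186/1471-2148-3-16",       # matches
    "10.13003/axeer1ee",            # matches
    "doi:10.1186/1471-2148-3-16",   # prefix makes the anchored match fail
    "not-a-doi",                    # no match
]

for s in samples:
    print(f"{s!r:35} -> {'DOI-shaped' if MODERN_DOI.match(s) else 'no match'}")
```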
When she wasn\u0026rsquo;t reading or answering questions about membership and metadata, Anna enjoyed going camping with her family and watching a good film.\nX @atolwinska ORCID iD 0000-0001-5088-8915 Anna Tolwinska's Latest Blog Posts\rCrossref Conversations: audio blog about helping open science\rRosa Morais Clark, Friday, Aug 20, 2021\nIn Community Leave a comment\nCrossref Conversations is an audio blog we\u0026rsquo;re trying out that will cover various topics important to our community. This conversation is between colleagues Anna Tolwinska and Rosa Morais Clark, discussing how we can make research happen faster, with fewer hurdles, and how Crossref can help. Our members have been asking us how Crossref can support open science, and we have a few insights to share. So we invite you to have a listen.\n3,2,1… it’s ‘lift-off’ for Participation Reports\rAnna Tolwinska, Wednesday, Aug 1, 2018\nIn ParticipationMember BriefingMetadataBest Practices Leave a comment\nMetadata is at the heart of all our services. With a growing range of members participating in our community—often compiling or depositing metadata on behalf of each other—the need to educate and express obligations and best practice has increased. In addition, we’ve seen more and more researchers and tools making use of our APIs to harvest, analyze and re-purpose the metadata our members register, so we’ve been very aware of the need to be more explicit about what this metadata enables, why, how, and for whom.\nLinking references is different from registering references\rAnna Tolwinska, Wednesday, May 30, 2018\nIn ReferencesReference LinkingCited-ByCitationLinking Leave a comment\nFrom time to time we get questions from members asking what the difference is between reference linking and registering references as part the Content Registration process. Here\u0026rsquo;s the distinction: Linking out to other articles from your reference lists is a key part of being a Crossref members - it\u0026rsquo;s an obligation in the membership agreement and it levels the playing field when all members link their references to one another.\nHow good is your metadata?\rKirsty Meddings, Thursday, Apr 26, 2018\nIn Member BriefingParticipationMetadataContent RegistrationResearch Nexus Leave a comment\nExciting news! We are getting very close to the beta release of a new tool to publicly show metadata coverage. As members register their content with us they also add additional information which gives context for other members and for services that help e.g. discovery or analytics.\nRicher metadata makes content useful. Participation reports will give\u0026mdash;for the first time\u0026mdash;a clear picture for anyone to see the metadata Crossref has. This is data that\u0026rsquo;s long been available via our Public REST API, now visualized.\nGetting Started with Crossref DOIs, courtesy of Scholastica\rAnna Tolwinska, Monday, Apr 25, 2016\nIn DOIsIdentifiersLinkingMetadataPersistence Leave a comment\nI had a great chat with Danielle Padula of Scholastica, a journals platform with an integrated peer-review process that was founded in 2011. We talked about how journals get started with Crossref, and she turned our conversation into a blog post that describes the steps to begin registering content and depositing metadata with us. 
Since the result is a really useful description of our new member on-boarding process, I want to share it with you here as well.\nRead all of Anna Tolwinska's posts \u0026raquo;\r", "headings": ["Anna Tolwinska","Biography","X","ORCID iD","Anna Tolwinska's Latest Blog Posts","Crossref Conversations: audio blog about helping open science","3,2,1… it’s ‘lift-off’ for Participation Reports","Linking references is different from registering references","How good is your metadata?","Getting Started with Crossref DOIs, courtesy of Scholastica"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/barbara-cruz/", "title": "Barbara Cruz", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Barbara Cruz Accounting Assistant Biography Barbara has moved on from Crossref. Barbara Cruz worked in the finance department as an Accounting Assistant. She joined Crossref in 2018 to assist with customer inquiries and administrative duties. She had 15 years of experience in administration, accounts payable (AP), accounts receivable (AR), and payroll. She liked music, dance, and was a real people person who loved to help. She spent her free time with her family dog, baking cookies, swimming, taking road trips, going to the movies, and having fun.", "content": "\rBarbara Cruz Accounting Assistant Biography Barbara has moved on from Crossref. Barbara Cruz worked in the finance department as an Accounting Assistant. She joined Crossref in 2018 to assist with customer inquiries and administrative duties. She had 15 years of experience in administration, accounts payable (AP), accounts receivable (AR), and payroll. She liked music, dance, and was a real people person who loved to help. She spent her free time with her family dog, baking cookies, swimming, taking road trips, going to the movies, and having fun.\n", "headings": ["Barbara Cruz","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/bryan-vickery/", "title": "Bryan Vickery", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Bryan Vickery Director of Product Biography Bryan joined Crossref in August 2019 as Director of Product. He has more than 20 years experience in scholarly communications including online communities, A\u0026amp;I databases, preprint servers, open access publishing, institutional repositories, peer review and production systems at Taylor \u0026amp; Francis, BioMedCentral/Springer and Elsevier. Prior to joining Crossref he was Managing Director, Research Services, at Taylor \u0026amp; Francis where he oversaw the launch of wizdom.", "content": "\rBryan Vickery Director of Product Biography Bryan joined Crossref in August 2019 as Director of Product. He has more than 20 years experience in scholarly communications including online communities, A\u0026amp;I databases, preprint servers, open access publishing, institutional repositories, peer review and production systems at Taylor \u0026amp; Francis, BioMedCentral/Springer and Elsevier. Prior to joining Crossref he was Managing Director, Research Services, at Taylor \u0026amp; Francis where he oversaw the launch of wizdom.ai. 
He loves watching rugby, hiking, and music festivals\u0026hellip; So pretty much anything that involves mud.\nX @Vickerbry Bryan Vickery's Latest Blog Posts\rOpen Abstracts: Where are we?\rLudo Waltman, Friday, Sep 25, 2020\nIn MetadataContent RegistrationCollaborationCommunity Leave a comment\nThe Initiative for Open Abstracts (I4OA) launched this week. The initiative calls on scholarly publishers to make the abstracts of their publications openly available. More specifically, publishers that work with Crossref to register DOIs for their publications are requested to include abstracts in the metadata they deposit in Crossref. These abstracts will then be made openly available by Crossref. 39 publishers have already agreed to join I4OA and to open their abstracts.\nEvolving our support for text-and-data mining\rBryan Vickery, Friday, Aug 21, 2020\nIn Text and Data Mining Leave a comment\nMany researchers want to carry out analysis and extraction of information from large sets of data, such as journal articles and other scholarly content. Methods such as screen-scraping are error-prone, place too much strain on content sites and may be unrepeatable or break if site layouts change. Providing researchers with automated access to the full-text content via DOIs and Crossref metadata reduces these problems, allowing for easy deduplication and reproducibility. Supporting text and data mining echoes our mission to make research outputs easy to find, cite, link, assess, and reuse.\nEvents got the better of us\rBryan Vickery, Friday, Mar 27, 2020\nIn IdentifiersMetadataCitationCollaborationDataEvent Data Leave a comment\nPublisher metadata is one side of the story surrounding research outputs, but conversations, connections and activities that build further around scholarly research, takes place all over the web. We built Event Data to capture, record and make available these \u0026lsquo;Events\u0026rsquo; –– providing open, transparent, and traceable information about the provenance and context of every Event. Events are comments, links, shares, bookmarks, references, etc.\nMetadata Manager Update\rBryan Vickery, Tuesday, Mar 24, 2020\nIn MetadataContent RegistrationIdentifiers Leave a comment\nAt Crossref, we\u0026rsquo;re committed to providing a simple, usable, efficient and scalable web-based tool for registering content by manually making deposits of, and updates to, metadata records. Last year we launched Metadata Manager in beta for journal deposits to help us explore this further. Since then, many members have used the tool and helped us better understand their needs.\nIntroducing our new Director of Product\rEd Pentz, Monday, Aug 19, 2019\nIn CommunityProductStaff Leave a comment\nI\u0026rsquo;m happy to announce that Bryan Vickery has joined Crossref today as our new Director of Product. Bryan has extensive experience developing products and services at publishers such as Taylor \u0026amp; Francis, where he led the creation of the open-access platform Cogent OA. 
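The text-and-data-mining post above is about giving researchers automated access to full text via DOIs and Crossref metadata. The sketch below shows what that can look like from a miner's side, using only the standard library; it assumes the public REST API's /works/{doi} route and its 'link' and 'license' fields, which are not spelled out in the post itself, and the DOI is the BMC article quoted earlier on this page.

```python
import json
import urllib.request

# Assumed route on the public REST API; the DOI is grounded in the BMC example above.
doi = "10.1186/1471-2148-3-16"
url = f"https://api.crossref.org/works/{doi}"

with urllib.request.urlopen(url) as response:
    message = json.load(response)["message"]

# Full-text links and licenses, where members have registered them, are what
# text-and-data-mining tools look for; these field names are assumptions.
for link in message.get("link", []):
    print(link.get("content-type"), link.get("URL"))
for lic in message.get("license", []):
    print("license:", lic.get("URL"))
```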
Most recently he was Managing Director of Research Services at T\u0026amp;F, including Wizdom.ai after it was acquired.\nRead all of Bryan Vickery's posts \u0026raquo;\r", "headings": ["Bryan Vickery","Biography","X","Bryan Vickery's Latest Blog Posts","Open Abstracts: Where are we?","Evolving our support for text-and-data mining","Events got the better of us","Metadata Manager Update","Introducing our new Director of Product"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/carlos-del-ojo-elias/", "title": "Carlos del Ojo Elias", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Carlos del Ojo Elias Senior Software Developer Biography Carlos started at Crossref in 2020, he has a computer science and bioinformatics background. He has been working in several fields ranging from digital security to microbial genomics. He is interested in data analysis and visualization. He enjoys playing music and woodworking.", "content": "\rCarlos del Ojo Elias Senior Software Developer Biography Carlos started at Crossref in 2020, he has a computer science and bioinformatics background. He has been working in several fields ranging from digital security to microbial genomics. He is interested in data analysis and visualization. He enjoys playing music and woodworking.\n", "headings": ["Carlos del Ojo Elias","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/christine-buske/", "title": "Christine Buske", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Christine Buske Product Manager Biography Christine joined Crossref in early 2018, initially focusing on all aspects of Event Data. Her career in scholarly technology started during the final stages of her PhD in Neuroscience at the University of Toronto. After earning her degree, she pursued a career in technology and product management at SpringerNature and Elsevier, contributing to the reference management products Papers and Mendeley, respectively. Her professional background prior to product management included scholarly outreach, marketing, and business development.", "content": "\rChristine Buske Product Manager Biography Christine joined Crossref in early 2018, initially focusing on all aspects of Event Data. Her career in scholarly technology started during the final stages of her PhD in Neuroscience at the University of Toronto. After earning her degree, she pursued a career in technology and product management at SpringerNature and Elsevier, contributing to the reference management products Papers and Mendeley, respectively. Her professional background prior to product management included scholarly outreach, marketing, and business development. Outside of her interests in scholarly communications, workflows, and data, Christine was passionate about traveling, wildlife photography, and was (re)learning salsa dancing. Sadly, Christine passed away in 2019. You can find a homage to her life in, In Memory of Christine Hone.\nX christine_phd ORCID iD 0000-0002-3372-3702 Christine Buske's Latest Blog Posts\rEvent Data is production ready\rChristine Buske, Wednesday, Sep 12, 2018\nIn CitationCollaborationDataEvent DataIdentifiers Leave a comment\nWe’ve been working on Event Data for some time now, and in the spirit of openness, much of that story has already been shared with the community. 
In fact, when I recently joined as Crossref’s Product Manager for Event Data, I jumped onto an already fast moving train—headed for a bright horizon.\nHello, meet Event Data Version 1, and new Product Manager\rChristine Buske, Thursday, Mar 29, 2018\nIn CitationCollaborationDataEvent DataIdentifiers Leave a comment\nI joined Crossref only a few weeks ago, and have happily thrown myself into the world of Event Data as the service’s new product manager. In my first week, a lot of time was spent discussing the ins and outs of Event Data. This learning process made me very much feel like you might when you’ve just bought a house, and you’re studying the blueprints while also planning the house-warming party.\nRead all of Christine Buske's posts \u0026raquo;\r", "headings": ["Christine Buske","Biography","X","ORCID iD","Christine Buske's Latest Blog Posts","Event Data is production ready","Hello, meet Event Data Version 1, and new Product Manager"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/christine-cormack-wood/", "title": "Christine Cormack Wood", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Christine Cormack Wood Head of Marketing \u0026amp; Comms Biography Chrissie has moved on from Crossref. Based in Oxford, Chrissie was responsible for all Marketing and Communications activity at Crossref. Before joining Crossref, she had worked with a number of global multinational corporations, bringing business strategies to life through innovative and integrated omni-channel marketing. Having lived and worked across Asia and the Middle East for almost twenty years, she possessed extensive global experience and a truly international outlook.", "content": "\rChristine Cormack Wood Head of Marketing \u0026amp; Comms Biography Chrissie has moved on from Crossref. Based in Oxford, Chrissie was responsible for all Marketing and Communications activity at Crossref. Before joining Crossref, she had worked with a number of global multinational corporations, bringing business strategies to life through innovative and integrated omni-channel marketing. Having lived and worked across Asia and the Middle East for almost twenty years, she possessed extensive global experience and a truly international outlook. With a savvy approach to tactical planning, Chrissie happily brought her skills to support the Member and Community Outreach team in meeting Crossref\u0026rsquo;s educational and awareness objectives\nX @WoodCormack ORCID iD 0000-0002-8104-8078 Christine Cormack Wood's Latest Blog Posts\rHow Crossref metadata is helping bring migration research in Europe under one roof\rChristine Cormack Wood, Tuesday, Jan 29, 2019\nIn APIsAPI Case Study Leave a comment\nConflict, instability and economic conditions are just some of the factors driving new migration into Europe—and European policy makers are in dispute about how to manage and cope with the implications. Everyone agrees that in order to respond to the challenges and opportunities of migration, a better understanding is required of what drives migration towards Europe, what trajectories and infrastructures facilitate migration, and what the key characteristics of different migrant flows are, in order to inform and improve policy making.\nUsing the Crossref REST API. 
Part 12 (with Europe PMC)\rChristine Cormack Wood, Wednesday, Oct 10, 2018\nIn APIsAPI Case StudyPreprints Leave a comment\nAs part of our blog series highlighting some of the tools and services that use our API, we asked Michael Parkin\u0026mdash;Data Scientist at the European Bioinformatics Institute\u0026mdash;a few questions about how Europe PMC uses our metadata where preprints are concerned.\nA wrap up of the Crossref blog series for SciELO\rChristine Cormack Wood, Friday, Oct 5, 2018\nIn CommunityDOIRecord TypesSponsorshipMembershipTranslations Leave a comment\nCrossref member SciELO (Scientific Electronic Library Online), based in Brazil, celebrated two decades of operation last week with a three-day event The SciELO 20 Years Conference.\nJoin us in Toronto this November for LIVE18\rChristine Cormack Wood, Tuesday, Sep 25, 2018\nIn CommunityCrossref LIVEAnnual Meeting Leave a comment\nLIVE18, your Crossref annual meeting, is fast approaching! We’re looking forward to welcoming everyone in Toronto, November 13-14.\nUsing the Crossref REST API. Part 11 (with MDPI/Scilit)\rChristine Cormack Wood, Tuesday, Sep 18, 2018\nIn APIsIdentifiersInteroperabilityAPI Case Study Leave a comment\nContinuing our blog series highlighting the uses of Crossref metadata, we talked to Martyn Rittman and Bastien Latard who tell us about themselves, MDPI and Scilit, and how they use Crossref metadata.\nRead all of Christine Cormack Wood's posts \u0026raquo;\r", "headings": ["Christine Cormack Wood","Biography","X","ORCID iD","Christine Cormack Wood's Latest Blog Posts","How Crossref metadata is helping bring migration research in Europe under one roof","Using the Crossref REST API. Part 12 (with Europe PMC)","A wrap up of the Crossref blog series for SciELO","Join us in Toronto this November for LIVE18","Using the Crossref REST API. Part 11 (with MDPI/Scilit)"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/chuck-koscher/", "title": "Chuck Koscher", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Chuck Koscher Senior Advisor Biography Chuck has moved on from Crossref. Chuck Koscher joined Crossref in 2002. His primary responsibility had been the development and operation of Crossref’s core services and technical infrastructure. As a senior staff member, he also contributed to the definition of Crossref’s mission and the expansion of its services, such as the launch of the Open Funder Registry (formerly known as FundRef). His role included the management of technical support and back-end business operations.", "content": "\rChuck Koscher Senior Advisor Biography Chuck has moved on from Crossref. Chuck Koscher joined Crossref in 2002. His primary responsibility had been the development and operation of Crossref’s core services and technical infrastructure. As a senior staff member, he also contributed to the definition of Crossref’s mission and the expansion of its services, such as the launch of the Open Funder Registry (formerly known as FundRef). His role included the management of technical support and back-end business operations. Chuck and his team interfaced directly with members in dealing with issues affected by new or evolving industry practices, such as those involving non-journal content like books, standards, and databases. 
Chuck had been active within the industry, having served 9 years on the NISO board of directors, and participated in initiatives such as the NISO/NFAIS Best Practice in Journal Publishing and NISO’s Supplemental Material Working Group. Prior to Crossref, Chuck had over 20 years of software engineering experience, primarily in the aerospace industry.\nX @ckoscher ORCID iD 0000-0003-2181-9595 Chuck Koscher's Latest Blog Posts\rPreprints and Crossref’s metadata services\rChuck Koscher, Monday, Aug 29, 2016\nIn Preprints Leave a comment\nWe’re putting the final touches on the changes that will allow preprint publishers to register their metadata with Crossref and assign DOIs. These changes support Crossref’s CitedBy linking between the preprint and other scholarly publications (journal articles, books, conference proceedings). Full preprint support will be released over the next few weeks.\nA fairer approach to waiting for deposits\rChuck Koscher, Wednesday, Jul 20, 2016\nIn Content RegistrationCrossref SystemMetadataNews Release Leave a comment\nIf you ever see me in the checkout line at some store do not ever get in the line I’m in. It is always the absolute slowest. Crossref’s metadata system has a sort of checkout line, when members send in their data they got processed essentially in a first come first served basis. It’s called the deposit queue. We had controls to prevent anyone from monopolizing the queue and ways to jump forward in the queue but our primary goal was to give everyone a fair shot at getting processed as soon as possible.\nCrossref OpenURL resolver\rChuck Koscher, Tuesday, Jul 7, 2009\nIn OpenURL Leave a comment\nA new version of our OpenURL resolver was deployed July 2 which should handle higher traffic (e.g. we have re-enable the LibX plug-in ) Unfortunately there were a few hick ups with the new version which I believe are now corrected (a character encoding bug and a XML structure translation problem). Sorry for any inconvenience.\nCrossref’s OpenURL query interface\rChuck Koscher, Wednesday, May 6, 2009\nIn OpenURLAPIs Leave a comment\nOver the past two weeks we’ve focused on our OpenURL query interface with the goal being to improve its reliability. I’d like to mention some things we’ve done. We now require an OpenURL account to use this interface (see the registration page) . This account is still free, there are no fixed usage limits, and the terms of use have been greatly simplified. Resources have been re-arranged dedicating more horse-power to the OpenURL function.\nObject Reuse and Exchange\rChuck Koscher, Wednesday, Mar 5, 2008\nIn Standards Leave a comment\nOn March 3rd the Open Archives Initiative held a roll out meeting of the first alpha release of the ORE specification (http://www.openarchives.org/ore/) . According to Herbert Van de Sompel a beta release is planned for late March / early April and a 1.0 release targeted for September. The presentations focused on the aggregation concepts behind ORE and described an ATOM based implementation. 
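Chuck's OpenURL posts above note that the query interface requires a (free) OpenURL account. A minimal sketch of building such a query with the standard library follows; the host and the parameter names (pid for the registered account email, redirect=false to get metadata back instead of a redirect) are assumptions for illustration rather than a confirmed specification, so check the current documentation before relying on them. The bibliographic values are taken from the BMC article quoted earlier on this page.

```python
from urllib.parse import urlencode

# Parameter names here are illustrative assumptions, not a confirmed spec.
params = {
    "pid": "name@email.org",          # the registered OpenURL account email
    "aulast": "Ericson",              # first author surname
    "title": "BMC Evolutionary Biology",
    "volume": "3",
    "spage": "16",
    "date": "2003",
    "redirect": "false",              # ask for metadata back rather than a redirect
}

query_url = "https://doi.crossref.org/openurl?" + urlencode(params)
print(query_url)
```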
ORE is the second project from the OAI but unlike its sibling PMH it is not exclusively a repository technology.\nRead all of Chuck Koscher's posts \u0026raquo;\r", "headings": ["Chuck Koscher","Biography","X","ORCID iD","Chuck Koscher's Latest Blog Posts","Preprints and Crossref’s metadata services","A fairer approach to waiting for deposits","Crossref OpenURL resolver","Crossref’s OpenURL query interface","Object Reuse and Exchange"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/collin-knopp-schwyn/", "title": "Collin Knopp-Schwyn", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Collin Knopp-Schwyn Membership Specialist Biography Collin began their work with Crossref in 2021 after managing nightly operations at an off-off-Broadway theater for nearly five years. Outside of supporting Crossref\u0026rsquo;s members, Collin edits Wikipedia, conducts independent research, writes/directs/produces plays, and rides public transit end-to-end just to have a pleasant place to read a book.", "content": "\rCollin Knopp-Schwyn Membership Specialist Biography Collin began their work with Crossref in 2021 after managing nightly operations at an off-off-Broadway theater for nearly five years. Outside of supporting Crossref\u0026rsquo;s members, Collin edits Wikipedia, conducts independent research, writes/directs/produces plays, and rides public transit end-to-end just to have a pleasant place to read a book.\n", "headings": ["Collin Knopp-Schwyn","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/cristi-martin/", "title": "Cristi Martin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Cristi Martin Member Support Contractor Biography Cristi works with our membership team helping to support our members.", "content": "\rCristi Martin Member Support Contractor Biography Cristi works with our membership team helping to support our members.\n", "headings": ["Cristi Martin","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/danilo-kuljanin/", "title": "Danilo Kuljanin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Danilo Kuljanin Biography Danilo works with our support team helping our members.", "content": "\rDanilo Kuljanin Biography Danilo works with our support team helping our members.\n", "headings": ["Danilo Kuljanin","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/david-haber/", "title": "David Haber", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "David Haber Publishing Operations Director Biography David Haber is the Publishing Operations Director at American Society for Microbiology\nDavid Haber's Latest Blog Posts\rShooting for the stars – ASM’s journey towards complete metadata\rKornelia Korzec, Tuesday, Mar 14, 2023\nIn MetadataCommunityPublishingResearch NexusContent RegistrationBest Practices Leave a comment\nAt Crossref, we care a lot about the completeness and quality of metadata. 
Gathering robust metadata from across the global network of scholarly communication is essential for effective co-creation of the research nexus and making the inner workings of academia traceable and transparent.", "content": "\rDavid Haber Publishing Operations Director Biography David Haber is the Publishing Operations Director at American Society for Microbiology\nDavid Haber's Latest Blog Posts\rShooting for the stars – ASM’s journey towards complete metadata\rKornelia Korzec, Tuesday, Mar 14, 2023\nIn MetadataCommunityPublishingResearch NexusContent RegistrationBest Practices Leave a comment\nAt Crossref, we care a lot about the completeness and quality of metadata. Gathering robust metadata from across the global network of scholarly communication is essential for effective co-creation of the research nexus and making the inner workings of academia traceable and transparent. We invest time in community initiatives such as Metadata 20/20 and Better Together webinars. We encourage members to take time to look up their participation reports, and our team can support you if you’re looking to understand and improve any aspects of metadata coverage of your content.\nRead all of David Haber's posts \u0026raquo;\r", "headings": ["David Haber","Biography","David Haber's Latest Blog Posts","Shooting for the stars – ASM’s journey towards complete metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/deborah-plavin/", "title": "Deborah Plavin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Deborah Plavin Digital Publishing Manager Biography Deborah Plavin is the Digital Publishing Manager at American Society for Microbiology\nDeborah Plavin's Latest Blog Posts\rShooting for the stars – ASM’s journey towards complete metadata\rKornelia Korzec, Tuesday, Mar 14, 2023\nIn MetadataCommunityPublishingResearch NexusContent RegistrationBest Practices Leave a comment\nAt Crossref, we care a lot about the completeness and quality of metadata. Gathering robust metadata from across the global network of scholarly communication is essential for effective co-creation of the research nexus and making the inner workings of academia traceable and transparent.", "content": "\rDeborah Plavin Digital Publishing Manager Biography Deborah Plavin is the Digital Publishing Manager at American Society for Microbiology\nDeborah Plavin's Latest Blog Posts\rShooting for the stars – ASM’s journey towards complete metadata\rKornelia Korzec, Tuesday, Mar 14, 2023\nIn MetadataCommunityPublishingResearch NexusContent RegistrationBest Practices Leave a comment\nAt Crossref, we care a lot about the completeness and quality of metadata. Gathering robust metadata from across the global network of scholarly communication is essential for effective co-creation of the research nexus and making the inner workings of academia traceable and transparent. We invest time in community initiatives such as Metadata 20/20 and Better Together webinars. 
We encourage members to take time to look up their participation reports, and our team can support you if you’re looking to understand and improve any aspects of metadata coverage of your content.\nRead all of Deborah Plavin's posts \u0026raquo;\r", "headings": ["Deborah Plavin","Biography","Deborah Plavin's Latest Blog Posts","Shooting for the stars – ASM’s journey towards complete metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/dima-safonov/", "title": "Dima Safonov", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Dima Safonov Senior Software Developer Biography Dima Safonov has a background in building cloud infrastructure and remote learning systems. He is passionate about improving user and developer experience, designing practical APIs, and writing detailed documentation. In his off-time he enjoys hiking, reading sci-fi, and telling every dog they\u0026rsquo;re a good dog.", "content": "\rDima Safonov Senior Software Developer Biography Dima Safonov has a background in building cloud infrastructure and remote learning systems. He is passionate about improving user and developer experience, designing practical APIs, and writing detailed documentation. In his off-time he enjoys hiking, reading sci-fi, and telling every dog they\u0026rsquo;re a good dog.\n", "headings": ["Dima Safonov","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/dominika-tkaczyk/", "title": "Dominika Tkaczyk", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Dominika Tkaczyk Director of Data Science Biography Dominika joined Crossref\u0026rsquo;s R\u0026amp;D team in August 2018 as a Principal R\u0026amp;D Developer. Within her first few years at Crossref, she focused primarily on the research and development of metadata matching strategies, to enrich the Research Nexus network with new relationships. In 2024 Dominika became Crossref’s Director of Data Science and launched the Data Science Team. The goal of the Data Science Team is to explore new possibilities for using the data to serve the scholarly community, continue the enrichment of the scholarly record with more metadata and relationships, and develop strong collaborations with like-minded community initiatives.", "content": "\rDominika Tkaczyk Director of Data Science Biography Dominika joined Crossref\u0026rsquo;s R\u0026amp;D team in August 2018 as a Principal R\u0026amp;D Developer. Within her first few years at Crossref, she focused primarily on the research and development of metadata matching strategies, to enrich the Research Nexus network with new relationships. In 2024 Dominika became Crossref’s Director of Data Science and launched the Data Science Team. The goal of the Data Science Team is to explore new possibilities for using the data to serve the scholarly community, continue the enrichment of the scholarly record with more metadata and relationships, and develop strong collaborations with like-minded community initiatives. Before joining Crossref, Dominika was a researcher and a data scientist at the University of Warsaw, Poland, and a postdoctoral researcher at Trinity College Dublin, Ireland. 
She received a PhD in Computer Science from the Polish Academy of Sciences in 2016 for her research on metadata extraction from full-text documents using machine learning and natural language processing techniques.\nORCID iD 0000-0001-5055-7876 Dominika Tkaczyk's Latest Blog Posts\rMetadata matching: beyond correctness\rDominika Tkaczyk, Wednesday, Jan 8, 2025\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/axeer1ee In our previous entry, we explained that thorough evaluation is key to understanding a matching strategy\u0026rsquo;s performance. While evaluation is what allows us to assess the correctness of matching, choosing the best matching strategy is, unfortunately, not as simple as selecting the one that yields the best matches. Instead, these decisions usually depend on weighing multiple factors based on your particular circumstances. This is true not only for metadata matching, but for many technical choices that require navigating trade-offs.\nHow good is your matching?\rDominika Tkaczyk, Wednesday, Nov 6, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/ief7aibi In our previous blog post in this series, we explained why no metadata matching strategy can return perfect results. Thankfully, however, this does not mean that it\u0026rsquo;s impossible to know anything about the quality of matching. Indeed, we can (and should!) measure how close (or far) we are from achieving perfection with our matching. Read on to learn how this can be done! How about we start with a quiz?\nThe myth of perfect metadata matching\rDominika Tkaczyk, Wednesday, Aug 28, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/pied3tho In our previous instalments of the blog series about matching (see part 1 and part 2), we explained what metadata matching is, why it is important and described its basic terminology. In this entry, we will discuss a few common beliefs about metadata matching that are often encountered when interacting with users, developers, integrators, and other stakeholders. Spoiler alert: we are calling them myths because these beliefs are not true!\nThe anatomy of metadata matching\rDominika Tkaczyk, Thursday, Jun 27, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/zie7reeg In our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions. Basic terminology Metadata matching is a high-level concept, with many different problems falling into this category.\nMetadata matching 101: what is it and why do we need it?\rDominika Tkaczyk, Thursday, May 16, 2024\nIn MetadataLinkingMetadata MatchingData Science Leave a comment\nhttps://doi.org/10.13003/aewi1cai At Crossref and ROR, we develop and run processes that match metadata at scale, creating relationships between millions of entities in the scholarly record. Over the last few years, we\u0026rsquo;ve spent a lot of time diving into details about metadata matching strategies, evaluation, and integration. It is quite possibly our favourite thing to talk and write about! 
But sometimes it is good to step back and look at the problem from a wider perspective.\nRead all of Dominika Tkaczyk's posts \u0026raquo;\r", "headings": ["Dominika Tkaczyk","Biography","ORCID iD","Dominika Tkaczyk's Latest Blog Posts","Metadata matching: beyond correctness","How good is your matching?","The myth of perfect metadata matching","The anatomy of metadata matching","Metadata matching 101: what is it and why do we need it?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/dr-kelvin-githaiga/", "title": "Dr. Kelvin Githaiga", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Dr. Kelvin Githaiga Biography Kelvin works with our support team helping our members.", "content": "\rDr. Kelvin Githaiga Biography Kelvin works with our support team helping our members.\n", "headings": ["Dr. Kelvin Githaiga","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/ed-pentz/", "title": "Ed Pentz", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Ed Pentz Executive Director Biography Ed Pentz became Crossref\u0026rsquo;s first Executive Director when the organization was founded in 2000 and manages all aspects of the organization to ensure that it fulfills its mission to make research outputs easy to find, cite, link and assess. Ed was Chair of the ORCID board of directors from 2014-2017 and is current Treasurer of the International DOI Foundation. Prior to joining Crossref, Ed held electronic publishing, editorial and sales positions at Harcourt Brace in the US and UK and managed the launch of Academic Press’s first online journal, the Journal of Molecular Biology, in 1995.", "content": "\rEd Pentz Executive Director Biography Ed Pentz became Crossref\u0026rsquo;s first Executive Director when the organization was founded in 2000 and manages all aspects of the organization to ensure that it fulfills its mission to make research outputs easy to find, cite, link and assess. Ed was Chair of the ORCID board of directors from 2014-2017 and is current Treasurer of the International DOI Foundation. Prior to joining Crossref, Ed held electronic publishing, editorial and sales positions at Harcourt Brace in the US and UK and managed the launch of Academic Press’s first online journal, the Journal of Molecular Biology, in 1995. Ed has a degree in English Literature from Princeton University and lives in Oxford, England.\nTopics Scholarly communications Crossref tools and services Not-for-profit governance X @epentz ORCID iD 0000-0002-5993-8592 Ed Pentz's Latest Blog Posts\rPOSI 2.0 feedback\rEd Pentz, Tuesday, Jan 28, 2025\nIn POSISustainabilityCommunity Leave a comment\nAs a provider of foundational open scholarly infrastructure, Crossref is an adopter of the Principles of Open Scholarly Infrastructure (POSI). In December 2024 we posted our updated POSI self-assessment. POSI provides an invaluable framework for transparency, accountability, sustainability and community alignment. There are 21 other POSI adopters. 
Together, we are now undertaking a public consultation on proposed revisions for a version 2.0 release of the principles, which would update the current version 1.\nA progress update and a renewed commitment to community\rGinny Hendricks, Thursday, Dec 12, 2024\nIn ProgramsStrategyProduct Leave a comment\nLooking back over 2024, we wanted to reflect on where we are in meeting our goals, and report on the progress and plans that affect you - our community of 21,000 organisational members as well as the vast number of research initiatives and scientific bodies that rely on Crossref metadata. In this post, we will give an update on our roadmap, including what is completed, underway, and up next, and a bit about what\u0026rsquo;s paused and why.\nSummary of the environmental impact of Crossref\rEd Pentz, Thursday, Dec 5, 2024\nIn CommunityEnvironment Leave a comment\nIn June 2022, we wrote a blog post “Rethinking staff travel, meetings, and events” outlining our new approach to staff travel, meetings, and events with the goal of not going back to ‘normal’ after the pandemic. We took into account three key areas: The environment and climate change Inclusion Work/life balance We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!\nNews: Crossref and Retraction Watch\rGinny Hendricks, Tuesday, Sep 12, 2023\nIn Research IntegrityRetractionsResearch NexusNews Release Leave a comment\nhttps://doi.org/10.13003/c23rw1d9 Crossref acquires Retraction Watch data and opens it for the scientific community Agreement to combine and publicly distribute data about tens of thousands of retracted research papers, and grow the service together 12th September 2023 —\u0026ndash; The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource.\nHow funding agencies can meet OSTP (and Open Science) guidance using existing open infrastructure\rEd Pentz, Thursday, Nov 17, 2022\nIn Research NexusResearch FundersGrantsMetadataIdentifiersCommunity Leave a comment\nIn August 2022, the United States Office of Science and Technology Policy (OSTP) issued a memo (PDF) on ensuring free, immediate, and equitable access to federally funded research (a.k.a. the “Nelson memo”). Crossref is particularly interested in and relevant for the areas of this guidance that cover metadata and persistent identifiers—and the infrastructure and services that make them useful. 
Funding bodies worldwide are increasingly involved in research infrastructure for dissemination and discovery.\nRead all of Ed Pentz's posts \u0026raquo;\r", "headings": ["Ed Pentz","Biography","Topics","X","ORCID iD","Ed Pentz's Latest Blog Posts","POSI 2.0 feedback","A progress update and a renewed commitment to community","Summary of the environmental impact of Crossref","News: Crossref and Retraction Watch","How funding agencies can meet OSTP (and Open Science) guidance using existing open infrastructure"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/esha-datta/", "title": "Esha Datta", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Esha Datta Principal R\u0026amp;D Developer Biography Esha was with Crossref from June 2018 until the end of 2024. Her career as a developer has been mostly in libraries where she helped create services and tools that provided and improved access to digitized content such as archival images and videos. Prior to Crossref, she worked at NYU Libraries as a developer where she built metadata-based services and tools. Along with fiddling around with code and metadata, Esha loves to cook and is a huge fan of cheese (the runnier the better).", "content": "\rEsha Datta Principal R\u0026amp;D Developer Biography Esha was with Crossref from June 2018 until the end of 2024. Her career as a developer has been mostly in libraries where she helped create services and tools that provided and improved access to digitized content such as archival images and videos. Prior to Crossref, she worked at NYU Libraries as a developer where she built metadata-based services and tools. Along with fiddling around with code and metadata, Esha loves to cook and is a huge fan of cheese (the runnier the better).\n", "headings": ["Esha Datta","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/evans-atoni/", "title": "Evans Atoni", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Evans Atoni Technical Support Specialist Biography Evans Atoni is a member of the Technical Support team. He joined Crossref in September 2021. He’s passionate about advancing open access \u0026amp; POSI. In his spare time, he enjoys anything outdoors, family time, and travelling. Evans works remotely from Nairobi, Kenya.\nORCID iD 0000-0002-9757-1621 Evans Atoni's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024", "content": "\rEvans Atoni Technical Support Specialist Biography Evans Atoni is a member of the Technical Support team. He joined Crossref in September 2021. He’s passionate about advancing open access \u0026amp; POSI. In his spare time, he enjoys anything outdoors, family time, and travelling. Evans works remotely from Nairobi, Kenya.\nORCID iD 0000-0002-9757-1621 Evans Atoni's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024\nIn Content RegistrationOpen SupportReportsReferencesPersistenceResearch Nexus Leave a comment\nMy name is Isaac Farley, Crossref Technical Support Manager. 
We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post. We’d love to hear from you!\nRead all of Evans Atoni's posts \u0026raquo;\r", "headings": ["Evans Atoni","Biography","ORCID iD","Evans Atoni's Latest Blog Posts","Solving your technical support questions in a snap!"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/fabienne-michaud/", "title": "Fabienne Michaud", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Fabienne Michaud Product Manager Biography Based in London, UK, Fabienne was a Product Manager at Crossref. She led our work on Similarity Check and the Open Funder Registry for three years. Fabienne is a chartered librarian who has worked in a variety of roles for over twenty years in academic, research and not-for-profit libraries. Before joining Crossref in April 2021, Fabienne was the Geological Society of London’s Head of Library and Information Services for eight years.", "content": "\rFabienne Michaud Product Manager Biography Based in London, UK, Fabienne was a Product Manager at Crossref. She led our work on Similarity Check and the Open Funder Registry for three years. Fabienne is a chartered librarian who has worked in a variety of roles for over twenty years in academic, research and not-for-profit libraries. Before joining Crossref in April 2021, Fabienne was the Geological Society of London’s Head of Library and Information Services for eight years. Fabienne is Non Executive Director on the Association of Learned and Professional Society Publishers (ALPSP) Board.\nFabienne Michaud's Latest Blog Posts\rSimilarity check update: A new similarity report and AI writing detection tool soon to be available to iThenticate v2 users\rFabienne Michaud, Wednesday, Nov 1, 2023\nIn Similarity CheckCommunity Leave a comment\nIn May, we updated you on the latest changes and improvements to the new version of iThenticate and let you know that a new similarity report and AI writing detection tool were on the horizon. On Wednesday 1 November 2023, Turnitin (who produce iThenticate) will be releasing a brand new similarity report and a free preview to their AI writing detection tool in iThenticate v2. The AI writing detection tool will be enabled by default and account administrators will be able to switch it off/on.\nOpen Funder Registry to transition into Research Organization Registry (ROR)\rAmanda French, Thursday, Sep 7, 2023\nIn Open Funder RegistryRORIdentifiersMetadata Leave a comment\nToday, we are announcing a long-term plan to deprecate the Open Funder Registry. For some time, we have understood that there is significant overlap between the Funder Registry and the Research Organization Registry (ROR), and funders and publishers have been asking us whether they should use Funder IDs or ROR IDs to identify funders. 
It has therefore become clear that merging the two registries will make workflows more efficient and less confusing for all concerned.\nOpen funding metadata through Crossref; a workshop to discuss challenges and improving workflows\rHans de Jonge, Wednesday, Sep 6, 2023\nIn Research FundersOpen Funder Registry Leave a comment\nTen years on from the launch of the Open Funder Registry (OFR, formerly FundRef), there is renewed interest in the potential of openly available funding metadata through Crossref. And with that: calls to improve the quality and completeness of that data. Currently, about 25% of Crossref records contain some kind of funding information. Over the years, this figure has grown steadily. A number of recent publications have shown, however, that there is considerable variation in the extent to which publishers deposit these data to Crossref.\nSimilarity Check: look out for a refreshed interface and improvements for iThenticate v2 account administrators\rFabienne Michaud, Monday, May 1, 2023\nIn Similarity CheckCommunity Leave a comment\nIn 2022, we flagged up some changes to Similarity Check, which were taking place in v2 of Turnitin\u0026rsquo;s iThenticate tool used by members participating in the service. We noted that further enhancements were planned, and want to highlight some changes that are coming very soon. These changes will affect functionality that is used by account administrators, and doesn\u0026rsquo;t affect the Similarity Reports themselves. From Wednesday 3 May 2023, administrators of iThenticate v2 accounts will notice some changes to the interface and improvements to the Users, Groups, Integrations, Statistics and Paper Lookup sections.\nSimilarity Check: what’s new with iThenticate v2?\rFabienne Michaud, Tuesday, May 10, 2022\nIn Similarity CheckCommunity Leave a comment\nSince we announced last September the launch of a new version of iThenticate, a number of you have upgraded and become familiar with iThenticate v2 and its new and improved features which include: A faster, more user-friendly and responsive interface A preprint exclusion filter, giving users the ability to identify content on preprint servers more easily A new “red flag” feature that signals the detection of hidden text such as text/quotation marks in white font, or suspicious character replacement A private repository available for browser users, allowing them to compare against their previous submissions to identify duplicate submissions within your organisation A content portal, helping users check how much of their own research outputs have been successfully indexed, self-diagnose and fix the content that has failed to be indexed in iThenticate.\nRead all of Fabienne Michaud's posts \u0026raquo;\r", "headings": ["Fabienne Michaud","Biography","Fabienne Michaud's Latest Blog Posts","Similarity check update: A new similarity report and AI writing detection tool soon to be available to iThenticate v2 users","Open Funder Registry to transition into Research Organization Registry (ROR)","Open funding metadata through Crossref; a workshop to discuss challenges and improving workflows","Similarity Check: look out for a refreshed interface and improvements for iThenticate v2 account administrators","Similarity Check: what’s new with iThenticate v2?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/geoffrey-bilder/", "title": "Geoffrey Bilder", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Geoffrey Bilder 
Director of Technology \u0026amp; Research Biography Geoffrey Bilder has led the technical development and launch of a number of industry initiatives at Crossref, including Similarity Check (formerly CrossCheck), Crossmark, ORCID, and the Open Funder Registry (formerly FundRef). He co-founded Brown University\u0026rsquo;s Scholarly Technology Group in 1993, providing the Brown academic community with advanced technology consulting in support of their research, teaching, and scholarly communication. He was subsequently head of IT R\u0026amp;D at Monitor Group, a global management consulting firm.", "content": "\rGeoffrey Bilder Director of Technology \u0026amp; Research Biography Geoffrey Bilder has led the technical development and launch of a number of industry initiatives at Crossref, including Similarity Check (formerly CrossCheck), Crossmark, ORCID, and the Open Funder Registry (formerly FundRef). He co-founded Brown University\u0026rsquo;s Scholarly Technology Group in 1993, providing the Brown academic community with advanced technology consulting in support of their research, teaching, and scholarly communication. He was subsequently head of IT R\u0026amp;D at Monitor Group, a global management consulting firm. From 2002 to 2005, Geoffrey was Chief Technology Officer of scholarly publishing firm Ingenta, and just prior to joining Crossref, he was a Publishing Technology Consultant at Scholarly Information Strategies.\nTopics future of scholarly communication scholarly infrastructures persistent identifiers ORCID X @gbilder ORCID iD 0000-0003-1315-5960 Geoffrey Bilder's Latest Blog Posts\rNews: Crossref and Retraction Watch\rGinny Hendricks, Tuesday, Sep 12, 2023\nIn Research IntegrityRetractionsResearch NexusNews Release Leave a comment\nhttps://doi.org/10.13003/c23rw1d9 Crossref acquires Retraction Watch data and opens it for the scientific community Agreement to combine and publicly distribute data about tens of thousands of retracted research papers, and grow the service together 12th September 2023 —\u0026ndash; The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource.\nStart citing data now. Not later\rGeoffrey Bilder, Thursday, Mar 23, 2023\nIn MetadataCitationData CitationResearch Nexus Leave a comment\nRecording data citations supports data reuse and aids research integrity and reproducibility. Crossref makes it easy for our members to submit data citations to support the scholarly record. TL;DR Citations are essential/core metadata that all members should submit for all articles, conference proceedings, preprints, and books. Submitting data citations to Crossref has long been possible. 
And it’s easy, you just need to: Include data citations in the references section as you would for any other citation Include a DOI or other persistent identifier for the data if it is available - just as you would for any other citation Submit the references to Crossref through the content registration process as you would for any other record And your data citations will flow through all the normal processes that Crossref applies to citations.\nMartin Paul Eve is joining our R\u0026amp;D group as a Principal Developer\rGeoffrey Bilder, Friday, Aug 26, 2022\nIn StaffLabs Leave a comment\nI\u0026rsquo;m delighted to say that Martin Paul Eve will be joining Crossref as a Principal R\u0026amp;D Developer starting in January 2023. As a Professor of Literature, Technology, and Publishing at Birkbeck, University of London- Martin has always worked on issues relating to metadata and scholarly infrastructure. In joining the Crossref R\u0026amp;D group, Martin can focus full-time on helping us design and build a new generation of services and tools to help the research community navigate and make sense of the scholarly record.\nAnnouncing our new Head of Strategic Initiatives: Dominika Tkaczyk\rGeoffrey Bilder, Friday, Jun 10, 2022\nIn StaffLabs Leave a comment\nTL;DR A year ago, we announced that we were putting the \u0026ldquo;R\u0026rdquo; back in R\u0026amp;D. That was when Rachael Lammey joined the R\u0026amp;D team as the Head of Strategic Initiatives. And now, with Rachael assuming the role of Product Director, I\u0026rsquo;m delighted to announce that Dominika Tkaczyk has agreed to take over Rachael\u0026rsquo;s role as the Head of Strategic Initiatives. Of course, you might already know her. We will also immediately start recruiting for a new Principal R\u0026amp;D Developer to work with Esha and Dominika on the R\u0026amp;D team.\nOutage of March 24, 2022\rGeoffrey Bilder, Thursday, Mar 24, 2022\nIn Data CenterPost Mortem Leave a comment\nSo here I am, apologizing again. Have I mentioned that I hate computers? We had a large data center outage. It lasted 17 hours. It meant that pretty much all Crossref services were unavailable - our main website, our content registration system, our reports, our APIs. 17 hours was a long time for us - but it was also an inconvenient time for numerous members, service providers, integrators, and users. We apologise for this.\nRead all of Geoffrey Bilder's posts \u0026raquo;\r", "headings": ["Geoffrey Bilder","Biography","Topics","X","ORCID iD","Geoffrey Bilder's Latest Blog Posts","News: Crossref and Retraction Watch","Start citing data now. Not later","Martin Paul Eve is joining our R\u0026amp;D group as a Principal Developer","Announcing our new Head of Strategic Initiatives: Dominika Tkaczyk","Outage of March 24, 2022"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/ginny-hendricks/", "title": "Ginny Hendricks", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Ginny Hendricks Chief Program Officer Biography In 2015, Ginny Hendricks established the community and membership functions at Crossref which encompassed community engagement \u0026amp; comms, member experience, technical support, and metadata strategy. In 2024 she developed the Program group as our CPO and incorporated product/program management within the group. 
Before joining Crossref, she ran \u0026lsquo;Ardent\u0026rsquo; for a decade, where she consulted within scholarly communications for awareness and growth strategies, developed and launched online products, and built virtual global communities.", "content": "\rGinny Hendricks Chief Program Officer Biography In 2015, Ginny Hendricks established the community and membership functions at Crossref which encompassed community engagement \u0026amp; comms, member experience, technical support, and metadata strategy. In 2024 she developed the Program group as our CPO and incorporated product/program management within the group. Before joining Crossref, she ran \u0026lsquo;Ardent\u0026rsquo; for a decade, where she consulted within scholarly communications for awareness and growth strategies, developed and launched online products, and built virtual global communities. In 2018 she founded the Metadata 20/20 collaboration to advocate for richer, connected, reusable, and open metadata, and she helps guide several open infrastructure initiatives such as ROR and POSI. She recently co-founded FORCE11\u0026rsquo;s Upstream community blog for all things open research, and she was an early contributor to the Barcelona Declaration on Open Research Information https://barcelona-declaration.org/\nTopics Open scholarly infrastructure Integrity of the scholarly record Non-profit leadership Global community engagement Program/product development ORCID iD 0000-0002-0353-2702 Ginny Hendricks's Latest Blog Posts\rA progress update and a renewed commitment to community\rGinny Hendricks, Thursday, Dec 12, 2024\nIn ProgramsStrategyProduct Leave a comment\nLooking back over 2024, we wanted to reflect on where we are in meeting our goals, and report on the progress and plans that affect you - our community of 21,000 organisational members as well as the vast number of research initiatives and scientific bodies that rely on Crossref metadata. In this post, we will give an update on our roadmap, including what is completed, underway, and up next, and a bit about what\u0026rsquo;s paused and why.\nMetadata beyond discoverability\rGinny Hendricks, Tuesday, Dec 3, 2024\nIn Research NexusCommunityMetadataPublishing Leave a comment\nMetadata is one of the most important tools needed to communicate with each other about science and scholarship. It tells the story of research that travels throughout systems and subjects and even to future generations. We have metadata for organising and describing content, metadata for provenance and ownership information, and metadata is increasingly used as signals of trust. Following our panel discussion on the same subject at the ALPSP University Press Redux conference in May 2024, in this post we explore the idea that metadata, once considered important mostly for discoverability, is now a vital element used for evidence and the integrity of the scholarly record.\nUpdate on the Resourcing Crossref for Future Sustainability research\rKornelia Korzec, Monday, Oct 28, 2024\nIn StrategyFees Leave a comment\nWe’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes. Crossref is in a good financial position with our current fees, which haven’t increased in 20 years. 
This project is seeking to future-proof our fees by: Making fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above.\nCelebrating five years of Grant IDs: where are we with the Crossref Grant Linking System?\rKornelia Korzec, Monday, Jul 1, 2024\nIn Research FundersGrantsInfrastructureMetadataIdentifiers Leave a comment\nWe’re happy to note that this month, we are marking five years since Crossref launched its Grant Linking System. The Grant Linking System (GLS) started life as a joint community effort to create ‘grant identifiers’ and support the needs of funders in the scholarly communications infrastructure. The system includes a funder-designed metadata schema and a unique link for each award which enables connections with millions of research outputs, better reporting on the research and outcomes of funding, and a contribution to open science infrastructure.\nRebalancing our REST API traffic\rStewart Houten, Tuesday, Jun 4, 2024\nIn APIInfrastructure Leave a comment\nSince we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service. It is the result of 20,000 organisations around the world that have worked for many years to curate and share metadata about their various resources, from research grants to research articles and other component inputs and outputs of research. The REST API is relied on by a large part of the research information community and beyond, seeing around 1.\nRead all of Ginny Hendricks's posts \u0026raquo;\r", "headings": ["Ginny Hendricks","Biography","Topics","ORCID iD","Ginny Hendricks's Latest Blog Posts","A progress update and a renewed commitment to community","Metadata beyond discoverability","Update on the Resourcing Crossref for Future Sustainability research","Celebrating five years of Grant IDs: where are we with the Crossref Grant Linking System?","Rebalancing our REST API traffic"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/gurjit-bhullar/", "title": "Gurjit Bhullar", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "\rGurjit Bhullar Member Experience Coordinator Biography Gurjit has moved on from Crossref. Gurjit joined Crossref in late 2017 to help with the member onboarding process, to ensure each member has a positive end-to-end experience. Outside of work, Gurjit will often be found reading novels, cooking hearty food and baking cakes.\nX @GurjitMiss ", "content": "\rGurjit Bhullar Member Experience Coordinator Biography Gurjit has moved on from Crossref. Gurjit joined Crossref in late 2017 to help with the member onboarding process, to ensure each member has a positive end-to-end experience. Outside of work, Gurjit will often be found reading novels, cooking hearty food and baking cakes.\nX @GurjitMiss ", "headings": ["Gurjit Bhullar","Biography","X"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/helena-cousijn/", "title": "Helena Cousijn", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Helena Cousijn Director of Programs and Services Biography Helena joined Crossref in February 2024 as Director of Programs and Services. 
She is passionate about open research and open infrastructure and joined us from DataCite, where she served as the Community Engagement Director for over 6 years. Before then, she worked at Elsevier, the Netherlands Organization for Scientific Research (NWO), and the Dutch Brain Bank. Helena holds a DPhil in Neuroscience from the University of Oxford.", "content": "\rHelena Cousijn Director of Programs and Services Biography Helena joined Crossref in February 2024 as Director of Programs and Services. She is passionate about open research and open infrastructure and joined us from DataCite, where she served as the Community Engagement Director for over 6 years. Before then, she worked at Elsevier, the Netherlands Organization for Scientific Research (NWO), and the Dutch Brain Bank. Helena holds a DPhil in Neuroscience from the University of Oxford. Outside of work, she enjoys spending time with her family (including dogs and horses), camping, rowing, and other outdoor activities.\nORCID iD 0000-0001-6660-6214 Helena Cousijn's Latest Blog Posts\rWe\u0026#39;ll be rocking your world again at PIDapalooza 2020\rGinny Hendricks, Sunday, Aug 18, 2019\nIn PIDapaloozaPersistenceIdentifiersCollaborationCommunityMeetings Leave a comment\nThe official countdown to PIDapalooza 2020 begins here! It\u0026rsquo;s 163 days to go till our flame-lighting opening ceremony at the fabulous Belem Cultural Center in Lisbon, Portugal. Your friendly neighborhood PIDapalooza Planning Committee\u0026mdash;Helena Cousijn (DataCite), Maria Gould (CDL), Stephanie Harley (ORCID), Alice Meadows (ORCID), and I\u0026mdash;are already hard at work making sure it’s the best one so far!\nWork through your PID problems on the PID Forum\rRachael Lammey, Thursday, Feb 21, 2019\nIn IdentifiersInfrastructureCollaborationCommunity Leave a comment\nAs self-confessed PID nerds, we’re big fans of a persistent identifier. However, we’re also conscious that the uptake and use of PIDs isn’t a done deal, and there are things that challenge how broadly these are adopted by the community. At PIDapalooza (an annual festival of PIDs) in January, ORCID, DataCite and Crossref ran an interactive session to chat about the cool things that PIDs allow us to do, what’s working well and, just as importantly, what isn’t, so that we can find ways to improve and approaches that work.\nData Citation: what and how for publishers\rRachael Lammey, Friday, Nov 23, 2018\nIn DataCitationDataCite Leave a comment\nWe’ve mentioned why data citation is important to the research community. Now it’s time to roll up our sleeves and get into the ‘how’. This part is important, as citing data in a standard way helps those citations be recognised, tracked, and used in a host of different services.\nWhy Data Citation matters to publishers and data repositories\rHelena Cousijn, Thursday, Nov 8, 2018\nIn DataCitationDataCiteResearch Nexus Leave a comment\nA couple of weeks ago we shared with you that data citation is here, and that you can start doing data citation today. But why would you want to? There are always so many priorities, why should this be at the top of the list?\nData citation: let’s do this\rRachael Lammey, Thursday, Oct 4, 2018\nIn DataCitationDataCiteResearch Nexus Leave a comment\nData citation is seen as one of the most important ways to establish data as a first-class scientific output. At Crossref and DataCite we are seeing growth in journal articles and other record types citing data, and datasets making the link the other way. 
Our organizations are committed to working together to help realize the data citation community’s ambition, so we’re embarking on a dedicated effort to get things moving.\nRead all of Helena Cousijn's posts \u0026raquo;\r", "headings": ["Helena Cousijn","Biography","ORCID iD","Helena Cousijn's Latest Blog Posts","We\u0026#39;ll be rocking your world again at PIDapalooza 2020","Work through your PID problems on the PID Forum","Data Citation: what and how for publishers","Why Data Citation matters to publishers and data repositories","Data citation: let’s do this"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/irene-mokeira/", "title": "Irene Mokeira", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Irene Mokeira Billing Support Specialist Biography I am passionate about delivering exceptional service to life-supporting and sustaining causes. When I’m not working, I dedicate my time to fitness and exploring the world. Whether I’m reading a biography of visionaries like Jane Goodall, watching a documentary on the surreal Waitomo Glowworm Caves in New Zealand, or engaging with unique thinkers, I constantly seek to experience life in the richest and most meaningful way.", "content": "\rIrene Mokeira Billing Support Specialist Biography I am passionate about delivering exceptional service to life-supporting and sustaining causes. When I’m not working, I dedicate my time to fitness and exploring the world. Whether I’m reading a biography of visionaries like Jane Goodall, watching a documentary on the surreal Waitomo Glowworm Caves in New Zealand, or engaging with unique thinkers, I constantly seek to experience life in the richest and most meaningful way.\n", "headings": ["Irene Mokeira","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/isaac-farley/", "title": "Isaac Farley", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Isaac Farley Head of Participation and Support Biography Isaac joined Crossref in April 2018 having previously been a member. He worked for the Society of Exploration Geophysicists as their Digital Publications Manager. In addition to more than five years of experience in digital publishing, he has previous experience in community building, volunteer engagement, and education. In 2024, Isaac took on the expanded role on the Senior Management Team of Head of Participation and Support.", "content": "\rIsaac Farley Head of Participation and Support Biography Isaac joined Crossref in April 2018 having previously been a member. He worked for the Society of Exploration Geophysicists as their Digital Publications Manager. In addition to more than five years of experience in digital publishing, he has previous experience in community building, volunteer engagement, and education. In 2024, Isaac took on the expanded role on the Senior Management Team of Head of Participation and Support. He enjoys writing, music, podcasts, his family, and the outdoors, in no particular order. Isaac works remotely from Tulsa, Oklahoma.\nIsaac Farley's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024\nIn Content RegistrationOpen SupportReportsReferencesPersistenceResearch Nexus Leave a comment\nMy name is Isaac Farley, Crossref Technical Support Manager. 
We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post. We’d love to hear from you!\nFlies in your metadata (ointment)\rIsaac Farley, Monday, Jul 25, 2022\nIn MetadataContent RegistrationResearch Nexus Leave a comment\nQuality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, these create problems that get compounded down the line. No wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages we receive into the technical support team (we encourage you to continue to report these metadata errors). We make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services - thus, we all want it to be accurate.\nHiccups with credentials in the Test Admin Tool\rIsaac Farley, Wednesday, Jan 26, 2022\nIn Content RegistrationAdmin Tool Leave a comment\nTL;DR We inadvertently deleted data in our authentication sandbox that stored member credentials for our Test Admin Tool - test.crossref.org. We’re restoring credentials using our production data, but this will mean that some members have credentials that are out-of-sync. Please contact support@crossref.org if you have issues accessing test.crossref.org. The details Earlier today the credentials in our authentication sandbox were inadvertently deleted. This was a mistake on our end that has resulted in those credentials no longer being stored for our members using our Test Admin Tool - test.\nLesson learned, the hard way: Let’s not do that again!\rIsaac Farley, Wednesday, Sep 8, 2021\nIn Content RegistrationMetadataPost MortemURL Updates Leave a comment\nTL;DR We missed an error that led to resource resolution URLs of some 500,000+ records to be incorrectly updated. We have reverted the incorrect resolution URLs affected by this problem. And, we’re putting in place checks and changes in our processes to ensure this does not happen again. How we got here Our technical support team was contacted in late June by Wiley about updating resolution URLs for their content. It\u0026rsquo;s a common request of our technical support team, one meant to make the URL update process more efficient, but this was a particularly large request.\nStepping up our deposit processing game\rIsaac Farley, Monday, Mar 8, 2021\nIn Content RegistrationDOIs Leave a comment\nSome of you who have submitted content to us during the first two months of 2021 may have experienced content registration delays. We noticed; you did, too. The time between us receiving XML from members, to the content being registered with us and the DOI resolving to the correct resolution URL, is usually a matter of minutes. 
Some submissions take longer - for example, book registrations with large reference lists, or very large files from larger publishers can take up to 24 to 48 hours to process.\nRead all of Isaac Farley's posts \u0026raquo;\r", "headings": ["Isaac Farley","Biography","Isaac Farley's Latest Blog Posts","Solving your technical support questions in a snap!","Flies in your metadata (ointment)","Hiccups with credentials in the Test Admin Tool","Lesson learned, the hard way: Let’s not do that again!","Stepping up our deposit processing game"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/jason-hanna/", "title": "Jason Hanna", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Jason Hanna Software Developer Biography Jason has moved on from Crossref. Jason Hanna joined Crossref in 2017 as part of the technical team as a developer. He loved to cook and was quite good at it. Jason also enjoyed hiking or skiing depending on the season and was a bit of a self-described gaming/tech nerd.", "content": "\rJason Hanna Software Developer Biography Jason has moved on from Crossref. Jason Hanna joined Crossref in 2017 as part of the technical team as a developer. He loved to cook and was quite good at it. Jason also enjoyed hiking or skiing depending on the season and was a bit of a self-described gaming/tech nerd.\n", "headings": ["Jason Hanna","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/jennifer-kemp/", "title": "Jennifer Kemp", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Jennifer Kemp Head of Partnerships Biography Jennifer has moved on from Crossref. Jennifer Kemp was Head of Partnerships at Crossref, where she worked with members, service providers, and metadata users to improve community participation, metadata, and discoverability. Prior to Crossref, she had been most recently Senior Manager of Policy and External Relations, North America for Springer Nature. Her experience in scholarly publishing began with her work as a Publication Manager at HighWire Press, where she had a variety of clients publishing in a wide range of disciplines.", "content": "\rJennifer Kemp Head of Partnerships Biography Jennifer has moved on from Crossref. Jennifer Kemp was Head of Partnerships at Crossref, where she worked with members, service providers, and metadata users to improve community participation, metadata, and discoverability. Prior to Crossref, she had been most recently Senior Manager of Policy and External Relations, North America for Springer Nature. Her experience in scholarly publishing began with her work as a Publication Manager at HighWire Press, where she had a variety of clients publishing in a wide range of disciplines. Jennifer’s perspective on the industry remained influenced by her years as a librarian, and she was active in a number of community initiatives. At Crossref, she facilitated the Books Interest Group, Funder Advisory Group, and the Metadata User Working Group. 
She also served on the Next Generation Library Publishing Advisory Board, the Library Publishing Coalition Preservation Task Force, and the Open Access eBook Usage (OAeBU) Board of Trustees.\nTopics Scholarly communications Metadata retrieval Event Data Use of our metadata Books Libraries X @SaysJKemp Jennifer Kemp's Latest Blog Posts\rIn the know on workflows: The metadata user working group\rJennifer Kemp, Tuesday, Feb 28, 2023\nIn UsersMetadataCommunity Leave a comment\nWhat’s in the metadata matters because it is So.Heavily.Used. You might be tired of hearing me say it but that doesn’t make it any less true. Our open APIs now see over 1 billion queries per month. The metadata is ingested, displayed and redistributed by a vast, global array of systems and services that in whole or in part are often designed to point users to relevant content. It’s also heavily used by researchers, who author the content that is described in the metadata they analyze.\nDon\u0026#39;t take it from us: Funder metadata matters\rJennifer Kemp, Thursday, Feb 16, 2023\nIn MetadataResearch FundersData Leave a comment\nWhy the focus on funding information? We are often asked who uses Crossref metadata and for what. One common use case is researchers in bibliometrics and scientometrics (among other fields) doing meta analyses on the entire corpus of records. As we pass the 10 year mark for the Funder Registry and 5 years of funders joining Crossref as members to register their grants, it’s worth a look at some recent research that focuses specifically on funding information.\nMeasuring Metadata Impacts: Books Discoverability in Google Scholar\rLettie Conrad, Wednesday, Jan 25, 2023\nIn MetadataBooksSearch Leave a comment\nThis blog post is from Lettie Conrad and Michelle Urberg, cross-posted from the The Scholarly Kitchen. As sponsors of this project, we at Crossref are excited to see this work shared out. The scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce.\nAccessibility for Crossref DOI Links: Call for comments on proposed new guidelines\rJennifer Kemp, Tuesday, Sep 6, 2022\nIn DOIsLinkingInteroperabilityAccessibilityDOI Display Guidelines Leave a comment\nOur entire community \u0026ndash; members, metadata users, service providers, community organizations and researchers \u0026ndash; create and/or use DOIs in some way so making them more accessible is a worthy and overdue effort. For the first time in five years and only the second time ever, we are recommending some changes to our DOI display guidelines (the changes aren’t really for display but more on that below). We don’t take such changes lightly, because we know it means updating established workflows.\nWith a little help from your Crossref friends: Better metadata\rJennifer Kemp, Thursday, Mar 31, 2022\nIn MetadataLinkingAPIS Leave a comment\nWe talk so much about more and better metadata that a reasonable question might be: what is Crossref doing to help? Members and their service partners do the heavy lifting to provide Crossref with metadata and we don’t change what is supplied to us. 
One reason we don’t is because members can and often do change their records (important note: updated records do not incur fees!). However, we do a fair amount of behind the scenes work to check and report on the metadata as well as to add context and relationships.\nRead all of Jennifer Kemp's posts \u0026raquo;\r", "headings": ["Jennifer Kemp","Biography","Topics","X","Jennifer Kemp's Latest Blog Posts","In the know on workflows: The metadata user working group","Don\u0026#39;t take it from us: Funder metadata matters","Measuring Metadata Impacts: Books Discoverability in Google Scholar","Accessibility for Crossref DOI Links: Call for comments on proposed new guidelines","With a little help from your Crossref friends: Better metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/jennifer-lin/", "title": "Jennifer Lin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Jennifer Lin Director of Product Management Biography Jennifer has moved on from Crossref. Jennifer Lin had over fifteen years of experience in product development, project management, community outreach, and change management within scholarly communications, education, and the public sector. She was the Director of Product Management at Crossref, a scholarly infrastructure provider. Previously, she worked for PLOS where she oversaw product strategy and development for the publisher data program, article-level metrics initiative, and open assessment activities.", "content": "\rJennifer Lin Director of Product Management Biography Jennifer has moved on from Crossref. Jennifer Lin had over fifteen years of experience in product development, project management, community outreach, and change management within scholarly communications, education, and the public sector. She was the Director of Product Management at Crossref, a scholarly infrastructure provider. Previously, she worked for PLOS where she oversaw product strategy and development for the publisher data program, article-level metrics initiative, and open assessment activities. Jennifer earned her PhD at Johns Hopkins University.\nTopics Product management Scholarly infrastructure funders and funding data research integrity research information network preprints Crossref Event Data altmetrics X @jenniferlin15 ORCID iD 0000-0002-9680-2328 Jennifer Lin's Latest Blog Posts\rSimilarity Check is changing\rJennifer Lin, Thursday, May 30, 2019\nIn Similarity CheckMember Briefing Leave a comment\nTl;dr Crossref is taking over the service management of Similarity Check from Turnitin. That means we\u0026rsquo;re your first port of call for questions and your agreement will be direct with us. This is a very good thing because we have agreed and will continue to agree the best possible set-up for our collective membership. Similarity Check participants need to take action to confirm the new terms with us as soon as possible and before 31st August 2019.\nMetadata Manager: Members, represent!\rJennifer Lin, Monday, Oct 15, 2018\nIn MetadataContent RegistrationCitationsIdentifiers Leave a comment\nOver 100 Million unique scholarly works are distributed into systems across the research enterprise 24/7 via our APIs at a rate of around 633 Million queries a month. 
Crossref is broadcasting descriptions of these works (metadata) to all corners of the digital universe.\nLeaving the house - where preprints go\rJennifer Lin, Tuesday, Aug 21, 2018\nIn PreprintsMetadataContent RegistrationAPIResearch Nexus Leave a comment\n“Pre-prints” are sometimes neither Pre nor Print (c.f. https://0-doi-org.libus.csd.mu.edu/10.12688/f1000research.11408.1, but they do go on and get published in journals. While researchers may have different motivations for posting a preprint, such as establishing a record of priority or seeking rapid feedback, the primary motivation appears to be timely sharing of results prior to journal publication.\nSo where in fact do preprints get published?\nPeer review publications\rJennifer Lin, Sunday, Aug 12, 2018\nIn Peer ReviewRecord TypesResearch Nexus Leave a comment\nPeer review publications\u0026mdash;not peer-reviewed publications, but peer reviews as publications Our newest dedicated record type\u0026mdash;peer review\u0026mdash;has received a warm welcome from our members since rollout last November. We are pleased to formally integrate them into the scholarly record, giving the scholars who participated credit for their work, ensuring readers and systems dependably get from the reviews to the article (and vice versa), and making sure that links to these works persist over time.\nPreprints growth rate ten times higher than journal articles\rJennifer Lin, Thursday, May 31, 2018\nIn PreprintsMetadataContent RegistrationCitation Leave a comment\nThe Crossref graph of the research enterprise is growing at an impressive rate of 2.5 million records a month - scholarly communications of all stripes and sizes. Preprints are one of the fastest growing types of content. While preprints may not be new, the growth may well be: ~30% for the past 2 years (compared to article growth of 2-3% for the same period). We began supporting preprints in November 2016 at the behest of our members. When members register them, we ensure that: links to these publications persist over time; they are connected to the full history of the shared research results; and the citation record is clear and up-to-date.\nRead all of Jennifer Lin's posts \u0026raquo;\r", "headings": ["Jennifer Lin","Biography","Topics","X","ORCID iD","Jennifer Lin's Latest Blog Posts","Similarity Check is changing","Metadata Manager: Members, represent!","Leaving the house - where preprints go","So where in fact do preprints get published?","Peer review publications","Preprints growth rate ten times higher than journal articles"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/jessica-gray/", "title": "Jessica Gray", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Jessica Gray Accounts Payable Specialist Biography Jessica has moved on from Crossref. Jessica Gray joined Crossref in December of 2021 as the Accounts Payable Specialist. She was responsible for the daily recording of expense transactions and vendor communications. In addition, she enjoyed kayaking, beach days, and spending time with friends and family in her spare time.", "content": "\rJessica Gray Accounts Payable Specialist Biography Jessica has moved on from Crossref. Jessica Gray joined Crossref in December of 2021 as the Accounts Payable Specialist. She was responsible for the daily recording of expense transactions and vendor communications. 
In addition, she enjoyed kayaking, beach days, and spending time with friends and family in her spare time.\n", "headings": ["Jessica Gray","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/jillian-jones/", "title": "Jillian Jones", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Jillian Jones HR Associate Biography Jillian has moved on from Crossref. Jillian joined the Crossref team in 2018 in a new HR Associate role to support all aspects of HR in the organization. When she is not working, Jillian enjoys reading, camping, appreciating art and music, and spending time exploring Boston.", "content": "\rJillian Jones HR Associate Biography Jillian has moved on from Crossref. Jillian joined the Crossref team in 2018 in a new HR Associate role to support all aspects of HR in the organization. When she is not working, Jillian enjoys reading, camping, appreciating art and music, and spending time exploring Boston.\n", "headings": ["Jillian Jones","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/joe-aparo/", "title": "Joe Aparo", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Joe Aparo Head of Infrastructure Services Biography Joe has moved on from Crossref. He joined Crossref in late 2018 after many years of developing business software solutions for a variety of companies, both large and small. They and their wife, both originally from New England, were residing in Rockport, MA. They had two grown, independent, and, of course, amazing children. In addition to spending quality time with their family, their absolute favorite activities included woodworking, home building/renovation, and playing tennis.", "content": "\rJoe Aparo Head of Infrastructure Services Biography Joe has moved on from Crossref. He joined Crossref in late 2018 after many years of developing business software solutions for a variety of companies, both large and small. They and their wife, both originally from New England, were residing in Rockport, MA. They had two grown, independent, and, of course, amazing children. In addition to spending quality time with their family, their absolute favorite activities included woodworking, home building/renovation, and playing tennis.\n", "headings": ["Joe Aparo","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/joe-wass/", "title": "Joe Wass", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Joe Wass Head of Software Development Biography Joe Wass was Head of the Software Development team. He spent his first five years at Crossref getting to know our broad community, with a special focus on finding citations in new places on the web, keeping tabs on the evolving activities of scholars round the world.\nX @joewass ORCID iD 0000-0002-0840-454X Joe Wass's Latest Blog Posts\rMending Chesterton\u0026#39;s Fence: Open Source Decision-making\rJoe Wass, Monday, Mar 18, 2024", "content": "\rJoe Wass Head of Software Development Biography Joe Wass was Head of the Software Development team. 
He spent his first five years at Crossref getting to know our broad community, with a special focus on finding citations in new places on the web, keeping tabs on the evolving activities of scholars round the world.\nX @joewass ORCID iD 0000-0002-0840-454X Joe Wass's Latest Blog Posts\rMending Chesterton\u0026#39;s Fence: Open Source Decision-making\rJoe Wass, Monday, Mar 18, 2024\nIn EngineeringPOSI Leave a comment\nWhen each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we\u0026rsquo;re trying to solve, what technical assumptions we\u0026rsquo;re making, what we already tried but didn\u0026rsquo;t work, how much coffee we\u0026rsquo;ve had today. All of these have an effect on the software we write. By the time the next person looks at that code, some of that context will have evaporated.\nMending Chesterton\u0026#39;s Fence: Open Source Decision-making\rJoe Wass, Monday, Mar 18, 2024\nIn Engineering Leave a comment\nWhen each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we\u0026rsquo;re trying to solve, what technical assumptions we\u0026rsquo;re making, what we already tried but didn\u0026rsquo;t work, how much coffee we\u0026rsquo;ve had today. All of these have an effect on the software we write. By the time the next person looks at that code, some of that context will have evaporated.\nRenewed Persistence\rJoe Wass, Saturday, Apr 1, 2023\nIn Engineering Leave a comment\nWe believe in Persistent Identifiers. We believe in defence in depth. Today we\u0026rsquo;re excited to announce an upgrade to our data resilience strategy. Defence in depth means layers of security and resilience, and that means layers of backups. For some years now, our last line of defence has been a reliable, tried-and-tested technology. One that\u0026rsquo;s been around for a while. Yes, I\u0026rsquo;m talking about the humble 5¼ inch floppy disk.\nWhat\u0026#39;s that DOI?\rJoe Wass, Monday, Jan 21, 2019\nIn Event DataPidapalooza Leave a comment\nThis is a long overdue followup to 2016\u0026rsquo;s \u0026ldquo;URLs and DOIs: a complicated relationship\u0026rdquo;. Like that post, this accompanies my talk at PIDapalooza, the festival of open persistent identifiers). I don\u0026rsquo;t think I need to give a spoiler warning when I tell you that it\u0026rsquo;s still complicated. But this post presents some vocabulary to describe exactly how complicated it is. Event Data has been up and running and collecting data for a couple of years now, but this post describes changes we made toward the end of 2018.\nHear this, real insight into the inner workings of Crossref\rJoe Wass, Sunday, Apr 1, 2018\nIn Member BriefingEvent DataLaunch Leave a comment\nYou want to hear more from us. We hear you. We’ve spent the past year building Crossref Event Data, and hope to launch very soon. Building a new piece of infrastructure from scratch has been an exciting project, and we’ve taken the opportunity to incorporate as much feedback from the community as possible. 
We’d like to take a moment to share some of the suggestions we had, and how we’ve acted on them.\nRead all of Joe Wass's posts \u0026raquo;\r", "headings": ["Joe Wass","Biography","X","ORCID iD","Joe Wass's Latest Blog Posts","Mending Chesterton\u0026#39;s Fence: Open Source Decision-making","Mending Chesterton\u0026#39;s Fence: Open Source Decision-making","Renewed Persistence","What\u0026#39;s that DOI?","Hear this, real insight into the inner workings of Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/joel-schuweiler/", "title": "Joel Schuweiler", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Joel Schuweiler Senior Site Reliability Engineer Biography Joel has moved on from Crossref. Joel joined Crossref in late 2019 to help leverage up-and-coming solutions to ensure stability and reliability. He has lived in various places but currently calls Minneapolis, Minnesota, his home. In his free time, he operates historic streetcars, works on property on an island in Lake Superior, and enjoys contributing to the open-source world by fixing bugs in other people\u0026rsquo;s software and releasing his own.", "content": "\rJoel Schuweiler Senior Site Reliability Engineer Biography Joel has moved on from Crossref. Joel joined Crossref in late 2019 to help leverage up-and-coming solutions to ensure stability and reliability. He has lived in various places but currently calls Minneapolis, Minnesota, his home. In his free time, he operates historic streetcars, works on property on an island in Lake Superior, and enjoys contributing to the open-source world by fixing bugs in other people\u0026rsquo;s software and releasing his own.\nJoel Schuweiler's Latest Blog Posts\rOpen-source code: giving back\rJoel Schuweiler, Friday, Apr 30, 2021\nIn CollaborationInfrastructureCommunity Leave a comment\nTL:DR; Hi, I\u0026rsquo;m Joel GitLab UI unsatisfactory Wrote a UI to use the API Wrote a missing API Open company contributes changes back to another open company Now have a method for getting work done much easier Hurrah! I\u0026rsquo;m Joel, a Senior Site Reliability Engineer here at Crossref. I have a long background in open source, software development, and solving unique problems. One of my earliest computer influences was my father.\nRead all of Joel Schuweiler's posts \u0026raquo;\r", "headings": ["Joel Schuweiler","Biography","Joel Schuweiler's Latest Blog Posts","Open-source code: giving back"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/johanssen-obanda/", "title": "Johanssen Obanda", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Johanssen Obanda Community Engagement Manager Biography Obanda joined Crossref in 2023 to work with Crossref Ambassadors to effectively engage their communities and support Crossref’s outreach initiatives. Obanda is passionate about building an inclusive research ecosystem where researchers across the globe can easily access scientific knowledge and make meaningful connections. His previous experience includes social entrepreneurship and science communication. For fun, Obanda likes to explore historical sites and experience the sunset.", "content": "\rJohanssen Obanda Community Engagement Manager Biography Obanda joined Crossref in 2023 to work with Crossref Ambassadors to effectively engage their communities and support Crossref’s outreach initiatives. 
Obanda is passionate about building an inclusive research ecosystem where researchers across the globe can easily access scientific knowledge and make meaningful connections. His previous experience includes social entrepreneurship and science communication. For fun, Obanda likes to explore historical sites and experience the sunset.\nX @johansseno ORCID iD 0000-0002-2111-7780 Johanssen Obanda's Latest Blog Posts\rCommon views and questions about metadata across Africa\rJohanssen Obanda, Wednesday, Apr 24, 2024\nIn MetadataCommunityMeetingsOutreach Leave a comment\nThis past year has been a captivating journey of immersion within the Crossref community, a mix of online interactions and meaningful in-person experiences. From the engaging Sustainability Research and Innovation Conference in Port Elizabeth, South Africa, to the impactful webinars conducted globally, this has been more than just a professional endeavour; it has been a personal exploration of collaboration, insights, and a shared commitment to pushing the boundaries of scholarly communication.\nPerspectives: Audrey Kenni-Nemaleu on scholarly communications in Cameroon\rAudrey Kenni-Nemaleu, Thursday, Oct 5, 2023\nIn CommunityPerspectives Leave a comment\nOur Perspectives blog series highlights different members of our diverse, global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nNotre série de blogs Perspectives met en lumière différents membres de la communauté internationale de Crossref. Nous en apprenons davantage sur leur vie et sur la manière dont ils ont appris à nous connaître et à travailler avec nous, et nous entendons parler du paysage de la recherche universitaire dans leur pays, des défis auxquels ils sont confrontés et de leurs projets pour l\u0026rsquo;avenir.\nPerspectives: My thoughts on starting my new role at Crossref\rJohanssen Obanda, Thursday, Jul 6, 2023\nIn OutreachStaffPerspectives Leave a comment\nMy name is Johanssen Obanda. I joined Crossref in February 2023 as a Community Engagement Manager to look after the Ambassadors program and help with other outreach activities. I work remotely from Kenya, where there is an increasing interest in improving the exposure of scholarship by Kenyan researchers and ultimately by the wider community of African researchers. In this blog, I’m sharing the experience and insights of my first 4 months in this role.\nRead all of Johanssen Obanda's posts \u0026raquo;\r", "headings": ["Johanssen Obanda","Biography","X","ORCID iD","Johanssen Obanda's Latest Blog Posts","Common views and questions about metadata across Africa","Perspectives: Audrey Kenni-Nemaleu on scholarly communications in Cameroon","Perspectives: My thoughts on starting my new role at Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/jon-stark/", "title": "Jon Stark", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Jon Stark Senior Software Developer Biography Jon Stark has been part of the Crossref software development team since 2004. Prior to Crossref, Jon had worked at Northern Light Technology and EBSCO Publishing. 
Aside from work, Jon enjoys time with his family, a love of nature and the outdoors, woodworking, flying (with the use of an airplane, not jumping off cliffs), and serving in his local church community.\nJon Stark's Latest Blog Posts\rResolution reports: a look inside and ahead\rIsaac Farley, Tuesday, Dec 17, 2019", "content": "\rJon Stark Senior Software Developer Biography Jon Stark has been part of the Crossref software development team since 2004. Prior to Crossref, Jon had worked at Northern Light Technology and EBSCO Publishing. Aside from work, Jon enjoys time with his family, a love of nature and the outdoors, woodworking, flying (with the use of an airplane, not jumping off cliffs), and serving in his local church community.\nJon Stark's Latest Blog Posts\rResolution reports: a look inside and ahead\rIsaac Farley, Tuesday, Dec 17, 2019\nIn Content RegistrationReportsDOI Resolution Leave a comment\nIsaac Farley, technical support manager, and Jon Stark, software developer, provide a glimpse into the history and current state of our popular monthly resolution reports. They invite you, our members, to help us understand how you use these reports. This will help us determine the best next steps for further improvement of these reports, and particularly what we do and don’t filter out of them.\nRead all of Jon Stark's posts \u0026raquo;\r", "headings": ["Jon Stark","Biography","Jon Stark's Latest Blog Posts","Resolution reports: a look inside and ahead"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/josh-brown/", "title": "Josh Brown", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Josh Brown Funder Engagement Consultant Biography Josh worked with funders around the world to encourage the use of DOIs for grants, to make it easier to reuse information about funding, and to bring them into the Crossref community. Before joining Crossref, Josh had worked at ORCID as Director of Partnerships (among other roles) and had positions at CERN, Jisc, and University College London. He was a qualified librarian and trained chef, and had worked around the world on projects related to persistent identifiers, research information management, open science, repositories, open access publishing, research evaluation, scholarly communications, research infrastructure, and trying to make things work a bit better.", "content": "\rJosh Brown Funder Engagement Consultant Biography Josh worked with funders around the world to encourage the use of DOIs for grants, to make it easier to reuse information about funding, and to bring them into the Crossref community. Before joining Crossref, Josh had worked at ORCID as Director of Partnerships (among other roles) and had positions at CERN, Jisc, and University College London. 
He was a qualified librarian and trained chef, and had worked around the world on projects related to persistent identifiers, research information management, open science, repositories, open access publishing, research evaluation, scholarly communications, research infrastructure, and trying to make things work a bit better.\nTopics Research funder workflows research information community engagement and partnerships scholarly communications ORCID iD 0000-0002-8689-4935 Josh Brown's Latest Blog Posts\rFunders and infrastructure: let’s get building\rJosh Brown, Monday, Jul 29, 2019\nIn CollaborationResearch FundersInfrastructureMetadataGrants Leave a comment\nHuman intelligence and curiosity are the lifeblood of the scholarly world, but not many people can afford to pursue research out of their own pocket. We all have bills to pay. Also, compute time, buildings, lab equipment, administration, and giant underground thingumatrons do not come cheap. In 2017, according to statistics from UNESCO, $1.7 trillion was invested globally in Research and Development. A lot of this money comes from the public - 22c in every dollar spent on R\u0026amp;D in the USA comes from government funds, for example.\nRead all of Josh Brown's posts \u0026raquo;\r", "headings": ["Josh Brown","Biography","Topics","ORCID iD","Josh Brown's Latest Blog Posts","Funders and infrastructure: let’s get building"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/kathleen-luschek/", "title": "Kathleen Luschek", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Kathleen Luschek Technical Support Specialist Biography Kathleen joined Crossref in May 2019 as part of the member experience team. She previously worked at the University of Hawaii Libraries, managing their institutional repositories and Open Access policy. She also spent some time working in Open Access publishing at PLOS in San Francisco. When not immersed in scholarly communications and metadata, Kathleen can be found in the ocean (usually on a surfboard) or with her nose in a book.", "content": "\rKathleen Luschek Technical Support Specialist Biography Kathleen joined Crossref in May 2019 as part of the member experience team. She previously worked at the University of Hawaii Libraries, managing their institutional repositories and Open Access policy. She also spent some time working in Open Access publishing at PLOS in San Francisco. When not immersed in scholarly communications and metadata, Kathleen can be found in the ocean (usually on a surfboard) or with her nose in a book.\nKathleen Luschek's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024\nIn Content RegistrationOpen SupportReportsReferencesPersistenceResearch Nexus Leave a comment\nMy name is Isaac Farley, Crossref Technical Support Manager. We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post.
We’d love to hear from you!\nCalling all 24-hour (PID) party people!\rKathleen Luschek, Tuesday, Oct 13, 2020\nIn PIDapaloozaPersistenceIdentifiersCollaborationCommunityMeetings Leave a comment\nWhile we wish we could be together in person to celebrate the fifth PIDapalooza, there\u0026rsquo;s an upside to moving it online: now everyone can participate in the universe\u0026rsquo;s best PID party! With 24 hours of non-stop PID programming, you\u0026rsquo;ll be able to come to the party no matter where you happen to be. Send us your ideas for #PIDapalooza21 Now is your chance to share your work in the #PIDapalooza21 spotlight!\nSimilarity Check is changing\rJennifer Lin, Thursday, May 30, 2019\nIn Similarity CheckMember Briefing Leave a comment\nTl;dr Crossref is taking over the service management of Similarity Check from Turnitin. That means we\u0026rsquo;re your first port of call for questions and your agreement will be direct with us. This is a very good thing because we have agreed and will continue to agree the best possible set-up for our collective membership. Similarity Check participants need to take action to confirm the new terms with us as soon as possible and before 31st August 2019.\nRead all of Kathleen Luschek's posts \u0026raquo;\r", "headings": ["Kathleen Luschek","Biography","Kathleen Luschek's Latest Blog Posts","Solving your technical support questions in a snap!","Calling all 24-hour (PID) party people!","Similarity Check is changing"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/kim-harvey/", "title": "Kim Harvey", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Kim Harvey Member Support Contractor Biography Kim works with our membership team helping to support our members.", "content": "\rKim Harvey Member Support Contractor Biography Kim works with our membership team helping to support our members.\n", "headings": ["Kim Harvey","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/kirsty-meddings/", "title": "Kirsty Meddings", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Kirsty Meddings Product Manager Biography Kirsty Meddings had been involved in a diverse set of initiatives that kept her busy since 2008. She spent most of her career in scholarly communications, in a variety of marketing and product development roles for intermediaries and technology suppliers. She spoke conversational geek and was competent in publishing, working towards fluency in both. Tragically, Kirsty passed away in 2020. A tribute to her can be read in A tribute to our Kirsty on our blog.", "content": "\rKirsty Meddings Product Manager Biography Kirsty Meddings had been involved in a diverse set of initiatives that kept her busy since 2008. She spent most of her career in scholarly communications, in a variety of marketing and product development roles for intermediaries and technology suppliers. She spoke conversational geek and was competent in publishing, working towards fluency in both. Tragically, Kirsty passed away in 2020. A tribute to her can be read in A tribute to our Kirsty on our blog. 
We will remember her always.\nX @kmeddings ORCID iD 0000-0001-9205-2956 Kirsty Meddings's Latest Blog Posts\rSimilarity Check news: introducing the next generation iThenticate.\rKirsty Meddings, Tuesday, Jul 28, 2020\nIn Member BriefingSimilarity Check Leave a comment\nCrossref’s Similarity Check service is used by our members to detect text overlap with previously published work that may indicate plagiarism of scholarly or professional works. Manuscripts can be checked against millions of publications from other participating Crossref members and general web content using the iThenticate text comparison software from Turnitin.\nEncouraging even greater reporting of corrections and retractions\rKirsty Meddings, Monday, Mar 30, 2020\nIn Content RegistrationCrossmarkMetadataMember Briefing Leave a comment\nTL;DR: We no longer charge fees for members to participate in Crossmark, and we encourage all our members to register metadata about corrections and retractions - even if you can’t yet add the Crossmark button and pop-up box to your landing pages or PDFs.\n\u0026ndash;\nCan you help us to launch Distributed Usage Logging?\rKirsty Meddings, Monday, Mar 2, 2020\nIn MembersCommunityCollaborationStandards Leave a comment\nUpdate: Deadline extended to 23:59 (UTC) 13th March 2020.\nDistributed Usage Logging (DUL) allows publishers to capture traditional usage activity related to their content that happens on sites other than their own so they can provide reports of “total usage”, for example to subscribing institutions, regardless of where that usage happens.\nBig things have small beginnings: the growth of the Open Funder Registry\rKirsty Meddings, Sunday, Jul 21, 2019\nIn CollaborationIdentifiersMetadata Leave a comment\nThe Open Funder Registry plays a critical role in making sure that our members correctly identify the funding sources behind the research that they are publishing. It addresses a similar problem to the one that led to the creation of ORCID: researchers\u0026rsquo; names are hard to disambiguate and are rarely unique; they get abbreviated, have spelling variations and change over time. The same is true of organizations. You don’t have to read all that many papers to see authors acknowledge funding from the US National Institutes of Health as NIH, National Institutes for Health, National Institute of Health, etc.\nPutting content in context\rKirsty Meddings, Monday, May 13, 2019\nIn CrossmarkContent RegistrationMetadataMember Briefing Leave a comment\nYou can’t go far on this blog without reading about the importance of registering rich metadata. Over the past year we’ve been encouraging all of our members to review the metadata they are sending us and find out which gaps need filling by looking at their Participation Report.\nThe metadata elements that are tracked in Participation Reports are mostly beyond the standard bibliographic information that is used to identify a work. They are important because they provide context: they tell the reader how the research was funded, what license it’s published under, and more about its authors via links to their ORCID profiles. 
And while this metadata is all available through our APIs, we also display much of it to readers through our Crossmark service.\nRead all of Kirsty Meddings's posts \u0026raquo;\r", "headings": ["Kirsty Meddings","Biography","X","ORCID iD","Kirsty Meddings's Latest Blog Posts","Similarity Check news: introducing the next generation iThenticate.","Encouraging even greater reporting of corrections and retractions","Can you help us to launch Distributed Usage Logging?","Big things have small beginnings: the growth of the Open Funder Registry","Putting content in context"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/kornelia-korzec/", "title": "Kornelia Korzec", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Kornelia Korzec Director of Community Biography Kora joined Crossref in 2022 to ensure a community-centred approach across all communications. She previously worked at eLife, mobilising researchers to advocate for greater adoption of open science. Prior to that, Kora headed engagement at Engineering Without Borders UK, an international development charity, and before that – she designed behaviour-change campaigns for waste minimisation with local authorities in Cambridgeshire. Outside work, Kora enjoys spending time with her two little boys, working on a doctoral project on anti-consumerism, as well as climbing and dancing.", "content": "\rKornelia Korzec Director of Community Biography Kora joined Crossref in 2022 to ensure a community-centred approach across all communications. She previously worked at eLife, mobilising researchers to advocate for greater adoption of open science. Prior to that, Kora headed engagement at Engineering Without Borders UK, an international development charity, and before that – she designed behaviour-change campaigns for waste minimisation with local authorities in Cambridgeshire. Outside work, Kora enjoys spending time with her two little boys, working on a doctoral project on anti-consumerism, as well as climbing and dancing.\nX @qornik ORCID iD 0000-0002-4632-5228 Kornelia Korzec's Latest Blog Posts\rA summary of our Annual Meeting\rRosa Morais Clark, Monday, Dec 9, 2024\nIn Annual MeetingMeetingsCommunityGovernance Leave a comment\nThe Crossref2024 annual meeting gathered our community for a packed agenda of updates, demos, and lively discussions on advancing our shared goals. The day was filled with insights and energy, from practical demos of Crossref’s latest API features to community reflections on the Research Nexus initiative and the Board elections. Our Board elections are always the focal point of the Annual Meeting. We want to start reflecting on the day by congratulating our newly elected board members: Katharina Rieck from Austrian Science Fund (FWF), Lisa Schiff from California Digital Library, Aaron Wood from American Psychological Association, and Amanda Ward from Taylor and Francis, who will officially join (and re-join) in January 2025.\nSummary of the environmental impact of Crossref\rEd Pentz, Thursday, Dec 5, 2024\nIn CommunityEnvironment Leave a comment\nIn June 2022, we wrote a blog post “Rethinking staff travel, meetings, and events” outlining our new approach to staff travel, meetings, and events with the goal of not going back to ‘normal’ after the pandemic. 
We took into account three key areas: The environment and climate change Inclusion Work/life balance We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!\nMetadata beyond discoverability\rGinny Hendricks, Tuesday, Dec 3, 2024\nIn Research NexusCommunityMetadataPublishing Leave a comment\nMetadata is one of the most important tools needed to communicate with each other about science and scholarship. It tells the story of research that travels throughout systems and subjects and even to future generations. We have metadata for organising and describing content, metadata for provenance and ownership information, and metadata is increasingly used as signals of trust. Following our panel discussion on the same subject at the ALPSP University Press Redux conference in May 2024, in this post we explore the idea that metadata, once considered important mostly for discoverability, is now a vital element used for evidence and the integrity of the scholarly record.\nUpdate on the Resourcing Crossref for Future Sustainability research\rKornelia Korzec, Monday, Oct 28, 2024\nIn StrategyFees Leave a comment\nWe’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes. Crossref is in a good financial position with our current fees, which haven’t increased in 20 years. This project is seeking to future-proof our fees by: Making fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above.\nCelebrating five years of Grant IDs: where are we with the Crossref Grant Linking System?\rKornelia Korzec, Monday, Jul 1, 2024\nIn Research FundersGrantsInfrastructureMetadataIdentifiers Leave a comment\nWe’re happy to note that this month, we are marking five years since Crossref launched its Grant Linking System. The Grant Linking System (GLS) started life as a joint community effort to create ‘grant identifiers’ and support the needs of funders in the scholarly communications infrastructure. 
The system includes a funder-designed metadata schema and a unique link for each award which enables connections with millions of research outputs, better reporting on the research and outcomes of funding, and a contribution to open science infrastructure.\nRead all of Kornelia Korzec's posts \u0026raquo;\r", "headings": ["Kornelia Korzec","Biography","X","ORCID iD","Kornelia Korzec's Latest Blog Posts","A summary of our Annual Meeting","Summary of the environmental impact of Crossref","Metadata beyond discoverability","Update on the Resourcing Crossref for Future Sustainability research","Celebrating five years of Grant IDs: where are we with the Crossref Grant Linking System?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/laura-cuniff/", "title": "Laura Cuniff", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Laura Cuniff Billing Support Specialist Biography Laura can be found volunteering at one of the local organizations supporting her hometown, working a job that she enjoys, and in excellent company with a dog or cat.", "content": "\rLaura Cuniff Billing Support Specialist Biography Laura can be found volunteering at one of the local organizations supporting her hometown, working a job that she enjoys, and in excellent company with a dog or cat.\n", "headings": ["Laura Cuniff","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/laura-j-wilkinson/", "title": "Laura J Wilkinson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Laura J Wilkinson Education Manager Biography Laura was an Education Manager with Crossref until April 2021.\nLaura J Wilkinson's Latest Blog Posts\rCome for a swim in our new pool of Education materials\rLaura J Wilkinson, Wednesday, Apr 29, 2020\nIn MetadataEducationAPIs Leave a comment\nAfter 20 years in operation, and as our system matures from experimental to foundational infrastructure, it’s time to review our documentation. Having a solid core of education materials about the why and the how of Crossref is essential in making participation possible, easy, and equitable.", "content": "\rLaura J Wilkinson Education Manager Biography Laura was an Education Manager with Crossref until April 2021.\nLaura J Wilkinson's Latest Blog Posts\rCome for a swim in our new pool of Education materials\rLaura J Wilkinson, Wednesday, Apr 29, 2020\nIn MetadataEducationAPIs Leave a comment\nAfter 20 years in operation, and as our system matures from experimental to foundational infrastructure, it’s time to review our documentation. Having a solid core of education materials about the why and the how of Crossref is essential in making participation possible, easy, and equitable.
As our system has evolved, our membership has grown and diversified, and so have our tools - both for depositing metadata with Crossref, and for retrieving and making use of it.\nWhere does publisher metadata go and how is it used?\rLaura J Wilkinson, Monday, Sep 17, 2018\nIn ParticipationContent RegistrationMetadataBest Practices Leave a comment\nEarlier this week, colleagues from Crossref, ScienceOpen, and OPERAS/OpenEdition joined forces to run a webinar on “Where does publisher metadata go and how is it used?”.\nRead all of Laura J Wilkinson's posts \u0026raquo;\r", "headings": ["Laura J Wilkinson","Biography","Laura J Wilkinson's Latest Blog Posts","Come for a swim in our new pool of Education materials","Where does publisher metadata go and how is it used?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/lena-stoll/", "title": "Lena Stoll", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Lena Stoll Program Lead Biography Lena joined the Product team at Crossref in 2023. A chemist by training, she worked in editorial roles with a focus on STM book publishing before transitioning into software product management. Prior to joining Crossref, her focus as a Group Product Manager at Morressier was on building peer review workflows and content hosting tools. In 2024 Lena took on the expanded role of Program Lead, responsible for all our activities that encompass responding to community trends, including research integrity, and most front-end tools.", "content": "\rLena Stoll Program Lead Biography Lena joined the Product team at Crossref in 2023. A chemist by training, she worked in editorial roles with a focus on STM book publishing before transitioning into software product management. Prior to joining Crossref, her focus as a Group Product Manager at Morressier was on building peer review workflows and content hosting tools. In 2024 Lena took on the expanded role of Program Lead, responsible for all our activities that encompass responding to community trends, including research integrity, and most front-end tools. Lena lives in Berlin, Germany, where she spends most of her spare time fostering an obsession with her dog that borders on the unhealthy.\nLena Stoll's Latest Blog Posts\rRe-introducing Participation Reports to encourage best practices in open metadata\rLena Stoll, Thursday, Jul 25, 2024\nIn Participation ReportsMetadataBest Practices Leave a comment\nWe’ve just released an update to our participation report, which provides a view for our members into how they are each working towards best practices in open metadata. Prompted by some of the signatories and organizers of the Barcelona Declaration, which Crossref supports, and with the help of our friends at CWTS Leiden, we have fast-tracked the work to include an updated set of metadata best practices in participation reports for our members.\nRead all of Lena Stoll's posts \u0026raquo;\r", "headings": ["Lena Stoll","Biography","Lena Stoll's Latest Blog Posts","Re-introducing Participation Reports to encourage best practices in open metadata"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/lettie-conrad/", "title": "Lettie Conrad", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Lettie Conrad Independent researcher \u0026amp; consultant Biography Lettie Y. 
Conrad, PhD, Independent researcher \u0026amp; consultant and Senior Associate, Maverick Publishing Specialists, North American Editor, Learned Publishing\nLettie Conrad's Latest Blog Posts\rMeasuring Metadata Impacts: Books Discoverability in Google Scholar\rLettie Conrad, Wednesday, Jan 25, 2023\nIn MetadataBooksSearch Leave a comment\nThis blog post is from Lettie Conrad and Michelle Urberg, cross-posted from The Scholarly Kitchen. As sponsors of this project, we at Crossref are excited to see this work shared out.", "content": "\rLettie Conrad Independent researcher \u0026amp; consultant Biography Lettie Y. Conrad, PhD, Independent researcher \u0026amp; consultant and Senior Associate, Maverick Publishing Specialists, North American Editor, Learned Publishing\nLettie Conrad's Latest Blog Posts\rMeasuring Metadata Impacts: Books Discoverability in Google Scholar\rLettie Conrad, Wednesday, Jan 25, 2023\nIn MetadataBooksSearch Leave a comment\nThis blog post is from Lettie Conrad and Michelle Urberg, cross-posted from The Scholarly Kitchen. As sponsors of this project, we at Crossref are excited to see this work shared out. The scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce.\nRead all of Lettie Conrad's posts \u0026raquo;\r", "headings": ["Lettie Conrad","Biography","Lettie Conrad's Latest Blog Posts","Measuring Metadata Impacts: Books Discoverability in Google Scholar"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/lindsay-russell/", "title": "Lindsay Russell", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Lindsay Russell HR Manager Biography Lindsay has moved on from Crossref. Lindsay joined Crossref in 2013 and was the HR Manager, where she was actively involved with the management of HR operations within the organization. When Lindsay wasn\u0026rsquo;t working, she enjoyed yoga, watching football, and spending time with her family.\nLindsay Russell's Latest Blog Posts\rMore new faces at Crossref\rLindsay Russell, Thursday, Oct 21, 2021\nIn Member BriefingCommunityStaff Leave a comment", "content": "\rLindsay Russell HR Manager Biography Lindsay has moved on from Crossref. Lindsay joined Crossref in 2013 and was the HR Manager, where she was actively involved with the management of HR operations within the organization. When Lindsay wasn\u0026rsquo;t working, she enjoyed yoga, watching football, and spending time with her family.\nLindsay Russell's Latest Blog Posts\rMore new faces at Crossref\rLindsay Russell, Thursday, Oct 21, 2021\nIn Member BriefingCommunityStaff Leave a comment\nLooking at the road ahead, we’ve set some ambitious goals for ourselves and continue to see new members join from around the world, now numbering 16,000.
To help achieve all that we plan in the years to come, we’ve grown our teams quite a bit over the last couple of years, and we are happy to welcome Carlos, Evans, Fabienne, Mike, Panos, and Patrick.\nRead all of Lindsay Russell's posts \u0026raquo;\r", "headings": ["Lindsay Russell","Biography","Lindsay Russell's Latest Blog Posts","More new faces at Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/lisa-hart-martin/", "title": "Lisa Hart Martin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Lisa Hart Martin Director of Finance \u0026amp; Operations Biography Lisa Hart Martin was one of the organization\u0026rsquo;s first employees, joining in 2000. She oversaw Finance, Accounting, Administration, and HR for both the US and UK offices. She served as Crossref Board Secretary, managing Governance and keeping up to date on international matters. Outside of Crossref, Lisa served the community as a member of the Board of Directors of NESAE and ASAE’s Finance and Business Operations Council, chaired SSP’s Audit Committee, and was a member of ORCID’s Audit Committee.", "content": "\rLisa Hart Martin Director of Finance \u0026amp; Operations Biography Lisa Hart Martin was one of the organization\u0026rsquo;s first employees, joining in 2000. She oversaw Finance, Accounting, Administration, and HR for both the US and UK offices. She served as Crossref Board Secretary, managing Governance and keeping up to date on international matters. Outside of Crossref, Lisa served the community as a member of the Board of Directors of NESAE and ASAE’s Finance and Business Operations Council, chaired SSP’s Audit Committee, and was a member of ORCID’s Audit Committee. Beyond her professional commitments, she continued to enjoy advocating for children and young adults with disabilities, blending herbal teas and medicine, and spending time with her family and pets.\nLisa Hart Martin's Latest Blog Posts\r2019 election slate\rLisa Hart Martin, Friday, Aug 23, 2019\nIn BoardMember BriefingGovernanceElectionsCrossref LiveAnnual Meeting Leave a comment\n2019 Board Election The annual board election is a very important event for Crossref and its members. The board of directors, comprising 16 member organizations, governs Crossref, sets its strategic direction and makes sure that we fulfill our mission. Our members elect the board - it\u0026rsquo;s \u0026ldquo;one member one vote\u0026rdquo; - and we like to see as many members as possible voting. We are very pleased to announce the 2019 election slate - we have a great set of candidates and an update to the ByLaws addressing the composition of the slate to ensure that the board continues to be representative of our membership.\nExpress your interest in serving on the Crossref board\rLisa Hart Martin, Wednesday, Apr 24, 2019\nIn BoardMember BriefingGovernanceElection Leave a comment\nThe Crossref Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2019 election. The board\u0026rsquo;s purpose is to provide strategic and financial oversight and counsel to the Executive Director and the staff leadership team, with the key responsibilities being: Setting the strategic direction for the organization; Providing financial oversight; and Approving new policies and services.
The Board tends to review the strategic direction every few years, taking a landscape view of the scholarly communications community and trends that may affect Crossref\u0026rsquo;s mission.\nUpdates to our by-laws\rLisa Hart Martin, Thursday, Nov 29, 2018\nIn BoardGovernanceElection Leave a comment\nGood governance is important and something that Crossref thinks about regularly, so the board frequently discusses the topic, and this year even more so. At the November 2017 meeting there was a motion passed to create an ad-hoc Governance Committee to develop a set of governance-related questions/recommendations. The Committee has met regularly this year and the following questions are under deliberation regarding term limits, role of the Nominating Committee, implications of contested elections, and more.\n2018 election slate\rLisa Hart Martin, Friday, Aug 17, 2018\nIn BoardMember BriefingGovernanceElectionCrossref LIVEAnnual Meeting Leave a comment\nWith Crossref developing and extending its services for members and other constituents at a rapid pace, it’s an exciting time to be on our board. We received 26 expressions of interest this year, so it seems our members are also excited about what they could help us achieve.\nDo you want to be on our Board?\rLisa Hart Martin, Wednesday, Apr 18, 2018\nIn BoardMember BriefingGovernanceElection Leave a comment\nDo you want to effect change for the scholarly community?\nThe Crossref Nominating Committee is inviting expressions of interest to serve on the Board as it begins its consideration of a slate for the November 2018 election.\nRead all of Lisa Hart Martin's posts \u0026raquo;\r", "headings": ["Lisa Hart Martin","Biography","Lisa Hart Martin's Latest Blog Posts","2019 election slate","Express your interest in serving on the Crossref board","Updates to our by-laws","2018 election slate","Do you want to be on our Board?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/lucy-ofiesh/", "title": "Lucy Ofiesh", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Lucy Ofiesh Chief Operating Officer Biography Lucy joined Crossref in 2019 as the Director of Finance and Operations, becoming our COO in 2024 and expanding the group to include technology and data science. Lucy\u0026rsquo;s background is in nonprofit management, with a focus on organizational growth, strategic planning, financial management, and building high functioning teams. Prior to joining Crossref, she was the Chief Operating Officer for the Center for Open Science, overseeing finance, operations, and tech.", "content": "\rLucy Ofiesh Chief Operating Officer Biography Lucy joined Crossref in 2019 as the Director of Finance and Operations, becoming our COO in 2024 and expanding the group to include technology and data science. Lucy\u0026rsquo;s background is in nonprofit management, with a focus on organizational growth, strategic planning, financial management, and building high functioning teams. Prior to joining Crossref, she was the Chief Operating Officer for the Center for Open Science, overseeing finance, operations, and tech. Before entering the scholarly infrastructure space, Lucy led operations for museums in New York.
She lives in Charlottesville, Virginia where she can be found corralling two young boys.\nX @lucyofiesh Lucy Ofiesh's Latest Blog Posts\rA progress update and a renewed commitment to community\rGinny Hendricks, Thursday, Dec 12, 2024\nIn ProgramsStrategyProduct Leave a comment\nLooking back over 2024, we wanted to reflect on where we are in meeting our goals, and report on the progress and plans that affect you - our community of 21,000 organisational members as well as the vast number of research initiatives and scientific bodies that rely on Crossref metadata. In this post, we will give an update on our roadmap, including what is completed, underway, and up next, and a bit about what\u0026rsquo;s paused and why.\n2024 POSI audit\rLucy Ofiesh, Saturday, Dec 7, 2024\nIn GovernanceSustainabilityPOSI Leave a comment\nBackground The Principles of Open Scholarly Infrastructure (POSI) provides a set of guidelines for operating open infrastructure in service to the scholarly community. It sets out 16 points to ensure that the infrastructure on which the scholarly and research communities rely is openly governed, sustainable, and replicable. Each POSI adopter regularly reviews progress, conducts periodic audits, and self-reports how they’re working towards each of the principles. In 2020, Crossref’s board voted to adopt the Principles of Open Scholarly Infrastructure, and we completed our first self-audit.\nSummary of the environmental impact of Crossref\rEd Pentz, Thursday, Dec 5, 2024\nIn CommunityEnvironment Leave a comment\nIn June 2022, we wrote a blog post “Rethinking staff travel, meetings, and events” outlining our new approach to staff travel, meetings, and events with the goal of not going back to ‘normal’ after the pandemic. We took into account three key areas: The environment and climate change Inclusion Work/life balance We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!\nUpdate on the Resourcing Crossref for Future Sustainability research\rKornelia Korzec, Monday, Oct 28, 2024\nIn StrategyFees Leave a comment\nWe’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes. Crossref is in a good financial position with our current fees, which haven’t increased in 20 years. This project is seeking to future-proof our fees by: Making fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above.\nMeet the candidates and vote in our 2024 Board elections\rLucy Ofiesh, Tuesday, Sep 24, 2024\nIn BoardMember BriefingGovernanceElectionsCrossref LiveAnnual Meeting Leave a comment\nOn behalf of the Nominating Committee, I’m pleased to share the slate of candidates for the 2024 board election. Each year we do an open call for board interest. This year, the Nominating Committee received 53 submissions from members worldwide to fill four open board seats. We maintain a balanced board of 8 large member seats and 8 small member seats. 
Size is determined based on the organization\u0026rsquo;s membership tier (small members fall in the $0-$1,650 tiers and large members in the $3,900 - $50,000 tiers).\nRead all of Lucy Ofiesh's posts \u0026raquo;\r", "headings": ["Lucy Ofiesh","Biography","X","Lucy Ofiesh's Latest Blog Posts","A progress update and a renewed commitment to community","2024 POSI audit","Summary of the environmental impact of Crossref","Update on the Resourcing Crossref for Future Sustainability research","Meet the candidates and vote in our 2024 Board elections"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/luis-montilla/", "title": "Luis Montilla", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Luis Montilla Technical Community Manager Biography Luis was a researcher-turned-publisher before joining Crossref as a Technical Community Manager in 2022. He is busy educating our community about using the Crossref API, collaborating with API users, including Plus subscribers, to help them make the most of our metadata. Additionally, he partners with service integrators, such as publishing platforms, to realise opportunities to make that metadata even richer and workflows even more efficient.", "content": "\rLuis Montilla Technical Community Manager Biography Luis was a researcher-turned-publisher before joining Crossref as a Technical Community Manager in 2022. He is busy educating our community about using the Crossref API, collaborating with API users, including Plus subscribers, to help them make the most of our metadata. Additionally, he partners with service integrators, such as publishing platforms, to realise opportunities to make that metadata even richer and workflows even more efficient.\nX @luismmontilla ORCID iD 0000-0002-7079-6775 Luis Montilla's Latest Blog Posts\rDrawing on the Research Nexus with Policy documents: Overton’s use of Crossref API\rLuis Montilla, Saturday, Jun 15, 2024\nIn APIsAPI Case Study Leave a comment\nUpdate 2024-07-01: This post is based on an interview with Euan Adie, founder and director of Overton. What is Overton? Overton is a big database of government policy documents, also including sources like intergovernmental organizations, think tanks, and big NGOs and in general anyone who\u0026rsquo;s trying to influence a government policy maker. What we\u0026rsquo;re interested in is basically taking all the good parts of the scholarly record and applying some of that to the policy world.\nPerspectives: Luis Montilla on making science fiction concepts a reality in the scholarly ecosystem\rLuis Montilla, Monday, Nov 20, 2023\nIn PerspectivesCommunity Leave a comment\nHello, readers! My name is Luis, and I\u0026rsquo;ve recently started a new role as the Technical Community Manager at Crossref, where I aim to bridge the gap between some of our services and our community awareness to enhance the Research Nexus. I\u0026rsquo;m excited to share my thoughts with you. My journey from research to science communications infrastructure has been a gradual transition.
As a Master's student in Biological Sciences, I often felt curious about the behind-the-scenes after a paper is submitted and published.\nRead all of Luis Montilla's posts \u0026raquo;\r", "headings": ["Luis Montilla","Biography","X","ORCID iD","Luis Montilla's Latest Blog Posts","Drawing on the Research Nexus with Policy documents: Overton’s use of Crossref API","Perspectives: Luis Montilla on making science fiction concepts a reality in the scholarly ecosystem"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/madeleine-watson/", "title": "Madeleine Watson", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Madeleine Watson Product Manager Biography Madeleine Watson was a Product Manager with Crossref until January 2018.\nX @mwats0n Madeleine Watson's Latest Blog Posts\rA transparent record of life after publication\rMadeleine Watson, Wednesday, Nov 1, 2017\nIn Event DataWikipediaTransparencyRelationshipsDataCite Leave a comment\nCrossref Event Data and the importance of understanding what lies beneath the data. Some things in life are better left a mystery. There is an argument for opaqueness when the act of full disclosure only limits your level of enjoyment: in my case, I need a complete lack of transparency to enjoy both chicken nuggets and David Lynch films.", "content": "\rMadeleine Watson Product Manager Biography Madeleine Watson was a Product Manager with Crossref until January 2018.\nX @mwats0n Madeleine Watson's Latest Blog Posts\rA transparent record of life after publication\rMadeleine Watson, Wednesday, Nov 1, 2017\nIn Event DataWikipediaTransparencyRelationshipsDataCite Leave a comment\nCrossref Event Data and the importance of understanding what lies beneath the data. Some things in life are better left a mystery. There is an argument for opaqueness when the act of full disclosure only limits your level of enjoyment: in my case, I need a complete lack of transparency to enjoy both chicken nuggets and David Lynch films. And that works for me. But metrics are not nuggets. Because in order to consume them, you really need to know how they’re made.\nPublishers, help us capture Events for your content\rMadeleine Watson, Monday, Oct 2, 2017\nIn Best PracticesCitationCollaborationDataEvent DataIdentifiers Leave a comment\nThe day I received my learner driver permit, I remember being handed three things: a plastic thermosealed reminder that age sixteen was not a good look on me; a yellow L-plate sign as flimsy as my driving ability; and a weighty ‘how to drive’ guide listing all the things that I absolutely must not, under any circumstances, even-if-it-seems-like-a-really-swell-idea-at-the-time, never, ever do.\nNow put your hands up! (for a Similarity Check update)\rMadeleine Watson, Tuesday, Jun 6, 2017\nIn Full-Text LinksMember BriefingMetadataSimilarity Check Leave a comment\nToday, I’m thinking back to 2008. A time when khaki and gladiator sandals dominated my wardrobe. The year when Obama was elected, and Madonna and Guy Ritchie parted ways. When we were given both the iPhone 3G and the Kindle, and when the effects of the global financial crisis led us to come to terms with the notion of a ‘staycation’.
In 2008 we met both Wall-E and Benjamin Button, were enthralled by the Beijing Olympics, and became addicted to Breaking Bad.\nImportant changes to Similarity Check\rMadeleine Watson, Friday, Oct 21, 2016\nIn Full-Text LinksMember BriefingMetadataSimilarity Check Leave a comment\nNew features, new indexing, new name - oh my! TL;DR The indexing of Similarity Check users’ content into the shared full-text database is about to get a lot faster. Now we need members’ assistance in helping Turnitin (the company who own and operate the iThenticate plagiarism checking tool) to transition to a new method of indexing content.\nCrossref Event Data: early preview now available\rMadeleine Watson, Monday, Apr 18, 2016\nIn CitationDataCiteCollaborationDataEvent DataIdentifiersNews ReleaseWikipedia Leave a comment\nTest out the early preview of Event Data while we continue to develop it. Share your thoughts. And be warned: we may break a few eggs from time to time!\nChicken by anbileru adaleru from The Noun Project\nWant to discover which research works are being shared, liked and commented on? What about the number of times a scholarly item is referenced? Starting today, you can whet your appetite with an early preview of the forthcoming Crossref Event Data service. We invite you to start exploring the activity of DOIs as they permeate and interact with the world after publication.\nRead all of Madeleine Watson's posts \u0026raquo;\r", "headings": ["Madeleine Watson","Biography","X","Madeleine Watson's Latest Blog Posts","A transparent record of life after publication","Publishers, help us capture Events for your content","Now put your hands up! (for a Similarity Check update)","Important changes to Similarity Check","New features, new indexing, new name - oh my!","Crossref Event Data: early preview now available"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/madhura-amdekar/", "title": "Madhura Amdekar", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Madhura Amdekar Community Engagement Manager Biography Madhura Amdekar is a Community Engagement Manager at Crossref. Her primary responsibility is to engage with scholarly editors, publishers, and editorial organizations to develop engagement programmes that help this community to leverage rich metadata for asserting the integrity of the scholarly record. Prior to joining Crossref, she was a Senior Associate at Wiley, where she was overseeing an editor support service. Madhura has a PhD.", "content": "\rMadhura Amdekar Community Engagement Manager Biography Madhura Amdekar is a Community Engagement Manager at Crossref. Her primary responsibility is to engage with scholarly editors, publishers, and editorial organizations to develop engagement programmes that help this community to leverage rich metadata for asserting the integrity of the scholarly record. Prior to joining Crossref, she was a Senior Associate at Wiley, where she was overseeing an editor support service. Madhura has a PhD in Behavioural Ecology from the Indian Institute of Science (Bengaluru, India), for which she studied the colour-change behaviour of a tropical agamid lizard. She has a special interest in research integrity and she is based in the Netherlands.
Outside of work, she enjoys reading fiction, embroidering, and traveling.\nTopics Research Integrity Editorial community X @MadAmdekar Madhura Amdekar's Latest Blog Posts\rResearch Integrity Roundtable 2024\rMartyn Rittman, Friday, Nov 15, 2024\nIn Research IntegrityCrossmark Leave a comment\nFor the third year in a row, Crossref hosted a roundtable on research integrity prior to the Frankfurt book fair. This year the event looked at Crossmark, our tool to display retractions and other post-publication updates to readers. Since the start of 2024, we have been carrying out a consultation on Crossmark, gathering feedback and input from a range of members. The roundtable discussion was a chance to check and refine some of the conclusions we’ve come to, and gather more suggestions on the way forward.\nCrossmark community consultation: What did we learn?\rMartyn Rittman, Tuesday, Jul 2, 2024\nIn CrossmarkCommunity Leave a comment\nIn the first half of this year we’ve been talking to our community about post-publication changes and Crossmark. When a piece of research is published it isn’t the end of the journey—it is read, reused, and sometimes modified. That\u0026rsquo;s why we run Crossmark, as a way to provide notifications of important changes to research made after publication. Readers can see if the research they are looking at has updates by clicking the Crossmark logo.\nIntegrity of the Scholarly Record (ISR): what do research institutions think?\rMadhura Amdekar, Thursday, May 9, 2024\nIn Research IntegrityTrustworthinessStrategy Leave a comment\nEarlier this year, we reported on the roundtable discussion event that we had organised in Frankfurt on the heels of the Frankfurt Book Fair 2023. This event was the second in the series of roundtable events that we are holding with our community to hear from you how we can all work together to preserve the integrity of the scholarly record - you can read more about insights from these events and about ISR in this series of blogs.\nISR Roundtable 2023: The future of preserving the integrity of the scholarly record together\rMadhura Amdekar, Tuesday, Feb 6, 2024\nIn Research IntegrityTrustworthinessStrategy Leave a comment\nMetadata about research objects and the relationships between them form the basis of the scholarly record: rich metadata has the potential to provide a richer context for scholarly output, and in particular, can provide trust signals to indicate integrity. Information on who authored a research work, who funded it, which other research works it cites, and whether it was updated, can act as signals of trustworthiness. Crossref provides foundational infrastructure to connect and preserve these records, but the creation of these records is an ongoing and complex community effort.\nPerspectives: Madhura Amdekar on meeting the community and pursuing passion for research integrity\rMadhura Amdekar, Tuesday, Dec 5, 2023\nIn PerspectivesCommunity Leave a comment\nThe second half of 2023 brought with itself a couple of big life changes for me: not only did I move to the Netherlands from India, I also started a new and exciting job at Crossref as the newest Community Engagement Manager. 
In this role, I am a part of the Community Engagement and Communications team, and my key responsibility is to engage with the global community of scholarly editors, publishers, and editorial organisations to develop sustained programs that help editors to leverage rich metadata.\nRead all of Madhura Amdekar's posts \u0026raquo;\r", "headings": ["Madhura Amdekar","Biography","Topics","X","Madhura Amdekar's Latest Blog Posts","Research Integrity Roundtable 2024","Crossmark community consultation: What did we learn?","Integrity of the Scholarly Record (ISR): what do research institutions think?","ISR Roundtable 2023: The future of preserving the integrity of the scholarly record together","Perspectives: Madhura Amdekar on meeting the community and pursuing passion for research integrity"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/maria-sullivan/", "title": "Maria Sullivan", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Maria Sullivan Supervising Accountant Biography Maria Sullivan joined Crossref in April of 2016 as a Staff Accountant where her primary responsibilities are Accounts Payable, Expensify, and Zendesk. Maria brings 10+ years of accounting and finance experience to Crossref. Maria is fluent in Portuguese and English. In her spare time, she enjoys travel, motorcycle rides, spending time with her children, granddaughters and fur babies. She volunteers bimonthly with her four fur babies at a residential home for physically/mentally challenged adults.", "content": "\rMaria Sullivan Supervising Accountant Biography Maria Sullivan joined Crossref in April of 2016 as a Staff Accountant where her primary responsibilities are Accounts Payable, Expensify, and Zendesk. Maria brings 10+ years of accounting and finance experience to Crossref. Maria is fluent in Portuguese and English. In her spare time, she enjoys travel, motorcycle rides, spending time with her children, granddaughters and fur babies. She volunteers bimonthly with her four fur babies at a residential home for physically/mentally challenged adults.\n", "headings": ["Maria Sullivan","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/mark-woodhall/", "title": "Mark Woodhall", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "\rMark Woodhall Senior Software Developer Biography Mark has moved on from Crossref. Mark joined Crossref as part of the technical team in June 2020. Prior to working at Crossref, he had built up a range of experience across many industries and technical stacks. Outside of work, he enjoyed keeping up with the latest tech, occasional gaming, and spending time with his family.\nX @markwoodhall ", "content": "\rMark Woodhall Senior Software Developer Biography Mark has moved on from Crossref. Mark joined Crossref as part of the technical team in June 2020. Prior to working at Crossref, he had built up a range of experience across many industries and technical stacks. 
Outside of work, he enjoyed keeping up with the latest tech, occasional gaming, and spending time with his family.\nX @markwoodhall ", "headings": ["Mark Woodhall","Biography","X"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/martin-eve/", "title": "Martin Eve", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Martin Eve Principal R\u0026amp;D Developer Biography Martin Paul Eve was Principal R\u0026amp;D Developer at Crossref for 18 months until 2024, working on experimental research and development projects. Martin is also the Professor of Literature, Technology and Publishing at the University of London\u0026rsquo;s Birkbeck College.\nX @martin_eve Mastodon @mpe@ravenation.club ORCID iD 0000-0002-5589-8511 Martin Eve's Latest Blog Posts\rTesting times\rMartin Eve, Wednesday, Apr 3, 2024\nIn ToolsAuthorizationLabs Leave a comment", "content": "\rMartin Eve Principal R\u0026amp;D Developer Biography Martin Paul Eve was Principal R\u0026amp;D Developer at Crossref for 18 months until 2024, working on experimental research and development projects. Martin is also the Professor of Literature, Technology and Publishing at the University of London\u0026rsquo;s Birkbeck College.\nX @martin_eve Mastodon @mpe@ravenation.club ORCID iD 0000-0002-5589-8511 Martin Eve's Latest Blog Posts\rTesting times\rMartin Eve, Wednesday, Apr 3, 2024\nIn ToolsAuthorizationLabs Leave a comment\nOne of the challenges that we face in Labs and Research at Crossref is that, as we prototype various tools, we need the community to be able to test them. Often, this involves asking for deposit to a different endpoint or changing the way that a platform works to incorporate a prototype. The problem is that our community is hugely varied in its technical capacity and level of ability when it comes to modifying their platform.\nCredential Checking at Crossref\rMartin Eve, Friday, Mar 15, 2024\nIn ToolsAuthorizationLabs Leave a comment\nIt turns out that one of the things that is really difficult at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This is the result of many legacy systems storing various mappings in various different software components, from our Content System through to our CRM. To this end, I wrote a basic application, credcheck, that will allow you to test a Crossref credential against an API.\nWhat do we know about DOIs\rMartin Eve, Thursday, Feb 29, 2024\nIn CommunityStaff Leave a comment\nCrossref holds metadata for approximately 150 million scholarly artifacts. These range from peer reviewed journal articles through to scholarly books through to scientific blog posts.
In fact, amid such heterogeneity, the only singular factor that unites such items is that they have been assigned a digital object identifier (DOI): a unique identification string that can be used to resolve to a resource pertaining to said metadata (often, but not always, a copy of the work identified by the metadata).\nRead all of Martin Eve's posts \u0026raquo;\r", "headings": ["Martin Eve","Biography","X","Mastodon","ORCID iD","Martin Eve's Latest Blog Posts","Testing times","Credential Checking at Crossref","What do we know about DOIs"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/martyn-rittman/", "title": "Martyn Rittman", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Martyn Rittman Program Lead Biography Martyn joined Crossref in June 2020 as Product Manager. Prior to that he spent seven years at open access publisher MDPI in various roles, including roles in production, editorial, and author services. Before moving into publishing Martyn was a researcher, developing instrumentation in life sciences, material sciences, and analytical chemistry. He completed his PhD at the University of Warwick before postdoc positions at the University of Reading and the University of Freiburg.", "content": "\rMartyn Rittman Program Lead Biography Martyn joined Crossref in June 2020 as Product Manager. Prior to that he spent seven years at open access publisher MDPI in various roles, including roles in production, editorial, and author services. Before moving into publishing Martyn was a researcher, developing instrumentation in life sciences, material sciences, and analytical chemistry. He completed his PhD at the University of Warwick before postdoc positions at the University of Reading and the University of Freiburg. In 2024 Martyn took on the expanded role of Program Lead, responsible for all our activities that fall under the Research Nexus, such as metadata relationships, matching, and APIs. Outside of work you can find him spending time with his family, making music, or exploring the Black Forest by foot or by bike.\nX @martynrittman Martyn Rittman's Latest Blog Posts\rRetraction Watch retractions now in the Crossref API\rMartyn Rittman, Wednesday, Jan 29, 2025\nIn REST APIRetraction WatchResearch Integrity Leave a comment\nRetractions and corrections from Retraction Watch are now available in Crossref’s REST API. Back in September 2023, we announced the acquisition of the Retraction Watch database with an ongoing shared service. Since then, they have sent us regular updates, which are publicly available as a CSV file. Our aim has always been to better integrate these retractions with our existing metadata, and today we’ve met that goal.
This is the first time we have supplemented our metadata with a third-party data source.\nResearch Integrity Roundtable 2024\rMartyn Rittman, Friday, Nov 15, 2024\nIn Research IntegrityCrossmark Leave a comment\nFor the third year in a row, Crossref hosted a roundtable on research integrity prior to the Frankfurt book fair. This year the event looked at Crossmark, our tool to display retractions and other post-publication updates to readers. Since the start of 2024, we have been carrying out a consultation on Crossmark, gathering feedback and input from a range of members. The roundtable discussion was a chance to check and refine some of the conclusions we’ve come to, and gather more suggestions on the way forward.\nCrossmark community consultation: What did we learn?\rMartyn Rittman, Tuesday, Jul 2, 2024\nIn CrossmarkCommunity Leave a comment\nIn the first half of this year we’ve been talking to our community about post-publication changes and Crossmark. When a piece of research is published it isn’t the end of the journey—it is read, reused, and sometimes modified. That\u0026rsquo;s why we run Crossmark, as a way to provide notifications of important changes to research made after publication. Readers can see if the research they are looking at has updates by clicking the Crossmark logo.\nBetter preprint metadata through community participation\rMartyn Rittman, Wednesday, Nov 9, 2022\nIn PreprintsMetadataCommunity Leave a comment\nPreprints have become an important tool for rapidly communicating and iterating on research outputs. There is now a range of preprint servers, some subject-specific, some based on a particular geographical area, and others linked to publishers or individual journals in addition to generalist platforms. In 2016 the Crossref schema started to support preprints and since then the number of metadata records has grown to around 16,000 new preprint DOIs per month.\nAmendments to membership terms to open reference distribution and include UK jurisdiction\rGinny Hendricks, Monday, Apr 4, 2022\nIn ReferencesMetadataBoard Leave a comment\nTl;dr Forthcoming amendments to Crossref\u0026rsquo;s membership terms will include: Removal of \u0026lsquo;reference distribution preference\u0026rsquo; policy: all references in Crossref will be treated as open metadata from 3rd June 2022. An addition to sanctions jurisdictions: the United Kingdom will be added to sanctions jurisdictions that Crossref needs to comply with. Sponsors and members have been emailed today with the 60-day notice needed for changes in terms. 
Reference distribution preferences In 2017, when we consolidated our metadata services under Metadata Plus, we made it possible for members to set a preference for the distribution of references to Open, Limited, or Closed.\nRead all of Martyn Rittman's posts \u0026raquo;\r", "headings": ["Martyn Rittman","Biography","X","Martyn Rittman's Latest Blog Posts","Retraction Watch retractions now in the Crossref API","Research Integrity Roundtable 2024","Crossmark community consultation: What did we learn?","Better preprint metadata through community participation","Amendments to membership terms to open reference distribution and include UK jurisdiction"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/maryna-kovalyova/", "title": "Maryna Kovalyova", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Maryna Kovalyova Member Experience Manager Biography Maryna thrives on creating (digital) environments for others to connect and grow. Her affinity for numbers together with an extensive educational background, intercultural experiences, and diverse professional expertise forms the ideal link between data analysis and stakeholder management to drive business development.", "content": "\rMaryna Kovalyova Member Experience Manager Biography Maryna thrives on creating (digital) environments for others to connect and grow. Her affinity for numbers together with an extensive educational background, intercultural experiences, and diverse professional expertise forms the ideal link between data analysis and stakeholder management to drive business development.\n", "headings": ["Maryna Kovalyova","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/michelle-cancel/", "title": "Michelle Cancel", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Michelle Cancel HR Manager Biography Michelle Cancel joined Crossref in November 2022 as the Human Resources Manager and brings 10+ years of experience in different industries, including nonprofits. Michelle has a generalist background and has overseen all HR functions, people operations, teams, and departments. She approaches all her work through two lenses; culture and DEI. As a passionate HR professional, Michelle appreciates strategy and innovation. She resides in Florida with her spouse, son, and pup.", "content": "\rMichelle Cancel HR Manager Biography Michelle Cancel joined Crossref in November 2022 as the Human Resources Manager and brings 10+ years of experience in different industries, including nonprofits. Michelle has a generalist background and has overseen all HR functions, people operations, teams, and departments. She approaches all her work through two lenses; culture and DEI. As a passionate HR professional, Michelle appreciates strategy and innovation. She resides in Florida with her spouse, son, and pup. She enjoys travel, reading, music, more music, wining/dining, spending time with family and friends, and any opportunity to spread kindness.\nMichelle Cancel's Latest Blog Posts\rWe’re hiring! New technical, community, and membership roles at Crossref\rMichelle Cancel, Friday, Apr 21, 2023\nIn JobsCommunityMembership Leave a comment\nDo you want to help make research communications better in all corners of the globe? Come and join the world of nonprofit open infrastructure and be part of improving the creation and sharing of knowledge. 
We are recruiting for three new staff positions, all new roles and all fully remote and flexible. See below for more about our ethos and what it\u0026rsquo;s like working at Crossref. 🚀 Technical Community Manager, working with our \u0026lsquo;integrators\u0026rsquo; so all repository/publishing platforms and plugins, all API users incl.\nRead all of Michelle Cancel's posts \u0026raquo;\r", "headings": ["Michelle Cancel","Biography","Michelle Cancel's Latest Blog Posts","We’re hiring! New technical, community, and membership roles at Crossref"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/michelle-urberg/", "title": "Michelle Urberg", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Michelle Urberg Independent researcher \u0026amp; consultant Biography Michelle Urberg, PhD, MSLIS, Metadata Consultant and Information Architect - Making the world better, one piece of metadata at a time.\nMichelle Urberg's Latest Blog Posts\rMeasuring Metadata Impacts: Books Discoverability in Google Scholar\rLettie Conrad, Wednesday, Jan 25, 2023\nIn MetadataBooksSearch Leave a comment\nThis blog post is from Lettie Conrad and Michelle Urberg, cross-posted from The Scholarly Kitchen. As sponsors of this project, we at Crossref are excited to see this work shared out.", "content": "\rMichelle Urberg Independent researcher \u0026amp; consultant Biography Michelle Urberg, PhD, MSLIS, Metadata Consultant and Information Architect - Making the world better, one piece of metadata at a time.\nMichelle Urberg's Latest Blog Posts\rMeasuring Metadata Impacts: Books Discoverability in Google Scholar\rLettie Conrad, Wednesday, Jan 25, 2023\nIn MetadataBooksSearch Leave a comment\nThis blog post is from Lettie Conrad and Michelle Urberg, cross-posted from The Scholarly Kitchen. As sponsors of this project, we at Crossref are excited to see this work shared out. The scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce.\nRead all of Michelle Urberg's posts \u0026raquo;\r", "headings": ["Michelle Urberg","Biography","Michelle Urberg's Latest Blog Posts","Measuring Metadata Impacts: Books Discoverability in Google Scholar"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/mike-gill/", "title": "Mike Gill", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Mike Gill Senior Software Developer Biography Mike Gill has been a software developer his whole career, working in London for the first twelve years then moving to greener pastures (literally, it was the Peak District). Mike plays bass guitar and when not holding down the low end, likes to mess around in GarageBand and iMovie making music and videos, mostly for his own amusement but sometimes for the amusement of others (not always intentionally).", "content": "\rMike Gill Senior Software Developer Biography Mike Gill has been a software developer his whole career, working in London for the first twelve years then moving to greener pastures (literally, it was the Peak District).
Mike plays bass guitar and when not holding down the low end, likes to mess around in GarageBand and iMovie making music and videos, mostly for his own amusement but sometimes for the amusement of others (not always intentionally).\n", "headings": ["Mike Gill","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/mike-yalter/", "title": "Mike Yalter", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Mike Yalter Software Developer Biography Mike Yalter was a software developer who joined the Crossref team in 2012. He was responsible for the day-to-day running of the Metadata Search and API, the Open Funder Registry related services and co-access, and he also developed for the main infrastructure of the deposit system. Mike\u0026rsquo;s previous work experience included a long-term role with IBM as a developer for their Rational brand of products.", "content": "\rMike Yalter Software Developer Biography Mike Yalter was a software developer who joined the Crossref team in 2012. He was responsible for the day-to-day running of the Metadata Search and API, the Open Funder Registry related services and co-access, and he also developed for the main infrastructure of the deposit system. Mike\u0026rsquo;s previous work experience included a long-term role with IBM as a developer for their Rational brand of products.\nMike Yalter's Latest Blog Posts\rStepping up our deposit processing game\rIsaac Farley, Monday, Mar 8, 2021\nIn Content RegistrationDOIs Leave a comment\nSome of you who have submitted content to us during the first two months of 2021 may have experienced content registration delays. We noticed; you did, too. The time between us receiving XML from members, to the content being registered with us and the DOI resolving to the correct resolution URL, is usually a matter of minutes. Some submissions take longer - for example, book registrations with large reference lists, or very large files from larger publishers can take up to 24 to 48 hours to process.\nRead all of Mike Yalter's posts \u0026raquo;\r", "headings": ["Mike Yalter","Biography","Mike Yalter's Latest Blog Posts","Stepping up our deposit processing game"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/m%C3%BCge-bakio%C4%9Flu/", "title": "Müge Bakioğlu", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Müge Bakioğlu Biography Müge works with our support team helping our members.", "content": "\rMüge Bakioğlu Biography Müge works with our support team helping our members.\n", "headings": ["Müge Bakioğlu","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/nadia-turpin/", "title": "Nadia Turpin", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Nadia Turpin Senior Software Developer Biography Nadia has a computer science background. She has worked as a developer in the Postal service in Norway, and several other private and public employers in Norway. She lives now in Spain, and enjoys mostly music, concerts, reading and taking care of her plants.", "content": "\rNadia Turpin Senior Software Developer Biography Nadia has a computer science background. She has worked as a developer in the Postal service in Norway, and several other private and public employers in Norway. 
She lives now in Spain, and enjoys mostly music, concerts, reading and taking care of her plants.\n", "headings": ["Nadia Turpin","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/oliver-mussell/", "title": "Oliver Mussell", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Oliver Mussell Site Reliability Engineer Biography Oliver has worked as a Systems Administrator since 2012 in many industries, including Medical, Energy Generation, Environmental, Government, and Education. Most of his work has been maintaining the critical server infrastructure underpinning these industries. Most recently, maintaining the servers running around 5,000 websites for UK schools and various services to allow schools to communicate with parents. In his free time, he is interested in playing video games, going hiking and camping, and learning about cryptography.", "content": "\rOliver Mussell Site Reliability Engineer Biography Oliver has worked as a Systems Administrator since 2012 in many industries, including Medical, Energy Generation, Environmental, Government, and Education. Most of his work has been maintaining the critical server infrastructure underpinning these industries. Most recently, maintaining the servers running around 5,000 websites for UK schools and various services to allow schools to communicate with parents. In his free time, he is interested in playing video games, going hiking and camping, and learning about cryptography.\n", "headings": ["Oliver Mussell","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/panos-pandis/", "title": "Panos Pandis", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Panos Pandis Senior Software Developer Biography Panos joined Crossref in 2020. He loves supporting research through his work and in the past has worked as a developer for universities and research facilities. When not coding for work, he is interested in creative programming, interactivity and prototyping with open software and hardware. He enjoys cooking, cycling, sailing and all things outdoors.", "content": "\rPanos Pandis Senior Software Developer Biography Panos joined Crossref in 2020. He loves supporting research through his work and in the past has worked as a developer for universities and research facilities. When not coding for work, he is interested in creative programming, interactivity and prototyping with open software and hardware. He enjoys cooking, cycling, sailing and all things outdoors.\n", "headings": ["Panos Pandis","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/patience-mbum/", "title": "Patience Mbum", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Patience Mbum Finance Clerk Biography Patience joined Crossref in December 2023 as a member of the Finance team. She is passionate about seeing organizations thrive through informed financial decision-making. In her spare time, she loves to travel, spend time with family, educating people on personal finance, and volunteering to improve girl child education in Africa. Based in Lagos, Nigeria, Patience works remotely.", "content": "\rPatience Mbum Finance Clerk Biography Patience joined Crossref in December 2023 as a member of the Finance team. She is passionate about seeing organizations thrive through informed financial decision-making. 
In her spare time, she loves to travel, spend time with family, educating people on personal finance, and volunteering to improve girl child education in Africa. Based in Lagos, Nigeria, Patience works remotely.\n", "headings": ["Patience Mbum","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/patricia-feeney/", "title": "Patricia Feeney", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Patricia Feeney Head of Metadata Biography Patricia\u0026rsquo;s role as Head of Metadata was created in 2018 to bring together all aspects of metadata, such as our strategy and overall vision, review and introduction of new record types, best practice around inputs (Content Registration) as well as outputs (representations through our APIs), and consulting with the community about metadata. During her 10 years at Crossref she’s helped thousands of publishers understand how to record and distribute metadata for millions of scholarly items.", "content": "\rPatricia Feeney Head of Metadata Biography Patricia\u0026rsquo;s role as Head of Metadata was created in 2018 to bring together all aspects of metadata, such as our strategy and overall vision, review and introduction of new record types, best practice around inputs (Content Registration) as well as outputs (representations through our APIs), and consulting with the community about metadata. During her 10 years at Crossref she’s helped thousands of publishers understand how to record and distribute metadata for millions of scholarly items. She’s also worked in various scholarly publishing roles and as a systems librarian and cataloger.\nTopics Content Registration metadata schemas XML JSON best practice X @SchemaSchemer ORCID iD 0000-0002-4011-3590 Patricia Feeney's Latest Blog Posts\rMetadata schema development plans\rPatricia Feeney, Monday, Jul 22, 2024\nIn Metadata Leave a comment\nIt’s been a while, here’s a metadata update and request for feedback In Spring 2023 we sent out a survey to our community with a goal of assessing what our priorities for metadata development should be - what projects are our community ready to support? Where is the greatest need? What are the roadblocks? The intention was to help prioritize our metadata development work. There’s a lot we want to do, a lot our community needs from us, but we really want to make sure we’re focusing on the projects that will have the most immediate impact for now.\nSome rip-RORing news for affiliation metadata\rGinny Hendricks, Monday, Jul 26, 2021\nIn AffiliationsSchemaRORMetadata Leave a comment\nWe’ve just added to our input schema the ability to include affiliation information using ROR identifiers. Members who register content using XML can now include ROR IDs, and we’ll add the capability to our manual content registration form, participation reports, and metadata retrieval APIs in the near future. And we are inviting members to a Crossref/ROR webinar on 29th September at 3pm UTC. The background We’ve been working on the Research Organization Registry (ROR) as a community initiative for the last few years.\nYou’ve had your say, now what? Next steps for schema changes\rPatricia Feeney, Thursday, Apr 2, 2020\nIn MetadataSchemaContent RegistrationMember Briefing Leave a comment\nIt seems like ages ago, particularly given recent events, but we had our first public request for feedback on proposed schema updates in December and January. 
The feedback we received indicated two big things: we’re on the right track, and you want us to go further. This update has some significant but important changes to contributors, but is otherwise a fairly moderate update. The feedback was mostly supportive, with a fair number of helpful suggestions about details.\nCrossref metadata for bibliometrics\rGinny Hendricks, Friday, Feb 21, 2020\nIn MetadataBibliometricsCitation DataAPIsAPI Case Study Leave a comment\nOur paper, Crossref: the sustainable source of community-owned scholarly metadata, was recently published in Quantitative Science Studies (MIT Press). The paper describes the scholarly metadata collected and made available by Crossref, as well as its importance in the scholarly research ecosystem.\nProposed schema changes - have your say\rPatricia Feeney, Wednesday, Dec 4, 2019\nIn MetadataSchemaContent RegistrationMember Briefing Leave a comment\nThe first version of our metadata input schema (a DTD, to be specific) was created in 1999 to capture basic bibliographic information and facilitate matching DOIs to citations. Over the past 20 years the bibliographic metadata we collect has deepened, and we’ve expanded our schema to include funding information, license, updates, relations, and other metadata. Our schema isn’t as venerable as a MARC record or as comprehensive as JATS, but it’s served us well.\nRead all of Patricia Feeney's posts \u0026raquo;\r", "headings": ["Patricia Feeney","Biography","Topics","X","ORCID iD","Patricia Feeney's Latest Blog Posts","Metadata schema development plans","Some rip-RORing news for affiliation metadata","You’ve had your say, now what? Next steps for schema changes","Crossref metadata for bibliometrics","Proposed schema changes - have your say"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/patrick-polischuk/", "title": "Patrick Polischuk", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Patrick Polischuk Product Manager Biography Patrick joined Crossref in 2018 and worked there until 2024, managing the REST API for metadata consumers. Previously he worked as a Senior Product Manager at PLOS, working on everything from manuscript submission systems to journal publishing platforms. Before making the jump to scholarly publishing Patrick worked on emerging technology policy in Washington, DC, where he earned an MA in International Science and Technology Policy from the George Washington University.", "content": "\rPatrick Polischuk Product Manager Biography Patrick joined Crossref in 2018 and worked there until 2024, managing the REST API for metadata consumers. Previously he worked as a Senior Product Manager at PLOS, working on everything from manuscript submission systems to journal publishing platforms. Before making the jump to scholarly publishing Patrick worked on emerging technology policy in Washington, DC, where he earned an MA in International Science and Technology Policy from the George Washington University. In his spare time Patrick enjoys growing vegetables, backpacking in the mountains, and petting cats.\nPatrick Polischuk's Latest Blog Posts\rRebalancing our REST API traffic\rStewart Houten, Tuesday, Jun 4, 2024\nIn APIInfrastructure Leave a comment\nSince we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service.
It is the result of 20,000 organisations around the world that have worked for many years to curate and share metadata about their various resources, from research grants to research articles and other component inputs and outputs of research. The REST API is relied on by a large part of the research information community and beyond, seeing around 1.\n2024 public data file now available, featuring new experimental formats\rPatrick Polischuk, Tuesday, May 14, 2024\nIn MetadataCommunityAPIs Leave a comment\nThis year’s public data file is now available, featuring over 156 million metadata records deposited with Crossref through the end of April 2024 from over 19,000 members. A full breakdown of Crossref metadata statistics is available here. Like last year, you can download all of these records in one go via Academic Torrents or directly from Amazon S3 via the “requester pays” method. Download the file: The torrent download can be initiated here.\nSubject codes, incomplete and unreliable, have got to go\rPatrick Polischuk, Wednesday, Mar 13, 2024\nIn MetadataAPIs Leave a comment\nSubject classifications have been available via the REST API for many years but have not been complete or reliable from the start and will soon be deprecated. The subject metadata element was born out of a Labs experiment intended to enrich the metadata returned via Crossref Metadata Search with All Science Journal Classification codes from Scopus. This feature was developed when the REST API was still fairly new, and we now recognize that the initial implementation worked its way into the service prematurely.\n2023 public data file now available with new and improved retrieval options\rPatrick Polischuk, Tuesday, May 2, 2023\nIn MetadataCommunityAPIs Leave a comment\nWe have some exciting news for fans of big batches of metadata: this year’s public data file is now available. Like in years past, we’ve wrapped up all of our metadata records into a single download for those who want to get started using all Crossref metadata records. We’ve once again made this year’s public data file available via Academic Torrents, and in response to some feedback we’ve received from public data file users, we’ve taken a few additional steps to make accessing this 185 GB file a little easier.\n2022 public data file of more than 134 million metadata records now available\rPatrick Polischuk, Friday, May 13, 2022\nIn MetadataCommunityAPIs Leave a comment\nIn 2020 we released our first public data file, something we’ve turned into an annual affair supporting our commitment to the Principles of Open Scholarly Infrastructure (POSI). We’ve just posted the 2022 file, which can now be downloaded via torrent like in years past. We aim to publish these in the first quarter of each year, though as you may notice, we’re a little behind our intended schedule.
The reason for this delay was that we wanted to make critical new metadata fields available, including resource URLs and titles with markup.\nRead all of Patrick Polischuk's posts \u0026raquo;\r", "headings": ["Patrick Polischuk","Biography","Patrick Polischuk's Latest Blog Posts","Rebalancing our REST API traffic","2024 public data file now available, featuring new experimental formats","Subject codes, incomplete and unreliable, have got to go","2023 public data file now available with new and improved retrieval options","2022 public data file of more than 134 million metadata records now available"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/patrick-vale/", "title": "Patrick Vale", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Patrick Vale Senior Front End Developer Biography Patrick has worked on a broad range of technologies, but is particularly fascinated by interfaces and how people relate to them. Using the latest techniques, he can sometimes be found talking about \u0026lsquo;surfaces\u0026rsquo;. When not building things to help people get on with their day, he can be found growing plants, cycling with his family or making music.", "content": "\rPatrick Vale Senior Front End Developer Biography Patrick has worked on a broad range of technologies, but is particularly fascinated by interfaces and how people relate to them. Using the latest techniques, he can sometimes be found talking about \u0026lsquo;surfaces\u0026rsquo;. When not building things to help people get on with their day, he can be found growing plants, cycling with his family or making music.\n", "headings": ["Patrick Vale","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/paul-davis/", "title": "Paul Davis", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Paul Davis Tech Support \u0026amp; R\u0026amp;D Analyst Biography Paul Davis helps members and users navigate all things metadata. He joined in May 2017 and in his spare time he enjoys playing sport as regularly as possible, despite the body not healing as quickly as it once used to. He enjoys spending time with his family and has a passion for all things food.\nPaul Davis's Latest Blog Posts\rSolving your technical support questions in a snap!", "content": "\rPaul Davis Tech Support \u0026amp; R\u0026amp;D Analyst Biography Paul Davis helps members and users navigate all things metadata. He joined in May 2017 and in his spare time he enjoys playing sport as regularly as possible, despite the body not healing as quickly as it once used to. He enjoys spending time with his family and has a passion for all things food.\nPaul Davis's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024\nIn Content RegistrationOpen SupportReportsReferencesPersistenceResearch Nexus Leave a comment\nMy name is Isaac Farley, Crossref Technical Support Manager. We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. 
We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post. We’d love to hear from you!\nFlies in your metadata (ointment)\rIsaac Farley, Monday, Jul 25, 2022\nIn MetadataContent RegistrationResearch Nexus Leave a comment\nQuality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, these create problems that get compounded down the line. No wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages we receive into the technical support team (we encourage you to continue to report these metadata errors). We make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services - thus, we all want it to be accurate.\nMemoirs of a DOI detective...it’s error-mentary dear members\rPaul Davis, Monday, Apr 27, 2020\nIn Content RegistrationIdentifiersMetadata Leave a comment\nHello, I’m Paul Davis and I’ve been part of the Crossref support team since May 2017. In that time I’ve become more adept as a DOI detective, helping our members work out whodunnit when it comes to submission errors.\nIf you have ever received one of our error messages after you have submitted metadata to us, you may know that some are helpful and others are, well, difficult to decode. I\u0026rsquo;m here to help you to become your own DOI detective.\nRead all of Paul Davis's posts \u0026raquo;\r", "headings": ["Paul Davis","Biography","Paul Davis's Latest Blog Posts","Solving your technical support questions in a snap!","Flies in your metadata (ointment)","Memoirs of a DOI detective...it’s error-mentary dear members"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/paula-graham-dwyer/", "title": "Paula Graham-Dwyer", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Paula Graham-Dwyer Controller Biography Paula has moved on from Crossref. Paula joined Crossref as Controller in 2011, after working for the organization for ten years as a financial consultant. She was responsible for maintaining the accuracy and integrity of the financial reporting and financial systems of Crossref. Prior to her financial consulting, Paula had gained extensive accounting and tax experience through her work in public accounting, as well as holding previous controller positions in both the private and not-for-profit sectors.", "content": "\rPaula Graham-Dwyer Controller Biography Paula has moved on from Crossref. Paula joined Crossref as Controller in 2011, after working for the organization for ten years as a financial consultant. She was responsible for maintaining the accuracy and integrity of the financial reporting and financial systems of Crossref. Prior to her financial consulting, Paula had gained extensive accounting and tax experience through her work in public accounting, as well as holding previous controller positions in both the private and not-for-profit sectors. 
When she wasn\u0026rsquo;t analyzing numbers, Paula enjoyed reading, baking, and spending time with her family and dogs.\n", "headings": ["Paula Graham-Dwyer","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/rachael-lammey/", "title": "Rachael Lammey", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Rachael Lammey Director of Product Biography For 12 years, Rachael held several roles at Crossref before leaving to further her career in February 2023. Most recently, as Director of Product, she led Crossref\u0026rsquo;s product team, consulting with members, users, and other open scholarly infrastructure organisations to help focus the organisation\u0026rsquo;s priorities and deliver on our ambitious roadmap. Rachael worked her way up through editorial groups at a scholarly publisher before joining Crossref as a Product Manager in 2012.", "content": "\rRachael Lammey Director of Product Biography For 12 years, Rachael held several roles at Crossref before leaving to further her career in February 2023. Most recently, as Director of Product, she led Crossref\u0026rsquo;s product team, consulting with members, users, and other open scholarly infrastructure organisations to help focus the organisation\u0026rsquo;s priorities and deliver on our ambitious roadmap. Rachael worked her way up through editorial groups at a scholarly publisher before joining Crossref as a Product Manager in 2012. In that role she introduced ORCID Auto-update and oversaw improvements to Crossmark and Similarity Check. In the Community team she initiated our important partnership with the Public Knowledge Project and other organisations, grew adoption of preprints, grants, and data citation, and a brief stint in R\u0026amp;D saw her engage with new technical and community initiatives such as crowd-sourcing retractions and updates, which led to the acquisition of the Retraction Watch database in 2023.\nTopics Metadata (and Crossref\u0026#39;s REST API) preprints text mining funding data scholarly publishing community-focused product development X @rachaellammey ORCID iD 0000-0001-5800-1434 Rachael Lammey's Latest Blog Posts\rRORing ahead: using ROR in place of the Open Funder Registry\rRachael Lammey, Tuesday, Jan 30, 2024\nIn RORMetadataOpen Funder Registry Leave a comment\nA few months ago we announced our plan to deprecate our support for the Open Funder Registry in favour of using the ROR Registry to support both affiliation and funder use cases. The feedback we’ve had from the community has been positive and supports our members, service providers and metadata users who are already starting to move in this direction.
We wanted to provide an update on work that’s underway to make this transition happen, and how you can get involved in working together with us on this.\nNews: Crossref and Retraction Watch\rGinny Hendricks, Tuesday, Sep 12, 2023\nIn Research IntegrityRetractionsResearch NexusNews Release Leave a comment\nhttps://doi.org/10.13003/c23rw1d9 Crossref acquires Retraction Watch data and opens it for the scientific community Agreement to combine and publicly distribute data about tens of thousands of retracted research papers, and grow the service together 12th September 2023 —\u0026ndash; The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource.\nOpen Funder Registry to transition into Research Organization Registry (ROR)\rAmanda French, Thursday, Sep 7, 2023\nIn Open Funder RegistryRORIdentifiersMetadata Leave a comment\nToday, we are announcing a long-term plan to deprecate the Open Funder Registry. For some time, we have understood that there is significant overlap between the Funder Registry and the Research Organization Registry (ROR), and funders and publishers have been asking us whether they should use Funder IDs or ROR IDs to identify funders. It has therefore become clear that merging the two registries will make workflows more efficient and less confusing for all concerned.\nThe more the merrier, or how more registered grants means more relationships with outputs\rDominika Tkaczyk, Wednesday, Feb 22, 2023\nIn GrantsResearch Funders Leave a comment\nOne of the main motivators for funders registering grants with Crossref is to simplify the process of research reporting with more automatic matching of research outputs to specific awards. In March 2022, we developed a simple approach for linking grants to research outputs and analysed how many such relationships could be established. In January 2023, we repeated this analysis to see how the situation changed within ten months. Interested? Read on!\nISR part three: Where does Crossref have the most impact on helping the community to assess the trustworthiness of the scholarly record?\rRachael Lammey, Monday, Oct 17, 2022\nIn Research IntegrityTrustworthinessProduct Leave a comment\nAns: metadata and services are all underpinned by POSI. Leading into a blog post with a question always makes my brain jump ahead to answer that question with the simplest answer possible. I was a nightmare English Literature student. \u0026lsquo;Was Macbeth purely a villain?\u0026rsquo; \u0026lsquo;No\u0026rsquo;. 
*leaves exam* Just like not giving one-word answers to exam questions, playing our role in the integrity of the scholarly record and helping our members enhance theirs takes thought, explanation, transparency, and work.\nRead all of Rachael Lammey's posts \u0026raquo;\r", "headings": ["Rachael Lammey","Biography","Topics","X","ORCID iD","Rachael Lammey's Latest Blog Posts","RORing ahead: using ROR in place of the Open Funder Registry","News: Crossref and Retraction Watch","Open Funder Registry to transition into Research Organization Registry (ROR)","The more the merrier, or how more registered grants means more relationships with outputs","ISR part three: Where does Crossref have the most impact on helping the community to assess the trustworthiness of the scholarly record?"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/rakesh-masih/", "title": "Rakesh Masih", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Rakesh Masih Head of Product Design Biography Rakesh has moved on from Crossref. Rakesh Masih joined Crossref in 2015, having graduated from De Montfort University with a BA Hons Degree in Multimedia Design. He brought with him a design background from a number of creative industries and 9 years of experience. People would often find him walking around the Oxford office with a sketch pad, creating new design concepts. Some said his finger might even have been made from pencils.", "content": "\rRakesh Masih Head of Product Design Biography Rakesh has moved on from Crossref. Rakesh Masih joined Crossref in 2015, having graduated from De Montfort University with a BA Hons Degree in Multimedia Design. He brought with him a design background from a number of creative industries and 9 years of experience. People would often find him walking around the Oxford office with a sketch pad, creating new design concepts. Some said his finger might even have been made from pencils. He was responsible for creating user experience designs for the product team and also designed the visual style for all new products. His motivations came from analyzing people’s behaviors and interactions to craft truly imaginative user interfaces that were elegant to experience and visually pleasing to the eye.\nTopics User Experience Visual Design ", "headings": ["Rakesh Masih","Biography","Topics"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/riley-marsh/", "title": "Riley Marsh", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Riley Marsh Metadata Manager Biography Riley joined Crossref in August 2024 as Metadata Manager. Before this, she was a children’s school librarian and later moved to the Digital Scholarship \u0026amp; Initiatives department at Tulane University Libraries. There, Riley worked on the university’s Digital Collections, Electronic Theses and Dissertations Archive, and various digitization projects involving partners across campus and beyond. Riley works remotely from New Orleans, Louisiana, where she enjoys live music, good food, and weird fashion.", "content": "\rRiley Marsh Metadata Manager Biography Riley joined Crossref in August 2024 as Metadata Manager. Before this, she was a children’s school librarian and later moved to the Digital Scholarship \u0026amp; Initiatives department at Tulane University Libraries. 
There, Riley worked on the university’s Digital Collections, Electronic Theses and Dissertations Archive, and various digitization projects involving partners across campus and beyond. Riley works remotely from New Orleans, Louisiana, where she enjoys live music, good food, and weird fashion.\nORCID iD 0009-0005-2271-4033 ", "headings": ["Riley Marsh","Biography","ORCID iD"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/robbykha-rosalien/", "title": "Robbykha Rosalien", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Robbykha Rosalien Member Support Specialist Biography Robbykha joined Crossref in April 2022 as a Member Support Specialist. She previously worked at Universitas Indonesia’s Research and Innovation Management Product Office, supporting researchers in publishing their academic journals and articles. In addition to that, she spent time as an Assistant Editor in Chief and Reviewer for Makara Journal of Health Research. She works remotely from Jakarta, Indonesia, and enjoys walking, traveling, and cooking.", "content": "\rRobbykha Rosalien Member Support Specialist Biography Robbykha joined Crossref in April 2022 as a Member Support Specialist. She previously worked at Universitas Indonesia’s Research and Innovation Management Product Office, supporting researchers in publishing their academic journals and articles. In addition to that, she spent time as an Assistant Editor in Chief and Reviewer for Makara Journal of Health Research. She works remotely from Jakarta, Indonesia, and enjoys walking, traveling, and cooking. She is also a practicing Dentist.\nORCID iD 0000-0003-0207-409X ", "headings": ["Robbykha Rosalien","Biography","ORCID iD"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/rosa-morais-clark/", "title": "Rosa Morais Clark", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Rosa Morais Clark Communications \u0026amp; Events Manager Biography Rosa serves as the Communications \u0026amp; Events Manager at Crossref. Her responsibilities encompass managing external communications, coordinating sponsorships, and creating content across multiple platforms. Additionally, she designs and executes impactful events tailored to nurture the growth of Crossref\u0026rsquo;s diverse membership and community base. Drawing from her background in administration management and sales leadership, she strives to stimulate engaging conversations and fosters collaborations within the community.", "content": "\rRosa Morais Clark Communications \u0026amp; Events Manager Biography Rosa serves as the Communications \u0026amp; Events Manager at Crossref. Her responsibilities encompass managing external communications, coordinating sponsorships, and creating content across multiple platforms. Additionally, she designs and executes impactful events tailored to nurture the growth of Crossref\u0026rsquo;s diverse membership and community base. Drawing from her background in administration management and sales leadership, she strives to stimulate engaging conversations and fosters collaborations within the community. 
In her downtime, she enjoys fine dining, photography, and the company of her family, friends, and dog.\nTopics Communications Events Sponsorships X @clarkrm1 Rosa Morais Clark's Latest Blog Posts\rA summary of our Annual Meeting\rRosa Morais Clark, Monday, Dec 9, 2024\nIn Annual MeetingMeetingsCommunityGovernance Leave a comment\nThe Crossref2024 annual meeting gathered our community for a packed agenda of updates, demos, and lively discussions on advancing our shared goals. The day was filled with insights and energy, from practical demos of Crossref’s latest API features to community reflections on the Research Nexus initiative and the Board elections. Our Board elections are always the focal point of the Annual Meeting. We want to start reflecting on the day by congratulating our newly elected board members: Katharina Rieck from Austrian Science Fund (FWF), Lisa Schiff from California Digital Library, Aaron Wood from American Psychological Association, and Amanda Ward from Taylor and Francis, who will officially join (and re-join) in January 2025.\nSummary of the environmental impact of Crossref\rEd Pentz, Thursday, Dec 5, 2024\nIn CommunityEnvironment Leave a comment\nIn June 2022, we wrote a blog post “Rethinking staff travel, meetings, and events” outlining our new approach to staff travel, meetings, and events with the goal of not going back to ‘normal’ after the pandemic. We took into account three key areas: The environment and climate change Inclusion Work/life balance We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!\nEd Pentz accepts the 2024 NISO Miles Conrad Award\rRosa Morais Clark, Tuesday, Feb 13, 2024\nIn CommunityStaffStrategyGEM Leave a comment\nGreat news to share: our Executive Director, Ed Pentz, has been selected as the 2024 recipient of the Miles Conrad Award from the USA\u0026rsquo;s National Information Standards Organization (NISO). The award is testament to an individual\u0026rsquo;s lifetime contribution to the information community, and we couldn\u0026rsquo;t be more delighted that Ed was voted to be this year\u0026rsquo;s well-deserved recipient. During the NISO Plus conference this week in Baltimore, USA, Ed accepted his award and delivered the 2024 Miles Conrad lecture, reflecting on how far open scholarly infrastructure has come, and the part he has played in this at Crossref and through numerous other collaborative initiatives.\nPerspectives: Audrey Kenni-Nemaleu on scholarly communications in Cameroon\rAudrey Kenni-Nemaleu, Thursday, Oct 5, 2023\nIn CommunityPerspectives Leave a comment\nOur Perspectives blog series highlights different members of our diverse, global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nOur Perspectives blog series highlights different members of the international Crossref community. 
We learn more about their lives and about how they came to know us and to work with us, and we hear about the academic research landscape in their country, the challenges they face, and their plans for the future.\nPerspectives: Mohamad Mostafa on scholarly communications in UAE\rMohamad Mostafa, Monday, Feb 27, 2023\nIn CommunityPerspectives Leave a comment\nOur Perspectives blog series highlights different members of our diverse, global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, the challenges they face, and their plans for the future.\nOur Perspectives blog series highlights different members of our diverse global community at Crossref. We learn more about their lives and how they came to know and work with us, and we hear insights about the scientific research landscape in their country, the challenges they face, and their plans for the future. Read all of Rosa Morais Clark's posts \u0026raquo;\r", "headings": ["Rosa Morais Clark","Biography","Topics","X","Rosa Morais Clark's Latest Blog Posts","A summary of our Annual Meeting","Summary of the environmental impact of Crossref","Ed Pentz accepts the 2024 NISO Miles Conrad Award","Perspectives: Audrey Kenni-Nemaleu on scholarly communications in Cameroon","Perspectives: Mohamad Mostafa on scholarly communications in UAE"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/ryan-mcfall/", "title": "Ryan McFall", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Ryan McFall Director of Finance Biography Ryan McFall is responsible for leading the month end close, supervising members of the finance staff and assisting with preparation of financial and management reporting. Ryan joined the Finance Team in May 2017 coming from Greencore USA, LLC where he gained a diverse background in Finance with a growing company. Ryan enjoys going to see live music, travel and experiencing new things.\nRyan McFall's Latest Blog Posts\rUpdate on the Resourcing Crossref for Future Sustainability research\rKornelia Korzec, Monday, Oct 28, 2024", "content": "\rRyan McFall Director of Finance Biography Ryan McFall is responsible for leading the month end close, supervising members of the finance staff and assisting with preparation of financial and management reporting. Ryan joined the Finance Team in May 2017 coming from Greencore USA, LLC where he gained a diverse background in Finance with a growing company. Ryan enjoys going to see live music, travel and experiencing new things.\nRyan McFall's Latest Blog Posts\rUpdate on the Resourcing Crossref for Future Sustainability research\rKornelia Korzec, Monday, Oct 28, 2024\nIn StrategyFees Leave a comment\nWe’re in year two of the Resourcing Crossref for Future Sustainability (RCFS) research. This report provides an update on progress to date, specifically on research we’ve conducted to better understand the impact of our fees and possible changes. Crossref is in a good financial position with our current fees, which haven’t increased in 20 years. 
This project is seeking to future-proof our fees by: Making fees more equitable Simplifying our complex fee schedule Rebalancing revenue sources In order to review all aspects of our fees, we’ve planned five projects to look into specific aspects of our current fees that may need to change to achieve the goals above.\nRead all of Ryan McFall's posts \u0026raquo;\r", "headings": ["Ryan McFall","Biography","Ryan McFall's Latest Blog Posts","Update on the Resourcing Crossref for Future Sustainability research"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/sally-jennings/", "title": "Sally Jennings", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Sally Jennings Member Support Specialist Biography Sally joined Crossref in August 2019 as a Member Support Specialist after many years in educational publishing as an editor and project manager with publishers including Harcourt Education, Pearson, and Oxford University Press. Before joining Crossref she spent five years in academic publishing as a Journal Manager with Elsevier. Sally lives and works in North Cornwall, where she enjoys country life with her partner and two small children, fitting in as much riding and women\u0026rsquo;s rugby as she can alongside running a small suckler beef herd.", "content": "\rSally Jennings Member Support Specialist Biography Sally joined Crossref in August 2019 as a Member Support Specialist after many years in educational publishing as an editor and project manager with publishers including Harcourt Education, Pearson, and Oxford University Press. Before joining Crossref she spent five years in academic publishing as a Journal Manager with Elsevier. Sally lives and works in North Cornwall, where she enjoys country life with her partner and two small children, fitting in as much riding and women\u0026rsquo;s rugby as she can alongside running a small suckler beef herd.\n", "headings": ["Sally Jennings","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/sara-bowman/", "title": "Sara Bowman", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Sara Bowman Program Lead Biography Sara joined Crossref in 2020 as a Product Manager. She is passionate about building technology to connect research outputs and improve scholarly communication. Prior to joining Crossref, Sara spent 6 years in various roles at the Center for Open Science, most recently as Product Manager. In 2024 Sara took on the expanded role of Program Lead, responsible for all the activities that help Crossref systems modernise and scale.", "content": "\rSara Bowman Program Lead Biography Sara joined Crossref in 2020 as a Product Manager. She is passionate about building technology to connect research outputs and improve scholarly communication. Prior to joining Crossref, Sara spent 6 years in various roles at the Center for Open Science, most recently as Product Manager. In 2024 Sara took on the expanded role of Program Lead, responsible for all the activities that help Crossref systems modernise and scale. 
When she’s not working, she can be found running, reading, cooking, and chasing a toddler.\nSara Bowman's Latest Blog Posts\rNext steps for Content Registration\rSara Bowman, Monday, May 17, 2021\nIn Content RegistrationProductCommunity Leave a comment\nUPDATE, 20 December 2021 We are delaying the Metadata Manager sunset until 6 months after release of our new content registration tool. You can expect to see the new tool in production in the first half of 2022. For more information, see this post in the Community Forum. Hi, I’m Sara, one of the Product Managers here at Crossref. I joined the team in April 2020, primarily tasked with looking after Content Registration mechanisms.\nRead all of Sara Bowman's posts \u0026raquo;\r", "headings": ["Sara Bowman","Biography","Sara Bowman's Latest Blog Posts","Next steps for Content Registration"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/shayn-smulyan/", "title": "Shayn Smulyan", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Shayn Smulyan Technical Support Specialist Biography Shayn Smulyan assists members and users in getting the right metadata in the right places. Previously, he has worked in library access services at Providence College and in technical support at EBSCO Information Services. He enjoys dystopian literature and lacto-fermentation.\nShayn Smulyan's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024\nIn Content RegistrationOpen SupportReportsReferencesPersistenceResearch Nexus Leave a comment", "content": "\rShayn Smulyan Technical Support Specialist Biography Shayn Smulyan assists members and users in getting the right metadata in the right places. Previously, he has worked in library access services at Providence College and in technical support at EBSCO Information Services. He enjoys dystopian literature and lacto-fermentation.\nShayn Smulyan's Latest Blog Posts\rSolving your technical support questions in a snap!\rIsaac Farley, Thursday, Jan 25, 2024\nIn Content RegistrationOpen SupportReportsReferencesPersistenceResearch Nexus Leave a comment\nMy name is Isaac Farley, Crossref Technical Support Manager. We’ve got a collective post here from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective to the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and other Crossref members and users. We invite you to join us there; how about asking your next question of us there? Or, simply let us know how we did with this post. We’d love to hear from you!\nFlies in your metadata (ointment)\rIsaac Farley, Monday, Jul 25, 2022\nIn MetadataContent RegistrationResearch Nexus Leave a comment\nQuality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, these create problems that get compounded down the line. No wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages we receive into the technical support team (we encourage you to continue to report these metadata errors). 
We make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services - thus, we all want it to be accurate.\nMetadata Corrections, Updates, and Additions in Metadata Manager\rShayn Smulyan, Monday, Jan 13, 2020\nIn Metadata ManagerContent RegistrationCitationsIdentifiers Leave a comment\nIt\u0026rsquo;s been a year since Metadata Manager was first launched in Beta. We\u0026rsquo;ve received a lot of helpful feedback from many Crossref members who made the switch from Web Deposit Form to Metadata Manager for their journal article registrations.\nThe most common use for Metadata Manager is to register new DOIs for newly published articles. For the most part, this is a one-time process. You enter the metadata, register your DOI, and success!\nImproved processes, and more via Metadata Manager\rShayn Smulyan, Thursday, Jan 17, 2019\nIn Metadata ManagerContent RegistrationCitationsIdentifiers Leave a comment\nHi, Crossref blog-readers. I’m Shayn, from Crossref’s support team. I’ve been fielding member questions about how to effectively deposit metadata and register content (among other things) for the past three years. In this post, I’ll take you through some of the improvements that Metadata Manager provides to those who currently use the Web Deposit form.\nRead all of Shayn Smulyan's posts \u0026raquo;\r", "headings": ["Shayn Smulyan","Biography","Shayn Smulyan's Latest Blog Posts","Solving your technical support questions in a snap!","Flies in your metadata (ointment)","Metadata Corrections, Updates, and Additions in Metadata Manager","Improved processes, and more via Metadata Manager"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/shayna-mullen/", "title": "Shayna Mullen", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Shayna Mullen Accounting Clerk Biography Shayna has moved on from Crossref. She was serving in the US Army National Guard, working as an Ammunition Accounting Specialist. In their free time, they liked to spend time with family and friends. They enjoyed summer activities such as hiking, visiting amusement parks, and going to the beach. Additionally, they enjoyed baking, extreme couponing with their mom, and pugs were their favorite dogs.", "content": "\rShayna Mullen Accounting Clerk Biography Shayna has moved on from Crossref. She was serving in the US Army National Guard, working as an Ammunition Accounting Specialist. In their free time, they liked to spend time with family and friends. They enjoyed summer activities such as hiking, visiting amusement parks, and going to the beach. Additionally, they enjoyed baking, extreme couponing with their mom, and pugs were their favorite dogs.\n", "headings": ["Shayna Mullen","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/stewart-houten/", "title": "Stewart Houten", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Stewart Houten Head of Infrastructure Services Biography Stewart has extensive experience with various frameworks, coding languages, automation development, and site monitoring. 
He leads our infrastructure services team and is primarily responsible for Crossref’s infrastructure services.\nStewart Houten's Latest Blog Posts\rRebalancing our REST API traffic\rStewart Houten, Tuesday, Jun 4, 2024\nIn APIInfrastructure Leave a comment\nSince we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service.", "content": "\rStewart Houten Head of Infrastructure Services Biography Stewart has extensive experience with various frameworks, coding languages, automation development, and site monitoring. He leads our infrastructure services team and is primarily responsible for Crossref’s infrastructure services.\nStewart Houten's Latest Blog Posts\rRebalancing our REST API traffic\rStewart Houten, Tuesday, Jun 4, 2024\nIn APIInfrastructure Leave a comment\nSince we first launched our REST API around 2013 as a Labs project, it has evolved well beyond a prototype into arguably Crossref’s most visible and valuable service. It is the result of 20,000 organisations around the world that have worked for many years to curate and share metadata about their various resources, from research grants to research articles and other component inputs and outputs of research. The REST API is relied on by a large part of the research information community and beyond, seeing around 1.\nRead all of Stewart Houten's posts \u0026raquo;\r", "headings": ["Stewart Houten","Biography","Stewart Houten's Latest Blog Posts","Rebalancing our REST API traffic"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/susan-collins/", "title": "Susan Collins", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Susan Collins Community Engagement Manager Biography Susan has been with Crossref since 2008 and is part of the Member \u0026amp; Community Outreach Team. She works closely with our global community of small publisher members, focusing on the tools and resources that address their specific needs. She also manages the Sponsor Program which aims to make membership benefits available to smaller organizations who are unable to join directly due to financial, administrative, or technical barriers.", "content": "\rSusan Collins Community Engagement Manager Biography Susan has been with Crossref since 2008 and is part of the Member \u0026amp; Community Outreach Team. She works closely with our global community of small publisher members, focusing on the tools and resources that address their specific needs. She also manages the Sponsor Program which aims to make membership benefits available to smaller organizations who are unable to join directly due to financial, administrative, or technical barriers. Susan also works closely with colleagues at Public Knowledge Project (PKP) on the development of OJS/Crossref collaborative projects.\nTopics Crossref services Content Registration, Cited-by, and Reference Linking small publishers Sponsors GEM program membership and onboarding X @collinssu Susan Collins's Latest Blog Posts\rThe GEM program - year one\rSusan Collins, Wednesday, Jan 24, 2024\nIn GEMCommunityMembershipEquity Leave a comment\nIn January 2023, we began our Global Equitable Membership (GEM) Program to provide greater membership equitability and accessibility to organisations located in the least economically advantaged countries in the world. 
Eligibility for the program is based on a member\u0026rsquo;s country; our list of countries is predominantly based on the International Development Association (IDA). Eligible members pay no membership or content registration fees. The list undergoes periodic reviews, as countries may be added or removed over time as economic situations change.\nRefocusing our Sponsors Program; a call for new Sponsors in specific countries\rSusan Collins, Monday, Feb 6, 2023\nIn SponsorsGEMEquityMembership Leave a comment\nSome small organizations who want to register metadata for their research and participate in Crossref are not able to do so due to financial, technical, or language barriers. To attempt to reduce these barriers we have developed several programs to help facilitate membership. One of the most significant\u0026mdash;and successful\u0026mdash;has been our Sponsor program. Sponsors are organizations that are generally not producing scholarly content themselves but work with or publish on behalf of groups of smaller organizations that wish to join Crossref but face barriers to do so independently.\nIntroducing our new Global Equitable Membership (GEM) program\rSusan Collins, Wednesday, Dec 7, 2022\nIn News ReleaseMembershipFeesEquity Leave a comment\nWhen Crossref began over 20 years ago, our members were primarily from the United States and Western Europe, but for several years our membership has been more global and diverse, growing to almost 18,000 organizations around the world, representing 148 countries. As we continue to grow, finding ways to help organizations participate in Crossref is an important part of our mission and approach. Our goal of creating the Research Nexus\u0026mdash;a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society\u0026mdash;can only be achieved by ensuring that participation in Crossref is accessible to all.\nRethinking staff travel, meetings, and events\rGinny Hendricks, Tuesday, Jun 7, 2022\nIn CommunityCollaborationStaff Leave a comment\nAs a distributed, global, and community-led organisation, sharing information and listening to our members both online and in person has always been integral to what we do. For many years Crossref has held both in-person and online meetings and events, which involved a fair amount of travel by our staff, board, and community. This changed drastically in March 2020, when we had to stop traveling and stop having in-person meetings and events.\nCrossref LIVE Brazil evoked vibrant Q\u0026amp;A session\rSusan Collins, Wednesday, Oct 31, 2018\nIn Crossref LIVECommunityCollaboration Leave a comment\nThere has been a steady increase in the growth of our membership in Latin America—and in Brazil in particular—over the past few years. We currently have more than 800 Brazil-based members; some as individual members, but most are sponsored by another organization. 
As part of our LIVE Local program Chuck Koscher and I traveled to meet some of these members in Goiânia and Fortaleza, where we co-hosted events with Associação Brasileira de Editores Científicos do Brasil (ABEC Brasil)—one of our largest Sponsors.\nRead all of Susan Collins's posts \u0026raquo;\r", "headings": ["Susan Collins","Biography","Topics","X","Susan Collins's Latest Blog Posts","The GEM program - year one","Refocusing our Sponsors Program; a call for new Sponsors in specific countries","Introducing our new Global Equitable Membership (GEM) program","Rethinking staff travel, meetings, and events","Crossref LIVE Brazil evoked vibrant Q\u0026amp;A session"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/tim-pickard/", "title": "Tim Pickard", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Tim Pickard Systems Support Analyst Biography Tim Pickard joined Crossref in August 2002 and brings 20+ years of experience and a calm demeanor that belies his sense of humor. Tim has overseen the growth of technologies from client server to virtual machines and cloud services.", "content": "\rTim Pickard Systems Support Analyst Biography Tim Pickard joined Crossref in August 2002 and brings 20+ years of experience and a calm demeanor that belies his sense of humor. Tim has overseen the growth of technologies from client server to virtual machines and cloud services.\n", "headings": ["Tim Pickard","Biography"] }, { "url": "https://0-www-crossref-org.libus.csd.mu.edu/people/vanessa-fairhurst/", "title": "Vanessa Fairhurst", "subtitle":"", "rank": 1, "lastmod" : "", "lastmod_ts" : 0, "section": "Our people", "tags": [], "description": "Vanessa Fairhurst Community Engagement Manager Biography Vanessa has moved on from Crossref. Vanessa Fairhurst joined Crossref in June 2017 and is based in our Oxford office. She previously worked in International Development with a focus on access to scholarly information and research in developing countries. Having lived and worked both in Asia and Europe, Vanessa has always enjoyed working cross-culturally, with a diverse network of people. Outside of the office Vanessa enjoys travel, writing, improving her Spanish and never goes anywhere without a good book!", "content": "\rVanessa Fairhurst Community Engagement Manager Biography Vanessa has moved on from Crossref. Vanessa Fairhurst joined Crossref in June 2017 and is based in our Oxford office. She previously worked in International Development with a focus on access to scholarly information and research in developing countries. Having lived and worked both in Asia and Europe, Vanessa has always enjoyed working cross-culturally, with a diverse network of people. Outside of the office Vanessa enjoys travel, writing, improving her Spanish and never goes anywhere without a good book!\nX @NessaFairhurst Vanessa Fairhurst's Latest Blog Posts\rRethinking staff travel, meetings, and events\rGinny Hendricks, Tuesday, Jun 7, 2022\nIn CommunityCollaborationStaff Leave a comment\nAs a distributed, global, and community-led organisation, sharing information and listening to our members both online and in person has always been integral to what we do. For many years Crossref has held both in-person and online meetings and events, which involved a fair amount of travel by our staff, board, and community. 
This changed drastically in March 2020, when we had to stop traveling and stop having in-person meetings and events.\nDo you want to be a Crossref Ambassador?\rVanessa Fairhurst, Thursday, Apr 14, 2022\nIn EducationAmbassadorsCommunityCollaboration Leave a comment\nA re-cap We kicked off our Ambassador Program in 2018 after consultation with our members, who told us they wanted greater support and representation in their local regions, time zones, and languages. We also recognized that our membership has grown and changed dramatically over recent years and that it is likely to continue to do so. We now have over 16,000 members across 140 countries. As we work to understand what’s to come and ensure that we are meeting the needs of such an expansive community, having trusted local contacts we can work closely with is key to ensuring we are more proactive in engaging with new audiences and supporting existing members.\nPerspectives: Bruna Erlandsson on scholarly communications in Brazil\rBruna Erlandsson, Monday, Mar 28, 2022\nIn CommunityPerspectives Leave a comment\nJoin us for the first in our Perspectives blog series. In this series of blogs, we will be meeting different members of our diverse, global community at Crossref. We learn more about their lives, how they came to know and work with us, and we hear insights about the scholarly research landscape in their country, challenges they face, and plans for the future.\nDiscuss all things metadata in our new community forum\rVanessa Fairhurst, Thursday, Feb 11, 2021\nIn CollaborationCommunity Leave a comment\nTL;DR: We have a Community Forum (yay!), you can come and join it here: community.crossref.org. Community is fundamental to us at Crossref, we wouldn’t be where we are or achieve the great things we do without the involvement of you, our diverse and engaged members and users. Crossref was founded as a collaboration of publishers with the shared goal of making links between research outputs easier, building a foundational infrastructure making research easier to find, cite, link, assess, and re-use.\nCommunity Outreach in 2020\rVanessa Fairhurst, Monday, Jun 29, 2020\nIn OutreachCommunityCrossref LIVEEducationAmbassadors Leave a comment\n2020 hasn’t been quite what any of us had imagined. The pandemic has meant big adjustments in terms of working; challenges for parents balancing childcare and professional lives; anxieties and tensions we never had before; the strain of potentially being away from co-workers, friends, and family for a prolonged period of time. Many have suffered job losses and around the world, many have sadly lost their lives to the virus.\nRead all of Vanessa Fairhurst's posts \u0026raquo;\r", "headings": ["Vanessa Fairhurst","Biography","X","Vanessa Fairhurst's Latest Blog Posts","Rethinking staff travel, meetings, and events","Do you want to be a Crossref Ambassador?","Perspectives: Bruna Erlandsson on scholarly communications in Brazil","Discuss all things metadata in our new community forum","Community Outreach in 2020"] }]