Name of the speaker: Martin Jagerhorn
Job title: Business Development Advisor
In today’s fragmented research ecosystem, a multitude of systems for data collection, storage, and exchange has evolved. Typically, the data and documents managed by these systems do not meet the FAIR principles (Findable, Accessible, Interoperable, and Reusable), making it difficult to connect them for seamless transfer and reuse of data. To support the FAIR principles and enhance scientific reproducibility, we need more connected persistent identifiers (PIDs). The FAIR Funder Workflow brings key stakeholders together to build an end-to-end workflow that showcases the outputs of research investments by leveraging connections between PIDs and metadata – all in compliance with exemplary FAIR best practices. The aim is to implement:
- An initial specification for the end-to-end workflow that supports FAIR data and policy development.
- An interface that allows funders and institutions to track connections between Data Management Plans (DMPs), grants, investigators, articles, datasets, authors, and organizations, and to display events and usage throughout the research funding lifecycle.
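The connections described above can be pictured as a graph of PIDs. The following is a minimal, illustrative sketch (not the actual FAIR Funder Workflow implementation); the DOIs and entity kinds used here are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class PIDNode:
    pid: str                              # e.g. a DOI (hypothetical values below)
    kind: str                             # "grant", "dmp", "dataset", "article", ...
    links: set = field(default_factory=set)

class PIDGraph:
    """A toy graph linking PIDs, as a funder-facing interface might traverse it."""
    def __init__(self):
        self.nodes = {}

    def add(self, pid, kind):
        self.nodes.setdefault(pid, PIDNode(pid, kind))

    def connect(self, a, b):
        # Links are bidirectional: a grant references its DMP and vice versa.
        self.nodes[a].links.add(b)
        self.nodes[b].links.add(a)

    def related(self, pid, kind):
        # All PIDs of a given kind one hop away from the starting PID.
        return {p for p in self.nodes[pid].links if self.nodes[p].kind == kind}

g = PIDGraph()
g.add("10.1234/grant-42", "grant")        # hypothetical grant DOI
g.add("10.1234/dmp-7", "dmp")             # hypothetical DMP DOI
g.add("10.5678/dataset-9", "dataset")     # hypothetical dataset DOI
g.connect("10.1234/grant-42", "10.1234/dmp-7")
g.connect("10.1234/dmp-7", "10.5678/dataset-9")
datasets_for_dmp = g.related("10.1234/dmp-7", "dataset")
```

In practice such links are not built by hand but resolved from registered metadata (for example, related-identifier fields in PID metadata records), which is what makes the end-to-end tracking possible.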
The work with the FAIR Funder Workflow aligns with many other community initiatives, like Make Data Count, machine-actionable DMPs, GO FAIR, and FORCE11, and will leverage the experience and results of several large-scale funded infrastructure projects, such as the Horizon 2020 projects ODIN, THOR, and FREYA, to integrate existing, well-proven technologies that lead to qualitative enhancements in how we implement and manage open science.
By leveraging existing infrastructure, connecting essential parts, and filling some specific gaps, we can ensure FAIR data with relatively small means. With FAIR data, we can automate reuse for a multitude of purposes, whether populating institutional repositories or reporting, and thus minimize or even eliminate administrative work for researchers. The presentation highlights one set of stakeholders, but the setup is generic and therefore relevant for the scholarly communications community at large, with the aim of stimulating others to follow a similar approach, ensuring a greater degree of automation and a more seamless flow of information throughout the research ecosystem.
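The automated reuse mentioned above amounts to mapping already-registered FAIR metadata into whatever form a downstream system needs, instead of asking researchers to re-enter it. A minimal sketch, assuming a simplified DataCite-style metadata record (the field names and values here are illustrative, not the exact schema):

```python
def to_repository_record(meta: dict) -> dict:
    """Map a PID metadata record into a hypothetical repository deposit record."""
    return {
        "title": meta["title"],
        "creators": [c["name"] for c in meta["creators"]],
        "identifier": meta["doi"],
        "funding": meta.get("fundingReferences", []),
    }

# Hypothetical metadata fetched via a PID lookup rather than typed in by hand.
meta = {
    "doi": "10.5678/dataset-9",
    "title": "Example dataset",
    "creators": [{"name": "Doe, Jane"}],
    "fundingReferences": [{"awardNumber": "42"}],
}
record = to_repository_record(meta)
```

The same mapping idea serves reporting: once the metadata is FAIR and PID-linked, each consuming system writes one transformation instead of each researcher filling in one more form.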