FAIR Island Project Receives NSF Funding

Crossposted from the FAIR Island website

The California Digital Library (CDL), University of California Gump South Pacific Research Station, Berkeley Institute for Data Science (BIDS), Metadata Game Changers, and DataCite are pleased to announce that they have been awarded a 2-year NSF EAGER grant entitled “The FAIR Island Project for Place-based Open Science” (full proposal text).

The FAIR Island project examines the impact of implementing optimal research data management policies and requirements, affording us the unique opportunity to look at the outcomes of strong data policies at a working field station. Building on the Island Digital Ecosystem Avatars (IDEA) Consortium (see Davies et al. 2016), the FAIR Island Project leverages collaboration between the Gump Station on the island of Moorea in French Polynesia (host of the NSF Moorea Coral Reef Long-Term Ecological Research site) and Tetiaroa Society, which operates a newly established field station on the atoll of Tetiaroa, a short distance from Moorea.

The FAIR Island project builds interoperability between pieces of critical research infrastructure (DMPs, research practice, PIDs, data policy, and publications), contributing to the advancement and adoption of Open Science. In the global context, there are ongoing efforts to make science Open and FAIR in order to bring more rigor to the research process, in turn increasing the reproducibility and reusability of scientific results. DataCite, as a global partner in the project, has been working to promote better management of research entities, work that has led to critical advances in the development of infrastructure for Open Science. Increased availability of a project’s different research outputs (datasets, pre-registrations, software, protocols, etc.) would enable research to be reused, findings to be aggregated across studies, and discoveries in the field to be evaluated, ultimately helping to assess and accelerate progress.

Key outcomes the FAIR Island team will develop include: 

  1. CDL, BIDS, and the University of California Natural Reserve System will work together to build an integrated system for linking research data to their associated publications via PIDs. We will develop a provenance dashboard from field to publication, documenting all research data and research outcomes derived from that data.
  2. The project also facilitates further development of the DataCite Commons interface and extends the connections made possible via the networked DMP, which allows users to track relationships between DMPs, investigators, outputs, organizations, research methods, and protocols, and to display citations throughout the research lifecycle.
  3. Developing an optimal data policy for place-based research, led by CDL, BIDS, and Metadata Game Changers, is the cornerstone of the FAIR Island project. A reusable place-based data policy template will be shared and implemented amongst participating UC-managed field stations and marine labs. In addition, we will incorporate these policies into a templated data management plan within the DMPTool application and share it with the broader community via our website, whitepapers, and conferences such as the Research Data Alliance (RDA) Plenaries.

The FAIR Island project is in a unique position to demonstrate how we can advance open science by creating optimal FAIR data policies governing all research conducted at field stations. Starting with the field station on Tetiaroa, the project team plans to demonstrate how FAIR data practices can make data reuse and collaboration more efficient. Data Management Plans (DMPs) in this “FAIR data utopia” will be used as key documents for tracking provenance, attribution, compliance, deposit, and publication of all research data collected on the island, supported by mandatory registration requirements, including extensive use of controlled vocabularies, persistent identifiers (PIDs), and other identifiers.

The project will make significant contributions to international Open Science standards and collaborate with open infrastructure providers to provide a scalable implementation of best practices across services. In addition, DataCite seeks to extend the infrastructure services developed in the project to their member community across 48 countries and 2,500 repositories globally. 

We will continue to share details and feature developments related to the FAIR Island project via our blog. You can join the conversation at the next RDA plenary in November 2021. Feedback or questions are most welcome and can be sent directly to info@fairisland.org.

DMP Competition Winners: DMPs so good they go to 11

Last December we announced the inaugural Qualitative Data Management Plan (DMP) Competition, sponsored jointly by The Qualitative Data Repository, Princeton Research Data Service, and the DMPTool. As qualitative researchers writing such plans frequently ask for examples of excellent DMPs for qualitative research, we hoped that this competition would assemble a trove of exemplar DMPs that we could share with the research community. 

We received a wealth of excellent submissions. Many of the DMPs were so good, in fact, that for that extra push over the cliff we decided to expand our pool of awardees from 10 to 11 outstanding Qualitative DMPs from a wide range of disciplines. We couldn’t be more excited to announce these winners today. We’re hugely thankful to everyone who submitted a DMP, and, of course, to the five data management experts who judged the entries (listed below).

Each entry was reviewed by three expert judges. They assessed DMPs on a 1-4 scale (not adequate to exemplary) for each item in an 18-item rubric based on the DART Project as well as guidance from the DMPTool. Judges also assigned an overall quality score from 1-10 to each DMP. You can find our rubric on OSF. Rubric scores and overall scores were closely correlated (r=.89), suggesting that the rubric closely aligned with experts’ assessments of overall quality. We also asked judges to include some overall observations about each DMP; we have included excerpts from these for each winner.
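
For readers curious how the rubric and overall scores were compared, below is a minimal sketch of that calculation using made-up placeholder scores rather than the actual judging data: each DMP gets the sum of its 18 rubric items (each scored 1-4) and an overall score (1-10), and the Pearson correlation between the two series is computed.

```python
# Minimal sketch: correlating rubric totals with overall quality scores.
# The numbers below are made-up placeholders, not the actual judging data.
from statistics import correlation  # Pearson's r; available in Python 3.10+

rubric_totals = [58, 64, 49, 70, 61, 55, 67, 45, 72, 60, 66]   # sums of 18 items scored 1-4
overall_scores = [7, 8, 5, 9, 8, 6, 9, 4, 10, 7, 9]            # overall 1-10 quality scores

r = correlation(rubric_totals, overall_scores)
print(f"Pearson r between rubric totals and overall scores: {r:.2f}")
```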

And the awards for Outstanding Qualitative DMP go to:

Listed alphabetically by first author with summary comments from the judges

1. Amelia Acker, Ashley Bower, Emily Simpson, Bethany Radcliff, University of Texas at Austin, School of Information, “COVID-19 Oral Histories Project,” developed for research by Whitney Chappell, University of Texas at San Antonio 

“Wonderful DMP and approach to community-centered work”

2. Nicholas Bell, University of Pennsylvania and Georgetown University, “Why Do So Few Workers Take Trade Adjustment Assistance” 

“This is a strong DMP, and it’s clear the author has thought through and begun implementing good data management principles even in the composition of the DMP itself. Clear descriptions of data collection and plans for storing and sharing.”

3. Patricia Condon, Louise Buckley and Eleta Exline, University of New Hampshire,  “Teaching Quantitative Data in the Social Sciences at the University of New Hampshire: Data Management Plan”.

“Concise and straightforward descriptions of data formats, plans for storing and preserving … Wonderful DMP and acknowledgement that it’s a living document!”

4. Dayna Cueva Alegría, University of Kentucky, NSF SBE, “Water Pollution Governance in Lake Titicaca: Creating Political Spaces of Democratization”

“Strong DMP with a lot of attention and detail paid to data formats, storage, preservation, and sharing”

5. Laura Garbes, Brown University, NSF SBE, with Andrew Creamer, Science Data Specialist, Brown University,  “Analyzing Diversity Efforts in Public Radio Organizations – A comparative approach to performance standards in the workplace” 

“…this DMP is pretty much perfect. Includes different measures to avoid issues related to confidentiality and security as well as it is clearly committed to data discoverability, accessibility and reusability, specially when articulates about the storage/archiving options”

6. Christopher Hale, University of Alabama, NSF- SBE “Ethnic Diversity and Public Goods Provision Across Latin America”

“Strong plan for description of data collection, storage, and sharing, with good attention to considerations for de-identifying data during the entire process, prior to depositing with the repository. This DMP has a lot of great detail about the security and anonymity practices of the PI…”

7. Jaeci Hall, University of Oregon, NSF-SBE, “Text Analysis of Taldash (GAL) in Support of Nuu-wee-ya’ Language Revitalization: Indigenous-based linguistic analysis and methodological reflections”

“This DMP is an excellent example of cultural sensitivity when working with indigenous materials… Good plans for handling sensitive data and the role of partner institutions with regards to data ownership and rights to share.”

8. Tina Nabatchi, PARCC, Syracuse University and  Rebecca McLain, Portland State University, NSF SBE, “The Atlas of Collaboration: Building the World’s First Large N Database on Collaborative Governance” 

“This DMP is strong in describing both how data will be gathered and maintained now, and how it will be appropriately archived in the future. Provides a great description of the expected data and roles and responsibilities with regard to data in a multi-institutional project. Fantastic DMP.”

9. Joshua Rubin, Bates College, NSF SBE, with Pete Schlax, Science and Data Librarian, Bates College, “Possibility Spaces and Possible Things”

“Overall an excellent DMP… [T]he overall plan is strengthened by inclusion of QDR selection for data sharing”

10. Carolina Seigler, Princeton University, Department of Sociology, NSF-SBE, “Religion and Sexual Violence” 

“Compelling DMP, really made the case why the data cannot be shared well and the security provisions were exemplary.” 

11. Ieva Zumbyte, Brown University, NSF-SES, with Andrew Creamer, Science Data Specialist, Brown University, “Tracing the Quality of Public Childcare in the Neighborhoods of Chennai, India”

“…carefully considers issues such as licensing and re-identification of de-identified data… Very good description of the chosen repository and the characteristics that backup such a choice, even though the raw data won’t be shared.”

Our panel of expert judges

  • Renata G. Curty, Social Sciences Research Facilitator, UCSB Library’s Research Data Services, University of California, Santa Barbara
  • Jennifer Doty, Research Data Librarian, Emory University
  • Celia Emmelhainz, Anthropology & Qualitative Research Librarian, University of California, Berkeley
  • Megan O’Donnell, Data Services Librarian, Iowa State University
  • Vicky Rampin, Research Data Management and Reproducibility Librarian, New York University Libraries

DMPRoadmap Annual Planning Meeting

This is a joint blog post between DMPonline and the DMPTool

In February we conducted our annual strategic planning meeting between DCC and CDL to discuss joint plans for the upcoming year. We were joined from DCC by Kevin Ashley, Patricia Herterich, Magdalena Drafiova, Marta Nicholson, Ray Carrick, Angus Whyte, and Diana Sisu, and from CDL by John Chodacki, Marisa Strong, Catherine Nancarrow, Brian Riley, and Maria Praetzellis.

This meeting was a follow-up to our 2019 meeting in Edinburgh, where we were able to spend three days together with our colleagues. This time around we had to swap the lovely city of Edinburgh for Zoom and condense three days into a half-day online meeting. Nonetheless, we managed to accomplish some important high-level planning discussions regarding our continued collaboration on the Roadmap codebase. In this blog post we summarize what we discussed and share our plans for the coming months.

Celebrating the achievements of 2020

We all agreed that despite the many challenges of 2020 (not to mention the departure of Sarah Jones and Sam Rust), this was a very successful year for our collaboration. Our team of developers completed several large pieces of development, a few of which are highlighted below:

  • Completed the Rails 5 migration
  • Developed an API that is compliant with the RDA Common Standard for DMPs
  • Released a new feature allowing for conditional questions and notifications within DMP templates
  • Improved the usage dashboard
  • Integrated with Google Analytics
  • Integrated with translation.io to support multiple languages

Several new features supporting machine-actionable DMPs were also released over the past year, including:

  • ROR identifiers for research organizations
  • Funder Registry identifiers for funders
  • ORCID iDs for DMP creators and collaborators
  • An API compliant with the RDA Common Standard metadata schema
  • The ability to export plans as RDA Common Standard compliant JSON (a sketch of the shape of this output follows below)
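
As a rough illustration of how these pieces fit together, the sketch below fetches plans from the API and reads a few RDA Common Standard fields (title, DMP identifier, and the contact's ORCID). The endpoint path, authentication header, and response shape shown here are assumptions for illustration only, not a definitive description of the DMPRoadmap or DMPTool API.

```python
# Illustrative sketch only: endpoint path, auth header, and response shape
# are assumptions, not a definitive description of the DMPRoadmap/DMPTool API.
import requests

API_BASE = "https://dmptool.org/api/v2"  # hypothetical base URL for illustration
TOKEN = "YOUR_API_TOKEN"                 # placeholder credential

resp = requests.get(
    f"{API_BASE}/plans",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
)
resp.raise_for_status()

# The RDA Common Standard nests each plan under a top-level "dmp" object.
for item in resp.json().get("items", []):
    dmp = item.get("dmp", {})
    print(dmp.get("title"))
    print("  DMP ID:", dmp.get("dmp_id", {}).get("identifier"))
    print("  Contact ID:", dmp.get("contact", {}).get("contact_id", {}).get("identifier"))
```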

Highlights of our 2021 Development Plans 

During the first quarter of 2021, DMPonline will focus on consolidating the code base, making sure the various changes both the DMPTool and DMPonline teams have developed over the past year are integrated and that any new work is carried out on top of a shared code base.

UX Improvements 

Based on the extensive usability testing that both DMPTool and DMPonline have conducted over the past year, we will select pieces of work that will have significant impact for both services. Initially we will focus on a new plan-creation wizard, making it easier to create new plans and to select templates and appropriate guidance.

Expanded machine-actionable DMP features

  • The ability to generate a unique identifier for a DMP with an associated landing page that connects the DMP to eventual research outputs (see the sketch after this list)
  • A new Research Outputs tab will allow for more granular description of specific research outputs 
  • Integration with the Registry of Research Data Repositories (re3data)
  • Integration with FAIRsharing
  • Plan versioning
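
As a rough sketch of what connecting a DMP ID to eventual research outputs could look like (the first item above), the snippet below uses DataCite-style related identifiers to link a DMP to a dataset and an article. The DOIs and relation types shown are illustrative assumptions, not finalized project metadata.

```python
# Illustrative sketch: linking a DMP ID to research outputs with
# DataCite-style related identifiers. All identifiers are placeholders.
dmp_record = {
    "doi": "10.48321/D1EXAMPLE",  # hypothetical DMP ID
    "types": {"resourceTypeGeneral": "OutputManagementPlan"},
    "relatedIdentifiers": [
        {
            "relatedIdentifier": "10.5061/dryad.example",    # dataset produced under this DMP
            "relatedIdentifierType": "DOI",
            "relationType": "Documents",
        },
        {
            "relatedIdentifier": "10.1234/article.example",  # article reporting the results
            "relatedIdentifierType": "DOI",
            "relationType": "References",
        },
    ],
}

# A landing page for the DMP ID could render these links so readers can
# follow the plan through to its eventual outputs.
for rel in dmp_record["relatedIdentifiers"]:
    print(f'{dmp_record["doi"]} --{rel["relationType"]}--> {rel["relatedIdentifier"]}')
```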

DMPRoadmap for funders

In 2021, we will also work on making DMPRoadmap more useful to funders. This will include:

  • A different dashboard view
  • Easier ways to integrate grant numbers and other funder specific information
  • Tagging of institutional DMP templates as funder compliant

Other collaborations

The DMPonline team will also work with TU Delft on a project to integrate the system more closely with institutional login options, automatically gathering more information about users and using it to improve workflows and reporting for institutional administrators.

RSpace integration

The electronic lab notebook RSpace and the DMPTool are currently working on an integration that allows bi-directional linking of data between the two systems. The first phase of this work is currently in development and uses OAuth so that users can connect their accounts. Once this initial connection is running, the team will look at bi-directional notifications and updates between the two systems.
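
For readers unfamiliar with how that account linking typically works, here is a minimal sketch of an OAuth 2.0 authorization-code exchange of the kind such an integration relies on. The endpoint URLs, client credentials, and redirect URI are placeholders for illustration, not the actual RSpace or DMPTool endpoints.

```python
# Minimal OAuth 2.0 authorization-code sketch. URLs, client credentials,
# and the redirect URI are placeholders, not actual RSpace/DMPTool endpoints.
import secrets
from urllib.parse import urlencode

import requests

AUTHORIZE_URL = "https://dmp.example.org/oauth/authorize"  # placeholder
TOKEN_URL = "https://dmp.example.org/oauth/token"          # placeholder
CLIENT_ID = "eln-integration"                              # placeholder
CLIENT_SECRET = "keep-me-secret"                           # placeholder
REDIRECT_URI = "https://eln.example.org/oauth/callback"    # placeholder

# Step 1: send the user to the authorization page to approve the connection.
state = secrets.token_urlsafe(16)  # random value to guard the callback against CSRF
auth_request = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "state": state,
})
print("Open in browser:", auth_request)

# Step 2: the callback receives ?code=...&state=..., and the integration
# exchanges the code for an access token used to act on the user's behalf.
def exchange_code_for_token(code: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```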

For a more detailed description of our upcoming development plans please see our wiki page. This promises to be another busy but exciting year of work for both teams and we look forward to continuing to share our progress with you!

Furthering Open Science through Research Data Management Services

As I begin my second year at CDL, I am excited to outline the objectives and key activities for my work: furthering research data management (RDM) practices that support open science at the University of California and beyond. 

I conceptualize our work in the larger context of what an ideal RDM ecosystem might be: one in which open science practices are universally understood and implemented by data creators and stewards, built upon the bedrock of simple, interoperable RDM infrastructure and optimal open data policy. Below are four key ways in which RDM services at CDL contribute to this overall effort in 2021.

  1. Facilitating Communication Between Data Librarians and Researchers

For almost ten years now, the DMPTool web application has provided accessible, jargon-free, practical guidance for researchers creating and implementing effective data management plans for 30+ funding agencies. Thanks to our dedicated Editorial Board, we are able to keep the tool in sync with current funder requirements and best practices.

In 2021, we will be expanding our outreach to the library community by offering quarterly community calls with DMPTool users in order to discuss new features, highlight community use, and facilitate feedback. Additionally, the DMPTool Editorial Board will analyze existing guidance within the tool to identify aspects that need to be updated or new topics that should be included. The DMPTool has long been a community-supported application and we will continue to expand our engagement with the community as we grow the application. 

  2. Serving as an Interoperable Partner in Essential RDM Services

Our work developing the next generation of machine-actionable, networked DMPs builds upon community-developed standards and is rooted in collaboration. These partnerships will continue to be essential to creating the new networked DMP. Last year’s release of the RDA DMP Common Standard for machine-actionable Data Management Plans and the recent report Implementing Effective Data Practices: Stakeholder Recommendations for Collaborative Research Support (written by CDL, ARL, AAU & APLU) are a testament to the power of these partnerships. We simply get more done when we work together. Additionally, our continued collaboration with DMPonline allows us to share resources as we co-develop via the DMPRoadmap codebase, share best practices, and advance new features jointly.

Looking ahead to 2021, we will expand on these collaborations, including:

  • Partnering with DataCite to encourage adoption of the new DMP ID, a resource made possible by the forthcoming metadata schema update. Expect more updates on this soon!
  • A new integration between the DMPTool and electronic lab notebook platforms, starting with RSpace.
  • Partnering with the UC Natural Reserve System and the Tetiaroa Society to advance data policies supporting open science at working field stations.
  3. Supporting a Transparent Research Process

Much of our work last year focused on developing the backend infrastructure necessary to confidently say that DMPTool DMPs are machine-actionable.

With the infrastructure in place and development completed, in 2021 we will be releasing several new features to expand the possibilities of the new networked DMP and help ensure transparency in the research process. Many of these new features are currently being pilot tested as part of the FAIR Island Project. We will also be conducting webinars in the coming weeks to gather feedback from the community to further inform our iterative feature development and release cycles.

  4. Developing Optimal Open Data Policies

The FAIR Island project is a real-world use case evaluating the impact of implementing optimal research data management policies and requirements; the project will help demonstrate and publicize the outcomes of strong data policies in practice at a working field station. 

With the recent addition of Erin Robinson to the team, the FAIR Island project is making swift progress towards implementing a data policy that will govern data collected on the Tetiaroa atoll. This data policy is still open to community feedback, so if you are interested in contributing, now is your chance! Please share your thoughts via this survey.

In 2021, the FAIR Island project team will continue to advance and iterate on the data policy, working with additional field stations to advance data policies supporting open science. In partnership with the UC Natural Reserve System and 4Site network, we aim to move toward a common, optimal data policy that can be shared amongst UC field stations and other partner sites. To keep abreast of our progress please check out our project website where we are tracking project work in our blog. 

How to contribute

Building on a solid foundation of community-developed standards for DMPs and FAIR data, this year we will be moving much of this work from theory into real-world implementation.

It’s an exciting time for these developments and we welcome all questions, comments, and advice.  Please reach out with your thoughts!

Call for Submissions to the Inaugural Qualitative Data Management Plan (DMP) Competition

Data Management Plans (DMPs) play an integral role in ensuring that data are collected, organized, stored, and shared responsibly. Qualitative researchers writing such plans frequently ask for examples of excellent DMPs for qualitative research. To respond to this need, and to celebrate excellence in managing and sharing qualitative data, we are excited to announce the inaugural Qualitative Data Management Plan Competition.

If you have a DMP for a qualitative research project, you are invited to submit it for a chance to win one of 10 “outstanding qualitative DMP” awards, each of which includes a prize of $100. The competition is a joint initiative of the Qualitative Data Repository, DMPTool, and the Princeton Research Data Service.

Rules for Submission:

  1. The DMP must describe a research project that is either primarily qualitative in nature, or is multi-method with qualitative data forming a significant part of the project (“qualitative” is conceived broadly; see this non-exhaustive list of types of qualitative data).
  2. The DMP should be about 2 or 3 single-spaced pages in length.
  3. The DMP must be publicly available online. We recommend sharing the DMP directly through the DMPTool or publishing on the Zenodo platform.
  4. The competition is open to DMPs from current or past proposals.
  5. If the DMP was written for a particular funding opportunity, please include a link to the funder’s requirements for DMPs.
  6. The author(s) must complete the submission form no later than 11:59 PM EDT March 15, 2021.

Because of legal restrictions beyond our control, while anyone submitting a DMP will be considered for an award, the monetary component of the award is only available to participants who are eligible to work in the US.

Please submit your entry through this form.

Valid submissions will be reviewed by a panel of five judges, and their evaluations will be guided by the DMP rubric from the DART Project (https://osf.io/kh2y6/). We expect to notify the 10 winners via email by April 30, 2021 and publicly announce them on our websites and social media. Please contact the competition organizers at qdr@syr.edu with any questions.

Our panel of judges:

  • Renata G. Curty, Social Sciences Research Facilitator, UCSB Library’s Research Data Services, University of California, Santa Barbara
  • Jennifer Doty, Research Data Librarian, Emory University
  • Celia Emmelhainz, Anthropology & Qualitative Research Librarian, University of California, Berkeley
  • Megan O’Donnell, Data Services Librarian, Iowa State University
  • Vicky Steeves, Research Data Management and Reproducibility Librarian, New York University Libraries

Interviews on Implementing Effective Data Practices, Part I: Why This Work Matters

Cross-posted from ARL News by Natalie Meyers, Judy Ruttenberg, and Cynthia Hudson-Vitale | October 28, 2020

In preparation for the December 2019 invitational conference, “Implementing Effective Data Practices,” hosted by the Association of Research Libraries (ARL), Association of American Universities (AAU), Association of Public and Land-grant Universities (APLU), and California Digital Library (CDL), we conducted a series of short pre-conference interviews.

We interviewed representatives from scholarly societies, research communities, funding agencies, and research libraries about their perspectives and goals around machine-readable data management plans (maDMPs) and persistent identifiers (PIDs) for data. We hoped to help expose the community to the range of objectives and concerns we bring to the questions we collectively face in adopting these practices. We asked about the value the interviewees see or wish to see in maDMPs and PIDs, their concerns, and their pre-conference goals.

In an effort to make these perspectives more widespread, we are sharing excerpts from these interviews and discussing them in the context of the final conference report that was released recently. Over the next three weeks, we will explore and discuss interview themes in the context of broad adoption of these critical tools.

Why This Work Matters

To start off this series of scholarly communications stakeholder perspectives, we need to position the importance of this infrastructure within broader goals. The overall goal of the conference was to explore the ways that stakeholders could adopt a more connected ecosystem for research data outputs. The vision of why this was important and how it would be implemented was a critical discussion point for the conference attendees.

Benjamin Pierson, then senior program officer, now deputy director for enterprise data, Bill and Melinda Gates Foundation, expressed the value of this infrastructure as key to solving real-world issues and making data and related assets first-class research assets that can be reused with confidence.

Clifford Lynch, executive director, Coalition for Networked Information, stated how a public sharing of DMPs within an institution would create better infrastructure and coordination at the university level for research support.

From the funder perspective, Jason Gerson, senior program officer, PCORI (Patient-Centered Outcomes Research Institute), indicated that PIDs are also essential for providing credit for researchers as well as for providing funders with a mechanism to track the impact of the research they fund.

Margaret Levenstein, director, ICPSR (Inter-university Consortium for Political and Social Research), spoke about the importance of machine-readable DMPs and PIDs for enhancing research practices of graduate students and faculty as well as the usefulness for planning repository services.

For those developing policies at the national level, Dina Paltoo, then assistant director for policy development, US National Library of Medicine, currently assistant director, scientific strategy and innovation, Immediate Office of the Director, US National Heart, Lung, and Blood Institute, discussed how machine-readable data management plans are integral for connecting research assets.

All of the pre-conference interviews are available on the ARL YouTube channel.

Natalie Meyers is interim head of the Navari Family Center for Digital Scholarship and e-research librarian for the University of Notre Dame; Judy Ruttenberg is senior director of scholarship and policy for ARL; and Cynthia Hudson-Vitale is head of Research Informatics and Publishing for Penn State University Libraries.

Effective Data Practices: new recommendations to support an open research ecosystem

We are pleased to announce the release of a new report written with our partners at the Association of Research Libraries (ARL), the Association of American Universities (AAU), and the Association of Public and Land-grant Universities (APLU): Implementing Effective Data Practices: Stakeholder Recommendations for Collaborative Research Support.  

The report brings together information and insights shared during a December 2019 National Science Foundation sponsored invitational conference on implementing effective data practices. In this report, experts from library, research, and scientific communities provide key recommendations for effective data practices to support a more open research ecosystem. 

The conference focused on designing guidelines for (1) using persistent identifiers (PIDs) for datasets and (2) creating machine-readable data management plans (DMPs), both data practices recommended in NSF’s May 2019 Dear Colleague Letter. Based on the information and insights shared during the conference, the project team developed a set of recommendations for the broad adoption and implementation of these practices. The report focuses on recommendations for research institutions and also provides guidance for publishers, tool builders, and professional associations. The AAU-APLU Institutional Guide to Accelerating Public Access to Research Data, forthcoming in spring 2021, will include the recommendations.

Five key takeaways from the report are:

  • Center the researcher by providing tools, education, and services that are built around data management practices that accommodate the scholarly workflow.
  • Create closer integration of library and scientific communities, including researchers, institutional offices of research, research computing, and disciplinary repositories.
  • Provide sustaining support for the open PID infrastructure that is a core community asset and essential piece of scholarly infrastructure. Beyond adoption and use of PIDs, organizations that sustain identifier registries need the support of the research community.
  • Unbundle the DMP, because the DMP as currently understood may be overloaded with too many expectations (for example, simultaneously a tool within the lab, among campus resource units, and with repositories and funding agencies). Unbundling may allow for different parts of a DMP to serve distinct and specific purposes.
  • Unlock discovery by connecting PIDs across repositories to assemble diverse data to answer new questions, advance scholarship, and accelerate adoption by researchers.

The report also identifies five core PIDs that are fundamental and foundational to an open data ecosystem. Using these PIDs will ensure that basic metadata about research is standardized, networked, and discoverable in scholarly infrastructure (a sketch of their typical shapes follows the list below):

  1. Digital object identifiers (DOIs) from DataCite to identify research data, as well as from Crossref to identify publications
  2. Open Researcher and Contributor (ORCID) iDs to identify researchers
  3. Research Organization Registry (ROR) IDs to identify research organization affiliations 
  4. Crossref Funder Registry IDs to identify research funders
  5. Crossref Grant IDs to identify grants and other types of research awards
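
To make the shape of these identifiers concrete, the sketch below assembles the five PID types into a single metadata record. The values are illustrative placeholders, not identifiers from a real project.

```python
# Illustrative only: the identifier values below are placeholders showing
# the typical shape of each of the five core PID types.
research_record = {
    "dataset_doi": "https://doi.org/10.5061/dryad.example",        # DataCite DOI for the data
    "publication_doi": "https://doi.org/10.1234/journal.example",  # Crossref DOI for the paper
    "creator_orcid": "https://orcid.org/0000-0002-1825-0097",      # ORCID iD for the researcher
    "affiliation_ror": "https://ror.org/00example00",              # ROR ID for the organization
    "funder_id": "https://doi.org/10.13039/100000001",             # Crossref Funder Registry ID
    "grant_doi": "https://doi.org/10.12345/grant.example",         # Crossref Grant ID
}

for role, pid in research_record.items():
    print(f"{role}: {pid}")
```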

The report is intended to encourage collaboration and conversation among a wide range of stakeholder groups in the research enterprise by showcasing how collaborative processes help with implementing PIDs and machine-actionable DMPs (maDMPs) in ways that can advance public access to research.

The full report is now available online.

This material is based upon work supported by the National Science Foundation under Grant Number 1945938. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Project team:

  • John Chodacki, California Digital Library
  • Cynthia Hudson-Vitale, Pennsylvania State University
  • Natalie Meyers, University of Notre Dame
  • Jennifer Muilenburg, University of Washington
  • Maria Praetzellis, California Digital Library
  • Kacy Redd, Association of Public and Land-grant Universities
  • Judy Ruttenberg, Association of Research Libraries
  • Katie Steen, Association of American Universities

 

Additional report and conference contributors:

  • Joel Cutcher-Gershenfeld, Brandeis University
  • Maria Gould, California Digital Library

DMPTool Funder Templates Updated

We are excited to announce the completion of the first project of our newly established DMPTool Editorial Board. As of September 2020, the Board has audited 36 funder templates within the DMPTool and updated them where necessary to reflect current proposal requirements and ensure all funder-related content is up to date.

Template updates mean that administrators will now need to transfer any customizations they may have created for these templates (instructions here).

None of the updates affect the core requirements of the DMPs; they largely involve corrections to links, resources, and other data management planning details. A detailed summary of the changes for each template is below, and you can view all templates on the DMPTool Funder Requirements page.

The critical work of keeping the DMPTool in line with current funder requirements would not have been possible without the effort, expertise, and excellence of our volunteer Editorial Board, and we at the DMPTool are endlessly grateful for their commitment to supporting the tool. Please join us in recognizing their contributions and thanking them for their work supporting our shared infrastructure advancing research data management.

  • Heather L Barnes, PhD, Digital Curation Librarian, Wake Forest University
  • Raj Kumar Bhardwaj, PhD, Librarian, St Stephen’s College, University of Delhi, India
  • Renata G. Curty, PhD, Social Sciences Data Curator, University of California, Santa Barbara
  • Jennifer Doty, Research Data Librarian, Emory University
  • Nina Exner, Research Data Librarian, Virginia Commonwealth University
  • Geoff Hamm, PhD, Scientific Publications Coordinator, Lawrence Berkeley National Laboratory
  • Janice Hermer, Health Sciences Liaison Librarian, Arizona State University
  • Megan O’Donnell, Data Services Librarian, Iowa State University
  • Reid Otsuji, Data Curation Specialist Librarian, University of California, San Diego
  • Nick Ruhs, PhD, STEM Data & Research Librarian, Florida State University
  • Anna Sackmann, Science Data & Engineering Librarian, University of California, Berkeley
  • Bridget Thrasher, PhD, Data Stewardship Coordinator, Associate Scientist III, National Center for Atmospheric Research
  • Douglas L. Varner, Assistant Dean for Information Management / Chief Biomedical Informationist, Georgetown University Medical Center

Together with the Editorial Board, we’ll be working on adding new templates to the tool over the coming months. If you have suggestions for funders to be added please let us know by emailing maria.praetzellis@ucop.edu.

Summary of DMPTool Template Updates

All NSF templates were updated to include links to the updated 2020 Proposal & Award Policies and Procedures Guide (2020 PAPPG). Additional updates are summarized below:

NSF-AGS: Atmospheric and Geospace Sciences

  • Updated link to new 2020 PAPPG
  • Edited question text 

BCO-DMO NSF OCE: Biological and Chemical Oceanography

  • Updated link to new 2020 PAPPG
  • Updated questions & links

NSF-CISE: Computer and Information Science and Engineering 

  • Updated link to 2020 PAPPG. 
  • Added “Additional Guidance on Selecting or Evaluating a Repository” under “Plans for Archiving and Preservation”

NSF-DMR: Materials Research

Department of Energy (DOE): Generic

  • Funder links added for Office of Science and Energy Efficiency/Renewable Energy instructions

Department of Energy (DOE): Office of Science

  • Funder link added
  • Description updated with additional guidance

Institute of Museum and Library Services (IMLS) 

  • Data Management Plans for IMLS are collected via the IMLS Digital Product Form. Originally the form was broken out into three templates within the DMPTool; however, we have streamlined the process and combined them into one comprehensive template to more accurately reflect current requirements.

National Aeronautics and Space Administration (NASA)

  • Updated text to match the wording of NASA’s description of an ideal DMP 

USDA

  • Reformatted section 1 to make reading easier.
  • Deleted the compliance/reporting section. This is no longer part of the DMP template as it is related to annual reporting. This information was moved to an Overview phase description.
  • Made the guidance links consistent.

Alfred P. Sloan Foundation

National Oceanic and Atmospheric Administration (NOAA)

  • Updated links

U.S. Geological Survey (USGS)

  • Updated questions and links
  • We are continuing to work with USGS and may have additional updates to this template in the near future. 

Summer 2020 DMPTool Release

We’re very pleased to announce the release of several major new features for the DMPTool! These include:

  • Integration with the Research Organization Registry (ROR) and Funder Registry (Fundref)
  • The ability to create conditional questions and set email notifications within DMP templates
  • Integration with Google Analytics for usage statistics
  • The ability to connect additional grant contributors (and their ORCIDs) to a plan

The release notes are available in the DMPTool GitHub and detailed descriptions are available below.

Research Organization and Funder Registry Integration

ROR is a registry of persistent identifiers (PIDs) for research organizations, which are defined as any organization that conducts, produces, manages, or touches research. ROR has generated identifiers for over 91,000 organizations so far. The Crossref Funder Registry (Fundref) is a registry of grant-giving organizations and has created over 20,000 identifiers so far. We now have 1,582 unique organizations matched with their ROR or Funder IDs within the DMPTool.

Utilizing these identifiers within a DMP is a key step towards a truly machine-actionable DMP (maDMP). Employing PIDs such as ROR and Fundref in DMPs facilitates the linking of people, grants, and organizations, and enables better tracking and discovery of research outputs by institution. These identifiers will be included in our upcoming maDMP JSON export feature, which is due to be released in late summer and is key in enabling maDMP interactions via API integrations.

Snippet from our upcoming API utilizing the RDA Common Standard schema and incorporating RORs
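
A rough sketch of what such a snippet might look like is below. The field names follow the RDA DMP Common Standard; the identifiers and values are placeholders (and the affiliation block is shown as an illustrative extension), so this is not actual DMPTool API output.

```python
# Rough sketch of a plan expressed in the RDA Common Standard shape with a ROR.
# Field names follow the RDA DMP Common Standard; all values are placeholders,
# and the affiliation block is an illustrative extension of the base schema.
import json

plan = {
    "dmp": {
        "title": "Example Data Management Plan",
        "created": "2020-07-01T12:00:00Z",
        "modified": "2020-07-15T09:30:00Z",
        "dmp_id": {"identifier": "https://doi.org/10.48321/D1EXAMPLE", "type": "doi"},
        "contact": {
            "name": "Jane Researcher",
            "mbox": "jane.researcher@example.edu",
            "contact_id": {"identifier": "https://orcid.org/0000-0002-1825-0097", "type": "orcid"},
            "affiliation": {
                "name": "Example University",
                "affiliation_id": {"identifier": "https://ror.org/00example00", "type": "ror"},
            },
        },
        "project": [
            {
                "title": "Example Project",
                "funding": [
                    {
                        "funder_id": {"identifier": "https://doi.org/10.13039/100000001", "type": "fundref"},
                        "funding_status": "granted",
                    }
                ],
            }
        ],
    }
}

print(json.dumps(plan, indent=2))
```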

Organizational administrators of the DMPTool may notice an increase in the number of users affiliated with their institution. As part of integrating with ROR and Fundref, we have connected 4,750 previously unaffiliated users with their host institutions by matching email domains.

Conditional Questions and Email Notifications

DMPTool administrators who take advantage of the ability to create or customize templates will be excited to learn that they can now reduce the number of questions included in a customized template by skipping questions. For example, if a research project is not creating or using any sensitive data, the template can be modified to skip questions related to the special handling of sensitive data.

Additionally, the new feature includes the ability to set email notifications that are triggered if a user selects a specific answer. For example, you may want to create an alert for large data volumes. 

Read more about utilizing these new features in our documentation or watch this video tutorial created by our DMPRoadmap colleagues at DMPonline.

Two important things to note about creating conditions and email notifications: 

1. If you are creating a new template, save all questions first and then set the conditions on them.

2. This feature only works on questions with structured answers, such as checkboxes, drop-downs, or radio buttons. You can add a condition on a single option or on a combination of responses.

There has also been interest in enabling questions to be displayed when the user answers a conditional question instead of hiding them (the current default). We are currently consulting with the community to better understand the use cases, functionality, and scope of technical work to add this feature. This will be released after we complete our current work migrating to Rails v5 (Summer 2020). 

Google Analytics

Users can now connect Google Analytics to track web statistics for an organizational account within the DMPTool. Statistics retrieved via Google Analytics include the number of sessions, users, average session duration, and pageviews.

Users who are already using Google Analytics for tracking may want to add the DMPTool to their account. Connecting DMPTool to Google Analytics is a quick and easy process — simply copy the tracker code from your Google Analytics account and paste it into your Organizational Details page in the DMPTool and you’re good to go.

For further details about adding your DMPTool account to Google Analytics, please see our help documentation. Existing DMPTool Usage Statistics also remain accessible from within the DMPTool for all organizational administrators. 

Support for Multiple Contributors

A new tab entitled “Contributors” is now visible within the Create Plan interface. Here users can list contributors to a grant, including their ORCIDs, and select a role for each individual. This feature utilizes CRediT, the Contributor Roles Taxonomy, to describe each contributor’s specific contribution. Using ORCIDs and a controlled vocabulary for roles will facilitate the tracking of key people involved in the project, allow contributors to receive credit for their work, and enable other stakeholders to identify the key individuals involved.

Project start/end dates

To support our machine-actionable DMP work, we have added project start and end dates to the project details page. Having these key project dates as part of the DMP is essential for triggering actions at the appropriate moment. For example, a project end date can trigger an action to notify key stakeholders, such as repository managers or storage administrators, at the end of the grant.
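
As a simple illustration of the kind of automation these dates enable, the sketch below checks whether a plan's project end date has passed and, if so, flags it for follow-up with repository or storage administrators. The plan records and the notify() helper are hypothetical placeholders, not DMPTool internals.

```python
# Hypothetical sketch: using a project end date to trigger a follow-up action.
# The plan records and notify() helper are placeholders, not DMPTool internals.
from datetime import date

plans = [
    {"title": "Coral reef survey DMP", "project_end": date(2020, 6, 30),
     "stakeholders": ["repository-manager@example.edu"]},
    {"title": "Oral histories DMP", "project_end": date(2021, 3, 1),
     "stakeholders": ["storage-admin@example.edu"]},
]

def notify(recipient: str, message: str) -> None:
    # Stand-in for an email or API notification.
    print(f"To {recipient}: {message}")

today = date.today()
for plan in plans:
    if plan["project_end"] <= today:
        for person in plan["stakeholders"]:
            notify(person, f'Project "{plan["title"]}" has ended; please review data deposit and storage.')
```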

What’s next?

Together with our DMPRoadmap colleagues, we are currently upgrading our infrastructure to Rails 5. This is a substantial piece of development work that, although entirely on the backend and invisible to users, is essential to keeping our service running and adding additional requested features. We expect this development to be completed in July.

Following the Rails upgrade, work will continue on our maDMP initiative and we plan on pushing out a feature to mint DOIs for DMPs late this summer. Additional features we are developing simultaneously include: support for multiple datasets within a DMP, an updated API and the ability to export plans as JSON, and a new template builder to facilitate the creation of maDMP templates within the application. We’ll continue to update you here as development work progresses.

As always, feedback or questions are most welcome and can be sent directly to maria.praetzellis@ucop.edu.

The DMPTool needs you!

The DMPTool, which as of February 2020 has supported 44,415 users and 266 participating institutions, currently maintains thirty-eight templates for twenty-two different federal and private funders. It is no secret that funder requirements and the associated application and data management guidance change regularly, and it is crucial that the DMPTool reflect these changes in real time to remain a high-integrity resource. Staying up to date with rapidly changing grant requirements and RDM best practices is no small task, however; there is no magic wand or automated system to keep the DMPTool in sync with current requirements. How does it happen, then? The key factor in the success of the DMPTool over the last nine years has been, and continues to be, the contributions of and collaboration with the user community.

As the capabilities of the DMPTool expand and the needs for it grow, we are calling for additional community members to contribute and to ensure its continued success. We’re therefore pleased to announce the formation of the DMPTool Editorial Board and to invite you to join. Our goal for the Board is to formalize existing community involvement in the tool and to have representation across disciplines, with varied areas of expertise, from a wide range of institutions, including librarians along the full career spectrum. Experience working with DMPs is desirable, and we welcome applications from any individuals committed to supporting effective research data management.

Responsibilities for members of the Editorial Board include the following:

  • One year term of service (with the opportunity to extend if desired)
  • Short bi-monthly (or as needed) meetings
  • Individual ownership of specific funder templates, linked to your area of focus
  • Creation of new templates as needed
  • Suggestions for default guidance and best practices
  • Identification and publication of example DMPs to include in the tool
  • An estimated 1-4 hours of work a month to check for requirement updates from funders, report to the Board, and update templates and guidance in the DMPTool

Joining the DMPTool Editorial Board presents an excellent opportunity to meet fellow research data management professionals, actively contribute to the community, help support a critical piece of open-source technology advancing research data management, and keep abreast of changes in funding requirements and the larger funding ecosystem. Editorial Board members will work to ensure the tool provides current information about grant requirements and corresponding guidance.

We hope you will consider this invitation to contribute and apply! We have opened applications and responses are due Friday, March 13. All questions, comments, concerns, or advice are welcome: maria.praetzellis@ucop.edu. We look forward to hearing from you!