DMPTool Funder Templates Updated

We are excited to announce the completion of the first project of our newly established DMPTool Editorial Board. As of September 2020, the Board has audited 36 funder templates within the DMPTool and updated them where necessary to reflect current proposal requirements and ensure that all funder-related content is up to date.

Template updates mean that administrators will now need to transfer any customizations they may have created for these templates (instructions here). 

None of the updates affects the core requirements of the DMPs; the changes largely involve correcting links, resources, and other data management planning details. A detailed summary of the changes for each template is below, and you can view all templates on the DMPTool Funder Requirements page.

The critical work of keeping the DMPTool in line with current funder requirements would not have been possible without the effort, expertise, and excellence of our volunteer Editorial Board, and we at the DMPTool are endlessly grateful for their commitment to supporting the tool. Please join us in recognizing their contributions and thanking them for their work supporting our shared infrastructure for advancing research data management.

  • Heather L Barnes, PhD, Digital Curation Librarian, Wake Forest University
  • Raj Kumar Bhardwaj, PhD, Librarian, St Stephen’s College, University of Delhi, India
  • Renata G. Curty, PhD, Social Sciences Data Curator, University of California, Santa Barbara
  • Jennifer Doty, Research Data Librarian, Emory University
  • Nina Exner, Research Data Librarian, Virginia Commonwealth University
  • Geoff Hamm, PhD, Scientific Publications Coordinator, Lawrence Berkeley National Laboratory
  • Janice Hermer, Health Sciences Liaison Librarian, Arizona State University
  • Megan O’Donnell, Data Services Librarian, Iowa State University
  • Reid Otsuji, Data Curation Specialist Librarian, University of California, San Diego
  • Nick Ruhs, PhD, STEM Data & Research Librarian, Florida State University
  • Anna Sackmann, Science Data & Engineering Librarian, University of California, Berkeley
  • Bridget Thrasher, PhD, Data Stewardship Coordinator, Associate Scientist III, National Center for Atmospheric Research
  • Douglas L. Varner, Assistant Dean for Information Management / Chief Biomedical Informationist, Georgetown University Medical Center

Together with the Editorial Board, we’ll be working on adding new templates to the tool over the coming months. If you have suggestions for funders to be added, please let us know by emailing maria.praetzellis@ucop.edu.

Summary of DMPTool Template Updates

All NSF templates were updated to include links to the updated 2020 Proposal & Award Policies and Procedures Guide (2020 PAPPG). Additional updates are summarized below:

NSF-AGS: Atmospheric and Geospace Sciences

  • Updated link to new 2020 PAPPG
  • Edited question text 

BCO-DMO NSF OCE: Biological and Chemical Oceanography

  • Updated link to new 2020 PAPPG
  • Updated questions & links

NSF-CISE: Computer and Information Science and Engineering 

  • Updated link to 2020 PAPPG. 
  • Added “Additional Guidance on Selecting or Evaluating a Repository” under “Plans for Archiving and Preservation”

NSF-DMR: Materials Research

Department of Energy (DOE): Generic

  • Funder links added for the Office of Science and Energy Efficiency/Renewable Energy instructions

Department of Energy (DOE): Office of Science

  • Funder link added
  • Description updated with additional guidance

Institute of Museum and Library Services (IMLS) 

  • Data Management Plans for IMLS are collected via the IMLS Digital Product Form. Originally the form was broken out into three templates within the DMPTool; however, we have streamlined the process and combined them into one comprehensive template to more accurately reflect current requirements.

National Aeronautics and Space Administration (NASA)

  • Updated text to match the wording of NASA’s description of an ideal DMP 

USDA

  • Reformatted section 1 to make reading easier.
  • Deleted the compliance/reporting section. This is no longer part of the DMP template as it is related to annual reporting. This information was moved to an Overview phase description.
  • Made the guidance links consistent.

Alfred P. Sloan Foundation

National Oceanic and Atmospheric Administration (NOAA)

  • Updated links

U.S. Geological Survey (USGS)

  • Updated questions and links
  • We are continuing to work with USGS and may have additional updates to this template in the near future. 

Summer 2020 DMPTool Release

We’re very pleased to announce the release of several major new features for the DMPTool! These include:

  • Integration with the Research Organization Registry (ROR) and Funder Registry (Fundref)
  • The ability to create conditional questions and set email notifications within DMP templates
  • Integration with Google Analytics for usage statistics
  • The ability to connect additional grant contributors (and their ORCIDs) to a plan

The release notes are available in the DMPTool GitHub and detailed descriptions are available below.

Research Organization and Funder Registry Integration

ROR is a registry of Persistent Identifiers (PIDs) for research organizations, which are defined as any organization that conducts, produces, manages, or touches research. ROR has generated identifiers for over 91,000 organizations so far. The Crossref Funder Registry (Fundref) is a registry of grant-giving organizations and has created over 20,000 identifiers so far. We now have 1,582 unique organizations matched with their RORs or Funder IDs within the DMPTool. 

Utilizing these identifiers within a DMP is a key step towards a truly machine-actionable DMP (maDMP). Employing PIDs such as ROR and Fundref in DMPs facilitates the linking of people, grants, and organizations, and enables better tracking and discovery of research outputs by institution. These identifiers will be included in our upcoming maDMP JSON export feature, which is due to be released in late summer and is key in enabling maDMP interactions via API integrations.

Snippet from our upcoming API utilizing the RDA Common Standard schema and incorporating RORs
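
As a rough illustration of what such a fragment can contain, here is a hypothetical sketch (the field names follow the RDA Common Standard draft, and the ROR and Funder Registry identifiers are real public IDs used purely as examples; this is not the DMPTool’s actual output):

```python
import json

# Hypothetical maDMP export fragment: a plan whose contributor affiliation and
# funder carry ROR / Funder Registry identifiers. Field names follow the RDA
# Common Standard draft; the DMPTool's actual output may differ.
dmp_fragment = {
    "dmp": {
        "title": "Example data management plan",
        "contributor": [
            {
                "name": "Jane Researcher",
                "affiliation": {
                    "name": "University of California, Berkeley",
                    "affiliation_id": {
                        "identifier": "https://ror.org/01an7q238",  # Berkeley's ROR ID
                        "type": "ror",
                    },
                },
            }
        ],
        "project": [
            {
                "funding": [
                    {
                        "funder_id": {
                            "identifier": "https://doi.org/10.13039/100000001",  # NSF's Funder Registry DOI
                            "type": "fundref",
                        }
                    }
                ]
            }
        ],
    }
}

print(json.dumps(dmp_fragment, indent=2))
```

The `affiliation_id` and `funder_id` entries are what make the plan’s organizational links machine-resolvable: any system receiving this JSON can dereference the ROR or Funder Registry identifier rather than matching on organization names.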

Organizational administrators of the DMPTool may notice an increase in the number of users affiliated with their institution. As part of integrating with ROR and Fundref, we have connected 4,750 previously unaffiliated users with their host institutions by matching email domains. 

Conditional Questions and Email Notifications

DMPTool administrators who create or customize templates will be excited to learn that they can now reduce the number of questions included in a customized template by skipping questions. For example, if a research project is not creating or using any sensitive data, a template can now be modified to skip questions related to the special handling of sensitive data.

Additionally, the new feature includes the ability to set email notifications that are triggered if a user selects a specific answer. For example, you may want to create an alert for large data volumes. 

Read more about utilizing these new features in our documentation or watch this video tutorial created by our DMPRoadmap colleagues at DMPOnline. 

Two important things to note about creating conditions and email notifications: 

1. If you are creating a new template, save all questions first and then set the conditions on them.

2. This feature only works on questions with structured answers, such as checkboxes, drop-downs, or radio buttons. You can add a condition on a single option or on a combination of responses.

There has also been interest in enabling questions to be displayed when the user answers a conditional question instead of hiding them (the current default). We are currently consulting with the community to better understand the use cases, functionality, and scope of technical work to add this feature. This will be released after we complete our current work migrating to Rails v5 (Summer 2020). 

Google Analytics

Organizational administrators can now use Google Analytics to track web statistics for their account within the DMPTool. Statistics retrieved by Google Analytics include the number of sessions, users, average session duration, and pageviews. 

Users who are already using Google Analytics for tracking may want to add the DMPTool to their account. Connecting DMPTool to Google Analytics is a quick and easy process — simply copy the tracker code from your Google Analytics account and paste it into your Organizational Details page in the DMPTool and you’re good to go.

For further details about adding your DMPTool account to Google Analytics, please see our help documentation. Existing DMPTool Usage Statistics also remain accessible from within the DMPTool for all organizational administrators. 

Support for Multiple Contributors

A new tab entitled “Contributors” is now visible within the Create Plan interface. Here users can list contributors to a grant, including their ORCIDs, and select a role for each individual. This feature utilizes the Contributor Roles Taxonomy (CRediT) to describe each contributor’s specific contribution. Using ORCIDs and a controlled vocabulary for roles will facilitate the tracking of key people involved in the project, allowing contributors to receive credit for their work and enabling other stakeholders to identify the key individuals involved.

Project start/end dates

To support our machine-actionable DMP work, we have added project start and end dates to the project details page. Having these key project dates as part of the DMP is essential for triggering actions at the appropriate moment. For example, a project end date can trigger an action to notify key stakeholders, such as repository managers or storage administrators, at the end of the grant.
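
As a rough sketch of the kind of automation this enables (the function and the notification messages below are invented for illustration and are not part of the DMPTool):

```python
from datetime import date

def actions_due(project_end, today):
    """Sketch of a date-driven maDMP trigger: once the project end date
    has passed, return notifications for stakeholders who need to act
    on the project's data."""
    if today < project_end:
        return []  # project still running; nothing to trigger yet
    return [
        "notify repository manager: deposit final datasets",
        "notify storage administrator: review allocated storage",
    ]

# A project whose end date has passed triggers both notifications.
print(actions_due(date(2020, 6, 30), date(2020, 7, 1)))
```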

What’s next?

Together with our DMPRoadmap colleagues, we are currently upgrading our infrastructure to Rails 5. This is a substantial piece of development work that, although entirely on the backend and invisible to users, is essential to keeping our service running and adding additional requested features. We expect this development to be completed in July.

Following the Rails upgrade, work will continue on our maDMP initiative and we plan on pushing out a feature to mint DOIs for DMPs late this summer. Additional features we are developing simultaneously include: support for multiple datasets within a DMP, an updated API and the ability to export plans as JSON, and a new template builder to facilitate the creation of maDMP templates within the application. We’ll continue to update you here as development work progresses.

As always, feedback or questions are most welcome and can be sent directly to maria.praetzellis@ucop.edu.

DMPRoadmap Team at the maDMP Hackathon

Research Data Alliance (RDA) recently hosted a three-day (27-29 May 2020) machine-actionable DMP hackathon to build integrations and test the Common Standard for maDMPs. The event, coordinated through teams at RDA-Austria and TU Wien, was well attended, with over 70 participants from Australia, Europe, Africa, and North America. 

The teams that work on DMPTool (dmptool.org) and DMPonline (dmponline.org) were really pleased to represent our shared DMPRoadmap codebase and show our conformance with the standard and our ability to exchange DMPs across systems. This blog post details the work of the DMPRoadmap group in the hackathon; for a full review of all outputs, please visit the Hackathon GitHub.

What did we work on?

Maria Praetzellis and Sarah Jones, product managers from DMPRoadmap, joined the hackathon “TigTag” team and focused on mapping maDMPs to funder templates. During the hackathon, their group successfully mapped required questions from several funder-specific DMPs, including: 

  • Horizon 2020
  • Science Europe
  • National Science Foundation
  • U.S. Geological Survey

The goal of the exercise was to develop guidance on how to normalize the ways that fields from specific funder templates can be mapped to the standard, and, when necessary, develop extensions to incorporate template specific needs. The team came up with several proposals for changes to the documentation and structure of DMP Common Standard and made a few recommendations for extensions to the standard. The team is now assembling the recommendations and will submit ideas as issues to the Common Standard GitHub so work can be tracked going forward. 

Brian Riley and Sam Rust, developers from DMPRoadmap, joined the hackathon “DMP Exchange” team and worked to determine how the RDA Common Standard JSON format could be used to exchange DMP metadata between tools. Their team provided a staging service and granted API keys to other development teams to allow testing of prototypes, which helped all participants debug issues. Over the course of the hackathon, our new maDMP API helped the developers of three other DMP systems implement their own APIs.

Based on this work, we were able to exchange maDMP metadata between the DMPTool and those three systems by the end of the hackathon. Below are screenshots of DMP exports from the Data Stewardship Wizard that were imported into the DMPTool. Because we were each using the RDA Common Standard format, the new DMP was created within the DMPTool and the appropriate metadata was successfully mapped: title, description, project start/end dates, grant ID, contact information, and contributor information.
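
A minimal sketch of the field mapping such an exchange involves (the field names follow the RDA Common Standard draft; the helper itself is hypothetical and not DMPRoadmap code):

```python
def map_rda_to_plan(rda):
    """Map the subset of RDA Common Standard fields exercised in the
    hackathon exchange (title, description, dates, grant ID, contact,
    contributors) onto a flat plan record."""
    dmp = rda["dmp"]
    project = (dmp.get("project") or [{}])[0]
    funding = (project.get("funding") or [{}])[0]
    return {
        "title": dmp.get("title"),
        "description": dmp.get("description"),
        "start": project.get("start"),
        "end": project.get("end"),
        "grant_id": (funding.get("grant_id") or {}).get("identifier"),
        "contact": (dmp.get("contact") or {}).get("mbox"),
        "contributors": [c.get("name") for c in dmp.get("contributor", [])],
    }

# Example payload in the shape another maDMP tool might export.
incoming = {
    "dmp": {
        "title": "Ocean survey DMP",
        "description": "Plan exported from another maDMP tool.",
        "contact": {"mbox": "pi@example.edu"},
        "contributor": [{"name": "Jane Researcher"}],
        "project": [{
            "start": "2020-01-01",
            "end": "2021-12-31",
            "funding": [{"grant_id": {"identifier": "NSF-123456"}}],
        }],
    }
}

print(map_rda_to_plan(incoming))
```

Because both systems agree on where `title`, `project`, and `contributor` live in the JSON, the mapping is mechanical; disagreements only arise for fields outside the standard.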

While the data models used by many systems do not yet offer full support of the RDA Common Standard model, progress was made towards mapping the high level DMP information across the board. Also, the confirmation that these systems could exchange information using RDA Common Standard JSON was encouraging and will likely open the door for future integrations. 

Other outcomes

We also collaborated with members of the DMP Melbourne team and colleagues from the University of Cape Town and Stockholm University on an integration with their institutional repository platform. The teams were interested in pushing both DMP metadata and the physical DMP document into the repository, which did not yet support the maDMP standard. The team therefore created two separate prototype scripts. The first extracts DMPs from a DMPRoadmap system, creates a placeholder Project that future datasets can be connected to, and uploads a PDF copy of the DMP. The second converts the repository’s JSON into RDA Common Standard compliant JSON. While these institutional repositories do not contain many DMPs at this point, a service like this could help extract DMPs for import into DMP systems that utilize the RDA Common Standard. We hope to build upon this work to facilitate integrations with additional repositories in the future. 

Future work 

Hackathon participants are now collating work produced during the hackathon into a final report. In addition, participants expressed interest in:

  • More communities. Most of the attendees at this hackathon were developers from DMP-focused tools. In the future, it would be great to have participants from other communities, including developers of CRIS systems, data repository platforms, and ethics tools. This would help us expand the types of use cases being served.
  • More PIDs. The power of connected information relies on persistent identifiers. We would like to increase our connections with various standards and integrate with the Research Organization Registry (ROR), the Funder Registry, and the Contributor Roles Taxonomy (CRediT) to provide more structured information to support such integrations.

Thank you again to the team at RDA Austria and TU Wien for organizing the hackathon. If you’re interested in tracking future development and outputs of this work, please follow the GitHub and consider joining the RDA Common Standard Working Group or the Active DMPs Interest Group.

The DMPTool needs you!

The DMPTool, which as of February 2020 has supported 44,415 users and 266 participating institutions, currently maintains thirty-eight templates for twenty-two different federal and private funders. It is no secret that funder requirements and the accompanying application and data management guidance change regularly, and it is crucial that the DMPTool reflects these changes in real time to remain a high-integrity resource. Staying up to date with rapidly changing grant requirements and RDM best practices is no small task, however; there is no magic wand or automated system to keep the DMPTool in sync with current requirements. How, then, does it happen? The key factor in the success of the DMPTool over the last 9 years has been, and continues to be, the contributions of and collaboration with the user community.

As the capabilities of the DMPTool expand and demand for it rises, we are calling for additional community members to contribute and help ensure its continued success. We’re therefore pleased to announce the formation of, and the invitation to join, the DMPTool Editorial Board. Our goal for the Board is to formalize existing community involvement in the tool and to have representation across disciplines and areas of expertise, from a wide range of institutions, including librarians along the full career spectrum. Experience working with DMPs is desirable, and we welcome applications from any individuals committed to supporting effective research data management.

Responsibilities for members of the Editorial Board include the following:

  • One-year term of service (with the opportunity to extend if desired)
  • Short bi-monthly (or as-needed) meetings
  • Individual ownership of specific funder templates, linked to your area of focus
  • Creation of new templates as needed
  • Suggestions for default guidance and best practices
  • Identification and publication of example DMPs to include in the tool
  • An estimated 1-4 hours of work a month checking for requirement updates from funders, reporting to the Board, and updating templates and guidance in the DMPTool

Joining the DMPTool Editorial Board presents an excellent opportunity to meet fellow research data management professionals, actively contribute to the community, help support a critical piece of open-source technology advancing research data management, and keep abreast of changes in funding requirements and the larger funding ecosystem. Editorial Board members will work to ensure the tool provides current information about grant requirements and corresponding guidance.

We hope you will consider this invitation to contribute and apply! We have opened applications and responses are due Friday, March 13. All questions, comments, concerns, or advice are welcome: maria.praetzellis@ucop.edu. We look forward to hearing from you!

New Year, New DMPTool Release

Our latest DMPTool release includes several exciting features and improvements, including a new API. The release highlights are outlined below; for a comprehensive listing, please check our release notes for v2.1.4 and v2.1.3. 

API 

DMPTool administrators who have been granted tokens can now access statistical information about their organizational accounts and query plans created in the DMPTool via our new full-text API. The API currently has two endpoints: Plans and Statistics. 

The full text Plans API allows users to retrieve plans as a JSON file and filter by dates, specific templates, users and plans. This new API will be essential in our work with machine-actionable DMPs as it will enable the export of plans into other RDM systems and facilitate further integration with external applications. 

The Statistics endpoint includes data regarding users, templates and plans. Users can retrieve information from this endpoint on queries such as: number of users who have joined your organization; number of plans created by your organization’s users; and metadata about all plans created by all users from your organization.

To use the API, your organization must be granted permissions for each endpoint; an API token will then be displayed on your ‘edit profile’ page. To request access to the API, please contact us.
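
By way of illustration only, a query against the Plans endpoint might be assembled like this (the endpoint path, parameter names, and authorization scheme shown are assumptions for the sketch, not documented DMPTool routes; consult the API documentation once you have a token):

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://dmptool.org/api/v1"  # assumed base path for the sketch
TOKEN = "your-api-token"             # issued to your organization on request

def plans_request(created_after=None, template=None):
    """Build an authenticated GET request for a hypothetical Plans
    endpoint, optionally filtered by creation date and template."""
    params = {k: v for k, v in
              {"created_after": created_after, "template": template}.items()
              if v is not None}
    url = f"{BASE}/plans"
    if params:
        url += "?" + urlencode(params)
    # The token travels in an Authorization header (scheme assumed).
    return Request(url, headers={"Authorization": f"Token token={TOKEN}"})

req = plans_request(created_after="2020-01-01")
print(req.full_url)
```

The request object would then be passed to `urllib.request.urlopen` (or an equivalent HTTP client) to retrieve the plans as JSON.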

Please keep in mind that we will be updating the API to conform to the DMP Common Standard in Spring of 2020, so while the API is up and ready for use, we recommend holding off on building any integrations or applications around it until the updated version is released. 

One Click Plan Creation

Screenshot of the ability to create a new plan from the Funder Requirements page.

Users can now create a plan for a specific funder template from the Funder Requirements page instead of going through the create plan page. You can also retrieve a static URL to the plan that can be sent along to users, thus enabling them to go straight to the desired page. (A big thank you to DMPOPIDoR for contributing this new feature to our shared codebase.)

Accessibility

Building on several months of analysis, testing, and expert recommendations, the DMPRoadmap crew has been working towards making the DMPTool accessible to all users, including those with disabilities. Highlights of the new accessibility features include support for assistive technologies, improved visual cues, and improvements to text magnification tools. A full list of all accessibility issues addressed in this release is available in our git repository.

Create departments within an organization

The new Department field enables administrative users to define specific schools or departments within their organization. Our partners at DMPOnline have made a short video demonstrating how to utilize this new feature. 

Request Feedback

The button to Request Feedback on a plan has been moved to its own tab in an effort to highlight this feature. If you have the ability to request feedback enabled for your organization, it will now appear after the Share tab when creating a plan. If you don’t have this feature enabled but are interested in learning more, please check our documentation or contact us for any questions.

Request feedback tab has moved to the end

This year promises to be a busy one for the development crew, with many big features currently in the works, including machine-actionable DMPs, improved usage dashboards, and Zenodo/RIO Journal integration. For a high-level overview of our upcoming work for 2020, please check out our development roadmap.

As always, feedback or questions are most welcome and can be sent directly to maria.praetzellis@ucop.edu.

DMP services unite!

This November the DMPRoadmap team conducted a series of strategic planning meetings. Meeting in person was highly productive and a great way to energize the team for the ambitious work we have planned for the upcoming year. Read more about the meeting and our development goals below. This blog post was originally published by Magdalena Drafiova from DMPonline on 3 December, 2019.

From left to right: Brian Riley, Benjamin Faure, Marta Nicholson, Maria Praetzellis, Sarah Jones, Sam Rust and Ray Carrick.

In the middle of November we were joined for three days by our colleagues Maria Praetzellis and Brian Riley from DMPTool and Benjamin Faure from OPIDoR. On our end Sarah Jones, Sam Rust, Ray Carrick, Marta Nicholson, Diana Sisu and Magdalena Drafiova represented DMPonline. We’ve had a number of new people join the team over the past year so the meetings were a great opportunity to get to know one another and discuss where to take things next.

Over the three days we had a mix of group discussions to plan the future development roadmap (results of that later), as well as developer/project manager sessions and discussions with the wider DCC and CDL team on machine-actionable DMPs. Below we report on the results of our sessions and the future development roadmap.

Developer team meeting

The tech team had a separate meeting to give more time to discussing changes to the codebase and development procedures. They walked through the data model and key functionality to bring new devs up to speed and discussed major pieces of infrastructure work to schedule over the coming year (e.g. upgrading to Rails v5, building a more robust test infrastructure). They also reviewed the current development project management processes and will be revising our PR review workflow and incorporating a continuous integration approach. This will allow developers to work more atomically: a single bug fix or feature enhancement will now be handled individually instead of as a component of a larger single release. Each issue will be merged into the codebase as a single point release, allowing the team to work more efficiently and making it easier to accept contributions from external developers.

Project management meeting
Magdalena, Maria, Sarah and Diana discussed procedures for prioritizing tickets, managing the team and conducting User Acceptance Testing (UAT). Sarah and Diana will share expertise in weekly PM meetings to bring Magdalena and Maria up to speed. We have also decided to change our sprint schedule as we will be joined by more developers. We want to do our releases more often and have fewer tickets on the board so we can review them all in each call. This, coupled with the continuous integration approach, should get fixes and features out more quickly. We have assigned a developer to each area we want to work on, although we want to ensure that knowledge is shared and everyone has an opportunity to work across the codebase so we don’t create dependencies.

We also discussed the need to conduct user testing, especially on the administrative area of the tool. This will involve setting some tasks and observing users complete them to see what issues they encounter and where the tool is not intuitive. We hope to run these tests in Summer 2020. If you would be interested in getting people from your organization involved, please let us know.

Development roadmap
We agreed on the development roadmap by dividing our key areas of work into time phases. Some activities are ongoing system improvements and will happen throughout the time periods. The first part of the work, which we hope will run until February 2020, is around the feedback we have received in our user groups. This work will finalize the conditional questions functionality, improve search for administrators and make the usage dashboard more insightful so you can get better analytics about how the tool is used at your institution. We will also integrate a new feature from DMP OPIDoR to enable one-click plan creation. From the public templates page, users will be able to click on an icon and create a plan based on that template. We are also planning integrations so you can export DMPs to Zenodo and RIO Journal, and we will complete our work on regional filtering to separate funders/templates/organizations by country.

The second part of the work will focus on making our default template machine-actionable by adding integrations to controlled vocabularies, a re3data repository selector, license selector, fewer free text fields, as well as important identifiers for users (ORCID ids) and organizations (ROR ids). We will also update our API so that it conforms to the RDA Common standard.

We will finish the year by adding new features that allow administrators to pre-define a subset of good, institutionally shared plans. We will also improve plan versioning and the plan lifecycle so you can indicate the status of a plan. We will also work on incorporating multiple datasets into DMPs so you can get better insights about varying storage requirements, license requirements, etc. Enabling static pages to be edited is also on the to-do list. Lots to look forward to!

What’s new with our machine actionable DMP work?

Building on the conceptual framework laid out in articles such as Ten principles for machine-actionable data management plans and prior blog posts covering such topics as what maDMPs are, what they can do to support automation, utilizing common standards and PIDs, and maDMPs as living documents, we are now moving into active development on the technical aspects of our NSF-funded EAGER research project.

A phased approach: building a plan for maDMPs

The goal of our EAGER research project is to explore the potential of machine-actionable DMPs as a means to transform DMPs from a compliance exercise based on static text documents into a key component of a networked research data management ecosystem. This ecosystem will not only facilitate but also improve the research process for all stakeholders. 

We will be laying out the phases of work in the coming months and will continue to use this blog to keep the community informed of our progress, and to solicit your feedback and ideas.

Phase 1 Workplan

Phase 1 of our research entails exploring the following three high-level ideas:

  1. How best to restructure the DMPTool metadata to utilize the RDA Working Group Common Standard
  2. How to optimize the Digital Object Identifier (DOI) metadata schema for DMPs
  3. How best to incorporate other persistent identifiers (PIDs) into DMPs

Common Standards

The common data model for the creation of machine-actionable DMPs, produced by the RDA working group on DMP Common Standards, was recently released for community feedback. Our partners at the Digital Curation Centre (DCC) have now implemented this model in the DMPRoadmap codebase. A big thank you to Sam Rust of DCC for his work on this! Those interested in learning more about the Common Standard in DMPRoadmap may want to view a recent webinar recording of Sam detailing this work. This was a fundamental step towards machine-actionable DMPs, as it forms the foundation for enabling information to flow between DMPs and affiliated external systems in a standardized manner.

DOIs for DMPs

With our partners at the Digital Curation Centre (DCC), we are working to incorporate the Common Standard into the shared DMPRoadmap codebase and our DMPTool development plans. As part of this work, we have partnered with DataCite to update their metadata schema to better support DMPs and to optimize a workflow for generating DOIs for DMPs. By relying on the DOI infrastructure, we will be able to utilize DataCite’s Event Data service to record when assertions have been made on the DOI. More on the workflows surrounding this aspect of the project below. 

DMPs and the PID graph

Projects such as FREYA have been working to connect research outputs through a PID graph. A key question underpinning much of our work is how we can best leverage the PID graph (see Principle 5: Use PIDs and controlled vocabularies) within the DMP ecosystem. To connect DMPs to the larger PID ecosystem, our first phase will also include incorporating the following persistent identifiers into the DMP as a baseline for future work:

Phase 1 workflows

As discussed above, in Phase 1, we are building a system to mint DOIs for DMPs and creating a landing page for DMP DOIs to record updates to the DOI that occur over time. Although the system can be thought of as a giant API, pulling and pushing data from various sources, we are also building a landing page for these DOIs in order to visually demonstrate the types of connections made possible by tracking a research project over time from the point of DMP creation. 

Below is a high level overview of this workflow and whiteboarding of its potential architecture. (For those that would like a more detailed view, please check out our GitHub).maDMPRegistry

  1. maDMP system accepts common standard metadata from DMPTool (DMP Roadmap) 
  2. maDMP system sends that metadata to DataCite to mint a DOI (which it then returns to the DMPTool)
  3. A landing page is generated for the DMP DOI
  4. A separate harvester application queries outside APIs to check for assertions recorded against the DOI. For this phase we will work with the NSF Awards API and return any award information to the maDMP system. 
  5. The maDMP system then sends any award info returned to DataCite 

Our goal is to leverage the work being done by the RDA Exposing DMPs working group to help address the privacy concerns around exposing certain types of assertions on this landing page.

Next Steps

Looking ahead, we plan to produce a basic prototype ready for testing and feedback by the end of October. I will be presenting on our work thus far at the upcoming RDA and CODATA meetings. During these meetings, I look forward to continuing our work with the RDA Common Standards Working Group (and to meeting many of those active in this space in person for the first time)!

Once we establish the workflow to record assertions to a DMP DOI, our next phase of work will include pilot projects with domain-specific and institutional stakeholders to test the flow and integration of relevant information across services and systems. With these partners we plan to test how maDMPs can help track data management activities as they occur during the course of a grant project. 

Finally, it’s important to note that all of our development work is being done in a test environment where we will continue to iterate for the next several months as we determine how best to deploy new features to the DMPTool and DMPRoadmap codebase. 

Interested in contributing?

Lastly, we realize that maDMP is far from the most euphonious or creative name for this service (nor is our original idea of the DMPHub much better). We are open to any and all ideas for naming this work so if you have any ideas, however strange or off the wall, please do let us know. If we use your idea we promise to shower you with accolades for your denomination genius. Also, free stickers galore.

To review or contribute to the technical components of the project check out our GitHub. And most importantly, please send any and all feedback, questions, or ideas for names to maria.praetzellis@ucop.edu.


What’s new with the DMPTool?

The past few months have been quite fruitful in terms of pushing forward on the technical details surrounding machine-actionable DMPs.

The common data model for the creation of machine-actionable DMPs, produced by the RDA working group on DMP Common Standards, was recently released for community feedback. With our partners at the Digital Curation Centre (DCC), we are now actively incorporating this model into the DMPRoadmap codebase and our DMPTool development plans.

As part of our NSF EAGER grant, CDL has partnered with DataCite to explore how DOI infrastructure could enable passing information between RDM systems and support integration among related systems. The initial phase of this work includes piloting workflows that efficiently move information between stakeholders, systems, and researcher workflows. Our goal is a working prototype developed by mid-October of this year. This is exciting as it represents the first step towards realizing our long-term goal of machine-actionable DMPs as critical infrastructure in the research process.


Community involvement 

Another key goal for the coming months is to re-engage the DMPTool community through regular virtual user meetings and the re-creation of advisory boards. Most importantly, we want to hear more from you about how the DMPTool is working (or not) and gather feedback on future developments and ideas for making the DMPTool even more useful and vital. In the coming weeks, we will reach out with more details; in the meantime, please feel free to contact me and introduce yourself!

I am interested in hearing any input, questions, comments or feedback. You can contact me directly at maria.praetzellis@ucop.edu.

Meet the new DMPTool Product Manager

Today, August 19, marks my seventh week as the new DMPTool Product Manager, and the latest Research Data Specialist to join the team at UC3. I’m thrilled to be joining such an active and engaged community of professionals committed to the principles of open science, open infrastructure, and public access to research and knowledge.

As I take the reins from Stephanie Simms, I’m grateful for her instrumental work in rethinking the capabilities of a data management plan (DMP) and her work with the community in developing the conceptual frameworks and use cases for the creation of machine-actionable DMPs. As I’ve learned more in these first weeks, I am invigorated by the plans for machine-actionable DMPs, seeing the critical role they could play in research and data sharing and the exciting potential for expanding their dynamism, utility, and centrality to research data workflows. 

Prior to joining CDL, I was a Program Manager in the Web Archiving and Data Services group at the Internet Archive, where I managed domain-scale web harvesting, dataset and indexing services, and computational access to large-scale data for researchers. I bring a strong background in product management for services used by a global set of partners and a commitment to community-driven feature development and system integrations. 

I’m looking forward to expanding upon this experience as I begin work on furthering development of the DMPTool, keeping in step with what can be useful to and benefit the community, and advancing our shared commitment to open access to research and research data.

Please feel free to reach out and introduce yourself! I’m eager to receive any feedback or questions. You can reach me directly at maria.praetzellis@ucop.edu.

Representing time in machine-actionable DMPs

In this next installment of the machine-actionable DMP blog series, we want to address the broader context of time to home in on answering the following question:

How and when do you update some piece of information in a DMP?

This happens to be the substance of Principle 9 from our preprint (Miksa et al. 2018, forthcoming in PLOS): maDMPs should be versioned, updatable, living documents.

DMPs should not just be seen as a “plan” but as updatable, versioned documents representing and recording the actual state of data management as the project unfolds. The act of planning is far more important than the plan itself, and to derive value for researchers and other stakeholders, the plan needs to evolve. DMPs should track the course of research activities from planning to sharing and preserving outputs, recording key events over the course of a project to become an evolving record of activities related to the implementation of the plan.

We can all agree that it’s important to treat maDMPs as living documents, but there are multiple approaches we might take to updating them, and multiple stakeholders who should be able to provide updates for particular pieces of information at particular points along the way. First we’ll provide a quick overview of the current state of DMP-time as represented in systems and policies related to our NSF EAGER project, plus a handful of other relevant systems and policies that extend the geographical and organizational scope. Then, we’ll pitch an idea for how we can handle DMP-time using Crossref/DataCite Event Data Service. We welcome, nay encourage your feedback about this and other ideas as we experiment and iterate and prove things out in practice.

Representing time in DMPs

So we built a graph database with seed data from our partners at BCO-DMO and the UC Gump Field Station on Moorea, and enriched it with information from the NSF Awards API and public plans created with the DMPTool. All of the projects represented in the database correspond with NSF awards and therefore the DMPs have an associated timeline of:

  1. Create DMP and submit grant proposal (via institutional Office of Research, NSF Fastlane system)
  2. Grant awarded (grant number issued by NSF)
  3. Grant period ends, final report due (data deposited at appropriate repository)

This current grant/DMP workflow fails to capture information about actual data management activities as they unfold over the course of a project. However, data management staff at BCO-DMO and the Gump Field Station perform interventions and provide manual updates in their own repository systems opportunistically. These updates can occur during active stages of multi-year projects, but most of them happen at the grant closeout stage, when researchers are engaged with reporting activities and aware that they must deposit their data. Relevant NSF program officers from the Geosciences Directorate conduct manual compliance checks to ensure that grantees have deposited data prior to issuing a new award, which is a very useful feature of this case study.

In addition to the data repository systems, information about these projects flows through institutional grant management systems, NSF’s Fastlane system, and a subset is made publicly available via the NSF Awards API (example of our award). Each of these systems records the start date and end date for the award, and some include interim reporting dates. Our ongoing analysis for maDMP prototyping is focused on identifying additional milestones during the course of a project and which stakeholders should be responsible for updating which pieces of information…drilling into the original question of how and when do you update things?
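To illustrate how award dates could be pulled programmatically, here is a small sketch of a query against NSF's public award search API. The endpoint and parameter names follow NSF's documentation, but the award id and selected fields are examples only.

```python
from urllib.parse import urlencode

# Hedged sketch: building the URL a harvester would fetch from the public
# NSF Awards API to retrieve an award's dates. The award id and field list
# below are placeholders.
NSF_AWARDS_ENDPOINT = "https://api.nsf.gov/services/v1/awards.json"

def build_award_query(award_id, fields=("id", "title", "startDate", "expDate")):
    """Return the URL for looking up a single award and its key dates."""
    params = {
        "id": award_id,                    # NSF award number from the grant
        "printFields": ",".join(fields),   # restrict the response to these fields
    }
    return f"{NSF_AWARDS_ENDPOINT}?{urlencode(params)}"

url = build_award_query("1234567")
# A harvester would fetch this URL, extract startDate/expDate, and record
# them as milestones associated with the corresponding DMP.
print(url)
```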

DMP-time in European contexts

To avoid an overly narrow focus on one national context and one funding agency in this larger thematic discussion about time, we’ll also consider some European examples. The European Commission’s Horizon 2020 program acknowledges the fact that information about research data changes from the planning to final preservation stages; as a result, DMPs have built-in versioning. Projects awarded Horizon 2020 funding must submit a first version of the DMP within the first 6 months of the project. The DMP needs to be updated over the course of the project whenever significant changes arise; however, this “requirement” is somewhat vague and reads more like a best practice. Updated versions of the DMP are required at any periodic reporting deadline and at the time of the final report. DMPonline provides an optional set of Horizon 2020 templates that includes 1) an Initial DMP, 2) a Detailed DMP, and 3) a Final review DMP.

Our maDMP collaborators at the Technical University of Vienna are forging ahead with their own institutional prototyping efforts to automate DMPs and integrate them with local infrastructure. They just released this excellent interactive “mockups” tool and invite your feedback. Within the mockups system, time is represented through the concept of DMP Granularity and in some cases this is related to funding status. The level of granularity corresponds roughly with versions, which carry the labels “initial, detailed, or sophisticated.”

Representing time in maDMPs: Ideas for the future

The ability to update DMPs is central to our own plans for realizing machine-actionability and relies on infrastructure that already exists. In a nutshell, our idea is to insert DMPs and corresponding grant numbers into the sprawling web of information connecting people and their published outputs. We think the mechanism for accomplishing this is to issue DataCite DOIs for DMPs: this creates an identifier against which we can assert things programmatically. In addition, this hooks DMPs into Crossref/DataCite Event Data, which is a stream of assertions of relationships between research-related things. Existing and emerging registries of information are already leveraging this infrastructure—Scholix, ORCID, Wikidata, Make Data Count, etc. DMPs and grant numbers would provide a view of the connections between everything at the project level.

Documentation for Event Data explains that it “is a hub for the collection and distribution of a variety of Events and contains data from a selection of Sources. Every Event has a time at which it was created. This is usually soon after the Event was observed. In addition to this, every Event has a theoretical date on which it occurred…dates are represented as the occurred_at, timestamp and updated_date fields on each Event. The Query API has two views which allow you to find Events filtered by both occurred_at and timestamp timescales. It also lets you query for Events that have been updated since a given date.” This hub of information would therefore support versioning of the DMP as well as dynamic updating of key pieces of information (e.g. data types, volumes, licenses, repositories) by various stakeholders over time. Stakeholders could rely on this open hub of information and begin to make plans based on it (e.g., a named repository learns that a TB of data is expected within a specific timeframe).
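As a sketch of what querying this hub might look like, here is how a stakeholder could build an Event Data Query API request for everything asserted about a DMP's DOI since a given date. The endpoint and the `obj-id`/`from-occurred-date`/`mailto` parameters follow the Event Data documentation; the DOI and email address are placeholders.

```python
from urllib.parse import urlencode

# Hedged sketch: querying the Crossref/DataCite Event Data Query API for
# events whose object is a DMP's DOI, filtered by occurrence date. The DOI
# and contact email are placeholders.
EVENT_DATA_ENDPOINT = "https://api.eventdata.crossref.org/v1/events"

def build_event_query(doi, since=None, mailto="you@example.org"):
    """Return the Query API URL for events asserted against a DOI."""
    params = {"obj-id": doi, "mailto": mailto}
    if since:
        # Filter by the date the event occurred (vs. when it was collected)
        params["from-occurred-date"] = since
    return f"{EVENT_DATA_ENDPOINT}?{urlencode(params)}"

query = build_event_query("10.1234/example-dmp", since="2019-01-01")
print(query)
```

A named repository, for example, could poll such a query to learn that a TB of data is expected within a specific timeframe.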

In this scenario, the DMP would become an assertion store (cf. Wikidata and Wikibase). The assertion store would have a timeline component and anyone could use the DMP identifier to ping/query the Event Data Query API and find out what’s been asserted about the project. Various DMP stakeholders could also assert things about the project and update information over time. Each stakeholder could query and model DMP information based on the types of relationships and get the specific details they’re interested in… so an institution could discover who their PIs are collaborating with, a funder could check if a dataset has been deposited in a named repository, a repository manager could search for any changes to a specific project or all relevant projects within a specific date range, etc. Wikidata has already begun indexing policies, in fact; once this happens at scale and is integrated with indexing of datasets, we could have automated dashboards displaying policy compliance and project progress.
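The stakeholder queries above can be sketched as simple filters over the assertion store: once events for a DMP DOI have been fetched, each stakeholder slices them by the relation types and date ranges they care about. The event shape below is deliberately simplified and all DOIs are placeholders.

```python
from datetime import date

# Illustrative sketch: stakeholders filtering a DMP's assertion store by
# relation type and occurrence window. Event records are simplified and the
# DOIs are placeholders.
events = [
    {"relation": "references", "occurred": date(2019, 2, 1), "obj": "doi:10.5555/dataset-1"},
    {"relation": "is_supplemented_by", "occurred": date(2019, 6, 15), "obj": "doi:10.5555/dataset-2"},
    {"relation": "references", "occurred": date(2020, 1, 10), "obj": "doi:10.5555/paper-1"},
]

def assertions(events, relation=None, start=None, end=None):
    """Return events matching an optional relation type and date window."""
    out = []
    for e in events:
        if relation and e["relation"] != relation:
            continue
        if start and e["occurred"] < start:
            continue
        if end and e["occurred"] > end:
            continue
        out.append(e)
    return out

# e.g. a repository manager checking for changes since mid-2019:
recent = assertions(events, start=date(2019, 6, 1))
print([e["obj"] for e in recent])
```

A funder's deposit check or an institution's collaboration view would be the same pattern with different relation types.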

That’s about it. Please tell us what you think about this approach to transforming a DMP into something active and updated, versioned and linked to research outputs.