New NSF Templates – Mirrors of the Research.gov Webform – Now Available

TL;DR

  • Three new NSF templates – one for the SBE Directorate, one for EDU, and a generic one for all other Directorates – are now published in the DMP Tool, along with selected guidance from NSF on how to complete them
  • The content, questions, and options mirror those in the Research.gov webtool, so people can plan their DMP in advance, collaborate, and gather feedback
  • Researchers will need to transfer their responses into the webform at the end instead of uploading a PDF
  • We will continue to monitor and update as directorates add more response options to the form

Introduction

Like many in the data management community, we’ve been following changes to data management plans (DMPs) at the National Science Foundation. Instead of being submitted as a PDF, the DMP is now filled out as a form on Research.gov. This has both positive and negative implications for proper planning, collaboration, and interoperability, which we discuss in more detail on Upstream. In this post, we will walk through how we replicated the form in the DMP Tool so people can use it to prepare their DMP in advance of completing the final version on Research.gov. This way, people can still get personalized guidance and feedback from their organization, and collaborate with other users, before copying the final output over to NSF’s webform.

NSF’s New DMSP Webform

The way the Research.gov form works is that you add specific Data and Research Product Categories, one category at a time (up to 4 total).  Each category accounts for all data of that type collected for a research project.  For example, if a project collects two distinct sets of human MRI data that will be published as separate datasets, that would still go in as one category of “human MRI data.” For each broad data category, the researcher will report on:

  • Which access policies or limitations, if any, apply (i.e., reasons not to fully share data publicly, such as legal considerations)
  • What data standards and metadata will be used (i.e., the format and standard of the data, such as BIDS)
  • The provenance of the data (i.e., whether it’s an existing resource or a new collection)
  • The public archiving location (i.e., the repository where it will be stored, such as OSF)
  • The timeline for public accessibility (which is expected to be the time of publication unless there is a justified reason for extending)
  • The duration of data availability (which is expected to be at least 2 years unless there is a justified reason for less), including a confirmation that the retention policy of the repository will be adhered to
  • Accountability for data management (i.e., which PI or co-PI is responsible)

Some of these questions always provide a list of standard options to select from, such as the timeline and duration of data availability, while others depend on whether the Directorate has entered options, such as Data and Research Product Category, Data Standards, and Public Archiving. At the time of this post, most Directorates have not entered options for those, so those questions just offer an “Add New” option for researchers to write in their own.
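To make the shape of the webform concrete, here is a rough sketch of how a completed plan’s responses could be organized as structured data. The field names and values are our own illustration for this post, not NSF’s actual Research.gov schema.

```python
# Illustrative only: field names and values are our own sketch for this
# post, not NSF's actual Research.gov schema.
plan = {
    "categories": [  # 1 to 4 Data and Research Product Categories
        {
            "title": "Human MRI data",
            "description": "All MRI data collected for the project",
            "access_limitations": ["Human Data Protection"],
            "data_standards": ["BIDS"],
            "provenance": "new collection",        # or "existing resource"
            "public_archive": "OpenNeuro",
            "timeline": "at time of publication",  # the default expectation
            "duration": "at least 2 years",        # plus repository policy
            "responsible_party": "PI",             # which PI or co-PI
        }
    ]
}
assert 1 <= len(plan["categories"]) <= 4  # the webform caps categories at 4
```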

How it Works on the DMP Tool

In the DMP Tool, we don’t have the exact capabilities to perfectly replicate the form, but we have published templates that capture all of its information and options, with additional guidance pulled from various NSF websites. You can download copies of the templates or start a plan at our Funder Requirements page. All older versions of the NSF template are now either unpublished or marked obsolete, and the obsolete ones will be unpublished soon.

Most applicants will use the “Generic” template that doesn’t provide extra options for data type, standards, and repositories, while those applying to the Directorate for Social, Behavioral and Economic Sciences (SBE) or Directorate for STEM Education (EDU) can use those templates in the tool to see what options those directorates provide.

First, the researcher selects how many Data or Research Product Categories they will add. This lets us hide the unneeded later sections, so they see exactly one section of questions per data category they plan to add. If they select 0, that is similar to checking the box on NSF’s webform that a detailed DMSP is not needed, and they can use the Additional Information box to justify why they don’t need one, in case they want feedback from their organization on that.

Screenshot of the DMP Tool, showing a radio button question asking how many Data and Research Product Categories the user wants to add, with options from 0 to 4 categories.  There is a text box below labeled "Additional Information," and a sidebar to the right listing Guidance from NSF with multiple paragraphs about data categories.
Screenshot of the start of the DMP Tool template

For each section, they will first enter the Title and Description of that data category.

Screenshot of the DMP Tool, showing a text field question of "Title of Data or Research Product Category" and a text box question with "Description of Data or Research Product Category."  The right sidebar has guidance from NSF on what to answer for these questions.
DMP Tool: Title and Description
Screenshot of similar questions from the NSF webform, showing a selection of "Add New" as the category, then a text field question of "Title" and a text box question with "Description."
NSF Webform: Title and Description

Or, for SBE and EDU, they can select from the options for the Title that match what they’d be shown in the NSF webform, or add their own using the Additional Information box. Note that the Additional Information box does not need to be filled out if a standard option is selected.

Screenshot of the DMP Tool showing the same Title question as previously, except now instead of a text field it is showing a long list of radio buttons to select data types, such as "computer model" and "human EEG."
DMP Tool: Data Types for SBE specifically

Next, they will answer all the follow-up questions for each category. While we don’t have the ability to add formatting to response options or validate how many are selected (e.g., to make sure people don’t select more than 6 access limitations), we provide all the same options as the webform so people can build their DMP the same way they would in the form.

Screenshot of the DMP Tool, showing a list of checkbox style questions where users can select any number of Access Policies and Limitations for data sharing, including options like "Human Data Protection" and "Resource Limitations" with definitions.  There is an Additional Information text box below the question, and a sidebar with NSF guidance to the right.
DMP Tool: Access Policies and Limitations
Screenshot of the same Access Policies and Limitations question from the NSF webform, showing a dropdown box with three options visible on screen, and more available with scrolling.
NSF Webform: Access Policies and Limitations

For SBE specifically, there are some repositories that are only shown if certain data types are selected earlier. For example, GitHub is only offered as an option if Bespoke Research Software or Computer Model is the Data or Research Product Category. Since we don’t have the functionality to customize options based on a prior question, we instead note in each response option which data types it applies to so people can select appropriately.
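To illustrate the conditional behavior, here is a minimal sketch of the kind of logic the NSF webform appears to apply for SBE (the DMP Tool instead lists every option with a note about applicable data types; option lists here are illustrative and may not match the live form):

```python
# A sketch of the conditional logic the NSF webform appears to apply for
# SBE. The DMP Tool instead lists every option with a note about which
# data types it applies to. Option lists here are illustrative.
SOFTWARE_CATEGORIES = {"Bespoke Research Software", "Computer Model"}

def archiving_options(category: str) -> list[str]:
    options = ["Databrary", "OpenNeuro", "OSF"]
    if category in SOFTWARE_CATEGORIES:
        options.append("GitHub")  # only offered for software-type categories
    return options + ["Add New"]

print(archiving_options("Human MRI data"))             # no GitHub shown
print(archiving_options("Bespoke Research Software"))  # GitHub included
```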

Screenshot of the DMP Tool, showing a list of Public Archiving options as radio buttons, with options of Databrary, GitHub, OpenNeuro, OSF, and Add New.  The GitHub and OpenNeuro options also include a list of which data types they apply to. There is an Additional Information text box below the question, and a sidebar with NSF guidance to the right.
DMP Tool: Public Archiving
Screenshot of the Public Archiving question from the NSF webform, showing a selection of repositories to pick from under the heading of Data Sharing Location.  In this version, only Databrary, OpenNeuro, OSF, and Add New are showing since it is from an example of Human MRI data and not software, so GitHub is not displayed.
NSF Webform: Public Archiving

The Additional Information box is turned on for every question in case people are using “Add New” or providing extra justification in the form, though it often won’t need to be filled out when standard, expected options are selected.

Each question has Guidance on the right sidebar pulled from NSF policies, the webform itself (i.e., info buttons and help text), the guide to the webform, or notes from DMP Tool about functionality.  Organizations can also publish customizations of the template to add their own specific guidance to the form, or add guidance to Themes that will show up next to relevant questions.

Moving Forward

We hope this is helpful for researchers, especially those who want to get feedback from collaborators, data librarians, or other administrators at their university before submitting their plan to NSF.  It will also allow people to publish their plan, get a DMP ID, and connect future outputs to this plan within the tool.  While it is extra work to transfer responses into Research.gov at the end instead of uploading a PDF, the collaboration and guidance may be worth the extra steps.

We’ll keep an eye on feedback and update as needed.  Please report issues or suggested additions (e.g., if you notice a directorate has new options before we do) to dmptool@ucop.edu.

Updated NIH Data Management and Sharing Plan Now Available in the DMP Tool

We are pleased to share that the new 2026 Data Management and Sharing Plan (DMSP) format for NIH is now available in the DMP Tool.  While this form is not required for applications until May 25th, NIH has stated that they are already accepting it and encourage people writing new plans to use the new format.

The new template in the tool is currently titled “NIH 2026 Data Management and Sharing Plan (2026 Pilot Format, required starting May 25, 2026).”  The 2023 version (formerly known as NIH-Default DMSP) remains available in the tool for now under the new title “NIH DMS Plan Format 2023 Version (Allowed for due dates prior to May 25th, 2026).” However, the 2026 template is the recommended format and will be the first result returned when users select NIH as their funder.

The legacy NIH-NIMH template is also being deprecated because the new 2026 NIH DMSP format applies across all NIH Institutes and Centers, including NIMH. On May 25th, the older NIH templates will be marked deprecated and then removed, leaving the 2026 format as the single NIH template option that applies to all NIH applications (though some Institutes may have additional sharing policies to keep in mind when answering the questions). We will update the title of the 2026 format at that time to remove the May 25th date.

Screenshot of the DMP Tool page that starts a new DMP.  The primary funding organization field has NIH selected, and the dropdown under "Which DMP template would you like to use?" shows 3 available NIH templates, with the top one highlighted, and the title matching the 2026 version mentioned in the paragraph.
Template options when selecting NIH as the funder for a plan

The template matches the NIH Format as closely as possible and brings in relevant Guidance from NIH policies, help pages, and FAQs to help researchers answer each question appropriately. It also includes sample responses for elements such as Element 4 and Element 6 to help users better understand the type of information NIH is expecting.

You can download a copy of the template or start a plan from it at our Funder Requirements page.

NIH DMSP Template Working Group

To curate the guidance, the NIH DMSP Template Working Group, which developed the guidance for the 2023 template in the tool, came back together to add new guidance for this updated form. The group was again chaired by DMP Tool Editorial Board member Nina Exner and involved the following members, who contributed to meetings and guidance additions:

  • Nina Exner (Chair; DMP Tool Editorial Board Member), Virginia Commonwealth University
  • Mathew Covey, The Rockefeller University
  • Will Dean, Temple University
  • Seonyoung Kim, Washington University in St. Louis, Bernard Becker Medical Library
  • Jim Martin, University of Arizona
  • Genevieve Milliken, University of Nevada, Las Vegas
  • Melissa Ratajeski (DMP Tool Editorial Board member), University of Pittsburgh
  • Lesley Skalla, Duke University Medical Center
  • Amy Yarnell, University of Maryland, Baltimore

Each question has guidance pulled from various NIH policy pages and FAQs on how to answer it, with occasional notes about the tool implementation as well. The group’s goal was to balance bringing key points right into the tool without overwhelming users with too much text, so there is a mix of direct guidance and links to other pages that give more detailed information.

Screenshot of the DMP Tool showing Element 3 of the NIH template, which is a Yes/No question asking if shared scientific data will be made available for at least as long as required by applicable data repository policies and/or journal policies.  On the right side of the screen is a Guidance sidebar showing two paragraphs of guidance from NIH, including a reminder that institutions are required to keep the data for at least 3 years following closeout of a grant, and a note from the DMP Tool that the repository selected in Element 6 may have additional retention policies to consider for this question as well.
Example of Element 3 with the guidance sidebar on the right

We at UC3 want to thank all of these members for their work to get this out in a helpful and timely manner for NIH applicants. Navigating the policy documents and resources to bring concise but comprehensive guidance to each question took a lot of effort, and we are grateful for their work in making these templates accessible to plan writers.

Additional DMP Tool notes for Administrators

  • Any plans created under the old templates will be unchanged, even once those templates are removed later.  Template updates only affect new plans created after a template is published or updated.
  • If you would like to add customizations to the NIH 2026 template for your organization, such as additional institutional questions or extra guidance for researchers at your university, see our documentation on customization.
  • If you added customizations to the 2023 NIH-Default DMSP template, they will not roll over to the 2026 template (since it is a brand new template and not just a version update), and your content will become inaccessible in the admin menu once the 2023 template is unpublished.  Please download or copy anything you want from your customization before the end of May 2026.
  • If your institution previously added custom guidance to the legacy NIH-NIMH template, please review and migrate that content to the new 2026 NIH template. Since the 2026 NIH DMS Plan format now applies across all NIH Institutes and Centers, including NIMH, the older NIMH template should no longer be used for new plans and can be removed after your guidance has been transferred. Please download or copy anything you want from your customization before the end of May 2026.
  • If you have any issues or questions about the new template, please reach out to us at dmptool@ucop.edu.

Other resources on the NIH form

Thank you once again to the working group and everyone who has written guidance to help navigate this new update! We’ll continue to monitor changes and updates from NIH to reflect the most up to date format and guidance.


Notice: In a future post, we will discuss updates related to the NSF templates releasing next week. Our plan is still to mirror them from Research.gov as best we can once we can view them there. For more on UC3’s thoughts on these changes, see our article What is the Future of Data Management Plans? on the Upstream blog.

What is the Future of Data Management Plans? [X-Post from Upstream]

Note: This post is a cross-post of an article written for Upstream blog to make sure DMP Tool followers are aware of these important changes.  Please refer to that site as the version of record; DOI: 10.54900/fbq63-61s08

As stated in a prior post, we will be adding the updated NIH and NSF forms to the DMP Tool and expect to have both available by the end of the month.

Over the past decade, there has been an international effort across the research community to make data management and sharing plans (DMSPs, also called DMPs) more than static, narrative documents. Through work on machine-actionable DMPs (maDMPs), shared metadata standards, and integration with research infrastructure, the goal for a growing number of groups around the world has been to make DMPs more structured, more connected, and more meaningful across the research lifecycle.

This work has led to real progress. DMPs are increasingly seen not just as compliance requirements, but as part of a broader ecosystem that connects researchers, institutions, repositories, and funders. The idea that DMPs should be interoperable, reusable, and able to support downstream workflows is now more widely accepted than ever.

At the same time, recent developments from the National Science Foundation (NSF) and the National Institutes of Health (NIH) suggest a shift in how this vision is being implemented. Both agencies are moving away from free-form narrative plans toward more structured formats. NSF has announced that, starting April 27, 2026, their DMPs will be completed directly within Research.gov as a webform, while NIH is introducing a revised template for their DMSPs beginning May 25, 2026 that emphasizes structured responses and simplified inputs.

We have recently outlined these changes in a post on our DMP Tool blog, and in many ways, these changes reflect the direction the community has been advocating for. But they also raise an important question: as DMPs become more streamlined and embedded in funder systems, how do we ensure they remain interoperable, collaborative, and connected to the broader research data ecosystem?

Improvements in the DMP landscape

Many of the recent changes from funders reflect directions that the community has been actively working toward for years. Efforts around maDMPs, shared metadata standards, and stronger connections between planning and outputs have all been grounded in a common goal: to make DMPs more structured, more usable, and more integrated into the research lifecycle. In that context, the move away from free-form narrative plans toward more structured formats is both expected and welcome.

Several aspects of the evolving landscape stand out as particularly positive:

  • Moving toward structured questions helps reduce ambiguity and brings greater consistency to how plans are created and reviewed. 
  • A clearer expectation that data should be shared, with exceptions requiring justification, reinforces a shift from recommendation to norm. 
  • Embedding DMP creation into proposal systems meets researchers where they are and has the potential to reduce administrative burden at the point of application.

There is also a broader opportunity here. More structured plans make it easier to connect DMPs to downstream activities, including tracking data sharing over the course of a project and linking plans to outputs such as datasets, repositories, and related identifiers. These are areas where the community has invested significant effort, through initiatives such as maDMPs, DMP IDs, and tools designed to support more dynamic and reusable integrations.

Taken together, these changes signal real progress. They suggest that funders are not only encouraging data sharing, but also rethinking how planning can better support it in practice.

At the same time, as these ideas move from principle to implementation, new questions begin to emerge. The benefits of structure, simplicity, and integration depend on how well they connect to the broader ecosystem and whether they continue to support meaningful, collaborative planning. These are the areas where the details of implementation will matter most.

Changes at NSF

Recently, NSF has moved toward a structured, webform-based DMP. While the full form has not yet been released, it is expected to include a set of core questions covering familiar elements of data management planning:

  • What kind of data is being shared
  • What concerns limit the sharing of data and why
  • What is the format of the shared data
  • Where will it be shared
  • For how long will it be available
  • What is the source of the data
  • Who is responsible for managing the data

This shift toward structured input is an important development. It brings greater consistency to how plans are created and reviewed and aligns with long-standing efforts to make DMPs more machine-readable and actionable. At the same time, the decision to implement this form within Research.gov introduces a new set of questions about how these plans will connect to the broader research data ecosystem.

maDMPs have been developed with the goal of enabling information to move between systems, supporting workflows that extend beyond the point of proposal submission. As NSF stated in a past Dear Colleague Letter:

A machine-readable document allows a computer program to interpret the DMP, such as to prepare a data repository for an eventual deposit of a large or complicated dataset…. A benefit of DMP tools for researchers is that they can generate both a PDF version of the DMP that is suitable for inclusion in a grant proposal and a machine-readable version suitable for sharing with an intended recipient data repository or the researcher’s home institution.

If DMPs are created and maintained entirely within a closed system, without mechanisms such as APIs or support for interoperable formats, it becomes more difficult to realize this vision. Rather than flowing across systems, key information may remain siloed, requiring researchers or institutions to recreate plans in other environments in order to support downstream use. This not only introduces additional effort, but also increases the risk that multiple versions of a plan diverge over time.

There are also implications for the broader infrastructure that has been developing around DMPs. Persistent identifiers such as DMP IDs, along with shared metadata standards developed through efforts like the Research Data Alliance, are intended to support discovery, tracking, and integration across the research lifecycle. If DMPs created in funder systems cannot easily be registered, exported, publicized, or linked to these services, an important layer of connectivity may be lost and some of the core principles of maDMPs are not realized.
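For a sense of what that connectivity looks like in practice, here is a heavily simplified machine-actionable DMP in the spirit of the RDA DMP Common Standard; only a small subset of fields is shown, and all values are placeholders:

```python
import json

# A heavily simplified machine-actionable DMP in the spirit of the RDA DMP
# Common Standard. Only a small subset of fields is shown, and all values
# are placeholders.
madmp = {
    "dmp": {
        "title": "Example project DMP",
        "dmp_id": {"identifier": "https://doi.org/10.xxxxx/example", "type": "doi"},
        "dataset": [
            {
                "title": "Survey responses",
                "distribution": [
                    {"host": {"title": "OSF", "url": "https://osf.io"}}
                ],
            }
        ],
    }
}
print(json.dumps(madmp, indent=2))
```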

Finally, the shift to a funder-hosted form changes how DMPs are created in practice. Data management planning is often a collaborative process, involving researchers, librarians, and institutional support staff. External tools and shared documents make it easier to iterate on plans, incorporate guidance, and ensure alignment with institutional policies and available resources. When plans are created directly within submission systems, that collaborative process can become more difficult, which may reduce opportunities for support and lead to plans that are harder to implement in practice.

NSF’s approach reflects important progress toward more structured and usable DMPs. At the same time, it highlights the importance of ensuring that structure is paired with interoperability, so that DMPs can function not only within funder systems, but across the broader ecosystem they are intended to support.

Changes at NIH

NIH has updated their DMSP template to reflect a different, but equally important, shift in approach. Unlike NSF’s webform, the NIH plan will still be created outside of a submission system for now, allowing researchers to use tools such as the DMP Tool and to collaborate more easily with institutional partners (though some discussions indicate NIH may consider a webform in the future). This supports many of the goals the community has been working toward, including integration with existing tools, the ability to register and reuse plans, and more flexible, collaborative workflows.

The NIH’s emphasis seems to be on creating a streamlined, structured format, which is understandable. By focusing on a small number of core questions, primarily centered on whether data will be shared, where it will be shared, and what outputs are expected, their new template reduces the burden on researchers at the proposal stage and aligns with broader efforts to simplify the DMP process and more easily track compliance with data sharing.

At the same time, this simplification introduces a different kind of tension.

Data management plans are most effective when they prompt researchers to think prospectively about how data will be managed throughout the lifecycle of a project. As stated by NIH regarding the 2023 policy:

Prospectively planning for how scientific data will be managed and ultimately shared is a crucial first step in optimizing the reach of data generated from NIH-funded research. Investigators and institutions are encouraged to consider these crucial elements early in research planning. 

A more minimal template may make it easier to complete a plan, but it may also reduce the extent to which researchers engage with these aspects of planning. When the primary interaction becomes confirming that data will be shared, there is a risk that important details are deferred until later in the project, when options may be more limited and challenges more difficult to address. Key elements such as metadata, standards, preservation, and access will be less likely to be considered in advance, leaving researchers less well positioned to produce data that is usable by others.

There is also a subtle shift in how researchers interact with institutional support. One of the benefits of more detailed DMSPs has been the opportunity for researchers to engage with data librarians and stewards, who bring expertise in policies, repositories, and best practices. A simplified form may reduce the need for that engagement, which lowers burden, but may also reduce access to guidance that helps ensure plans are both compliant and achievable.

NIH’s approach creates a challenge not about interoperability, but about maintaining the role of DMPs as meaningful planning tools. The move toward simplicity is an important step in reducing friction, but it also raises the question of how to preserve the depth of planning that enables effective data sharing in practice.

What we’d like to see

Taken together, these changes from NSF and NIH reflect progress and also highlight an important inflection point. As DMPs become more structured and more embedded in funder workflows, the next question is: how do we ensure they remain connected to the broader ecosystem they are intended to support?

Focus on Interoperability

One area where this alignment becomes especially important is interoperability.

Supporting mechanisms such as APIs, along with the ability to import and export DMPs in structured, machine-readable formats, allows each plan created to connect with institutional tools, repositories, and other parts of the research lifecycle. This would preserve the benefits of webform-based submission, including structured input, integration with proposal systems, and funder-side tracking, while also enabling the kinds of workflows envisioned through machine-actionable DMPs.

In practice, this could support multiple pathways for researchers. Some may choose to complete a plan directly within a funder system, while others may develop it in a tool such as DMP Tool or a similar service and submit it through interoperable formats. Institutions could build integrations that allow DMPs to be shared across systems, reducing duplication of effort and improving consistency between planning and implementation.

More broadly, enabling access to DMPs through APIs would allow the ecosystem to build on them. Institutions could connect plans to grant management systems, track compliance with data sharing commitments, and provide targeted support to researchers working with complex data. Connections to persistent identifiers and other research infrastructure would further strengthen the ability to discover, link, and reuse data over time.
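As a purely hypothetical sketch of what such a workflow could look like, suppose a funder system exposed an export endpoint and an external service accepted registrations; every URL and route below is invented for illustration:

```python
import requests

# Purely hypothetical: what an interoperable workflow could look like if a
# funder system exposed an export API. Every URL below is invented.
plan = requests.get(
    "https://funder.example/api/dmsp/12345",   # imagined export endpoint
    headers={"Accept": "application/json"},
    timeout=30,
).json()

# Register the same structured plan with an external service (imagined),
# so it can be linked to a DMP ID and tracked across the lifecycle.
requests.post("https://dmp-service.example/api/plans", json=plan, timeout=30)
```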

Pre- and post-award versions of DMPs

A second area for consideration is how DMPs are used across different stages of the research lifecycle.

There is a strong case for distinguishing between planning at the proposal stage and planning after funding has been awarded. A lighter-weight, structured plan at the application stage can support review and reduce burden for both applicants and reviewers. At the same time, more detailed planning is often most valuable once a project is funded, when researchers have greater clarity about their data and stronger incentives to ensure their plans are actionable.

This staged approach is already used in other contexts such as Horizon Europe, where an initial statement of intent is followed by a more comprehensive plan developed after funding. Applying a similar model here could balance efficiency with effectiveness: keeping proposal requirements streamlined while ensuring that funded projects benefit from more thorough, collaborative planning.

Such an approach would also better align with institutional support structures. Libraries and data support teams could focus their efforts where they are most impactful, working closely with funded projects to develop plans that reflect available resources, appropriate repositories, and relevant standards. Providing a defined window after funding to complete this work would allow researchers the time and context needed to engage meaningfully with the process.

Taken together, these directions point toward a model where DMPs are both simpler and more connected: easy to create at the point of application, but also interoperable, extensible, and capable of supporting the full research lifecycle.

Conclusion

The recent updates from NSF and NIH mark an important moment in the evolution of data management planning. They reflect many of the directions the community has been working toward, including greater structure, clearer expectations around data sharing, and efforts to reduce burden at the point of application. At the same time, they highlight how much the details of implementation matter.

Data management plans should not be static compliance documents. Their value lies in supporting thoughtful, collaborative planning across the research lifecycle and in connecting that planning to the systems that enable data to be shared, discovered, and reused. When planning becomes more lightweight or more isolated, there is a risk that these connections weaken over time. The impact of that shift may not be immediately visible, but it can emerge later in the form of data that is harder to interpret, less consistently structured, and more difficult to integrate into broader workflows.

Because NSF and NIH play such a key role in the US and global research communities, their approaches are also likely to influence others. This creates both risk and opportunity. If new models emphasize simplicity without connectivity, fragmentation may increase. If they successfully balance structure, interoperability, and meaningful planning, they can help establish a stronger foundation for the next phase of research data infrastructure.

The path forward does not require choosing between reducing burden and supporting richer, more connected planning. The elements needed to do both are already visible: structured, machine-readable inputs; flexibility in how plans are created and shared; interoperability across systems; and a distinction between early-stage commitments and more detailed, post-award planning.

Bringing these elements together would allow DMPs to function as intended: not just as part of the application process, but as living components of the research lifecycle that support data sharing in practice. As these changes continue to evolve, there is an opportunity for funders, institutions, and the broader community to work together to ensure that DMPs remain both usable and meaningful.

Copyright © 2026 Becky Grady, Maria Praetzellis. Distributed under the terms of the Creative Commons Attribution 4.0 License.


DMP Tool v5.54 Release Notes

We’ve just released an update to the DMP Tool. While most of the changes are minor or behind-the-scenes updates that we don’t expect users to notice, there were a couple of important changes to DMP exports, based on feedback, that we wanted to make people aware of.

  • Fixed an issue where PDF exports that were intended to be size 11 were being exported smaller than size 11, causing rejections from some funders running automated formatting checks for size 11+ text.  Note that this may make PDF exports longer than before, so plan writers should consider whether they need to reduce their text or remove question text to keep to the 2-page requirement of many funders.  Also note that existing plans will need some change to their saved text in order to generate a new PDF using the new system.
  • Fixed an issue where DOCX exports were coming in at size 8, when most funders expect size 11.  The new default is size 11 text. There are still some known issues where parts of formatting, like font color and non-default sizes, are not exported properly to DOCX downloads.
  • Fixed an issue where some plans could not be made public if conditional logic in the underlying template meant that fewer than 50% of the total number of questions had been answered.  The visibility setting now checks against having answered 50% of displayed questions rather than 50% of the total questions on the template (see the sketch after this list).
  • Upgraded to Ruby v3.4 and Node.js v22.
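To make the visibility change concrete, here is a minimal sketch of the new check (our illustration of the logic, not the tool’s actual code):

```python
def can_make_public(answered: int, displayed: int) -> bool:
    # Compare answers against *displayed* questions, since conditional
    # template logic can hide some of the template's questions entirely.
    return displayed > 0 and answered / displayed >= 0.5

# A template with 20 questions where conditional logic displays only 10:
assert can_make_public(answered=6, displayed=10)      # 60% of displayed
assert not can_make_public(answered=6, displayed=20)  # the old-style check
```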

If you experience any new issues, please report them by emailing dmptool@ucop.edu. 

As we’ve stated before, we are not making significant updates to the tool at this time so that we can focus development work on our rebuild. However, we want to make sure the site remains up to date and usable, which is why we will address major issues that get in the way of core functionality, such as exporting DMPs as PDFs.

Evolving Data Management Plans: Adapting to news from NSF and NIH

Like many in the research data management community, we have been closely following updates from the National Science Foundation (NSF) and National Institutes of Health (NIH) about changes to their data management and sharing plans (DMSPs, also known as DMPs).  

For those not aware, both the NSF and NIH are moving away from free-form narrative document DMSPs towards more structured, standardized forms, which can then potentially be embedded directly into their proposal systems. NSF announced that their DMSPs will, starting on April 27th, 2026, be completed as a form on Research.gov rather than uploaded as a separate document. NIH is also making a major change to their DMSP template starting May 25th, 2026, also moving away from free-form text narrative to mostly Yes/No questions about data sharing, plus a list of expected outputs and their intended repositories and a space to explain any exceptions to data sharing.

These changes reflect a broader shift in how funders approach data management planning. Rather than narrative documents, DMSPs are becoming structured inputs that can be more easily reviewed, compared, and in some cases tracked over the course of a project.

Community Impact

We are happy to see a move towards structured, machine-actionable questions over free text and reducing burden on researchers applying for grants. However, these changes have the potential to disrupt the way data management and planning is done throughout the research lifecycle. 

  • NSF’s new form may include the standard sections recommended in a DMSP, but the fact that it will only be accessible on Research.gov may make collaboration between researchers and data librarians harder.
  • NIH’s form will still be uploaded as a document as far as we are aware, but limiting it to mostly Yes/No questions may take away much of the planning that needs to happen before data is collected.

We understand both these updates are new and will undergo evaluation and feedback periods – we look forward to working with NSF and NIH to see how these new forms perform and if there are areas of improvement for the future.

The two main areas the DMP Tool team will be watching are cross-institutional communication and interoperability.  In our experience, researchers and grants teams value personalized university guidance and the ability to collaborate with local data librarians and research IT teams to get feedback on their DMSPs. These new changes will require a shift in the way the community works but may also require further refinement from the agencies.

We also hope to see more investment in interoperability in the future. Locking the DMSP information into a closed system without an API risks creating a new silo of important research information that will make it harder for other researchers to find and track data outputs from funded research. We hope that the agencies look for new ways for researchers to engage with their platforms that enable these types of interoperability and connectivity.  

Adjusting to new workflows 

While the DMP Tool team continues to work through the implications of these new workflows, we are also committed to meeting the needs of our communities. Many have reached out with specific questions about how we will adjust the tool to work with NSF and NIH’s new approaches.

  • For NSF, as soon as the final version of the Research.gov form is available, we will implement a copy of it in the tool.  People who complete their DMSP for NSF in the DMP Tool will still be able to use its collaboration and guidance features for help filling it out, though at the end they will likely need to copy/paste the information into the Research.gov form rather than export a PDF.  Regardless, the key features that support collaboration and communication will still be available for institutions to use in NSF proposal consultations.
  • For NIH, we have already started work implementing the new form based on the preview provided.  The questions are already entered, and we’re working with members of our DMP Tool Editorial Board to add appropriate guidance, recommendations, and relevant policies to the elements.  As soon as the NIH form is finalized and we have that entered, we will publish it on the DMP Tool so researchers can start to use it for upcoming submissions, and organizations can start adding customizations and extra guidance if they wish.

Stay updated on the latest!  We will share our status and next steps on this blog, on our LinkedIn account, and in direct emails to all member organization contacts.

Implications for our ongoing platform development

As we described above, in the immediate future we will continue to support creating DMSPs in the tool for NSF, NIH, and many other US and international funders, however they structure their templates.  In parallel, our rebuild work continues.  We will be taking these new announcements as an opportunity to reflect on and adjust our priorities and timelines.  We think that many of the new functionalities coming in the new tool fit well with this evolving landscape.  For example, the new tool will support creating a Project that can house multiple related plans and allow uploads of plans created elsewhere.  This could allow people, for example, to upload a copy of the plan they submitted to NSF and house related plans within one research project.  It also supports Data Security Plans, Software Management Plans, and other documents that many universities and field stations now require.

In the long term, we’re committed to evolving the DMP Tool to meet the needs of the community, even as those needs change.  We will continue to have open conversations about how to properly prioritize and adapt our current efforts for the changes we see coming on the horizon.  

Our core commitment is to serve and promote best practices in data management planning, and that goes beyond the document itself.  We know that our community’s strengths are in the customized guidance, collaboration, and resources that we all bring together from researchers, funders, and universities into one place, and we think that is more valuable than ever.  We will keep you all posted as we address the evolving landscape together! 

UC3 New Year Series: Data Management Planning in 2026

Cross-posted from our UC3 blog

Welcome to the second post of UC3’s New Year blog post series, where different services of UC3 take a look at the coming year.  If you haven’t already read it, check out the first one on digital preservation.

Over in the world of Data Management Planning, we’ve got a lot of exciting work this year to share!

DMP Tool Rebuild

Our main project continues to be the rebuild of the DMP Tool.  While we initially hoped to have it ready early this year, we’re now targeting the summer of 2026.  This gives us more time to make sure it’s at a high level of quality, and also means releasing it at a time that will hopefully be less disruptive to people who teach classes using the DMP Tool.  There’s a chance it will take longer than the summer, though – we’re focused on quality over speed.

We’ve done 3 rounds of user testing so far on the site, and each time has given us a lot of valuable information.  We’ve gotten a lot of positive feedback about new features we will be offering, such as alias email addresses, adding collaborators to templates, a revamped API, and much more.  Other changes, though, have caused some confusion for people used to the current tool, and through testing we have found opportunities to improve the workflow and usability of the new site.  These are the types of changes that mean the rebuild will take longer than initially planned to complete, but we think are worth the time to get right.


To keep updates about the rebuild in one place, we have a Rebuild Hub page on our blog.  We’ll keep this page up to date with the latest information about the release date, FAQ, status updates, and more.  We plan to make posts leading up to the new release showing the major changes and giving guidance to make the transition as seamless as possible.  If you’d like to help with testing at any point, please sign up for our user panel to get invitations to future feedback sessions.

As we’ve said before, we’re limiting updates to the current tool so we can focus our limited resources on the rebuild; but of course we also want to keep the tool live and helpful during the transition.  We’re fixing any major issues that come up, such as keeping it up to date with the new ROR API and schema, and addressing user tickets as quickly as possible.  We are trying to keep funder templates up to date as well, but the frequency of new information and potential changes has made it difficult to perfectly capture all updates to federal guidelines.  We want to make sure we have the most relevant information possible in the tool without changing templates too often (as that can lose organization guidance), so we’ve been collecting updates from our Editorial Board members for a template release in the near future.  If you see any instances where a template in our tool does not match a funder template, please reach out to us by email so we can get it corrected.

Get Involved with API Integrations

With our rebuild comes a completely revamped API that takes advantage of our new machine-actionable functionality.  We’re currently looking for partners who would like early access to the new API in order to develop integrations for our rebuild.  Our goal is that the new API can do anything the user interface can do, which means the sky (or, more relevantly, the cloud) is the limit for possible tools.  If you’ve been wanting to connect to our API for some sort of automation that our current API does not offer, we’d love to hear from you. You can hear more about past pilot integrations and how to work with our API in this recording of our webinar from the Machine-Actionable Plans pilot project.  We’ll be following the common API standard being developed with the Research Data Alliance, meaning many integrations with our tool should work for other DMP service providers as well.  If you have an idea for an integration you’d like to build on our new API, please reach out to dmptool@ucop.edu.
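As a taste of what an integration might look like, here is a sketch of the kind of call the revamped API could support; the route, auth scheme, and response shape below are our guesses for illustration, not final documentation:

```python
import requests

# Sketch only: the rebuilt API's routes, auth, and response shape are not
# final. Everything below is a guess for illustration.
resp = requests.get(
    "https://dmptool.org/api/v3/plans",           # hypothetical route
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=30,
)
for plan in resp.json().get("items", []):         # assumed response shape
    print(plan.get("title"), plan.get("dmp_id"))
```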

Matching to Published Research Outputs

We’ve talked before about a major project that uses machine learning models to match DMPs to their eventual research outputs, like datasets and software publications, making data from published DMPs easier to find and reuse.  This work has continued, and we plan to release it with the rebuilt DMP Tool.  Since our last update, we’ve made some significant steps towards this goal, including:

  • Moving the infrastructure onto our own servers to prepare for integration into the DMP Tool
  • Adding new sources of data, such as grant award pages that list published outputs
  • Getting the normalized corpus into OpenSearch to aid us in the matching process
  • Expanding our ground truth dataset of true matches and non-matches to help test our matching algorithm
  • Utilizing a Learning to Rank model that will improve over time as it learns from accepted and rejected matches
  • Building out the user interface for how users will see potential matches and accept or reject them
Screenshot of a webpage that says "Published Research Outputs" at the top and includes a list of scholarly research citations.  Next to each item in the list are buttons that say "Accept" and "Reject", as well as information about the work such as date found, source, and confidence of the match.
New user interface showing a list of published outputs that have been matched to a DMP in our rebuilt DMP Tool.  Interface is subject to change before release.
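For the technically curious, here is a simplified sketch of the candidate-retrieval step described above, using an OpenSearch text query over the normalized corpus; index and field names are illustrative, and in the real pipeline a Learning to Rank model re-scores these candidates based on accepted and rejected matches:

```python
from opensearchpy import OpenSearch

# Simplified sketch of the candidate-retrieval step: a text query over the
# normalized corpus of outputs. Index and field names are illustrative; in
# the real pipeline, a Learning to Rank model re-scores these candidates
# using accepted and rejected matches.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

results = client.search(
    index="research-outputs",  # hypothetical index of normalized outputs
    body={
        "query": {
            "multi_match": {
                "query": "longitudinal human MRI study of adolescents",
                "fields": ["title^2", "abstract", "authors"],
            }
        },
        "size": 10,
    },
)
for hit in results["hits"]["hits"]:
    print(round(hit["_score"], 2), hit["_source"]["title"])
```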

Improvements we plan to work on over 2026 include:

  • Adding in related outputs based on accepted outputs (i.e., finding matches to any Accepted works in addition to matching against the DMP itself)
  • Looking at options to improve the matching algorithm, such as vector search with an embedding model
  • Working with the COMET team on tooling that can extract award IDs from published outputs, which will improve the quality of matching to DMPs that include an award ID

We’re excited for people to get to use this tool with the rebuild and start accepting and rejecting potential matches so we can learn from this and improve the matching algorithm further over time.  People will also be able to manually add DOIs as research outputs, like they can on the current tool, which will also help train the model over time on what we missed as potential matches.  This will be available for all DMPs that have been published, i.e., registered for a DMP ID.  Accepted works will be added to the metadata for the plan as related identifiers.

DMP Chef

Another exciting area we’re exploring is the use of generative AI to assist in writing data management plans.  We’ve partnered with the FAIR Data Innovations Hub to work on the DMP Chef, a project to explore using large language models (LLMs) to draft DMPs.  Our goal is not to take the key decisions in data management planning away from the researcher, but instead to simplify the process as much as possible by asking a few critical questions, combining those responses with the funder requirements that need to be met, and using them to produce a draft DMP for the researcher’s review and edits.
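As a simplified sketch of the drafting approach (not DMP Chef’s actual implementation), the idea is to combine a researcher’s short answers with the funder’s requirements into a single drafting prompt for the LLM:

```python
# A simplified sketch of the drafting approach, not DMP Chef's actual
# implementation: combine a researcher's short answers with the funder's
# requirements into a single drafting prompt for the LLM.
def build_prompt(answers: dict[str, str], requirements: list[str]) -> str:
    answered = "\n".join(f"- {q}: {a}" for q, a in answers.items())
    reqs = "\n".join(f"- {r}" for r in requirements)
    return (
        "Draft a data management plan for the researcher to review.\n"
        f"Researcher answers:\n{answered}\n"
        f"Funder requirements to address:\n{reqs}\n"
        "Flag any decisions the researcher must confirm before submitting."
    )

prompt = build_prompt(
    {"Data types": "survey data", "Preferred repository": "OSF"},
    ["Describe metadata standards", "Justify any access limitations"],
)
print(prompt)
```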

We have promising early results, with both automated statistics and human evaluations showing that LLM-drafted DMPs can be comprehensive, accurate, and follow best practices.  Commercial models are performing better than the open-source models, but since we want to remain open-source, we’re looking at ways to improve the open-source models through additional retrieval-augmented generation and other options.  We’ll also be testing carefully how accurate and helpful the output is, as well as looking at ways to help ensure researchers read and edit the plan as needed, rather than just accepting the output right away.

DMP Source           Overall Satisfaction rating (1-5)   Average Error Count per DMP   Accuracy in guessing LLM vs Human
Human                              3.1                              7.2                            65%
LLMs (combined)                    3.4                              4.9                            43%
    Llama 3.3                      2.6                              7.5                            70%
    GPT-4.1                        4.2                              2.3                            15%
Results presented at the Research Data Alliance 2025 plenary, showing GPT-4.1 generated DMPs with higher satisfaction ratings and fewer errors reported than human-written exemplar DMPs from NIH.  N = 20 participants rating a DMP from each source, for a total of 60 DMP ratings

Over the course of 2026, we plan to keep testing and improving this model, starting with NIH and NSF plans.  The ultimate goal is a general-use model that can be used within the DMP Tool for any funder to get a first draft of either a whole DMP or specific sections a researcher is struggling with.  We have a working prototype tool for DMP generation that we will use for testing purposes, with integration into the DMP Tool planned further out.  If you’d like to be part of testing out this new tool, please sign up for our user panel.

Thanks for reading about our major initiatives for the year!  Keep an eye out on this space for the next post in our series, about our 2026 plans for persistent identifiers.

We are grateful to the Institute of Museum and Library Services, the National Science Foundation, and the Chan Zuckerberg Initiative for each supporting core components of these initiatives.

MAP Pilot Project: New Resources and Report Available

TL;DR

The Machine Actionable Plans (MAP) Pilot project is currently in its final phase, providing institutions with resources to explore the potential uses of machine-actionable data management plans (maDMPs). The project webpage includes newly released resources, including the final report, case studies, and key recommendations, as well as links to recorded webinars and other materials.

Pilot Overview

The pilot was funded by the Institute of Museum and Library Services (IMLS LG-254861-OLS-23) and grew out of a partnership between the California Digital Library and the Association of Research Libraries. Designed to address the urgent needs of academic libraries to meet increasing requirements for sharing research data, it explored the integration of maDMPs with existing research and IT systems. 

The pilot, discussed in past blog posts, worked directly with several institutions, providing the opportunity to take the infrastructure built by the DMP Tool and implement machine-actionable approaches in alignment with each organization’s goals. Each institution designed its own project with consideration given to local data management challenges and opportunities. Some focused on technical developments using API integrations, including automation and prototype tool builds, while others prioritized collaboration and relationship-building across departments in support of research data management. Partners found value not only in progressing pilots at their own institutions, but also in sharing learnings and outcomes across institutions, deepening insight into common challenges and opportunities and expanding collaborative relationships.

CDL’s Maria Praetzellis notes:

At California Digital Library (CDL), we collaborate with UC campus Libraries and other partners to amplify the academy’s capacity for innovation, knowledge creation and research breakthroughs. The MAP Pilot project is an excellent example of this being realized. We’ve seen so many examples of collaboration, innovation, and expertise resulting in impressive tangible solutions for institutions in the face of increasing challenges and opportunities. Even in cases where institutions were unable to advance a solution within the span of the pilot, they were able to explore new paths to doing so in the future, all while building meaningful connections across campus and obtaining clarity on paths forward to advance institutional strategic priorities. This work has been strongly representative of the kinds of innovation CDL strives to facilitate.

Another key aim of the MAP pilot was to gather feedback to inform improvements to the DMP Tool. This feedback focused on workflows for uploading existing plans, automatic linking of plans to related outputs, enhancing API integrations, and improving the overall user experience. The input from the pilot institutions was crucial for identifying gaps and shaping the design of new DMP Tool features, which will be incorporated in the upcoming DMP Tool Rebuild. CDL’s Becky Grady comments: 

Receiving feedback on the DMP Tool user interface and API during the course of the pilot was incredibly useful for its development. Our pilot partners provided important perspectives on their experience using the tool and the API, which informed key developments in our user interface redesign. The DMP Tool team feels more confident in our direction for continued development, now with greater clarity on the priorities to provide the biggest benefits for researchers and institutions.

Several new resources have been created for institutions, informed by key learnings from the pilot.

MAP Pilot Report 🔗

An overview report for the pilot has been prepared, covering the project’s background, a summary of pilot activities and DMP Tool development, pilot observations, and key recommendations for institutions.

Case Studies 🔗

Pilot partners, including Arizona State University, Northwestern University, Pennsylvania State University, the University of California, Riverside, and the University of Colorado Boulder, share their pilot activities, learnings, and recommendations in a series of short case studies. 

Key Recommendations 🔗

A collection of short recommendation guides has been prepared for institutional stakeholder groups to support those exploring maDMPs. Guides are available for researchers, librarians, IT & Information Security departments, and grant offices. 

Several partner institutions are also preparing additional reports with more detail to be made available to the wider community. These will be listed on the MAP Pilot Project webpage as they become available. 

The MAP Pilot team hopes that institutions and DMP Tool administrators will find these resources useful in engaging with colleagues at their institution to explore the deep benefits that maDMPs can yield. They would like to thank all of the pilot institutions for their participation, collaboration, and generosity with their time in sharing their learnings with the community.

Announcing our Webinar Series: Insights from the Machine-Actionable Data Management Plans Pilot


Want to learn about how technological advancements in data management plans can benefit research at your university? Have you heard the term “machine-actionable” a lot but aren’t sure what it is or why it’s important? Are you looking for strategies to reduce burden on researchers and administrators in working on data management plans?


Join our free webinar series to learn from several US institutions that explored and piloted machine-actionable approaches to data management plans (DMPs).

Funded by the Institute of Museum and Library Services (award LG-254861-OLS-23), and led jointly by the California Digital Library (CDL) and the Association of Research Libraries (ARL), the Machine Actionable Plans (MAP) Pilot initiative enabled institutions to test and pilot data management plans that are machine-actionable and facilitate communication with other university research and IT systems. Each institution developed its own projects in alignment with its institutional mission and with its specific challenges and opportunities taken into consideration. The DMP Tool team also worked with pilot partners to test features and advance technical developments to improve usability, best practice adoption, compliance, and efficiency.

In this series of webinars, we invite librarians, administrators, data managers, and IT & security staff to find out more about what motivated these institutions to explore machine-actionable DMP integrations: what they did, how they did it, and what they learned. For those interested in the more technical aspects of integrations, some webinars will also provide detail on the DMP Tool's API, along with more detailed implementation instructions and advice.

Webinar 1: Streamlining Research Support: Lessons from maDMP Pilots  

  • Tuesday, May 6, Noon EDT / 9:00 a.m. PDT
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

This webinar is for those looking to improve the efficiency, collaboration, and coordination of research support within their institutions. Learn from several institutions about their explorations of maDMP integrations to facilitate automated notifications for coordination across campus, and about how they used the pilot more broadly to facilitate discovery and collaboration within their institutions. This webinar will provide an overview of each institution’s activity, rather than detailed instructions about integrations.

Presenters include:  Katherine E. Koziar, Briana Wham, Matt Carson, Andrew Johnson

Register

Webinar 2: Creative Approaches for Seamless and Efficient Resource Allocation 

  • Tuesday, May 20, Noon EDT / 9:00 a.m. PDT
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

Don’t miss this webinar if you’re interested in new ways to enable efficient resource allocation. Institutions will share their experiences in leveraging maDMPs to develop integrations for automation systems that enable such allocations. This webinar will provide an overview of each institution’s activity, rather than detailed technical instructions about integrations.

Presenters include:  Katherine E. Koziar, Andrew Johnson

Register

Webinar 3: Five Technological Advancements in DMPs to Benefit Your Organization 

  • Tuesday, June 3, Noon EDT / 9:00 a.m. PDT 
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

If you’re interested in emerging technologies within the pilot project and the DMP Tool and how they can help your institution expedite research sharing, compliance, and operational efficiency, this webinar will provide a strong introduction. We’ll also hear from pilot partners about promising AI developments related to reviewing DMPs, and will hear more detail on technical advancements coming to the DMP Tool based on feedback from the pilot. 

Presenters include:  Jim Taylor, Becky Grady

Register

Webinar 4: How to Implement Machine-Actionable DMPs at your Institution

  • Tuesday, June 17, Noon EDT / 9:00 a.m. PDT 
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

If you want to find out more about specific integrations and how to implement maDMPs, this webinar is for you. Hear from the DMP Tool team about the API, common challenges and how to overcome them, and actionable recommendations for campus buy-in.

Presenters include:  Becky Grady, Brian Riley

Register

Working Toward a Common Standard API for Machine-Actionable DMPs

TL;DR

  • We’re participating in a new group formed to develop a common API standard for DMP service providers
  • The goal is that anyone building an integration with maDMPs can have it work with any DMP service provider
  • The group held its kick-off meeting to draft initial outlines, with work continuing over the next few months
  • We plan to support the new API (as well as all existing functionality and integrations) in our new rebuilt DMP Tool application

DMP Tool and the Research Data Alliance

Our work at DMP Tool has been shaped from the ground up through collaborations at the Research Data Alliance (RDA). From the earliest conversations about machine-actionable Data Management Plans (maDMPs) to the creation of the DMP common standard and the DMP ID, the RDA has served as the convening space where we've found shared purpose, co-developed solutions, and built lasting partnerships with peers across the globe. That same spirit is captured in the Salzburg Manifesto on Active DMPs, which outlines a vision for DMPs as living, integrated components of the research lifecycle. That vision continues today, as we help launch a new RDA initiative to update a common API standard for DMP service providers. This effort will help ensure our systems can connect more seamlessly and serve the broader research ecosystem more effectively. This post gives some context on why this new effort is needed, what we've done so far, and what's coming next.

DMP Tool implementation of the RDA common standard

The DMP Tool team was an early advocate of maDMPs and saw the potential value of capturing structured information during the creation of a DMP. The goal is to use as many persistent identifiers (PIDs) as possible to help facilitate integrations with external systems. To gather this data, we introduced new fields into the DMP Tool to capture detailed information about project contributors (ORCIDs, RORs, and CRediT roles) as well as which repositories (re3data), metadata standards (RDA metadata standards), and licenses (SPDX) would be used when creating a project's research outputs. These new data points are captured alongside the traditional DMP narrative. We also started allowing researchers to publish their DMPs. This process generates a DMP ID, a DOI customized to capture and deliver DMP-focused metadata. This approach allows the DMP to be discoverable in knowledge graphs like DataCite Commons. Once the DOI is registered, the DMP Tool provides a landing page for it.
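To make that structured metadata concrete, here is a minimal sketch of what a DMP expressed in the RDA common standard looks like as JSON (assembled in Python below). It shows only a small subset of the standard's fields, and every identifier value is a made-up placeholder:

```python
import json

# A minimal, illustrative maDMP record using a small subset of fields from the
# RDA DMP Common Standard. All identifier values below are placeholders.
madmp = {
    "dmp": {
        "title": "Example project DMP",
        "dmp_id": {"identifier": "https://doi.org/10.48321/EXAMPLE", "type": "doi"},
        "contributor": [
            {
                "name": "Jane Researcher",
                "contributor_id": {
                    "identifier": "https://orcid.org/0000-0000-0000-0000",
                    "type": "orcid",
                },
                "role": ["Data curation"],  # the DMP Tool records CRediT roles here
            }
        ],
        "dataset": [
            {
                "title": "Survey responses",
                "dataset_id": {"identifier": "https://doi.org/10.1234/placeholder", "type": "doi"},
                "distribution": [
                    {
                        "title": "Public copy",
                        "host": {"title": "Zenodo", "url": "https://zenodo.org"},
                        "license": [
                            {
                                "license_ref": "https://spdx.org/licenses/CC-BY-4.0.html",
                                "start_date": "2026-01-01",
                            }
                        ],
                    }
                ],
            }
        ],
    }
}

print(json.dumps(madmp, indent=2))
```

A real record carries many more fields (contact, project, timestamps, and so on), but even this fragment shows how PIDs for people, outputs, repositories, and licenses slot into the structure.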

Screenshot of the DMP Tool showing how to register your plan for a DMP ID

One of the main purposes of collecting all of this structured metadata is to facilitate integrations with other systems. To make that possible, we introduced a new version of the API that outputs the DMP metadata in the common standard developed with RDA. Our first integration was with the RSpace electronic lab notebook system. When a researcher is working in RSpace, they can connect RSpace with the DMP Tool to fetch their DMPs in PDF format and store the document alongside their other research outputs. Once connected, RSpace is able to send the DMP Tool the DOIs of any research outputs that the researcher deposits in repositories like Dataverse or Zenodo. These DOIs are then available as part of the DMP's structured metadata.
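As a rough sketch of what this kind of integration involves, the Python snippet below fetches a DMP's metadata and then reports a deposited dataset's DOI back. The base URL, endpoint paths, and payload shape here are illustrative assumptions for the sake of the example, not our documented API:

```python
import requests

# Hypothetical integration sketch: the base URL, endpoint paths, and payload
# fields are illustrative assumptions, not the DMP Tool's documented API.
BASE = "https://api.dmptool.example.org"
HEADERS = {"Authorization": "Bearer <token>"}  # obtained when the tools are connected

# 1. Fetch a registered DMP's metadata in the RDA common standard format.
dmp_id = "10.48321/EXAMPLE"  # placeholder DMP ID
dmp = requests.get(f"{BASE}/dmps/{dmp_id}", headers=HEADERS).json()
print(dmp["dmp"]["title"])

# 2. After depositing an output in a repository, report its DOI back so it
#    becomes part of the DMP's structured metadata.
requests.post(
    f"{BASE}/dmps/{dmp_id}/related_works",
    headers=HEADERS,
    json={"identifier": "https://doi.org/10.5281/zenodo.1234567", "work_type": "dataset"},
)
```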

Moving the Standard Forward 

The original RDA DMP common standard was released 3 years ago. Since that time, systems like the DMP Tool have found areas where we need to deviate from the base standard. This is a normal process when any standard is developed and first put into use. We have discovered key fields that should be added to the standard (e.g., contributor affiliation information) and areas that don’t really make sense to capture within the DMP itself (e.g., the PID systems a particular repository supports). 

Other DMP systems have also been implementing the common standard and making it available via API calls, but each has done so without any common convention for how an external system accesses those APIs. As a result, systems like RSpace need to develop and maintain a separate integration for each tool. Over time, this extra work leads to fewer integrations between systems, making each one more siloed.

RDA is made up of Interest Groups and Working Groups in which members from across the world join together to work on a common topic, producing guidelines, best practices, tools, standards, and other resources for the wider community. To address these shared issues, our RDA group decided to release a new version of the common standard, v1.2, and to form a new working group to develop API standards that each tool should support. Members of the DMP community gathered at the end of March to discuss both topics. The DMP systems represented at the meeting included Argos, DAMAP, Data Stewardship Wizard, DMPonline, DMP OPIDoR, DMP Tool, DMPTuuli, and ROHub.

Our DMP Tool team attended the meeting to make sure that the needs of our funders, researchers and institutions were properly represented. The meeting was split into two parts: 

  • Common Standard revisions: In the morning, the group reviewed issues and feature requests submitted to the DMP Common Standard GitHub repository over the past three years. These were synthesized into major themes for discussion, resulting in a set of proposed non-breaking changes for a v1.2 release. More complex revisions were deferred for a future v2. Those interested can explore the open issues here.
  • Drafting the API specification: In the afternoon, the group reviewed user stories from current and planned integrations to identify common needs. This discussion led to the initial outline of a shared set of API endpoints that each DMP service should support (see the illustrative sketch below). Work on refining this draft will continue in the coming months.
Photograph of 14 meeting attendees representing a variety of service providers in a conference room
Meeting attendees, representing a variety of DMP service providers, worked together on the common standard
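To give a flavor of what a shared endpoint set could look like, here is a purely speculative sketch. The working group has not yet published a specification, so every path and name below is our own invention for illustration:

```python
# Purely speculative sketch of a shared endpoint set for DMP services;
# the working group has not published a specification, so all paths and
# names here are invented for illustration.
SHARED_ENDPOINTS = {
    "list_dmps":        "GET  /dmps",                         # paginated list of accessible DMPs
    "get_dmp":          "GET  /dmps/{dmp_id}",                # one DMP in common standard JSON
    "create_dmp":       "POST /dmps",                         # register a new maDMP
    "update_dmp":       "PUT  /dmps/{dmp_id}",                # amend an existing maDMP
    "add_related_work": "POST /dmps/{dmp_id}/related_works",  # link outputs (e.g., dataset DOIs)
}
```

The value of a common set like this is that an integrator such as RSpace could write one client and point it at any conforming DMP service.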

Next steps

The original common metadata standard working group plans to incorporate the proposed non-breaking changes this summer as release v1.2. We have also committed to keeping the conversation going about future enhancements as we work towards v2.

Meanwhile, the new RDA working group also hopes to release an official API specification this summer. The individual tools would then be tasked with ensuring that their systems support the new API endpoints. For our part, the DMP Tool will ensure that our new website supports this API standard when it launches, as well as additional endpoints specific to our application. The goal is that integrator services like RSpace will then be able to connect more easily with any DMP service, making connections across the research system more robust.

Anyone can review the proposed work statement of the new DMP common API for maDMPs working group. We would value your input, and if you're interested in contributing to the API specification, you can join RDA (it's free!) and then join our Working Group.

UC3 New Year Series: Looking Ahead through 2025 for the DMP Tool

We’re gearing up for a big year over at the DMP Tool!  Thousands of researchers and universities across the world use the DMP Tool to create data management plans (DMPs) and keep up with funder requirements and best practices.  As we kick off 2025, we wanted to share some of our major focus areas to improve the application, introduce powerful new capabilities, and engage with the wider community.  We always want to be responsive to evolving community needs and policies, so these plans could change if needed.

New DMP Tool Application

Our primary goal for the year is to launch the rebuild of the DMP Tool application.  You can read more detail about this work in this blog post, but it will include all of the tool's current functionality plus much more, still as a free, easy-to-use website.  The plan is still to release this by the end of 2025, likely in the later months (no exact date yet).  We're making good progress towards a usable prototype of core functionality, like creating an account and making a template with basic question types.

In-development screenshot of account profile page in the new tool. Page is not final and is subject to change.
In-development screenshot of editing a template in the new tool. Page is not final and is subject to change.

Another common request is to offer more functionality within our API.  For example, people can already read registered DMPs through the API, but many librarians want to be able to access draft DMPs to integrate a feedback flow into their own university systems.  As part of our rebuild, we are moving to a single API (GraphQL, for those interested) used both by the website itself and by external partners.  This will allow almost any functionality on the website to be available through the API.  It should be released at the same time as the new tool, with documentation and training to come. Get your integration ideas ready!
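As a taste of what that could enable, here is a hypothetical sketch of fetching draft DMPs through a GraphQL API. The endpoint URL, field names, and query shape are all assumptions, since the new API and its schema haven't been released yet:

```python
import requests

# Hypothetical GraphQL query sketch: the endpoint URL, schema fields, and
# auth scheme are assumptions; the new API has not been released yet.
query = """
query MyDraftPlans {
  plans(status: DRAFT) {
    id
    title
    modified
  }
}
"""

resp = requests.post(
    "https://api.dmptool.example.org/graphql",
    json={"query": query},
    headers={"Authorization": "Bearer <token>"},
)
for plan in resp.json()["data"]["plans"]:
    print(plan["id"], plan["title"])
```

Because the website and external partners would share the same API, anything you see the site do should, in principle, be scriptable the same way.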

Finally, we are continuing to work on our related works matching: tracking down published outputs and connecting them to a registered DMP.  This is part of an overall effort to make DMPs more valuable throughout the lifecycle of a project, not just at the grant submission stage, and to reduce the burden on researchers, librarians, and funders of connecting information within research projects.  It's too early to tell when this will be released publicly on the website, but it will likely come some time after the rebuild launch.

AI Exploration

While most of our focus will be on the above projects, we are in the early stages of exploring topics for future development of the DMP Tool.  One big area is the use of generative AI to assist in reviewing or writing data management plans.  We've heard interest from both researchers and librarians in using AI to help construct plans.  People sometimes write their DMP the night before a grant is due and request feedback too late for librarians to provide it.  An AI reviewer, if trained on relevant policy, could give immediate feedback on these plans when there isn't enough time for human review.

We’re also interested in exploring the possibility of an AI assistant to help write a DMP.  We know many people are more comfortable answering a series of multiple-choice questions than crafting a narrative, and it’s possible we could help turn that structured data into the narrative format that funders require, making it easier for researchers to write a plan while keeping the structured data for machine actionability. Another option is an AI chatbot within the tool that provides our best practice guidance in a more interactive format.  It will be important for us to balance taking some of the writing burden off of researchers while making sure that they remain responsible for the content of their plans.

These ideas are in early phases: we’ll be exploring them with some external partners, but we likely won’t release anything to the public this year.  Even so, we’re excited about their potential to make best-practice DMPs easier to create.

Community Engagement

While we’ll sometimes be heads down working on these big projects, we also want to make sure we’re communicating with and participating in the wider community more than ever.  As we get towards a workable prototype of the new tool, we’ll be running more user research sessions.  The initial sessions, reviewed here, offered a lot of valuable insight that shaped the current designs, and we know that once people get their hands on the new tool they’ll have more feedback.  If you haven’t already, sign up here to be on the list for future invites.

We also want to be more transparent with the community about our operations and goals.  We’ve started putting together documents within our team about our Mission and Vision for the DMP Tool, which we’ll be sharing with everyone shortly.  Over 2025, we want to keep producing artifacts like these that we can share regularly, so that you all know what our priorities are.  One goal is to create a living will, as recommended by the Principles of Open Scholarly Infrastructure, outlining how we’d handle a potential winddown of CDL managing the DMP Tool.  This is a sensitive area because we have no plans to wind down the tool, and we don’t want to give the impression that it’s going away!  But it’s important for trust and transparency to have a plan in place if things change, as we know people care about the tool and their data within it.

Finally, we’ll be wrapping up our pilot project with ARL this year, in which 10 institutions piloted implementations of machine-actionable DMPs at their universities.  We’ve seen prototypes and mockups for integrations related to resource allocation, interdepartmental communication, security policies, AI review, and much more. We’ve brought on Clare Dean to help us create resources and toolkits, disseminate the findings, and host a series of webinars about what we’ve learned, to help others implement at their own universities.  We’ll be presenting talks on the DMP Tool at IDCC25 in February and RDAP in March, and we plan to submit to other conferences throughout the year, including IDW/RDA in October, to share what we’ve learned.  We also hope to continue working with DMP-related groups in RDA to ensure that our work is compatible with others in the space and that we’re following best practices for API development.

We hope you’re as excited for these projects as we are!  We’re a small team but we work with many amazing partners that help us achieve ambitious goals.  Keep an eye on this space for more to come.