What is the Future of Data Management Plans? [X-Post from Upstream]

Note: This post is a cross-post of an article written for the Upstream blog to make sure DMP Tool followers are aware of these important changes. Please refer to that site as the version of record; DOI: 10.54900/fbq63-61s08

As stated in a prior post, we will be adding the updated NIH and NSF forms to the DMP Tool and expect to have both available by the end of the month.

Over the past decade, there has been an international effort across the research community to make data management and sharing plans (DMSPs, also called DMPs) more than static, narrative documents. Through work on machine-actionable DMPs (maDMPs), shared metadata standards, and integration with research infrastructure, the goal for a growing number of groups around the world has been to make DMPs more structured, more connected, and more meaningful across the research lifecycle.

This work has led to real progress. DMPs are increasingly seen not just as compliance requirements, but as part of a broader ecosystem that connects researchers, institutions, repositories, and funders. The idea that DMPs should be interoperable, reusable, and able to support downstream workflows is now more widely accepted than ever.

At the same time, recent developments from the National Science Foundation (NSF) and the National Institutes of Health (NIH) suggest a shift in how this vision is being implemented. Both agencies are moving away from free-form narrative plans toward more structured formats. NSF has announced that, starting April 27, 2026, their DMPs will be completed directly within Research.gov as a webform, while NIH is introducing a revised template for their DMSPs beginning May 25, 2026 that emphasizes structured responses and simplified inputs.

We have recently outlined these changes in a post on our DMP Tool blog, and in many ways, these changes reflect the direction the community has been advocating for. But they also raise an important question: as DMPs become more streamlined and embedded in funder systems, how do we ensure they remain interoperable, collaborative, and connected to the broader research data ecosystem?

Improvements in the DMP landscape

Many of the recent changes from funders reflect directions that the community has been actively working toward for years. Efforts around maDMPs, shared metadata standards, and stronger connections between planning and outputs have all been grounded in a common goal: to make DMPs more structured, more usable, and more integrated into the research lifecycle. In that context, the move away from free-form narrative plans toward more structured formats is both expected and welcome.

Several aspects of the evolving landscape stand out as particularly positive:

  • Moving toward structured questions helps reduce ambiguity and brings greater consistency to how plans are created and reviewed. 
  • A clearer expectation that data should be shared, with exceptions requiring justification, reinforces a shift from recommendation to norm. 
  • Embedding DMP creation into proposal systems meets researchers where they are and has the potential to reduce administrative burden at the point of application.

There is also a broader opportunity here. More structured plans make it easier to connect DMPs to downstream activities, including tracking data sharing over the course of a project and linking plans to outputs such as datasets, repositories, and related identifiers. These are areas where the community has invested significant effort, through initiatives such as maDMPs, DMP IDs, and tools designed to support more dynamic and reusable integrations.

Taken together, these changes signal real progress. They suggest that funders are not only encouraging data sharing, but also rethinking how planning can better support it in practice.

At the same time, as these ideas move from principle to implementation, new questions begin to emerge. The benefits of structure, simplicity, and integration depend on how well they connect to the broader ecosystem and whether they continue to support meaningful, collaborative planning. These are the areas where the details of implementation will matter most.

Changes at NSF

Recently, NSF has moved toward a structured, webform-based DMP. While the full form has not yet been released, it is expected to include a set of core questions covering familiar elements of data management planning:

  • What kind of data will be shared
  • What concerns limit the sharing of data, and why
  • In what format the data will be shared
  • Where the data will be shared
  • How long the data will be available
  • What the source of the data is
  • Who is responsible for managing the data
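Structured answers to questions like these can be captured as data rather than prose, which is what makes them consistent to review and machine-actionable. A minimal sketch in Python follows; every field name here is illustrative, not NSF's actual webform schema:

```python
import json

# Hypothetical structured DMP answers -- field names are illustrative,
# not NSF's actual schema.
dmp_responses = {
    "data_types": ["tabular survey data", "interview transcripts"],
    "sharing_limitations": "transcripts contain personally identifiable information",
    "formats": ["CSV", "PDF/A"],
    "repository": "ICPSR",
    "retention_period_years": 10,
    "data_source": "original data collection",
    "responsible_party": "PI and departmental data steward",
}

# Unlike narrative prose, structured answers can be checked for completeness
# and serialized for exchange with other systems.
required = {"data_types", "formats", "repository", "responsible_party"}
missing = required - dmp_responses.keys()
print(sorted(missing))  # → []

serialized = json.dumps(dmp_responses)  # ready to exchange between systems
```

The same completeness check is effectively what a webform enforces at submission time; the difference is whether the resulting data can also leave the form.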

This shift toward structured input is an important development. It brings greater consistency to how plans are created and reviewed and aligns with long-standing efforts to make DMPs more machine-readable and actionable. At the same time, the decision to implement this form within Research.gov introduces a new set of questions about how these plans will connect to the broader research data ecosystem.

maDMPs have been developed with the goal of enabling information to move between systems, supporting workflows that extend beyond the point of proposal submission. As NSF stated in a past Dear Colleague Letter:

A machine-readable document allows a computer program to interpret the DMP, such as to prepare a data repository for an eventual deposit of a large or complicated dataset…. A benefit of DMP tools for researchers is that they can generate both a PDF version of the DMP that is suitable for inclusion in a grant proposal and a machine-readable version suitable for sharing with an intended recipient data repository or the researcher’s home institution.

If DMPs are created and maintained entirely within a closed system, without mechanisms such as APIs or support for interoperable formats, it becomes more difficult to realize this vision. Rather than flowing across systems, key information may remain siloed, requiring researchers or institutions to recreate plans in other environments in order to support downstream use. This not only introduces additional effort, but also increases the risk that multiple versions of a plan diverge over time.
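The machine-readable version described in the letter is typically expressed as structured JSON. The abbreviated sketch below loosely follows the shape of the RDA DMP Common Standard; the identifiers are placeholders, and the published schema should be consulted for exact field names:

```python
import json

# Abbreviated machine-actionable DMP, loosely following the RDA DMP
# Common Standard. Identifiers are placeholders; verify field names
# against the published schema.
madmp = {
    "dmp": {
        "title": "Data Management Plan for Example Project",
        "dmp_id": {"identifier": "https://doi.org/10.12345/example-dmp", "type": "doi"},
        "contact": {
            "name": "Jane Researcher",
            "contact_id": {"identifier": "https://orcid.org/0000-0000-0000-0000", "type": "orcid"},
        },
        "dataset": [
            {
                "title": "Survey responses",
                "distribution": [
                    {
                        "data_access": "open",
                        "host": {"title": "Zenodo", "url": "https://zenodo.org"},
                    }
                ],
            }
        ],
    }
}

# Because the plan is structured data, another system can act on it --
# e.g., a repository could read the intended deposit without parsing prose.
for ds in madmp["dmp"]["dataset"]:
    print(f"{ds['title']}: {ds['distribution'][0]['data_access']}")  # → Survey responses: open
```

A plan held only inside a closed webform cannot be handed off this way, which is exactly the siloing concern described above.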

There are also implications for the broader infrastructure that has been developing around DMPs. Persistent identifiers such as DMP IDs, along with shared metadata standards developed through efforts like the Research Data Alliance, are intended to support discovery, tracking, and integration across the research lifecycle. If DMPs created in funder systems cannot easily be registered, exported, publicized, or linked to these services, an important layer of connectivity may be lost and some of the core principles of maDMPs are not realized.

Finally, the shift to a funder-hosted form changes how DMPs are created in practice. Data management planning is often a collaborative process, involving researchers, librarians, and institutional support staff. External tools and shared documents make it easier to iterate on plans, incorporate guidance, and ensure alignment with institutional policies and available resources. When plans are created directly within submission systems, that collaborative process can become more difficult, which may reduce opportunities for support and lead to plans that are harder to implement in practice.

NSF’s approach reflects important progress toward more structured and usable DMPs. At the same time, it highlights the importance of ensuring that structure is paired with interoperability, so that DMPs can function not only within funder systems, but across the broader ecosystem they are intended to support.

Changes at NIH

NIH has updated their DMSP template to reflect a different, but equally important, shift in approach. Unlike NSF’s webform, the NIH plan will still be created outside of a submission system for now, allowing researchers to use tools such as the DMP Tool and to collaborate more easily with institutional partners (though some discussions indicate NIH may consider a webform in the future). This supports many of the goals the community has been working toward, including integration with existing tools, the ability to register and reuse plans, and more flexible, collaborative workflows.

The NIH’s emphasis seems to be on creating a streamlined, structured format, which is understandable. By focusing on a small number of core questions, primarily centered on whether data will be shared, where it will be shared, and what outputs are expected, their new template reduces the burden on researchers at the proposal stage and aligns with broader efforts to simplify the DMP process and more easily track compliance with data sharing.

At the same time, this simplification introduces a different kind of tension.

Data management plans are most effective when they prompt researchers to think prospectively about how data will be managed throughout the lifecycle of a project. As stated by NIH regarding the 2023 policy:

Prospectively planning for how scientific data will be managed and ultimately shared is a crucial first step in optimizing the reach of data generated from NIH-funded research. Investigators and institutions are encouraged to consider these crucial elements early in research planning. 

A more minimal template may make it easier to complete a plan, but it may also reduce the extent to which researchers engage with these aspects of planning. When the primary interaction becomes confirming that data will be shared, there is a risk that important details are deferred until later in the project, when options may be more limited and challenges more difficult to address. Key elements such as metadata, standards, preservation, and access are less likely to be considered in advance, leaving researchers less well positioned to produce data that is usable by others.

There is also a subtle shift in how researchers interact with institutional support. One of the benefits of more detailed DMSPs has been the opportunity for researchers to engage with data librarians and stewards, who bring expertise in policies, repositories, and best practices. A simplified form may reduce the need for that engagement, which lowers burden, but may also reduce access to guidance that helps ensure plans are both compliant and achievable.

NIH’s approach creates a challenge not about interoperability, but about maintaining the role of DMPs as meaningful planning tools. The move toward simplicity is an important step in reducing friction, but it also raises the question of how to preserve the depth of planning that enables effective data sharing in practice.

What we’d like to see

Taken together, these changes from NSF and NIH reflect progress and also highlight an important inflection point. As DMPs become more structured and more embedded in funder workflows, the next question is: how do we ensure they remain connected to the broader ecosystem they are intended to support?

Focus on Interoperability

One area where this alignment becomes especially important is interoperability.

Supporting mechanisms such as APIs, along with the ability to import and export DMPs in structured, machine-readable formats, allows each plan created to connect with institutional tools, repositories, and other parts of the research lifecycle. This would preserve the benefits of webform-based submission, including structured input, integration with proposal systems, and funder-side tracking, while also enabling the kinds of workflows envisioned through machine-actionable DMPs.

In practice, this could support multiple pathways for researchers. Some may choose to complete a plan directly within a funder system, while others may develop it in a tool such as DMP Tool or a similar service and submit it through interoperable formats. Institutions could build integrations that allow DMPs to be shared across systems, reducing duplication of effort and improving consistency between planning and implementation.

More broadly, enabling access to DMPs through APIs would allow the ecosystem to build on them. Institutions could connect plans to grant management systems, track compliance with data sharing commitments, and provide targeted support to researchers working with complex data. Connections to persistent identifiers and other research infrastructure would further strengthen the ability to discover, link, and reuse data over time.
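As one sketch of what such API access could enable, the hypothetical compliance check below walks a structured DMP record and flags datasets that a plan promised to share but that have no registered output DOI yet. The record shape and field names are illustrative, not an actual funder or DMP Tool schema:

```python
# Hypothetical compliance check an institution could run if DMPs were
# retrievable via an API as structured records. The record shape is
# illustrative, not an actual funder schema.
def unmet_sharing_commitments(dmp_record: dict) -> list[str]:
    """Return titles of datasets the plan promised to share that
    have no registered output DOI yet."""
    unmet = []
    for ds in dmp_record.get("datasets", []):
        if ds.get("will_share") and not ds.get("output_dois"):
            unmet.append(ds["title"])
    return unmet

plan = {
    "dmp_id": "https://doi.org/10.12345/example-dmp",
    "datasets": [
        {"title": "Genomic reads", "will_share": True,
         "output_dois": ["https://doi.org/10.5281/zenodo.0000000"]},
        {"title": "Field notebooks", "will_share": True, "output_dois": []},
    ],
}
print(unmet_sharing_commitments(plan))  # → ['Field notebooks']
```

Run across a portfolio of funded projects, a check like this is how data sharing commitments could be tracked and supported rather than simply filed away.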

Pre- and post-award versions of DMPs

A second area for consideration is how DMPs are used across different stages of the research lifecycle.

There is a strong case for distinguishing between planning at the proposal stage and planning after funding has been awarded. A lighter-weight, structured plan at the application stage can support review and reduce burden for both applicants and reviewers. At the same time, more detailed planning is often most valuable once a project is funded, when researchers have greater clarity about their data and stronger incentives to ensure their plans are actionable.

This staged approach is already used in other contexts such as Horizon Europe, where an initial statement of intent is followed by a more comprehensive plan developed after funding. Applying a similar model here could balance efficiency with effectiveness: keeping proposal requirements streamlined while ensuring that funded projects benefit from more thorough, collaborative planning.

Such an approach would also better align with institutional support structures. Libraries and data support teams could focus their efforts where they are most impactful, working closely with funded projects to develop plans that reflect available resources, appropriate repositories, and relevant standards. Providing a defined window after funding to complete this work would allow researchers the time and context needed to engage meaningfully with the process.

Taken together, these directions point toward a model where DMPs are both simpler and more connected: easy to create at the point of application, but also interoperable, extensible, and capable of supporting the full research lifecycle.

Conclusion

The recent updates from NSF and NIH mark an important moment in the evolution of data management planning. They reflect many of the directions the community has been working toward, including greater structure, clearer expectations around data sharing, and efforts to reduce burden at the point of application. At the same time, they highlight how much the details of implementation matter.

Data management plans should not be static compliance documents. Their value lies in supporting thoughtful, collaborative planning across the research lifecycle and in connecting that planning to the systems that enable data to be shared, discovered, and reused. When planning becomes more lightweight or more isolated, there is a risk that these connections weaken over time. The impact of that shift may not be immediately visible, but it can emerge later in the form of data that is harder to interpret, less consistently structured, and more difficult to integrate into broader workflows.

Because NSF and NIH play such a key role in the US and global research communities, their approaches are also likely to influence others. This creates both risk and opportunity. If new models emphasize simplicity without connectivity, fragmentation may increase. If they successfully balance structure, interoperability, and meaningful planning, they can help establish a stronger foundation for the next phase of research data infrastructure.

The path forward does not require choosing between reducing burden and supporting richer, more connected planning. The elements needed to do both are already visible: structured, machine-readable inputs; flexibility in how plans are created and shared; interoperability across systems; and a distinction between early-stage commitments and more detailed, post-award planning.

Bringing these elements together would allow DMPs to function as intended: not just as part of the application process, but as living components of the research lifecycle that support data sharing in practice. As these changes continue to evolve, there is an opportunity for funders, institutions, and the broader community to work together to ensure that DMPs remain both usable and meaningful.

Copyright © 2026 Becky Grady, Maria Praetzellis. Distributed under the terms of the Creative Commons Attribution 4.0 License.


Working Toward a Common Standard API for Machine-Actionable DMPs

TL;DR

  • We’re participating in a new group formed to develop a common API standard for DMP service providers
  • The goal is to make it easy for anyone building an integration with maDMPs to have it work with any DMP service provider
  • The group had its first kick-off meeting to make initial outlines, with work continuing over the next few months
  • We plan to support the new API (as well as all existing functionality and integrations) in our new rebuilt DMP Tool application

DMP Tool and the Research Data Alliance

Our work at DMP Tool has been shaped from the ground up through collaborations at the Research Data Alliance (RDA). From the earliest conversations about machine-actionable Data Management Plans (maDMPs) to the creation of the DMP common standard and the DMP ID, the RDA has served as the convening space where we’ve found shared purpose, co-developed solutions, and built lasting partnerships with peers across the globe. That same spirit is captured in the Salzburg Manifesto on Active DMPs, which outlines a vision for DMPs as living, integrated components of the research lifecycle. That vision continues today, as we help launch a new initiative at RDA to develop a common API standard for DMP service providers. This effort will help ensure our systems can connect more seamlessly and serve the broader research ecosystem more effectively. This post gives some context on why this new effort is needed, what we’ve done so far, and what’s coming next.

DMP Tool implementation of the RDA common standard

The DMP Tool team was an early advocate of maDMPs and saw the potential value of capturing structured information during the creation of a DMP. Our goal has been to use as many persistent identifiers (PIDs) as possible to help facilitate integrations with external systems. To gather this data, we introduced new fields into the DMP Tool to capture detailed information about project contributors (ORCIDs, RORs, and CRediT roles) as well as the repositories (re3data), metadata standards (RDA metadata standards), and licenses (SPDX) that would be used when creating a project’s research outputs. These new data points are captured alongside the traditional DMP narrative. We also started allowing researchers to publish their DMPs. This process generates a DMP ID, a DOI customized to capture and deliver DMP-focused metadata. This approach allows the DMP to be discoverable in knowledge graphs like DataCite Commons. Once the DOI is registered, the DMP Tool provides a landing page for it.
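A PID-rich contributor record of this kind might look like the sketch below. The ROR and CRediT values are illustrative, while the checksum function implements the ISO 7064 Mod 11-2 check that ORCID iDs use, which is one way structured PID fields can be validated at entry time:

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the final check character of a bare ORCID iD
    (e.g. '0000-0002-1825-0097') using ISO 7064 Mod 11-2."""
    digits = orcid.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return digits[-1] == expected

# Illustrative contributor record; the ORCID is the well-known
# example record for Josiah Carberry, and the ROR/CRediT values
# are hypothetical placeholders.
contributor = {
    "name": "Josiah Carberry",
    "orcid": "0000-0002-1825-0097",
    "affiliation_ror": "https://ror.org/05gq02987",
    "credit_roles": ["Data curation", "Investigation"],
}
print(orcid_checksum_ok(contributor["orcid"]))  # → True
```

Catching a mistyped identifier at plan-creation time is far cheaper than untangling a bad link after the DMP ID has been registered and indexed.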

Screenshot of the DMP Tool showing how to register your plan for a DMP ID

One of the main points of collecting all of this structured metadata is to facilitate integrations with other systems. To make that possible, we introduced a new version of the API that outputs the DMP metadata in the common standard developed with RDA. Our first integration was with the RSpace electronic lab notebook system. When a researcher is working in RSpace, they are able to connect RSpace with the DMP Tool to fetch their DMPs in PDF format and store the document alongside their other research outputs. Once connected, RSpace is able to send the DMP Tool the DOIs of any research outputs that the researcher deposits in repositories like Dataverse or Zenodo. These DOIs are then available as part of the DMP’s structured metadata.

Moving the Standard Forward 

The original RDA DMP common standard was released 3 years ago. Since that time, systems like the DMP Tool have found areas where we need to deviate from the base standard. This is a normal process when any standard is developed and first put into use. We have discovered key fields that should be added to the standard (e.g., contributor affiliation information) and areas that don’t really make sense to capture within the DMP itself (e.g., the PID systems a particular repository supports). 

Other DMP systems have also been implementing the common standard and making it available via API calls, but this was done without conformity as to how an external system can access those APIs. This results in systems like RSpace needing to develop and maintain separate integrations for each tool. Over time, this extra work leads to fewer integrations between systems, making each more siloed.
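The cost of non-conformity is easy to see in code: without a common standard, an integrator like RSpace needs a separate client per provider, while shared endpoints would let one client work everywhere. The base URLs and paths below are hypothetical, since the working group's specification is still being drafted:

```python
# Sketch of a client built against a hypothetical common DMP API.
# Endpoint paths are illustrative; the actual specification is still
# being drafted by the RDA working group.
class DMPServiceClient:
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def dmp_url(self, dmp_id: str) -> str:
        # e.g. GET <base>/dmps/<id> returning common-standard JSON
        return f"{self.base_url}/dmps/{dmp_id}"

# With a shared standard, the same integration code targets any
# conforming provider -- only the base URL changes.
for base in ("https://dmptool.example/api", "https://dmponline.example/api"):
    print(DMPServiceClient(base).dmp_url("doi-10.12345-abc"))
```

Today, by contrast, each of those two calls would require provider-specific code, and that duplicated effort is precisely what discourages integrations.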

RDA is made up of Interest Groups and Working Groups where members across the world join together to work on a common topic, producing guidelines, best practices, tools, standards, and other resources for the wider community. To tackle this use case and address shared issues, our RDA group decided to release a new version of the common standard, v1.2, and to form a new working group to develop API standards that each tool should support. Members of the DMP community gathered at the end of March to discuss both topics. The DMP systems represented at the meeting included Argos, DAMAP, Data Stewardship Wizard, DMPonline, DMP OPIDoR, DMP Tool, DMPTuuli, and ROHub.

Our DMP Tool team attended the meeting to make sure that the needs of our funders, researchers and institutions were properly represented. The meeting was split into two parts: 

  • Common Standard revisions: In the morning, the group reviewed issues and feature requests submitted to the DMP Common Standard GitHub repository over the past three years. These were synthesized into major themes for discussion, resulting in a set of proposed non-breaking changes for a v1.2 release. More complex revisions were deferred for a future v2. Those interested can explore the open issues here.
  • Drafting the API specification: In the afternoon, the group reviewed user stories from current and planned integrations to identify common needs. This discussion led to the initial outline of a shared set of API endpoints that each DMP service should support. Work on refining this draft will continue in the coming months.
Photograph of 14 meeting attendees representing a variety of service providers in a conference room
Meeting attendees, representing a variety of DMP service providers, worked together on the common standard

Next steps

The original common metadata standard working group plans to incorporate the proposed non-breaking changes this summer as release v1.2. We have also committed to keep the conversation going about future enhancements as we work towards v2.

Meanwhile, the new RDA working group also hopes to release an official API specification this summer. The individual tools would then be tasked with ensuring that their systems support the new API endpoints. For our part, the DMP Tool will ensure that our new website supports this API standard when it launches, as well as additional endpoints specific to our application. The goal is that integrator services like RSpace will then be able to connect more easily with any DMP service, making connections across the research system more robust.

Anyone can review the new DMP common API for maDMP working group’s proposed work statement. We would value your input, and if you’re interested in contributing to the API specification, you can join RDA (it’s free!) and then join our Working Group.

Prepare for launch in 3… 2… 1…

In about two weeks we will launch the new DMPTool on Tues, 27 Feb. The much-anticipated third version of the tool represents an exciting next step in what has always been a community-driven project. We’ve now successfully merged the primary US- and UK-based data management planning tools into a single codebase (DMP Roadmap): the engine under the new DMPTool hood.

Why are we doing this?

A little background for those who haven’t been following along with our codevelopment journey: in 2016 the University of California Curation Center (UC3) decided to join forces with the Digital Curation Centre (DCC) to maintain a single open-source platform for DMPs. We took this action to extend our reach beyond national boundaries and move best practices forward, with a lofty goal to begin making DMPs machine actionable (i.e., useful for managing data). We’ll continue to run our own branded services (DMPTool, DMPonline, DMPTuuli, DMPMelbourne) on the shared codebase, and incorporate partners in Canada, Argentina, South Africa, and throughout Europe who are already running their own instances (full list).

In parallel with our co-development efforts we’ve been making the rounds of Research Data Alliance, Force11, IDCC, and disciplinary meetings to collect use cases for machine-actionable DMPs (details here) and help define common standards (RDA Working Group; just posted a pre-print for 10 Simple Rules for Machine-Actionable DMPs). We also got an NSF EAGER grant so we can begin prototyping very soon.

The new version of the DMPTool will enable us to implement and test machine-actionable things in a truly global open science ecosystem. Successful approaches to making DMPs a more useful exercise will require input from and adoption by many stakeholders so we look forward to working with our existing DMP Roadmap community (an estimated 50k+ users, 400+ participating institutions, and a growing list of funder contacts across the globe) and welcoming others into the fold!

Preparing for Launch

To help DMPTool administrators prepare themselves and their institutional users for the upcoming launch, we will host a webinar on:

Mon, 26 Feb 2018, 9-10 AM Pacific Time
Zoom link (recording on Vimeo; Q&A and slides)

By that time we’ll have a new user guide for administrators, a new Quick Start Guide for researchers, and refreshed promo materials. Everyone will have seamless access to their existing DMPTool accounts, just through a new user interface that looks and feels more like DMPonline (spoiler alert: we made it blue). And one of the most exciting things about the new tool is that it contains 34 freshly updated funder templates with links to additional funder guidance.

Stay tuned to the DMPTool communication channels in the coming weeks (blog, admin email list, Twitter) for more news and updates. We look forward to seeing you at the webinar and welcome your feedback at any point.

On the right track(s) – DCC release draws nigh

blog post by Sarah Jones

Eurostar photo

Eurostar from Flickr by red hand records CC-BY-ND

Preliminary DMPRoadmap out to test

We’ve made a major breakthrough this month, getting a preliminary version of the DMPRoadmap code out to test on DMPonline, DMPTuuli and DMPMelbourne. This has taken longer than expected but there’s a lot to look forward to in the new code. The first major difference users will notice is that the tool is now lightning quick. This is thanks to major refactoring to optimise the code and improve performance and scalability. We have also reworked the plan creation wizard, added multi-lingual support, ORCID authentication for user profiles, on/off switches for guidance, and improved admin controls to allow organisations to upload their own logos and assign admin rights within their institutions. We will run a test period for the next 1-2 weeks and then move this into production for DCC-hosted services.

Work also continues on additional features needed to enable the DMPTool team to migrate to the DMPRoadmap codebase. This includes enhancements to existing features, a statistics dashboard, an email notifications dashboard, a public DMP library, template export, the ability to create plans and templates from existing ones, and flagging of “test” plans (see the Roadmap to MVP on the wiki to track our progress). We anticipate this work will be finished in August and that the DMPTool will migrate over the summer. When we issue the full release we’ll also provide a migration path and documentation so those running instances of DMPonline can join us in the DMPRoadmap collaboration.

Machine-actionable DMPs

Stephanie and Sarah are also continuing to gather requirements for machine-actionable DMPs. Sarah ran a DMP workshop in Milan last month where we considered what tools and systems need to connect with DMPs in an institutional context, and Stephanie has been working with Purdue University and UCSD to map out the institutional landscape. The goal is to produce maps/diagrams for two specific institutions and extend the exercise to others to capture more details about practices, workflows, and systems. All the slides and exercises from the DMP workshop in Milan are on the Zenodo RDM community collection, and we’ll be sharing a write-up of our institutional mapping in due course. I’m keen to replicate the exercise Stephanie has been doing with some UK unis, so if you want to get involved, drop me a line. We have also been discussing potential pilot projects with the NSF and Wellcome Trust, and the DMP standards and publishing working groups proposed at the last RDA plenary have held their initial calls. Case statements will be out for comment soon – stay tuned for more!

We have also been discussing DMP services with the University of Queensland in Australia who are doing some great work in this area, and will be speaking with BioSharing later this month about connecting up so we can start to trial some of our machine-actionable DMP plans.

The travelling roadshow

Our extended network has also been helping us to disseminate DMPRoadmap news. Sophie Hou of NCAR (National Center for Atmospheric Research) took our DMP poster to the USGS Community for Data Integration meeting (Denver, CO, 16–19 May) and Sherry Lake will display it next at the Dataverse community meeting (Cambridge, MA, 14–16 June). We’re starting an inclusive sisterhood of the travelling maDMPs poster. Display the poster, take a picture, and go into the Hall of Fame! Robin Rice and Josh Finnell have also been part of the street team taking flyers to various conferences on our behalf. If you would like a publicity pack, Stephanie will send out stateside and Sarah will share through the UK and Europe. Just email us your contact details and we’ll send you materials. The next events we’ll be at are the Jisc Research Data Network in York, the EUDAT and CODATA summer schools, the DataONE Users Group and Earth Science Information Partners meetings (Bloomington, IN), the American Library Association Annual Conference (Chicago, IL), and the Ecological Society of America meeting (Portland, OR). Catch up with us there!

RDA-DMP movings and shakings

RDA Plenary 9

We had another productive gathering of #ActiveDMPs enthusiasts at the Research Data Alliance (RDA) plenary meeting in Barcelona (5–7 Apr). Just prior to the meeting we finished distilling all of the community’s wonderful ideas for machine-actionable DMP use cases into a white paper that’s now available in RIO Journal. Following the priorities outlined in the white paper, the RDA Active DMPs Interest Group session focused on establishing working groups to carry things forward. There were 100+ participants packed into the session, both physically and virtually, representing a broad range of stakeholders and national contexts, and many volunteered to contribute to five proposed working groups (meeting notes here):

  • DMP common standards: define a standard for expression of machine-readable and -actionable DMPs
  • Exposing DMPs: develop use cases, workflows, and guidelines to support the publication of DMPs via journals, repositories, or other routes to making them open
  • Domain/infrastructure specialization: explore disciplinary tailoring and the collection of specific information needed to support service requests and use of domain infrastructure
  • Funder liaison: engage with funders, support DMP review ideas, and develop specific use cases for their context
  • Software management plans: explore the remit of DMPs and the inclusion of different output types, e.g., software and workflows

The first two groups are already busy drafting case statements. And just a note about the term “exposing” DMPs: everyone embraced using this term to describe sharing, publishing, depositing, etc. activities that result in DMPs becoming open, searchable, useful documents (also highlighted in a recent report on DMPs from the University of Michigan by Jake Carlson). If you want to get involved, you can subscribe to the RDA Active DMPs Interest Group mailing list and connect with these distributed, international efforts.

Another way to engage is by commenting on recently submitted Horizon2020 DMPs exposed on the European Commission website (unfortunately, the commenting period has closed for two of them, but one remains open until 15 May).

DMPRoadmap update

Back at the DMPRoadmap ranch, we’re busy working toward our MVP (development roadmap and other documentation available on the GitHub wiki). The MVP represents the merging of our two tools with some new enhancements (e.g., internationalization) and UX contributions to improve usability (e.g., redesign of the create plan workflow) and accessibility. We’ve been working through fluctuating developer resources and will update/confirm the estimated timelines for migrating to the new system in the coming weeks; current estimates are end of May for DMPonline and end of July for DMPTool. Some excellent news is that Bhavi Vedula, a seasoned contract developer for UC3, is joining the team to facilitate the DMPTool migration and help get us to the finish line. Welcome Bhavi!

In parallel, we’re beginning to model some active DMP pilot projects to inform our work on the new system and define future enhancements. The pilots are also intertwined with the RDA working group activities, with overlapping emphases on institutional and repository use cases. We will begin implementing use cases derived from these pilots post-MVP to test the potential for making DMPs active and actionable. More details forthcoming…

Upcoming events

The next scheduled stop on our traveling roadshow for active DMPs is the RDA Plenary 10 meeting in Montreal (19–21 Sept 2017), where working groups will provide progress updates. We’re also actively coordinating between the RDA Active DMPs IG and the FORCE11 FAIR DMPs group to avoid duplication of effort. So there will likely be active/FAIR/machine-actionable DMP activities at the next FORCE11 meeting in Berlin (25–27 Oct)—stay tuned for details.

And there are plenty of other opportunities to maintain momentum, with upcoming meetings and burgeoning international efforts galore. We’d love to hear from you if you’re planning your own active DMP things and/or discover anything new so we can continue connecting all the dots. To support this effort, we registered a new Twitter handle @ActiveDMPs and encourage the use of the #ActiveDMPs hashtag.

Until next time.

Active, actionable DMPs

IDCC workshop participants

Roadmap project IDCC debriefing
We had a spectacularly productive IDCC last month thanks to everyone who participated in the various meetings and events focused on the DMPRoadmap project and machine-actionable DMPs. Thank you, thank you! Sarah has since taken the traveling road show onward to a meeting at CERN (slides) and Stephanie discussed institutional infrastructure for DMPs at a meeting of California data librarians. In the midst of travels we’ve been wrangling the mountain of inputs into a draft white paper on machine-actionable DMP use cases. For now, we offer a preview of the report and an invitation to keep the momentum going at the RDA plenary in Barcelona, which is just around the corner (5–7 April).

The white paper represents the outputs of the IDCC workshop: “A postcard from the future: Tools and services from a perfect DMP world” (slides, etc. here). We convened 47 participants from 16 countries representing funders, educational institutions, data service providers, and the research community. There was so much interest in the topic that we added an overflow session to accommodate everyone who wanted to weigh in. We’re gratified to discover how many folks have been thinking about DMPs as much as we have, and aim to continue synthesizing your stakeholder-balanced, community-driven solutions for improving the data management enterprise.

mind map exercise

Solving DMPs with rainbow stickies

The contributions from IDCC align with previously gathered information and drive the agenda summarized here. Consensus emerged to:

  • Focus on integrating existing systems (interoperability was the top-voted topic for the workshop)
  • Integrate DMPs into active research workflows to emphasize benefits of planning to researchers, but keep in mind that funders still drive demand.
  • Consider the potential of persistent identifiers (ORCID iDs, Crossref Funder Registry, etc.)
  • Explore ways to offer tailored, discipline-specific guidance at appropriate points

Next steps…
All stakeholders expressed a need for common standards and protocols to enable information to flow between plans and systems in a standardized manner. This would support APIs that both read from and write to DMPs, as well as create a framework for developing new use cases over time. Therefore, it is a top priority to define a minimum data model with a core set of elements for DMPs. The model should incorporate existing standards and avoid inventing something new; it could potentially be based on a template structure and/or use the DMPRoadmap themes. Additional requirements in this area include that it:

  • Must make use of existing vocabularies and ontologies whenever possible
  • Must employ common exchange protocols (e.g., JSON)
  • Must be open to support new data types, models, and descriptions
  • Should be available in a format that can be rendered for human use
  • Should accommodate versioning to support actively updated DMPs
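To make the requirements above concrete, here is a small sketch of what a minimal, JSON-serializable DMP model might look like. To be clear, this is purely illustrative: the class and field names, and the example identifiers, are our own assumptions, not a proposed standard. The point is simply that a minimal structured model, built around persistent identifiers and a version number, can round-trip through JSON for exchange between systems.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical minimal DMP model. Field names are illustrative only --
# an actual standard would draw on existing vocabularies and ontologies.
@dataclass
class Dataset:
    title: str
    format: str            # ideally a term from an existing vocabulary
    repository_url: str    # a persistent identifier is preferred

@dataclass
class DMP:
    title: str
    funder_id: str         # e.g., a Crossref Funder Registry DOI
    orcid: str             # ORCID iD of the plan's creator
    version: int           # supports actively updated, versioned DMPs
    datasets: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the plan for exchange between systems (e.g., via an API)."""
        return json.dumps(asdict(self), indent=2)

# Example: a plan with one dataset, serialized to JSON.
plan = DMP(
    title="Soil microbiome survey",
    funder_id="https://doi.org/10.13039/100000001",
    orcid="0000-0000-0000-0000",
    version=1,
    datasets=[Dataset("16S sequences", "FASTQ", "https://example.org/repo")],
)
print(plan.to_json())
```

Because the JSON is plain structured data, the same document could be rendered for human reading (the “Should be available in a format that can be rendered for human use” requirement) while remaining machine-readable for downstream systems.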

At the RDA 9th Plenary meeting in Barcelona during the Active DMPs IG session (6 April, 9:30-11:00) we propose establishing a working group to develop standards for DMPs. This isn’t our particular area of expertise so once again we’re relying on all of you to help steer the DMP ship. We hope that additional working groups might spin out from the session and invite your ideas and contributions (e.g., publishing DMPs).

…and beyond
The DCC and UC3 will continue to pursue international collaborations related to DMPRoadmap through pilot projects. As part of an iterative process for developing, implementing, testing, and refining these use cases we’re beginning to model domain-specific and institutional pilot projects to determine what information can realistically move between stakeholders, systems, and research workflows. We have some existing funds to support a subset of this work and are actively seeking additional sources of funding to carry the project forward. In addition to technical solutions, these projects will expand our capacity to connect with key stakeholders, with particular emphasis on addressing the needs and practices of researchers and funders. Stay tuned for more details in the coming weeks and months.

You can also track our progress and find oodles of documentation on the DMPRoadmap GitHub wiki.

DMPTool and RDM consultants support humanities grant submission

The following is a guest post by Quinn Dombrowski of the UC Berkeley RDM Program. The original is available at http://researchdata.berkeley.edu/stories

sarcophagus photo

When preparing a proposal to a funding agency, researchers focus on the grant narrative, framing their work in the most innovative and compelling way possible. Crafting a narrative that can stand as a surrogate for a scholar’s research for reviewers to evaluate is itself a time-consuming process; for the National Endowment for the Humanities (NEH) Digital Humanities grants, it’s only one of nine components of the application. Grant proposals must include a data management plan, a document that Assistant Professor of Near Eastern Studies Rita Lucarelli had not encountered prior to preparing her grant submission last fall. “I found the instructions to be clear, but I hadn’t thought about those issues before,” Professor Lucarelli said in a recent Research Data Management (RDM) workshop on DMPTool for the humanities.

The short version of the NEH guidelines states:

Prepare a data management plan for your project (not to exceed two pages). The members of your project team should consult this document throughout the life of the project and beyond the grant period. The plan should describe how your project team will manage and disseminate data generated or collected by the project. For example, projects in this category may generate data such as software code, algorithms, digital tools, reports, articles, research notes, or websites.

In addition, proposals of the type Professor Lucarelli was submitting require a sustainability plan. Following the basic prompts provided by the NEH, Professor Lucarelli drafted a brief paragraph for the data management plan and the sustainability plan, and sent the materials to the RDM team for review.

Starting early proved to be key. By having a draft done two months in advance, Lucarelli was able to send her proposal to the NEH for feedback, where she learned that her proposal (to fund a workshop and the development of a portal bringing together a number of Egyptology projects that are building 3D models) would be eligible for a “level 2” grant, but not the “level 3” grant she had originally drafted: “level 3” grants are intended for projects that already have a finished prototype. “It’s important to figure out what level grant you’re applying to early,” Lucarelli reflected. “Deciding on that sooner would have saved me from drafting the sustainability plan that wasn’t applicable to the grant I ended up applying for.”

Involving the RDM team in the process early also allowed Lucarelli to work with an RDM consultant to refine her data management plan. Rick Jaffe, an RDM consultant, met with Lucarelli and talked through the scope and nature of the project she was proposing. After their first meeting, Jaffe logged into DMPTool, the Data Management Planning tool developed and supported by the California Digital Library (CDL), which provides templates and additional guidance for preparing data management plans for most major funding agencies. He pulled up the template for the NEH, and began to organize and expand upon his notes from the meeting, using the headers and prompts suggested by the DMPTool. Jaffe used the DMPTool’s private sharing function to make the draft data management plan visible and editable by Lucarelli and her collaborator at the University of Memphis, Joshua Roberson.

Drafting a data management plan in the DMPTool interface is convenient because it juxtaposes the questions and guidance for each section with a text box where you can write your responses. At a certain point in the process, it may be easier to download your draft data management plan and move it into Microsoft Word for editing. While it may be tempting to answer each of the questions in the prompt at great length, the overall two-page limitation forces grant applicants to be brief and specific. Quinn Dombrowski, another RDM consultant, worked with Lucarelli on winnowing the six-page version drafted in DMPTool into the required two pages.

“Even if I don’t get this grant, it was hugely valuable to prepare a data management plan,” explained Lucarelli. “When you’re working a new project, you never think about things like what will happen if you’re not involved with the project anymore — it’s hard to even imagine that! But a data management plan makes you think through all the details about what data you’ll actually get in your project, how you’ll store it, and how you’ll manage it in the long term. I was lucky to be working with a collaborator who knew some of the technical details about how to store audio files, because I would have been at a loss, myself. And it was very helpful to be able to sit down with RDM consultants who can help you think through all the issues involved in running a project like this. I feel much better prepared now for the next time I put together a grant application, whether or not a data management plan is required.”

Hang A DMPTool Poster!

In addition to working hard on the new version of the DMPTool (to be released in May), we are also working on outreach and education materials that promote the use of the DMPTool. Our latest addition to these materials is a generic poster about the DMPTool, including information about what’s to come in the new version. You can download a PDF version, or a PPTX version that you can customize for your institution. We plan on updating this poster when the new version of the DMPTool is released, so keep an eye out!

“DMPTool: Expert Resources & Support for Data Management Planning”. 30″x38″ poster

Posters available as:

  • PDF (cannot be customized)
  • PPTX (can be customized)

DMPTool adds 100th institution!

From Flickr by Anvica


We are pleased to announce that as of September 23rd, with the addition of Baylor University, 100 institutions have taken the step of customizing the DMPTool to provide local guidance and resources for their researchers. Check out the full list of participating institutions.

While institutions do not have to customize the DMPTool for their researchers to take advantage of the tool, taking that step can provide many benefits to their researchers as well as their data management and stewardship programs. These include:

  • Integration with Shibboleth so that researchers can use their institutional credentials;
  • Ability to add help text and links to institutional resources;
  • Ability to add contact information for the units that support data management; and
  • Ability to add text that can be copied into a data management plan.

With the release of the new and improved DMPTool in early 2014, there will be even more functionality for those institutions that integrate with and customize the tool. These features include:

  • An interface to manage all of the customizations directly;
  • Improved institutional branding;
  • The ability to add institution-specific data management requirements;
  • Reviews of DMPs on a case-by-case basis or as a required step for all researchers; and
  • Multiple roles for administrative users, including editors of requirements and reviewers of DMPs, so that you can have appropriate teams working on the DMPTool.

If you are interested in talking to us more about this process, please contact us. It is a straightforward process that we are happy to walk you through.

Report on DMPTool at ESA 2013

Last week in Minneapolis, about 4,000 ecologists got together to geek out and enjoy the Midwest for eight days. The DMPTool made a couple of appearances in the course of this 2013 Ecological Society of America Meeting: a workshop on managing ecological data and a session on data management planning and the DMPTool. It was also mentioned in numerous presentations about the DataONE Investigator Toolkit, of which the DMPTool is a part.

Here I want to briefly mention the special session on the DMPTool, which occurred on the first official day of #ESA2013. The session was 75 minutes long, and Bill Michener of DataONE and I shared the podium. He planned to introduce DMPs generally, followed by my explanation and demonstration of the DMPTool, including the new version of the tool due out in winter 2013–2014.

Fifteen minutes before the presentations were due to start, the room was packed. Five minutes before, attendees were sitting on the floor in the aisles, and a nonstop trickle of people kept entering throughout the 75-minute session. The catch? We had no power.

That’s right: Bill and I were forced to talk about data management plans, demo the DMPTool, and discuss future plans to a packed room, all without a microphone, a projector, or even a chalkboard. We managed to get through the session, and folks seemed to appreciate the impromptu soliloquies on data management from Bill and me. The power came on about 5 minutes before the close of the session (of course), at which point I scurried behind the podium to show screenshots of the DMPTool. By the time I looked up from my laptop, about 60% of the audience had left. Apparently slides were not the big draw for our session.

My takeaway lessons? When giving a talk, be prepared for anything; people enjoy the element of surprise and improvisation; and researchers are dying to learn about DMPs, regardless of the potential hurdles put before them. This is most likely due to funder requirements for DMPs, but I’d like to think it also relates to my and Bill’s dulcet tones.

The slides I didn’t get to show are available on slideshare.
