Working Toward a Common Standard API for Machine-Actionable DMPs

TL;DR

  • We’re participating in a new group formed to develop a common API standard for DMP service providers
  • The goal is to make it easy for anyone building integrations with maDMPs to have them work with any DMP service provider
  • The group held its first kick-off meeting to draft initial outlines, with work continuing over the next few months
  • We plan to support the new API (as well as all existing functionality and integrations) in our new rebuilt DMP Tool application

DMP Tool and the Research Data Alliance

Our work at DMP Tool has been shaped from the ground up through collaborations at the Research Data Alliance (RDA). From the earliest conversations about machine-actionable Data Management Plans (maDMPs) to the creation of the DMP common standard and the DMP ID, the RDA has served as the convening space where we’ve found shared purpose, co-developed solutions, and built lasting partnerships with peers across the globe. That same spirit is captured in the Salzburg Manifesto on Active DMPs, which outlines a vision for DMPs as living, integrated components of the research lifecycle. That vision continues today, as we help launch a new initiative at RDA to develop a common API standard for DMP service providers. This effort will help ensure our systems can connect more seamlessly and serve the broader research ecosystem more effectively. This post gives some context on why this new effort is needed, what we’ve done so far, and what’s coming next.

DMP Tool implementation of the RDA common standard

The DMP Tool team was an early advocate of maDMPs and saw the potential value of capturing structured information during the creation of a DMP. The goal is to use as many persistent identifiers (PIDs) as possible to help facilitate integrations with external systems. To gather this data, we introduced new fields into the DMP Tool to capture detailed information about project contributors (ORCIDs, RORs, and CRediT roles) as well as which repositories (re3data), metadata standards (RDA metadata standards), and licenses (SPDX) would be used when creating a project’s research outputs. These new data points are captured alongside the traditional DMP narrative. We also started allowing researchers to publish their DMPs. This process generates a DMP ID, a DOI customized to capture and deliver DMP-focused metadata. This approach allows the DMP to be discoverable in knowledge graphs like DataCite Commons. Once the DOI is registered, the DMP Tool provides a landing page for it.
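
To make this concrete, here is an abridged, illustrative sketch (written as a simple Python dictionary) of the kind of structured, PID-rich metadata captured alongside a plan, loosely following the layout of the RDA common standard. Field names are simplified and the values are invented examples, not the DMP Tool’s exact output.

```python
# Abridged, illustrative maDMP metadata loosely following the RDA DMP Common
# Standard layout. Values are invented examples; the DMP Tool's actual output
# differs in structure and detail.
example_dmp = {
    "dmp": {
        "title": "Data Management Plan for a hypothetical field study",
        "dmp_id": {"type": "doi", "identifier": "https://doi.org/10.48321/EXAMPLE"},  # the DMP ID
        "contributor": [
            {
                "name": "Jane Researcher",
                "contributor_id": {"type": "orcid", "identifier": "https://orcid.org/0000-0000-0000-0000"},
                # Affiliation captured via ROR; adding affiliation to the standard
                # itself is one of the proposed revisions discussed later in this post.
                "affiliation": {"name": "Example University",
                                "affiliation_id": {"type": "ror", "identifier": "https://ror.org/00example00"}},
                "role": ["Data curation"],  # CRediT role
            }
        ],
        "dataset": [
            {
                "title": "Observation records",
                "distribution": [
                    {
                        # Repository described by its re3data record (identifier invented here)
                        "host": {"title": "Example Data Repository",
                                 "url": "https://www.re3data.org/repository/r3d100000000"},
                        "license": [{"license_ref": "https://spdx.org/licenses/CC-BY-4.0.html"}],
                    }
                ],
            }
        ],
    }
}

print(example_dmp["dmp"]["title"])
```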

Screenshot of the DMP Tool showing how to register your plan for a DMP ID

One of the main points of collecting all of this structured metadata is to facilitate integrations with other systems. To make that possible, we introduced a new version of the API that outputs DMP metadata in the common standard developed with RDA. Our first integration was with the RSpace electronic lab notebook system. When researchers are working in RSpace, they can connect RSpace to the DMP Tool to fetch their DMPs in PDF format and store the documents alongside their other research outputs. Once connected, RSpace can send the DMP Tool the DOIs of any research outputs that the researcher deposits in repositories like Dataverse or Zenodo. These DOIs are then available as part of the DMP’s structured metadata.
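
As a rough sketch of what such an integration looks like from the caller’s side, the snippet below fetches a DMP’s common-standard metadata over HTTP. The base URL, route, and auth scheme are hypothetical placeholders, not the DMP Tool’s documented API.

```python
import requests

# Hypothetical sketch of an external system (e.g., an electronic lab notebook)
# pulling a DMP's common-standard metadata. The base URL, route, and auth
# scheme are placeholders, not the DMP Tool's documented API.
API_BASE = "https://dmptool.example.org/api/v2"  # placeholder base URL
TOKEN = "YOUR_API_TOKEN"                         # issued by the service provider

resp = requests.get(
    f"{API_BASE}/dmps/doi:10.48321/EXAMPLE",     # look up a plan by its DMP ID (placeholder route)
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
dmp = resp.json()["dmp"]

# An integrator could store the plan alongside lab records and later send back
# the DOIs of deposited outputs so they become part of the DMP's metadata.
print(dmp["title"], dmp["dmp_id"]["identifier"])
```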

Moving the Standard Forward 

The original RDA DMP common standard was released 3 years ago. Since that time, systems like the DMP Tool have found areas where we need to deviate from the base standard. This is a normal process when any standard is developed and first put into use. We have discovered key fields that should be added to the standard (e.g., contributor affiliation information) and areas that don’t really make sense to capture within the DMP itself (e.g., the PID systems a particular repository supports). 

Other DMP systems have also been implementing the common standard and making it available via API calls, but without any agreement on how external systems should access those APIs. As a result, systems like RSpace need to develop and maintain a separate integration for each tool. Over time, this extra work leads to fewer integrations between systems, making each one more siloed.

RDA is made up of Interest Groups and Working Groups where members from across the world join together to work on a common topic, producing guidelines, best practices, tools, standards, and other resources for the wider community. To tackle this use case and address shared issues, our RDA group decided to release a new version of the common standard, v1.2, and to form a new working group to develop API standards that each tool should support. Members of the DMP community gathered at the end of March to discuss both topics. The DMP systems represented at the meeting included Argos, DAMAP, Data Stewardship Wizard, DMPonline, DMP OPIDoR, DMP Tool, DMPTuuli, and ROHub.

Our DMP Tool team attended the meeting to make sure that the needs of our funders, researchers and institutions were properly represented. The meeting was split into two parts: 

  • Common Standard revisions: In the morning, the group reviewed issues and feature requests submitted to the DMP Common Standard GitHub repository over the past three years. These were synthesized into major themes for discussion, resulting in a set of proposed non-breaking changes for a v1.2 release. More complex revisions were deferred for a future v2. Those interested can explore the open issues here.
  • Drafting the API specification: In the afternoon, the group reviewed user stories from current and planned integrations to identify common needs. This discussion led to the initial outline of a shared set of API endpoints that each DMP service should support; a rough, hypothetical sketch of what such endpoints might cover follows below. Work on refining this draft will continue in the coming months.
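
As a purely hypothetical illustration (not the working group’s actual draft), the kind of shared surface under discussion might look something like this:

```python
# Purely hypothetical illustration of the kind of endpoints a common maDMP API
# specification could define for every DMP service provider. These routes and
# names are invented for illustration and are NOT the working group's draft.
COMMON_ENDPOINTS = {
    "GET /dmps": "list DMPs the caller is authorized to see (paginated)",
    "GET /dmps/{dmp_id}": "fetch a single DMP as RDA common standard JSON",
    "POST /dmps": "register or update a DMP from common standard JSON",
    "POST /dmps/{dmp_id}/related_works": "attach DOIs of resulting outputs to a DMP",
}

for route, purpose in COMMON_ENDPOINTS.items():
    print(f"{route:40s} {purpose}")
```
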
Photograph of 14 meeting attendees representing a variety of service providers in a conference room
Meeting attendees, representing a variety of DMP service providers, worked together on the common standard

Next steps

The original common metadata standard working group plans to incorporate the proposed non-breaking changes this summer as release v1.2. We have also committed to keeping the conversation going about future enhancements as we work toward v2.

Meanwhile, the new RDA working group also hopes to release an official API specification this summer. The individual tools would then be tasked with ensuring that their systems support the new API endpoints. For our part, the DMP Tool will ensure that our new website supports this API standard when it launches, as well as additional endpoints specific to our application. The goal is that integrator services like RSpace will then be able to connect more easily with any DMP service, making connections across the research system more robust.

Anyone can review the new DMP common API for maDMP working group’s proposed work statement. We would value your input, and if you’re interested in joining the group and contributing to the API specification, you can join RDA (it’s free!) and then our Working Group.

UC3 New Year Series: Looking Ahead through 2025 for the DMP Tool

We’re gearing up for a big year over at the DMP Tool!  Thousands of researchers and universities across the world use the DMP Tool to create data management plans (DMPs) and keep up with funder requirements and best practices.  As we kick off 2025, we wanted to share some of our major focus areas to improve the application, introduce powerful new capabilities, and engage with the wider community.  We always want to be responsive to evolving community needs and policies, so these plans could change if needed.

New DMP Tool Application

Our primary goal for the year is to launch the rebuild of the DMP Tool application. You can read more detail about this work in this blog post, but it will include the current functionality of the tool plus much more, still in a free, easy-to-use website. The plan is still to release this by the end of 2025, likely in the later months (no exact date yet). We’re making good progress toward a usable prototype of core functionality, like creating an account and making a template with basic question types.

In-development screenshot of account profile page in the new tool. Page is not final and is subject to change.
In-development screenshot of editing a template in the new tool. Page is not final and is subject to change.

Another common request is to offer more functionality within our API. For example, people can already read registered DMPs through the API, but many librarians want to be able to access draft DMPs to integrate a feedback flow into their own university systems. As part of our rebuild, we are moving to a system that uses the same API on the website as the one available to external partners (GraphQL, for those interested). This will allow almost any functionality on the website to be available through the API. It should be released at the same time as the new tool, with documentation and training to come. Get your integration ideas ready!
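
As a rough sketch of what calling such an API could look like, the snippet below posts a GraphQL query for a user’s draft plans. The endpoint URL, field names, and arguments are hypothetical placeholders; the real schema and documentation will come with the release.

```python
import requests

# Hypothetical sketch of querying a GraphQL API like the one planned for the
# rebuilt DMP Tool. The endpoint, field names, and arguments are placeholders;
# the real schema will be documented when the new tool launches.
GRAPHQL_URL = "https://dmptool.example.org/graphql"  # placeholder endpoint
QUERY = """
query MyDraftPlans {
  myPlans(status: DRAFT) {   # hypothetical field and argument
    title
    dmpId
    modified
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": QUERY},
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    timeout=30,
)
resp.raise_for_status()
for plan in resp.json()["data"]["myPlans"]:
    print(plan["title"], plan["modified"])
```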

Finally, we are continuing to work on our related works matching: tracking down published outputs and connecting them to a registered DMP. This is part of an overall effort to make DMPs more valuable throughout the lifecycle of a project, not just at the grant submission stage, and to reduce the burden on researchers, librarians, and funders of connecting information within research projects. It’s too early to tell when this will be released publicly on the website, but it will likely come some time after the rebuild launch.

AI Exploration

While most of our focus will be on the above projects, we are in the early stages of exploring topics for future development of the DMP Tool. One big area is the use of generative AI to assist in reviewing or writing data management plans. We’ve heard interest from both researchers and librarians in using AI to help construct plans. People sometimes write their DMP the night before a grant is due and request feedback too late for librarians to provide it. An AI trained on relevant policy could review these plans and give immediate feedback when there’s not enough time for human review.

We’re also interested in exploring the possibility of an AI assistant to help write a DMP. We know many people are more comfortable answering a series of multiple choice questions than crafting a narrative, and it’s possible we could help turn that structured data into the narrative format that funders require, making it easier for researchers to write a plan while keeping the structured data for machine actionability. Another option is an AI chatbot within the tool that can provide our best practice guidance in a more interactive format. It will be important for us to balance taking some of the writing burden off of researchers while making sure that they remain responsible for the content of their plans.

These ideas are in early phases – we’ll be exploring them with some external partners but likely not releasing anything to the public this year – however, we’re excited about their potential to make best-practice DMPs easier to create.

Community Engagement

While we’ll sometimes be heads down working on these big projects, we also want to make sure we’re communicating with and participating in the wider community more than ever. As we get toward a workable prototype of the new tool, we’ll be running more user research sessions. The initial sessions, reviewed here, offered a lot of valuable insight that shaped the current designs, and we know once people get their hands on the new tool they’ll have more feedback. If you haven’t already, sign up here to be on the list for future invites.

We also want to be more transparent with the community about our operations and goals. We’ve started putting together documents within our team about our Mission and Vision for the DMP Tool, which we’ll be sharing with everyone shortly. Over 2025, we want to continue to work on artifacts like those that we can share regularly so that you all know what our priorities are. One goal is to create a living will, recommended by the Principles of Open Scholarly Infrastructure, outlining how we’d handle a potential wind-down of CDL managing the DMP Tool. This is a sensitive area because we have no plans to wind down the tool, and don’t want to give the impression that it’s going away! But it’s important for trust and transparency for us to have a plan in place if things change, as we know people care about the tool and their data within it.

Finally, we’ll be wrapping up our pilot project with ARL this year, in which 10 institutions piloted implementation of machine-actionable DMPs at their universities. We’ve seen prototypes and mockups for integrations related to resource allocation, interdepartmental communication, security policies, AI review, and much more. We’ve brought on Clare Dean to help us create resources and toolkits, disseminate the findings, and host a series of webinars about what we’ve learned to help others implement at their own universities. We’ll be presenting talks on the DMP Tool at IDCC25 in February and RDAP in March, and we plan to submit to other conferences throughout the year, including IDW/RDA in October, to share what we’ve learned. We hope to continue working with DMP-related groups in RDA to ensure our work is compatible with others in the space and that we’re following best practices for API development.

We hope you’re as excited for these projects as we are!  We’re a small team but we work with many amazing partners that help us achieve ambitious goals.  Keep an eye on this space for more to come.

Progress Update: Matching Related Works to Data Management Plans

TL;DR

  • We’re making progress on our plan to match DMPs to associated research outputs.
  • We’ve brought in partners from COKI who have applied machine-learning tools to match based on the content of a DMP, not just structured metadata.
  • We’re getting feedback from our maDMSP pilot project to learn from our first pass.
  • In our new rebuilt tool, we plan to have an automated system to show researchers potential connected research outputs to add to the DMP record.

Have you ever looked at an older Data Management Plan (DMP) and wondered where to find the resulting datasets it said would be shared? Even if you don’t sit around reading DMPs for fun like we do, you can imagine how useful it would be to have a way to track and find published research outputs from a grant proposal or research protocol.

To make this kind of discovery easier, we aim to make DMPs more than just static documents used only in grant submissions.  By using the rich information already available in a DMP, we can create dynamic connections between the planned research outputs — such as datasets, software, preprints, and traditional papers — and their eventual appearance in repositories, citation indexes, or other platforms.

Rather than linking each output manually to its DMP, we’re using the new structure of our machine-actionable data management and sharing plans (maDMSPs) from our rebuild to help automate these connections as much as possible. By scanning relevant repositories and matching the metadata to information in published DMPs, we can find potential connections that researchers or librarians only have to confirm or reject, without adding the information themselves. This keeps them in control and helps ensure connections are accurate, while reducing how much information they have to enter.

Image from an early version of this feature in the DMP Tool, showing a list of citations for potential matches with buttons to Review and a status column showing them as Approved or Pending

This helps support the FAIR principles, particularly making the data outputs more findable, and helps transform DMPs into useful, living documents that provide a map to a research project’s outputs throughout the research lifecycle.

Funders, librarians, grant administrators, research offices, and other researchers will all benefit from a tracking system like this being available. And thanks to a grant from the Chan Zuckerberg Initiative (CZI), we were able to start developing and improving the technology to search across the scholarly ecosystem and match outputs to DMPs.

The Matching Process

AI generated image from Google Gemini of a monkey holding two pieces of paper next to each other

We started with DataCite, matching based on titles, contributors (names and ORCIDs), affiliations, and funders (names, RORs, and Crossref funder IDs). It turns out that prolific researchers can have many different projects going on in the same topic area, so that’s not always enough information to find the dataset from this particular project. We don’t want to find just any datasets or papers that any monkey-researcher has published about monkeys; we want to find the ones from this particular grant about monkey behavior.

To help expand the datasets and other outputs we could find, we partnered with the Curtin Open Knowledge Initiative (COKI) to ingest information from OpenAlex and Crossref, and we’re working on including additional sources like the Data Citation Corpus from Make Data Count. COKI’s developers are also applying machine learning, using embeddings generated by large language models and vector similarity search to compare the text of a DMP’s title and abstract to the descriptive fields of candidate datasets, rather than relying only on author and funder metadata. That will help us match if, say, the DMP mentions “monkeys” but the dataset uses the word “simiiformes.”
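
As a minimal sketch of the idea (not COKI’s actual pipeline), the snippet below embeds a DMP’s descriptive text and some candidate output descriptions, then ranks the candidates by cosine similarity. The model and library here are illustrative choices, and the inputs are invented.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Minimal sketch of text-based matching with embeddings and vector similarity.
# The model and library are illustrative; COKI's production pipeline uses its
# own LLM embeddings and a proper vector search index.
model = SentenceTransformer("all-MiniLM-L6-v2")

dmp_text = "We will collect and share observational data on primate foraging behavior."
candidate_outputs = [
    "Observational dataset of simiiformes foraging, 2023 field season.",
    "Survey of coral reef bleaching events in the South Pacific.",
]

dmp_vec = model.encode([dmp_text])
candidate_vecs = model.encode(candidate_outputs)
scores = cosine_similarity(dmp_vec, candidate_vecs)[0]

# Candidates scoring above a tuned threshold become "potential matches" for a
# researcher or librarian to confirm or reject.
for text, score in sorted(zip(candidate_outputs, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {text}")
```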

To confirm the matches, we used pilot maDMSPs from institutions that are part of our projects with our partners at the Association of Research Libraries, funded by the Institute of Museum and Library Services and the National Science Foundation. This process recently yielded a list of 1,525 potential matches to registered DMPs from the pilot institutions. We asked members of the pilot cohort to evaluate the accuracy of these matches, providing us with a set of training data we can use to test and refine our models. For now we provided the potential matches in a Google Sheet, but with our rebuild we plan to integrate this flow directly into the tool.

Screenshot from one university’s Google Sheet for matching DMP-IDs to research output DOIs, showing some marked as Yes, No, or Unsure for whether it’s a match

Initial Findings

It will take some time for the partners to finish judging all the matches, but so far about half of the potential related works have been confirmed as related to their DMP. This means we’ve got a good start and can use the ones that didn’t match to train our model better. We’ll use those false positives, as well as false negatives gathered from partners, to refine our matching and improve over time. Since we’re asking researchers to approve the matches, we’re not too worried about false positives, but we do want to find as many true matches as possible.

This process is still early, but here are some of our initial learnings:

  • Data normalization is an important and often challenging step in the matching process. In order to match DMPs to different datasets, we need to make sure that each field is represented consistently. Even a structured identifier like a DOI can be represented in many different formats across and within the sources we’re searching. For example, sometimes records include the full URL, sometimes just the identifier, and some are cut off and therefore have an incorrect ID that needs to be corrected in order to resolve (a minimal sketch of this kind of cleanup follows this list). That’s just one small example, but there are many more that make the cleanup difficult, including normalization of affiliation, funder, grant, and researcher identifiers across and within the datasets. Without the ability to properly parse the information, even a seemingly comprehensive source of data may not be useful for finding matches.
  • Articles are still much easier to find and match than datasets. This is not surprising, given the more robust metadata associated with article DOIs, which makes them easier to find. Data deposited into repositories often does not have the same level of metadata available to match, if a DOI and associated metadata are available at all. We’re hoping we can use those articles, which may mention datasets, to find more matches in our next pass.
  • There is not likely to be a magic solution that gets us to completely automate the process of matching a research output to a DMP without changes in our scholarly infrastructure.  Researchers conduct a lot of research in the same topic area, so it’s difficult to know for sure if a paper or dataset came from a DMP, unless they specifically include these references.  There are ways to improve this, such as using DOIs and their metadata to create bi-directional links between funding and their outputs (as opposed to one-directional use of grant identifiers), including in data repositories. DataCite and Crossref are both actively working to build a community around these practices, but many challenges still remain. Because of this, we plan to have the researcher confirm matches before they are added to a record, rather than attempt to add them automatically.
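
As a minimal sketch of the normalization work described in the first bullet above, the function below extracts a bare, lowercase DOI from the various ways an identifier can appear; real cleanup handles many more edge cases, and the example inputs are invented.

```python
import re

# Minimal sketch of DOI normalization, one small piece of the data cleanup
# described above. Real pipelines handle many more edge cases (encoding
# artifacts, truncated suffixes, publisher-specific quirks).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+", re.IGNORECASE)

def normalize_doi(value: str) -> str | None:
    """Return a bare lowercase DOI (e.g. '10.1234/abcd') or None if none is found."""
    if not value:
        return None
    match = DOI_PATTERN.search(value.strip())
    if not match:
        return None
    # Strip trailing punctuation that often sneaks in from citations and URLs.
    return match.group(0).rstrip(".,;)").lower()

# Invented example inputs showing common variants of the "same" identifier.
examples = [
    "https://doi.org/10.5072/EXAMPLE.DATASET.1",
    "doi:10.5072/example.dataset.1.",
    "not an identifier",
]
print([normalize_doi(v) for v in examples])
```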

Next Steps

We’re continuing to spend most of our development effort on our site rebuild, which is why we’re grateful for our funding from CZI and our partnership with COKI to improve our matching. Our next steps are to include information from the Make Data Count Data Citation Corpus and to follow up on the initial matches once pilot partners finish their determinations.

We hope to have this Related Works flow added to our rebuilt dmptool.org website in the future.  The mockup is below (where we show researchers that we have found potential related works on a DMP, and would then ask them to confirm if it’s related so it can be added to the metadata for the DMP-ID and become part of the scholarly record).  We’ll want to balance confidence and breadth, finding an appropriate sensitivity so that we don’t miss potential matches but also don’t spam people with too many unrelated works.

Mockup of a project block in the new DMP Tool with a red pip and text saying “Related works found”

If you have feedback on how you would want this process to work, feel free to reach out! 

Behind the Scenes: Insights from User Testing the new DMP Tool Designs

TL;DR

  • The rebuild of the technology behind the DMP Tool offered a chance to refresh the user interface
  • We conducted 12 user testing sessions to have real users walk through wireframes of our new tool designs to offer feedback and find issues
  • People liked the new designs but had a lot of small areas of confusion around some aspects like sharing and visibility settings
  • We made tons of small changes based on feedback and continue to make updates for better usability
  • Fill out this short form to have the option to join future feedback sessions

Why we needed new designs

As mentioned in our last blog post, the team behind the DMP Tool has been working on a rebuild of the application to improve usability, add new features, and provide additional machine-actionable functionality. To deliver all of this advanced functionality, we needed to do a pretty big overhaul of the technology behind the DMP Tool, and it was a good time to give the design a more modern upgrade as well, adding new capabilities while hopefully making existing features easier to use.

A graphic showing a Machine-Actionable DMP connected to nodes that say Compliance, Integrity, Guidance, Tracking, and Scalability

How we made the first drafts and tested them

Over the past few months, we’ve worked closely with a team of designers to create interactive wireframes—prototype mockups that allow us to test potential updates to the user interface without fully developing them. These wireframes are crucial for gathering feedback from real users early, ensuring that our vision for a better tool meets their expectations.  While a lot of thought and planning went into these initial designs, we wanted to make sure people were finding the new site as easy and intuitive as possible, while still offering new, more intricate features.

To do this, we recruited three groups of people, 12 total, who work on different parts of the tool to test out these designs:

  • 5 researchers, who would be writing DMPs in the tool
  • 4 organizational administrators, who would be adding guidance to templates in the tool
  • 3 members of the editorial board or funder representatives, who would be creating templates in the tool

We recruited volunteers from the pilot project members, from our editorial board, from social media, and from asking those we recruited to share the invitation with others. We conducted virtual interviews with each person individually, where we let them explore the wireframe for their section, gave them tasks to complete (e.g., “Share this DMP with someone else”), and asked questions about their experience.  For the most part we let people walk through the wireframes as if they were using it for real, thinking out loud about what they were experiencing and expecting.

What we found from testing

It was illuminating for the team to see live user reactions from these sessions, and watch them use this new tool we’re excited to continue work on. 

We loved to hear users say how excited they were for a particular new feature or how much they liked a new page style.  At times it could be disheartening, watching a user not find something that we thought was accessible, but those findings are even more important because it means we have an area to improve.  We made a report about the findings after each group of users and worked with the designers on how to address the pain points.  Sometimes the solution was straightforward, while other times we wrestled with different options for weeks after testing.

Overall, we found that people liked the new designs and layout and could get through most tasks successfully.  They appreciated the more modern layout and additional options. But there were many areas that the testers identified as confusing or unclear.  There are specific examples, with before-and-after screenshots, in the Appendix.  Some of the top changes made revolved around the following areas:

  • Decreasing the amount of text in areas that felt overwhelming, moving less important information to other pages or collapsing it by default
  • Adding some text to areas that were particularly unclear, such as what selecting “Tags” for a template question would do
  • Connecting pages if people consistently went somewhere else, such as adding a link to sharing settings on the Project Members page since that’s where people looked for it first
  • Moving some features to not show until they’re needed, such as having Visibility settings as an option in the publishing step and not the drafting step
  • Clarifying language throughout when things were unclear, such as distinguishing whether “Question Requirements” was about what the plan writer was required to write when creating their DMP or whether that was about the template creator marking whether a question is required or had display logic
  • Having additional preview options when creating a template or adding guidance to understand what a question or section would look like to a user writing a DMP
  • Making certain buttons more prominent if they were the primary action on a page, like downloading a completed DMP that originally was hard to find

Even though the main structure worked well for people, these small issues would have added up to a lot more confusion and obstacles for users if we hadn’t identified them before releasing.  

Wrapping up and moving forward

The whole team learned a ton from these sessions, and we’re grateful to all the participants who signed up and gave their time to help us improve the tool.  This sort of testing was invaluable to find areas to improve – we made dozens, if not hundreds, of small and large changes to the wireframes based on this testing, and we hope it’s now much better than it was originally. We’re still working on updates as we build our designs for more areas of the site, but feel better now about our core functionality.

If you’d like to be invited to participate in surveys, interviews, or other feedback opportunities like this for the DMP Tool, please fill out this brief form: Feedback Panel Sign-Up. For anyone who signed up but wasn’t selected for this round, we may reach out in the future!

We loved seeing how excited people are about this update, and we can’t wait to share more. The most common question we get is: when is it releasing? That’s going to be quite some time, and we don’t have more to share yet, as we’re still early in the development process. But stay tuned here for more updates as we go!

We want to thank the Chan Zuckerberg Initiative (CZI) for their generous support for rearchitecting our platform. We wouldn’t be able to make all of these helpful updates, along with our back-end transformations, without it.

Appendix: Specific Examples

Important note: The “updated wireframes” shown here are not final designs. We have not yet completed a design pass for things like fonts, colors, spacing, and accessibility; this is just a quick functionality prototype so we could get early feedback. Even the functionality shown here may change as we develop based on additional feedback, technical challenges, or other issues identified. Additionally, these wireframes are mockups and do not have real data in them, so there may be inconsistent or incorrect info in affiliations, templates, etc; we were focused on the overall user interface in testing, not specific content.

For those who want more details and specific examples, here are a few of the top areas of confusion we found:

Sharing settings

There was sometimes confusion about how to share a plan with others, and about the distinction between a Project collaborator (e.g., another researcher on the grant who may not be involved in the DMP) and a DMP collaborator (e.g., a peer who is giving feedback on the DMP but not working on the project). The current live tool has both “Project Contributors” and “DMP Collaborators” on the same page, which we thought contributed to this confusion, so we wanted to separate those who can edit the DMP into their own Sharing section. However, testers had a hard time finding these sharing settings, and often went to the Collaborators page to grant DMP access. So we added a link to these settings where people were looking (the new section in the green box), added more detail to the sharing page about whether someone was invited or has access because they are a collaborator, and changed some language, such as “Collaborator” to “Project Member,” with the option to change access.


Current tool:

On the current tool, these two types of collaborators are on one page.

Initial wireframes:

The Collaborators page in the initial wireframes, which was part of the overall project details and was not related to sharing access to the DMP itself.
A separate Sharing page on the plan itself had sharing settings, and was completely distinct from Collaborators.

Updated wireframes:

This page was renamed to Project Members for clarity, with a link to the page for sharing access to the DMP since so many people looked for it here.
This page was updated to give more information and control on invitations, and to make clear if people were added on because of an invite or because they were a project collaborator.

Card layout

Many parts of the tool used a new, more modern card format for displaying lists of items to choose from.  This allowed us to show more information than in a list, and adapt to smaller screens. However, we saw in some areas that people had trouble scanning these cards to find what they were looking for, like a plan or template, when they expected to search in alphabetical order.

For example, picking a template in the first draft used a boxier card format. People found it harder to find the template they were looking for, since they wanted to quickly scan the titles vertically. So we changed it to a different format that should be easier to scan, even if it doesn’t show as many on one page. Note that we also now have the option to pick a template other than your funder’s, a common request in the current tool.

Current tool:

Currently, selecting your funder brings up a list of templates with no other information, and you can’t select a different template.

Initial wireframe:

This format allows more information if we want to add details that might help people pick the right template.

Updated wireframe:

This update still allows us to show more information, but the vertical layout means a person’s eyes can move in the same spot down the list to scan titles more easily if they know what they want.

Flow through the tool

People appreciated that they could move around more freely in the new design, as compared to the more linear format of the current tool. However, that also occasionally made people feel “lost” as to where they were in the process of writing a DMP, especially as there is now a “Project” level above each plan to support people who have multiple DMPs for the same research project. So we added more guidance, breadcrumbs, and navigation while still allowing freedom of movement throughout the process.

For example, while writing a plan, users will now be able to see the other sections available and understand where they are in the Project tree.  We also reduced some of the text on screen due to people feeling overwhelmed with information, putting some best practices behind links that people can visit if they wish to, and moved the Sample Answer people were most interested in to above the text box for better visibility.

Current tool:

The current tool has more distinct phases from writing a plan to publishing. In this view, a person is answering a single question and then would move on to the next.

Initial wireframe:

In our first draft, people clicked into each question rather than having them all on one expandable page. But people weren’t always sure where they were in the process or how to get back.

Updated wireframe:

We added the navigation seen on the left and top here to allow people to see what else is in the plan and more easily get to other sections or the Project. We are also still working on how to reduce how much text is on the screen at once, for example by minimizing the guidance, but this is not final. We also moved the sample text above the question and removed the answer library for now.

Layout changes

In addition, there were tons of small changes throughout, adjusting layouts, wordings, and the ordering of options in response to areas of confusion. In some places we scaled back a bit of functionality since the number of new options was overwhelming, while in other places we added a bit more that people needed.

In the first draft of the wireframes, the visibility settings of the plan were on the main overview page of the plan. This was concerning to users since they were still drafting at that stage, and even if they might want the plan public once they published it, the setting in this location made it seem like it was public now. Instead, we show visibility as a status on the overview page, but the setting itself does not come up until a person gets to the Publish step, somewhat like the current tool, which presents those options after the plan-writing stage.

Current tool:

Currently, setting visibility is later in the “Finalize” stage.

Initial wireframe:

In the first draft, the visibility settings were on the main plan page, which made people think the plan was public already, as opposed to it becoming public once published.

Updated wireframe:

The updated main page, with many changes based on feedback, including visibility as a status on the right, which isn’t set until it is published, and more control over changing project details per plan.
Now, visibility is set only once a person goes to publish their DMP.

We made a similar change to creating a template, moving the visibility settings to be selected in the publishing stage instead of sitting in a Template Options menu that people didn’t always see right away. They expected to set visibility at the time they published the template, so that’s where we moved the option, consistent with how the plan creation flow works.

Announcing the DMP Tool Rebuild

TL;DR

  • We’re starting work on an ambitious project rebuilding the DMP Tool application
  • The rebuilt tool, coming hopefully some time next year, will use machine-actionable structures for the whole DMP and have many new features
  • The current site will remain as it is until the new version is released, though we’re limiting work on it to resolving critical issues
  • Sign up for our newsletter to hear occasional updates about this work!

History of the DMP Tool

Over the past 13 years, the DMP Tool has grown from a grassroots tool beginning at 8 institutions to one that serves thousands of universities across multiple continents. We’ve had a few big milestones in that time, such as adding the ability to register a DMP-ID and publish a DMP publicly, and creating the admin interface to allow universities to provide custom guidance on templates. The tool started in response to new requirements from U.S. funders for data management plans (DMPs; also known as data management and sharing plans–DMSPs), and our growth follows the research and library communities’ needs in this area.

Adding Machine-Actionable Functionality

Now, it’s time for our next big milestone in the DMP Tool: fully machine-actionable data management and sharing plans (maDMSPs).  In 2022, the U.S. CHIPS and Science Act was signed into law, requiring DMPs submitted to the National Science Foundation (NSF) to be “machine-readable.”  Machine-readable, or actionable, means that information is structured in a way that enables automatic connections and transformations without the need for manual intervention.  

A screenshot excerpt from the CHIPS and Science Act of 2022 which reads "(b) DATA MANAGEMENT PLANS.— (1)IN GENERAL.—The Director shall require that every proposal for funding for research include a machine-readable data management plan that includes a description of how the awardee will archive and preserve public access to data, software, and code developed as part of the proposed project."
Excerpt from the CHIPS & Science Act, referring to NSF-funded research

On the current DMP Tool, some parts of the DMP have been made machine-actionable already, such as the DMP-ID and metadata. When you go to a registered DMP’s landing page, like this public plan for example, you see structured information like the title and contributors pulled from a database. Other systems can work with that information through our public API, allowing for integrations with various research applications.

Now, we want to make all parts of the DMP – such as the narrative responses to the questions describing the plan – machine-actionable, and open up more tooling to work with structured maDMSPs, as was outlined in a Dear Colleague letter in 2019.

There are many benefits to maDMSPs, such as:

  • Having persistent identifiers that allow tracking of data publications and connections to other PIDs, like ORCIDs, RORs, and DOIs
  • Creating opportunities for sharing information about DMPs between different campus units
  • Allowing integrations with research systems, like electronic lab notebooks, that can help researchers use DMPs in existing workflows
  • Establishing links to research outputs, like published datasets, that came from a DMP, to help link work and track compliance with the statements in a DMP

Rebuilding the DMP Tool

To implement these major changes, we realized that a significant overhaul of the current DMP Tool was needed to accommodate the new features and underlying structural changes. For years, the DMP Tool rebuild has been a regular discussion point; we’ve long recognized its areas for improvement and regularly fielded requests for specific features. However, our team of two had limited ability to implement many of our, and the community’s, grand ideas.

Fortunately, we were able to obtain funding from an NSF EAGER grant that allowed us to explore a rebuild of the application and develop the features needed to bring about these changes.

Our official rebuild work kicked off in April 2024 with a week-long workshop with our new team of consultants led by Paula Reeves from Reeves Branding and Zach Antony from Cazinc Digital. During that week, we dove into every aspect of the current application, mapping out existing features and brainstorming how to incorporate new ones. This included the machine-actionable data and formatting required for interoperability and the structured metadata needed to fuel the creation of machine-actionable data management plans. We reviewed the existing architecture, explored user personas, and redesigned workflows to facilitate project-centric planning. We also focused on building and customizing templates, adding guidance tools, and ensuring accessibility as we outlined development timelines and workflows for future phases. 

Photograph of seven team members at the in-person rebuild kickoff meeting
The seven team members at the rebuild kickoff meeting

We’re excited to also get in a few top feature requests as well as maDMSP functionality, though we will be rolling them out in stages and cannot get to everything.  Some of the areas we have currently prioritized include:

  • Additional API functionality, such as the ability to work with unpublished or in-progress DMPs
  • Ability to upload and register existing DMPs
  • Improved account management, such as being able to add secondary emails
  • Increased flexibility in creating templates, such as additional question types and streamlined ability to copy templates
  • Finding and connecting DMPs to published research outputs like datasets
  • Improved notification, comment, and feedback systems

Since the kick-off, the designers have been developing wireframes for the new tool, while we’ve added some new machine-actionable elements to the current DMP Tool for testing. We’ve been working with the Association of Research Libraries (ARL) on a pilot project with 10 institutions, funded by the Institute of Museum and Library Services, gathering feedback from their use of the tool and conducting interviews about their efforts developing local integrations. Our first visit was to Northwestern University, which you can read more about on ARL’s blog, with more coming soon.

What’s next

To stay focused on delivering this work, and due to the current site’s technological constraints, we will be limiting updates to the current application. We’ll prioritize resolving critical issues while logging feature requests as candidates for the new site.

We can’t wait to share more information over time about this project as it develops.  While it’s too early to announce a release date, we’re hopeful it will be sometime before the end of next year.  We recently wrapped up user testing on the wireframes, and will have a blog post coming soon about what we found.  We’ll also be sharing information at upcoming conferences, such as a talk at IDCC25 called “Piloting maDMSPs for Streamlined Research Data Management Workflows.”  Keep an eye on this space, and sign up for our newsletter, to hear occasional updates about this work!

We also want to thank the Chan Zuckerberg Initiative (CZI) for their generous support for rearchitecting our platform; the back-end transformations and refactoring activities were funded through this grant.

Roadmap back to school edition

Summer activities and latest (major 2.0.0) release

The DMPRoadmap team is checking in with an overdue update after rotating holidays and work travels over the past few months. We also experienced some core team staff transitions and began juggling some parallel projects. As a result we haven’t been following a regular development schedule, but we have been busy tidying up the codebase and documentation.

This post summarizes the contents of the major release and provides instructions for those with existing installations who will need to make some configuration changes in order to upgrade to the latest and greatest DMPRoadmap code. In addition to infrastructure improvements, we fixed some bugs and completed some feature enhancements. We appreciate the feedback and encourage you to keep it coming, since this helps us set priorities (listed on the development roadmap) and meet the data management planning needs of our increasingly international user community. On that note, we welcome Japan (National Institute of Informatics) and South Africa (NeDICC) as additional voices in the DMP conversation!

Read on for more details about all the great things packed into the latest release, as well as some general updates about our services and of course machine-actionable DMPs. The DCC has already pushed the release out to its services and the DMPTool will be upgrading soon – separate communications to follow. Those who run their own instances should check out the full release notes and a video tutorial on the validations and data clean-up (thanks Gavin!) to complete the upgrade.

DMPRoadmap housekeeping work (full release notes, highlights below)

  • Instructions for existing installations to upgrade to the latest release. Please read and follow these carefully to prevent any issues arising from invalid data. We highly recommend that you backup your existing database before running through these steps to prepare your system for Roadmap 2.0.0!
  • Added a full suite of automated unit tests to make it easier to incorporate external contributions and improve overall reliability.
  • Added data validations for improved data integrity.
  • Created new and revised existing documentation for coding conventions, tests, translations, etc. (GitHub wiki). We can now update existing translations and add new ones more efficiently.

DMPRoadmap new features and bug fixes

  • Comments are now visible by default without having to click ‘Show.’ Stay tuned for additional improvements to the plan comments functionality in upcoming sprints.
  • Renamed/standardized text labels for ‘Save’ buttons for clarity.
  • Added a button to download a list of org users as a csv file (Admin > ‘Users’ page)
  • Added a global usage report for total users and plans for all orgs (Admin > ‘Usage’ page)
  • Admins can create customized template sections and place them at the beginning or end of funder templates via drag-and-drop
  • Removed multi-select box as an answer format and replaced with multiple choice

DCC/DMPonline subscriptions [Please note: this does not apply to DMPTool users] Another recent change is in the DMPonline service delivery model. The DCC has been running DMP services for overseas clients for several years and is now transitioning the core DMPonline tool to a subscription model based on administrator access to the tool. The core functionality (developing, sharing and publishing DMPs) remains freely accessible to all, as well as the templates, guidance and user manuals we offer. We also remain committed to the Open Source DMPRoadmap codebase. The charges cover the support infrastructure necessary to run a production-level international service. More information is available for our users in a recent announcement. We’re also growing the support team to keep up with the requests we’re receiving. If you are interested in being at the cutting edge of DMP services and engaging with the international community to define future directions, please apply to join us!

Machine-actionable DMPs
Increasing the opportunities for machine-actionability of DMPs was one of the spurs behind the DMPRoadmap collaboration. Facilities already exist via the use of a number of standard identifiers, and we’re moving forward on both standards development and code development and testing.

The CDL has been prototyping for the NSF EAGER grant and started a blog series focused on this work (#1, #2, next installment forthcoming), with an eye to seeding conversations and sharing experiences as many of us begin to experiment in multiple directions. CDL prototyping efforts are currently separate from the DMPRoadmap project but will inform future enhancements.

We’re also attempting to inventory global activities and projects on https://activedmps.org/. Some updates for this page are in the works to highlight new requirements and tools. Please add any other updates you’re aware of! Sarah ran a workshop in South Africa in August on behalf of NeDICC to gather requirements for machine-actionable DMPs there, and the DCC will be hosting a visit from DIRISA in December. All the content from the workshop is on Zenodo and you can see how engaged the audience got in mapping out solutions. The DCC is also presenting on recent trends in DMPs as part of the OpenAIRE and FOSTER webinar series for Open Access Week 2018. The talk maps out the current and emerging tools from a European perspective. Check out the slides and video.

You can also check out the preprint and/or stop by the poster for ‘Ten Principles for Machine-Actionable DMPs’ at Force2018 in Montreal and the RDA plenary in Botswana. This work presents 10 community-generated principles to put machine-actionable DMPs into practice and realize their benefits. The principles describe specific actions that various stakeholders are already undertaking or should take.

We encourage everyone to contribute to the session for the DMP Common Standards working group at the next RDA plenary (Nov 5-8 in Botswana). There is community consensus that interoperability and delivery of DMP information across systems requires a common data model; this group aims to deliver a framework for this essential first step in actualizing machine-actionable DMPs.

New DMPTool launched today!

We’re delighted to announce a successful launch of DMPTool version 3 today. This milestone represents the convergence of the two most popular data management planning tools—US-based DMPTool and UK-based DMPonline—into a single, internationalized platform. We plan to bring the many other installations of the tool in Canada, Australia, South Africa, Argentina, and throughout Europe along for the ride as we work together to make DMPs a more useful exercise for everyone!

Currently the DMPTool supports 226 institutions and more than 28,000 users worldwide. The new DMPTool retains all of the existing functionality plus some handy new things, all in a shiny new package:

For everyone

For organizational administrators

  • New administrator help guide
  • Updated resources for promoting the DMPTool coming soon (stickers, postcards, and slide decks). Order new promo materials using the form below.
  • Institutional branding in the main banner (upload a new logo, provide contact information)
  • Create themed guidance that can be applied across all templates
  • A usage dashboard and report of plans created by users at your organization
  • Ability to view guidance and templates created by other organizations

Order form for new stickers and postcards – we’ll ship materials in early May 2018

Please report any issues or enhancement requests via GitHub Issues. Or you can always contact us directly! If you notice anything amiss with your existing plans and/or templates, let us know and we will fix it in short order.

On the right track(s) – DCC release draws nigh

blog post by Sarah Jones

Eurostar photo from Flickr by red hand records, CC-BY-ND

Preliminary DMPRoadmap out to test

We’ve made a major breakthrough this month, getting a preliminary version of the DMPRoadmap code out to test on DMPonline, DMPTuuli and DMPMelbourne. This has taken longer than expected but there’s a lot to look forward to in the new code. The first major difference users will notice is that the tool is now lightning quick. This is thanks to major refactoring to optimise the code and improve performance and scalability. We have also reworked the plan creation wizard, added multi-lingual support, ORCID authentication for user profiles, on/off switches for guidance, and improved admin controls to allow organisations to upload their own logos and assign admin rights within their institutions. We will run a test period for the next 1-2 weeks and then move this into production for DCC-hosted services.

Work also continues on additional features needed to enable the DMPTool team to migrate to the DMPRoadmap codebase. This includes enhancements to existing features, a statistics dashboard, an email notifications dashboard, a public DMP library, template export, creating plans and templates from existing ones, and flagging “test” plans (see the Roadmap to MVP on the wiki to track our progress). We anticipate this work will be finished in August and the DMPTool will migrate over the summer. When we issue the full release we’ll also provide a migration path and documentation so those running instances of DMPonline can join us in the DMPRoadmap collaboration.

Machine-actionable DMPs

Stephanie and Sarah are also continuing to gather requirements for machine-actionable DMPs. Sarah ran a DMP workshop in Milan last month where we considered what tools and systems need to connect with DMPs in an institutional context, and Stephanie has been working with Purdue University and UCSD to map out the institutional landscape. The goal is to produce maps/diagrams for two specific institutions and extend the exercise to others to capture more details about practices, workflows, and systems. All the slides and exercise from the DMP workshop in Milan are on the Zenodo RDM community collection, and we’ll be sharing a write-up of our institutional mapping in due course. I’m keen to replicate the exercise Stephanie has been doing with some UK unis, so if you want to get involved, drop me a line. We have also been discussing potential pilot projects with the NSF and Wellcome Trust, and have seen the DMP standards and publishing working groups proposed at the last RDA plenary host their initial calls. Case statements will be out for comment soon – stay tuned for more!

We have also been discussing DMP services with the University of Queensland in Australia who are doing some great work in this area, and will be speaking with BioSharing later this month about connecting up so we can start to trial some of our machine-actionable DMP plans.

The travelling roadshow

Our extended network has also been helping us to disseminate DMPRoadmap news. Sophie Hou of NCAR (National Center for Atmospheric Research) took our DMP poster to the USGS Community for Data Integration meeting (Denver, CO, 16–19 May) and Sherry Lake will display it next at the Dataverse community meeting (Cambridge, MA, 14–16 June). We’re starting an inclusive sisterhood of the travelling maDMPs poster. Display the poster, take a picture, and go into the Hall of Fame! Robin Rice and Josh Finnell have also been part of the street team taking flyers to various conferences on our behalf. If you would like a publicity pack, Stephanie will send one out stateside and Sarah will share through the UK and Europe. Just email us your contact details and we’ll send you materials. The next events we’ll be at are the Jisc Research Data Network in York, the EUDAT and CODATA summer schools, the DataONE Users Group and Earth Science Information Partners meetings (Bloomington, IN), the American Library Association Annual Conference (Chicago, IL), and the Ecological Society of America meeting (Portland, OR). Catch up with us there!

Roadmap retrospective: 2016

2016 in review

The past year has been a wild ride, in more ways than one… Despite our respective political climates, UC3 and DCC remain enthusiastic about our partnership and the future of DMPs. Below is a brief retrospective about where we’ve been in 2016 and a roadmap (if you will…we also wish we’d chosen a different name for our joint project) for where we’re going in 2017. Jump to the end if you just want to know how to get involved with DMP events at the International Digital Curation Conference (IDCC 2017, 20–23 Feb in Edinburgh, register here).

In 2016 we consolidated our UC3-DCC project team and our plans for the merged platform (see the roadmap to MVP), and began testing a co-development process that will provide a framework for community contributions down the line. We’re plowing through the list of features and adding documentation to the GitHub repo—all are invited to join us at IDCC 2017 for presentations and demos of our progress to date (papers, slides, etc. will all be posted after the event). For those not attending IDCC, please let us know if you have ideas, questions, or anything at all to contribute ahead of the event!

DMPs sans frontières

Now we’d like to take a minute and reflect on events of the past year, particularly in the realm of open data policies, and the implications for DMPs and data management writ large. The open scholarship revolution has progressed to a point where top-level policies mandate open access to the results of government-funded research, including research data, in the US, UK, and EU, with similar principles and policies gaining momentum in Australia, Canada, South Africa, and elsewhere. DMPs are the primary vehicle for complying with these policies, and because research is a global enterprise, awareness of DMPs has spread throughout the research community. Another encouraging development is the ubiquity of the term FAIR data (Findable, Accessible, Interoperable, Reusable), which suggests that we’re all in agreement about what we’re trying to achieve.

On top of the accumulation of national data policies, 2016 ushered in a series of related developments in openness that contribute to the DMP conversation. To name a few:

  • More publishers articulated clear data policies, e.g., Springer Nature Research Data Policies apply to over 600 journals.
  • PLOS and Wiley now require an ORCID for all corresponding authors at the time of manuscript submission to promote discoverability and credit. Funders—e.g., Wellcome Trust, Swedish Research Council, and US Department of Transportation—are also getting on the ORCID bandwagon.
  • The Gates Foundation reinforced support for open access and open data by preventing funded researchers from publishing in journals that do not comply with its policy, which came into force at the beginning of 2017; this includes non-compliant high-impact journals such as Science, Nature, PNAS, and NEJM.
  • Researchers throughout the world continued to circumvent subscription access to scholarly literature by using Sci-Hub (Bohannon 2016).
  • Library consortia in Germany and Taiwan canceled (or threatened to cancel) subscriptions to Elsevier journals because of open-access-related conflicts, and Peru canceled its subscriptions over a lack of government funding for expensive paid access (Schiermeier and Rodríguez Mega 2017).
  • Reproducibility continued to gain prominence, e.g., the US National Institutes of Health (NIH) Policy on Rigor and Reproducibility came into force for most NIH and AHRQ grant proposals received in 2016.
  • The Software Citation Principles (Smith et al. 2016) recognized software as an important product of modern research that needs to be managed alongside data and other outputs.

This flurry of open scholarship activity, both top-down and bottom-up, across all stakeholders continues to drive adoption of our services. DMPonline and the DMPTool were developed in 2011 to support open data policies in the UK and US, respectively, but today our organizations engage with users throughout the world. An upsurge in international users is evident from email addresses for new accounts and web analytics. In addition, local installations of our open source tools, as both national and institutional services, continue to multiply (see a complete list here).

Over the past year, the DMP community has validated our decision to consolidate our efforts by merging our technical platforms and coordinating outreach activities. The DMPRoadmap project feeds into a larger goal of harnessing the work of international DMP projects to benefit the entire community. We’re also engaged with some vibrant international working groups (e.g., Research Data Alliance Active DMPs, FORCE11 FAIR DMPs, Data Documentation Initiative DMP Metadata group) that have provided the opportunity to begin developing use cases for machine-actionable DMPs. So far the use cases encompass a controlled vocabulary for DMPs; integrations with other systems (e.g., Zenodo, Dataverse, Figshare, OSF, PURE, grant management systems, electronic lab notebooks); passing information to/from repositories; leveraging persistent identifiers (PIDs); and building APIs.
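To make that last use case a little more concrete, here’s a minimal sketch of what a client-side integration might look like once a common metadata format and API are in place. Everything in it is illustrative: the endpoint, the example DMP ID, and the JSON field names (loosely modelled on the draft common standard) are assumptions rather than a published interface.

```python
# Hypothetical sketch only: fetch a DMP from a service provider's API and pull
# out the dataset identifiers it references. The base URL, the example DMP ID,
# and the JSON field names are illustrative, not a published interface.
import requests

API_BASE = "https://dmp.example.org/api/v1"  # placeholder service


def fetch_dmp(dmp_id: str) -> dict:
    """Retrieve a single plan's machine-actionable metadata by its DMP ID (a DOI)."""
    response = requests.get(f"{API_BASE}/dmps/{dmp_id}", timeout=30)
    response.raise_for_status()
    return response.json()


def dataset_identifiers(record: dict) -> list[str]:
    """Collect the identifiers (e.g., DOIs) of the datasets described in the plan."""
    datasets = record.get("dmp", {}).get("dataset", [])
    return [d["dataset_id"]["identifier"] for d in datasets if "dataset_id" in d]


if __name__ == "__main__":
    plan = fetch_dmp("10.12345/example-dmp")  # placeholder identifier
    print(dataset_identifiers(plan))
```

The point of the sketch is simply that any system holding a DMP ID could resolve it to structured metadata and pull out the persistent identifiers it needs, which is exactly the kind of exchange the use cases above aim to standardize.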

2017: things to come

This brings us to outlining plans for 2017 and charting a course for DMPs of the future. DCC will be running the new Roadmap code soon, and once we’ve added everything from the development roadmap, we’ll announce the DMPTool’s migration plans. At IDCC we’ll kick off the conversation about bringing the many local installations of our tools along for the ride to actualize the vision of a core, international DMP infrastructure. A Canadian and a French team are our gracious guinea pigs for testing the draft external contributor guidelines.

IDCC DMP/BoF session

There will be plenty of opportunities to connect with us at IDCC. If you’re going to be at the main conference, we encourage you to attend our practice paper and/or join a DMP session we’ll be running in parallel with the BoFs on Wednesday afternoon, 22 Feb. The session will begin with a demo and update on DMPRoadmap; then we’ll break into two parallel tracks. One track will be for developers to learn more about recent data model changes and developer guidelines if they want to contribute to the code. The other track will be a buffet of DMP discussion groups. Given the overwhelming level of interest in the workshop (details below), one of these groups will cover machine-actionable DMPs. We’ll give a brief report on the workshop and invite others to feed into discussion. The other groups are likely to cover training/supporting DMPs, evaluation cribsheets for reviewing DMPs, or other topics per community requests. If there’s something you’d like to propose please let us know!

IDCC DMP utopia workshop

We’re also hosting a workshop on Monday, 20 Feb entitled “A postcard from the future: Tools and services from a perfect DMP world.” The focus will be on machine-actionable DMPs and how to integrate DMP tools into existing research workflows and services.

The program includes presentations, activities, and discussion to address questions such as:

  • Where and how do DMPs fit in the overall research lifecycle (i.e., beyond grant proposals)?
  • Which data could be fed automatically from other systems into DMPs (or vice versa)?
  • What information can be validated automatically?
  • Which systems/services should connect with DMP tools?
  • What are the priorities for integrations?

We’ve gathered an international cohort of diverse players in the DMP game—repository managers, data librarians, funders, researchers, developers, etc.—to continue developing machine-actionable use cases and craft a vision for a DMP utopia of the future. We apologize again that we weren’t able to accommodate everyone who wanted to participate in the workshop, but rest assured that we plan to share all of the outputs and will likely convene similar events in the future.

Keep a lookout for more detailed information about the workshop program in the coming weeks and feel free to continue providing input before, during, and afterward. This is absolutely a community-driven effort and we look forward to continuing our collaborations into the new year!

Finding our Roadmap rhythm

Image from page 293 of “The life of the Greeks and Romans” (1875) by Guhl, Koner, and Hueffer. Retrieved from the Internet Archive: https://archive.org/details/lifeofgreeksroma00guhl

In keeping with our monthly updates about the merged Roadmap platform, here’s the short and the long of what we’ve been up to lately:

Short update

Long(er) update

This month our main focus has been getting into a steady 2-week sprint groove that you can track on our GitHub Projects board. DCC/DMPonline is keen to migrate to the new codebase as soon as possible, so in preparation we’re revising the database schema and optimizing the code. This clean-up work not only makes things easier for our core development team, but will also facilitate community development efforts down the line. It also addresses some scalability issues that we encountered during a week of heavy use on the hosted instance of the Finnish DMPTuuli (thanks for the lessons learned, Finland!). We’ve also been evaluating dependencies and fixing the bugs introduced by the recent Rails and Bootstrap migrations.

Once things are in good working order, DMPonline will complete its migration and we’ll shift focus to adding new features from the MVP roadmap. The DMPTool won’t migrate to the new system until we’ve added everything on the list and conducted testing with our institutional partners from the steering committee. The UX team from the CDL is helping us redesign parts of the interface, with particular attention to internationalization and improving accessibility for users with disabilities.

The rest of our activities revolve around gathering requirements and refining use cases for machine-actionable DMPs. This runs the gamut from big-picture brainstorming to targeted work on features that we’ll implement in the new platform. The first step toward the latter is a collaboration with Substance.io to implement a new text editor (Substance Forms). The new editor offers increased functionality and a framework for future work on machine-actionability, and delivers a better user experience throughout the platform. In addition, we’re refining the DMPonline themes (details here)—we’re still collecting feedback and are grateful to all those who have weighed in so far. Sarah and I will consolidate community input and share the new set of themes during the first meeting of a DDI working group to create a DMP vocabulary. We plan to coordinate our work on the themes with this parallel effort—more details as things get moving on that front in Nov.

Future brainstorming events include PIDapalooza—come to Iceland and share your ideas about persistent identifiers in DMPs!—and the International Digital Curation Conference (IDCC) 2017 for which registration is now open. We’ll be presenting a Roadmap update at IDCC along with a demo of the new system. In addition, we’re hosting an interactive workshop for developers et al. to help us envision (and plan for) a perfect DMP world with tools and services that support FAIR, machine-actionable DMPs (more details forthcoming).

Two final pieces of info: 1) We’re still seeking funding to speed up progress toward building machine-actionable DMP infrastructure; we weren’t successful with our Open Science Prize application but are hoping for better news on an IMLS preliminary proposal (both available here). 2) We’re also continuing to promote greater openness with DMPs; one approach involves expanding the RIO Journal Collection of exemplary plans. Check out the latest plan from Ethan White, which also lives on GitHub, and send us your thoughts on DMP workflows and on publishing and sharing DMPs.