UC3 New Year Series: Data Management Planning in 2026

Cross-posted from our UC3 blog

Welcome to the second post in UC3’s New Year blog series, where each UC3 service takes a look at the coming year.  If you haven’t already read it, check out the first post, on digital preservation.

Over in the world of Data Management Planning, we’ve got a lot of exciting work to share this year!

DMP Tool Rebuild

Our main project continues to be the rebuild of the DMP Tool.  While we initially hoped to have it ready early this year, we’re now targeting the summer of 2026.  This gives us more time to make sure the new tool meets a high bar for quality, and it moves the release to a time that will hopefully be less disruptive to people who teach classes using the DMP Tool.  There’s a chance it will take longer than the summer, though; we’re focused on quality over speed.

We’ve done three rounds of user testing so far on the site, and each round has given us a lot of valuable information.  We’ve gotten a lot of positive feedback about new features we will be offering, such as alias email addresses, adding collaborators to templates, a revamped API, and much more.  Other changes, though, have caused some confusion for people used to the current tool, and through testing we have found opportunities to improve the workflow and usability of the new site.  These are the types of changes that mean the rebuild will take longer than initially planned, but that we think are worth the time to get right.


To keep updates about the rebuild in one place, we have a Rebuild Hub page on our blog.  We’ll keep this page up to date with the latest information about the release date, FAQ, status updates, and more.  We plan to make posts leading up to the new release showing the major changes and giving guidance to make the transition as seamless as possible.  If you’d like to help with testing at any point, please sign up for our user panel to get invitations to future feedback sessions.

As we’ve said before, we’re limiting updates to the current tool so we can focus our limited resources on the rebuild; but of course we also want to keep the tool live and helpful during the transition.  We’re fixing any major issues that come up, such as keeping it up to date with the new ROR API and schema, and addressing user tickets as quickly as possible.  We are trying to keep funder templates up to date as well, but the frequency of new information and potential changes has made it difficult to perfectly capture all updates to federal guidelines.  We want to make sure we have the most relevant information possible in the tool without changing templates too often (as frequent changes can cause organizations’ custom guidance to be lost), so we’ve been collecting updates from our Editorial Board members for a template release in the near future.  If you see any instances where a template in our tool does not match a funder template, please reach out to us by email so we can get it corrected.
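For context on the kind of affiliation lookup involved, here is a minimal, illustrative query against the public ROR v2 API.  This is a sketch for readers curious about the integration, not the DMP Tool’s internal code; the fields shown follow ROR’s published v2 record format.

    # Illustrative only: look up an organization in the ROR v2 registry,
    # the kind of affiliation search the DMP Tool relies on.
    import requests

    resp = requests.get(
        "https://api.ror.org/v2/organizations",
        params={"query": "California Digital Library"},
    )
    resp.raise_for_status()
    for org in resp.json()["items"][:3]:
        # v2 records carry a list of names; the one typed "ror_display" is canonical
        display = next(
            (n["value"] for n in org["names"] if "ror_display" in n.get("types", [])),
            "(no display name)",
        )
        print(org["id"], display)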

Get Involved with API Integrations

The rebuild brings with it a completely revamped API that takes advantage of our new machine-actionable functionality.  We’re currently looking for partners who would like early access to the new API in order to develop new integrations for the rebuild.  Our goal is for the new API to be able to do anything the user interface can do, which means the sky (or, more relevantly, the cloud) is the limit for possible tools.  If you’ve been wanting to connect to our API for some sort of automation that the current API doesn’t support, we’d love to hear from you. You can hear more about past pilot integrations and how to work with our API in this recording of our webinar from the Machine-Actionable Plans pilot project.  We’ll be following the common API standard being developed with the Research Data Alliance, meaning many integrations with our tool should work for other DMP service providers as well.  If you have an idea for an integration you’d like to build on our new API, please reach out to dmptool@ucop.edu.
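To give a flavor of what a machine-actionable integration could look like, here is a hedged sketch of reading a registered DMP serialized per the RDA DMP Common Standard.  The endpoint URL is a placeholder (the new API is not yet released), but the JSON shape follows the published standard.

    # A sketch, not the real API: the URL is a placeholder, and the field
    # access assumes the RDA DMP Common Standard JSON serialization.
    import requests

    resp = requests.get("https://example.org/api/dmps/10.48321/EXAMPLE")  # placeholder endpoint
    resp.raise_for_status()
    dmp = resp.json()["dmp"]  # the Common Standard wraps everything in a "dmp" object

    print(dmp["title"])
    print(dmp["dmp_id"]["identifier"])  # the registered DMP ID, e.g., a DOI
    for dataset in dmp.get("dataset", []):
        # each planned dataset can carry its own distribution and license details
        print("-", dataset["title"])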

Matching to Published Research Outputs

We’ve talked before about a major project to use machine learning models to help match DMPs to their eventual research outputs, like datasets and software publications, to help make data from published DMPs easier to find and re-use.  This work has continued and we plan to release it with the rebuilt DMP Tool.  Since our last update, we’ve made some significant steps towards this goal, including:

  • Moving the infrastructure onto our own servers to prepare for integration into the DMP Tool
  • Adding new sources of data, such as grant award pages that list published outputs
  • Getting the normalized corpus into OpenSearch to aid us in the matching process (see the sketch below)
  • Expanding our ground truth dataset of true matches and non-matches to help test our matching algorithm
  • Utilizing a Learning to Rank model that will improve over time as it learns from accepted and rejected matches
  • Building out the user interface for how users will see potential matches and accept or reject them
[Screenshot] New user interface in the rebuilt DMP Tool, showing a list of published research outputs that have been matched to a DMP. Each item in the list has “Accept” and “Reject” buttons, along with information about the work such as date found, source, and confidence of the match. Interface is subject to change before release.
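As a simplified illustration of the OpenSearch step listed above, a candidate search might combine lexical similarity on titles with signals like shared contributors and funders.  The index name, field names, and boosts below are our illustrative assumptions, not the production pipeline.

    # A hedged sketch of matching a DMP against a normalized corpus of
    # research outputs in OpenSearch; names and weights are illustrative.
    from opensearchpy import OpenSearch

    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

    dmp = {
        "title": "Data management plan for coastal erosion imaging",
        "contributors": ["Jane Researcher"],
        "funder": "National Science Foundation",
    }

    query = {
        "size": 10,
        "query": {
            "bool": {
                "should": [
                    # lexical similarity between the DMP title and output titles
                    {"match": {"title": {"query": dmp["title"], "boost": 2.0}}},
                    # shared people are strong evidence of a true match
                    {"terms": {"creators.keyword": dmp["contributors"]}},
                    {"match": {"funder": dmp["funder"]}},
                ],
                "minimum_should_match": 1,
            }
        },
    }

    for hit in client.search(index="normalized-outputs", body=query)["hits"]["hits"]:
        # each hit is a candidate; a Learning-to-Rank model can rescore these
        # using accepted/rejected matches as training labels
        print(hit["_score"], hit["_source"].get("title"))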

Improvements we plan to work on over 2026 include:

  • Adding in related outputs based on accepted outputs (i.e., finding matches to any Accepted works in addition to matching against the DMP itself)
  • Looking at options to improve the matching algorithm, such as vector search with an embedding model (sketched after this list)
  • Working with the COMET team on tooling that can extract award IDs from published outputs, which will improve the quality of matching to DMPs that include an award ID
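For the vector search option mentioned above, here is a hedged sketch of what an embedding-based query could look like, assuming output embeddings have already been indexed in a knn_vector field (again, all names are illustrative):

    # Encode the DMP text with a sentence-embedding model and run a k-NN
    # query against precomputed output embeddings. Illustrative only.
    from opensearchpy import OpenSearch
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would do
    dmp_vector = model.encode("Data management plan for coastal erosion imaging").tolist()

    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
    response = client.search(
        index="output-embeddings",  # assumes a knn_vector field named "embedding"
        body={
            "size": 10,
            "query": {"knn": {"embedding": {"vector": dmp_vector, "k": 10}}},
        },
    )
    for hit in response["hits"]["hits"]:
        print(hit["_score"], hit["_source"].get("title"))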

We’re excited for people to use this feature with the rebuild and start accepting and rejecting potential matches, so we can learn from those decisions and improve the matching algorithm further over time.  People will also be able to manually add DOIs as research outputs, like they can in the current tool, which will also help train the model over time on matches we missed.  This will be available for all DMPs that have been published, i.e., registered for a DMP ID.  Accepted works will be added to the plan’s metadata as related identifiers.

DMP Chef

Another exciting area we’re exploring is the use of generative AI to assist in writing Data Management Plans.  We’ve partnered with the FAIR Data Innovations Hub to work on the DMP Chef, a project exploring the use of large language models (LLMs) to draft DMPs.  Our goal is not to take the key decisions in data management planning away from the researcher, but instead to simplify the process as much as possible: ask a few critical questions, combine those responses with the funder requirements that need to be met, and use them to produce a draft DMP for the researcher’s review and edits.

We have promising early results, with both automated statistics and human evaluations showing that LLM-drafted DMPs can be comprehensive, accurate, and aligned with best practices.  Commercial models are performing better than the open-source models, but since we want to remain open source, we’re looking at ways to improve the open-source models through retrieval-augmented generation and other options.  And we’ll be testing carefully how accurate and helpful the output is, as well as looking at ways to help ensure researchers read and edit the plan as needed, rather than just accepting the output right away.
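To illustrate the general shape of retrieval-augmented generation in this setting, here is a toy sketch (not DMP Chef itself): retrieve the funder requirements most relevant to a researcher’s answers, then assemble them into a prompt for the model.  The requirement snippets and the final generation step are stand-ins.

    # A toy RAG loop: TF-IDF retrieval over funder-requirement snippets,
    # then prompt assembly. A real system would send the prompt to an LLM.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    requirements = [  # stand-in snippets of funder guidance
        "Describe the types of data the project will produce.",
        "Identify the repository where data will be deposited.",
        "Explain how privacy and confidentiality will be protected.",
    ]
    answers = "We will collect survey responses and deposit them in Dryad."

    # retrieve the two requirements most similar to the researcher's answers
    vectorizer = TfidfVectorizer().fit(requirements + [answers])
    scores = cosine_similarity(
        vectorizer.transform([answers]), vectorizer.transform(requirements)
    )[0]
    context = [r for r, s in sorted(zip(requirements, scores), key=lambda p: -p[1])[:2]]

    prompt = (
        "Draft the matching section of a data management plan.\n"
        "Funder requirements:\n- " + "\n- ".join(context) + "\n"
        "Researcher's answers: " + answers
    )
    print(prompt)  # in the real system this would go to an open-source LLM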

| DMP Source | Overall Satisfaction Rating (1-5) | Average Error Count per DMP | Accuracy in Guessing LLM vs. Human |
|---|---|---|---|
| Human | 3.1 | 7.2 | 65% |
| LLMs (combined) | 3.4 | 4.9 | 43% |
| Llama 3.3 | 2.6 | 7.5 | 70% |
| GPT-4.1 | 4.2 | 2.3 | 15% |

Results presented at the Research Data Alliance 2025 plenary, showing that GPT-4.1-generated DMPs received higher satisfaction ratings and fewer reported errors than human-written exemplar DMPs from NIH.  N = 20 participants rating a DMP from each source, for a total of 60 DMP ratings.

Over the course of 2026, we plan to keep testing and improving this model, starting with NIH and NSF plans.  The ultimate goal is a general use model that can be used within the DMP Tool for any funder to get a first draft of either a whole DMP or specific sections a researcher is struggling with.  We have a working prototype tool for DMP generation we will use for testing purposes, with integration into the DMP Tool planned for further out.  If you’d like to be part of testing out this new tool, please sign up for our user panel.

Thanks for reading about our major initiatives for the year!  Keep an eye out on this space for the next post in our series, about our 2026 plans for persistent identifiers.

We are grateful to the Institute of Museum and Library Services, the National Science Foundation, and the Chan Zuckerberg Initiative for each supporting core components of these initiatives.

MAP Pilot Project: New Resources and Report Available

TL;DR

The Machine Actionable Plans (MAP) Pilot project is currently in its final phase, providing institutions with resources to enable them to explore the potential uses of machine-actionable data management plans (maDMPs). The project webpage includes newly released resources including the final report, case studies, and key recommendations, as well as links to recorded webinars and other materials.

Pilot Overview

The pilot was funded by the Institute of Museum and Library Services (IMLS LG-254861-OLS-23) and grew out of a partnership between the California Digital Library and the Association of Research Libraries. Designed to address the urgent needs of academic libraries to meet increasing requirements for sharing research data, it explored the integration of maDMPs with existing research and IT systems. 

The pilot, discussed in past blog posts, worked directly with several institutions, providing the opportunity to take the infrastructure built by the DMP Tool and implement machine-actionable approaches in alignment with each organization’s goals. Each institution designed its own project with consideration given to local data management challenges and opportunities. Some focused on technical developments using API integrations, including automation and prototype tool builds, while others prioritized collaboration and relationship-building across departments in support of research data management. Partners found value not only in progressing pilots at their own institutions, but also in sharing learnings and outcomes across institutions, deepening insight into common challenges and opportunities and expanding collaborative relationships.

CDL’s Maria Praetzellis notes:

At California Digital Library (CDL), we collaborate with UC campus Libraries and other partners to amplify the academy’s capacity for innovation, knowledge creation and research breakthroughs. The MAP Pilot project is an excellent example of this being realized. We’ve seen so many examples of collaboration, innovation, and expertise resulting in impressive tangible solutions for institutions in the face of increasing challenges and opportunities. Even in cases where institutions were unable to advance a solution within the span of the pilot, they were able to explore new paths to doing so in the future, all while building meaningful connections across campus and obtaining clarity on paths forward to advance institutional strategic priorities. This work has been strongly representative of the kinds of innovation CDL strives to facilitate.

Another key aim of the MAP pilot was to gather feedback to inform improvements to the DMP Tool. This feedback focused on workflows for uploading existing plans, automatic linking of plans to related outputs, enhancing API integrations, and improving the overall user experience. The input from the pilot institutions was crucial for identifying gaps and shaping the design of new DMP Tool features, which will be incorporated in the upcoming DMP Tool Rebuild. CDL’s Becky Grady comments: 

Receiving feedback on the DMP Tool user interface and API during the course of the pilot was incredibly useful for its development. Our pilot partners provided important perspectives on their experience using the tool and the API, which informed key developments in our user interface redesign. The DMP Tool team feels more confident in our direction for continued development, now with greater clarity on the priorities to provide the biggest benefits for researchers and institutions.

Several new resources have been created for institutions, informed by key learnings from the pilot.

MAP Pilot Report

An overview report for the pilot provides background on the project, a summary of pilot activities and DMP Tool development, pilot observations, and key recommendations for institutions.

Case Studies

Pilot partners, including Arizona State University, Northwestern University, Pennsylvania State University, the University of California, Riverside, and the University of Colorado Boulder, share their pilot activities, learnings, and recommendations in a series of short case studies. 

Key Recommendations

A collection of short recommendation guides has been prepared for institutional stakeholder groups to support those exploring maDMPs. Guides are available for researchers, librarians, IT & Information Security departments, and grant offices. 

Several partner institutions are also preparing additional reports with more detail to be made available to the wider community. These will be listed on the MAP Pilot Project webpage as they become available. 

The MAP Pilot team hopes that institutions and DMP Tool administrators will find these resources useful in engaging with colleagues at their institution to explore the deep benefits that maDMPs can yield. They would like to thank all of the pilot institutions for their participation, collaboration, and generosity with their time in sharing their learnings with the community.

Announcing our Webinar Series: Insights from the Machine-Actionable Data Management Plans Pilot


Want to learn about how technological advancements in data management plans can benefit research at your university? Have you heard the term “machine-actionable” a lot but aren’t sure what it is or why it’s important? Are you looking for strategies to reduce burden on researchers and administrators in working on data management plans?


Join our free webinar series to learn from several US institutions that explored and piloted machine-actionable approaches to data management plans (DMPs).

Funded by the Institute of Museum and Library Services (award LG-254861-OLS-23), and led jointly by the California Digital Library (CDL) and the Association of Research Libraries (ARL), the Machine Actionable Plans (MAP) Pilot initiative enabled institutions to test and pilot data management plans that are machine-actionable and facilitate communication with other university research and IT systems. Each institution developed its own projects in alignment with its institutional mission, with its specific challenges and opportunities taken into consideration. The DMP Tool team also worked with pilot partners to test features and advance technical developments to improve usability, best practice adoption, compliance, and efficiency.

In this series of webinars, we invite librarians, administrators, data managers, and IT & security staff to find out more about what motivated these institutions to explore machine-actionable DMP integrations: what they did, how they did it, and what they learned. For those interested in the more technical aspects of integrations, some webinars will also cover the DMP Tool’s API, with more detailed implementation instructions and advice.

Webinar 1: Streamlining Research Support: Lessons from maDMP Pilots  

  • Tuesday, May 6, Noon EDT / 9:00 a.m. PDT
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

This webinar is for those looking to improve the efficiency, collaboration, and coordination of research support within their institutions. Learn from several institutions about their explorations of maDMP integrations to facilitate automated notifications for coordination across campus, and about how they used the pilot more broadly to facilitate discovery and collaboration within their institutions. This webinar will provide an overview of each institution’s activity, rather than detailed instructions about integrations.

Presenters include:  Katherine E. Koziar, Briana Wham, Matt Carson, Andrew Johnson

Register

Webinar 2: Creative Approaches for Seamless and Efficient Resource Allocation 

  • Tuesday, May 20, Noon EDT / 9:00 a.m. PDT
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

Don’t miss this webinar if you’re interested in new ways to enable efficient resource allocation. Institutions will share their experiences in leveraging maDMPs to develop integrations for automation systems that enable such allocations. This webinar will provide an overview of each institution’s activity, rather than detailed technical instructions about integrations.

Presenters include:  Katherine E. Koziar, Andrew Johnson

Register

Webinar 3: Five Technological Advancements in DMPs to Benefit Your Organization 

  • Tuesday, June 3, Noon EDT / 9:00 a.m. PDT 
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

If you’re interested in emerging technologies within the pilot project and the DMP Tool and how they can help your institution expedite research sharing, compliance, and operational efficiency, this webinar will provide a strong introduction. We’ll also hear from pilot partners about promising AI developments related to reviewing DMPs, and will hear more detail on technical advancements coming to the DMP Tool based on feedback from the pilot. 

Presenters include:  Jim Taylor, Becky Grady

Register

Webinar 4: How to Implement Machine-Actionable DMPs at Your Institution

  • Tuesday, June 17, Noon EDT / 9:00 a.m. PDT 
  • Duration: 1 hour, with an optional additional 15 minutes for Q & A

If you want to find out more about specific integrations and how to implement maDMPs, this webinar is for you. Hear from the DMP Tool team about the API, common challenges and how to overcome them, and actionable recommendations for campus buy-in.

Presenters include:  Becky Grady, Brian Riley

Register

UC3 New Year Series: Looking Ahead through 2025 for the DMP Tool

We’re gearing up for a big year over at the DMP Tool!  Thousands of researchers and universities across the world use the DMP Tool to create data management plans (DMPs) and keep up with funder requirements and best practices.  As we kick off 2025, we wanted to share some of our major focus areas to improve the application, introduce powerful new capabilities, and engage with the wider community.  We always want to be responsive to evolving community needs and policies, so these plans could change if needed.

New DMP Tool Application

Our primary goal for the year is to launch the rebuild of the DMP Tool application.  You can read more about this work in this blog post, but it will include the current functionality of the tool plus much more, still as a free, easy-to-use website.  The plan is still to release by the end of 2025, likely in the later months (no exact date yet).  We’re making good progress towards a usable prototype of core functionality, like creating an account and making a template with basic question types.

In-development screenshot of account profile page in the new tool. Page is not final and is subject to change.
In-development screenshot of editing a template in the new tool. Page is not final and is subject to change.

Another common request is to offer more functionality within our API.  For example, people can already read registered DMPs through the API, but many librarians want access to draft DMPs so they can integrate a feedback flow into their own university systems.  As part of our rebuild, we are moving to a system where the website uses the same API as the one available to external partners (GraphQL, for those interested).  This will allow almost any functionality on the website to be available through the API.  It should be released at the same time as the new tool, with documentation and training to come. Get your integration ideas ready!
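As a speculative example of what that could enable, an external system might fetch draft plans for a feedback workflow with a single GraphQL query.  The endpoint, field names, and auth scheme below are our guesses for illustration; the real schema will come with the documentation.

    # Speculative sketch: everything here is a placeholder until the new
    # GraphQL API and its documentation are released.
    import requests

    query = """
    {
      plans(status: DRAFT) {        # hypothetical field: draft DMPs visible to you
        title
        modified
        contributors { name }
      }
    }
    """

    resp = requests.post(
        "https://dmptool.example.org/graphql",            # placeholder URL
        json={"query": query},
        headers={"Authorization": "Bearer <api-token>"},  # auth details TBD
    )
    print(resp.json())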

Finally, we are continuing to work on our related-works matching: tracking down published outputs and connecting them to a registered DMP.  This is part of an overall effort to make DMPs more valuable throughout the lifecycle of a project, not just at the grant submission stage, and to reduce the burden on researchers, librarians, and funders of connecting information within research projects.  It’s too early to tell when this will be released publicly on the website, but it will likely come some time after the rebuild launch.

AI Exploration

While most of our focus will be on the above projects, we are in the early stages of exploring topics for future development of the DMP Tool.  One big area is in the use of generative AI to assist in reviewing or writing data management plans.  We’ve heard interest from both researchers and librarians in using AI to help construct plans.  People sometimes write their DMP the night before a grant is due and request feedback without enough time for librarians to provide it.  AI could help review these plans, if trained on relevant policy, to give immediate feedback when there’s not enough time for human review.

We’re also interested in exploring the possibility of an AI assistant to help write a DMP.  We know many people are more comfortable answering a series of multiple choice questions than they are in crafting a narrative, and it’s possible we could help turn that structured data into the narrative format that funders require, making it easier for researchers to write a plan and keeping the structured data for machine actionability. Another option is an AI chatbot within the tool that can help provide our best practice guidance in a more interactive format.  It will be important for us to balance taking some of the writing burden off of researchers while making sure that they are still the one responsible for the content within it.

These ideas are in the early phases: we’ll be exploring them with some external partners but likely not releasing anything to the public this year.  Still, we’re excited about their potential to make best-practice DMPs easier to create.

Community Engagement

While we’ll sometimes be heads down working on these big projects, we also want to make sure we’re communicating with and participating in the wider community more than ever.  As we get towards a workable prototype of the new tool, we’ll be running more user research sessions.  The initial sessions, reviewed here, offered a lot of valuable insight that shaped the current designs, and we know once people get their hands on the new tool they’ll have more feedback.  If you haven’t already, sign up here to be on the list for future invites.

We also want to be more transparent with the community about our operations and goals.  We’ve started putting together documents within our team about our Mission and Vision for the DMP Tool, which we’ll be sharing with everyone shortly.  Over 2025, we want to continue to work on artifacts like these that we can share regularly, so that you all know what our priorities are.  One goal is to create a living will, recommended by the Principles of Open Scholarly Infrastructure, outlining how we’d handle a potential winddown of CDL’s management of the DMP Tool.  This is a sensitive area because we have no plans to wind down the tool, and don’t want to give the impression that it’s going away!  But it’s important for trust and transparency to have a plan in place if things change, as we know people care about the tool and their data within it.

Finally, we’ll be wrapping up our pilot project with ARL this year, in which 10 institutions piloted implementation of machine-actionable DMPs at their universities.  We’ve seen prototypes and mockups for integrations related to resource allocation, interdepartmental communication, security policies, AI review, and much more. We’ve brought on Clare Dean to help us create resources and toolkits, disseminate the findings, and host a series of webinars about what we’ve learned to help others implement at their own universities.  We’ll be presenting talks on the DMP Tool at IDCC25 in February and RDAP in March, and we plan to submit to other conferences throughout the year, including IDW/RDA in October, to share what we’ve learned. We hope to continue working with DMP-related groups in RDA to ensure our work is compatible with others in the space and that we’re following best practices for API development.

We hope you’re as excited for these projects as we are!  We’re a small team but we work with many amazing partners that help us achieve ambitious goals.  Keep an eye on this space for more to come.

Behind the Scenes: Insights from User Testing the new DMP Tool Designs

TL;DR

  • The rebuild of the technology behind the DMP Tool offered a chance to refresh the user interface
  • We conducted 12 user testing sessions to have real users walk through wireframes of our new tool designs to offer feedback and find issues
  • People liked the new designs but had a number of small points of confusion around aspects like sharing and visibility settings
  • We made tons of small changes based on feedback and continue to make updates for better usability
  • Fill out this short form to have the option to join future feedback sessions

Why we needed new designs

As mentioned in our last blog post, the team behind the DMP Tool has been working on a rebuild of the application to improve usability, add new features, and provide additional machine-actionable functionality.  Providing all of this advanced functionality required a pretty big overhaul of the technology behind the DMP Tool, and it was a good time to give the design a more modern upgrade as well, adding new capabilities while hopefully making existing features easier to use.

A graphic showing a Machine-Actionable DMP connected to nodes that say Compliance, Integrity, Guidance, Tracking, and Scalability

How we made the first drafts and tested them

Over the past few months, we’ve worked closely with a team of designers to create interactive wireframes—prototype mockups that allow us to test potential updates to the user interface without fully developing them. These wireframes are crucial for gathering feedback from real users early, ensuring that our vision for a better tool meets their expectations.  While a lot of thought and planning went into these initial designs, we wanted to make sure people were finding the new site as easy and intuitive as possible, while still offering new, more intricate features.

To do this, we recruited three groups of people, 12 total, who work on different parts of the tool to test out these designs:

  • 5 researchers, who would be writing DMPs in the tool
  • 4 organizational administrators, who would be adding guidance to templates in the tool
  • 3 members of the editorial board or funder representatives, who would be creating templates in the tool

We recruited volunteers from the pilot project members, from our editorial board, from social media, and from asking those we recruited to share the invitation with others. We conducted virtual interviews with each person individually, where we let them explore the wireframe for their section, gave them tasks to complete (e.g., “Share this DMP with someone else”), and asked questions about their experience.  For the most part we let people walk through the wireframes as if they were using it for real, thinking out loud about what they were experiencing and expecting.

What we found from testing

It was illuminating for the team to see live user reactions in these sessions, and to watch people use this new tool we’re excited to keep working on.

We loved hearing users say how excited they were about a particular new feature or how much they liked a new page style.  At times it could be disheartening to watch a user fail to find something we thought was easy to access, but those findings are even more important because they point to an area we can improve.  We wrote a report on the findings after each user group and worked with the designers on how to address the pain points.  Sometimes the solution was straightforward; other times we wrestled with different options for weeks after testing.

Overall, we found that people liked the new designs and layout and could get through most tasks successfully.  They appreciated the more modern layout and additional options. But there were many areas that the testers identified as confusing or unclear.  There are specific examples, with before-and-after screenshots, in the Appendix.  Some of the top changes made revolved around the following areas:

  • Decreasing the amount of text in areas that felt overwhelming, moving less important information to other pages or collapsing it by default
  • Adding some text to areas that were particularly unclear, such as what selecting “Tags” for a template question would do
  • Connecting pages if people consistently went somewhere else, such as adding a link to sharing settings on the Project Members page since that’s where people looked for it first
  • Hiding some features until they’re needed, such as making Visibility settings an option in the publishing step rather than the drafting step
  • Clarifying language throughout, such as distinguishing whether “Question Requirements” referred to what the plan writer must include when creating their DMP, or to the template creator marking a question as required or giving it display logic
  • Adding preview options when creating a template or adding guidance, so creators can see what a question or section will look like to a user writing a DMP
  • Making certain buttons more prominent when they were the primary action on a page, like downloading a completed DMP, which was originally hard to find

Even though the main structure worked well for people, these small issues would have added up to a lot more confusion and obstacles for users if we hadn’t identified them before release.

Wrapping up and moving forward

The whole team learned a ton from these sessions, and we’re grateful to all the participants who signed up and gave their time to help us improve the tool.  This sort of testing was invaluable to find areas to improve – we made dozens, if not hundreds, of small and large changes to the wireframes based on this testing, and we hope it’s now much better than it was originally. We’re still working on updates as we build our designs for more areas of the site, but feel better now about our core functionality.

If you’d like to be invited to participate in surveys, interviews, or other feedback opportunities like this for the DMP Tool, please fill out this brief form: Feedback Panel Sign-Up. For anyone who signed up but wasn’t selected for this round, we may reach out in the future!

We loved seeing how excited people are about this update, and we can’t wait to share more.  The most common question we get is: when is it releasing?  That’s still quite some time away, and we don’t have more to share yet, as we’re still early in the development process.  But stay tuned here for more updates as we go!

We want to thank the Chan Zuckerberg Initiative (CZI) for its generous support for rearchitecting our platform. We wouldn’t be able to make all of these helpful updates, along with our back-end transformations, without it.

Appendix: Specific Examples

Important note: The “updated wireframes” shown here are not final designs. We have not yet completed a design pass for things like fonts, colors, spacing, and accessibility; this is just a quick functionality prototype so we could get early feedback. Even the functionality shown here may change as we develop, based on additional feedback, technical challenges, or other issues identified. Additionally, these wireframes are mockups and do not contain real data, so there may be inconsistent or incorrect info in affiliations, templates, etc.; we were focused on the overall user interface in testing, not specific content.

Sharing settings

For those who want some more details and specific examples, here are a few of the top areas of confusion we found:

There was sometimes confusion about how to share a plan with others, and about the distinction between a Project collaborator (e.g., another researcher on the grant who may not be involved in the DMP) and a DMP collaborator (e.g., a peer who is giving feedback on the DMP but not involved in the project).  The current live tool has both “Project Contributors” and “DMP Collaborators” on the same page, which we thought contributed to this confusion, so we wanted to separate those who can edit the DMP into their own Sharing section.  However, testers had a hard time finding these sharing settings, and often went to the Collaborators page to grant DMP access.  So we added a link to these settings where people were looking (the new section in the green box), added more detail to the sharing page about whether someone was invited directly or had access as a collaborator, and changed some of the language, like “Collaborator” to “Project Member,” with the option to change access.


Current tool:

On the current tool, these two types of collaborators are on one page.

Initial wireframes:

The Collaborators page in the initial wireframes, which was part of the overall project details and was not related to sharing access to the DMP itself.
A separate Sharing page on the plan itself had sharing settings, and was completely distinct from Collaborators.

Updated wireframes:

This page was renamed to Project Members for clarity, with a link to the page for sharing access to the DMP since so many people looked for it here.
This page was updated to give more information and control on invitations, and to make clear if people were added on because of an invite or because they were a project collaborator.

Card layout

Many parts of the tool used a new, more modern card format for displaying lists of items to choose from.  This allowed us to show more information than in a list, and adapt to smaller screens. However, we saw in some areas that people had trouble scanning these cards to find what they were looking for, like a plan or template, when they expected to search in alphabetical order.

For example, picking a template in the first draft used a boxier card format. People found it harder to find the template they were looking for, since they wanted to quickly scan the titles vertically.  So we changed to a different format that should be easier to scan, even if it doesn’t show as many cards on one page.  Note that we now also offer the option to pick a template other than your funder’s, a common request in the current tool.

Current tool:

Currently, selecting your funder brings up a list of templates with no other information, and you can’t select a different template.

Initial wireframe:

This format allows more information if we want to add details that might help people pick the right template.

Updated wireframe:

This update still allows us to show more information, but the vertical layout means a person’s eyes can move straight down the list to scan titles more easily when they know what they want.

Flow through the tool

People appreciated that they could move around more freely in the new design, compared to the more linear format of the current tool. However, that also occasionally made people feel “lost” as to where they were in the process of writing a DMP, especially as there is now a “Project” level above each plan to support people who have multiple DMPs for the same research project.  So we added more guidance, breadcrumbs, and navigation while still allowing freedom of movement throughout the process.

For example, while writing a plan, users will now be able to see the other sections available and understand where they are in the Project tree.  We also reduced some of the text on screen, since people felt overwhelmed with information, putting some best practices behind links that people can visit if they wish, and moved the Sample Answer, which people were most interested in, above the text box for better visibility.

Current tool:

The current tool has more distinct phases from writing a plan to publishing. In this view, a person is answering a single question and then would move on to the next.

Initial wireframe:

In our first draft, people clicked into each question rather than having everything on one expandable page. But people weren’t always sure where they were in the process or how to get back.

Updated wireframe:

We added the navigation seen on the left and top here to allow people to see what else is in the plan and more easily get to other sections or the Project. We are also still working on how to reduce how much text is on the screen at once, for example by minimizing the guidance, but this is not final. We also moved the sample text above the question and removed the answer library for now.

Layout changes

In addition, there were tons of small changes throughout: layouts, wordings, and the ordering of options all shifted in response to areas of confusion.  In some places we scaled back a bit of functionality because the number of new options was overwhelming, while in others we added a bit more that people needed.

In the first draft of the wireframes, the visibility settings of the plan were on the main overview page.  This was concerning to users, since they were still drafting at this stage; even if they might want the plan public once published, the setting in this location made it seem like it was public now.  Instead, we added a status to the overview page, and the visibility setting does not come up until a person gets to the Publish step, somewhat like the current tool, which presents those options after the plan-writing stage.

Current tool:

Currently, setting visibility is later in the “Finalize” stage.

Initial wireframe:

In the first draft, the visibility settings were on the main plan page, which made people think the plan was public already, rather than that it would become public once published.

Updated wireframe:

The updated main page, with many changes based on feedback, including visibility as a status on the right, which isn’t set until it is published, and more control over changing project details per plan.
Now, visibility is set only once a person goes to publish their DMP.

We made a similar change to creating a template, moving the visibility settings into the publishing stage instead of a Template Options menu people didn’t always see right away.  Testers expected to set visibility at the time they published, so that’s where we moved the option, consistent with how the plan creation flow works.

Roadmap retrospective: 2016

2016 in review

The past year has been a wild ride, in more ways than one… Despite our respective political climates, UC3 and DCC remain enthusiastic about our partnership and the future of DMPs. Below is a brief retrospective about where we’ve been in 2016 and a roadmap (if you will…we also wish we’d chosen a different name for our joint project) for where we’re going in 2017. Jump to the end if you just want to know how to get involved with DMP events at the International Digital Curation Conference (IDCC 2017, 20–23 Feb in Edinburgh, register here).

In 2016 we consolidated our UC3-DCC project team, our plans for the merged platform (see the roadmap to MVP), and began testing a co-development process that will provide a framework for community contributions down the line. We’re plowing through the list of features and adding documentation to the GitHub repo—all are invited to join us at IDCC 2017 for presentations and demos of our progress to date (papers, slides, etc. will all be posted after the event). For those not attending IDCC, please let us know if you have ideas, questions, anything at all to contribute ahead of the event!

DMPs sans frontières

Now we’d like to take a minute and reflect on events of the past year, particularly in the realm of open data policies, and the implications for DMPs and data management writ large. The open scholarship revolution has progressed to a point where top-level policies mandate open access to the results of government-funded research, including research data, in the US, UK, and EU, with similar principles and policies gaining momentum in Australia, Canada, South Africa, and elsewhere. DMPs are the primary vehicle for complying with these policies, and because research is a global enterprise, awareness of DMPs has spread throughout the research community. Another encouraging development is the ubiquity of the term FAIR data (Findable, Accessible, Interoperable, Reusable), which suggests that we’re all in agreement about what we’re trying to achieve.

On top of the accumulation of national data policies, 2016 ushered in a series of related developments in openness that contribute to the DMP conversation. To name a few:

  • More publishers articulated clear data policies, e.g., Springer Nature Research Data Policies apply to over 600 journals.
  • PLOS and Wiley now require an ORCID for all corresponding authors at the time of manuscript submission to promote discoverability and credit. Funders—e.g., Wellcome Trust, Swedish Research Council, and US Department of Transportation—are also getting on the ORCID bandwagon.
  • The Gates Foundation reinforced support for open access and open data by preventing funded researchers from publishing in journals that do not comply with its policy, which came into force at the beginning of 2017; this includes non-compliant high-impact journals such as Science, Nature, PNAS, and NEJM.
  • Researchers throughout the world continued to circumvent subscription access to scholarly literature by using Sci-Hub (Bohannon 2016).
  • Library consortia in Germany and Taiwan canceled (or threatened to cancel) subscriptions to Elsevier journals because of open-access related conflicts, and Peru canceled over a lack of government funding for expensive paid access (Schiermeier and Rodríguez Mega 2017).
  • Reproducibility continued to gain prominence, e.g., the US National Institutes of Health (NIH) Policy on Rigor and Reproducibility came into force for most NIH and AHRQ grant proposals received in 2016.
  • The Software Citation Principles (Smith et al. 2016) recognized software as an important product of modern research that needs to be managed alongside data and other outputs.

This flurry of open scholarship activity, both top-down and bottom-up, across all stakeholders continues to drive adoption of our services. DMPonline and the DMPTool were developed in 2011 to support open data policies in the UK and US, respectively, but today our organizations engage with users throughout the world. An upsurge in international users is evident from email addresses for new accounts and web analytics. In addition, local installations of our open source tools, as both national and institutional services, continue to multiply (see a complete list here).

Over the past year, the DMP community has validated our decision to consolidate our efforts by merging our technical platforms and coordinating outreach activities. The DMPRoadmap project feeds into a larger goal of harnessing the work of international DMP projects to benefit the entire community. We’re also engaged with some vibrant international working groups (e.g., Research Data Alliance Active DMPs, FORCE11 FAIR DMPs, Data Documentation Initiative DMP Metadata group) that have provided the opportunity to begin developing use cases for machine-actionable DMPs. So far the use cases encompass a controlled vocabulary for DMPs; integrations with other systems (e.g., Zenodo, Dataverse, Figshare, OSF, PURE, grant management systems, electronic lab notebooks); passing information to/from repositories; leveraging persistent identifiers (PIDs); and building APIs.

2017 things to come

This brings us to outlining plans for 2017 and charting a course for DMPs of the future. DCC will be running the new Roadmap code soon. And once we’ve added everything from the development roadmap, the DMPTool will announce our plans for migration. At IDCC we’ll kick off the conversation about bringing the many local installations of our tools along for the ride to actualize the vision of a core, international DMP infrastructure. A Canadian and a French team are our gracious guinea pigs for testing the draft external contributor guidelines.

IDCC DMP/BoF session

There will be plenty of opportunities to connect with us at IDCC. If you’re going to be at the main conference, we encourage you to attend our practice paper and/or join a DMP session we’ll be running in parallel with the BoFs on Wednesday afternoon, 22 Feb. The session will begin with a demo and update on DMPRoadmap; then we’ll break into two parallel tracks. One track will be for developers to learn more about recent data model changes and developer guidelines if they want to contribute to the code. The other track will be a buffet of DMP discussion groups. Given the overwhelming level of interest in the workshop (details below), one of these groups will cover machine-actionable DMPs. We’ll give a brief report on the workshop and invite others to feed into discussion. The other groups are likely to cover training/supporting DMPs, evaluation cribsheets for reviewing DMPs, or other topics per community requests. If there’s something you’d like to propose please let us know!

IDCC DMP utopia workshop

We’re also hosting a workshop on Monday, 20 Feb entitled “A postcard from the future: Tools and services from a perfect DMP world.” The focus will be on machine-actionable DMPs and how to integrate DMP tools into existing research workflows and services.

The program includes presentations, activities, and discussion to address questions such as:

  • Where and how do DMPs fit in the overall research lifecycle (i.e., beyond grant proposals)?
  • Which data could be fed automatically from other systems into DMPs (or vice versa)?
  • What information can be validated automatically?
  • Which systems/services should connect with DMP tools?
  • What are the priorities for integrations?

We’ve gathered an international cohort of diverse players in the DMP game—repository managers, data librarians, funders, researchers, developers, etc.—to continue developing machine-actionable use cases and craft a vision for a DMP utopia of the future. We apologize again that we weren’t able to accommodate everyone who wanted to participate in the workshop, but rest assured that we plan to share all of the outputs and will likely convene similar events in the future.

Keep a lookout for more detailed information about the workshop program in the coming weeks and feel free to continue providing input before, during, and afterward. This is absolutely a community-driven effort and we look forward to continuing our collaborations into the new year!

NIH Policy on Rigor and Reproducibility

You’ve all heard about the reproducibility crisis in science. But you may not be aware of a (relatively) new National Institutes of Health (NIH) policy designed to address the issue. The NIH Policy on Rigor and Reproducibility became effective for proposals received on or after January 25, 2016 and applies to most NIH and Agency for Healthcare Research and Quality (AHRQ) grant applications. We just learned about the policy ourselves thanks to the combined efforts of UCSD library and research staff to raise awareness on their campus (and here’s a noteworthy mention in a Nature review of 2015 science news). To aid researchers in meeting the new criteria, UCSD produced this handy guide that we (and they) would like to share with the wider community.

The new policy does not involve any changes to data sharing plans. It is related and important enough, however, that we inserted a statement and link in the “NIH-GEN: Generic” template (Please note the Rigor and Reproducibility requirements that involve updates to grant application instructions and review criteria [but not Data Sharing Plans]).

The policy does involve:

  • Revisions to application guide instructions for preparing your research strategy attachment
  • Use of a new “Authentication of Key Biological and/or Chemical Resources” attachment (example from UCSD library website)
  • Additional rigor and transparency questions reviewers will be asked to consider when reviewing applications

These policies are all meant to achieve basically the same goals: to promote openness, transparency, reproducibility, access to, and reuse of the results of scientific research. We’re grateful to the folks at UCSD—Dr. Anita Bandrowski, Ho Jung Yoo, and Reid Otsuji—for helping to consolidate the message and for providing some new educational resources.

Roadmaps galore

Data management planning is moving and shaking at all scales—local, national, international—these days. We had excellent conversations at IDCC about coordinating responses to proliferating data policies and sharing experiences across borders and disciplines. All of the slides and materials from the international DMP workshop are available here.

So far the community has responded positively to our proposal for building a global infrastructure for all things DMP. Our big-picture plans include a merged platform based on the DMPonline codebase and incorporating recent internationalization work by the Portage Network in Canada (check out their bilingual DMP Assistant). We’re completing a gap analysis to add existing functionality from the DMPTool to DMPonline and will issue a joint roadmap in the coming months. Drawing together these disparate development efforts also presents an opportunity to set best practices for future work (stay tuned). This will allow us to consolidate value upstream and ensure maximum benefits to the entire community.

To facilitate our capacity-building efforts, we submitted a proposal entitled (what else) “Roadmap” to the Open Science Prize. You can read the Executive Summary on their website here and peruse the full proposal here (also view our snazzy promo video below). The prize seemed like the perfect opportunity to reposition DMPs as living documents using the biomedical research community as a pilot group. We’ll know by the end of April whether our bid is successful. Regardless of the outcome, we would love to know what you think about the proposal.

And finally, an update on the near-future roadmap for the DMPTool. We just added some new API calls in response to requests for more usage statistics and to facilitate integration projects with other data management systems. Admins can now get info about templates used to create plans at their institution (including private plans!) and a list of institutional templates. Check out the updated documentation on the GitHub wiki. The next order of business is working through the backlog of bug fixes. You can follow our progress in the GitHub issue tracker. Once the bugs are eliminated, we’ll circle back to high priority feature enhancements that contribute to our long-range plans.
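For the curious, a call to the new statistics endpoints might look something like the sketch below; the route, auth scheme, and field names here are placeholders, so check the GitHub wiki for the real documentation.

    # Illustrative only: fetch institutional templates via the DMPTool API.
    # Route and field names are placeholders; see the GitHub wiki docs.
    import requests

    headers = {"Authorization": "Token token=<your-api-key>"}  # placeholder auth scheme
    resp = requests.get("https://dmptool.org/api/v1/templates", headers=headers)
    resp.raise_for_status()
    for template in resp.json():  # assumes a JSON list of templates
        # hypothetical fields: a template's title and how many plans used it
        print(template.get("title"), template.get("plans_count"))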


DMPs are going global

…well international at least, with global aspirations. The US-based DMPTool and UK-based DMPonline have collaborated from the beginning to provide data management planning services and training on our respective sides of the pond. As more and more funders, institutions, and nations—the entire EU, for instance—adopt data sharing policies, we find ourselves supporting data management planning initiatives farther and wider.

To meet the exploding demand and facilitate connecting the dots (e.g., promoting community standards for DMPs), we’ve decided to formalize our partnership and move toward a single platform for all things DMP. You can learn more about our evolving partnership in this joint paper that we’ll be presenting at the International Digital Curation Conference (IDCC) at the end of Feb. Stay tuned for updates about a joint roadmap and timeline in the coming months. Our individual roadmaps will remain in place for now.

As always, we invite your feedback! And if you happen to be attending IDCC, consider joining us and the DART Project for an international DMP workshop on Thurs, Feb 25 (registration info).


NASA’s Global Digital Selfie 2014 http://www.gigapan.com/gigapans/155294

Hang A DMPTool Poster!

In addition to working hard on the new version of the DMPTool (to be released in May), we are also working on outreach and education materials that promote the use of the DMPTool. Our latest addition to these materials is a generic poster about the DMPTool, including information about what’s to come in the new version. You can download a PDF version, or a PPTX version that you can customize for your institution. We plan on updating this poster when the new version of the DMPTool is released, so keep an eye out!

“DMPTool: Expert Resources & Support for Data Management Planning”. 30″x38″ poster

Posters available as:

  • PDF (cannot be customized)
  • PPTX (can be customized)