DUG Planning Call
Tuesday June 21st, 10 am PT / 11 am MT / 12 noon CT / 1 pm ET
Conference Dial-in Number: (712) 775-7100
Participant Access Code: 128383#

Attending: Amber Budden, Suzie Allard, Stephen Abrams, Bill Michener, Trisha Cruse, Bob Sandusky, Rebecca Koskela, Bob Cook
Apologies: Matt Jones, Bruce Wilson, Richard Huffine, Viv Hutchison, Mike Frame

AGENDA
1) A discussion of the draft DUG agenda and expectations (see attached)
2) Meeting logistics
3) AOB

1) DRAFT DUG AGENDA

Monday July 11th

08.30 - 10.00 Session 1
Welcome and Introductions: Robert Sandusky, Richard Huffine
Meeting Objectives: Amber Budden
Introduction to DataONE: Bill Michener (30-45 mins)
Status of DataONE CI: Matt Jones (30 minutes)
How much material will Matt cover in terms of the ITK? He needs to communicate this with Bruce and Bob. There is also potential overlap with Bill's overview; Matt could focus on demos instead, since Bill will cover the CI.
General comment: make sure that our tone is not focused on MNs.

10.20 - 12.00 Session 2: Breakouts
Breakout 1: DataONE Investigator Toolkit: what it is, how it will be used, proposed future developments, community feedback (prioritization of tools). Bruce Wilson, Bob Cook
Share information on the release / development schedule (based on discussions occurring within the CCIT) and ask the community for feedback. Look at the toolkit in the context of the Data Life Cycle (see joint UA/SC WG materials).
Breakout 2: DataONE Education Resources: current and future materials, mechanisms for training, community feedback. Viv Hutchison, Amber Budden
Cover the DataONEpedia (best practices, software tools), the DMP Tool, online teaching modules, workshops, etc. Amber to invite Andrew Sallans, who would be able to help in this session.
Breakout 3: Citation and Preservation within the DataONE framework: Matt Jones, Stephen Abrams
Include some of the results from the preservation WG from last December. On the citation side, can provide an update on the initiative from the DataCite consortium (EasyID).
The SC have a draft white paper, and Heather has been working on related materials. Feedback: validation of the DataONE approach. Stephen to talk to John Kunze and Matt directly.

13.30 - 15.00 Session 3: Breakouts
Repeat of Session 2.

15.20 – 16.30 Session 4: Report Back from Breakouts

16.30 – 17.30 Session 5: Process of becoming / adding a Member Node: Bob Cook
Provide a summary of the materials developed / presented at the last meeting, plus feedback from the sub-group looking at the prioritization / selection of MNs. Mike is probably unavailable for discussion; Bob to talk to Suzie and Trisha about content? Material from the sub-group on prioritization may not be ready - the sub-group stalled briefly.

Tuesday July 12th

08.00 – 08.30 Session 1
Objectives and Logistics: Amber Budden

08.30 – 09.45 Session 2
Member feedback on DataONE marketing materials: Trisha Cruse, Amber Budden
Present the marketing plan: the value proposition for Member Nodes, marketing language for different stakeholders, and a brainstorm session on where to engage with potential MNs for the delivery of messages. Ask for feedback on the value of building out the DUG, e.g. what is the value of being a DUG member if not a Member Node? Some hesitancy if we don't yet have a strategy for bringing MNs on board. Feedback on ESA mock-ups. Trisha unable to attend; Bill to moderate in her place. Bill and Trisha to talk.

10.15 – 11.30 Session 3
Member feedback on proposed mechanism for community input following public release: Suzie Allard, Bob Sandusky
This didn't get addressed during the UA/SC WG. We could use the session to find out how users feel about feedback mechanisms, and to determine the characteristics that make such mechanisms feel responsive and worth using. Alternatively, we could use the session to get feedback on the demos etc. that Matt presented. Frame the session more generally in terms of feedback. Bob, Bob, Suzie and Amber to work out the details via email.

11.30 – 13.00 DUG Business Meeting: Bob Sandusky, Richard Huffine
Report from the Chairs
Review of Charter?
Chair nominations and voting
Calendaring the next meeting - always meet alongside ESIP? Look at the venue for the upcoming ESIP meeting. Migrating from one meeting to the next wouldn't be sustainable.
Goals for expanding the membership - get attendees to help nominate future members.

From the Member Node working group

Proposed Rubric for Determining Member Node Candidate Priority

Assumption
D1 engagement with MN candidates needs to be prioritized, since leadership, administrative, and technical resources will be required to engage each MN, especially for the first set of member nodes beyond the original three. Each candidate MN needs to meet minimum requirements.

Mechanism
A very simple zero-one evaluation of desired dimensions could provide a numerical evaluation of MN characteristics, which could then be used to rank candidate MNs against each other. That evaluation could be performed collaboratively by a committee, or it could be done by the individuals on a committee and then averaged. Such an evaluation mechanism could also be used to provide feedback about the strong or weak points of a MN's candidacy without necessarily exposing the actual priority rank of the candidate.

Minimum Requirements
These are the absolute minimum requirements of any candidate MN. A MN cannot be considered a candidate without meeting ALL of these criteria.
* The metadata format used by the candidate MN is supported by D1, or D1 agrees to begin supporting the metadata format in question.
* At least some data in the collection is public or can be shared upon request.
* A basic level of physical and cyber-security is in place.
* The candidate MN intends to implement at least the Tier 1 D1 MN API.

Priority Evaluation Dimensions
These dimensions assume a candidate MN already meets the minimal, required criteria. These dimensions serve to prioritize engagement of a candidate MN.

Data
✓ Quality assurance. Does the candidate MN have clear, effective quality assurance standards for both data and metadata?
✓ Data sharing. Is a large proportion of the data in the collection public or available to researchers upon request?
✓ Scientific value. Is the data in the collection unique in the broader community? Does it fill gaps in the content already available through D1? In combination with existing or near-term D1 collections, does it enable new science?
✓ Extent of collection. Is the collection exceptionally strong in breadth, depth, or both?
✓ Risk. Is the collection at risk?

Strategic
✓ Community. Is the size or visibility of the community represented by the candidate MN particularly important? Does the community support the candidate MN?
✓ Partnership. Would admitting the candidate as a MN help form a strategic partnership beneficial to D1?
✓ Funding. Would admitting the candidate as a MN help make new streams of funding available to D1? Is the candidate MN's funding source different from other MNs in D1?
✓ Technical expertise. Does the candidate MN bring a high level of technical expertise, allowing it to contribute to broader D1 technical efforts?
✓ Data management expertise. Does the candidate MN bring a high level of data management expertise, allowing it to contribute to broader D1 efforts?
✓ Synergy. Does the candidate MN bring a high potential for synergy (technical, administrative, scientific) with D1?

Diversity
✓ Geographic. Would admitting the candidate as a MN add a new state, region, country, or continent to the D1 network?
✓ Underrepresented groups. Is the candidate MN run by an underrepresented group, or does it primarily serve an underrepresented group?
✓ Linguistic or cultural diversity. Would admitting the candidate as a MN increase the linguistic or cultural diversity of the D1 network?
✓ Institution type. Would admitting the candidate as a MN increase the institutional diversity of the D1 network?

Leadership/Management
✓ Administrative. Is the candidate MN an effectively managed and stable organization?
Is there an institutional commitment to the D1 relationship?

Technical
✓ Human resources. Does the candidate MN have the necessary technical skills to minimize demands on CCIT technical support?
✓ Technical resources. Does the candidate MN offer technical resources (e.g., storage) BEYOND those it needs to support its own D1 deployment?
✓ Technical compatibility. Is the existing technical infrastructure of the candidate MN largely compatible with the D1 infrastructure, minimizing the complexity of deployment? Is the data model relatively compatible?
✓ Technical stability. Does the candidate MN have a history of satisfactory availability?
✓ Tier 2 implementation. The candidate MN intends to implement the Tier 2 MN APIs.
✓ Tier 3 implementation. The candidate MN intends to implement the Tier 3 MN APIs.
✓ Tier 4 implementation. The candidate MN intends to implement the Tier 4 MN APIs.
✓ Tier 5 implementation. The candidate MN intends to implement the Tier 5 MN APIs.

Commentary
The downside of this method for determining prioritization is that each item counts equally. If desired, larger weights could be given to some items, providing a weighted score. An alternative would be to focus on the five headings (Data, Strategic, Diversity, Leadership/Management, and Technical), using the individual items under each heading simply to determine a 0/1 score for the heading as a whole. Then the overall numerical evaluation would range from 0 to 5 instead of 0 to twenty-something.
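The flat 0/1 scoring, committee averaging, and heading-level alternative described above could be sketched as follows. This is only an illustration: the item keys abbreviate the rubric dimensions, and the rule that a heading scores 1 when at least half of its items are met is an assumption, since the proposal does not specify how the per-heading 0/1 would be derived.

```python
# Sketch of the proposed 0/1 rubric scoring (illustrative only; item
# names abbreviate the rubric, and the heading threshold is an assumption).
from statistics import mean

# Evaluation dimensions grouped by rubric heading (25 items total).
RUBRIC = {
    "data": ["quality_assurance", "data_sharing", "scientific_value",
             "extent_of_collection", "risk"],
    "strategic": ["community", "partnership", "funding",
                  "technical_expertise", "data_mgmt_expertise", "synergy"],
    "diversity": ["geographic", "underrepresented_groups",
                  "linguistic_cultural", "institution_type"],
    "leadership": ["administrative", "institutional_commitment"],
    "technical": ["human_resources", "technical_resources", "compatibility",
                  "stability", "tier2", "tier3", "tier4", "tier5"],
}

def flat_score(marks):
    """Sum of all 0/1 marks: the '0 to twenty-something' evaluation."""
    return sum(marks[item] for items in RUBRIC.values() for item in items)

def heading_score(marks, threshold=0.5):
    """Alternative 0-5 evaluation: each heading contributes 0 or 1.
    Assumption: a heading scores 1 when at least `threshold` of its
    items are met (the proposal leaves this rule unspecified)."""
    return sum(1 for items in RUBRIC.values()
               if mean(marks[item] for item in items) >= threshold)

def committee_score(all_marks, scorer=flat_score):
    """Average the scores produced by individual committee members."""
    return mean(scorer(marks) for marks in all_marks)
```

For example, a candidate marked 1 on every item scores 25 flat (or 5 by heading), and averaging two reviewers' flat scores of 25 and 0 gives 12.5; the per-item marks could also be returned to the candidate as feedback without revealing its rank.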