OPeNDAP MN Implementation Notes
===============================

Conference call 9/19/2013
--------------------------------------

Participants: Matt Jones, Dave Vieglais, Jing Tao, Laura Moyers, James Gallagher, Bruce Wilson, Chris Jones, Dave Fulker

James:
Challenges:
What DataONE Tiers would we target?
Which servers would we target?
Which nodes would we target?
Who would work on the project?
How much work is required?  When would we schedule it?

What might we do? Two potential collaborations:
    1) Modify the DAP engine to implement the DataONE API
        - simple REST-based API
        - identifying a 'dataset' in an OPeNDAP server may be a challenge due to the hierarchy
        - Four Potential Tiers to implement
        - Tier 1 (read-only, public access)
            production API docs: http://releases.dataone.org/online/api-documentation-v1.0.0/apis/MN_APIs.html
            trunk / latest docs: http://mule1.dataone.org/ArchitectureDocs-current/apis/MN_APIs.html
            Example calls to implement (see the sketch after this list):
            /object (listObjects())
            /meta (getSystemMetadata())
            /checksum (getChecksum())
            etc.
        - Tier 2 (authenticated access) - X.509 certificate-based
        - Tier 3 (CRUD write access)
        - Tier 4 (Replication)
    2) Use the DAP data model to manipulate data within DataONE
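
To make option 1 concrete, here is a minimal sketch of what a Tier 1 facade in front of an existing DAP server might look like, covering get() and getChecksum(). It is a sketch under stated assumptions, not a design: Flask and requests as the toolkit, the DAP_SERVER URL, the 1:1 PID-to-dataset-path mapping, and the simplified checksum XML are all illustrative, and a real Tier 1 MN would also need listObjects(), getSystemMetadata(), and the official DataONE schema types.

```
# Hypothetical Tier 1 facade in front of a DAP server (illustration only).
import hashlib

import requests
from flask import Flask, Response, abort

app = Flask(__name__)
DAP_SERVER = "http://example.org/opendap"  # assumed upstream DAP server

def pid_to_dap_url(pid):
    # Assumption: a PID maps 1:1 to a dataset path on the DAP server.
    # How to identify a 'dataset' in the server hierarchy is the open
    # question noted above.
    return "%s/%s" % (DAP_SERVER, pid)

@app.route("/mn/v1/object/<path:pid>")
def get_object(pid):
    # MNRead.get(): stream back the bytes of the object named by pid.
    upstream = requests.get(pid_to_dap_url(pid), stream=True)
    if upstream.status_code != 200:
        abort(404)
    return Response(upstream.iter_content(8192),
                    content_type=upstream.headers.get(
                        "Content-Type", "application/octet-stream"))

@app.route("/mn/v1/checksum/<path:pid>")
def get_checksum(pid):
    # MNRead.getChecksum(): hash the object bytes; the XML below is a
    # simplified stand-in for the DataONE Checksum type.
    upstream = requests.get(pid_to_dap_url(pid), stream=True)
    if upstream.status_code != 200:
        abort(404)
    sha1 = hashlib.sha1()
    for chunk in upstream.iter_content(8192):
        sha1.update(chunk)
    body = ('<?xml version="1.0"?>'
            '<checksum algorithm="SHA-1">%s</checksum>' % sha1.hexdigest())
    return Response(body, content_type="text/xml")
```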

Advanced issues: Data subsetting and aggregation discussion

Action items
* Dave F: Contact NODC (specifically including Ken Casey) about interest in becoming an MN
* Dave F: Contact JGOFS about interest in becoming an MN
* Matt: Set up next call

Next steps: decide who will do the work and how much investment is needed
Next meeting: Thursday October 3, 2013

Discussion 2013-09-25
=====================

- Resources are available for developing a software stack to work with OPeNDAP
- Need to develop a work plan that outlines the activity and specifies significant development milestones
- First step should be the design. Can this be done in such a way that it works with any DAP server, perhaps as a filter that sits on top of the server, accepting the DAP output and translating it into DataONE speak? In other words, does the DAP protocol provide all of the information that's needed? If not, getting into the actual server itself (whichever ones) is a much more difficult proposition. (A rough sketch of this filter idea, combined with the identifier question below, follows this list.)
- Work will be put out to bid by UNM, but we can select the awardee (within reason).
- Who can do the work?
  - The work would go much more efficiently if the people managing it are familiar with both DAP and DataONE, for example, by funding a month or so of someone's time at OPeNDAP to work with the implementers.
- Subsetting raises an identifier problem. Consider, for example, a computational result file where a complex subsetting operation is needed to get the data of interest as a data set. A DAP URL with all of its parameters is a unique identifier, so each different result is a different URL and therefore a different identifier, which means handling a potentially infinite number of datasets. DataONE assumes that there are discrete datasets, each with their own identifier, and that each data set is then registered into DataONE. This ties into the question of how DataONE handles service endpoints, which is still under discussion. (See the sketch after this list.)
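
To give the filter design and the identifier question above something concrete to react to, here is one purely illustrative sketch: take a DAP request URL, constraint expression included, as the canonical form of a dataset; derive a fixed-length PID from it by hashing; and fill in a skeleton of the DataONE-speak a filter would emit. The urn:dap: prefix, the function names, and the idea of minting PIDs this way are assumptions for discussion, not an agreed design, and hashing every constraint still leaves the "potentially infinite datasets" problem open.

```
# Illustration only: one way a DAP-to-DataONE "filter" might canonicalize
# a request and mint an identifier. Not an agreed design.
import hashlib
from urllib.parse import quote

def canonical_dap_url(server, dataset, constraint=""):
    # Every distinct constraint expression yields a distinct URL, and so
    # a distinct identifier -- the open issue discussed above.
    return "%s/%s.dods?%s" % (server.rstrip("/"), dataset, quote(constraint))

def subset_pid(url):
    # Hash the canonical URL so subset identifiers have a fixed length.
    return "urn:dap:%s" % hashlib.sha256(url.encode("utf-8")).hexdigest()

def system_metadata_stub(url):
    # Skeleton of the DataONE-speak a filter would have to emit; real
    # SystemMetadata has many more required fields (size, checksum,
    # access policy, ...), which would come from the DAP responses.
    return {"identifier": subset_pid(url),
            "formatId": "application/octet-stream",  # placeholder
            "obsoletedBy": None}

# Two different subsets of the same file get two different identifiers:
u1 = canonical_dap_url("http://example.org/opendap", "data/sst.nc", "sst[0:10]")
u2 = canonical_dap_url("http://example.org/opendap", "data/sst.nc", "sst[0:20]")
assert subset_pid(u1) != subset_pid(u2)
```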
  
- Next Steps: