[Esip-discovery] Service description use cases

Steve Richard steve.richard at azgs.az.gov
Fri May 15 14:07:15 EDT 2015


Soren-- good points.

Since I haven't been able to participate in the scoping discussions, and couldn't find any related docs on the ESIP discovery wiki, I'd like to get a clearer idea of the use cases.

From your mail (paraphrased and embellished by me...):

* Basic documentation of a service: defining parameters and route elements (relative paths for resources at an HTTP service endpoint?). I'd add information models (RDA data types) for resources and the available representations (interchange formats).



* Enable a client application to “Try it out” by generating a correctly formatted request from the method document. (I'm guessing this means generating a web page with example requests for a human to look at and learn from.) A minimal sketch of both cases follows below.
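To make these two cases concrete, here is a minimal sketch in Python. The description dict is a framework-neutral stand-in for a Swagger/RAML-style document rather than any real schema; the endpoint, field names, and feature type are placeholders I made up.

    # Minimal sketch of the two bullets above. The description dict is a
    # hypothetical, framework-neutral stand-in for a Swagger/RAML-style
    # document; none of the field names come from an actual spec.
    from urllib.parse import urlencode

    service_description = {
        "endpoint": "https://example.org/wfs",   # placeholder endpoint
        "route": "/",                            # relative path element
        "parameters": {
            "service":  {"type": "string", "fixed": "WFS"},
            "version":  {"type": "string", "enum": ["1.1.0", "2.0.0"]},
            "request":  {"type": "string", "enum": ["GetCapabilities", "GetFeature"]},
            "typeName": {"type": "string"},      # information model: feature type offered
        },
        "representations": ["application/gml+xml", "application/json"],
    }

    def build_request(description, **values):
        """'Try it out': turn user-supplied values into a correctly formatted URL."""
        params = {}
        for name, spec in description["parameters"].items():
            value = spec.get("fixed", values.get(name))
            if value is None:
                continue
            if "enum" in spec and value not in spec["enum"]:
                raise ValueError(f"{name}={value} not in {spec['enum']}")
            params[name] = value
        return description["endpoint"] + description["route"] + "?" + urlencode(params)

    print(build_request(service_description,
                        version="2.0.0", request="GetFeature", typeName="aasg:WellLog"))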



Amplified use scenarios, based on discussions with you and Ruth:

-- 1. Discoverable (crawlable) documents describing service endpoints, to allow automated discovery and cataloging of services; ideally we would be able to catalog the data offered by the service as well.

-- 2. Generation of web pages from description documents (like the Swagger UI, e.g. http://petstore.swagger.io/) to help people figure out how to use a web service.

-- 3. Provide links in Atom documents (or other metadata interchange docs, e.g. ISO) to enable clients to drill down from search results for aggregate resources (data series, dataset) to more granular search (granule, records, features). Currently this is based on OpenSearch descriptions, but it generalizes to providing a URI template with an associated description document that specifies the parameters (data type, domains, semantics...) in the template (a sketch follows the sub-cases below). Sub-cases:

3.a. The initial user search is broken into two steps by the query processor: find datasets meeting one set of criteria, then search within those based on additional criteria. The idea would be to automate such a 'chained' search process so it's largely invisible to the user. Example: find well logs for boreholes deeper than 500 m. The first search finds a service that provides well logs and includes a property for total depth of log; the second search, within the discovered datasets, finds logs with TD > 500 m.

3.b. The chaining of the query process can't be automated, but datasets that include the information of interest for the granular search can be identified; present the user with some kind of form interface to construct the 'drill-down' queries against a discovered dataset service (related to case 2 above).
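Here is a sketch of case 3 and the chained search in 3.a, assuming a description document that pairs a URI template with parameter definitions. The endpoint, parameter names, and the in-memory 'catalog' are all placeholders, not a real service.

    # Sketch of case 3 / 3.a: a URI template plus a parameter description
    # drives a two-step 'chained' search. Endpoint, parameter names, and the
    # in-memory catalog are hypothetical placeholders.
    from urllib.parse import quote

    # Description document attached to a search-result link (cf. an OSDD):
    drilldown = {
        "template": "https://example.org/logs/search?td_min={minDepth}&format={format}",
        "parameters": {
            "minDepth": {"type": "number", "units": "m",
                         "semantics": "total depth of log"},
            "format":   {"type": "string", "enum": ["atom", "json"]},
        },
    }

    def expand(template, **values):
        """Fill a URI template; values are percent-encoded."""
        for name, value in values.items():
            template = template.replace("{" + name + "}", quote(str(value)))
        return template

    # Step 1 (dataset-level search): keep services whose description
    # advertises a total-depth parameter.
    catalog = [drilldown]   # stand-in for real dataset-level search results
    candidates = [d for d in catalog
                  if any(p.get("semantics") == "total depth of log"
                         for p in d["parameters"].values())]

    # Step 2 (granule-level search): drill down for logs with TD > 500 m.
    for d in candidates:
        print(expand(d["template"], minDepth=500, format="json"))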



Additional scenario of interest--

A hypermedia document provides a variety of affordances (links) for accessing some resource; a software application needs to parse those affordances to determine which one is appropriate for its requirements, and to automate the connection to the data source. The basic requirement to address here is a service protocol, information model, and encoding scheme that the client application 'knows'. Example: a dataset is distributed via file download in several formats (xls, csv, json), via WFS with some identified featureType(s), via WMS, via an ESRI REST service with an identified feature type, and via a GeoWS csv RESTful service. The client is designed to consume GML features of a particular type, so it needs to identify the WFS with a matching featureType and get the information necessary to connect and access the data.
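A sketch of that affordance selection, assuming the hypermedia document has already been parsed into a list of typed links; the link structure, protocol labels, and feature type below are my own placeholders rather than any particular encoding.

    # Sketch of affordance selection: the client only 'knows' WFS/GML for a
    # particular feature type, so it filters the links the hypermedia
    # document offers. Link structure and property names are hypothetical.
    distributions = [
        {"protocol": "download", "mediaType": "text/csv",
         "href": "https://example.org/data/logs.csv"},
        {"protocol": "OGC:WMS", "href": "https://example.org/wms"},
        {"protocol": "OGC:WFS", "href": "https://example.org/wfs",
         "featureType": "aasg:WellLog", "mediaType": "application/gml+xml"},
        {"protocol": "ESRI:REST",
         "href": "https://example.org/arcgis/rest/services/logs"},
    ]

    def select_affordance(links, protocol, feature_type):
        """Return the first link matching the protocol and featureType the client consumes."""
        for link in links:
            if link.get("protocol") == protocol and link.get("featureType") == feature_type:
                return link
        return None

    wfs = select_affordance(distributions, "OGC:WFS", "aasg:WellLog")
    if wfs:
        # Enough information to connect: endpoint plus featureType for GetFeature.
        print(wfs["href"], wfs["featureType"])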





Stephen Richard

Chief, Geoinformatics Section

Arizona Geological Survey

416 E. Congress #100, Tucson 85745

steve.richard at azgs.az.gov

520-209-4127

________________________________
From: Esip-discovery <esip-discovery-bounces at lists.esipfed.org> on behalf of Soren Scott via Esip-discovery <esip-discovery at lists.esipfed.org>
Sent: Wednesday, May 13, 2015 10:23 AM
To: <esip-discovery at lists.esipfed.org>
Subject: Re: [Esip-discovery] Notification of next ESIP Discovery Telecon 1:00pm PST / 4:00pm EST today

Doug and Chris (and Steve),

It might be useful to clarify the use cases for Swagger, RAML, or any of those RESTful API documentation frameworks and their alignment with OGC and OpenSearch at least. From our scoping round, there are two main cases: basic documentation (defining parameters, route elements), and then the promise (in Swagger) of being able to “Try it out”, i.e. generating a correctly formatted request from the method document.

The first is just data entry, really. The second is where the dependencies among the OGC query parameters *and* their values, or the OpenSearch parameter differences between datasets, cause problems for any of these frameworks. For example, we’re describing a WMS, so we enter WMS for the service and pick a supported version. There’s no place in Swagger to then say that if you have selected SERVICE value WMS and VERSION value 1.3.0, the query parameter key is CRS instead of the SRS used in 1.1.1. Because it doesn’t grok those route or parameter dependencies, the actual generation of a correctly structured URL is back on the dev building the client or on the user, not on the more generic, actionable, self-describing service. This is where having a solid Swagger and/or RAML OGC extension would be fantastic from a dev point of view. But it’s a modification of the spec, the interface, and a conceptual understanding that the Swagger group, as of last fall, felt was an edge case they wouldn’t support (not their idea of RESTful).
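To make the dependency concrete, this is roughly the conditional that ends up hard-coded in the client today because the description document can't express it. The CRS/SRS switch (and the EPSG:4326 axis-order flip) comes from the WMS specs; the layer name and bounding box are placeholders.

    # Sketch of the WMS parameter dependency a flat parameter list can't
    # express: the coordinate-reference key (and BBOX axis order for
    # EPSG:4326) depends on VERSION. Layer name and bbox are placeholders.
    from urllib.parse import urlencode

    def wms_getmap_params(version, crs_code, layers, bbox, width, height,
                          fmt="image/png"):
        """Build GetMap query parameters; WMS 1.3.0 uses CRS, 1.1.1 uses SRS."""
        crs_key = "CRS" if version == "1.3.0" else "SRS"
        return urlencode({
            "SERVICE": "WMS", "VERSION": version, "REQUEST": "GetMap",
            crs_key: crs_code, "LAYERS": layers, "BBOX": bbox,
            "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
        })

    # 1.3.0 + EPSG:4326 expects lat,lon axis order in BBOX; 1.1.1 expects lon,lat.
    print(wms_getmap_params("1.3.0", "EPSG:4326", "geology", "31,-115,37,-109", 512, 512))
    print(wms_getmap_params("1.1.1", "EPSG:4326", "geology", "-115,31,-109,37", 512, 512))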

For OpenSearch, it’s a question of not having a way to access the enumerations for a parameter, i.e. I can’t get a list of dataset names without running the dataset search and understanding how to parse the response: pull the granule links and, hopefully, get a new OSDD back, or something. We have some numbers about OS services supporting that second-level OSDD access. Nothing about differing parameter requirements, though.
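For reference, that drill-down is typically done by parsing the dataset-level Atom response for rel="search" links of type application/opensearchdescription+xml; a rough sketch follows (the search URL is a placeholder, error handling omitted).

    # Rough sketch of the two-step OpenSearch drill-down: run the dataset-level
    # search, then pull the second-level OSDD links out of the Atom entries.
    # The search URL is a placeholder and error handling is omitted.
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"
    OSDD_TYPE = "application/opensearchdescription+xml"

    def granule_osdd_links(dataset_search_url):
        """Return per-dataset OSDD URLs advertised in a dataset-level Atom response."""
        with urllib.request.urlopen(dataset_search_url) as resp:
            feed = ET.parse(resp).getroot()
        links = []
        for entry in feed.findall(ATOM + "entry"):
            for link in entry.findall(ATOM + "link"):
                if link.get("rel") == "search" and link.get("type") == OSDD_TYPE:
                    links.append(link.get("href"))
        return links

    # Example (placeholder URL):
    # print(granule_osdd_links("https://example.org/opensearch/datasets?q=well+logs"))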

I hear whispers of broader implementation of Swagger at different repositories, but not of anyone handling the dependency issues. And I am very much in favor of someone tackling the first case for things like OGC: just having some structured document that could be used to generate the docs for Swagger or RAML or 19119 or whatever would save so much time. But the second case is more what we’re all after.

My two cents,
Soren



