Automated API traversal
Armed with a thesaurus and an almanac of system functionality, we can write robots that program themselves.
In RESTful HATEOAS design, a web application's responses include links to the web resources related to the current request, and those resources can themselves be introspected through the API.
For example, a restaurant resource links to a booking resource, because you can book a restaurant.
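As a sketch, a HAL-style response for such a restaurant resource might look like this (the field names are illustrative, not taken from any particular API):

```json
{
  "id": 42,
  "name": "Chez Example",
  "_links": {
    "self":    { "href": "/restaurants/42" },
    "booking": { "href": "/restaurants/42/bookings", "method": "POST" },
    "menu":    { "href": "/restaurants/42/menu" }
  }
}
```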
A system should publish an endpoint that serves as an almanac of its functionality: every endpoint it has, a thesaurus of keywords used to access each endpoint, and a thesaurus of the operations it supports.
A system should also publish a series of workflows that it expects people to use.
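A hypothetical /almanac endpoint combining all three, the endpoint list, the thesauri, and the expected workflows, might look something like this (the shape is my own sketch, not an existing standard):

```json
{
  "endpoints": {
    "/tweets": {
      "keywords":   ["tweet", "post", "status", "message"],
      "operations": {
        "GET":  ["list", "listAll", "getAll"],
        "POST": ["create", "publish", "send"]
      }
    }
  },
  "workflows": [
    { "name": "export tweets", "steps": ["GET /tweets", "save each item to a file"] }
  ]
}
```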
This way we can drive a system with fuzzy logic from a rough description of what to do, matched against the system's thesaurus and almanac.
"Export all my tweets to file"
"All" has a thesaurus entry covering "list", "listAll", and "getAll".
So the client knows it has to loop over the tweet collection and save all fields of every item to a file.
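A minimal sketch of that resolution step in Python, assuming the almanac shape above and a hypothetical base URL; the matching here is naive keyword overlap, a stand-in for real fuzzy matching:

```python
import json
import requests

BASE = "https://api.example.com"  # hypothetical service

# "all" in a command resolves to any list-style operation name.
LIST_OPS = {"list", "listAll", "getAll"}

def norm(word: str) -> str:
    return word.lower().rstrip("s")  # naive singularization for matching

def export_all(command: str, outfile: str) -> None:
    almanac = requests.get(f"{BASE}/almanac").json()
    words = {norm(w) for w in command.split()}
    for path, desc in almanac["endpoints"].items():
        keywords = {norm(k) for k in desc["keywords"]}
        if not words & keywords:
            continue  # this endpoint's thesaurus doesn't mention the command's nouns
        if "all" in words and set(desc["operations"].get("GET", [])) & LIST_OPS:
            items = requests.get(f"{BASE}{path}").json()
            with open(outfile, "w") as f:
                json.dump(items, f, indent=2)  # save all fields of every item
            return

export_all("Export all my tweets to file", "tweets.json")
```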
But this is how APIs work already: that almanac is called "documentation", and it is often machine-readable already (see the core protocols and OpenAPI).
Well, they have a limitation: they do not return the vocabularies associated with their object types, and MIME types (Content-Type headers) are not sufficiently informative. They could, if those APIs returned JSON-LD responses; alternatively, simply decorating their responses with metaformat's polycontext metasymbol would be enough to bind them to schemas defined in concepts that are also linked via the polycontext metasymbol, and then everyone could reason about everything while retrieving data about everything, traversing APIs.
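For illustration, here is the restaurant resource from above decorated as JSON-LD, binding its fields to a shared vocabulary (schema.org) so that a traversing client can reason about the types it encounters:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "@id": "https://api.example.com/restaurants/42",
  "name": "Chez Example",
  "potentialAction": {
    "@type": "ReserveAction",
    "target": "https://api.example.com/restaurants/42/bookings"
  }
}
```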
There is a bunch of existing solutions that require human work and do already work (they are called expert systems). A lot of human work would likewise be needed for this wishful thinking to get realized, because there exists a lot of not very well documented software, where software types are only weakly linked to ontological (linguistic) types.
Creating that linguistic connection, so that anyone could auto-generate queries against numerous databases just by expressing what they want to know, will require training an AI system on human examples of that mapping. There are a couple of layers to this.
All of that can be beautifully linked up by defining a single good polycontext metasymbol (I think we should come up with and evolve a polycontext metasymbol for humanity, in a similar way that we evolve protocols, through RFCs). That would eventually give us the desired property: being able to reason this way, truly transcendentally, across all protocols and all information systems.
I'm familiar with expert systems such as Drools, which uses the Rete algorithm, which is really clever. And I know about OpenAPI.
But they still have to be explicitly coded.
But what you propose (the thesaurus and an almanac of system functionality) would also have to be explicitly coded, wouldn't it?
You say: "A system should publish an endpoint that is an almanac of system functionality, that is, every endpoint it has, a thesaurus of keywords used to access that endpoint and a thesaurus of operations that it supports."
You need to code those systems to publish their functionality, so you'd have to modify every system that has functionality so that it can publish it. How would your approach avoid this need for explicit coding to modify those systems to publish descriptions of themselves to the "thesaurus and an almanac"?
I think there should be a version of nnn for REST APIs: expose the REST API as a filesystem, and then extending the nnn util to handle .json files would do it. However, I found that FUSE is not very performant, and Linus Torvalds famously says that FUSE filesystems are nothing more than toys...
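For what it's worth, here is a read-only sketch of that idea using the third-party fusepy library; the base URL and endpoint names are hypothetical, and caching and error handling are omitted:

```python
import errno
import json
import stat

import requests
from fuse import FUSE, FuseOSError, Operations  # pip install fusepy

BASE = "https://api.example.com"      # hypothetical REST API
ENDPOINTS = ["tweets", "followers"]   # hypothetical collections to expose

class RestFS(Operations):
    """Expose each GET endpoint as a read-only <name>.json file."""

    def _fetch(self, name):
        return json.dumps(requests.get(f"{BASE}/{name}").json(), indent=2).encode()

    def readdir(self, path, fh):
        return [".", ".."] + [f"{e}.json" for e in ENDPOINTS]

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        name = path.lstrip("/").removesuffix(".json")
        if name not in ENDPOINTS:
            raise FuseOSError(errno.ENOENT)
        # Fetching on every stat is exactly the slowness complained about above;
        # a real version would cache.
        return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                "st_size": len(self._fetch(name))}

    def read(self, path, size, offset, fh):
        name = path.lstrip("/").removesuffix(".json")
        return self._fetch(name)[offset:offset + size]

if __name__ == "__main__":
    FUSE(RestFS(), "/mnt/api", nothreads=True, foreground=True)
```

With this mounted, nnn (or any file manager) could browse /mnt/api and open tweets.json with its usual .json handling.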