JSON representation should not have a map at its root #22
Editors (1st Sept) decided to keep the key but make it the registered extension name. Given the use of the registered extension name, the filename is derived from it rather than acting as the primary key.
What is the reasoning here? This is not a typical representation. For example, an inventory object is not nested under a, say,
My assumption here, please correct me if I'm wrong, is that part of the reasoning is to support some form of extension chaining. If so, I think it would be good to define what extension chaining means and why it is helpful to specify JSON files in this fashion. As for my own understanding, I do not know what extension chaining means outside of the context of a storage layout extension, and even there I suspect our understandings may differ. To me, you could conceivably view a storage layout as a series of generic predicates that are chained together to map an OCFL object id to a storage path. For example, there might be one extension that strips a prefix from an id, another that splits an id into tuples, one that computes a digest, etc. A layout like PR #12 might be defined like:
A layout like PR #16 might be defined like:
A layout like PR #19 is not so easily defined; I'll come back to it in a minute.

The relationship of the extensions is important, even in these simple examples. If multiple extensions were defined within a single JSON file in the current map structure, I would have no idea how to interpret it. Instead, if there is a desire to chain extensions, as I have described, then I think a storage layout extension chaining extension should be written. This extension would describe how extensions are chained together, as well as the mechanism for defining which extensions are chained and in what order. For example, the extension chaining extension's JSON file could be as simple as:

```json
{
  "extensions": [
    "0015-strip-id-layout-extension",
    "0010-n-tuple-layout-extension"
  ]
}
```

This identifies which extensions are involved in the chain and how they are related. The chained extensions themselves would then each have their own JSON file that contains their own parameters.

Back to PR #19: this is slightly tangential, but I want to make it clear that chaining is not necessarily straightforward for all use cases. The crux of PR #19 is that it uses a hash of the OCFL object id to create an n-tuple tree and then uses an encoded version of the original object id as the encapsulation directory. In the earlier examples, the extensions were all taking a string input and returning a string output, which was then either fed into the next extension or used as the storage path, if it was the final extension in the chain. For PR #19, you could imagine using the generic hash extension and the generic n-tuple extension, but the problem is creating the encapsulation directory. It is not possible to create the encapsulation directory based solely on the output of the previous extension, because it needs to know what the original object id was. This is not an insurmountable problem. You could, for example, say that the input to each extension is the original object id and the output of the previous extension. My point is just that extension chaining is not necessarily simple and that, if it's done at all, it should be done from within the context of an extension that defines what the behaviors are.
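The chaining model described above can be sketched as follows. This is a minimal illustration only, not part of any PR: the functions are hypothetical stand-ins for the generic extensions mentioned, and it assumes the convention suggested above, where each extension receives both the original object id and the previous extension's output (which is what lets the final step build an encapsulation directory from the original id).

```python
import hashlib

# Hypothetical chained layout extensions. Each takes the original
# object id plus the previous step's output and returns a string.

def strip_prefix(object_id, previous):
    # Strip a URI-like prefix, e.g. "ark:123" -> "123"
    return previous.rsplit(":", 1)[-1]

def hash_id(object_id, previous):
    # Replace the current value with its hex-encoded SHA-256 digest
    return hashlib.sha256(previous.encode("utf-8")).hexdigest()

def n_tuple(object_id, previous, size=3, depth=2):
    # Build a tree of `depth` tuples of `size` characters each
    tuples = [previous[i:i + size] for i in range(0, size * depth, size)]
    return "/".join(tuples)

def encapsulate(object_id, previous):
    # Needs the *original* object id, not just the previous output
    encoded = object_id.replace(":", "%3a")
    return previous + "/" + encoded

def apply_chain(object_id, chain):
    out = object_id
    for extension in chain:
        out = extension(object_id, out)
    return out

path = apply_chain("ark:123", [strip_prefix, hash_id, n_tuple, encapsulate])
print(path)  # tuple tree from the hash, encapsulated under the encoded id
```

The design point is the `(object_id, previous)` signature: simple string-to-string extensions ignore `object_id`, while extensions like the encapsulation step can reach back to the original id.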
2020-12-01: Editors agree this is fixed by #29. The root is now an object, not a map.
The following is the current example of a parameter json file:
The map keyed on the file name is unneeded; it should just be the parameters object. For example:
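To make the proposal concrete (the parameter names below are purely illustrative, since the original example is not reproduced here): the current form wraps the parameters in a map keyed on the extension/file name,

```json
{
  "0010-n-tuple-layout-extension": {
    "tupleSize": 3,
    "numberOfTuples": 2
  }
}
```

whereas the proposed form is just the parameters object itself at the root:

```json
{
  "tupleSize": 3,
  "numberOfTuples": 2
}
```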