# Content Directory Encoding Discussion

## Author

Bedeho

## Participants

- Jens
- Alex
- Bedeho

## Time and place

4th April 2019, 12am GMT+1

## Topics

How content should be represented in the content directory is not specified in detail in the whitepaper, only sketched out conceptually. Alex and Jens had proposed a general data model they wanted to review, referred to as the "JSON approach". This proposal was discussed and compared to another proposal that Bedeho already had in mind, referred to as the "statically typed approach".

### The "JSON approach"

1. Each content item is not classified by type (e.g. video, music, episodic, film) explicitly, but rather contains raw JSON of relevant properties, along with a reference to a JSON schema index kept on chain. This reference is a sort of implicit type definition. The only other runtime field would be any relevant set of data item ids; the rationale for lifting them out of the JSON was to let the runtime enforce that the ids were in fact in the data directory (see the sketch after this list).
2. Adding a new content type, or _altering an existing content type schema_, amounts to adding a new schema to the index and updating user-facing software to read from and write to the content directory with an awareness of what the new schema means. New content items would be able to reference the new schema, but would not have to in principle.
3. Importantly, the runtime would in no way validate whether the JSON encoding of some added or edited content matched the given schema, or was valid JSON at all; this would be up to the client software to do.
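To make the proposal concrete, here is a minimal, hypothetical Rust sketch of such an entry and its (intentionally thin) runtime validation. All names (`ContentEntry`, `SchemaId`, `validate_entry`, etc.) are illustrative, not actual Athens runtime types.

```rust
use std::collections::HashSet;

/// Index into the on-chain schema registry; serves as an implicit type.
type SchemaId = u64;

/// Identifier of an object in the data directory.
type DataObjectId = u64;

/// A content item as the runtime would store it under this proposal.
struct ContentEntry {
    /// Reference to a JSON schema kept in the on-chain index (point 1).
    schema: SchemaId,
    /// Raw JSON properties; never validated by the runtime (point 3).
    json: Vec<u8>,
    /// Lifted out of the JSON so the runtime can check that each id
    /// actually exists in the data directory (point 1).
    data_object_ids: Vec<DataObjectId>,
}

/// The only integrity checks the runtime performs on add/edit:
/// the schema exists, and every referenced data object is known.
fn validate_entry(
    entry: &ContentEntry,
    schema_index: &[Vec<u8>],
    data_directory: &HashSet<DataObjectId>,
) -> Result<(), &'static str> {
    if entry.schema as usize >= schema_index.len() {
        return Err("unknown schema id");
    }
    if !entry.data_object_ids.iter().all(|id| data_directory.contains(id)) {
        return Err("data object not found in data directory");
    }
    // Deliberately no JSON or schema validation here: left to clients.
    Ok(())
}

fn main() {
    // One schema in the on-chain index; its id (0) is the implicit type.
    let schema_index = vec![br#"{"title":"video-schema-v1"}"#.to_vec()];
    let data_directory: HashSet<DataObjectId> = [1, 2].into_iter().collect();

    let entry = ContentEntry {
        schema: 0,
        json: br#"{"title":"My first video"}"#.to_vec(),
        data_object_ids: vec![1],
    };

    assert!(validate_entry(&entry, &schema_index, &data_directory).is_ok());
    println!("accepted; json left unvalidated: {} bytes", entry.json.len());
}
```

Note how the schema reference and data object ids are the only fields the runtime understands; everything else is opaque bytes that clients interpret against the referenced schema.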
### The "statically typed" approach

Fully specify, as in a relational database context, all content types in the runtime, and have the runtime enforce the integrity of any mutation of the content directory (add/edit). When the format for some content type changed, this would require a migration, and when a new type was added, this would require a runtime upgrade. Clients would need to stay up to date with the latest content system in order to work correctly.

### Discussion

Jens and Alex were concerned that doing migrations on schema modifications for a given type was too costly to allow frequent changes to existing schemas. Adding a new content type would also be harder, as it would require a runtime upgrade, but this was less of an issue, since it would happen less frequently. Alex also brought up the issue that doing migrations on schema changes would break third-party user interfaces that were not actively in sync with the platform dynamics.

Bedeho was concerned about the runtime state containing invalid content (e.g. a podcast containing a podcast as the cover image data item) or deprecated content (e.g. having N different, now irrelevant, encodings of what a podcast actually is), and argued that we would need to find a way to do frequent upgrades and migrations anyway.

## Conclusions

We agreed to use the "JSON approach" for Athens.

## Bedeho post-mortem opinion

1. I think that we must distinguish between the specific implementation of the "JSON approach" now used and the approach in general. As was acknowledged by Jens, there will probably be a number of other properties that it could make sense to lift out of the JSON for different types, e.g. category information. The current implementation blocks this, or at least requires that all types have the same category information rules. A halfway house would be to have some typing for different types, but still make use of JSON payloads in these types (a hypothetical sketch of this is appended at the end of these notes).
2. **The issue of migrations imposing design constraints is a fundamental one, and we need more information about these constraints before we start basing an increasing number of decisions upon them. I therefore suggest we make it a point to try doing a migration as an educational exercise for the network after Athens, in order to learn more about it, and in particular how to do ATP (see whitepaper) migrations that would be free of time constraints.**
3. The benefit of supporting low-investment/effort third-party user interfaces is outweighed by the cost of having to move slowly and becoming a least common denominator over time. Excellent user experiences are of much higher value. Although not a priority, there is actually a way to incentivise a diversity of user interfaces, which is briefly touched upon in the whitepaper's discussion section under the bullet point **Screening and suspension and web 2.0 assets**:

   _...This may coincidentally ameliorate one of the other primary limitations of the platform, due to the state of current web/internet* infrastructure, which is the inability to properly own assets such as domains and app store entries. An ecosystem of such screeners can own these assets, conduct their own screening procedure, and capture in the upside the membership base they bring to the platform._

   *Added for disambiguation

## Note

If any participant wishes to add their own post-mortem, please open a PR.
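## Appendix: sketch of the "halfway house" from post-mortem point 1

The following is a minimal, hypothetical Rust sketch of the halfway house mentioned in post-mortem point 1: each content type gets an explicit runtime representation with a few typed properties lifted out (here, category and a podcast's cover image), while the long tail of per-type metadata remains an unvalidated JSON payload. All names are illustrative, not actual runtime types.

```rust
/// Hypothetical ids; not actual runtime types.
type DataObjectId = u64;
type CategoryId = u64;

/// Properties lifted out of the JSON and typed.
struct Common {
    /// Typed, so different types could have different category rules.
    category: CategoryId,
    /// Still runtime-checked against the data directory.
    data_object_ids: Vec<DataObjectId>,
    /// The long tail of per-type metadata stays as an unvalidated payload.
    json_payload: Vec<u8>,
}

/// Explicit typing at the top level, unlike the pure "JSON approach";
/// a type can also lift out extra fields for the runtime to police,
/// e.g. a podcast's cover image (addressing the invalid-content concern).
enum ContentEntry {
    Video(Common),
    Music(Common),
    Podcast { cover_image: DataObjectId, rest: Common },
}

fn main() {
    let entry = ContentEntry::Podcast {
        // The runtime could verify this id refers to an image data object.
        cover_image: 7,
        rest: Common {
            category: 3,
            data_object_ids: vec![7, 8],
            json_payload: br#"{"title":"Episode 1"}"#.to_vec(),
        },
    };

    // Adding a new variant still needs a runtime upgrade, but changes that
    // only touch the json payload need no migration.
    match entry {
        ContentEntry::Video(_) | ContentEntry::Music(_) => println!("a/v item"),
        ContentEntry::Podcast { cover_image, .. } => {
            println!("podcast; cover image object id = {}", cover_image)
        }
    }
}
```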