Thank you. We note that prediction (P/B pictures) doesn’t fall so easily into the meta-box, but it can be done (using item references). However, we might prefer to say that if you need inter-prediction, you should use a track. We note that the problem of item/track cross-references is probably solvable (a shared number space, fragment references, meta-boxes embedded in tracks).
We wonder about saying that single images, or ‘bags’ of images (with no ordering, timing, etc.), go in a meta-box, but as soon as there is a ‘sequence’, prediction, or timing, you use a track. So an ‘animated GIF’ would probably need both a meta-box and a track?
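As a rough illustration of that ‘animated GIF’ case, here is a minimal sketch (ours, not part of the proposal) that walks the top-level boxes of an ISO base media file and reports whether it carries untimed items (a top-level ‘meta’ box), timed media (a ‘moov’ box with tracks), or both. The box walking is generic; the classification rule is simply our reading of the split described above.

```python
import struct

def top_level_boxes(path):
    """Yield (box_type, size) for each top-level box of an ISO base media file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            size, box_type = struct.unpack(">I4s", header)
            name = box_type.decode("ascii", "replace")
            if size == 1:                       # 64-bit 'largesize' follows
                size = struct.unpack(">Q", f.read(8))[0]
                body = size - 16
            elif size == 0:                     # box runs to the end of the file
                yield name, None
                return
            else:
                body = size - 8
            yield name, size
            f.seek(body, 1)                     # skip the box payload

def classify(path):
    """Very coarse classification: untimed items, timed tracks, or both."""
    types = {name for name, _ in top_level_boxes(path)}
    has_items = "meta" in types                 # untimed 'bag of images'
    has_tracks = "moov" in types                # ordered/timed sequences
    if has_items and has_tracks:
        return "both (e.g. a still image plus an animated sequence)"
    if has_items:
        return "untimed items only"
    if has_tracks:
        return "timed tracks only"
    return "neither"
```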
What is the ‘overhead’ of these approaches? We estimate a meta-box at around 50 bytes, and a track at a few hundred (perhaps 400 bytes).
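Rather than pinning exact numbers down here, one way to sanity-check these back-of-envelope estimates is to measure a real file: sum the sizes of every top-level box except the media data itself. A small follow-on sketch, reusing the top_level_boxes() helper from the sketch above:

```python
def structural_overhead(path):
    """Total bytes of all top-level boxes except the media data ('mdat')."""
    # a size of None (a box that runs to end of file) is ignored in this rough count
    return sum(size or 0
               for name, size in top_level_boxes(path)
               if name != "mdat")
```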
It does seem that the JP2 approach has less applicability: the format would need a lot of adjustment (it wasn’t designed as an extensible, general image or media file format), and the fact that it doesn’t use offsets/pointers makes it less flexible for ‘multi-headed’ approaches. (But see below for the JPX discussion.)
The meta-box does have some interesting tools that tracks lack (notably the extents provision, which allows progressive, interleaved loading; item naming; and so on).
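To make the extents point concrete, here is a hedged sketch (the extent list is supplied by hand, not parsed from a real item-location box): an item’s data can be stored as several pieces interleaved with other items’ data, and the reader reassembles it by walking the (offset, length) list in order, which is what makes progressive, interleaved layouts possible.

```python
def read_item(path, extents):
    """Reassemble one item from its list of (file_offset, length) extents."""
    data = bytearray()
    with open(path, "rb") as f:
        for offset, length in extents:
            f.seek(offset)
            data += f.read(length)
    return bytes(data)

# Hypothetical usage: if the first extent of every image is grouped near the
# start of the file, a reader can render a coarse first pass of each image
# before the rest of the data arrives.
# preview = read_item("images.heif", [(4096, 2048)])   # offsets are made up
```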
We need to be careful with branding and documentation (notably, branding is currently ‘or’-only). Branding should be much clearer about requirements: (a) what must be in the file, and (b) what a reader must support (a larger list).
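As an illustration of what a stricter branding rule would buy, here is a hedged sketch of a reader-side check: parse the ‘ftyp’ box and refuse the file unless the reader supports at least one of the brands it declares. The brand strings in READER_SUPPORTS are placeholders, not proposals.

```python
import struct

def read_brands(path):
    """Return (major_brand, [compatible_brands]) from the leading 'ftyp' box."""
    with open(path, "rb") as f:
        size, box_type = struct.unpack(">I4s", f.read(8))
        if box_type != b"ftyp":
            raise ValueError("file does not start with an 'ftyp' box")
        body = f.read(size - 8)
    major = body[0:4].decode("ascii")
    # bytes 4..8 hold minor_version; the rest is the compatible-brands list
    compat = [body[i:i + 4].decode("ascii") for i in range(8, len(body), 4)]
    return major, compat

READER_SUPPORTS = {"brnd", "xxxx"}   # placeholder brands this reader implements

def reader_can_open(path):
    major, compat = read_brands(path)
    return bool(READER_SUPPORTS & ({major} | set(compat)))
```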
We initially focused on the simple case (non-tiled, no layering, etc.). We do need to consider tiling, scalability, and so on at some point.
We’ll need NB (National Body) support, but we already have Finland, Italy, Germany, Sweden, France, Austria, and (pending confirmation) the USA.
We note that the term ‘meta-box’ is getting seriously strange when it’s used like this for primary media data. We probably need to call it the ‘untimed stuff’ container…