Comments:
3DV provides a compressed representation of video plus depth. Camera parameters are also carried in the bitstream. A mechanism may be needed for transforming from the image domain to the physical domain (see the first sketch after these comments).
There is an encoder mode that optimizes compression for view synthesis purposes. This mode may conflict with AR requirements.
MVC + Depth is at DAM stage. Test streams are available (2-3 views plus the corresponding depth). An alternative, already decoded representation (including the supplementary information) is also available (details to be received by email).
Temporal consistency of depth is a concern because the analysis is done frame by frame. Controlling the precision of the depth reconstruction (at the decoder) with respect to the distance from the camera is possible (see the second sketch after these comments).
The encoder can be tuned to increase the accuracy of the depth estimation, representation, and encoding in specific zones that will be augmented.
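
As a rough illustration of the image-to-physical-domain transform mentioned above, the sketch below back-projects a pixel and its decoded depth sample into world coordinates using the camera parameters carried in the bitstream. It assumes a pinhole camera model and the usual 3DV depth-map convention in which the 8-bit sample quantizes inverse depth between Znear and Zfar; the function names, conventions, and parameter values are illustrative, not taken from any specification.

```python
import numpy as np

def depth_sample_to_metric(v, z_near, z_far, bit_depth=8):
    """Convert a decoded depth-map sample to metric depth.

    Assumes the common 3DV convention: the sample quantizes 1/z
    uniformly between 1/z_near and 1/z_far (illustrative, not normative).
    """
    v_max = (1 << bit_depth) - 1
    inv_z = (v / v_max) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return 1.0 / inv_z

def backproject(u, v_px, z, K, R, t):
    """Back-project pixel (u, v_px) at metric depth z to world coordinates.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation,
    assuming the convention x_cam = R @ x_world + t.
    """
    pix = np.array([u, v_px, 1.0])
    x_cam = z * np.linalg.inv(K) @ pix   # camera ray scaled to depth z
    x_world = R.T @ (x_cam - t)          # invert the extrinsic transform
    return x_world

# Example with made-up camera parameters
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
z = depth_sample_to_metric(v=128, z_near=0.5, z_far=10.0)
print(backproject(320, 200, z, K, R, t))
```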
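
Along the same lines, the comment on depth precision versus camera distance can be made concrete: because the depth sample quantizes 1/z uniformly, the metric depth step between adjacent code values grows roughly quadratically with distance. The short sketch below (reusing the illustrative depth_sample_to_metric above, with made-up Znear/Zfar) prints the reconstruction step near and far from the camera.

```python
# Metric depth step between adjacent 8-bit levels, near vs. far from the camera
# (illustrative values only; reuses depth_sample_to_metric from the sketch above).
z_near, z_far = 0.5, 10.0
for v in (250, 10):  # 250 ~ close to the camera, 10 ~ far away
    step = abs(depth_sample_to_metric(v + 1, z_near, z_far)
               - depth_sample_to_metric(v, z_near, z_far))
    print(f"sample {v}: depth {depth_sample_to_metric(v, z_near, z_far):.3f} m, "
          f"step {step * 100:.2f} cm")
```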
Step 1: use the original (not encoded) test data: texture, depth, camera parameters and depth calibration parameters.
Step 2: use the encoded bitstreams.
AP: locate the bitstreams that can be used in the CE.
AP: create a virtual scene that fits the real scene from the selected bitstreams.
AP (3DV): a player would be much appreciated.
AP: create an AhG with the objective of preparing a document showing the MPEG technologies (3DV, CDVS, UD) in the framework of AR.