International Organisation for Standardisation
Organisation Internationale de Normalisation




2. AhG reports

2.1 MPEG-V


http://wg11.sc29.org/doc_end_user/documents/108_Valencia/ahg_presentations/MPEG-VAhGreport.ppt-1396812411-MPEG-VAhGreport.ppt

2.2 Augmented Reality


http://wg11.sc29.org/doc_end_user/documents/108_Valencia/ahg_presentations/ARAhGreport.ppt-1396812286-ARAhGreport.ppt

2.3 RGC


http://wg11.sc29.org/doc_end_user/documents/108_Valencia/ahg_presentations/AHG_report_presentation_MTL.pptx-1396876751-AHG_report_presentation_MTL.pptx

2.4 3D Graphics compression


http://wg11.sc29.org/doc_end_user/documents/108_Valencia/ahg_presentations/AhGGraphicsCompressionReport.pptx-1396876675-AhGGraphicsCompressionReport.pptx

2.5 3DG Activities are reported in the Wednesday and Friday plenaries


Wednesday: http://wg11.sc29.org/doc_end_user/documents/108_Valencia/presentations/MPEG3DGraphicsMercredi.ppt-1396429559-MPEG3DGraphicsMercredi.ppt
Friday:

http://wg11.sc29.org/doc_end_user/documents/108_Valencia/presentations/MPEG3DGraphicsVendredi.ppt-1396620152-MPEG3DGraphicsVendredi.ppt




3. Analysis of contributions




RGC

m33258

MPEG-4
3DG

Updates on RGC CAL Reference Software Status

Christian Tulvan, Marius Preda

ARAF

m33382




ARAF guidelines: PROTOs implementations

Revision of the use case.



Traian Lavric, Marius Preda

m33411




Use Case for AR Social networking service

Information from social networks can be used to augment the ARAF scene.

The user profile (and user preferences) will be used to get data from a Social Network Service. The server provides "SNS Data", and this information should be converted into the scene.

An SNS PROTO and an SNS_Container PROTO are provided.

Resolution: Consider the User Description under n categories: "static data", "SNS_related_data" (location, mood, last post, …), "…"
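The conversion step described above can be sketched as follows. This is a minimal sketch only: the category fields (location, mood, last post) follow the resolution, but the data-class names and the scene-node dictionaries are hypothetical stand-ins for the SNS and SNS_Container PROTO interfaces, which are defined in the contribution itself.

```python
from dataclasses import dataclass

# Hypothetical split of the User Description into the categories named in
# the resolution: "static data" vs. "SNS_related_data".
@dataclass
class StaticUserData:
    name: str
    age: int

@dataclass
class SNSRelatedData:
    location: str
    mood: str
    last_post: str

@dataclass
class UserDescription:
    static: StaticUserData
    sns: SNSRelatedData

def sns_to_scene_nodes(desc: UserDescription) -> list[dict]:
    """Convert SNS data into simple scene nodes (stand-ins for the
    SNS_Container PROTO instances that would augment the ARAF scene)."""
    return [
        {"node": "Text", "string": f"{desc.static.name} is {desc.sns.mood}"},
        {"node": "Text", "string": f"Last post: {desc.sns.last_post}"},
        {"node": "Anchor", "description": f"Location: {desc.sns.location}"},
    ]
```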


B. S. Choi, Young Ho Jeong, Jin Woo Hong





MAR RefMod

Review of RefModel

- Section 11 MAR System API: have a



C. Perrey




3DV for AR

The Depth Estimation Reference Software may be used, but the estimates are not always of good quality.

One idea is to combine two approaches: a time-of-flight camera and color camera(s).



Internal session




CDVids (now called CDVA)

Idea for the AR use case with remote detection: several descriptors (the top matches) can be sent by the Detection Library and used for local tracking.

BB: bounding box

The objective is to create models (signatures) of a set of classes. The browser will extract a signature of the whole image and send it to the processing server, which returns the set of classes found in the image.

The classes currently planned are automotive: vehicles (with cars as a subset) and pedestrians.

Tracking is also an objective.
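The remote-detection idea can be sketched as below. The descriptor/bounding-box data shapes, the stubbed server response, and the nearest-centroid tracker are illustrative assumptions only, not the CDVA design.

```python
# Sketch of remote detection + local tracking: the server returns the
# top-matching classes with bounding boxes (BB), and the client uses them
# to seed and continue tracking locally.

def centroid(bb):
    """Center point of a bounding box given as (x, y, width, height)."""
    x, y, w, h = bb
    return (x + w / 2.0, y + h / 2.0)

def remote_detect(frame_signature):
    """Stand-in for the processing server: given the signature of a whole
    image, return the set of classes found, with bounding boxes."""
    return [
        {"class": "car", "bb": (100, 50, 80, 40)},         # top match
        {"class": "pedestrian", "bb": (300, 60, 30, 90)},  # next match
    ]

def track_locally(prev_bb, candidate_bbs):
    """Pick the candidate whose centroid is closest to the previous BB,
    i.e. use the server's top matches to continue tracking locally."""
    px, py = centroid(prev_bb)
    return min(candidate_bbs,
               key=lambda bb: (centroid(bb)[0] - px) ** 2
                            + (centroid(bb)[1] - py) ** 2)
```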


Joint with CDVids




3D Video in AR

1. Augment the "Book arrival" video that will help to see the relation between the coordinate systems.


2. The depth estimation solution is not real time and is computationally expensive (http://vision.middlebury.edu/stereo/eval/)


Joint with Video




User Description

Presentation of PROTOs containing user static data and user social-network activity.
Take the SNS_Static_data and fit it into the mpeg-7:PersonalType and the UD:UserDescriptionType.








Audio 3D

Head Tracking is needed to render the audio.

3DAudio can be used to modulate the audio perception with respect to the user's position and orientation. Currently a similar approach is used on the production side, but it can also be used on the user side (in real time).


The 3D position and orientation of the graphical objects (enriched with audio) are known and should be forwarded to the 3D audio engine. The relative positions between the sources and the user are preferred.
Draw a diagram showing that the scene sends the 3D audio engine the relative positions of all the sources and gets back the sound for the headphones.
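The relative-position hand-off from the scene to the 3D audio engine can be sketched as a coordinate transform. Representing the listener pose as a position plus a single yaw angle is a simplifying assumption for the sketch; a full implementation would use a complete 3D orientation (e.g. a quaternion).

```python
import math

def source_relative_to_listener(source_pos, listener_pos, listener_yaw):
    """Transform a source position from scene (world) coordinates into the
    listener's frame, which is the representation the 3D audio engine
    prefers. listener_yaw is the head rotation about the vertical (y)
    axis, in radians."""
    # Offset of the source from the listener, in world coordinates.
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    # Rotate the offset by -yaw to express it in the listener's frame.
    cos_y, sin_y = math.cos(-listener_yaw), math.sin(-listener_yaw)
    rx = cos_y * dx + sin_y * dz
    rz = -sin_y * dx + cos_y * dz
    return (rx, dy, rz)
```

The scene would call this once per audio-enriched object per frame and forward the results to the object renderer.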
A reference software implementation exists, but it works on files. The chain is the following: (1) 3D decoder (multi-channel); some of the outputs are objects and higher-order ambisonics. (2) Object renderer. The 3D coordinates are included as metadata in the bitstream (represented as an angle of view: the user is facing the screen), but an entry can be made into the Object Renderer taking its input from the scene.
Represent the acoustic radiation model that can be attached per class of objects.
Personalized binauralisation may be possible, and this can be used in AR for personalized perception (HRTF model).
What are the limits of low latency? 50 ms is doable today (including the encoder). Homework: identify the use cases that need less than this.








Joint with JPEG

The MAR Reference Model was updated to also take into consideration the situation where the MAR Engine has pre-programmed behavior (such as when an AR application is used and the developer embedded the application behavior directly in the code).




MPEG-V

m32959

MPEG-V
3DG

Proposal of 3D Printer Capability Description

A set of capabilities is introduced for materials, colors, printer type …

There is no need to have a schema for all the various types of materials; instead, include a material name and a material provider.

Add the supported file type to the PrintingServiceCapability.
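A capability description along these lines could be built as in the sketch below. The element and attribute names (PrinterCapability, Material, SupportedFileType) and the example values are illustrative assumptions, not the schema proposed in m32959.

```python
import xml.etree.ElementTree as ET

def build_printer_capability():
    """Hypothetical 3D printer capability description: a material is
    identified by its name and provider rather than by a per-material
    schema, and the printing-service capability lists supported file
    types."""
    cap = ET.Element("PrinterCapability", {"printerType": "FDM"})
    mat = ET.SubElement(cap, "Material",
                        {"name": "PLA", "provider": "ExampleFilamentCo"})
    ET.SubElement(mat, "Color").text = "red"
    svc = ET.SubElement(cap, "PrintingServiceCapability")
    for ftype in ("STL", "OBJ", "3MF"):
        ET.SubElement(svc, "SupportedFileType").text = ftype
    return cap
```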



Seungwook Lee, Jinsung Choi, Kyoungro Yoon, Min-Uk Kim, HyoChul Bae

m32960




Proposal of 3D Printer User Preference Description

Preferred material and preferred material characteristics. There is also a notion of printing service, introduced through aspects such as maximum price and delivery limit.

Add the PrintingServicePreferences, including cost and delivery time.


Seungwook Lee, Jinsung Choi, Kyoungro Yoon, Min-Uk Kim, HyoChul Bae

m33097




Corrections of the makeup avatar type in MPEG-V Part 4 3rd edition

Add a mechanism to define the equation in the cosmetic model.

Some errors in the binary representation are corrected.


Jin-Seo Kim, In-Su Jang, Soon-Young Kwon, Sang-Kyun Kim, Yong Soo Joo




Summary of votes for MPEG-V


CD for Part 2: Approved

CD for Part 3: Approved, with one comment from the UK

CD for Part 4: Approved

CD for Part 5: Approved

CD for Part 6: Approved











MPEG-V output documents

Decision not to promote Parts 2, 3, 4, 5 and 6 to DIS because the reference software is not yet at CD.

Action points: propose the RefSoft and conformance for the next meeting. Tools not covered by the reference software will be removed from the DIS.

Multisensory application formats and the MPEG-V APIs are postponed.







3DG

m33114

MPEG-4
3DG

Proposal for opening an exploration activity within SC29WG11 for the definition of a standard technology for the compression, storage and streaming of genome data

Current technologies are able to sequence an individual genome.

There is a need for a compressed form of the genome that also contains annotations.

There is a need for interoperability because various actors are involved in the chain.

A set of requirements is proposed: lossless, having access points, variable sequence lengths, queryable, extensible per part of the genome, integrated annotations.
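The access-point requirement can be illustrated with a minimal block-compressed container. The block size, the index layout, and the zlib codec are assumptions chosen for the sketch, not the technology being proposed; the point is only that a per-block index enables random reads without decompressing the whole sequence, while staying lossless.

```python
import zlib

# Minimal sketch: split the sequence into fixed-size blocks, compress each
# losslessly, and keep a list of compressed blocks as the access-point
# index. A tiny block size is used purely for illustration.
BLOCK = 4

def compress_genome(seq: str) -> list[bytes]:
    return [zlib.compress(seq[i:i + BLOCK].encode())
            for i in range(0, len(seq), BLOCK)]

def read_range(blocks: list[bytes], start: int, end: int) -> str:
    """Random access: decompress only the blocks covering [start, end)."""
    first, last = start // BLOCK, (end - 1) // BLOCK
    data = "".join(zlib.decompress(blocks[b]).decode()
                   for b in range(first, last + 1))
    return data[start - first * BLOCK : end - first * BLOCK]
```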


Claudio Alberti (claudio.alberti@epfl.ch), Ioannis Xenarios, Marco Mattavelli (marco.mattavelli@epfl.ch), Heinz Stockinger, Yann Thoma

m32308

MPEG-4
3DG

Results for Open3DGC and OpenCTM benchmarking

Christian Tulvan, Marius Preda

m33466




3D Tele-Immersion Use Cases and Requirements

Lazar Bivolarsky

m33467




Crosscheck of the Benchmarking Results and Test Procedure for SC3DMC, Open3DGC and OpenCTM

Lazar Bivolarsky



