International Organisation for Standardisation / Organisation Internationale de Normalisation




Task Groups





  1. MPEG-H 3D Audio

  2. MPEG-4 DRC

  3. Maintenance of MPEG-2, MPEG-4 and MPEG-D



  1. Output Documents


No. | Title | TBP | Available

23003-2 – Spatial Audio Object Coding

14054 | DoC on ISO/IEC 23003-2:2010/PDAM 3, Dialogue Enhancement | N | 13/11/01
14055 | ISO/IEC 23003-2:2010/DAM 3, Dialogue Enhancement | N | 13/11/01
14056 | Defect Report on SAOC text | N | 13/11/01

23003-3 – Unified Speech and Audio Coding

14057 | Test Report on Stereo Coding Performance of the USAC Common Encoder, JAME | YES | 13/11/01
14058 | Guide to use of USAC Common Encoder, JAME | N | 13/11/01
14059 | Defect Report on USAC Conformance and Reference Software | N | 13/11/01

23008-3 – 3D Audio

14060 | Working Draft Text of MPEG-H 3D Audio CO RM0 | N | 13/11/01
14061 | Working Draft Text of MPEG-H 3D Audio HOA RM0 | N | 13/11/01
14062 | Software for MPEG-H 3D Audio CO RM0 | N | 13/11/01
14063 | Software for MPEG-H 3D Audio HOA RM0 | N | 13/11/01
14064 | Submission and Evaluation Procedures for 3D Audio Phase 2 | YES | 13/11/01
14065 | Workplan on 3D Audio | N | 13/11/01
14066 | Workplan on Binauralization CE | N | 13/11/01

Exploration - Audio Dynamic Range Control

14067 | Working Draft on Dynamic Range Control | N | 13/11/01
14068 | Workplan on Dynamic Range Control | N | 13/11/01

Exploration - Audio Synchronization

14069 | Workplan for Audio Synchronization | N | 13/11/01

Liaison Statements

14070 | Liaison statement to SMPTE on 3D Audio content format | N | 13/11/01
14071 | Liaison statement to EBU on Audio Model and MPEG-4 File Format | N | 13/11/01
14072 | Liaison statement to IEC TC100 | N | 13/11/01

Promotion

14073 | Website Content Audio Promotion: MPEG-audio.org | YES | 13/11/01


  1. Agenda for the 107th MPEG Audio Meeting



Agenda Item

  1. Opening of the meeting

  2. Administrative matters

    1. Communications from the Chair

    2. Approval of agenda and allocation of contributions

    3. Review of task groups and mandates

    4. Approval of previous meeting report

    5. Review of AhG reports

    6. Joint meetings

    7. Received national body comments and liaison matters

  3. Plenary issues

  4. Task group activities

    1. MPEG-H 3D Audio

    2. Exploration: Dynamic Range Control

    3. Exploration: Audio Synchronization

    4. Maintenance: MPEG-2, MPEG-4, and MPEG-D

  5. Discussion of unallocated contributions

  6. Meeting deliverables

    1. Responses to Liaison and NB comments

    2. Recommendations for final plenary

    3. Establishment of new Ad-hoc groups

    4. Approval of output documents

    5. Press statement

  7. Future activities

  8. Agenda for next meeting

  9. A.O.B.

  10. Closing of the meeting


  1. 3DG report

Source: Marius Preda, Chair

Summary





1 Opening

2 Roll call of participants

3 Approval of agenda

4 Allocation of contributions

5 Communications from Convenor

6 Report of previous meetings

7 Processing of NB Position Papers

8 Workplan management

9 Organisation of this meeting

10 WG management

11 Administrative matters

12 Resolutions of this meeting

13 A.O.B.

14 Closing

Annex A – Attendance list

Annex B – Agenda

Annex C – Input contributions

Annex D – Output documents

Annex E – Requirements report

1. Requirements documents approved at this meeting

2. MPEG-4

3. MPEG-H

4. Explorations

Annex F – Systems report

1. General Input Documents

2. MPEG-2 Systems (13818-1)

3. MPEG-4 ISO Base File Format (14496-12)

4. MPEG-4 AVC File Format (14496-15)

5. Font Streaming (14496-18)

6. Composite Font Representation (14496-28)

7. Reference Software (21000-8)

8. Surveillance AF (23000-10)

9. Stereoscopic Video AF (23000-11)

10. MP-AF (23000-15)

11. Common encryption format for ISO BMFF (23001-7)

12. Common encryption format for MPEG-2 TS (23001-9)

13. Timed metadata metrics of Media in ISOBMFF (23001-10)

14. MPEG-U (23007-3)

15. MMT (23008-1)

16. MMT FEC (23008-10)

17. MMT CI (23008-11)

18. MMT Implementation Guide (23008-12)

19. Media presentation description and segment formats (23009-1)

20. Conformance and Ref. SW. for DASH (23009-2)

21. Implementation Guidelines (23009-3)

22. Green MPEG

Annex G – Video report

1 Organization of the work

2 MPEG-4 part 10 AVC

3 Web Video Coding

4 MPEG-7 Visual

5 CDVS

6 Internet Video Coding Exploration

7 Video Coding for Browsers

8 Reconfigurable Media Coding – Video related

9 HEVC

10 Wednesday Video plenary status review

11 Closing plenary topics

Annex H – JCT-VC report

1 Administrative topics

2 AHG reports (22)

3 Project development, status, and guidance (34)

4 Core experiments in SHVC (24)

5 Core experiments in Range Extensions (28)

6 Non-CE Technical Contributions (161)

7 Plenary Discussions and BoG Reports

8 Project planning

9 Establishment of ad hoc groups

10 Output documents

11 Future meeting plans, expressions of thanks, and closing of the meeting

Annex A to JCT-VC report: List of documents

Annex B to JCT-VC report: List of meeting participants

Annex I – JCT-3V report

12 Administrative topics

13 AHG reports (15)

14 Project development, status, and guidance (4)

15 Core experiments

16 3DV standards development (incl. software, conformance) (10)

17 High-level syntax (78)

18 Non-CE technical contributions (12)

19 Alternative depth formats (2)

20 Non-normative contributions (1)

21 Withdrawn, unclear allocation (6)

22 Plenary Discussions and BoG Reports

23 Project planning

24 Establishment of ad hoc groups

25 Output documents

26 Future meeting plans, expressions of thanks, and closing of the meeting

Annex A to JCT-3V report: List of documents

Annex B to JCT-3V report: List of meeting participants

Annex J – Audio report

1 Opening Audio Plenary

2 Administrative matters

3 Record of AhG meetings

4 Task group activities

5 Closing Audio Plenary and meeting deliverables

Annex A Participants

Annex B Audio Contributions and Schedule

Task Groups

Annex C Output Documents

Annex D Agenda for the 107th MPEG Audio Meeting

Annex K – 3DG report

23. Opening of the meeting

24. AhG reports

25. Analysis of contributions

26. General issues

27. General 3DG related activities

28. Resolutions from 3DG

29. Resolutions related to MPEG-4

30. Resolutions related to MPEG-A

31. Resolutions related to MPEG-C

32. Resolutions related to MPEG-V

33. Resolutions related to MPEG-M

34. Resolutions related to Explorations

35. Establishment of 3DG Ad-Hoc Groups

36. Closing of the Meeting



  1. Opening of the meeting




    1. Roll call

    2. Approval of the agenda


The agenda is approved.
    1. Goals for the week


The goals of this week are:

  • Review contributions related to Graphics Tool Library

  • Review contributions related to 3D graphics aspects of MPEG-V

  • Review contributions related to 3D graphics aspects of AR

  • Review contributions related to Multi-resolution 3D mesh coding Reference Software and Conformance

  • Review contributions related to Pattern-based 3D Mesh Coding (PB3DMC)

  • Status of reference software and conformance for Pattern-based 3D Mesh Coding (PB3DMC)

  • Investigate future developments for MPEG 3D Graphics Compression

  • Relationship with WebGL and WebCL

  • Relationship with X3D/COLLADA/others

  • 3D graphics based tele-immersion

  • Review and issue liaison statements

  • SC24 liaison

  • Review the votes

  • Web-site

  • MPEG database


    1. Standards from 3DGC



3 | 4 | 5 | 2001 | Amd.32 | RS multi-resolution 3D mesh compression | 11/12, 13/01, 13/04, 14/04
3 | 4 | 5 | 2001 | Amd.35 | RS Pattern based 3D mesh compression | 13/11
3 | 4 | 16 | 2011 | Amd.3 | Web3DG coding | 14/01, 14/07, 14/11
3 | 4 | 16 | 2011 | Amd.4 | Pattern-based 3D Mesh Compression | 13/08
3 | 4 | 27 | 2009 | Amd.5 | C for multi-resolution 3D mesh compression | 13/01, 13/04, 14/04
3 | 4 | 27 | 2009 | Amd.5 | C for Pattern based 3D mesh compression | 13/11
3 | A | 13 | 201x | Amd.1 | ARAF RS & C | 13/01, 13/04, 13/11
3 | A | 14 | 201x | 1st Ed. | ARRM | 12/10, 14/01
3 | C | 4 | 2010 | Amd.3 | GTL for RMC framework | 11/03, 12/02, 13/04, 13/11
3 | C | 5 | 201x | Amd.1 | GTL RS & C | 12/02, 12/10, 13/08, 14/01
3 | V | 1 | 201x | 3rd Ed. | Architecture | 13/11, 14/01, 14/07
3 | V | 2 | 201x | 3rd Ed. | Control information | 13/11, 14/01, 14/07
3 | V | 3 | 201x | 3rd Ed. | Sensory information | 13/11, 14/01, 14/07
3 | V | 4 | 201x | 3rd Ed. | Virtual world object characteristics | 13/11, 14/01, 14/07
3 | V | 5 | 201x | 3rd Ed. | Data formats for interaction devices | 13/11, 14/01, 14/07
3 | V | 6 | 201x | 3rd Ed. | Common types and tools | 13/11, 14/01, 14/07



    1. Room allocation


3DGC: 15
    1. Joint meetings


During the week, 3DG had several joint meetings with Requirements, Video, Audio and Systems.


Groups | What | Day | Time1 | Time2 | Where
R, 3 | ARAF 2 | Tue | 11:00 | 12:00 | 3
V, 3 | CDVS in ARAF 2 | Tue | 12:00 | 13:00 | 3
R, A, V, S, 3 | MPEG assets | Wed | 12:30 | 13:00 | 3
A, 3 | 3D Audio in ARAF 2 | Thu | 12:00 | 13:00 | A



    1. Schedule at a glance





Monday
Session | Agenda/Status/Preparation of the week | 14h to 14h30
Session | 3DG Plenary, review of contributions | 14h30 to 18h

Tuesday (room 15)
Session | MPEG-V | 9h to 11h
Session | Joint Meeting with Requirements for Augmented Reality | 11h to 12h
Session | Joint Meeting with Video for Augmented Reality and CDVS | 12h to 13h
Session | ARAF contributions | 14h to 18h
Session | RMC BoG (room 7) | 9h to 18h

Wed
Session | RGC | 11h30 to 13h
Session | Realistic Material | 14h to 15h
Session | ARAF Browser Interop. | 15h to 16h
Session | ARAF Remote Detection | 16h50 to 17h30
Session | ARAF POI | 17h30 to 18h

Thu
Session | MPEG-V | 9h to 10h
Session | ARAF E2 | 10h to 11h30
Session | MPEG Assets | 11h30 to 12h
Session | 3D Audio in ARAF 2 | 12h to 13h
Session | RGC BoG | 9h to 11h30
Session | ARAF E2 | 14h to 16h
Session | MAR Ref Model | 16h to 18h

Friday
Session | 3DG Plenary, 3DG Output Documents Review, resolutions; Review of MPEG-V TuC | 9h to 12h
  1. AhG reports


R, A, V, S, 3 | MPEG assets | Thu | 11:30 | 12:00 | 3
A, 3 | 3D Audio in ARAF 2 | Thu | 12:00 | 13:00 | A



    1. MPEG-V


http://wg11.sc29.org/doc_end_user/documents/105_Vienna/ahg_presentations/MPEG-VAhGreport.ppt-1375087462-MPEG-VAhGreport.ppt
    1. Augmented Reality


http://wg11.sc29.org/doc_end_user/documents/105_Vienna/ahg_presentations/ARAhGreport.ppt-1375121531-ARAhGreport.ppt
    1. RGC


http://wg11.sc29.org/doc_end_user/documents/105_Vienna/ahg_presentations/AHG_report_presentation_MTL.pptx-1375088396-AHG_report_presentation_MTL.pptx

  1. Analysis of contributions




m31381

PB3DMC bitstream modification

Proposal to add 4 bits for the version number.

Resolution: accepted


KangYing Cai, Luo Tao, Mary-Luc Champel
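The accepted proposal can be illustrated with a small sketch of MSB-first bit packing; the field name and its position in the header are hypothetical, not the normative PB3DMC syntax.

```python
# Illustrative sketch (not the normative PB3DMC bitstream syntax):
# how a 4-bit version field can be packed into and parsed back out of
# a header. The field name "pb3dmc_version" is an assumption.

def write_bits(buf, value, nbits):
    """Append `nbits` bits of `value` (MSB first) to a list of bits."""
    for i in reversed(range(nbits)):
        buf.append((value >> i) & 1)

def read_bits(bits, pos, nbits):
    """Read `nbits` bits starting at `pos`; return (value, new_pos)."""
    value = 0
    for i in range(nbits):
        value = (value << 1) | bits[pos + i]
    return value, pos + nbits

bits = []
write_bits(bits, 2, 4)                 # hypothetical pb3dmc_version = 2, on 4 bits
version, pos = read_bits(bits, 0, 4)   # decoder reads the same 4 bits back
```

Four bits allow sixteen version values, which is ample headroom for future bitstream revisions at a one-nibble cost.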

m31455

Preliminary results for SC3DMC, Open3DGC and OpenCTM benchmarking

Resolution: continue the experiment and cover the entire range of distortions.

Specify in the current contribution which distortion the 1:1.6 ratio corresponds to.


Christian Tulvan, Marius Preda

m31477

RealisticMaterial node and its mapping method for MPEG-4 Part 11

Resolution: to define the SpectrumTexture node on the same level with ImageTexture.

Review again Wed.

Wed: two nodes are specified: SpectrumTexture and SpectrumColor.

Accepted (add in the WD)


Jin-Seo Kim, In-Su Jang, Soon-Young Kwon, Sang-Kyun Kim, Yong Soo Joo

m31476

LightSpectrum node for MPEG-4 Part 11

Resolution: add the WLColor that defines the Color as a set of at most 300 values (one for each wavelength).

Review again Wed.


Jin-Seo Kim, In-Su Jang, Soon-Young Kwon, Sang-Kyun Kim, Yong Soo Joo

RGC


m31452

Updates on TFAN network description using MTL FUs and MTL granular FUs

Christian Tulvan, Marius Preda

m31453

Updated Collection of Datasets for MTL FU level testing

Christian Tulvan, Marius Preda

m31454

Updates on RGC (RVC-CAL) - reference software status

Christian Tulvan, Marius Preda

m31553

Status Report of SVA implementation for ISO/IEC 23002-4/DAM3

Li Cui, JinYeon Choi, Seungwook Lee, Euee S. Jang




Wed:

Three FUs are modified:

- Algo_ED_BitPrecision: revert to a previous version because it simplifies the network configuration

- Algo_ContextModeling_SVA_INDEXES: to support the special case of decoding the first triangle in SVA

Reference Software: not all the FUs are available on the SVN. The software should be uploaded before Friday morning and the availability table updated.

- Include an informative annex explaining the alternative C++ implementation.

Network Configuration:

Add the TFAN and SVA FNDs in the Annex F (informative). Both xml and picture.

Conformance: Keep 20 3D objects with various attributes and characteristics. For each object choose different encoding parameters.

Create a table with file name, encoding parameters, and input/output token at each FU level.









MPEG-V





m31478

Errors in the text of ISO/IEC 23005-2 2nd edition

Some editorial comments; the length of the code should be aligned with Part 6. Accepted



Sang-Kyun Kim, Yong Soo Joo, Seung-Jun Yang, Chunghyun Ahn

m31479

Errors in the text of ISO/IEC 23005-3 2nd edition

BaseWaveFormCS is specified in neither Part 3 nor Part 6. It should be added to Part 6, together with its binary representation.



Sang-Kyun Kim, Yong Soo Joo, Seung-Jun Yang, Chunghyun Ahn

m31480

Errors in the text of ISO/IEC 23005-4 2nd edition

Only some editorial comments. Accepted



Sang-Kyun Kim, Yong Soo Joo, Seung-Jun Yang, Chunghyun Ahn

m31481

Errors in the text of ISO/IEC 23005-5 2nd edition

The ScentCS should be specified on 9 bits and not 16.



Sang-Kyun Kim, Yong Soo Joo, Seung-Jun Yang, Chunghyun Ahn

m31483

Errors in the text of ISO/IEC 23005-6 2nd edition

Sang-Kyun Kim, Yong Soo Joo, Seung-Jun Yang, Chunghyun Ahn

m31517

Proposal of adding occupancy sensor type to ISO/IEC 23005-5

This sensor provides a Boolean indication of whether the surrounding space is occupied.

Resolution: merge this proposal with the ProximitySensorType:

- change the "value" with "distance"

- Add the ProximitySensorCapabilityType with the attributes: coverage and units.

Tuesday updates: ProximitySensorCapabilityType has an attribute called sensedRange.



Hyunjin Yoon, Jae-Kwan Yun, Hyun-Woo Oh, Jong-Hyun Jang
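The merged design above can be sketched as follows; the attribute names (sensedRange, distance) follow the minutes, but the class structure is an illustration, not the actual MPEG-V XML schema.

```python
# Sketch of the merged design: an occupancy reading derived from a
# proximity sensor whose capability declares a sensed range. The
# `units` attribute and float types are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ProximitySensorCapability:
    sensedRange: float       # maximum distance the sensor can report, in `units`
    units: str = "m"

@dataclass
class ProximitySensorReading:
    distance: float          # measured distance to the nearest object

def is_occupied(reading, capability):
    """Occupancy as a special case: something lies within the sensed range."""
    return 0.0 <= reading.distance <= capability.sensedRange

cap = ProximitySensorCapability(sensedRange=5.0)
occupied = is_occupied(ProximitySensorReading(distance=1.2), cap)
```

This shows why the merge works: the occupancy sensor's Boolean is recoverable from the richer proximity reading, so no separate type is needed.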

m31519

Proposal of adding thermographic camera type to ISO/IEC 23005-5

This type of camera produces a matrix of temperatures.

Resolution: create a ThermographicCameraSensorType extending the CameraSensorType and introducing the attribute "temperature"

Tuesday updates: use matrix instead of array. All changes accepted.



Hyunjin Yoon, Hyun-Woo Oh, Jae-Kwan Yun, Jong-Hyun Jang

m31529

Proposal of adding COG(Center of Gravity) type to ISO/IEC 23005-5

Resolution: change the syntax to represent a 2D vector formed by the origin of the coordinate system and the projection of the CoG on the horizontal plane.

Illustrate the vector on an image.

Tuesday updates: Create the CoMSensor (Center of Mass) that extends BodyWeightSensor, and add the CoGBodyWeightSensorCapability.

Thursday: definition of the CoMSensor as an extension of the BodyWeightSensorType plus a 2D incline vector

Definition of the BodyWeightCapabilityType.




Jae-Kwan Yun, Hyun-Woo Oh, Hyunjin Yoon, Jae-Du Huh

m31562

Errors in the text of ISO/IEC 23005-5 2nd edition

In Section 6.4.5 there is an error in the example: the pts must be 60000, and the insertTimeScale of 100 needs to be specified.




Hyunjin Yoon, Jong-Hyun Jang

m31634

Reference software for the SEP Engine

A preliminary implementation of the SensoryEffectProcessing Engine is presented. There is some concern that the API refers explicitly to Tables as the manner of representing the SEM internally. This may be changed, replacing Table with "InternalStructure".

Resolution: continue the implementation of the engine and provide some application examples.


Jae-Kwan Yun, Hyunjin Yoon, Hyun-Woo Oh, Jae-Du Huh


AR





m31418

Updates on POI – ARAF
- the POI is the physical place or object.
- add the possibility to attach images/videos/3D objects to the POIs:

mediaContent [http://bla.png, http://bla.mp4]

mediaType [image, video]

mediaDescription ["this is the terrace", "this is an indoor view"]


- define GlobalPosition with CRS, latitude, longitude (of type double with restriction) http://wg11.sc29.org/mpeg-v/?page_id=2302
- define altitude with CRS, altitude as a double http://wg11.sc29.org/mpeg-v/?page_id=2304
- use one single field for the type of POI
- OGC has a WG for specifying POI (github opengeospatial / poi)

- services are available that serve POIs: gazetteers.

- INSPIRE is a European initiative for establishing an infrastructure for geographic data
MarkerMap current functionalities:
- data related to POI:

position (location+radius), keywords, name, shape

- data related to the scene:

rotation, clickable, visible, enabled,

- data related to Player:

GPS position of the player

- data related to maps:

MAP zoom, map GPS center

- actions:

onPlayerArround, onPlayerLeft, onClick


Targeted functionalities


  • To be a visual representation of a POI

    • Support direct mapping between ALL the POI fields and (some of) the MapMarker fields.

  • Allow user interaction

    • Click, Double click, Drag, Mouse over, Mouse up, Mouse down

    • Change its properties with respect to the user position

  • Open external applications (video player, web browser..).



Traian Lavric, Marius Preda
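The POI fields listed in the contribution above can be pictured as one hypothetical record; the field names follow the minutes, and all values are placeholders.

```python
# A hypothetical POI record assembled from the fields discussed above
# (mediaContent/mediaType/mediaDescription, GlobalPosition, altitude,
# a single type field). CRS identifiers and coordinates are placeholder
# values, not part of the contribution.
poi = {
    "type": "restaurant",                       # one single field for the POI type
    "GlobalPosition": {"CRS": "EPSG:4326",      # CRS + latitude/longitude as doubles
                       "latitude": 48.8566,
                       "longitude": 2.3522},
    "altitude": {"CRS": "EPSG:5773", "altitude": 35.0},
    "mediaContent": ["http://bla.png", "http://bla.mp4"],
    "mediaType": ["image", "video"],
    "mediaDescription": ["this is the terrace", "this is an indoor view"],
}

# The three media lists are parallel: entry i of each describes one attachment.
attachments = list(zip(poi["mediaContent"], poi["mediaType"], poi["mediaDescription"]))
```

The parallel-list layout mirrors the mediaContent/mediaType/mediaDescription triple proposed in m31418, which is what a MapMarker would consume when rendering a POI.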

m31661

Comments on m26114 (Initial proto design for server side processing for augmented reality)

Traian Lavric, Marius Preda

m31239

Metadata for ARAF and its usage with map supporting node

B. S. Choi, Youngho Jeong, Jinwoo Hong

m31658

AR Browser Interoperability Status Report
This contribution introduces an initiative from some AR browser vendors to find or create an interoperable format for geo-spatial information and associated actions (open a video, make a call, …).

Resolution: convert the example proposed into ARAF and send them the result.



Christine Perey (Wed 16h30 – 17h00)

m31444

Input on 14496-11 second edition CD
Keep the SimpleAugmentationRegion

Use of MFTime for activation times


Change the AugmentationRegion and use the same principle as SimpleAugmentationRegion
Replace arProvider with an SFNode

Jean Le Feuvre, Marius Preda, Traian Lavric (17h00 – 18h00)

Joint meeting with Requirements




xxxx

Review and updates of ARAF Requirements
The document N12585 was updated to:

- add user description support

- add sensor data compression

- add 3D cameras

- add microphones

- add environmental sensors

- add support for point of interest

- add support for remote processing of signal descriptors



Seungwook Lee, Jinsung Choi, Bon Ki Koo

m31505

Draft requirement and use cases for the 3D printing metadata
The contribution introduces a set of requirements for materials used in 3D printing. While acknowledging the growth of the 3D printing industry, the group expressed concern about the lack of experts in the field participating in the standardization process. To ensure the quality of the standard and its deployment, commitment from representatives of the 3D printing industry to participate in the standardization process is necessary.


Seungwook Lee, Jinsung Choi, Bon Ki Koo


    1. Session: Agenda/Status/Preparation of the week








Ad hoc on Key Technologies for AR

Marius Preda




Ad hoc on Graphics Tool Library

Christian Tulvan, Seung Wook Lee




Ad hoc on Augmented Reality

Marius Preda, Minsu Ahn, James D.K. Kim




Ad hoc on MPEG-V

Marius Preda



    1. Session: 3D Audio and AR





Session

Thu 3DA and AR

14h-15h

At the 104th MPEG meeting the 3DG Chair summarized the needs of an Audio Augmented Reality system as follows:

  • Analyze scene and capture appropriate metadata (e.g. acoustic reflection properties) and attach it to a scene

  • Attach directional metadata to audio object

  • HRTF information for the user

  • The audio augmented reality system should be of sufficiently low complexity (or we have sufficiently high computational capability) that there is very low latency between the real audio and the augmented reality audio.

The 3DG Chair clarified that less than 50 ms is a realistic requirement for 3D audio latency.

The 3DG Chair gave an example of Augmented Reality: a person has a mobile device with a transparent visual display. The person sees and hears the real world around him, and can look through the transparent visual display (i.e. “see-through” mode) to see the augmented reality (e.g. with an avatar rendered in the scene).

The avatar has a pre-coded audio stream (e.g. an MPEG-4 AAC bitstream) that it can play out, and ARAF knows its spatial location and orientation (i.e. which way it is facing). The required audio metadata is:


  • Radiation properties of avatar (i.e. radiation power as a function of angle and distance)

The avatar audio could be presented via headphones or earbuds. The Audio Chair noted that ARAF may want to incorporate a head tracker so that the avatar can remain fixed within the physical-world coordinate system. The 3DG Chair noted that if “see-through” mode is used then the orientation of the mobile device screen would be sufficient. In that case, audio could be presented via the mobile device's speakers.

The Audio Chair noted that the avatar could function as a “virtual loudspeaker” in MPEG-H 3D Audio, such that the 3D Audio renderer could be used for presentation via headphones. However, 3D Audio is not able to add environmental effects (e.g. room reverberation) that are separate from the avatar sound. Furthermore, 3D Audio cannot add such environmental effects based on listener location or orientation (e.g. reverberation changes when the listener moves closer to a reflecting surface).

Yeshwant Muthusamy, Samsung, noted that the Khronos OpenSL ES API provides 3D audio support, e.g. reverberation based on user location, and might offer a solution. However, the Audio Chair noted that ARAF does not know the acoustic properties (i.e. surface reflective properties) of the real-world environment (unless it can deduce them from the visual scene), and thus audio effects based on the real-world environment are not possible, whether using MPEG-H 3D Audio or OpenSL ES. 3DG experts will consider this problem and report back at a future joint meeting.
The following steps are defined:

Step 1: Integrate and test the BIFS 3D Audio nodes

Step 2: Integration of the MPEG-H engine, which will be able to synthesize the audio signal at a specific position

Step 3: Investigate audio propagation of virtual sources taking into consideration the physical medium properties


BIFS 3D exists in three implementations (Technicolor, Orange, Fraunhofer).

Marius will ask the owners of the implementations for contributions.


Updates in Geneva, October 2013:
Topics:
RENDERING:

1. Spatial audio in ARAF scenes by using the nodes: AcousticScene, AcousticMaterial, DirectiveSound, PerceptualParameters. What are the usage scenarios for these nodes?
CAPTURE:

2. Microphone Sensor support in MPEG-V; arrays of microphones; ambisonic microphones, directional microphones.
It is difficult to have sensors that are able to obtain the acoustic parameters.
DETECTION & TRACKING:

3. Audio Scene Analyzer: identify a sound or sound properties and trigger an event based on this identification. Optionally, the sound (or a compact description of it) is used as an entry to a remote database (similar to CDVS). Question: what kind of audio properties can be used to model the context?
MPEG-7 has low-level parameters that permit music recognition. There is nothing yet for a Compact Descriptor for Speech. Research on such compact descriptions is still in progress, so it is too early to start work on a standard.
4. Sound sources separation and augmentation of individual sources.
This is better supported with special types of microphones (check with Maurizio Omologo).
5. Audio actuator: directional speaker
Note: there may be some legal issues with capturing audio in public places.

    1. Session: RMC BoG


The following table shows the list of the FUs specified and implemented in the reference software (and available on the SVN).


Algo_Parser_SC3DMC

Algo_InverseQuantization1D

Algo_InverseQuantizationND

Algo_InversePrediction1D

Algo_InversePredictionND

Algo_ED_AD_StaticBit

Algo_ED_AD_AdaptiveBit

Algo_ConnectivityDecoding_SVA

Algo_DecodeConnectivity_TFAN

Algo_LookUpTable1D

Algo_InverseLookUpTable

Algo_ED_AD

Algo_ED_AD_EG

Algo_ED_BitPrecision

Algo_ED_4bitsD

Algo_ED_FixedLength

Algo_ContextModeling_nType

Algo_ContextModeling_SVA_INDEXES

Algo_ContextModeling_SVA_VERTEX_ATTRIBUTE

Mgnt_Replicate_1_2

Mgnt_Replicate_1_4

Mgnt_Replicate_1_8

Mgnt_MUX_2_1

Mgnt_MUX_4_1

Mgnt_MUX_8_1

Mgnt_DEMUX_1_2

Mgnt_DEMUX_1_4

Mgnt_DEMUX_1_8

Mgnt_ExtractSegment

Algo_contextModeling

Mgnt_ExtractBytes (Mgnt_ExtractValue)

Algo_simpleMath_2op

Algo_Provider1D

Algo_Connectivity_InversePrediction_SVA

Algo_ExtractMask_SC3DMC

Algo_ExtractFaceDirection_SVA

Mgnt_ExtractBits

Algo_LookUpTable2D

The summary of the RGC related activities is the following:


1. We investigated the need to include multiple models in the FU ED_BitPrecision.

Since the data is processed in a sequential manner, the emulation of multiple models can be achieved by setting the STATIC_PREFIX_SIZE parameter to 0 and generating the proper sequence of "prefix_size" input tokens, one for each model.

The already defined FU in the spec is generic enough to cover all the cases. There is no need to update the document.
2. We finalized the design of Algo_ContextModeling_SVA_Indexes to be compliant with ED_AD for the SVA decoder case.

It will be included in the document as an update to the already existing one.


3. We centralized all the GTL reference software and uploaded it to MPEG SVN
4. We updated the reference software status table
5. We included the TFAN and SVA xdf code as an informative annex to the document (electronic insert)
6. We created a preliminary design of the SVA schematic to be included as an informative annex.
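The multiple-model emulation described in item 1 can be sketched as follows; the token handling is deliberately simplified (Python ints instead of bitstream tokens), so this is an illustration of the argument, not the normative FU behavior.

```python
# Sketch of the emulation argument: with STATIC_PREFIX_SIZE = 0 the FU
# takes its bit precision from a "prefix_size" input token for each
# value, so several fixed-precision "models" can be emulated by
# interleaving the right token sequence.

STATIC_PREFIX_SIZE = 0  # 0 => read the precision from the input token stream

def decode_values(prefix_tokens, payload_tokens):
    """Pair each value token with the bit precision announced for it."""
    out = []
    for prefix_size, raw in zip(prefix_tokens, payload_tokens):
        mask = (1 << prefix_size) - 1      # keep only `prefix_size` bits
        out.append(raw & mask)
    return out

# Two "models" (a 4-bit one, then an 8-bit one) emulated by one generic FU:
values = decode_values([4, 8], [0b10110, 0b100000001])
```

Because every value carries its own announced precision, a single generic FU covers all fixed-precision cases, which is why no update to the specification was needed.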
    1. Session: Joint with Requirements on 3D printer and ARAF v2 use cases and requirements


The digital representation of a 3D asset can include metadata that indicates the physical characteristics to be used when a 3D printer creates the physical object. The designer of the document can therefore specify these characteristics. Additionally, an intelligent component may interpret this data, together with user preferences and printer capabilities, and indicate appropriate selections for the physical printing materials.

Resolution: Consider the 3D Printer as a 3D Actuator.


    1. Session: 3DG Plenary

Review of the liaison letters, review of all the output documents.


  1. General issues

    1. General discussion

      1. Reference Software


It is recalled that the source code of both the decoder AND the encoder should be provided as part of the Reference Software for all technologies to be adopted in MPEG standards. Moreover, failure to provide the complete software for a published technology shall lead to the removal of the corresponding technical specification from the standard.

Currently all the AFX tools published in the third edition are supported by both encoder and decoder implementations.


      1. Web site


The new web site is available at http://wg11.sc29.org/3dgraphics/. It is a blog-based web site and all members are allowed to post.
  1. General 3DG related activities

    1. Promotions

      1. Web Site


        Title

        Status of http://wg11.sc29.org/3dgraphics/

        Authors




        Summary

        The web site is currently empty; contributions are needed.

        Resolution

        Action Point: Transfer the old web-site data to the new one.
    2. Press Release: GTL FDIS

At the 106th MPEG meeting, MPEG's Graphics Tool Library (GTL), ISO/IEC 23002-4:2012 Amd.1, reached FDIS status and will soon be published as an International Standard. The MPEG GTL is part of the MPEG Reconfigurable Media Coding (RMC) framework.


The GTL consists of a collection of Functional Units (FUs) that implement generic or decoder-specific functionalities. The compressed bitstream can be accompanied by the decoder network description, so a run-time decoder is generated before decoding starts. The network description can be constructed under different types of constraints, such as energy efficiency or adaptation to changing characteristics (network bandwidth, scene type, applications, …). GTL also enables fast prototyping of decoders tailored to content characteristics and usage scenarios.
One of the GTL objectives is to support different hardware and software platforms by generating the decoder implementation locally, thereby taking advantage of each device's characteristics.
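The run-time decoder generation described above can be pictured with a minimal sketch; the FU names and the description format here are invented for illustration (real GTL networks are XDF descriptions of the normative FUs).

```python
# Minimal sketch of the RMC idea: a decoder is assembled at run time
# from a network description that names Functional Units and their
# order. The FU bodies are trivial stand-ins, not real GTL FUs.

FU_LIBRARY = {
    "parse":      lambda tokens: [t for t in tokens],       # pass-through "parser"
    "dequantize": lambda tokens: [t * 2 for t in tokens],   # toy inverse quantization
    "predict":    lambda tokens: [t + 1 for t in tokens],   # toy inverse prediction
}

def build_decoder(network_description):
    """Compose the listed FUs into one callable decoding pipeline."""
    stages = [FU_LIBRARY[name] for name in network_description]
    def decoder(tokens):
        for stage in stages:
            tokens = stage(tokens)
        return tokens
    return decoder

# The network description accompanies the bitstream; decoding starts
# only after the run-time decoder has been generated from it.
decoder = build_decoder(["parse", "dequantize", "predict"])
decoded = decoder([1, 2, 3])
```

Swapping entries in the description yields a different decoder from the same FU library, which is the sense in which GTL supports locally generated, device-adapted implementations.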
  1. Resolutions from 3DG

  2. Resolutions related to MPEG-4

