Organisation internationale de normalisation


Requirements documents approved at this meeting



Source: Jörn Ostermann (Leibniz Universität Hannover)


  1. Requirements documents approved at this meeting





No.	Title
15021	AHG on Support of HDR and WCG
15022	AHG on FTV (Free-viewpoint Television)
15023	AHG on Compact Descriptors for Video Analysis
15024	AHG on Media-centric Internet of Things (MIoT)
15025	AHG on Requirements on Genome Compression and Storage
15026	AHG on Adaptive Screen Content Sharing Application Format (ASCS-AF)
15027	AHG on wearable MPEG
15028	Draft Call for Evidence (CfE) for HDR and WCG Video Coding
15029	Draft Requirements and Explorations for HDR and WCG Content Distribution
15030	Exploration on Media-centric Internet of Things (draft)
15031	Liaison letter template on HDR and WCG
15032	Liaison letter to ARIB on HDR and WCG
15033	Liaison letter to ATSC on HDR and WCG
15034	Liaison letter to BDA on HDR and WCG
15035	Liaison letter to DECE on HDR and WCG
15036	Liaison letter to SMPTE on HDR and WCG
15037	Liaison letter to EBU on HDR and WCG
15038	Liaison letter to DVB on HDR and WCG
15039	Draft Requirements for MPEG Adaptive Screen Content Sharing Application Format
15040	Compact Descriptors for Video Analysis: Requirements for Search Applications
15041	Compact Descriptors for Video Analysis: Draft Evaluation Scenarios
15042	Compact Descriptor for Video Analysis (CDVA)
15043	Use Scenarios of CDVA for Surveillance Domain
15044	Liaison statement template on Compact Descriptors for Video Analysis (CDVA)
15045	Liaison Letter on Genome Compression and Storage
15046	Requirements on Genome Compression and Storage
15047	White Paper on Genome Compression and Storage
15048	Experimental Framework for FTV
15049	Draft Requirements for Media Linking Application Format (MLAF)
15050	Presentations of the Brainstorming Session of the Future of Video Coding Standardization


  2. Explorations

    2.1. Compact Descriptors for Video Analysis (CDVA)

The envisioned activity will go beyond the object recognition required, for example, in broadcast applications. Especially for automotive and security applications, object classification is of much more relevance than object recognition (Figure 2). MPEG foresees IP cameras with a CDVA encoder, which enables search, detection and classification at low transmission rates. Related technology within MPEG can be found in MPEG-7 and video signatures.

Figure 1: The upper part of the diagram shows the “Analyze-Then-Compress” (ATC) paradigm: sets of video features are extracted from raw frames and encoded before transmission, resulting in low-bandwidth communication. This is the opposite of the traditional “Compress-Then-Analyze” (CTA) paradigm, in which the compressed video itself is transmitted and features are extracted close to the complex visual analysis.
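The bandwidth argument behind ATC can be made concrete with a toy sketch. The Python snippet below is an illustration only, not the CDVA descriptor (which is yet to be defined): it extracts a hypothetical 16-bin grey-level histogram from a synthetic frame and compares its size with the raw pixel data a CTA pipeline would have to ship.

```python
# Illustrative sketch (NOT MPEG CDVA): contrast the bandwidth of the
# "Compress-Then-Analyze" path (send pixels) with the
# "Analyze-Then-Compress" path (send a compact feature descriptor).

def histogram_descriptor(frame, bins=16):
    """Compute a tiny per-frame descriptor: a grey-level histogram."""
    hist = [0] * bins
    for pixel in frame:
        hist[pixel * bins // 256] += 1
    return hist

# A synthetic 64x48 8-bit "frame" as a flat list of pixel values.
frame = [(x * 7) % 256 for x in range(64 * 48)]

raw_bytes = len(frame)              # 1 byte per pixel in the CTA path
descriptor = histogram_descriptor(frame)
desc_bytes = len(descriptor) * 2    # assume 16-bit bin counts in the ATC path

print(raw_bytes, "->", desc_bytes)  # descriptor is far smaller than the frame
```

Real descriptors (e.g. aggregated local features) are richer than a histogram, but the size relation illustrated here is the core motivation for the ATC paradigm.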


Figure 2: Usage of CDVA for identification of classes and recognition.


It is foreseen that identification of classes is much more challenging than object recognition. This work might start later than the work on detection and recognition. For detection and recognition, challenges include flexible objects and specular surfaces. Furthermore, low latency is required.
In N15043 Use Scenarios of CDVA for Surveillance Domain, applications and usages for CDVA in the area of surveillance are presented. The concept of an IP-camera with a CDVA encoder is introduced. N15040 Compact Descriptors for Video Analysis: Requirements for Search Applications provides an update on requirements. In order to prepare for a CfP, N15041 Compact Descriptors for Video Analysis: Draft Evaluation Scenarios describes initial evaluation scenarios. A CfP is expected at the 111th meeting. N15042 Compact Descriptor for Video Analysis (CDVA) provides an overview for Search and Retrieval as well as Detection applications. N15023 AHG on Compact Descriptors for Video Analysis continues the work.

Several organizations are informed about MPEG’s plans by means of N15044 Liaison statement template on Compact Descriptors for Video Analysis (CDVA).



    2.2. Free Viewpoint TV

Free Viewpoint TV was the vision that drove the development of many different 3D video coding extensions. It is now time to take a step back and see where the future of 3D will go. Super-multiview displays and holographic displays are currently under development. They will provide horizontal as well as vertical parallax. Hence, further extensions of the current multiview technology, which assumes a linear camera arrangement, are needed in order to accommodate the more general camera arrangements of future displays. For interaction and navigation purposes, modern human-computer interfaces need to be developed. The purpose of the exploration was previously described in N14546 Purpose of FTV Exploration.
N15048 Experimental Framework for FTV shows the further work of the FTV group. It was agreed to use 10-second high-resolution FTV video without depth data as test material. Dropping depth data might lead to an FTV encoder/decoder solution that can work with arbitrary video input. For navigation applications, five arbitrary views will be used as test data. Evaluation will be done on stereo displays and, if available, on displays with more views. MV-HEVC and 3D-HEVC are used as anchors.
The ad hoc group N15022 AHG on FTV (Free-viewpoint Television) will continue to work on this long-term exploration.
    2.3. Future Video Coding

A brainstorming session on the future of video coding was held. Representatives from Ericsson, Google, Huawei, Netflix, Orange, and Samsung shared their views on the requirements and challenges for video coding. Presentations by Ericsson, Google, Huawei, Netflix, and Orange are available in N15050 Presentations of the Brainstorming Session of the Future of Video Coding Standardization. Industry participants mentioned a further need for increased compression. A 50% increase in coding efficiency is requested because video accounts for just 50% of the overall rate of a transport stream. For some applications, a further reduction of the video bitrate by 25% might be useful.

HDR and WCG were considered the next milestone towards a better TV experience.

Netflix is expecting to release a high-quality database with movie clips. Huawei pointed out that surveillance applications might be the most important user of compression technology and should receive special attention. User-generated content is another application that relies on video coding technology and must be considered, especially for mobile applications.

The presenters do not consider royalty costs significant. However, they point out that the licensing process is costly and slow.


One of the most important users of MPEG technology is the mobile industry. It is expected that the 5G network will be ready between 2020 and 2023. Therefore, it might make sense for MPEG to have a new generation of standards ready by 2020. This might require a new CfE on video coding technology in late 2015 or early 2016.

    2.4. Genome Compression

Today, DNA sequencing creates large amounts of data: one data set is easily about 800 Mbyte, and typically several data sets are produced for one person. Given that today's sequencing machines are optimized for speed rather than accuracy, an interesting opportunity for compression technology might exist. TC276 Biotechnology is currently not considering compression.
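As a rough illustration of why the opportunity exists (a sketch only, not any MPEG technology): a DNA read over the four-letter alphabet {A, C, G, T} can be packed into 2 bits per base instead of the 8 bits of an ASCII character, a 4x reduction before any entropy coding of quality scores or metadata is even attempted.

```python
# Toy 2-bit packing of DNA bases -- illustrative only, not a genome
# compression standard. The sequence length must be stored separately.

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Pack a base string into 2 bits per base, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for ch in group:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(group))  # left-align a final partial group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed data."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 3])
    return "".join(bases[:n])

seq = "GATTACAGATTACA"
packed = pack(seq)
print(len(seq), "ASCII bytes ->", len(packed), "packed bytes")
```

Real sequencing data also carries per-base quality scores and read identifiers, which dominate file size in practice; those are where more sophisticated compression would pay off.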

N15046 Requirements on Genome Compression and Storage lists initial requirements that need to be sanctioned by industry and users. N15047 White Paper on Genome Compression and Storage is made publicly available in order to promote this activity. N15045 Liaison Letter on Genome Compression and Storage is sent to several organisations related to genome processing. The ad hoc group N15025 AHG on Requirements on Genome Compression and Storage will continue this exploration.


    2.5. High Dynamic Range and Wide Colour Gamut Content Distribution

Several film studios currently master movies for digital cinema and DVD separately, since the colour space of the cinema is much larger than the colour space of a regular TV set. The industry would like to master just one version of the content, using the XYZ colour space as one example of a Wide Colour Gamut (WCG) colour space. Furthermore, future TVs will use High Dynamic Range (HDR) displays. In order to adapt to the different displays and projectors, transformation hints are desirable which instruct the terminal how to scale the colours of the content to the capabilities of the display. Furthermore, the signal needs to be deployed with an amplitude resolution accommodating HDR. Deploying video in such a way would also allow consumers to actually benefit from the WCG and HDR capabilities of a new TV screen.
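The idea of a transformation hint can be sketched as follows. The knee-point mapping below is purely hypothetical (neither the hint syntax nor the 4000-nit mastering peak comes from an MPEG document); it only illustrates how a terminal could scale mastered luminance down to a less capable display.

```python
# Hypothetical "transformation hint": a single knee point telling the
# terminal how to map mastered luminance (in nits) onto the display's
# peak luminance. Illustrative only -- not an MPEG-defined mechanism.

MASTERED_PEAK = 4000.0  # assumed peak luminance of the mastered content

def apply_hint(lum_nits: float, display_peak: float = 600.0,
               knee: float = 400.0) -> float:
    """Map mastered luminance onto a less capable display.

    Below the knee the signal passes through unchanged; above it, the
    remaining range is compressed linearly into [knee, display_peak].
    """
    if lum_nits <= knee:
        return lum_nits
    t = (lum_nits - knee) / (MASTERED_PEAK - knee)
    return knee + t * (display_peak - knee)
```

With these assumed numbers, mid-tones up to 400 nits are preserved exactly, while specular highlights mastered at up to 4000 nits are squeezed into the display's 200 nits of remaining headroom.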
At this point, tools for supporting bit depth scalability as well as WCG scalability are available. An appropriate profile definition is on its way.

For B2B applications like contribution, a bitrate of 25 Mbit/s is sufficient for coding HDR and WCG content using HEVC. For B2C applications like Blu-ray or broadcast, there might be a need for further improvements in coding efficiency. As anchors for evaluations, HEVC Main profile at 10 bit or 12 bit might be used.

The group plans to create interchangeable test material. There might be a chance to issue a Call for Evidence at the 111th meeting. However, at this point only limited improvements at 10 Mbit/s were demonstrated.
N15029 Draft Requirements and Explorations for HDR and WCG Content Distribution summarizes the requirements and use cases for HDR and WCG applications.
Until the 111th meeting, the ad hoc group N15021 AHG on Support of HDR and WCG will gather interchangeable test material, create anchors and evaluate HDR sequences during an ad hoc group meeting in December. Based on N15028 Draft Call for Evidence (CfE) for HDR and WCG Video Coding, it is planned to issue a CfE at the 112th meeting and evaluate the results until the 113th meeting. Depending on the results of the CfE and external requirements, a timeline for further standardization work will be defined. It might happen that a CfP will be issued in parallel to ongoing standardization work.
Liaison letters N15031, N15032, N15033, N15034, N15035, N15036, N15037, and N15038 were sent to numerous organizations.


The ad hoc group N14540 AHG on Support of HDR XYZ Color Space and HDR will gather further test material and evidence on the coding performance of HEVC. It is also charged with further developing N14547 Requirements and Use Cases for HDR/WCG Content Distribution as well as N14548 Test sequences and anchor generation for HDR and Wide Gamut Content Distribution. Experiments are defined in N14549 Exploration Experiments for HDR and Wide Gamut Content Distribution.


    2.6. Media Linking Application Format (MLAF)

Companion-screen applications let users enjoying broadcast programmes access related information on other, typically internet-connected, devices. N15049 Draft Requirements for Media Linking Application Format (MLAF) provides use cases and requirements for a standard technology in this domain. Figure 3 shows the connections that a bridget should create from a professional production to social media or other websites.

Figure 3: Bridget creation workflow.


    2.7. Screen Content Sharing Application Format

N15039 Draft Requirements for MPEG Adaptive Screen Content Sharing Application Format combines technologies for transport, coding and composition in order to enable the description of the contents of computer screens based on components such as background, windows and video. Screen content coding, the digital item identifier, MMT and MPEG Composition Information are relevant technologies. The ad hoc group N15026 AHG on Adaptive Screen Content Sharing Application Format (ASCS-AF) continues to refine the requirements and specification. A timeline still needs to be specified.
    2.8. Media-centric Internet of Things

The Requirements subgroup recognizes that MPEG-V provides technology that is applicable in the area of Internet of Things. N15030 Exploration on Media-centric Internet of Things (draft) provides definitions, use cases and requirements. The Requirements subgroup would like to ask MPEG members to encourage relevant external organizations to share their views on this subject as it relates to digital media. Work continues in N15024 AHG on Media-centric Internet of Things (MIoT).


Figure 4: The adapted sensorial effects (actuator commands) are generated by combining sensorial effects (SEs) with sensed information (SI), sensor capabilities (SC), and actuator capabilities.

  3. Systems report

Source: Young-Kwon Lim, Chair

