MPEG-A


  1. Part 19 – Content Sharing Application Format

The final requirements are given in N15337 Requirements for MPEG Screen Content Sharing Application Format (SCS-AF).
  1. Explorations

    1. Compact Descriptors for Video Analysis (CDVA)

The envisioned activity will go beyond the object recognition required, for example, in broadcast applications. Especially for automotive and security applications, object classification is of much more relevance than object recognition (Figure 2). MPEG foresees IP cameras with a CDVA encoder that enables search, detection and classification at low transmission rates. Related technology within MPEG can be found in MPEG-7 and video signatures.

Figure 1: The upper part of the diagram shows the “Analyze-Then-Compress” (ATC) paradigm. That is, sets of video features are extracted from raw frames and encoded before transmission, resulting in low-bandwidth communication. This is the opposite of the traditional “Compress-Then-Analyze” (CTA) paradigm, in which video features are extracted close to where the complex visual analysis is performed.


Figure 2: Usage of CDVA for identification of classes and recognition.
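Purely as an illustration of the ATC flow shown in Figure 1, and not of any normative CDVA pipeline, the following sketch contrasts what leaves the camera in the two paradigms; the feature extractor, descriptor quantizer and resulting byte counts are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical placeholders -- CDVA does not define these names; they only
# mark where a real feature extractor / descriptor codec would plug in.

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a feature extractor running inside the IP camera."""
    # Here just a fixed-length random projection of the frame.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((frame.size, 64)).astype(np.float32)
    return frame.reshape(-1).astype(np.float32) @ proj

def encode_descriptor(features: np.ndarray) -> bytes:
    """Coarsely quantize the descriptor so it fits a low-rate uplink."""
    q = np.clip(np.round(features / features.std() * 16) + 128, 0, 255)
    return q.astype(np.uint8).tobytes()

# "Analyze-Then-Compress": only the compact descriptor leaves the camera.
def atc_camera_side(frame: np.ndarray) -> bytes:
    return encode_descriptor(extract_features(frame))

# "Compress-Then-Analyze": the whole frame is compressed and transmitted;
# features are extracted after decoding at the analysis node (omitted here).
def cta_camera_side(frame: np.ndarray) -> bytes:
    return frame.astype(np.uint8).tobytes()  # stand-in for a video encoder

frame = np.random.default_rng(1).integers(0, 256, size=(120, 160))
print(len(atc_camera_side(frame)), "bytes on the uplink (ATC)")   # 64
print(len(cta_camera_side(frame)), "bytes on the uplink (CTA)")   # 19200
```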


It is foreseen that identification of classes is much more challenging than object recognition. This work might start later than the work on detection and recognition. For detection and recognition, challenges include flexible objects and specular surfaces. Furthermore, low latency is required.
In N15043 Use Scenarios of CDVA for Surveillance Domain, applications and usages for CDVA in the area of surveillance are presented. N15040 Compact Descriptors for Video Analysis: Requirements for Search Applications gives the requirements. At this meeting, a CfP N15339 Call for Proposals for Compact Descriptors for Video Analysis - Search and Retrieval (CDVA) and the related evaluation framework N15338 Evaluation Framework for Compact Descriptors for Video Analysis - Search and Retrieval were issued. Evaluation of proposals will be done at the 114th meeting in San Diego in February 2016. Preregistration is due in October 2015. The database for evaluation includes between 800 and 1000 objects plus distractors. Objects include large objects such as building facades and landmarks, and small objects such as paintings, books and statues. Furthermore, scenes such as interior scenes, natural scenes and multi-camera shots need to be detected.
    2. Free Viewpoint TV

Free Viewpoint TV was the vision that drove the development of many different 3D video coding extensions. It is now time to take a step back and see where the future of 3D will go. Super-multiview displays and holographic displays are currently under development. They will provide horizontal as well as vertical parallax. Hence, we need further extensions of current multiview technology, which assumes a linear camera arrangement, in order to accommodate the more general camera arrangements of future displays. For interaction and navigation purposes, modern human-computer interfaces need to be developed. The purpose of the exploration was previously described in N14546 Purpose of FTV Exploration.

In order to decide on a potential start of a standardization activity, a Call N15348 Call for Evidence on Free-Viewpoint Television: Super-Multiview and Free Navigation was issued. MPEG provides software in N15349 FTV Software Framework that may be used for the purpose of the call. Evaluation is planned for the 114th meeting in San Diego. Compared to the previous draft Call for Evidence, the submission deadline and the evaluation are delayed by one meeting cycle due to the anticipated extra time required for generating the anchors.


    3. Future Video Coding

After a brainstorming session on the future of video coding at the last meeting, N15340 Requirements for a Future Video Coding Standard v1 was created. To further improve the requirements, a workshop is planned for the 113th meeting as described in N15341 Workshop on Future Video Coding Applications and Technologies.
One of the most important users of MPEG technology is the mobile industry. It is expected that 5G networks will be ready between 2020 and 2023. Therefore, it might make sense for MPEG to have a new generation of standards ready by 2020. This might require a new CfE on video coding technology in late 2015 or early 2016.
    4. Genome Compression

Today, DNA sequencing creates a lot of data: a single data set can easily be about 800 MByte, and typically several data sets are produced for one person. Given that today's sequencing machines are optimized for speed rather than accuracy, an interesting opportunity for compression technology might exist. TC276 Biotechnology is currently not considering compression.
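To illustrate why raw sequence data is an obvious compression target (purely as a toy baseline, not the approach under consideration by MPEG), the sketch below packs each of the four bases A, C, G and T into two bits, a 4x reduction over one-byte-per-base text; real sequencing output also carries read names and quality values, which such a scheme ignores.

```python
# Toy 2-bit packing of a DNA sequence -- an illustrative baseline only,
# not the MPEG genome compression approach (which is still being defined).

BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte; assumes the sequence contains only A/C/G/T."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | BASE_TO_BITS[base]
        byte <<= 2 * (4 - len(chunk))  # left-align a partial last byte
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    """Inverse of pack(); `length` restores the original number of bases."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])

seq = "ACGTACGTTTGACA"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(f"{len(seq)} bases -> {len(packed)} bytes")  # 14 bases -> 4 bytes
```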

MPEG selected a test set for evaluating compression performance. The data set includes human, plant and animal genomes.

N15345 Requirements on Genome Compression and Storage v.1 lists the requirements. N1534 Investigation on Genome Compression and Storage gives an overview of the state of the art. To clarify the situation, a Call for Evidence with an evaluation at the 114th meeting is being prepared as N15539 Draft Call for Evidence for Genome Compression and Storage. Progress over the state of the art and additional functionalities like fast random access and lossy compression are of interest. In order to foster interest in this topic, a seminar is planned as described in N15347 Seminar on Prospect of Genome Compression and Storage Standardization.
N15527 Genome Information Compression 101 includes the slides of a tutorial from a previous MPEG meeting.

    5. High Dynamic Range and Wide Colour Gamut Content Distribution

Several film studios currently master movies for digital cinema and DVD separately, since the colour space of the cinema is much larger than the colour space of a regular TV set. The industry would like to master just one version of the content using the XYZ colour space as one example of a Wide Colour Gamut (WCG) colour space. Furthermore, future TVs will use High Dynamic Range (HDR) displays. In order to adapt to the different displays and projectors, transformation hints are desirable which instruct the terminal how to scale the colours of the content to the capabilities of the display. Furthermore, the signal needs to be deployed with an amplitude resolution accommodating HDR. Deploying video in such a way would also allow consumers to actually benefit from the WCG and HDR capabilities of a new TV screen.
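As a deliberately simplified sketch of what such a transformation hint could drive (the function name, the single peak-luminance hint and the plain clipping strategy are assumptions for illustration, not anything specified by MPEG), the fragment below rescales linear-light luminance to a display's peak and re-quantizes it at 10 bit.

```python
import numpy as np

# Hypothetical "transformation hint": here reduced to the target display's
# peak luminance. A real system would carry richer metadata; this sketch
# only clips linear-light values and re-quantizes them for the display.

def adapt_to_display(linear_nits: np.ndarray,
                     display_peak: float = 1000.0,
                     bit_depth: int = 10) -> np.ndarray:
    """Map linear-light luminance (cd/m^2) to integer code values."""
    # Clip highlights the display cannot reproduce (a crude stand-in for
    # real tone mapping guided by the transformation hint).
    shown = np.minimum(linear_nits, display_peak)
    # Normalize relative to the display peak and quantize to 10 bit.
    max_code = (1 << bit_depth) - 1
    return np.round(shown / display_peak * max_code).astype(np.uint16)

pixels = np.array([0.05, 100.0, 1500.0, 4000.0])  # scene luminance in nits
print(adapt_to_display(pixels))                   # [   0  102 1023 1023]
```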
At this point, tools supporting bit-depth scalability as well as WCG scalability are available. An appropriate profile definition is under way.

For B2B applications like contribution, a bitrate of 25 Mbit/s is sufficient for coding HDR/WCG content using HEVC. For B2C applications like Blu-ray or broadcast, there might be a need for further improvements in coding efficiency. As anchors for evaluations, HEVC Main at 10 bit or 12 bit might be used.

The group plans to create interchangeable test material. There might be a chance to issue a Call for Evidence at the 111th meeting. However, at this point only limited improvements at 10 Mbit/s were demonstrated.
N1502 Draft Requirements and Explorations for HDR and WCG Content Distribution summarizes the requirements and use cases for HDR and WCG applications.
MPEG thanks Apple, Arris, BBC, Dolby, Ericsson, FastVDO, InterDigital, Philips, Qualcomm, Technicolor, University of Warwick/goHDR, NGCodec and MovieLabs for responding to the HDR and WCG CfE. Improvements over the anchor sequences were noticed in several proposals, as described in N15350 Test Results of Call for Evidence (CfE) for HDR and WCG Video Coding. No core components of the HEVC coder were changed. Given the concerns expressed by some participants, MPEG plans to check at the 114th meeting whether the activity will meet market needs. In this context, the liaison N15542 Liaison on HDR/WCG was sent to several organizations.
Work continues in the video subgroup.

    6. Media-centric Internet of Things

The Requirements subgroup recognizes that MPEG-V provides technology that is applicable in the area of Internet of Things. N15351 Exploration on Media-centric Internet of Things (draft) provides definitions, use cases and requirements. N15351 Exploration on Wearable MPEG may contribute to IoT.
    7. Media Orchestration

For applications like augmented broadcast, concert recording or classroom recording with several cameras, as described in N15342 Draft of Context and Objectives for Media Orchestration, playback requires spatial and temporal synchronization of the different displays. Requirements were extracted and summarized in N15343 Requirements for Media Orchestration v.1. Concerns of the USNB are addressed in N15344 Response to USNB Technical Comment on Media Orchestration.
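A minimal sketch of the temporal side of such orchestration, assuming every capture device stamps its frames against a shared clock; the device names and the nearest-timestamp selection policy are illustrative assumptions, not anything defined in the Media Orchestration documents.

```python
from bisect import bisect_left

# Each capture device is modelled as a sorted list of (timestamp, frame_id).
# Orchestrated playback picks, for every playout instant, the frame from each
# device whose capture timestamp is closest to that instant.

def nearest_frame(frames, t):
    """Return the (timestamp, frame_id) pair closest to playout time t."""
    times = [ts for ts, _ in frames]
    i = bisect_left(times, t)
    candidates = frames[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f[0] - t))

def orchestrate(devices, playout_times):
    """For each playout instant, select one frame per device."""
    return {t: {name: nearest_frame(frames, t)[1]
                for name, frames in devices.items()}
            for t in playout_times}

devices = {
    "stage_cam":    [(0.00, "s0"), (0.04, "s1"), (0.08, "s2")],
    "audience_cam": [(0.01, "a0"), (0.05, "a1"), (0.09, "a2")],
}
print(orchestrate(devices, playout_times=[0.00, 0.04, 0.08]))
# {0.0: {'stage_cam': 's0', 'audience_cam': 'a0'}, ...}
```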


