6.7.1.1.1 JVT-Y020 (Ed. Draft) [V. Bottreau (Thomson), A. Eleftheriadis (LM)] SVC conformance testing draft
This contribution contains proposed modifications for the SVC conformance testing draft text (relative to JVT-X205).
Presentation uploaded (during meeting).
See also AHG report JVT-Y003.
January (before the next meeting) is the target for delivery of bitstreams; however, there is a dependency on the reference software.
82 bitstreams (from each of which more than one subset bitstream can be derived; see the extraction sketch below).
The editors are generally satisfied with the bitstream coverage.
Need to check correspondence of text description with editorially-final SVC text.
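For context on how multiple subset bitstreams can be derived from each conformance bitstream, the following is a rough conceptual sketch (not the conformance or reference software) of SVC sub-bitstream extraction: NAL units whose scalability identifiers exceed the requested operation point are dropped. The NalUnit record and the extraction rule are simplified illustrations; the extraction process specified in the standard is more involved (e.g., priority_id-based extraction and handling of inter-layer prediction dependencies).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NalUnit:
    # Scalability identifiers carried in the SVC NAL unit header extension
    dependency_id: int   # spatial / coarse-grain scalability layer
    quality_id: int      # quality refinement layer
    temporal_id: int     # temporal layer
    payload: bytes

def extract_sub_bitstream(nal_units: List[NalUnit],
                          target_dependency_id: int,
                          target_quality_id: int,
                          target_temporal_id: int) -> List[NalUnit]:
    """Keep only NAL units at or below the requested operation point."""
    return [nal for nal in nal_units
            if nal.dependency_id <= target_dependency_id
            and nal.quality_id <= target_quality_id
            and nal.temporal_id <= target_temporal_id]

# A multi-layer bitstream yields several distinct subset bitstreams
# depending on which (dependency, quality, temporal) operation point is requested.
```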
6.7.1.1.2 JVT-Y021 (Info) [V. Bottreau (Thomson)] SVC verif test bitstreams Scalable High profile
This information contribution provided SVC verification test bitstreams for the Scalable High profile. The proposed bitstreams correspond to different bit rate settings relative to the SH2 test as described in MPEG’s SVC Verification Test Plan 1 (MPEG output document N9189). In addition, the contribution discussed alternative subjective evaluation methods that are used within the “Scalim@ges” project.
Scalim@ges is reportedly a project from the “Media and Networks Cluster” in France, launched in July 2006 for a duration of two years. The project is reportedly composed of four large companies (Thomson, France Telecom, Alcatel, TDF), some SMEs (small and medium-sized enterprises), and research institutes. The goal is to demonstrate the viability of SVC in business cases from the broadcast, broadband, and mobile domains. The introduction of the 1080p format is also reportedly under consideration.
The “Media and Networks Cluster” brings together actors from higher education and academic research institutions, SMIs/SMEs (small and medium-sized industries/enterprises), and large companies, mainly from the Bretagne and Pays de la Loire regions, that are leaders in the media and networks fields.
As described in SVC Verification Test Plan 1 [N9189], the SH2 test corresponds to a low bit rate setting. This contribution proposed to use two different bit rate settings for the SH2 test, namely medium and high.
There are different subjective methodologies; for the current purpose the contributors proposed to use the SAMVIQ and preference methodologies.
SAMVIQ (described in an EBU technical report of 2003) is a multi-stimulus continuous quality scale method using explicit and hidden references. It provides an absolute measure of subjective video codec quality which can be compared directly with the reference, i.e. the original video signal. SAMVIQ reportedly permits a high degree of resolution in the grades given to the systems, which fits accurate quality discrimination requirements. For each sequence, the SAMVIQ methodology produces a MOS (Mean Observer Score), a confidence interval, and a standard deviation. The MOS is a value between 0 and 100 that represents the video quality, and the confidence interval represents the range of values [MOS-CI, MOS+CI] that has a 95% probability of containing the MOS for a large number of subjects.
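As an illustration only (not from the contribution), the sketch below shows how a MOS, standard deviation, and 95% confidence interval could be computed from individual observer scores on the 0..100 SAMVIQ scale; the 1.96 factor assumes a normal approximation for a reasonably large number of subjects, and the scores are hypothetical.

```python
import math

def samviq_statistics(scores):
    """MOS, sample standard deviation and 95% confidence interval
    for one sequence/condition; scores are on the 0..100 scale."""
    n = len(scores)
    mos = sum(scores) / n
    std = math.sqrt(sum((s - mos) ** 2 for s in scores) / (n - 1))
    ci = 1.96 * std / math.sqrt(n)   # normal approximation, 95% level
    return mos, std, (mos - ci, mos + ci)

# Example: 20 hypothetical observer scores for one coded sequence
scores = [62, 70, 58, 65, 74, 60, 68, 72, 55, 66,
          63, 71, 59, 67, 69, 61, 73, 64, 57, 70]
mos, std, interval = samviq_statistics(scores)
print(f"MOS = {mos:.1f}, std = {std:.1f}, 95% CI = [{interval[0]:.1f}, {interval[1]:.1f}]")
```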
Preference tests produce, for each sequence, a percentage. This percentage represents the proportion of observers who prefer a first video clip to a second; subtracting it from 100 gives the proportion of observers who prefer the second sequence to the first (a small worked sketch follows the list below). Preference tests reportedly do not provide a straightforward quality score, but were asserted to be useful in the following cases:
- to discriminate between conditions or confirm tendencies in cases of similar quality that cannot be distinguished, with respect to the confidence interval, using the usual quality assessment methodology;
- to compare video quality between different picture resolutions, since it is reportedly not possible to obtain a reliable quality score across different image sizes.
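The worked sketch referred to above, with hypothetical observer choices, simply computes the preference percentage and its complement:

```python
def preference_percentages(choices):
    """choices: per-observer picks, 'A' or 'B', for one pair of clips.
    Returns (percent preferring A, percent preferring B)."""
    n = len(choices)
    pct_a = 100.0 * sum(1 for c in choices if c == 'A') / n
    return pct_a, 100.0 - pct_a   # complement gives the preference for the second clip

# Example: 25 hypothetical observers comparing clip A against clip B
choices = ['A'] * 16 + ['B'] * 9
print(preference_percentages(choices))   # (64.0, 36.0)
```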
(This was an information document with respect to JVT consideration.)
7 Multi-view coding (MVC)
Question: Will MVC profiles be designed to support interlaced video? (Needs to be answered in order to determine the set of features that need to be designed.)
Answer: No.
JVT decision: Agreed (at least this is the direction in which we are headed presently).
It was suggested to make a resolution to visibly note this, and this suggestion was agreed upon.
7.1.1.1.1 JVT-Y041 (Prop 2.2/3.1) [P. Lai, A. Ortega, P. Pandit, P. Yin, T. Dong, C. Gomila (Thomson & USC)] MVC CE2: Adapt ref filtering
This contribution was submitted in response to CE2 (JVT-X302) as created at the last meeting to evaluate the coding gain of adaptive reference filtering (ARF) for MVC. JVT-W065 proposed an adaptive reference filtering (ARF) approach for inter-view prediction in multi-view video coding. It was asserted to be aimed at coding multi-view video that exhibits mismatches in frames from different views. Following this, document JVT-X060 discussed the complexity of ARF and proposed to use a simpler filter (3x3) with a smaller region of support than the 5x5 filter in JVT-W065. This contribution presented more detailed results for ARF according to the CE2 experiment requirements. ARF was implemented for anchor pictures only, for both P and B frames. Complete results were presented for anchor pictures (only).
Filters were designed as Wiener filters, customized for predicting each (original) source video sequence when operating the complete encoding process. There were several filters for each picture, selected based on a depth estimate classifier.
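As a rough illustration of the general technique (not the proposal's actual implementation), the sketch below estimates a small 3x3 filter by least squares, i.e. a Wiener-style fit, so that the filtered reference best matches the original picture; in ARF several such filters would be designed per picture and selected by the depth-based classifier. The function names and the use of numpy are assumptions for illustration.

```python
import numpy as np

def design_wiener_filter(reference, original, taps=3):
    """Least-squares (Wiener) estimate of a taps x taps filter so that
    filtering 'reference' best approximates 'original' (same-size 2-D arrays)."""
    r = taps // 2
    rows, targets = [], []
    h, w = original.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = reference[y - r:y + r + 1, x - r:x + r + 1].ravel()
            rows.append(patch)
            targets.append(original[y, x])
    A = np.asarray(rows, dtype=np.float64)
    b = np.asarray(targets, dtype=np.float64)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(taps, taps)

def apply_filter(reference, kernel):
    """Filter the reference picture with the estimated kernel (borders left unfiltered)."""
    r = kernel.shape[0] // 2
    out = reference.astype(np.float64).copy()
    h, w = reference.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = reference[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = float(np.sum(patch * kernel))
    return out

# In ARF, several such filters would be designed per picture, one per depth class,
# and the motion/disparity search would then use the filtered reference picture.
```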
The planned experiments were not yet completed, and results were provided for the anchor pictures only. For this subset of the pictures, the average bit rate saving was asserted to be 6%, or equivalently 0.3 dB (for the anchor pictures only). The gains were asserted to range from 0.07 dB to 0.59 dB, with larger asserted gains for sequences with stronger focus mismatches.
The proponent did not finish the full intended set of experiments, and proposed continuing the CE.
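The contribution reports the averages directly and does not state the exact metric; a common way in JVT CE reporting to pair a percentage bit rate saving with an equivalent dB figure is the Bjøntegaard delta metric. The following is a minimal sketch of a BD-PSNR computation (cubic fit of PSNR over log rate, difference of the integral means over the overlapping rate range); whether JVT-Y041 used exactly this procedure is an assumption, and the RD points below are made up.

```python
import numpy as np

def bd_psnr(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Bjontegaard delta-PSNR: average PSNR difference (dB) between two
    rate-distortion curves, each given as matching lists of bit rates and PSNRs."""
    log_r1, log_r2 = np.log10(rates_anchor), np.log10(rates_test)
    p1 = np.polyfit(log_r1, psnr_anchor, 3)   # cubic fit: PSNR as a function of log10(rate)
    p2 = np.polyfit(log_r2, psnr_test, 3)
    lo = max(log_r1.min(), log_r2.min())      # overlapping log-rate interval
    hi = min(log_r1.max(), log_r2.max())
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)
    return (int2 - int1) / (hi - lo)          # positive means the test curve is better

# Example with made-up RD points (kbit/s, dB):
anchor = ([400, 800, 1600, 3200], [34.0, 36.5, 38.8, 40.6])
test   = ([400, 800, 1600, 3200], [34.3, 36.8, 39.1, 40.9])
print(f"BD-PSNR = {bd_psnr(*anchor, *test):.2f} dB")
```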
Question: Will this work for interlace? Will MVC support interlace? (preliminarily No, per above)
Question: Is the exact set of filter tap values really critical?
Remark: Can just build filter into the MCP process.
Question: Is the motion search using the filtered reference picture for the motion estimation? Yes.
The proponent’s presentation was made available.
Software was not provided in the contribution.
Suggestion: Include filter simplification in CE investigation.
Conclusion: Continue CE.
7.1.1.1.2 JVT-Y045-V (Info) [H. Koo (LG)] MVC CE2: Verif of Thomson prop JVT-Y041
This document provided verification results for JVT-Y041 from Thomson. LG received source code, executables, coded bitstreams for six sequences, and information including PSNR and bit rate values generated by Thomson’s encoder (for anchor pictures only). LG ran the provided source code (without close inspection of its design), carried out the decoding process, and compared the decoded results with the information provided by the encoder.
A reference software problem unrelated to the contribution reportedly prevented verification for the Akko & Kayo sequences: since the view_id values are discontinuous and do not start from 0, the bitstream assembler does not work. For Flamenco2, the MVC coding structure does not contain B views, so the result reportedly should be the same as Thomson’s earlier P-view results and was not checked. LG reported that all of the provided sequences were decoded successfully, and the PSNR and bit rate measured during the decoding process were reported to be identical to the data provided by Thomson.
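As a rough illustration of the kind of cross-check described (not the scripts LG actually used), the sketch below computes per-frame PSNR between decoded and original frames and compares it against encoder-reported values within a small tolerance; the function names and the tolerance are assumptions.

```python
import numpy as np

def psnr(original, decoded, max_value=255.0):
    """PSNR in dB between two frames given as equal-size arrays."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff * diff)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

def verify_against_encoder_log(decoded_frames, original_frames, reported_psnrs,
                               tolerance_db=0.01):
    """Return True if the decoder-side PSNR matches the encoder-reported PSNR
    for every frame, within a small tolerance."""
    for dec, org, reported in zip(decoded_frames, original_frames, reported_psnrs):
        if abs(psnr(org, dec) - reported) > tolerance_db:
            return False
    return True

# Example usage (with hypothetical frame arrays and an encoder log):
#   ok = verify_against_encoder_log(decoded, originals, reported)
```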