Results of previous meeting: JEM, meeting report, etc.
Goals of the meeting: New version of JEM, evaluation of status progress in EEs and new proposals, provide summary to parent bodies, define new EEs.
1.12 Scheduling of discussions
Scheduling: Generally meeting time was scheduled during 0800–2000 hours, with coffee and lunch breaks as convenient. Ongoing scheduling refinements were announced on the group email reflector as needed. Some particular scheduling notes are shown below, although not necessarily 100% accurate or complete:
Thu. 26 May, 1st day
1400-1530 Opening, AHG reports (chaired by JRO and GJS)
1600-1900 EE1-EE3 (chaired by JRO)
Fri. 27 May, 2nd day
900-1100 BoG on QTBT (chaired by Kiho Choi)
1100-1300 EE4-6 (chaired by JRO)
1400-1600 BoG on JEM software & SCC tools (chaired by Xiang Li)
1600-1930 Review BoG results, EE7, 6.1 (chaired by JRO)
Sat. 28 May, 3rd day
900-1245 6.2+6.3 (chaired by JRO)
1415-1930 6.4-6.6 (chaired by JRO)
Sun. 29 May, 4th day
900-1300 3+7; revisits; remaining docs; EE establishment (chaired by JRO)
1400-1930 BoG on test material including 4 (chaired by Teruhiko Suzuki)
Mon. 30 May, 5th day
1600-1800 BoG on QTBT config (chaired by Kiho Choi)
Tue. 31 May, 6th day
1015-1215 BoG reports (chaired by JRO)
1315-1500 JVET-C0023, JVET-C0105, JVET-C0044; revisits; AHG planning (chaired by JRO)
1530-1630 EE description review (chaired by Jill Boyce)
Wed. 1 June, 7th day
1100-1230 Closing plenary (chaired by JRO): Approval of output documents, work plan for JEM text and software development, work plan for sequence testing, establishment of AHGs, closing of meeting.
1.13 Contribution topic overview
The approximate subject categories and the quantity of contributions per category for the meeting were summarized as follows:
AHG reports (5) (section 2)
Analysis and improvement of JEM (4) (section 3)
Test material (9) (section 4)
Exploration experiments (22) (section 5)
Non-EE technology proposals (40) (section 6)
Transforms and coefficient coding (6)
Motion compensation and vector coding (14)
Intra coding (10)
Perceptual metrics and evaluation criteria (2) (section 7)
Withdrawn (0) (section 8)
Joint meetings, plenary discussions, BoG reports, Summary of actions (section 9)
Project planning (section 10)
Output documents, AHGs (section 11)
2 AHG reports (5)
JVET-C0001 JVET AHG report: Tool evaluation (AHG1) [M. Karczewicz, E. Alshina] This document reports the work of the JVET ad hoc group on Tool evaluation (AHG1) between the 2nd JVET meeting at San Diego, USA (20–26 February 2016) and the 3rd Meeting at Geneva, Switzerland (26 May – 1 June 2016).
The Joint Exploration Test Model software (HM-16.6-JEM-2.0) was released on 19 March 2016. The software can be downloaded at:
Context model selection for transform coefficient levels
Multi-hypothesis probability estimation
Initialization for context models
At the 2nd JVET meeting, the common test conditions were modified and the test sequences were updated, so a direct performance comparison between JEM1.0 and JEM2.0 is difficult.
The table below shows the JEM1.0 and JEM2.0 performance compared to the HM when both the GOP size and the QP/lambda selection are the same for the test and the reference.
Table: JEM coding performance summary in the RA test configuration compared to HEVC (columns: JEM1.0 vs HM16.6, GOP-size = 8 for both; JEM2.0 vs HM16.9, GOP-size = 16 for both). [table data not reproduced]
It was commented that the relationship between QP and lambda has been under study and modification, and noted that further consideration of this issue is under way in the JCT-VC and should be coordinated with JVET. This was taken into account for the table above, but not for the table below.
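As background on the QP/lambda relationship under study, the HM-style encoders derive the Lagrangian multiplier from the QP with a mapping of roughly the following form. This is only an illustrative sketch: the scaling factor `alpha` depends on slice type and hierarchy level in a real encoder, and the value 0.57 below is a placeholder, not a value taken from the JEM software.

```python
import math

def qp_to_lambda(qp, alpha=0.57):
    # Sketch of the commonly used HM-style mapping: lambda doubles for
    # every increase of 3 in QP. alpha (here a placeholder) absorbs
    # slice-type and hierarchy-level dependent weighting.
    return alpha * 2.0 ** ((qp - 12) / 3.0)
```

Because lambda doubles every 3 QP steps, a modification of the QP/lambda selection shifts the rate-distortion operating points being compared, which is why test and reference must use the same selection for a fair comparison.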
The net effect of the enlarged GOP size, the modified QP/lambda selection, and the tool updates in JEM2.0 compared to JEM1.0 is shown in Table 2. The test data were provided by the JEM software coordinators.
Table 2: JEM2.0 (GOP-size = 16) vs JEM1.0 (GOP-size = 8) performance in the RA test (column: JEM2.0 vs JEM1.0; GOP sizes differ). [table data not reproduced]
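The "X vs Y" comparisons in these tables are conventionally reported as Bjøntegaard delta rate (BD-rate). For readers unfamiliar with the metric, the following is a minimal self-contained sketch of the usual computation (cubic fit of log-rate as a function of PSNR over four rate points, then the average gap between the two fitted curves over the overlapping PSNR range); the function names are illustrative and not taken from any JVET software.

```python
import math

def _polyfit3(xs, ys):
    # Exact cubic through 4 points: solve the 4x4 Vandermonde system by
    # Gaussian elimination with partial pivoting.
    n = 4
    A = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = A[r][n] - sum(A[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = s / A[r][r]
    return coef  # coef[j] is the coefficient of x**j

def _integral(coef, lo, hi):
    # Definite integral of the polynomial between lo and hi.
    return sum(c / (j + 1) * (hi ** (j + 1) - lo ** (j + 1))
               for j, c in enumerate(coef))

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate in percent (negative = bit-rate saving)."""
    p_ref = _polyfit3(psnr_ref, [math.log10(r) for r in rates_ref])
    p_test = _polyfit3(psnr_test, [math.log10(r) for r in rates_test])
    lo = max(min(psnr_ref), min(psnr_test))   # overlapping PSNR interval
    hi = min(max(psnr_ref), max(psnr_test))
    avg = (_integral(p_test, lo, hi) - _integral(p_ref, lo, hi)) / (hi - lo)
    return (10 ** avg - 1) * 100
```

For example, a test codec that needs 10% fewer bits than the reference at every quality level yields a BD-rate of about −10%.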
At the 2nd JVET meeting, the Exploration Experiments (EE) practice was established. For each new coding tool under consideration, a dedicated software branch was created, and the implementation of each tool was announced via the JVET reflector. There were 7 exploration experiments on new coding tools, and input contributions for this meeting were submitted for all of them.
EE2.1: Quadtree plus binary tree structure integration with JEM tools
H. Huang, K. Zhang, Y.-W. Huang, S. Lei (MediaTek)
EE2.6: Modification of Merge candidate derivation: ATMVP simplification and Merge pruning
S. Lee, W.-J. Chien, L. Zhang, J. Chen, M. Karczewicz (Qualcomm)
EE2.5: Improvements on adaptive loop filter
M. Karczewicz, L. Zhang, W.-J. Chien, X. Li (Qualcomm)
J. Samuelsson, P. Wennersten, R. Yu, U. Hakeem (Ericsson)
Direction-dependent scan order with JEM tools
S. Iwamura, A. Ichigaya (NHK)
Multiple line-based intra prediction
J. Li (Peking Univ.), B. Li, J. Xu (Microsoft), R. Xiong (Peking Univ.), G.-J. Sullivan (Microsoft)
The AHG recommended:
To review all the related contributions.
To continue Exploration Experiments practice.
In the next round of tests, to also provide results for HEVC and JEM at the same bit rate, so that it is possible to identify whether the compression improvement is visible in terms of subjective quality. This should be done at rate points that are required by applications but are not reachable with sufficient quality by HEVC.
It was also noted that the relatively high gain of class A2 may be misleading, since some of these sequences (rollercoaster, trafficflow) are relatively easy to encode.
JVET-C0002 JVET AHG report: JEM algorithm description editing (AHG2) [J. Chen, E. Alshina, J. Boyce]
This document reports the work of the JVET ad hoc group on JEM algorithm description editing (AHG2) between the 2nd JVET meeting at San Diego, USA (20–26 February 2016) and the 3rd Meeting at Geneva, Switzerland (26 May – 1 June 2016).
During the editing period, on top of JVET-A1001 Algorithm Description of Joint Exploration Test Model 1, the editorial team worked on the following three aspects to produce the final version of JVET-B1001 Algorithm Description of Joint Exploration Test Model 2.
Integrate the following normative adoptions of the 2nd JVET meeting
Add brief encoding logic description of the following JEM2 coding tools
Locally adaptive motion vector resolution (AMVR)
Overlapped block motion compensation (OBMC)
Local illumination compensation (LIC)
Mode dependent non-separable secondary transforms
Adaptive loop filter (ALF)
Overall text refinement and quality improvement
Currently the document contains the algorithm description as well as encoding logic description for all the new coding features in JEM2.0.
The AHG recommended to:
Continue to edit the Algorithm Description of Joint Exploration Test Model document to ensure that all agreed elements of JEM are described
Continue to improve the editorial quality of the Algorithm Description of Joint Exploration Test Model document.
JVET-C0003 JVET AHG report: JEM software development (AHG3) [X. Li, K. Suehring]
Software development continued based on the HM-16.6-JEM-1.0 version. A branch was created in the software repository to implement the JEM-2 tools based on the decisions recorded in the notes of the 2nd JVET meeting. All integrated tools were wrapped in macros to highlight the changes in the software related to each specific tool.
HM-16.6-JEM-2.0 was released on 22 March 2016.
Several minor fixes were added to the trunk after the release of HM-16.6-JEM-2.0. Those fixes will be included in the next release of JEM.
As decided at the last meeting, several branches were created for exploration experiments. These branches are maintained by the proponents of the exploration experiments.
The performance of HM-16.6-JEM-2.0 over HM-16.6-JEM-1.0 and HM-16.9 under test conditions defined in JVET-B1010 is summarized as follows.
As decided at the 2nd JVET meeting, SDT (signal-dependent transform) is disabled by default in HM-16.6-JEM-2.0. The performance of JEM-2 with SDT enabled is summarized as follows.
Further discussion of software issues is necessary (see section 3).
The integration of screen content tools was briefly discussed. At first sight, it appears simpler to integrate the SC tools into the JEM than to port the JEM on top of HM 17 (which is likely to be the version with SCC). The HM and JEM codebases are in any case expected to diverge somewhat, and not every encoder optimization trick used in the HM would give a benefit for the JEM tools.
Both software packages should be aligned in a way that they can be run with similar coding conditions.
Give the AHG a mandate to investigate the implementation of SCC tools in the JEM (e.g., studying whether palette mode crashes with a larger CTU, or in combination with other tools).
BoG (X. Li) to discuss this in more detail and identify potential difficulties that may occur.
This mainly relates to palette and CPR; ACT is only relevant for 4:4:4 and full-pel MV resolution may anyway be obsolete by some of the new tools.
Several experts expressed the opinion that the reduction of memory usage by JEM software would be important to investigate.
It was further mentioned that the presence of SDT in the main branch imposes some difficulties due to the long run time and high memory usage; it is currently necessary to test SDT in combination with all newly adopted tools.
JVET-C0004 JVET AHG report: Test material (AHG4) [T. Suzuki, J. Chen, A. Norkin, J. Boyce]
Reviewed Sat. morning
The document summarizes the activities on test sequence selection between the 2nd JVET meeting at San Diego, USA (20–26 February 2016) and the 3rd meeting at Geneva, Switzerland (26 May – 1 June 2016). The sequences chosen at the last meeting for use in the CTC are available on the Aachen University ftp site. JVET-B1002 "Call for test material for future video coding standardization" was issued after the San Diego meeting; in response to the call, several contributions proposing new test sequences were submitted.
Contributions to this meeting are as follows.
JVET-C0021 "GoPro test sequences for Virtual Reality video coding", A. Abbas (GoPro).
JVET-C0028 "Suggested 1080P Test Sequences Downsampled from 4K Sequences", H. Zhang, X. Ma, H. Yang (Huawei).
JVET-C0029 "Surveillance sequences for video coding development", H. Zhang, X. Ma, H. Yang (Huawei), W. Qiu (Hisilicon).
JVET-C0041 "Proposed test sequences for 1080p class", A. Norkin (Netflix).
JVET-C0044 "Response to B1002 Call for test materials: Five test sequences for screen content video coding", J. Guo, L. Zhao, T. Lin (Tongji Univ.), H. Yu (Futurewei).
JVET-C0048 "Lens distorted test sequence by an action camera for future video coding", K. Kawamura, S. Naito (KDDI Corp.).
JVET-C0050 "Test sequence formats for virtual reality video coding", K. Choi, V. Zakharchenko, M. Choi, E. Alshina (Samsung).
JVET-C0064 "Nokia test sequences for virtual reality video coding", J. Ridge, M. M. Hannuksela, E. B. Aksu, J. Lainema, A. Aminlou (Nokia).
JVET-C0067 "Ultra High Resolution (UHR) 360 Video", C. J. Murray (Panoaction).
The AHG recommends:
To review all related contributions.
To perform viewing of the test material.
To discuss further actions to select new test materials for JVET activity.
The review of new test material and refinement of test cases is expected again to be an important topic in this meeting.
BoG (T. Suzuki, tentatively scheduled for Sunday afternoon, room C2):
- Review the class A1/A2 selection made by last meeting, and propose possible changes
- Establish a work plan towards the next meeting for investigating the 1080p sequences (with the goal of establishing a new class or replacing class B by the next meeting)
- Summarize the material offered in VR, identify if it covers all common methods of projection/rendering/stitching, and discuss possible methods of quality assessment
For the VR material, it should also be discussed with parent bodies how to coordinate the different activities in this area. The work of JVET should not be dominated by VR.
The new material for screen content should be brought to the attention of JCT-VC, as it could be used in the SCC verification test. Currently, the development of higher compression technology specifically for screen content is not in the focus of JVET.
JVET-C0005 JVET AHG report: Visual quality metrics (AHG5) [P. Nasiopoulos, M. T. Pourazad]
Reviewed Sat. morning
The document summarizes activities on visual quality metrics at the 3rd Meeting at Geneva, Switzerland (26 May – 1 June 2016).
There is one contribution at this meeting that suggests including MS-SSIM for evaluating video quality, as follows:
JVET-C0030 “Perceptual Quality Assessment Metric MS-SSIM", H. Zhang, X. Ma, Y. Zhao, H. Yang (Huawei)