This subtest was discussed 0900 Friday 13 July (chaired by GJS).
Overall gain relative to the VTM for these proposals was about 1.5%/0.9%/0.6% for AI/RA/LB. All proposals were roughly in that ballpark, and all used about the same number of contexts (123–156).
The Samsung proposal (test 7.1.3) differed in its context initialization method, using 0.5 probability initialization. Variation (1) in the summary table tries to compensate for this by changing the reference to use that initialization as well. The difference between the two was 0.0%/0.2%/0.6% for AI/RA/LB in the VTM.
There was also a difference in the VTM testing in the coding of remaining coefficients: the Samsung proposal did not use additional features found in the BMS, while the others did. The proponent suggested focusing on the BMS results to avoid that difference of roughly 0.25%.
A proponent remarked that, for most of the technologies tested, the method used to train the context initialization values was not generally known.
Test 7.1.2 is a scheme compatible with the K0071 trellis quantization method, but with that aspect disabled. A participant remarked that this scheme might have throughput issues due to the potentially high number of context-coded bins in the worst case.
Subtest 7.2: Comparison of dependent quantization and sign data hiding
This subtest was discussed 1920-2000 Thursday 12 July (chaired by GJS).
K0072 This has two sets of quantization reconstruction levels and a state machine to choose between them; the parity of the coefficient drives the state machine. From the encoder perspective, this is essentially trellis-coded quantization.
This uses double the number of context models for the significance flag and the absolute-level-greater-than-1 flag.
The gain over the VTM is 5.0%/3.4%/2.7% for AI/RA/LD. The gain over the BMS is 2.5%/1.9%/1.6%. The encoder impact is about 10–13%.
This effectively has a combination of quantization and entropy coding together.
It was commented that there were several relevant non-CE contributions that should be taken into account.
It was also commented that a fall-back mode is needed that does not require an encoder trellis search.
The decoding process is a bit more complicated.
This should be considered for testing together with non-CE contributions in a CE. It was commented that at least one of the non-CE approaches performs better and might be considered instead.
Subtest 7.5: Comparison of two configurations for transform domain sign prediction
This subtest was discussed 1050 Friday 13 July (chaired by GJS).
Test 7.5.1 (K0044) performs residual sign prediction in the transform domain, predicting up to 5 signs per transform block. The inverse transform requires the reconstructed samples of neighbouring blocks. This has a serious complexity impact (as noted above for K0140) and a very significant impact on decoder runtime. The gain over the VTM is reported as 1.3%/1.0%/0.7% for AI/RA/LB.
Test 7.5.2 is the same scheme, but used only for intra blocks.
Further study was recommended. A way to avoid the serial dependency is especially needed.
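The hypothesis-testing idea behind this kind of sign prediction, and the serial dependency it creates, can be sketched as follows. Everything here is illustrative: `predict_signs`, the toy border cost, and the stand-in `reconstruct` callback are assumptions, not the K0044 definitions.

```python
# Hedged sketch of transform-domain sign prediction in the spirit of
# Test 7.5.1: try every sign hypothesis for the first n_pred coefficients,
# reconstruct the block for each, and keep the hypothesis whose top/left
# border best matches the neighbouring reconstructed samples.
# ASSUMPTION: the cost function is a toy first-order discontinuity measure,
# not the one defined in K0044.
from itertools import product

def predict_signs(abs_coeffs, reconstruct, left_col, top_row, n_pred):
    """Return the sign tuple (+1/-1 per predicted coefficient) with the
    smallest border discontinuity cost. `reconstruct(coeffs)` stands in for
    dequantization + inverse transform and yields a 2-D sample block."""
    best, best_cost = None, None
    for signs in product((1, -1), repeat=n_pred):
        coeffs = list(abs_coeffs)
        for i, s in enumerate(signs):
            coeffs[i] *= s
        block = reconstruct(coeffs)
        # Discontinuity against the top neighbour row and left neighbour column.
        cost = sum(abs(block[0][x] - top_row[x]) for x in range(len(top_row)))
        cost += sum(abs(block[y][0] - left_col[y]) for y in range(len(left_col)))
        if best_cost is None or cost < best_cost:
            best, best_cost = signs, cost
    return best
```

Because `reconstruct` must be evaluated against already-reconstructed neighbour samples before the current block's signs are resolved, the prediction cannot start until those neighbours are final, which is the serial dependency noted above; it also runs up to 2^n_pred inverse transforms per block.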
JVET-K0044 CE7: Residual sign prediction in transform domain (Tests 7.5.1 and 7.5.2) [A. Filippov, A. Karabutov, V. Rufitskiy, J. Chen (Huawei)]
JVET-K0069 CE7: Coefficient Coding (Test 1.1) [M. Coban, J. Dong, T. Hsieh, M. Karczewicz (Qualcomm)]
JVET-K0071 CE7: Transform coefficient coding and dependent quantization (Tests 7.1.2, 7.2.1) [H. Schwarz, T. Nguyen, D. Marpe, T. Wiegand (Fraunhofer HHI)]
JVET-K0138 CE7.1.3: Scan Region-based Coefficient Coding [Y. Piao, W. Choi, C. Kim (Samsung)]
JVET-K0140 CE7: Adaptive quantization step size scaling (Test 7.3.1) [Y. Zhao, H. Yang, J. Chen (Huawei)]
JVET-K0251 CE7.3.2: Extension of quantization parameter value range [S.-T. Hsiang, S.-M. Lei (MediaTek)]
JVET-K0252 CE7.3.3: Derivation of chroma QP from luma QP [S.-T. Hsiang, S.-M. Lei (MediaTek)]
JVET-K0321 CE 7.1.4: JEM 7.0 coefficient coding with complexity reduction [C. Auyeung, J. Chen (Huawei)] [late]
JVET-K0398 CE7: Block size dependent coefficient scanning (CE7.4.1) [Y. Kidani, K. Kawamura, S. Naito (KDDI)] [late]
JVET-K0457 Crosscheck for CE7-1.2 [M. Gao, W. Zhang (Hulu)] [late]
JVET-K0459 Crosscheck for CE7-5.1 [M. Gao, W. Zhang (Hulu)] [late]