
17.11 Quantization


17.11.1 JCTVC-D024 Compact representation of quantization matrices for HEVC [M. Zhou, V. Sze (TI)]

Due to the large transform and quantization block sizes used in HEVC, carrying quantization matrices in the picture parameter set can lead to significant overhead. In this document, a quantization matrix compression algorithm is proposed to achieve a compact representation of quantization matrices for HEVC. The algorithm reportedly provides 7x more compression than the AVC quantization matrix coding method when all of its compression steps are enabled. It was recommended to set up an ad hoc group to investigate perceptual quantization for HEVC coding with large block transforms, and to specify a compact quantization matrix representation format for efficient carriage of the quantization matrices in the HEVC picture parameter set.
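
For reference, the AVC baseline against which the 7x figure is reportedly measured codes each scaling-list entry as a difference from the previous entry along a scan order, using signed Exp-Golomb codes. A minimal C++ sketch of counting the bits of that baseline (function names are illustrative):

    #include <vector>

    // Bits used by a signed Exp-Golomb code se(v), as in the AVC spec.
    static int seBits(int v) {
        unsigned codeNum = (v <= 0) ? unsigned(-2 * v) : unsigned(2 * v - 1);
        int len = 0;
        while ((codeNum + 1) >> len)   // len becomes floor(log2(codeNum+1)) + 1
            ++len;
        return 2 * len - 1;
    }

    // Bit cost of AVC-style scaling-list coding: DPCM along a scan order
    // with each difference coded as se(v).  "scan" maps coding order to
    // raster positions (e.g. the zig-zag scan for an 8x8 list).
    static int avcStyleListBits(const std::vector<int>& m,
                                const std::vector<int>& scan) {
        int lastScale = 8;   // initial predictor, as in the AVC syntax
        int bits = 0;
        for (int pos : scan) {
            bits += seBits(m[pos] - lastScale);
            lastScale = m[pos];
        }
        return bits;
    }

A 32x32 matrix has 1024 entries, so even a few bits per entry in the picture parameter set motivates the proposed compaction.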

Compression of the quantization matrix is lossy.

The effect on visual quality was not investigated.

It was asked whether the compression technique guarantees some maximum error for the individual elements of the matrix.

It was suggested that defining a small set of switchable matrices (VQ style) would potentially be better.

Would it be appropriate to change quantization matrices for every picture? If not, the compression of the matrices may not be as necessary, particularly as the effect of lossy compression is currently unknown.

This would be a difficult subject to study, requiring subjective investigation, and it was thought doubtful whether many resources should be devoted to it currently.

17.11.2 JCTVC-D038 Delta QP signaling at sub-LCU level [M. Budagavi, M. Zhou (TI)]

Delta QP is widely used in practice for perceptual quantization and rate control purposes. In the current design of HM 1.0, delta QP is sent only at the LCU level, i.e. once per 64x64 block. Hence the spatial granularity at which QP can change is reduced compared to AVC, which allows signaling of delta QP at the macroblock (16x16) level. The reduced granularity at which delta QP can be signaled in HM 1.0 impacts the visual quality performance of perceptual rate control techniques that adapt the QP to the source content. This contribution proposes that delta QP be signaled at a sub-LCU level to maintain the spatial granularity of signaling at the level supported by AVC (i.e. 16x16 blocks). The contribution also presents syntax modifications for sub-LCU level signaling, and proposes prediction methods for calculating delta QP at sub-LCU levels.
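
As a point of reference for the prediction aspect, the following is a minimal sketch of AVC-like dQP reconstruction applied per 16x16 quantization group, where each group's QP is predicted from the previously decoded group and wrapped modulo the QP range. The contribution's actual prediction methods may differ; this only illustrates the baseline behavior being extended.

    #include <vector>

    // Sketch: reconstruct per-group QPs from signaled deltas, with the
    // previous group (initially the slice QP) acting as the predictor and
    // AVC-style modular wrapping into [0, 51].
    std::vector<int> reconstructGroupQps(int sliceQp,
                                         const std::vector<int>& deltas) {
        std::vector<int> qps;
        int pred = sliceQp;
        for (int d : deltas) {
            int qp = (pred + d + 52) % 52;   // wrap as in the AVC QP derivation
            qps.push_back(qp);
            pred = qp;                        // decoded group predicts the next
        }
        return qps;
    }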

Examples were shown from AVC streams. (In fact, many streams also exist that do not change QP at the macroblock level.) A purpose for such a scheme could be rate control or subjectively adapted quantization (e.g. a lower QP in flat macroblocks).

For rate control, changes at the CU level would appear to be sufficient.

For subjective adaptation, it is claimed that the ability to change at the level of 16x16 is needed.

Specific scan structures were proposed, since the 16x16 blocks within an LCU are not coded in row-sequential order as they were in AVC.



  • It may be difficult to obtain evidence of the level at which the ability to change QP is really needed (without extensive testing).

  • It was questioned whether the ability to change QP at 16x16 granularity is really needed, or whether that granularity is only historical, since the macroblock size was previously always 16x16.

  • One independent expert expressed the opinion that 64x64 may be a bit too coarse.

  • There were some other voices raised in favor.

  • If adopted, it would certainly be useful to make the feature switchable.

  • How to evaluate the encodings is also a problem.

One idea considered was whether the QP changes as currently used in AVC streams could be modeled, e.g. as a Markov process. It was suggested to study how frequently the QP changes and by how much.
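
A minimal sketch of that study, assuming per-macroblock QP traces extracted from existing AVC streams (the trace below is only a stand-in):

    #include <cstdio>
    #include <map>
    #include <utility>
    #include <vector>

    int main() {
        // Stand-in QP trace; a real study would parse mb_qp_delta from
        // actual AVC bitstreams to obtain one QP value per macroblock.
        std::vector<int> qpTrace = {30, 30, 31, 31, 31, 29, 30, 30, 28, 30};

        // First-order Markov statistics: count (previous QP, current QP) pairs.
        std::map<std::pair<int, int>, int> transitions;
        int changes = 0;
        for (size_t i = 1; i < qpTrace.size(); ++i) {
            ++transitions[{qpTrace[i - 1], qpTrace[i]}];
            if (qpTrace[i] != qpTrace[i - 1])
                ++changes;
        }

        // How frequently QP changes; row-normalizing 'transitions' would give
        // the empirical transition probabilities P(qp_i | qp_(i-1)).
        std::printf("QP changed on %d of %zu transitions\n",
                    changes, qpTrace.size() - 1);
        return 0;
    }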

Further study in some AHG was encouraged; this should include study of the need for more frequent QP changes in the context of rate control as well.

17.11.3 JCTVC-D041 Finer scaling of quantization parameter [D. Hoang (Zenverge)]

The current Test Model under Consideration (TMuC) employs a quantization parameter (QP) scaling that is borrowed/inherited from the AVC standard design. In AVC, the quantization step size increases by approximately 12.25% with each increment of QP, so that the quantization step size doubles when QP is incremented by 6. For the purpose of rate control, this 12.25% increment was asserted to be too coarse for certain applications, such as low-delay coding. This contribution proposes a specification that allows the granularity of the quantization parameter (QP) to be varied at the slice and picture levels. Backward-compatibility with the AVC approach is maintained at one of the granularity settings. In addition to varying the granularity, this contribution also proposes a variable QP offset that can be specified at the slice and picture levels.
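
For concreteness, the AVC relationship being generalized is a step size proportional to 2^(QP/6), i.e. a factor of 2^(1/6), roughly 1.1225, per QP increment. The sketch below makes the doubling period a parameter; the proposal's exact parameterization may differ.

    #include <cmath>
    #include <cstdio>

    // Relative quantization step size when the step doubles every 'g' QP
    // increments; g = 6 reproduces the AVC behavior, larger g gives finer
    // scaling.
    double relativeQstep(int qp, int g) {
        return std::exp2(double(qp) / double(g));
    }

    int main() {
        std::printf("g = 6:  +1 QP scales the step by %.4f\n", relativeQstep(1, 6));
        std::printf("g = 12: +1 QP scales the step by %.4f\n", relativeQstep(1, 12));
        return 0;
    }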

A suggested problem was that, for rate control, the granularity may be too coarse due to the larger CUs (see notes in the context of slice discussions).

It was remarked that delta QP at the sub-CU level could also resolve this issue.

17.11.4 JCTVC-D258 CU Adaptive dQP Syntax Change for Better Decoder Pipelining [L. Dong, W. Liu, K. Sato]

The current HM adaptive quantization syntax places the "dQP" at the very end of each LCU when the whole LCU is not coded in SKIP mode. Such syntax could introduce delay in decoding. The proposed change is to place the "dQP" after the mode information of the first non-skipped CU in an LCU. It was asserted that the proposed change would avoid unnecessary decoder delay, and would also avoid unnecessary signaling when an LCU is further partitioned but every sub-CU is then skipped.

It was agreed that this seems to be a reasonable suggestion, but before adopting it should be cross-checked with the originators of the HM software.

Another suggestion was made that it would be better to send this after the CBF. This was agreed.

Decision: Adopted (to signal after CBF).
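
A hedged sketch of the adopted ordering, using placeholder data structures rather than the actual HM syntax: the decoder picks up the dQP as soon as the first nonzero CBF is parsed, and no dQP is sent when nothing in the LCU codes coefficients.

    #include <vector>

    // Placeholder CU record; the real HM structures and parsing differ.
    struct Cu {
        int cbf;       // nonzero if the CU codes any coefficients
        int deltaQp;   // stand-in for the value parsed from the bitstream
    };

    // dQP is consumed right after the first nonzero CBF in the LCU, so
    // inverse quantization of that CU can start immediately, and the LCU
    // carries no dQP at all when every CU is skipped or uncoded.
    void applyLcuQp(const std::vector<Cu>& cusInLcu, int& currQp) {
        bool qpUpdated = false;
        for (const Cu& cu : cusInLcu) {
            if (cu.cbf != 0 && !qpUpdated) {
                currQp += cu.deltaQp;   // signaled once per LCU, after CBF
                qpUpdated = true;
            }
            // ... inverse quantization of 'cu' with currQp goes here ...
        }
    }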

17.11.5 JCTVC-D308 On LBS and Quantization [Kazushi Sato] (initial version rejected as a placeholder upload)

The concept of the coding unit (CU) has been introduced in HEVC, and it reportedly contributes greatly to coding efficiency improvement. The current HEVC specification defines delta QP only at the slice level, and in the current HM it can be specified only at the LCU level. It was asserted that this is not sufficient for subjective quality improvement, as it does not provide the degree of flexibility of QP control found in MPEG-2/H.262 or AVC, where the macroblock size is fixed at 16x16.

In this document it is reported that the current dQP specification is not sufficient, and that the dQP overhead bit usage has no impact on coding efficiency when all dQP values are 0.

The contribution reported results with adaptive quantization on and off for LCU sizes of 16 and 64. In general, adaptive quantization leads to losses in PSNR.

The overhead of dQP is reportedly 0.47% at QP 37 for the 16x16 case and 0.047% for the 64x64 case. The overhead (in percentage terms) is smaller when QP is lower, since using a small QP implies spending more bits on quantized coefficient data.

It was remarked that there is an inconsistency in the dQP definition between the current WD and the software.

Comments:



  • To really show evidence, it would be necessary to implement dQP at the sub-CU level (i.e. use a 64x64 LCU with 16x16 dQP). For the examples given, adaptive quantization at 64x64 looks better than adaptive quantization at 16x16, which is certainly due to the better compression performance of the 64x64 LCU.

  • Is 16x16 dQP really needed for ultra HD? (That granularity was traditionally used even for QCIF and CIF.)

Further study appears to be needed to obtain evidence about the necessity of smaller-block-size QP adaptation. Such further study was encouraged to take place in an associated AHG.

17.11.6 JCTVC-D384 Quantization with Hard-decision Partition and Adaptive Reconstruction Levels for low delay setting [Xiang Yu, Jing Wang, Da-ke He, En-hui Yang (RIM)]

This contribution proposed a quantization scheme for the low-delay low-complexity setting based on "hard-decision quantization" and adaptive reconstruction levels. Specifically, for each inter frame, the proposed forward quantization process uses the hard-decision quantization as implemented in the HM scheme, followed by a statistics-collection stage for the reconstruction levels. The reconstruction levels are then selectively transmitted to the decoder and are used for reconstructing the next frame. Compared with the anchor in the low-delay low-complexity setting, the proposed scheme reportedly reduces computational complexity while improving rate-distortion performance. In terms of RD performance, the proposed scheme reportedly outperforms the reference configuration in the low-delay low-complexity setting by 1.2%, 2.2% and 3.0% BD bit rate for Y, U, and V, respectively; in terms of complexity, the proposed method reportedly saves the RDOQ computation on the encoder side without changing the decoding complexity.

In the proposed method, a delta-i (the deviation for quantization cell i) is transmitted when it gives a benefit (an improvement larger than a threshold).
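
A minimal sketch of the statistics-collection step, assuming a deadzone-style hard-decision quantizer and centroid-based offsets; the cell indexing and the benefit test are illustrative assumptions, not the contribution's exact method.

    #include <cmath>
    #include <map>
    #include <vector>

    // After hard-decision quantization of a frame, estimate a better
    // reconstruction level for each quantization cell i as the centroid of
    // the coefficients that quantized to level i.  The decoder would then
    // reconstruct as sign(c) * (level + delta_i) * qstep.  In the proposal,
    // delta_i is transmitted only when the improvement exceeds a threshold
    // (that test is not modeled here).
    std::map<int, double> estimateReconOffsets(const std::vector<double>& coeffs,
                                               double qstep) {
        std::map<int, double> sum, count;
        for (double c : coeffs) {
            int level = int(std::abs(c) / qstep);   // hard decision, no RDOQ
            if (level == 0)
                continue;
            sum[level] += std::abs(c) / qstep - level;  // offset within cell i
            count[level] += 1.0;
        }
        std::map<int, double> delta;
        for (const auto& kv : sum)
            delta[kv.first] = kv.second / count[kv.first];  // centroid delta_i
        return delta;
    }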

A runtime reduction (due to RDOQ being turned off) of 6% was reported at the encoder, with a slight increase at the decoder.

Partitioning is not changed at the encoder.

Questions/comments:



  • Reconstruction levels become non-uniformly distributed.

  • How does it perform in intra-only and random-access LC configurations, and in HE cases?

  • The differential transmission of delta-i values could be problematic in the case of losses; the case where the actual deviation from a uniform-reconstruction quantizer is signaled for each frame should also be studied.

  • When QP is changed, the partitioning changes. How is that handled? The proponent indicated that a normalization was performed.

Further study appeared needed.

One expert mentioned that using RDOQ may not be a widely supported feature among encoders, so looking into tools that avoid it is interesting.

It was also suggested to put this topic under the mandates of a quantization AHG.

