JCTVC-B059 [J. Zheng (HiSilicon & Huawei)] Adaptive frequency weighting quantization in macroblock level
(refers to previous JCTVC-A111 and JCTVC-A028) Usage of parameterized frequency weighting models (instead of direct specification of quantization matrices) at picture level – weighting parameters are assigned to frequency bands, and those weighting factors are shared by transforms of different block sizes. The quantization matrix can be switched on or off at MB level. Syntax allows usage of different weighting modes (up to 7, excluding the case of weighting off) that can be selected. The mode selected at MB level can also be encoded dependent on the modes of adjacent MBs and on the MB type.
Adapted once per frame.
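The band-based parameterization described above can be illustrated by a minimal sketch (names and the band-assignment rule are illustrative assumptions, not taken from the contribution): per-band weights are expanded into weighting matrices of different transform sizes, so one parameter set serves all block sizes.

```python
import numpy as np

def build_weighting_matrix(band_weights, size):
    """Expand per-band weights into a size x size weighting matrix.

    Hypothetical band assignment: coefficient (i, j) is mapped to a band
    by its normalized diagonal distance from DC, so the same parameter
    set covers e.g. 4x4 and 8x8 transforms.
    """
    n_bands = len(band_weights)
    w = np.empty((size, size))
    max_dist = 2 * (size - 1)  # largest possible i + j
    for i in range(size):
        for j in range(size):
            band = min(n_bands - 1, (i + j) * n_bands // (max_dist + 1))
            w[i, j] = band_weights[band]
    return w

# Example: 4 bands, boosting low frequencies, attenuating high ones.
weights = [1.2, 1.0, 0.9, 0.8]
m4 = build_weighting_matrix(weights, 4)
m8 = build_weighting_matrix(weights, 8)
```

Both matrices share the same four parameters; only the band-to-coefficient mapping scales with the transform size.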
Results reported from KTA experimentation using two different adaptive quantization settings are around 3% BR saving for classes B and C, but only around 0.5% for class D.
Not compared against AQMS from KTA (which gives PSNR losses versus flat quant)
Not clear yet how it would be implemented in TMuC
Concern raised about restricted flexibility due to parameterized weighting
Not clear how the weighting parameters at picture level are determined. Orally, it was said that this depends on the number and relevance of coefficients found in the respective band; depending on the subjective importance of the band, the weight is either increased or decreased relative to the default (which is determined from a standard visual model).
Unclear how frequency weighting can increase PSNR (which uses flat weighting of quadratic errors) – could this be due to the unequal quantization of picture types that happens in the local adaptation?
The following aspects were recommended for further study:
- Better explain how the model parameters are derived.
- Propose how to combine this into TMuC.
- Justify the necessity of up to 7 models.
- Analyse how the effective quantization step size is adapted in various frame types.