The Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T WP3/16 and ISO/IEC JTC 1/SC 29/WG 11 held its second meeting during 21-28 July 2010 at the ITU-T premises in Geneva, CH.


JCTVC-B082 [K. Sato (Sony)] New large block structure with reduced complexity

The AVC standard employs the concept of macroblocks consisting of 16x16 luma samples and typically 8x8 chroma samples. Most of the proposals submitted in response to the CfP extended the macroblock (maximum coding tree block) size to 64x64 or 32x32, which was reportedly one of the key tools for coding efficiency improvement.

However, increasing the macroblock size reportedly has two problems in terms of implementation. First, it increases the buffering needed to store samples from neighboring macroblocks for intra prediction or deblocking filter application. Second, if a macroblock larger than 16x16 is employed, the processing order of the 16x16 blocks differs from that of MPEG-2 or AVC. This would reportedly become a bottleneck in developing multi-codec LSI chips.
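
As a rough illustration of the first point (our arithmetic, not figures from the contribution), the following sketch estimates the per-block neighbor storage for intra prediction, assuming 8-bit luma and counting one row of above-neighbor samples plus one column of left-neighbor samples. It shows why a non-square 32x16 block sits between 16x16 and 32x32 in buffering cost:

```c
#include <stdio.h>

/* Illustrative estimate only: per-block neighbor storage for intra
 * prediction, assuming 8-bit luma samples (one byte each), one row of
 * above neighbors (block width) and one column of left neighbors
 * (block height). Real codec buffering is more involved. */
static unsigned neighbor_bytes(unsigned width, unsigned height)
{
    return width + height;
}

int main(void)
{
    printf("16x16 MB : %u bytes\n", neighbor_bytes(16, 16));  /*  32 */
    printf("32x16    : %u bytes\n", neighbor_bytes(32, 16));  /*  48 */
    printf("32x32    : %u bytes\n", neighbor_bytes(32, 32));  /*  64 */
    printf("64x64    : %u bytes\n", neighbor_bytes(64, 64));  /* 128 */
    return 0;
}
```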

This contribution proposed using a non-square macroblock scheme such as 32x16 or 64x32, which would reportedly provide a trade-off between coding efficiency on the one hand and complexity or compatibility with existing codec designs on the other.

The reported simulation results (based on KTA2.6r1) compared the average gain obtained with 32x32 against that obtained with 32x16, with the assertion that 32x16 retained 91.4% of the gain for IPPP with lower QPs, 79.1% for IPPP with higher QPs, 83.1% for HierB with lower QPs, and 64.5% for HierB with higher QPs.
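
To make the percentages concrete, they appear to express the gain of the 32x16 scheme as a fraction of the 32x32 gain. A minimal sketch of that arithmetic, with invented BD-rate figures (not taken from JCTVC-B082):

```c
#include <stdio.h>

/* Hypothetical illustration of how a relative-gain percentage is
 * derived; both BD-rate savings below are invented examples. */
int main(void)
{
    double gain_32x32 = 10.0;  /* hypothetical BD-rate saving, percent */
    double gain_32x16 = 9.14;  /* hypothetical BD-rate saving, percent */

    /* fraction of the 32x32 gain retained by the 32x16 scheme */
    printf("retained gain: %.1f%%\n",
           100.0 * gain_32x16 / gain_32x32);  /* prints 91.4% */
    return 0;
}
```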

It was suggested that using a non-square coding block provides a good trade-off between coding efficiency and implementation cost or interoperability. It was proposed that this topic be included in the discussion of the AhG on large block structures.

It was remarked that an excessively large macroblock size could cause problems with typical memory designs, which have difficulty handling units larger than 1024 bytes.
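
For context (our arithmetic, not from the meeting notes): with 8-bit samples, a 32x32 luma block occupies exactly 1024 bytes, so any larger block exceeds that typical memory unit:

```c
#include <stdio.h>

/* Bytes occupied by a WxH block of 8-bit luma samples (one byte each). */
static unsigned block_bytes(unsigned w, unsigned h)
{
    return w * h;
}

int main(void)
{
    printf("32x32 : %u bytes\n", block_bytes(32, 32)); /* 1024 -> fits    */
    printf("64x32 : %u bytes\n", block_bytes(64, 32)); /* 2048 -> exceeds */
    printf("64x64 : %u bytes\n", block_bytes(64, 64)); /* 4096 -> exceeds */
    return 0;
}
```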

A participant indicated that it would be better (from the implementation perspective) to fix the maximum coding tree block (CTB) / macroblock size rather than requiring decoders to support raster scans of different block sizes.

It was remarked that some features of the TMuC, such as merge flags with 16x16 maximum CTB size, might enable some similar functionality in a different way – although another participant indicated that the current TMuC may not support the use of a merge flag across CTB boundaries.

During the discussion, it was remarked that there is no real difference in hardware implementation burden between 32x16 and 32x32, although there possibly is for larger units such as 64x64 and 128x128.

One suggestion was that the same could be achieved using the merge flag of the TMuC (binding together two 16x16 CUs). This would, however, require that the merge flag can be applied at the top level, which is currently not the case.
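
A minimal sketch of the binding idea, with hypothetical structure and field names that do not reflect the actual TMuC software: a merged right CU inherits the motion data of its left neighbor, so a horizontal pair of 16x16 CUs behaves like one 32x16 partition.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical CU record; field names are illustrative, not TMuC's. */
typedef struct {
    int  mv_x, mv_y;   /* motion vector */
    int  ref_idx;      /* reference picture index */
    bool merge_flag;   /* inherit motion data from the left neighbor */
} cu_info;

/* If the right CU of a horizontal 16x16 pair signals merge, it reuses
 * the left CU's motion data, emulating a single 32x16 block. */
static void apply_merge(const cu_info *left, cu_info *right)
{
    if (right->merge_flag) {
        right->mv_x    = left->mv_x;
        right->mv_y    = left->mv_y;
        right->ref_idx = left->ref_idx;
    }
}

int main(void)
{
    cu_info left  = { 3, -1, 0, false };
    cu_info right = { 0,  0, 0, true };  /* merged with left neighbor */
    apply_merge(&left, &right);
    printf("right CU mv = (%d,%d), ref = %d\n",
           right.mv_x, right.mv_y, right.ref_idx);
    return 0;
}
```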

Continued study (in TE or AHG activity) was encouraged.


