The deblocking filtering process is similar to that in HEVC. In VVC, the deblocking filter is applied to CU boundaries, transform subblock boundaries, and prediction subblock boundaries. The prediction subblock boundaries include the prediction unit boundaries introduced by the SbTMVP and affine modes, and the transform subblock boundaries include the transform unit boundaries introduced by the SBT and ISP modes, as well as transforms due to implicit splits of large CUs. As in HEVC, the processing order of the deblocking filter is defined as horizontal filtering of vertical edges for the entire picture first, followed by vertical filtering of horizontal edges. This specific order allows either multiple horizontal or vertical filtering processes to be applied in parallel threads, or the filter can still be implemented on a CTB-by-CTB basis with only a small processing latency. Compared to HEVC deblocking, the following modifications are introduced:
- The filter strength of the deblocking filter depends on the averaged luma level of the reconstructed samples.
- The deblocking tC table is extended and adapted to 10-bit video.
As in HEVC, the filter strength of the deblocking filter in VVC is controlled by the variables β and tC, which are derived from the averaged quantization parameter qPL of the two adjacent coding blocks. In VVC, luma-adaptive deblocking can further adjust the filtering strength of the deblocking filter based on the averaged luma level of the reconstructed samples. This additional refinement compensates for the effect of a nonlinear transfer function, such as an Electro-Optical Transfer Function (EOTF), in the linear light domain.
In this method, the strength of the deblocking filter is controlled by adding an offset to qPL according to the luma level of the reconstructed samples. The reconstructed luma level LL is derived as follows:
LL= ( ( p0,0 + p0,3 + q0,0 + q0,3 ) >> 2 ) / ( 1 << bitDepth ) (3-0)
where the sample values pi,k and qi,k with i = 0..3 and k = 0 and 3 are derived as shown in Figure 53.
Figure 53 – Sample position of pi,k and qi,k
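As an illustrative sketch (not actual VTM or reference software code), the LL derivation above can be written directly from equation (3-0): the four corner samples of the 4-sample filter segment are averaged and the result is normalized by the maximum value of the bit depth.

```c
#include <stdint.h>

/* Sketch of eq. (3-0): LL = ((p0,0 + p0,3 + q0,0 + q0,3) >> 2) / (1 << bitDepth).
 * The function name and the use of double for the normalized result are
 * assumptions for illustration; the normalization maps LL into [0, 1). */
static double luma_level(int p00, int p03, int q00, int q03, int bitDepth)
{
    int avg = (p00 + p03 + q00 + q03) >> 2;   /* average of four corner samples */
    return (double)avg / (double)(1 << bitDepth);
}
```

For example, with 10-bit samples all equal to 512, the average is 512 and LL evaluates to 512 / 1024 = 0.5, i.e. mid-level luma.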
The variable qPL is derived as follows:
qPL = ( ( QpQ + QpP +1 ) >> 1 ) + qpOffset (3-0)
where QpQ and QpP denote the quantization parameters of the coding units containing the samples q0,0 and p0,0, respectively. The offset qpOffset depends on the transfer function and the reconstructed luma level LL. The mapping function between qpOffset and the luma level is signalled in the SPS; it should be derived according to the transfer characteristics of the content, since transfer functions vary among video formats.
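The qPL derivation above can be sketched as follows. This is an assumption-laden illustration, not reference software: the function name is hypothetical, and qpOffset is taken as an input, standing in for the result of the SPS-signalled luma-level-to-offset mapping.

```c
/* Sketch of eq. (3-0) for qPL: the boundary QP is the rounded average of
 * the QPs of the two adjacent coding units, plus a luma-dependent offset.
 * qpOffset would in practice be looked up from the SPS-signalled mapping
 * using the reconstructed luma level LL. */
static int derive_qpl(int qpQ, int qpP, int qpOffset)
{
    return ((qpQ + qpP + 1) >> 1) + qpOffset;
}
```

For example, with QpQ = 32, QpP = 35 and qpOffset = 3, the averaged QP is (32 + 35 + 1) >> 1 = 34, giving qPL = 37; β and tC would then be derived from this adjusted qPL.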