6.13.3 Other
6.13.3.1.1.1.1.1.1 JCTVC-F162 Entropy coding performance simulations [T. Davies, A. Fuldseth (Cisco)]
(Information document)
Various coding conditions were simulated with both CAVLC and CABAC. Under common conditions it is reported that CABAC provides gains between 6.1% and 7.6%. The performance of the entropy coders is also investigated with various encoder restrictions: no RDOQ, no adaptation during RDO mode search, and tile-based coding. These restrictions individually reduce the average gap between CABAC and CAVLC by 0.9%, 1.75% and 0.75%, respectively, in LC settings. With RDOQ off, RDO adaptation off, and tiles enabled, the gap is reported to be between 2.0% and 3.8% on average in LC settings.
6.13.3.1.1.1.1.1.2 JCTVC-F176 Improved PIPE/V2F for low complexity entropy coding [K. Sugimoto, R. Hattori, S. Sekiguchi, K. Asai (Mitsubishi)]
In this contribution, PIPE/V2F using an 8-bit fixed-length code is proposed for the low-complexity condition. In the proposed scheme, V2F (variable-length to fixed-length) coders are used for PIPE instead of V2V (variable-length to variable-length) coders, and all V2F coders except those for the highest and lowest MPS probabilities are designed to generate 4-bit fixed-length codes. In the V2F coder for the lowest MPS probability, 8 input bins are output directly as an 8-bit fixed-length code, realizing a memory-less 8-bit V2F conversion. In the V2F coder for the highest MPS probability, truncated unary bins are converted to an 8-bit fixed-length code; the number of preceding "0"s forms the 8-bit fixed-length code for each entry, again realizing a memory-less conversion. The proposed scheme is implemented on HM3.0, and simulations are conducted using the common test configurations of the low-complexity settings. It is reported that the proposed scheme achieves BD-rate reductions of 3.9%, 4.3% and 3.2% on average for AI, RA and LD, respectively. It is also reported that the increase in decoding time compared to the anchor is around 10%, 1% and 0% for AI, RA and LD, respectively.
The proposal uses the context modelling of CABAC but replaces the arithmetic coding engine with fixed-length codes.
It was noted that a loss in chroma occurs compared to CAVLC.
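To illustrate the variable-to-fixed-length idea, the following is a minimal, hypothetical sketch (not the actual JCTVC-F176 design; the constant `MAX_RUN` and both function names are illustrative). It models the highest-MPS-probability case described above: each run of up to 255 MPS ("0") bins, terminated by an LPS, is mapped to a single 8-bit fixed-length codeword equal to the run length, with no state carried between codewords.

```python
MAX_RUN = 255  # longest MPS run representable in one 8-bit codeword (assumption)

def v2f_encode(bins):
    """Split a bin sequence into 8-bit codewords, one per (truncated) MPS run."""
    codewords = []
    run = 0
    for b in bins:
        if b == 0:                     # MPS bin extends the current run
            run += 1
            if run == MAX_RUN:         # run full: emit codeword, no LPS terminator
                codewords.append(run)
                run = 0
        else:                          # LPS bin terminates the run
            codewords.append(run)
            run = 0
    if run:                            # flush a trailing, unterminated run
        codewords.append(run)
    return codewords

def v2f_decode(codewords, nbins):
    """Inverse mapping: each codeword expands to `run` MPS bins plus one LPS."""
    bins = []
    for cw in codewords:
        bins.extend([0] * cw)
        if cw < MAX_RUN:               # full-length runs carry no LPS terminator
            bins.append(1)
    return bins[:nbins]                # trim any LPS implied by a flushed run
```

The decoder needs only the codeword value itself, which is what makes the conversion memory-less.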
6.13.3.1.1.1.1.1.3 JCTVC-F284 Cross-check of Mitsubishi Electric’s improved PIPE/V2F for low complexity entropy coding proposal JCTVC-F176 [J. Stegemann (Fraunhofer HHI)]
6.13.3.1.1.1.1.1.4 JCTVC-F268 Unified PIPE-Based Entropy Coding for HEVC [D. Marpe, H. Kirchhoffer, B. Bross, V. George, T. Nguyen, M. Preiß, M. Siekmann, J. Stegemann, T. Wiegand (Fraunhofer HHI)]
This contribution presents a unified entropy coding architecture for HEVC. The proposed method is based on probability interval partitioning entropy (PIPE) coding and supports two operation modes: a low-complexity (LC) mode as a substitute for CAVLC and a high-efficiency (HE) mode as a substitute for CABAC. Both entropy coding modes of this unified architecture share the same syntax and semantics, the same binarization schemes as well as the usage of the same set of PIPE/V2V codes. In addition, a common process for initialization of probability models based on 8-bit initialization values is included. The HE operation mode is conceptually similar to CABAC in the sense that it provides the same degree of adaptivity, both by exploiting higher order statistical dependencies through the use of the same set of conditional probabilities and by adapting the model probabilities to the actual source statistics using the same probability estimation process. In the LC operation mode, all conditional probabilities based on prior encoded/decoded symbols as well as all adaptive probability models are replaced by appropriately initialized, fixed model probabilities. In terms of coding efficiency, both operation modes of the proposed unified entropy coding method provide approximately the same R-D performance as their currently specified counterparts. The reported decoder run times in HE and LC configuration allegedly indicate some advantages in terms of decoding complexity relative to CABAC and CAVLC, respectively.
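The core PIPE idea referred to above can be sketched as follows. This is a hypothetical illustration, not the JCTVC-F268 design: the interval boundaries and function names below are invented for the example. Each bin carries an estimated LPS probability and is routed to one of a small number of bin coders, each responsible for one probability interval, so that each partial bitstream can be coded with a simple code matched to its interval.

```python
BOUNDARIES = [0.05, 0.15, 0.30, 0.50]  # upper edges of 4 illustrative intervals

def coder_index(p_lps):
    """Map an LPS probability estimate to the index of its interval/bin coder."""
    for i, upper in enumerate(BOUNDARIES):
        if p_lps <= upper:
            return i
    return len(BOUNDARIES) - 1

def partition(bins_with_probs):
    """Route (bin, p_lps) pairs into per-interval bin sequences."""
    streams = [[] for _ in BOUNDARIES]
    for b, p in bins_with_probs:
        streams[coder_index(p)].append(b)
    return streams
```

The multiplexing question raised in the discussion below concerns how these partial bitstreams are interleaved back into one stream at the encoder and demultiplexed at the decoder, which this routing step alone does not address.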
Could the same not be done using CABAC (i.e. scaling it down in complexity)?
Several experts opined that in general, the idea of having a complexity-scalable entropy coder is interesting.
Is PIPE better parallelizable? (As before, there was no initial consensus on this.)
Question: How was the multiplexing problem of PIPE solved? (The contribution does not really seem to be answering this.)
May investigate in an AHG or CE? Discussed further in Tuesday JCT-VC plenary. We may define a CE if there is a specific experiment to be performed. However, the buffering aspect is critical to address. An AHG can more broadly study the subject area.
A suggestion was made (by T. Wiegand) to study the possibility of a complexity-scalable entropy coder in an AHG, and the performance of JCTVC-F176 and JCTVC-F268 compared to CAVLC in a CE.
Also relates to the question: Do we need two different entropy coders after all?
6.13.3.1.1.1.1.1.5 JCTVC-F762 Entropy Coders: How many do we need in HEVC? [K. McCann (Samsung/ZetaCast)] [late reg. 07-19, upload 07-19]
A new input JCTVC-F762 (Samsung) was presented: It was advocated that the best option would be to have only one entropy coder, the second best would be one complexity-scalable coder, and the third best would be two different coders. The two entropy coders have converged since the days of TMuC, approaching each other in terms of both complexity and performance. This could be investigated in an AHG coordinated with a CE.
One expert mentioned that the question of one or two entropy coders and the specific technology proposed here are two different issues.
The CE would have the main purpose of collecting data for such an assessment, not necessarily targeting adoption or removal of technology by the next meeting.
The question of how to assess complexity/throughput (incl. multiplexing in PIPE) is not really resolved.
6.13.3.1.1.1.1.1.6 JCTVC-F180 Cross-check of JCTVC-F268 [K. Sugimoto, S. Sekiguchi (Mitsubishi)] [late upload 07-12]