Video Coding Standards, K. R. Rao, Do Nyeon Kim and Jae Jeong Hwang, Springer, 2014




AVS China:


  1. AVS Video Expert Group, “Information Technology – Advanced coding of audio and video – Part 2: Video (AVS1-P2 JQP FCD 1.0),” Audio Video Coding Standard Group of China (AVS), Doc. AVS-N1538, Sep. 2008.

  2. AVS Video Expert Group, “Information technology – Advanced coding of audio and video – Part 3: Audio,” Audio Video Coding Standard Group of China (AVS), Doc. AVS-N1551, Sep. 2008.

  3. L. Yu et al., “Overview of AVS-Video: Tools, performance and complexity,” SPIE VCIP, vol. 5960, pp. 596021-1 to 596021-12, Beijing, China, July 2005.

  4. L. Fan, S. Ma and F. Wu, “Overview of AVS video standard,” IEEE Int’l Conf. on Multimedia and Expo, ICME '04, vol. 1, pp. 423–426, Taipei, Taiwan, June 2004.

  5. W. Gao et al., “AVS – The Chinese next-generation video coding standard,” National Association of Broadcasters, Las Vegas, 2004.

  6. AVS-China official website: http://www.avs.org.cn

  7. AVS-China software download: ftp://159.226.42.57/public/avs_doc/avs_software

[AVS8] IEEE Standards Activities Board, “AVS China – Draft Standard for Advanced Audio and Video Coding”, adopted by IEEE as standard IEEE 1857. Contact: stds.ipr@ieee.org

  1. S. Ma, S. Wang and W. Gao, “Overview of IEEE 1857 video coding standard,” IEEE ICIP, pp. 1500-1504, Sept. 2013.

[AVS9] D. Sahana and K. R. Rao, “A study on AVS-M standard”, in Calin Enachescu, Florin Gheorghe Filip and Barna Iantovics (Eds.), Advanced Computational Technologies, Romanian Academy Publishing House, pp. 311-322, Bucharest, Romania, 2012.

[AVS10] Swaminathan Sridhar, “Multiplexing/De-multiplexing AVS video and AAC audio while maintaining lip sync”, M.S. Thesis, EE Dept., University of Texas at Arlington, Arlington, Texas, Dec. 2010.

[AVS11] W. Gao and S. Ma, “Advanced video coding systems”, Springer, 2015.

References on Screen Content Coding


[SCC1] D. Flynn, J. Sole and T. Suzuki, “High efficiency video coding (HEVC) range extensions text specification”, Draft 4, JCT-VC. Retrieved 2013-08-07.

[SCC2] M. Budagavi and D.-Y. Kwon, “Intra motion compensation and entropy coding improvements for HEVC screen content coding”, IEEE PCS, pp. 365-368, San Jose, CA, Dec. 2013.

[SCC3] M. Naccari et al., “Improving inter prediction in HEVC with residual DPCM for lossless screen content coding”, IEEE PCS, pp. 361-364, San Jose, CA, Dec. 2013.

[SCC4] T. Lin et al., “Mixed chroma sampling-rate high efficiency video coding for full-chroma screen content”, IEEE Trans. CSVT, vol. 23, pp. 173-185, Jan. 2013.

[SCC5] Braeckman et al., “Visually lossless screen content coding using HEVC base-layer”, IEEE VCIP, pp. 1-6, Kuching, Malaysia, 17-20 Nov. 2013.

[SCC6] W. Zhu et al., “Screen content coding based on HEVC framework”, IEEE Trans. Multimedia, vol. 16, pp. 1316-1326, Aug. 2014 (several papers related to MRC: mixed raster coding).

[SCC7] ITU-T Q6/16 Visual Coding and ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio.

[SCC8] “Joint Call for Proposals for Coding of Screen Content”, approved by ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q6/16, San Jose, 17 January 2014. Final draft standard expected: late 2015.

[SCC9] F. Zou et al., “Hash based intra string copy for HEVC based screen content coding,” IEEE ICME (workshop), Torino, Italy, June 2015.

[SCC10] T. Lin et al., “Arbitrary shape matching for screen content coding,” Picture Coding Symposium (PCS), pp. 369-372, Dec. 2013.

[SCC11] T. Lin, X. Chen and S. Wang, “Pseudo-2D-matching based dual-coder architecture for screen contents coding,” IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 1-4, July 2013.

[SCC12] B. Li, J. Xu and F. Wu, “1D dictionary mode for screen content coding,” Visual Communication and Image Processing Conference, pp. 189-192, Dec. 2014.

[SCC13] Y. Chen and J. Xu, “HEVC screen content coding Core Experiment 4 (SCCE4): String matching for sample coding,” JCTVC-Q1124, Apr. 2014.

[SCC14] H. Yu et al., “Common conditions for screen content coding tests,” JCTVC-Q1015, Mar. 2014.

[SCC15]IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS)

Special Issue on Screen Content Video Coding and Applications: Final papers are due July 2016.

Guest Editors

Wen-Hsiao Peng wpeng@cs.nctu.edu.tw National Chiao Tung University, Taiwan

Ji-Zheng Xu jzxu@microsoft.com Microsoft Research Asia, China

Jörn Ostermann ostermann@tnt.uni-hannover.de Leibniz Universität Hannover, Germany

Robert Cohen cohen@merl.com Mitsubishi Electric Research Laboratories, USA

Important dates

- Manuscript submissions due 2016-01-22

- First round of reviews completed 2016-03-25

- Revised manuscripts due 2016-05-13

- Second round of reviews completed 2016-07-08

- Final manuscripts due 2016-07-22


[SCC16] J. Nam, D. Sim and I.V. Bajic, “HEVC-based Adaptive Quantization for Screen Content Videos,” IEEE Int. Symp. on Broadband Multimedia Systems, pp. 1-4, Seoul, Korea, 2012.

[SCC17] S.-H. Tsang, Y.-L. Chan and W.-C. Siu, “Fast and efficient intra coding techniques for smooth regions in screen content coding based on boundary prediction samples”, IEEE ICASSP 2015, Brisbane, Australia, Apr. 2015.


[SCC18] HEVC SCC Extension Reference software

https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/tags/HM-15.0+RExt-8.0+SCM-2.0/

[SCC19] HEVC SCC Software reference manual



https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/branches/HM-SCC-extensions/doc/software-manual.pdf

[SCC20] K. Rapaka et al., “Improved block copy and motion search methods for HEVC screen content coding,” [9599-49], SPIE Optics + Photonics, San Diego, California, USA, 9-13 Aug. 2015. Website: www.spie.org/op

[SCC21] B. Li, J. Xu and G. J. Sullivan, “Performance analysis of HEVC and its format range and screen content coding extensions”, [9599-45], SPIE Optics + Photonics, San Diego, California, USA, 9-13 Aug. 2015. Website: www.spie.org/op

[SCC22] S. Hu et al, “Screen content coding for HEVC using edge modes”, IEEE ICASSP, pp. 1714-1718, 26-31 May 2013.

[SCC23] H. Chen, A. Saxena and F. Fernandes, “Nearest-neighbor intra prediction for screen content video coding”, IEEE ICIP, pp. 3151-3155, Paris, 27-30 Oct. 2014.

[SCC24] X. Zhang, R. A. Cohen and A. Vetro, “Independent uniform prediction mode for screen content video coding,” IEEE Visual Communications and Image Processing Conference, pp. 129-132, 7-10 Dec. 2014.

[SCC25] J. Nam, D. Sim and I. V. Bajic, “HEVC-based adaptive quantization for screen content videos,” IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), pp. 1-4, June 2012.

[SCC26] L. Guo et al., “Color palette for screen content coding”, IEEE ICIP, pp. 5556-5560, Oct. 2014.

[SCC27] D.-K. Kwon and M. Budagavi, “Fast intra block copy (intra BC) search for HEVC screen content coding”, IEEE ISCAS, pp. 9-12, June 2014.

[SCC28] J.-W. Kang, “Sample selective filter in HEVC intra-prediction for screen content video coding,” Electronics Letters, vol. 51, no. 3, pp. 236-237, Feb. 2015.

[SCC29] H. Zhang et al., “HEVC-based adaptive quantization for screen content by detecting low contrast edge regions,” IEEE ISCAS, pp. 49-52, May 2013.

[SCC30] Z. Ma et al., “Advanced screen content coding using color table and index map,” IEEE Trans. on Image Processing, vol. 23, no. 10, pp. 4399-4412, Oct. 2014.

[SCC31] R. Joshi et al., “Screen content coding test model 3 encoder description (SCM 3),” Joint Collaborative Team on Video Coding, JCTVC-S1014, Strasbourg, FR, 17-24 Oct. 2014.

[SCC32] S. Wang et al., “Joint chroma downsampling and upsampling for screen content image”, IEEE Trans. on CSVT (Early Access).

[SCC33] H. Yang, Y. Fang and W. Lin, “Perceptual quality assessment of screen content images,” IEEE Trans. on Image Processing (Early Access).

[SCC34] ISO/IEC JTC1/SC29/WG11 Requirements Subgroup, “Requirements for an extension of HEVC for coding of screen content,” MPEG 109th meeting, 2014.

[SCC35] H. Yang et al., “Subjective quality assessment of screen content images,” International Workshop on Quality of Multimedia Experience (QoMEX), 2014.

[SCC36] “SIQAD”: Perceptual quality assessment of screen content images, https://sites.google.com/site/subjectiveqa/

[SCC37] S. Kodpadi, “Evaluation of coding tools for screen content in High Efficiency Video Coding” M.S. Thesis, EE Dept., University of Texas at Arlington, Arlington, Texas, Dec. 2015.

[SCC38] N.N. Mundegemane, “Multi-stage prediction scheme for Screen Content based on HEVC”, M.S. Thesis, EE Dept., University of Texas at Arlington, Arlington, Texas, Dec. 2015.



Access to JCT-VC resources:

Access to JCT-VC documents: http://phenix.it-sudparis.eu/jct/

Bug tracking: http://hevc.hhi.fraunhofer.de/trac/hevc

Email list: http://mailman.rwth-aachen.de/mailman/listinfo/jct-vc


BEYOND HEVC:


[BH1] J. Chen et al., “Coding tools investigation for next generation video coding based on HEVC”, [9599-47], SPIE Optics + Photonics, San Diego, California, USA, 9-13 Aug. 2015.

[BH2] A. Alshin et al., “Coding efficiency improvements beyond HEVC with known tools,” [9599-48], SPIE Optics + Photonics, San Diego, California, USA, 9-13 Aug. 2015. References [2] through [8] therein describe various proposals related to NGVC (beyond HEVC) as JCT-VC documents presented in Warsaw, Poland, June 2015. Several projects can be implemented or explored based on [BH2] and these references.


Projects on BEYOND HEVC:


[BH-P1] In [BH2] several tools (some of them straightforward extensions of those adopted in HEVC) are considered for NGVC (beyond HEVC). These include larger CU and TU sizes, up to 64 adaptive intra directional predictions, multi-hypothesis probability estimation for CABAC, bi-directional optical flow, a secondary transform, a rotational transform and multi-parameter intra prediction. Go through [BH2] in detail and evaluate the performance of each new tool for the all-intra, random access, low delay B and low delay P configurations (see Table 1). Compare the computational complexity of each tool with that of HEVC.
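Tool-by-tool performance comparisons of this kind are customarily reported as Bjøntegaard-delta (BD) rates between the anchor and the tested configuration. A minimal sketch of the standard BD-rate computation (the rate-distortion points below are purely illustrative, not taken from the cited tables):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate in percent: average bitrate difference of
    the test codec vs. the anchor over the overlapping quality range.
    log10(rate) is fitted as a cubic polynomial in PSNR and integrated."""
    p1 = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
    p2 = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    int1 = np.polyval(np.polyint(p1), [lo, hi])  # definite integrals of the
    int2 = np.polyval(np.polyint(p2), [lo, hi])  # two fitted log-rate curves
    diff = ((int2[1] - int2[0]) - (int1[1] - int1[0])) / (hi - lo)
    return (10.0 ** diff - 1.0) * 100.0

# Illustrative RD points (kbps, dB): the test codec needs twice the rate
# of the anchor at every quality level, so BD-rate comes out at +100 %.
anchor_rd = ([1000, 2000, 4000, 8000], [34.0, 36.5, 38.5, 40.0])
test_rd = ([2000, 4000, 8000, 16000], [34.0, 36.5, 38.5, 40.0])
print(round(bd_rate(anchor_rd[0], anchor_rd[1], test_rd[0], test_rd[1]), 1))  # → 100.0
```

In practice each project would feed in the four rate/PSNR pairs produced by the HM common-test-condition QP sweep for the anchor and for the tool under test.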

[BH-P2] See [BH-P1]. Evaluate performance impact of enlarging CU and TU sizes (see Table 2). Also consider computational complexity.

[BH-P3] See [BH-P1]. Evaluate performance impact of fine granularity Intra prediction vs HEVC with enlarged CU and TU sizes (see Table 3). Also consider computational complexity.

[BH-P4] See [BH-P1]. Evaluate performance impact of multi-hypothesis probability estimation vs HEVC with enlarged CU and TU sizes (see Table 4). Also consider computational complexity.

[BH-P5] See [BH-P1]. Evaluate performance impact of bi-directional optical flow vs HEVC with enlarged CU and TU sizes (see Table 6). Also consider computational complexity.

[BH-P6] See [BH-P1]. Evaluate performance impact of implicit secondary transform vs HEVC with enlarged CU and TU sizes (see Table 7). Also consider computational complexity.

[BH-P7] See [BH-P1]. Evaluate performance impact of explicit secondary transform vs HEVC with enlarged CU and TU sizes (see Table 9). Also consider computational complexity.

[BH-P8] See [BH-P1]. Evaluate performance impact of multi-parameter Intra prediction vs HEVC with enlarged CU and TU sizes (see Table 11). Also consider computational complexity.

[BH-P9] See [BH-P1]. Evaluate the joint performance impact of all tested tools on top of HEVC (see Table 6). Also consider computational complexity.

[BH-P10] Similar to [BH2], several coding tools (some of which were considered earlier during the initial development of HEVC) are investigated for NGVC based on HEVC [BH1]. These tools include large CTU and TU, adaptive loop filter, advanced temporal motion vector prediction, cross-component prediction, overlapped block motion compensation and adaptive multiple transforms. The overall coding performance improvement from these additional tools, in terms of BD-rate for class A through F video test sequences, is listed in Table 3. Class F sequences include synthetic (computer-generated) video. The performance improvement of each tool for random access, all intra and low-delay B is also listed in Table 4. Evaluate the performance improvement of these tools [BH1] over HM 16.4 and verify the results shown in Table 3.

[BH-P11] See [BH-P10]. Evaluate the performance improvement of each tool for All Intra, Random Access and Low-delay B described in Table 4. Test sequences for screen content coding (similar to class F in Table 3) can be accessed from

http://pan.baidu.com/share/link?shareid=3128894651&uk=889443731 .

[BH-P12] See [BH-P10] and [BH-P11]. Consider implementation complexity as another metric, as these additional tools invariably result in increased complexity. Evaluate this complexity for all classes (A through F) – see Table 3 and for All Intra, Random Access and low-delay B cases (see Table 4). See the papers below related to proposed requirements for Next Generation Video coding (NGVC).

  1. ISO/IEC JTC1/SC29/WG11, “Proposed Revised Requirements for a Future Video Coding Standard”, MPEG Doc. M36183, Warsaw, Poland, June 2015.

2. M. Karczewicz and M. Budagavi, “Report of AHGI on Coding Efficiency Improvements,” VCEG-AZ01, Warsaw, Poland, June 2015.

  3. J.-R. Ohm et al., “Report of AHG on Future Video Coding Standardization Challenges,” MPEG Doc. M36782, Warsaw, Poland, June 2015.



[BH-P13] In [BH1], for larger-resolution video such as 4K and 8K, a 64x64 INT DCT (integer approximation) is proposed to complement the existing transforms (up to 32x32 INT DCT) and further improve coding efficiency. Budagavi and Sze developed a unified forward and inverse transform architecture (2-D 32x32 INT DCT) for HEVC (see IEEE ICIP 2012), resulting in simpler hardware than implementing the forward and inverse transforms separately. See also Chapter 6, “HEVC transform and quantization”, by Budagavi, Fuldseth and Bjøntegaard in [E202]. Implement a similar unified forward and inverse transform architecture for the 2-D 64x64 INT DCT and evaluate the hardware savings compared with separate forward and inverse implementations.
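The unified-architecture idea rests on the fact that the inverse integer transform is, up to scaling, the transpose of the forward one, so a single matrix datapath can serve both directions. A small numerical sketch using the 4x4 HEVC core transform matrix (this only illustrates the principle; the pipelined architecture of the cited ICIP 2012 paper is not reproduced here):

```python
import numpy as np

# 4x4 HEVC core transform matrix (integer approximation of the DCT-II;
# the coefficient values 64, 83, 36 are those of the HEVC specification).
M = np.array([[64,  64,  64,  64],
              [83,  36, -36, -83],
              [64, -64, -64,  64],
              [36, -83,  83, -36]], dtype=np.int64)

# The rows are mutually orthogonal, so M @ M.T is diagonal with entries
# close to 128**2 = 16384; the inverse transform is therefore just the
# transpose followed by a normalising shift, which is what allows the
# forward and inverse datapaths to share one multiplier array.
gram = M @ M.T
assert np.abs(gram - np.diag(np.diag(gram))).max() == 0  # off-diagonals vanish

# Round trip on a sample 1-D residual: inverse = transpose + scaling.
x = np.array([10, -3, 7, 0], dtype=np.int64)
y = M @ x                      # forward transform
x_rec = (M.T @ y) / 16384.0    # inverse transform (normalised)
print(np.abs(x_rec - x).max())  # reconstruction error is tiny, not zero
```

The residual reconstruction error is non-zero only because the integer coefficients are a rounded approximation of the true DCT basis; HEVC absorbs the normalisation into bit shifts at each transform stage.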

[BH-P14] Refer to Figs. 6.7 and 6.8 of the Chapter 6 cited in [BH-P13]. These QM (quantization matrices) are based on the visual sensitivity of the transform coefficients. Similar QM have been proposed/adopted in JPEG, MPEG-1/2/4, AVS China, etc. See also the book by Rao and Yip, “Discrete Cosine Transform”, Academic Press, 1990, wherein the theory behind developing the QM is explained, as well as the related references at the end of Chapter 6 in [E202]. The QM (Fig. 6.8) for the 16x16 and 32x32 transform block sizes are obtained by replicating the QM for the 8x8 transform block size; this extension reduces the memory needed to store them. Develop QM for the 16x16 and 32x32 transform block sizes independently, based on the visual sensitivities (perception) of their coefficients.
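The replication referred to above can be sketched in a few lines: the larger default matrices are obtained by repeating each 8x8 entry over a 2x2 (for 16x16) or 4x4 (for 32x32) block. A minimal sketch, using the 8x8 values of HEVC's default intra scaling list (in the standard the DC entry of the up-sampled lists is additionally signalled separately, which this sketch omits):

```python
import numpy as np

# HEVC default intra 8x8 scaling list (quantization matrix), as given
# in the HEVC specification (cf. Fig. 6.7 of the chapter cited above).
QM8 = np.array([
    [16, 16, 16, 16, 17, 18, 21, 24],
    [16, 16, 16, 16, 17, 19, 22, 25],
    [16, 16, 17, 18, 20, 22, 25, 29],
    [16, 16, 18, 21, 24, 27, 31, 36],
    [17, 17, 20, 24, 30, 35, 41, 47],
    [18, 19, 22, 27, 35, 44, 54, 65],
    [21, 22, 25, 31, 41, 54, 70, 88],
    [24, 25, 29, 36, 47, 65, 88, 115],
], dtype=np.int64)

def replicate(qm8, size):
    """Derive a size x size QM by repeating each 8x8 entry over an
    f x f block (f = size/8), as HEVC does for 16x16 and 32x32."""
    f = size // 8
    return np.kron(qm8, np.ones((f, f), dtype=qm8.dtype))

QM16 = replicate(QM8, 16)
QM32 = replicate(QM8, 32)
print(QM16.shape, QM32.shape)  # (16, 16) (32, 32)
```

The project then amounts to replacing `replicate` with matrices derived from contrast-sensitivity measurements at the 16x16 and 32x32 coefficient frequencies, rather than reusing the 8x8 sensitivities.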

[BH-P15] See [BH-P14]. Develop a QM for the 64x64 INT DCT reflecting the visual perception of these transform coefficients. Again refer to the book by K. R. Rao and P. Yip, “Discrete Cosine Transform”, Academic Press, 1990.

[BH-P16] See [BH-P13] and [BH-P14]. In Fig. 6.2 (page 149) of Chapter 6 in [E202], the 4x4, 8x8 and 16x16 INT DCTs are embedded in the 32x32 INT DCT. Develop a 64x64 INT DCT wherein the smaller-size INT DCTs are embedded. Is it orthogonal? What are the norms of the 64 basis vectors?
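The orthogonality and norm questions can be checked numerically once a candidate matrix is constructed. The sketch below builds a generic scaled integer DCT by rounding the scaled DCT-II (an assumed construction for illustration, not the specific embedded 64x64 design the project asks for) and reports the Gram off-diagonals and basis-vector norms:

```python
import numpy as np

def int_dct(n, scale=256):
    """Scaled integer approximation of the n-point DCT-II,
    obtained by rounding the orthonormal DCT scaled by `scale`."""
    k = np.arange(n).reshape(-1, 1)          # frequency index
    i = np.arange(n).reshape(1, -1)          # sample index
    c = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    c[0, :] /= np.sqrt(2.0)                  # DC row normalisation
    return np.round(scale * np.sqrt(2.0 / n) * c).astype(np.int64)

def orthogonality_report(m):
    """Largest off-diagonal Gram entry (0 iff the rows are exactly
    orthogonal) and the norms of the basis vectors (rows)."""
    gram = m @ m.T
    off = np.abs(gram - np.diag(np.diag(gram))).max()
    return off, np.sqrt(np.diag(gram).astype(float))

M64 = int_dct(64)
off, norms = orthogonality_report(M64)
print(off)                       # rounding leaves small non-zero residues
print(norms.min(), norms.max())  # row norms cluster around the scale, 256
```

Replacing `int_dct` with the embedded 64x64 design and re-running `orthogonality_report` answers both questions directly; a candidate with `off == 0` is exactly orthogonal.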

[BH-P17] See Y. Sugito et al., “A study on addition of 64x64 transform to HM 3.0”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JCTVC-F192, Torino, Italy, July 2011, which is reference [15] in [BH2]. Is the 64x64 INT DCT orthogonal? Does it have the embedding property?

Post-HEVC activity


Both MPEG and VCEG have established AHGs (ad hoc groups) for exploring next generation video coding.

Grois et al. (see item 7 under tutorials) suggest focusing on perceptual models, perceptual quality and perceptually optimized video compression; see the PROVISION project: http://www.provision-itn.eu

PROVISION is a network of leading academic and industrial organizations in Europe, including international researchers working on open problems in state-of-the-art video coding technologies. The ultimate goal is to make noteworthy technical advances and further improve the existing state-of-the-art techniques for compressing video material.

ACKNOWLEDGEMENTS



The graduate students in the Multimedia Processing Lab., University of Texas at Arlington, Arlington, Texas, have been extremely helpful in all aspects of updating/revising this chapter, including adding additional projects and references. Special thanks go to M. Budagavi, Samsung Research Lab; T. Richter, University of Stuttgart, Stuttgart, Germany; W. Gao, Peking University, Beijing; D. Mukherjee, Google Inc.; D. Grois, Fraunhofer Heinrich Hertz Institute (HHI); G. J. Sullivan, Microsoft Inc.; T. Borer, BBC, UK; Z. Wang, Univ. of Waterloo, Waterloo, Canada; and P. Topiwala, FastVDO, for providing various resources in this regard.


