JCTVC-B020 [D. Alfonso (STMicro)] Proposals for video coding complexity assessment
This contribution discussed some methods that are reportedly commonly used to estimate the complexity of software applications, and it proposed a methodology for the complexity assessment of video coding software systems based on the "Valgrind" tool suite.
The contribution suggested the following proposals for JCT-VC consideration:
- To consider complexity assessment during the standardization process of HEVC and to evaluate contributions in terms of both coding efficiency and complexity efficiency.
- To define a clear procedure for complexity assessment, considering the present contribution as a starting point for further discussion.
- To specify the complexity assessment procedure in a document entitled e.g. "Recommended simulation common conditions for complexity efficiency experiments".
Measuring computational complexity by execution time may be misleading, as it includes e.g. CPU idle time, system access time, etc. This could partially be resolved by measuring user time only, but examples of repeated identical AVC encoder runs reportedly show that even these numbers can exhibit considerable variation. Other measures, such as cycles per instruction, are also highly CPU dependent.
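As an illustration of this point (not part of the contribution), the following Python sketch, assuming a Unix-like system and a hypothetical encoder command line, times repeated runs of the same encoding and reports both wall-clock time and user CPU time of the child process; the run-to-run variation is the effect described above.

```python
import resource
import subprocess
import time

def time_one_run(cmd):
    """Run a command once; return (wall-clock seconds, user CPU seconds) of the child."""
    user_before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    wall = time.perf_counter() - start
    user = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime - user_before
    return wall, user

# Hypothetical encoder invocation; repeated identical runs typically give
# noticeably different wall-clock times, and even the user time varies somewhat.
for wall, user in (time_one_run(["./encoder", "--cfg", "encoder.cfg"]) for _ in range(10)):
    print(f"wall: {wall:7.2f} s   user: {user:7.2f} s")
```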
It was suggested to measure the instruction count per program execution. With the proposed Valgrind-based profiling (an instruction and data cache simulator), the instruction count is, for example, identical to the number of accesses to the instruction cache. Furthermore, the number of data cache accesses is proportional to the memory bandwidth.
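A minimal sketch of how such counts could be collected with Valgrind's cache simulator (Cachegrind) is given below; the encoder command line is hypothetical, and the parsing assumes Valgrind's usual "I refs" / "D refs" summary printed on stderr.

```python
import re
import subprocess

def cachegrind_counts(cmd):
    """Run a command under Cachegrind and return (instruction refs, data refs)."""
    proc = subprocess.run(["valgrind", "--tool=cachegrind"] + cmd,
                          capture_output=True, text=True)
    def count(label):
        m = re.search(label + r"\s+refs:\s+([\d,]+)", proc.stderr)
        return int(m.group(1).replace(",", "")) if m else None
    return count("I"), count("D")

# I refs = executed instructions (instruction-cache accesses);
# D refs = data-cache accesses, taken here as a proxy for memory bandwidth.
i_refs, d_refs = cachegrind_counts(["./encoder", "--cfg", "encoder.cfg"])
print("instruction count:", i_refs, "  data accesses:", d_refs)
```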
The results also reportedly show that the index of dispersion (indicating variation) is very low in these measurements.
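For reference, the index of dispersion is the variance-to-mean ratio of repeated measurements; the sketch below shows the computation on purely illustrative numbers (not data from the contribution).

```python
from statistics import mean, pvariance

def index_of_dispersion(samples):
    """Variance-to-mean ratio; values close to zero indicate a stable, repeatable measure."""
    return pvariance(samples) / mean(samples)

# Illustrative values only: repeated wall-clock times (seconds) fluctuate,
# whereas repeated instruction counts for the same run are nearly identical.
wall_times = [41.2, 39.8, 44.5, 40.1, 42.7]
instruction_counts = [8_123_456_789, 8_123_456_801, 8_123_456_795]
print("dispersion of wall-clock time:   ", index_of_dispersion(wall_times))
print("dispersion of instruction counts:", index_of_dispersion(instruction_counts))
```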
It was remarked that we should be cautious about using a particular simulation software as a substitute for true algorithmic complexity.
One subject that was raised in discussion was how to estimate hardware implementation complexity. It was remarked that this methodology does not give a means to measure hardware complexity.
It was also remarked that the tool measures implementation complexity rather than algorithmic complexity, i.e. it provides results that depend on the degree of optimization.
A participant asked whether the Valgrind tool provides consistent results across different machines, and it was reported that it should.
It was noted that the software tool that was proposed involves a dramatic slow-down of running speed, while our software is already very slow. CPU-specific profiling can also provide some useful information, although not with cross-platform consistent results.
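As an example of the CPU-specific profiling mentioned here (an assumed typical setup, not part of the contribution), hardware performance counters can be read at close to native speed with Linux perf, although the resulting counts depend on the particular CPU and kernel rather than being cross-platform consistent.

```python
import subprocess

# Count retired instructions with hardware counters via Linux 'perf stat'.
# Overhead is small, but results are CPU-specific, unlike a cache simulator.
# The encoder command line is hypothetical.
cmd = ["perf", "stat", "-e", "instructions", "./encoder", "--cfg", "encoder.cfg"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stderr)  # perf stat writes its counter summary to stderr
```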
It was noted that the proponent did not assert that a complexity measurement methodology should be considered a substitute for discussion and expert analysis.
It was suggested to investigate this type of analysis for longer-term application, although at present it is quite evident that the current software is so inefficient that this kind of analysis is unnecessary or inappropriate for identifying issues.
To make it practical, would it be sufficient to run on only a few sequences? Perhaps not, as there is certainly sequence dependency.
It was questioned how much additional information this methodology would give. The current TMuC, for example, has many places where reductions in run-time could easily be made. These places can also be found by conventional profiling (without a need for a cache simulator, and without increasing runtime substantially).
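For comparison, a minimal sketch of the conventional profiling route mentioned above, using a sampling profiler (Linux perf) that runs at close to native speed; the command lines and encoder path are assumptions.

```python
import subprocess

# Sample where time is spent, then print a per-function breakdown.
# 'perf record -g' captures call graphs; 'perf report --stdio' prints text output.
subprocess.run(["perf", "record", "-g", "--", "./encoder", "--cfg", "encoder.cfg"], check=True)
report = subprocess.run(["perf", "report", "--stdio"], capture_output=True, text=True)
print(report.stdout)
```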
Activity in this area, at least in the longer term, should include analysis of both software and hardware complexity.
It was agreed to establish an ad hoc group on complexity measurement to further investigate these issues.