Day | Topic | Room | Time | Status
Monday | Video Plenary (planning of this week) | C.B.2 | 1230 – 1300 | DONE
| Joint meeting with Systems on RVC and MXM | Systems | 1600 – 1630 | DONE
| Review of MPEG-B contributions | Bellevue | 1630 – 2200 | DONE
| Review of MPEG-C contributions | | |
Tuesday | Study of FCD preparation | Bellevue | All day | DONE
| Workshop | C.B.2 | All day | DONE
Wednesday | Study of FCD preparation | Bellevue | 1100 – 1900 | DONE
Thursday | Study of FCD preparation | Bellevue | All day | DONE
| AhG mandates | Bellevue | |
| Workplan | Bellevue | |
Friday | Video Plenary | C.B.2 | |
Output Documents:

No. | Title | TBP | Available
| 23001-4 Codec Configuration Representation | |
10165 | Study Text of ISO/IEC FCD 23001-4 Codec Configuration Representation | N | 08/12/31
10168 | RVC Vision | Y | 08/12/31
| 23002-4 Video Tool Library | |
10169 | Study Text of ISO/IEC FCD 23002-4 Video Tool Library | N | 08/12/31
10170 | WD 3 of ISO/IEC 23002-4/Amd.1 (Conformance and Reference Software) | N | 08/12/31
10171 | WD 3 of ISO/IEC 23002-4/Amd.2 (Tools for MPEG-4 ASP, AVC HP and SVC) | N | 08/11/14
10172 | RVC Work Plan and FU Development Status | N | 08/10/17

7 Explorations – 3D Video
The goal of 3D video, as a first step towards a broader range of free-viewpoint television (FTV) applications, is to generate interpolated views from the available videos of a multiview camera configuration. The main target application is the upcoming generation of (auto-)stereoscopic displays, for which only a low number (1) of video sequences shall be transmitted, while rendering of additional views shall be enabled by associated depth information.

As a next step towards the development of such a system, a set of exploration experiments (EEs) was set up in Archamps, in which the available test sequences were used with the two available depth estimators and the associated view interpolation methods. Various degrees of sparseness (baseline distances between the cameras taken from the dense set) were investigated. A first round of experts viewing on stereoscopic displays was performed in Hannover. Since the results were not satisfactory even for the easiest case of small-baseline interpolation, further collaborative experiments were performed to improve the quality through better depth estimation (EE1) and improved view synthesis (EE2). In addition, an alternative method, layered depth video (as such an appropriate representation for a specific type of autostereoscopic display, but in principle usable for arbitrary view generation), was investigated in EE3. The results obtained through the effort of the group, as judged in another round of experts viewing in Busan, were as follows:
- EE1 & EE2:
  - Sub-pel accuracy in depth estimation gives a good improvement; temporal consistency processing produces artifacts
  - Some sequences are (almost) acceptable
  - It seems possible to continue with coding experiments
- EE3 (alternative method, layered-depth video):
  - More sequences acceptable than for EE1 & EE2, but the results are more divergent across the various types of sequences
The main conclusion is that for some sequences the quality of the "uncompressed" processing is largely acceptable. For those sequences, a first set of experiments compressing both the video and the depth maps will be performed until the next meeting. The new round of experiments was therefore designed as follows:
- EE1: Improvement of depth estimation
  - in particular, resolve the problem of temporal consistency
- EE2: Improvement of view interpolation
  - in particular, appropriate hole-filling and boundary processing methods
- EE3: Investigation of an alternative representation method: layered depth video (LDV)
  - for this, the full source code of a view rendering method will be provided to the participants
- EE4: Coding of video/depth data for part of the sequences
A general consensus was reached that this EE process targets (only) the development of anchors for a possible upcoming Call for Proposals (CfP), for which the software source must be openly available. Either the EE1/EE2 or the EE1/EE3 combination may therefore be used as the anchor, depending on which gives better quality and/or compression performance (once the results of EE4 are available). In this context, it is also envisioned that minimum quality expectations will be imposed on the renderer (e.g. projection of pixels, hole-filling approach, boundary processing). While it is open whether this is relevant for standardization (most likely, most parts of renderers will never be normative), one purpose of the EEs is also to find out which minimum operational/quality requirements can be expected according to the current state of the art.
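The two renderer operations mentioned above, projection of pixels and hole filling, can be sketched as follows. This is only an illustration under assumed conventions (a rectified horizontal camera setup, an 8-bit inverse-depth map quantized between znear and zfar, and a crude fill-from-the-left hole-filling rule); it is not the EE reference software, and all names and parameters are hypothetical:

```python
import numpy as np

def render_virtual_view(img, depth, f, baseline, znear, zfar):
    """Forward-warp a grayscale view `img` by per-pixel disparity derived
    from an 8-bit inverse-depth map `depth`, resolve collisions with a
    z-buffer, then fill holes from the nearest left neighbour."""
    h, w = img.shape
    # Assumed depth convention: 8-bit values encode inverse depth
    # linearly between znear (255) and zfar (0).
    z = 1.0 / (depth / 255.0 * (1.0 / znear - 1.0 / zfar) + 1.0 / zfar)
    disp = np.rint(f * baseline / z).astype(int)   # disparity in pixels
    out = np.full((h, w), -1.0)                    # -1 marks a hole
    nearest = np.full((h, w), np.inf)              # z-buffer
    for y in range(h):
        for x in range(w):
            xt = x - disp[y, x]                    # projected column
            if 0 <= xt < w and z[y, x] < nearest[y, xt]:
                nearest[y, xt] = z[y, x]           # nearer pixel wins
                out[y, xt] = img[y, x]
    # Hole filling: propagate the last visible pixel along each row
    # (holes at the left image border may remain unfilled).
    for y in range(h):
        for x in range(1, w):
            if out[y, x] < 0:
                out[y, x] = out[y, x - 1]
    return out
```

With a constant depth map the warp reduces to a uniform horizontal shift, which makes the projection and the resulting disocclusion band easy to inspect; real sequences of course produce per-pixel disparities and much more intricate hole patterns.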
A better understanding of the common target is still needed; the "vision" shall be discussed under the AHG mandates, and accordingly a vision document is expected for the next meeting.
As the first set of coding experiments necessary to produce anchors for a possible CfP will run between October and January, the shortest possible tentative timeline from the current perspective is as follows:
- Until 09/02: Perform first experiments to decide on bit rates for the MVC-based video-plus-depth anchors; selection of sequences
- 09/02: Experts viewing again, draft CfP
- Until 09/04: Further coding experiments, first set of test conditions
- 09/04: Draft CfP, refinement of test conditions
- 09/07: Final CfP
Documents reviewed
No. | Title | Authors
m15795 | Adaptive Non-uniform Quantization in Depth Format Conversion | Haitao Yang, Yilin Chang, Xiaoxian Liu, Shan Gao, Sixin Lin, Lianhuan Xiong
m15798 | Results of Exploration Experiments in 3D Video Coding for Dog Data Set | Philipp Merkle, Aljoscha Smolic, Yongzhe Wang, Karsten Müller
m15802 | 3DV EE3 results on lovebird1 and leavinglaptop sequences | Patrick Lopez, Guillaume Boisson
m15817 | 3DV/FTV EE report on Champagne Tower | Taka Senoh, Kenji Yamamoto, Ryutaro Oi, Tomoyuki Mishina, Makoto Okui
m15820 | Results of EEs in 3DV/FTV for Doorflowers | Shinya Shimizu, Hideaki Kimata
m15832 | 3DV/FTV EE1 and EE2 results on Alt Moabit sequence | Krzysztof Wegner, Olgierd Stankiewicz
m15834 | 3D Video Exploration Experiment on LDV of Champagne Tower sequence | Lu Yu, Yin Zhao
m15836 | Reference Software of Depth Estimation and View Synthesis for FTV/3DV | Masayuki Tanimoto, Toshiaki Fujii, Kazuyoshi Suzuki
m15837 | Depth Estimation to improve boundary clarification | Masayuki Tanimoto, Toshiaki Fujii, Kazuyoshi Suzuki
m15847 | EE1: Results on 'Pantomime' Sequence using Nagoya SW | Yun-Suk Kang, Cheon Lee, Yo-Sung Ho
m15850 | EE2: View Synthesis Results on 'Pantomime' Sequence using Thomson SW | Jae-Il Jung, Cheon Lee, Yo-Sung Ho
m15851 | View Synthesis Tools for 3D Video | Cheon Lee, Yo-Sung Ho
m15852 | Results of Experiment on Temporal Enhancement for Depth Estimation | Sang-Beom Lee, Yo-Sung Ho
m15855 | 3DV/FTV EE1/EE2 results on Lovebird1 and EE3 result on Leaving Laptop sequence | Gun Bang, Gi Mun Um, Jaeho Lee, Namho Hur, Jin Woong Kim
m15859 | 3DTV Exploration Experiments on Pantomime sequence | Ivana Radulovic, Per Fröjdh
m15880 | Results of Exploration Experiments in 3D Video for Lovebird2 | Sehoon Yea, Anthony Vetro
m15881 | 3DV EE1 & EE2 on Leaving_Laptop | Dong Tian, Joan Llach
m15882 | 3DV EE1 & EE2 results on Arrive Book | Fons Bruls, Lincoln Lobo
m15883 | Improvements on View Synthesis and LDV extraction Based on Disparity (ViSBD 2.0) | Dong Tian, Joan Llach, Fons Bruls, Meng Zhao
m15884 | 3DV EE3 LDV results on Arrive Book, Alt-Moa, Newspaper & Lovebird2 | Fons Bruls, Lincoln Lobo, Meng Zhao
m15886 | Improved View Synthesis Based on Disparity (ViSBD 2.0.beta) | Dong Tian, Joan Llach
m15887 | EE results on Newspaper | Jung Eun Lim, Jaewon Sung
m15888 | Improved view synthesis algorithm | Yong-Joon Jeon
Output documents:

No. | Title | TBP | Available
10173 | Description of Exploration Experiments in 3D Video Coding | N | 08/10/17