CE1: Key point extraction
m31987 CDVS: An invariant low memory implementation of the ALP detector with a simplified usage interface [Massimo Balestri, Gianluca Francini, Skjalg Lepsøy (Telecom Italia), Keundong Lee, Sang-il Na, Seungjae Lee (ETRI)]
The first objective of this contribution is to reduce memory usage while producing the same results as TM8.
Changes include: 1) the order of feature selection and SIFT descriptor extraction has been reversed, 2) the first octave is split into four parts, 3) memory is allocated once per octave, and 4) LoG filtering and extrema detection reuse a shared buffer.
A trade-off between memory and speed was discussed.
Images are divided into stripes, and five buffers are needed: four scales (G1, G2, G3, G4) plus a temporary buffer. Why four stripes? Because only the first octave is split.
The typical gain factor (memory reduction) was reportedly around 10. The memory reduction comes at the cost of a speed drop of roughly a factor of 2.5.
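As a rough illustration of the stripe-based scheme, here is a minimal sketch assuming a simple filtering pipeline; the stripe height (37 lines) and the five buffers follow the description above, while all identifiers are hypothetical rather than the actual TM code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal sketch of stripe-based scale-space filtering with buffer reuse.
// Five buffers are kept: four filtered scales (G1..G4) plus one temporary,
// allocated once per octave and reused for every stripe.
class StripeScaleSpace {
public:
    static constexpr int kStripeHeight = 37; // lines per stripe, filter border included
    static constexpr int kNumScales = 4;     // G1, G2, G3, G4

    explicit StripeScaleSpace(int width) : width_(width) {
        // Allocation happens once per octave, not once per stripe.
        for (auto& s : scales_)
            s.assign(static_cast<std::size_t>(width_) * kStripeHeight, 0.f);
        tmp_.assign(static_cast<std::size_t>(width_) * kStripeHeight, 0.f);
    }

    void processOctave(const float* image, int height) {
        for (int y0 = 0; y0 < height; y0 += kStripeHeight) {
            const int rows = std::min(kStripeHeight, height - y0);
            for (int s = 0; s < kNumScales; ++s) {
                // Placeholder for LoG filtering of one stripe; tmp_ is the
                // shared scratch buffer reused across all scales.
                filterStripe(image + static_cast<std::size_t>(y0) * width_,
                             rows, scales_[s].data(), tmp_.data());
            }
            // Extrema detection reuses the same stripe buffers.
            detectExtrema(rows);
        }
    }

private:
    void filterStripe(const float*, int, float*, float*) { /* stub */ }
    void detectExtrema(int) { /* stub */ }

    int width_;
    std::vector<float> scales_[kNumScales];
    std::vector<float> tmp_;
};
```

Peak memory is then proportional to the image width times the stripe height rather than to the full image, which is consistent with the reported factor-10 reduction; the per-stripe overhead explains the speed penalty.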
Q: Local filtering is performed on a stripe of 37 lines; is this sufficient to preserve results? A: Tests showed that exactly the same key points were detected and the overall results (TPR, mAP) were unchanged.
Q: What was the reduction on the CDVS dataset? A: The reported reduction factor was 9.5.
Per m32039 a full crosscheck was reportedly successful.
This is a software optimization and maintenance proposal, not a technical proposal. It will be used, if applicable, in the final software implementation of the feature detector as a low-memory option, but not as the default option: for the TM, software speed and readability have higher priority than very low memory usage, so it will be released as an optional low-memory-usage branch.
m32039 CDVS: Crosscheck of Telecom Italia and ETRI response to CE1 (m31987) [Alessandra Mosca, Massimo Mattelliano]
Crosscheck noted; results confirmed.
m31991 CDVS: ETRI and TI’s Response to CE1 – A fast feature extraction based on ALP detector [Keundong Lee, Sang-il Na, Seungjae Lee, Weon-Geun Oh (ETRI), Massimo Balestri, Gianluca Francini, Skjalg Lepsøy (Telecom Italia)]
Speedup is obtained by: 1) preliminary feature selection, 2) efficient partial gradient computation (an implementation change), and 3) speeding up the SIFT computation.
TM8 vs. Fast-TM8: 1.54× speedup.
TM8 vs. fast low-memory TM8: 1.13× speedup, with a memory gain of approximately a factor of 9.
All performance results are reportedly within 0.1%.
Comment: preliminary feature selection would change the pipeline under ballot. Exactly identical results cannot be guaranteed; however, the results were within 0.1% for all tests.
Two elements (2 and 3) are related to the software implementation and should be implemented in the TM.
Preliminary feature selection (1) falls into the category of functional approximation and keeps results within 0.1%. Deeper consideration is needed of how to support functional approximation solutions in the CD, how to ensure interoperability, and how to define conformance. It was agreed to insert this software into the TM implementation as a non-default, optional switch at the whole-image level.
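A minimal sketch of the preliminary-selection idea (select first, describe later); the scoring function and all names are assumptions for illustration, not the actual TM8 code:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical illustration of preliminary feature selection: key points
// are ranked by a cheap relevance score, and the expensive SIFT descriptor
// is computed only for the surviving candidates.
struct KeyPoint { float x, y, sigma, response; };
using Descriptor = std::array<unsigned char, 128>;

static float relevanceScore(const KeyPoint& kp) {
    // Cheap proxy for the final feature-selection score (assumption:
    // the detector response; the real selection uses more attributes).
    return kp.response;
}

static Descriptor computeSift(const KeyPoint&) {
    return {}; // placeholder for the expensive descriptor computation
}

std::vector<Descriptor> extractWithPreselection(std::vector<KeyPoint> kps,
                                                std::size_t maxFeatures) {
    if (kps.size() > maxFeatures) {
        // Keep only the best candidates before descriptor extraction.
        std::nth_element(kps.begin(),
                         kps.begin() + static_cast<std::ptrdiff_t>(maxFeatures),
                         kps.end(),
                         [](const KeyPoint& a, const KeyPoint& b) {
                             return relevanceScore(a) > relevanceScore(b);
                         });
        kps.resize(maxFeatures);
    }
    std::vector<Descriptor> out;
    out.reserve(kps.size());
    for (const auto& kp : kps) out.push_back(computeSift(kp));
    return out;
}
```

Because selection happens before the descriptors exist, the final feature set can differ slightly from the original pipeline, which matches the observed deviations of at most 0.1%.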
Cross-verified as reported in m32266.
m32266 Crosscheck of ETRI and TI’s Proposal m31991 Response to CE1 [Jie Chen, Ling-Yu Duan]
Cross-check noted; results confirmed.
m32262 Peking University Response to CE1: Study on Interest Point Detector in TM8 [Jie Chen, Ling-Yu Duan, Tiejun Huang, Wen Gao]
This contribution presented an analysis and simplification of TM8.0 and an analysis of the impact of ALP: refinement of the extrema to subpixel precision, with two additional attributes.
The contribution proposes to use a Taylor expansion in the refinement stage. It was asked whether the change of filter could cause some drop in performance.
CurveSigma was removed in the contribution to simplify the selection process.
Results comparing TM8 with the Taylor-based refinement:
Taylor expansion: TPR −0.19%, localization −0.03%, mAP unchanged, top match −0.02%.
Curve ratio is useful but CurveSigma is not: TPR 0.27%, localization +0.16%, mAP +0.02%, top match +0.30%.
Proposal: replace extrema refinement with Taylor expansion.
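For reference, the standard second-order Taylor refinement (as used in Lowe's SIFT; presumably the form intended here) fits a quadratic around a detected extremum $\mathbf{x} = (x, y, \sigma)^T$ and solves for the subpixel offset:

$$D(\mathbf{x}) \approx D + \frac{\partial D}{\partial \mathbf{x}}^{T}\mathbf{x} + \frac{1}{2}\,\mathbf{x}^{T}\,\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\,\mathbf{x}, \qquad \hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}},$$

with the derivatives estimated by finite differences on neighbouring scale-space samples.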
Comments: There is no training in ALP, just an offline projection.
A Taylor expansion is a polynomial in which no term is higher than second degree. The 9 dimensions used in ALP are of higher order and therefore more accurate. The TM interpolation function performs better. The Taylor expansion in VLFeat is optimized beyond basic mathematical theory. It would be very complex to describe in the normative part. There was no positive impact on performance.
No benefit in switching to the Taylor expansion was seen by the group. The proponent did not agree, insisting that removal of the table would simplify the normative part, but the group did not share this view.
It was discussed whether the CurveSigma parameter should be removed.
Comments: the parameter carries some information, and there could be some correlation with the peak ratio. What is the benefit of keeping it? What is the risk of removing it?
Conclusion: Re-evaluate the impact on TM9 when implemented and decide at that stage.
m32477 Cross check of PKU Response to CE1: Study on Interest Point Detector in TM8 m32262 [Emanuele Plebani, Danilo Pau]
Cross-check noted; results confirmed.
m32263 Peking University Response to CE1: Improving Interest Point Detector with BFLoG Filter [Jie Chen, Ling-Yu Duan, Tiejun Huang, Wen Gao]
Based on m32477, combining BFLoG with ALP.
Changes: use 3-level scale sampling; apply a smoothing operator (a Gaussian filter) to reduce the response to sudden changes of image content; perform a Taylor expansion; remove CurveSigma from feature selection.
Results: TPR −0.37%, localization −0.32%, mAP +1.13%, top match +1.16%.
Updated coefficients table for ALP (using TI tools).
Interoperability with TM8: TPR −0.6%, localization −0.38%, mAP −0.41%, top match −0.05%.
The number of scale-space levels is increased, with smoothing filtering applied in front.
Complexity: baseline 176 ms, BFLoG 144 ms, BFLoG_ALP 162 ms. Memory: 1.14 MB.
Comment: the gain was mostly obtained in datasets 2 and 3.
There was no retraining of the global descriptor.
The frequency-space implementation and the block-based implementation are non-normative elements.
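As context for these non-normative elements, a conceptual sketch of block-based frequency-domain filtering follows; the transform helpers are stubs (a real implementation would use an FFT library such as FFTW), and the block size, overlap, and all names are illustrative:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Hypothetical sketch of block-based LoG filtering in the frequency domain:
// the image is split into overlapping blocks, each block is transformed,
// multiplied pointwise by a precomputed LoG transfer function, and
// transformed back; only the valid interior of each block is kept.
using Spectrum = std::vector<std::complex<float>>;

static Spectrum fft2d(const std::vector<float>& block, int /*n*/) {
    return Spectrum(block.begin(), block.end()); // stub: identity "transform"
}
static std::vector<float> ifft2d(const Spectrum& spec, int /*n*/) {
    std::vector<float> out(spec.size());
    for (std::size_t i = 0; i < spec.size(); ++i) out[i] = spec[i].real();
    return out; // stub inverse
}

void filterBlockwise(const std::vector<float>& image, int width, int height,
                     const Spectrum& logTransfer, // precomputed, blockSize*blockSize
                     int blockSize, int overlap, std::vector<float>& out) {
    out.assign(image.size(), 0.f);
    const int step = blockSize - 2 * overlap; // blocks overlap to hide seams
    for (int by = 0; by + blockSize <= height; by += step) {
        for (int bx = 0; bx + blockSize <= width; bx += step) {
            // Copy one block (image borders handled naively here).
            std::vector<float> block(static_cast<std::size_t>(blockSize) * blockSize);
            for (int y = 0; y < blockSize; ++y)
                for (int x = 0; x < blockSize; ++x)
                    block[y * blockSize + x] = image[(by + y) * width + (bx + x)];

            Spectrum spec = fft2d(block, blockSize);
            for (std::size_t i = 0; i < spec.size(); ++i)
                spec[i] *= logTransfer[i]; // pointwise filtering
            const std::vector<float> filtered = ifft2d(spec, blockSize);

            // Keep only the interior, discarding the overlap margins.
            for (int y = overlap; y < blockSize - overlap; ++y)
                for (int x = overlap; x < blockSize - overlap; ++x)
                    out[(by + y) * width + (bx + x)] = filtered[y * blockSize + x];
        }
    }
}
```

This structure also makes the FFTW concern below concrete: transform plans and buffers are created per block unless carefully cached, which is presumably where the leak risk arises.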
Considering normative modifications, the proposal increases the number of scale samplings from 2 to 3, applies smoothing filtering and a Taylor expansion instead of the ALP approximation, and removes CurveSigma. There is some (perhaps small) increase in complexity and an improvement in mAP (+1.2%), but a drop in TPR (−0.4%).
Comments: a memory leak in FFTW could be a problem. The global descriptor was trained with BFLoG; it was not optimized for the ALP detector and is therefore tuned to the old detector. It was unclear whether the results would hold when the global descriptor is retrained with the ALP detector.
Convolving a Gaussian with a Gaussian produces another Gaussian. More scales were used.
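This is the standard semigroup property of Gaussian filtering, so the pre-smoothing operator composes cleanly with the existing scale sampling:

$$G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^{2} + \sigma_2^{2}}}$$

i.e., pre-smoothing with $\sigma_1$ followed by filtering at scale $\sigma_2$ is equivalent to a single Gaussian at the combined scale.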
Summary of results (compiled by PKU), BFLoG compared to low-memory TM8:
- BFLoG better: mAP, top match
- BFLoG worse: TPR, localization
Implementation-based issues:
- BFLoG better: memory
- BFLoG worse: speed
Summarizing the changes: BFLoG filtering, a pre-smoothing operator, an additional scale in the filters, feature selection excluding CurveSigma, Taylor expansion, and the orientation coefficient. It was asked what brings the improvement in performance; perhaps the additional, new scale of the filters?
A suggestion was that the benefit most likely comes from using 3 scales, which could be deduced from past results in TM7.0.
Or perhaps it comes from using one more scale in extrema detection and some shift in the scale range?
There was a suggestion that complexity increases because the number of Gaussians/scales grows from 4 to 8, but it was suggested that this could be simplified by a clever implementation.
Conclusion: Perform experiments to clarify which elements bring the performance improvement and to determine how large the improvement is with a retrained global descriptor in TM9.
m32501 CDVS CE1: Crosscheck of PKU’s proposal (m32263) [Seungjae Lee, Keundong Lee, Sang-il Na, Weon-Geun Oh]
Cross-check noted.