Lagadic Visual Servoing in Robotics





Lagadic

  • Visual Servoing in Robotics, Computer Vision, and Augmented Reality

  • François Chaumette

  • IRISA / INRIA Rennes


The Lagadic group

  • Spin-off of the Vista project in January 2004

  • Created as an Inria project in December 2004

  • Currently 13 people:

    • François Chaumette, DR 2
    • Éric Marchand, CR 1, HDR 2004
    • Alexandre Krupa, CR 2, recruited in Sep. 2004 (LSIIT, Strasbourg)
    • Fabien Spindler, IR 2
    • 1 temporary research scientist: C. Collewet from Cemagref
    • 1 temporary assistant prof.: A. Remazeilles, INSA Rennes
    • 5 Ph.D. students: Master's from Rennes (2), Strasbourg (2), and Grenoble (1)
    • 1 post-doc: S. Segvic from Croatia
    • 1 temporary engineer: F. Dionnet from LRP Paris


Research field

  • Visual servoing: vision-based control of a dynamic system

  • Modeling: error e = s − s*, with ė = Ls v (Ls: interaction matrix, v: camera velocity)

  • Control law: v = −λ Ls⁺ e (Ls⁺: pseudo-inverse of a model of Ls)

  • Usually, Ls is highly nonlinear and coupled → potential problems (stability, convergence)

  • Objective: "cook" the features s so that the system behaves as linearly as possible
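As a toy illustration of the classical image-based law v = −λ Ls⁺ e (a minimal sketch, not one of the group's controllers): a point feature at depth Z is driven by pure x/y camera translation, for which the interaction matrix reduces to −(1/Z)·I. The simulation uses a deliberately wrong depth estimate Zhat to show that the exponential decrease survives coarse modeling, as long as Zhat/Z stays positive.

```python
# Minimal IBVS simulation (illustrative sketch; servo(), Z, Zhat are
# assumptions for this toy case, not names from the source).
def servo(s, s_star, Z=1.0, Zhat=2.0, lam=1.0, dt=0.05, steps=200):
    x, y = s
    for _ in range(steps):
        ex, ey = x - s_star[0], y - s_star[1]    # visual error e = s - s*
        # Lhat = -(1/Zhat) I  =>  Lhat^+ = -Zhat I, hence v = lam * Zhat * e
        vx, vy = lam * Zhat * ex, lam * Zhat * ey
        # true feature motion: sdot = Ls v = -(1/Z) v
        x += -vx / Z * dt
        y += -vy / Z * dt
    return x, y

x, y = servo(s=(0.3, -0.2), s_star=(0.0, 0.0))
# the error has essentially vanished despite Zhat != Z
```

The effective closed-loop rate is λ·Zhat/Z, so a wrong depth only rescales the convergence speed, which is one reason the classical law is so widely usable.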



Objectives

  • Modeling visual features

  • Considering high level tasks in complex environments

    • Robot navigation
    • Additional constraints (occlusions, joint limits avoidance, etc.)
  • Visual tracking

    • real-time
    • accurate for 6 DOF
    • robust
    • single-object
    • geometrical structure


Application fields

  • Robotics

    • Manipulating/grasping objects, target tracking
    • Nuclear/submarine/space/medical, etc.
    • Eye-in-hand/eye-to-hand systems
    • Robot arms, mobile robots, UAV
  • Augmented reality

    • Insert virtual objects in real images
  • Virtual reality

    • Viewpoint generation
    • Virtual cinematography
    • Control of virtual humanoid
  • Cognitive science



Experimental platforms

  • Eye-in-hand, eye-to-hand systems, mobile robot, medical robot

  • Experimental validation, tests before transfer, demonstrations

  • Obtaining experimental results is very time-consuming (the same image is never acquired twice, and becomes useless after 40 ms)



Recent contributions



Modeling image moments

  • Determination of the analytical form of the interaction matrix for any moment

  • Determination of combinations of moments (from invariants) for decoupling and linearizing properties
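A small sketch of what "moment features" means in practice (illustrative only; `moments` and `features` are hypothetical helpers, not the group's actual feature set): raw moments give the area and centroid, natural candidates for controlling z and x/y translation, while second-order centered moments give the object orientation, a candidate for rotation about the optical axis.

```python
import math

# Raw moments m00, m10, m01 of a binary image (list of 0/1 rows).
def moments(img):
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                m00 += 1.0; m10 += x; m01 += y
    return m00, m10, m01

# Area, centroid, and orientation from second-order centered moments.
def features(img):
    m00, m10, m01 = moments(img)
    xg, yg = m10 / m00, m01 / m00            # centroid
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - xg) ** 2
                mu02 += (y - yg) ** 2
                mu11 += (x - xg) * (y - yg)
    alpha = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)   # orientation
    return m00, xg, yg, alpha

img = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        img[y][x] = 1                        # 3x3 square blob
print(features(img))                         # -> (9.0, 4.0, 3.0, 0.0)
```

The research contribution summarized above goes further: deriving the interaction matrix of *any* such moment analytically, and choosing moment invariants whose interaction matrix is as decoupled as possible.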



Visual servoing from ultrasound images

  • Modeling features

    • No observation is available outside the B-scan corresponding to the current 2D ultrasound image
  • Automation of spatial calibration procedure

    • Adaptive visual servoing to position B-scan on a cross-wire phantom
  • Robotized 3D «free-hand» ultrasound imaging

    • Conventional 2D ultrasound probe moved by a medical robot
    • Thanks to calibration step, B-Scans positioned in a 3D reference frame (collaboration with Visages)
  • Application field: remote examination



Navigation from an image database

  • Appearance-based representation

    • Topological description of the environment with key images (no 3D reconstruction)
    • Image path retrieval from indexing techniques (collaboration with Texmex)
  • Qualitative visual servoing

    • Navigation expressed as visual features to be seen (and not successive poses to be reached)
    • Confidence intervals for features
    • Automatic update of features used for navigation (by imposing a progress within the visibility corridor)
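The "qualitative" idea above can be sketched as a dead-zone error (an assumed simplification of the formulation, in 1-D): a feature produces no error while it lies inside its confidence interval, and the control law only reacts once it drifts outside the corridor.

```python
# Dead-zone error for one feature (illustrative; qualitative_error is a
# hypothetical helper, not a function from the source).
def qualitative_error(s, lo, hi):
    if s < lo:
        return s - lo      # below the interval: push back up
    if s > hi:
        return s - hi      # above the interval: push back down
    return 0.0             # inside the corridor: no correction needed

# A feature that drifted out of [0, 1] is driven back to the boundary:
s, lam, dt = 1.4, 2.0, 0.1
for _ in range(100):
    s -= lam * qualitative_error(s, 0.0, 1.0) * dt
print(round(s, 6))         # settles at the corridor boundary (1.0)
```

Because the error is zero over a whole interval rather than at a single point, many camera poses satisfy the task, which is what lets the navigation scheme specify "features to be seen" instead of exact poses.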


Tasks sequencing

  • Idea: leave as much freedom as possible to the system so that constraints (joint limits, occlusions, obstacles) can be taken into account

    • Scheme more reactive than reactive path planning
    • Scheme more versatile than classical visual servoing
  • Redundancy framework revisited:

    • directional redundancy
    • nonlinear projection operator to enlarge the free space in which secondary tasks are applied
  • Elementary visual tasks managed by a stack

    • Remove the appropriate task to satisfy the constraints
    • Put the task back when possible
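For reference, the classical redundancy framework that this work revisits can be sketched as follows (a simplified 2-joint, rank-1 main task; the "directional redundancy" and nonlinear projector variants are not reproduced here). The main task fixes the joint velocity through the pseudo-inverse J1⁺, and a secondary gradient g2 acts only through the null-space projector P1 = I − J1⁺J1, so it cannot disturb the main task.

```python
# Classical redundancy-based control (illustrative; pinv_row and control
# are hypothetical helpers for this toy 1x2 case).
def pinv_row(J):
    n = J[0] * J[0] + J[1] * J[1]
    return [J[0] / n, J[1] / n]            # pseudo-inverse of a 1x2 row

def control(J1, e1, g2, lam=1.0):
    J1p = pinv_row(J1)
    q1 = [-lam * J1p[0] * e1, -lam * J1p[1] * e1]      # main task
    P = [[1.0 - J1p[0] * J1[0], -J1p[0] * J1[1]],      # P1 = I - J1^+ J1
         [-J1p[1] * J1[0], 1.0 - J1p[1] * J1[1]]]
    q2 = [P[0][0] * g2[0] + P[0][1] * g2[1],           # projected secondary task
          P[1][0] * g2[0] + P[1][1] * g2[1]]
    return [q1[0] + q2[0], q1[1] + q2[1]], q2

qdot, q2 = control(J1=[1.0, 0.0], e1=0.5, g2=[1.0, 1.0])
# The projected secondary motion is invisible to the main task: J1 . q2 = 0
print(qdot, q2[0] * 1.0 + q2[1] * 0.0)
```

The limitation motivating the work above is visible here: the projector only leaves the exact null space of J1 free, which can be overly restrictive when constraints such as joint limits must be avoided.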


3D model-based tracking

  • Virtual visual servoing scheme for pose computation

    • Virtually moves a camera so that the projection of the object's 3D model matches the observed image
    • Statistically robust pose estimation to deal with outliers and occlusions (M-estimation)
    • Real-time capabilities
  • Application to visual servoing and augmented reality

  • Extension to articulated object tracking
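A 1-DOF toy version of the virtual visual servoing idea (an assumed simplification: the real scheme works over 6 DOF with many features and M-estimator weights): the *virtual* camera is translated along x until the projection of a 3D model point matches its observed image position, which is exactly a visual servoing loop run in computation rather than on a robot.

```python
# Pose refinement by virtual visual servoing, scalar case (refine_tx is a
# hypothetical helper for this illustration).
def refine_tx(X, Z, u_obs, f=1.0, tx=0.0, lam=0.5, iters=100):
    for _ in range(iters):
        u = f * (X + tx) / Z       # projection under the current pose guess
        e = u - u_obs              # image-space error
        L = f / Z                  # interaction term du/dtx
        tx -= lam * e / L          # virtual camera motion: v = -lam * L^+ * e
    return tx

# The ground-truth translation 0.3 is recovered from the observed projection:
print(round(refine_tx(X=0.2, Z=2.0, u_obs=0.25), 6))   # -> 0.3
```

In the full scheme, replacing the plain error e by an M-estimator-weighted error is what makes the pose estimate robust to outliers and partial occlusions.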



Texture- and contour-based tracking

  • 2D model-based tracking

  • 3D model-based tracking

    • Introducing spatio-temporal constraints in model-based tracking
    • Joint estimation of pose and displacement


Collaborations

  • Inside Inria: Visages (medical imaging), Icare (Predit Mobivip, Robea Bodega)

  • In France: 5 Robea projects

    • Omni-directional vision: Lasmea, Crea, Lirmm
    • Small helicopters: I3S, CEA
    • Mobile robot navigation: Lasmea, UTC
  • Outside France:

    • ANU Canberra: modeling, helicopters
    • ISR Lisbon: Jacobian learning
    • KTH Stockholm, CSIRO Melbourne, Urbana-Champaign


Publications

  • Main journals: IEEE TRA(O): 6, IJRR: 5

  • Main conferences: ICRA: 18, IROS: 14

    • Best paper awards: IEEE TRA 2002, RFIA 2004
    • Finalist papers: IROS 2004, AMDO 2004, ICRA 2004, IROS 2005


Transfer

  • Marker-less: 3D model-based tracker transferred to Total-Immersion for augmented reality (RIAM SORA)

  • France Télécom R&D: Augmented reality in urban environment

  • ESA: vision-based manipulation on the ISS with Eurobot



Software

  • ViSP: Open source software environment for visual servoing

    • Currently available for Linux and Mac OS with QPL license
    • Written in C++ (~ 100 000 lines of code)
    • Library of canonical vision-based tasks through many visual features
    • Suitable for 2D, 2½ D, 3D control laws
    • Eye-in-hand / eye-to-hand
    • Redundancy framework
    • Visual tracking algorithms
    • Independence wrt. the robot platform and frame grabber
    • Simulator included (interface with OpenGL)


Positioning wrt. INRIA & French labs

  • INRIA scientific and technological challenges:

    • (4): Coupling models and data to simulate and control complex systems
    • (5): Combining simulation, visualization and interaction (real-time, augmented reality)
    • (7): Fully integrating ICST into medical technology (medical imaging, medical robotics)
  • Inside INRIA:

    • Icare (Num A: Control and complex systems): visual servoing and control
    • Vista, Movi, Isa: visual tracking
  • Other French labs:

    • LASMEA: visual tracking, position-based visual servoing
    • LSIIT: visual servoing for medical robotics
    • LRP, I3S


Worldwide positioning

  • Pioneering lab: CMU (1984–1994, no longer active)

  • Main labs:

    • USA (S. Hutchinson, G. Hager)
    • Australia (P. Corke), Japan (K. Hashimoto)
    • Europe: KTH (more recently)
  • Other labs : almost everywhere (Italy, Spain, Portugal, Germany, Canada, Mexico, Brazil, South Korea, China, etc.)

  • Visual tracking: Cambridge, EPFL

  • Lagadic: High visibility in the robotics community

    • AE IEEE TRA(O)
    • Look for “visual servoing” ∪ “visual servo” in Google Scholar


Evolution wrt. past objectives

  • From the 2001 Vista evaluation experts report: “Vista is planning to split off its activities in visual servoing and active vision as a separate project. This is an excellent decision”

  • Evolution wrt. scientific objectives: 80% achieved

    • Complex objects of unknown shape: image moments
    • Outliers: M-estimator integrated in the control loop
    • Applications in robotics: underwater, space, flying robots
    • Applications outside robotics: virtual reality, augmented reality
    • Visual servoing directly on image intensity: future objective


Objectives: modeling visual features

  • Spherical projection

  • Modeling directly the image intensity

    • (no image processing, many unknown parameters, cooking very challenging)
  • Enclosing volume for 3D objects (global and sufficient information)

  • Mobile/flying robots: non holonomic or underactuated systems (modeling and control)



Objectives: medical robotics

  • Modeling adequate ultrasound features and their interaction

  • Automatic control of the probe motion to assist medical examination

    • Automatically follow an organ of interest along the patient's skin
  • Hybrid force/vision control schemes

  • Remote examination without using a haptic device

    • Robot control combining ultrasound images, force measurement and visual data of the patient provided by a remote camera
    • Autonomous exploration of a given area (organ, tumor)


Objectives: real-time visual tracking

  • New camera models

    • Omnidirectional cameras (3D model-based tracking)
  • Model-based vs model-free approaches

    • Structure estimation
    • On-line structure estimation during visual servoing
      • Joint estimation of depth and displacement (controlled SLAM)
  • Initialization

    • Object detection, recognition and localization
    • Image-based model of the considered object
    • (collaboration with Vista and EPFL through FP6 Pegase proposal)


