
CERBERE

ImViA leader: Cédric Demonceaux

Abstract:

In recent years, research and experimentation on autonomous vehicles have multiplied: autonomous driving is one of the major challenges of tomorrow's mobility. In the near future, users will have access to fleets of shared autonomous vehicles that can be booked at any time via a smartphone, while reducing the risks associated with human driving, since more than 90% of accidents are related to human error.
One of the main technological challenges for the autonomous vehicle is understanding its environment, which is usually perceived through sensors such as lidars, radars and cameras. The main objective of this project is to exploit a sensor that breaks with existing solutions for autonomous-vehicle perception: the event camera.
The event camera is a bio-inspired sensor that, instead of capturing static images at a fixed frequency (even though scenes are dynamic), measures illumination changes asynchronously and independently at each pixel.
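As a rough illustration of this principle (a generic, idealized pixel model with an illustrative contrast threshold, not the specification of any particular sensor), one event-camera pixel can be sketched in Python as follows:

import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 brightness increase, -1 brightness decrease

def pixel_events(x, y, samples, threshold=0.2):
    # Idealized single-pixel model: an event fires whenever the log
    # intensity drifts by `threshold` from its value at the last event;
    # no global frame clock is involved. `samples` is a time-ordered
    # list of (t, intensity) pairs; the threshold value is illustrative.
    events = []
    ref = math.log(samples[0][1])
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - ref
        while abs(delta) >= threshold:
            pol = 1 if delta > 0 else -1
            events.append(Event(x, y, t, pol))
            ref += pol * threshold
            delta = math.log(intensity) - ref
    return events

# Example: a 40% brightness jump at one pixel triggers a single +1 event.
# pixel_events(10, 20, [(0.000, 100.0), (0.001, 140.0)])

A pixel that sees no change emits nothing, which is where the low data rate and low latency of the sensor come from.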
This property makes it particularly interesting for autonomous vehicles, since it can address the remaining challenges of autonomous driving scenarios: scenes with high dynamic range (e.g. a tunnel exit) and low-latency, fast detection of obstacles (other vehicles, pedestrians), while respecting the computing-power and data-flow constraints imposed by the autonomous vehicle.
Using event cameras requires developing new algorithms, since classical computer vision algorithms are not suited to the fundamentally different data the camera provides. The application context (perception for autonomous vehicles) is also radically different from current work: most existing studies use a moving event camera in a static scene, or a static event camera observing a dynamic scene. In this project, the objective is to exploit a camera embedded in the vehicle and observing a dynamic scene. The events generated by the camera will be due both to the camera's own motion and to that of objects in the scene, so the two must be dissociated, which remains an open challenge; a sketch of one known approach follows below.
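One family of techniques from the event-vision literature for this separation is motion compensation. The sketch below assumes an ego-motion flow field is already available (e.g. from an IMU or odometry); all names and the data layout are illustrative, not the method chosen by the project:

import numpy as np

def motion_compensate(events, ego_flow, t_ref):
    # Warp each event (x, y, t, polarity) back to a reference time t_ref
    # using a per-pixel ego-motion flow estimate in pixels/second
    # (`ego_flow` is an assumed (H, W, 2) array). Events caused by the
    # static scene then accumulate into sharp edges, while events from
    # independently moving objects remain smeared and can be flagged
    # as residuals.
    H, W, _ = ego_flow.shape
    img = np.zeros((H, W), dtype=np.float32)
    for x, y, t, _ in events:
        fx, fy = ego_flow[y, x]
        xw = int(round(x - fx * (t - t_ref)))
        yw = int(round(y - fy * (t - t_ref)))
        if 0 <= xw < W and 0 <= yw < H:
            img[yw, xw] += 1.0
    # The sharpness of `img` scores how well ego-motion alone explains
    # the events; poorly explained regions hint at independent motion.
    return img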
This change in the application context will lead to a number of new scientific challenges that we will try to solve in this project.
Perception for the autonomous vehicle must be three-dimensional in order to localize the different entities (other vehicles, motorcycles, cyclists, pedestrians) and to determine whether a situation is dangerous or normal. This is why we are particularly interested in the innovative theme of event-based 3D perception for autonomous vehicles.
In addition to the detection and 3D reconstruction of moving objects, a recognition step will also be necessary to allow the autonomous vehicle to make the most appropriate decision for each situation. The most effective approaches on classical images are currently based on CNNs (Convolutional Neural Networks). Given the structure of the data provided by the event camera, this type of network is not well suited, and new approaches must be found; one common workaround is sketched below.
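A widely used bridge between the two worlds, shown here as an illustrative recipe rather than the project's chosen representation, is to bin the asynchronous stream into a dense spatio-temporal tensor (a "voxel grid") that a conventional network can consume; function name, tuple layout and bin count are assumptions:

import numpy as np

def events_to_voxel_grid(events, H, W, num_bins=5):
    # Distribute each event's polarity over the two nearest temporal
    # bins, turning the asynchronous stream into a dense
    # (num_bins, H, W) tensor usable as CNN input. `events` is a
    # time-ordered list of (x, y, t, polarity) tuples.
    grid = np.zeros((num_bins, H, W), dtype=np.float32)
    if not events:
        return grid
    t0, t1 = events[0][2], events[-1][2]
    scale = (num_bins - 1) / max(t1 - t0, 1e-9)
    for x, y, t, p in events:
        tb = (t - t0) * scale      # fractional bin index in [0, num_bins - 1]
        b = int(tb)
        frac = tb - b
        grid[b, y, x] += p * (1.0 - frac)
        if b + 1 < num_bins:
            grid[b + 1, y, x] += p * frac
    return grid

The cost of such a conversion is that it reintroduces a frame-like latency, which is one reason natively asynchronous approaches are worth investigating.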
The real-time aspect of the solution is essential if the advantages of the event camera are not to be lost. An important part of this project will therefore be dedicated to Algorithm-Architecture Adequacy (AAA), so that the developed algorithms can be integrated into the smart camera proposed by the industrial partner of this project.

Partners: LITIS, MIS, YUMAIN

Beginning: 01/01/2022

End: 31/12/2025

 
