At the heart of my work lies a foundational question that goes far beyond classical problems of algorithmic optimization or local performance: how can we design robotic systems capable of making relevant, adaptive, and explainable decisions in real, dynamic environments that are inherently shared with humans?
This question calls for fundamental reflection on the very nature of embodied artificial intelligence. Acting autonomously in the physical world does not merely consist of reacting to stimuli or maximizing an abstract reward function; it requires a structured understanding of the environment, the ability to anticipate its possible evolutions, and the capacity to situate the robot's action within a social, functional, and ethical framework that humans can comprehend.
This perspective has gradually led me to move away from two dominant paradigms of contemporary robotics, whose limitations become apparent when seeking to go beyond controlled or artificially simplified scenarios.
On one side, purely reactive architectures, stemming from behavior-based robotics and subsumption architectures, offer undeniable robustness and execution speed, but at the cost of an almost total absence of anticipatory capacity. A reactive robot can avoid an immediate obstacle, but it is fundamentally incapable of reasoning about the delayed consequences of its actions, for example when an obstacle is mobile, manipulable, or socially mediated by a human.
On the other side, end-to-end deep learning approaches have demonstrated spectacular performance on perception and navigation tasks, but they rely on distributed, opaque representations, making the resulting decisions difficult, if not impossible, to explain, justify, or audit. Yet in contexts such as assistive robotics, human-robot collaboration, or shared environments, decision-making opacity constitutes a major obstacle to acceptability, trust, and safety.
Faced with these observations, I advocate a third way, which I call frugal and explainable cognitive robotics. This approach rests on the idea that robotic autonomy can be robust and socially integrable only if it relies on explicit representations, internal simulation mechanisms, and decision-making processes whose logic is traceable by construction.
This position draws on the cognitive sciences and computational neurosciences, not out of a naive biomimicry that aims to reproduce the human brain, but as an effort to understand the computational principles that allow living beings to reason, anticipate, and act effectively in complex environments with limited resources.
A first fundamental pillar of this vision lies in the use of multi-level representations, ranging from raw geometry to semantics and situated action. A robot cannot make do with a perception of the world reduced to pixels or unstructured point clouds.
This stratification is not a simple stacking of independent layers, but the expression of a central cognitive hypothesis: abstraction is the key to generalization. A robot capable of reasoning about the abstract notion of a controlled passage can transfer its skills between different types of doors without exhaustive relearning.
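A minimal sketch may help make this stratification concrete. The class names, the lifting function, and the door example below are illustrative assumptions, not an implementation drawn from my systems; they only show how a raw geometric observation can be progressively lifted to a semantic label and then to situated affordances.

```python
from dataclasses import dataclass, field

# Layer 1: raw geometry -- what the sensors deliver.
@dataclass
class GeometricEntity:
    points: list   # e.g. a point-cloud segment of (x, y, z) tuples
    pose: tuple    # estimated position and orientation

# Layer 2: semantics -- what the entity *is*.
@dataclass
class SemanticEntity:
    geometry: GeometricEntity
    label: str     # e.g. "controlled_passage", "human", "table"

# Layer 3: situated action -- what the entity *affords* here and now.
@dataclass
class SituatedEntity:
    semantics: SemanticEntity
    affordances: list = field(default_factory=list)

def lift(geom: GeometricEntity, label: str, affordance_map: dict) -> SituatedEntity:
    """Lift a raw observation through the semantic layer to situated action.

    Abstraction enables transfer: any entity labelled as a controlled
    passage inherits the same affordances, whatever its geometry.
    """
    sem = SemanticEntity(geometry=geom, label=label)
    return SituatedEntity(semantics=sem, affordances=affordance_map.get(label, []))

# A sliding door and a hinged door differ geometrically,
# yet share the same abstraction and hence the same skills.
affordances = {"controlled_passage": ["approach", "open", "traverse"]}
door = lift(GeometricEntity(points=[(0.0, 0.0, 0.0)], pose=(1.0, 2.0, 0.0)),
            "controlled_passage", affordances)
```

The point of the sketch is that the robot reasons over `door.affordances`, not over the underlying points: swapping in a different door geometry changes nothing at the decision level.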
The second pillar of my approach concerns the central role of simulation in the decision-making process. In my conception, simulation is not an a posteriori validation tool, but an a priori cognitive mechanism, directly involved in decision-making.
This approach is inspired by work on mental simulation and internal models in cognitive psychology and neuroscience, according to which biological agents evaluate the possible consequences of their actions by mentally exploring several plausible futures before acting.
Concretely, this idea takes shape in my work through the use of multi-agent systems as a support for internal simulation. Entities perceived in the real environment are modeled as agents endowed with physical properties, action capacities, and specific constraints. The robot itself is integrated into this simulated world as a cognitive agent.
Before committing to any action, several scenarios are simulated, evaluated, and compared according to explicitly modeled criteria such as energy cost, collision risk, execution duration, or social acceptability. The actual execution of the action is then supervised by continuously comparing the simulation's predictions with actual observations.
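The simulate-evaluate-supervise cycle can be sketched as follows. The criteria, weights, candidate plans, and tolerance below are simplified illustrative assumptions; in practice the rollout would forward-simulate all agents in the multi-agent world rather than return a stored prediction.

```python
# Illustrative sketch: internal simulation as an a priori decision mechanism.
# Candidate plans are rolled out, scored against explicit criteria,
# and the chosen plan is executed under supervision.

def score(outcome, weights):
    """Aggregate explicitly modelled criteria into one comparable cost."""
    return sum(weights[c] * outcome[c] for c in weights)

def simulate(plan):
    """Stand-in for a multi-agent rollout: predicted outcome per criterion."""
    return plan["predicted"]

def decide(plans, weights):
    """Simulate every candidate plan and pick the lowest-cost one."""
    outcomes = {p["name"]: simulate(p) for p in plans}
    best = min(plans, key=lambda p: score(outcomes[p["name"]], weights))
    return best, outcomes[best["name"]]

def supervise(prediction, observation, tolerance=0.2):
    """Execution monitoring: flag divergence between simulation and reality."""
    return all(abs(prediction[c] - observation[c]) <= tolerance for c in prediction)

weights = {"energy": 1.0, "collision_risk": 10.0, "duration": 0.5}
plans = [
    {"name": "go_around",
     "predicted": {"energy": 3.0, "collision_risk": 0.1, "duration": 8.0}},
    {"name": "go_through",
     "predicted": {"energy": 1.0, "collision_risk": 0.6, "duration": 4.0}},
]
chosen, predicted = decide(plans, weights)
# During execution, observed values are compared against the simulation.
ok = supervise(predicted, {"energy": 3.1, "collision_risk": 0.1, "duration": 8.1})
```

Because the criteria and weights are explicit, the resulting choice is explainable by construction: one can state exactly why the longer detour was preferred over the riskier shortcut.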
The third pillar, which today most strongly structures my research, is that of cognitive frugality and meta-decision. A truly intelligent system does not merely decide what to do; it must also decide how to decide.
Faced with a given situation, several decision-making strategies are possible, ranging from complete planning from symbolic models, to reusing past experiences, to adopting simple reactive behaviors, or even explicitly delegating the decision to a human. The choice between these strategies should not be arbitrary, but guided by a rational evaluation of criteria such as the expected quality of the resulting decision, the computational cost of producing it, and the time available to act.
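A minimal sketch of such a meta-decision layer might look as follows. The strategy catalogue, the quality and cost figures, and the net-value heuristic are illustrative assumptions in the spirit of Russell and Wefald's analysis, not a specification of my architecture.

```python
# Illustrative meta-decision: choosing *how* to decide before deciding.
# Each strategy carries an expected decision quality and a computational
# cost; we select the best quality/cost trade-off that fits the time budget.

STRATEGIES = [
    # (name,               expected_quality, compute_cost in seconds)
    ("full_planning",      0.95,             5.0),
    ("case_based_reuse",   0.80,             0.5),
    ("reactive_behavior",  0.50,             0.01),
    ("delegate_to_human",  0.99,             30.0),
]

def meta_decide(time_budget, urgency):
    """Pick the decision strategy maximising quality minus an
    urgency-scaled time penalty, among those fitting the budget."""
    feasible = [s for s in STRATEGIES if s[2] <= time_budget]
    if not feasible:
        # Nothing fits: fall back to the cheapest (reactive) strategy.
        return min(STRATEGIES, key=lambda s: s[2])[0]
    return max(feasible, key=lambda s: s[1] - urgency * s[2])[0]

meta_decide(time_budget=10.0, urgency=0.01)  # planning is worth its cost
meta_decide(time_budget=0.1,  urgency=1.0)   # only reacting fits in time
```

The same mechanism naturally accommodates delegation: when ample time is available and the stakes are high, handing the decision to a human can dominate every autonomous strategy.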
This approach belongs to the tradition of bounded rationality, as formulated by Herbert Simon and formalized by Russell and Wefald, while distinguishing itself through its concrete anchoring in embedded robotics.
This cognitive frugality goes beyond simple technical optimization; it carries an ethical and ecological dimension. At a time when autonomous systems are multiplying, integrating computational parsimony as a design principle contributes to more sustainable and socially responsible robotics.
My methodological choices stem directly from this vision: explicit and traceable representations, simulation as an a priori decision mechanism, and computational parsimony as a design principle.
Through all these works, I defend the idea that the future of autonomous robotics does not lie in a race for computing power, but in the design of systems capable of mobilizing the right strategy at the right time.
Robotics that is explainable by construction, anticipative through simulation, frugal by principle, and collaborative through understanding constitutes, in my view, a credible and necessary path to sustainably integrate robots into our daily lives.
My past contributions and future ambitions lie within this perspective, at the intersection of symbolic AI, machine learning, the cognitive sciences, and embedded robotics, with the objective of giving robots the capacity to understand before acting, to explain what they do, and, when the situation requires it, to know not to decide alone.