Research at CRIL deals with the design of autonomous intelligent systems.

Depending on the information available, such systems should be able to make reasonable decisions to reach given goals. To do so, some form of reasoning is needed. Realizing such systems raises several difficulties.

First, the information to be exploited must be acquired. It can be learned from various types of data (descriptions of situations or scenarios, examples of use cases, operating traces, and so on), but also transmitted by the different sources or agents with which the system under consideration interacts. Taking these sources into account requires the ability to synthesize, aggregate, and reformulate the processed information, so as to exchange it and transmit the results (decisions, predictions, recommendations, actions to be taken, etc.) to the users (other agents, human or not) in an appropriate form. It is also necessary to make this information evolve and to manage its dynamics, in particular when conflicts appear.

The available information is usually heterogeneous and imperfect. It typically includes knowledge transferred or extracted from data, and beliefs about the state of the world in which the intelligent system evolves (e.g., physical laws, but also data gathered from more or less reliable sensors). It also contains information about the other agents found in that world, the description of the available actions and their effects, and the preferences of the agents over states of the world or actions to perform. The imperfection of the available information has several correlated facets: incompleteness, uncertainty, inconsistency, and contextuality, among others. In all cases, it is necessary to define models adapted to these different types of information, but also to design and analyze information representation formalisms that are appropriate to the targeted tasks.

The kinds of inference needed to simulate an “intelligent” behavior are multiple and must be modeled. It is also necessary to be able to explain the reasoning that is carried out, to justify the decisions made, and to evaluate their robustness. Finally, a last source of difficulty is computational: the inference and decision-making processes considered are often sophisticated and computationally intractable in the worst case. It is therefore important to identify the sources of complexity involved in order to overcome them as best as possible, by developing algorithms that are efficient in practice, or by developing methods, such as approximation or compilation, that can sometimes sidestep this computational difficulty.

To address these challenges, CRIL organizes its activities along three main, interconnected axes: data, knowledge, and constraints.

Since 2018, the disciplinary research carried out in each of these three axes on issues specific to data, knowledge, or constraints has been jointly exploited within two cross-disciplinary actions: explainable AI and AI at the service of other disciplines. The aim is to mobilize the broad-spectrum expertise present in the axes to seek synergies and to develop original, relevant research on these two cross-disciplinary themes.