Overview

In recent years, there has been a growing interest in the design of statistical learning algorithms for interpretable models, which are not only accurate but also understandable by human users. A related and desirable property is explainability, which refers to the computational ability of predictive models to explain their predictions in intelligible terms.
In this setting, decision trees are of paramount importance, as they can be easily read by recursively breaking a choice into sub-choices until a decision is reached.
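The readability claim above can be made concrete with a small sketch. The names (`Node`, `Leaf`, `predict`) and the example tree are illustrative assumptions, not part of the text; the point is only that prediction proceeds by recursively resolving one sub-choice at a time until a leaf decision is reached.

```python
# Illustrative sketch: a decision tree is read by recursively
# breaking a choice into sub-choices until a decision (leaf) is reached.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    label: str  # the final decision

@dataclass
class Node:
    feature: str     # which attribute the choice tests
    threshold: float # split point for the test
    left: "Tree"     # sub-choice taken when value <= threshold
    right: "Tree"    # sub-choice taken when value > threshold

Tree = Union[Node, Leaf]

def predict(tree: Tree, x: dict) -> str:
    """Follow the chain of sub-choices down to a leaf decision."""
    if isinstance(tree, Leaf):
        return tree.label
    branch = tree.left if x[tree.feature] <= tree.threshold else tree.right
    return predict(branch, x)

# A tree a human can read off directly: "if temp <= 20 then 'cold',
# else if humidity <= 0.5 then 'dry-warm', else 'humid-warm'".
tree = Node("temp", 20.0,
            Leaf("cold"),
            Node("humidity", 0.5, Leaf("dry-warm"), Leaf("humid-warm")))

print(predict(tree, {"temp": 15.0, "humidity": 0.8}))  # → cold
print(predict(tree, {"temp": 25.0, "humidity": 0.8}))  # → humid-warm
```

Each root-to-leaf path corresponds to a conjunction of intelligible conditions, which is what makes the model's predictions explainable term by term.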
Summary

The ANR project BLaSST targets bridging combinatorial and symbolic techniques in automatic theorem proving, in particular for proof obligations generated from B models. Work will be carried out on SAT-based techniques as well as on more expressive SMT formalisms. In both cases, encoding techniques, optimized resolution techniques, model generation, and lemma suggestion will be considered. By combining both lines of work, the project aims at a substantially higher degree of automation of solvers for expressive input languages, leveraging higher-order reasoning and enumerative instantiations over finite domains.
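As a hedged illustration of what "enumerative instantiation over finite domains" and "model generation" mean in the simplest case (this is not BLaSST's actual machinery, and the predicate and constraint below are invented for the example): one can enumerate all interpretations of a predicate over a small finite universe and return the first one that satisfies a given constraint.

```python
# Toy sketch (illustrative only): brute-force model generation for a
# unary predicate P over a finite domain, by enumerating all
# interpretations and testing a constraint against each.
from itertools import product

def find_model(domain, constraint):
    """Return the first interpretation of P (a dict element -> bool)
    over `domain` satisfying `constraint`, or None if unsatisfiable."""
    for bits in product([False, True], repeat=len(domain)):
        P = dict(zip(domain, bits))
        if constraint(P):
            return P
    return None

# Invented example constraint: "some element satisfies P,
# and P(0) implies P(1)".
domain = [0, 1, 2]
constraint = lambda P: any(P.values()) and (not P[0] or P[1])

model = find_model(domain, constraint)
print(model)
```

Real solvers replace this exponential enumeration with clause learning and guided instantiation, but the finite-domain search space is what makes such enumerative techniques complete in principle.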