In this talk, we will first introduce and motivate the basic principles of imprecise probability (IP) approaches, which replace precise-valued probabilities and expectations with bounded intervals. The talk will then focus on the use of IP in supervised classification, using an extension of the naive Bayes classifier as an illustration. Finally, we will briefly mention some challenges concerning the application of IP to supervised learning over combinatorial domains.
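To make the interval idea concrete, the sketch below (not part of the talk material) computes interval-valued naive Bayes scores using counts smoothed by the imprecise Dirichlet model; the function names, the hyperparameter s, and the simple interval-dominance decision rule are illustrative assumptions, and actual imprecise extensions of naive Bayes, such as the naive credal classifier, use a stricter credal-dominance test.

```python
from collections import defaultdict

def interval_naive_bayes_scores(train, x, classes, s=2.0):
    # train: list of (features_dict, label); x: features_dict to classify.
    # Returns {class: (lower, upper)} score intervals, where each score is
    # the product of prior and per-feature likelihood bounds derived from
    # the imprecise Dirichlet model with hyperparameter s (an assumption
    # of this sketch, not the talk's exact model).
    N = len(train)
    n_c = defaultdict(int)       # class counts
    n_cfv = defaultdict(int)     # (class, feature, value) counts
    for feats, y in train:
        n_c[y] += 1
        for f, v in feats.items():
            n_cfv[(y, f, v)] += 1

    intervals = {}
    for c in classes:
        lo = n_c[c] / (N + s)                 # lower bound on P(c)
        hi = (n_c[c] + s) / (N + s)           # upper bound on P(c)
        for f, v in x.items():
            k = n_cfv[(c, f, v)]
            lo *= k / (n_c[c] + s)            # lower bound on P(v | c)
            hi *= (k + s) / (n_c[c] + s)      # upper bound on P(v | c)
        intervals[c] = (lo, hi)
    return intervals

def undominated_classes(intervals):
    # Interval dominance: drop class c only if some other class has a
    # lower score bound strictly above c's upper score bound, so the
    # classifier may return a *set* of plausible classes.
    return [c for c, (lo, hi) in intervals.items()
            if not any(lo2 > hi
                       for c2, (lo2, _) in intervals.items() if c2 != c)]

# Toy usage: with very little data, both classes typically survive,
# illustrating the set-valued predictions characteristic of IP classifiers.
train = [({"color": "red", "shape": "round"}, "apple"),
         ({"color": "yellow", "shape": "long"}, "banana")]
scores = interval_naive_bayes_scores(
    train, {"color": "red", "shape": "round"}, ["apple", "banana"])
print(undominated_classes(scores))
```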