Consistency Model
Definition
Let's start with this rather unrealistic but intuitive model. In it, the prediction rule learned from a set of examples must be consistent with their observed labels. To state this formally, we say
a concept class $C$ is learnable in the consistency model if there is an algorithm $A$ such that, given any set of labeled examples $(x_1, y_1), \dots, (x_m, y_m)$ where $x_i \in X$ and $y_i \in \{0, 1\}$, $A$ finds a concept $c \in C$ with $c(x_i) = y_i$ for all $i$ (consistent with the examples), or says there is no such concept.
Examples
Boolean logic: monotone conjunctions, monotone disjunctions; general conjunctions can be reduced to monotone conjunctions. All of these can be proved learnable in the consistency model.
Geometry: axis-aligned rectangles, half-spaces.
More Boolean: k-CNF, DNF.
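The monotone-conjunction case can be made concrete. A standard approach (sketched below; the function name and example data are illustrative, not from the notes) starts from the conjunction of all variables, drops every variable that is 0 in some positive example, and then checks the resulting hypothesis against the negative examples:

```python
def learn_monotone_conjunction(examples):
    """Consistency-model learner for monotone conjunctions.

    examples: list of (x, y) pairs, where x is a tuple of 0/1 values
    and y is the 0/1 label. Returns the set of variable indices in a
    consistent conjunction, or None if no monotone conjunction fits.
    """
    n = len(examples[0][0])
    hypothesis = set(range(n))  # start with the conjunction of all variables
    # Each positive example forces us to drop every variable that is 0 in it,
    # since a monotone conjunction containing that variable would reject it.
    for x, y in examples:
        if y == 1:
            hypothesis -= {i for i in range(n) if x[i] == 0}
    # The result is the most specific consistent conjunction, if one exists;
    # verify that it still rejects every negative example.
    for x, y in examples:
        if y == 0 and all(x[i] == 1 for i in hypothesis):
            return None  # no monotone conjunction is consistent with the data
    return hypothesis

# Hypothetical data over (x1, x2, x3), labeled by the concept x1 AND x3:
data = [((1, 0, 1), 1), ((1, 1, 1), 1), ((0, 1, 1), 0), ((1, 1, 0), 0)]
print(learn_monotone_conjunction(data))  # -> {0, 2}, i.e. x1 AND x3
```

The algorithm either outputs a consistent concept or reports that none exists, which is exactly what the definition above asks for, and it runs in time linear in the size of the sample.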
Problems with the consistency model
This model says nothing about how the concept the algorithm learns generalizes to new data, which seems unrelated to what we actually mean by "learning". We need a new model.