Consistency Model

Definition

Let's start with an intuitive but rather unrealistic model: the prediction rule produced from a set of examples should be consistent with their observed labels. To state this formally, we say that

a concept class $C$ is learnable in the consistency model if there exists an algorithm $A$ that, given any set of labeled examples $(x_1, y_1), \ldots, (x_m, y_m)$ with $x_i \in X$ and $y_i \in \{0, 1\}$, finds a concept $c \in C$ such that $c(x_i) = y_i$ for all $i$ (i.e., $c$ is consistent with the examples), or says that no such concept exists.
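For a finite concept class, the definition above can be realized directly by brute force: enumerate every concept in $C$ and return the first one consistent with all the examples, or report failure. The following sketch illustrates this; the function and variable names are illustrative, not from the source.

```python
def consistent_learner(concepts, examples):
    """Return a concept c with c(x) == y for every labeled example (x, y),
    or None if no concept in the class is consistent.

    `concepts` is an iterable of callables (a finite concept class C);
    `examples` is a list of (x, y) pairs with y in {0, 1}.
    """
    for c in concepts:
        # Check consistency: c(x_i) == y_i for all i.
        if all(c(x) == y for x, y in examples):
            return c
    return None  # "says that no such concept exists"
```

This exhaustive search runs in time linear in $|C|$ per example, so it is only a statement of the model, not an efficient algorithm; the examples below show classes where consistency can be checked much faster.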

Examples

Boolean logic: monotone conjunctions, monotone disjunctions, and general conjunctions (which can be reduced to monotone conjunctions); all of these can be proved learnable in the consistency model.

Geometry: rectangles, half-spaces.

More Boolean logic: k-CNF, DNF.
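As one concrete case, the standard consistency algorithm for monotone conjunctions keeps exactly the variables that are 1 in every positive example, then verifies the resulting hypothesis against the negative examples. A minimal sketch, with illustrative names:

```python
def learn_monotone_conjunction(examples, n):
    """Find a monotone conjunction over n boolean variables consistent with
    the (x, y) examples, where x is a 0/1 tuple of length n and y is 0 or 1.

    Returns the set of variable indices in the conjunction, or None if no
    monotone conjunction is consistent with the examples.
    """
    # Start with all variables; keep only those set to 1 in every
    # positive example (the most specific consistent hypothesis).
    relevant = set(range(n))
    for x, y in examples:
        if y == 1:
            relevant &= {i for i in range(n) if x[i] == 1}

    # The hypothesis is the AND of the remaining variables.
    def h(x):
        return int(all(x[i] == 1 for i in relevant))

    # Positives are satisfied by construction; check the negatives too.
    if all(h(x) == y for x, y in examples):
        return relevant
    return None  # no monotone conjunction is consistent
```

Because the hypothesis kept is the most specific one consistent with the positives, any negative example it misclassifies would be misclassified by every monotone conjunction that fits the positives, which is why returning None is correct.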

Problems with the consistency model

This model says nothing about how well the concept the algorithm finds generalizes to new data, so it seems disconnected from what we usually mean by "learning". We need a better model.
