# Lecture 4: Feasibility of Learning

## 1. Learning is Impossible? -- absolutely no free lunch outside D

Learning from D (to infer something outside D) is doomed if any 'unknown' f can happen.
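A tiny illustration of this "no free lunch" argument (my own toy example, not from the slides): on a finite input space, every labeling of the points outside D is realized by some target f that is perfectly consistent with D, so D alone cannot distinguish among them.

```python
from itertools import product

# Toy no-free-lunch check (hypothetical setup): X = all 3-bit inputs,
# and D reveals the labels on 5 of the 8 points.
X = [''.join(b) for b in product('01', repeat=3)]
D = {x: x.count('1') % 2 for x in X[:5]}   # the 5 labels we observe
outside = X[5:]                            # the 3 points outside D

# Every labeling of 'outside' yields a target consistent with D,
# so 2^3 = 8 different f's agree on all of D yet disagree outside it.
targets = [dict(D, **dict(zip(outside, labels)))
           for labels in product([0, 1], repeat=len(outside))]
print(len(targets))   # 8 -> the data alone cannot pick the right f
```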

## 2. Probability to the Rescue -- probably approximately correct outside D


If N is large, we can probably infer the unknown μ from the known ν, by Hoeffding's inequality: P[|ν − μ| > ε] ≤ 2·exp(−2ε²N) for any N and ε > 0.
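A minimal simulation of the bin model (the numbers μ, N, ε are my choices): draw N marbles i.i.d. from a bin whose true orange fraction is μ, and compare the sample fraction ν against the Hoeffding bound.

```python
import math
import random

# Bin model sketch: mu = true orange fraction in the bin (unknown in
# practice), nu = orange fraction in an i.i.d. sample of size N.
mu, N, eps = 0.6, 1000, 0.05
nu = sum(random.random() < mu for _ in range(N)) / N

# Hoeffding: P[|nu - mu| > eps] <= 2 * exp(-2 * eps^2 * N).
bound = 2 * math.exp(-2 * eps**2 * N)
print(f"nu = {nu:.3f}, |nu - mu| = {abs(nu - mu):.3f}, bound = {bound:.4f}")
```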

## 3. Connection to Learning -- verification possible if Ein(h) small for fixed h


So, if Ein(h) ≈ Eout(h) and Ein(h) small ==> Eout(h) small ==> h ≈ f with respect to P.


Now, we can use 'historical records' (data) to verify 'one candidate formula' h.
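A runnable sketch of this "verification" view (the target f, the candidate h, and the population are toy choices of mine, not from the lecture): for one hypothesis h fixed before seeing any data, Ein(h) computed on an i.i.d. sample tracks Eout(h) on the whole population.

```python
import math
import random

random.seed(0)
# Hypothetical setup: points in [-1, 1]^2, an 'unknown' target f,
# and ONE fixed candidate formula h.
population = [(random.uniform(-1, 1), random.uniform(-1, 1))
              for _ in range(100_000)]
f = lambda p: 1 if p[0] + p[1] > 0 else -1   # unknown target
h = lambda p: 1 if p[0] > 0 else -1          # one candidate formula

# Eout(h): error rate over the whole population (unknowable in practice).
Eout = sum(h(p) != f(p) for p in population) / len(population)

# Ein(h): error rate on N i.i.d. 'historical records'.
N = 500
sample = random.choices(population, k=N)
Ein = sum(h(p) != f(p) for p in sample) / N

# For a single fixed h: P[|Ein - Eout| > eps] <= 2 * exp(-2 * eps^2 * N).
eps = 0.05
print(f"Ein(h) = {Ein:.3f}, Eout(h) = {Eout:.3f}, "
      f"bound = {2 * math.exp(-2 * eps**2 * N):.3f}")
```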

## 4. Connection to Real Learning -- learning possible if H finite and Ein(g) small

But in real learning, we have to deal with BAD samples, on which Ein and Eout are far apart; the chance of drawing one gets worse when 'choice' among many hypotheses is involved.


- if |H| = M is finite and N is large enough, then for whatever g picked by A, Eout(g) ≈ Ein(g);
- if A finds one g with Ein(g) ≈ 0, the PAC guarantee gives Eout(g) ≈ 0 ==> learning possible (see the sketch after this list).
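The finite-H guarantee comes from a union bound over all M hypotheses: P[some h in H sees a BAD sample] ≤ 2M·exp(−2ε²N). A small sketch (M, ε, and the tolerance δ are my example values) of how large N must be for this bound to be useful:

```python
import math

# Union-bound sketch: with |H| = M hypotheses, the chance that SOME h
# in H sees a BAD sample is at most 2 * M * exp(-2 * eps^2 * N).
def bad_prob_bound(M, N, eps):
    return 2 * M * math.exp(-2 * eps**2 * N)

M, eps, delta = 100, 0.1, 0.05   # example values, not from the lecture
# Smallest N making the bound <= delta: N >= ln(2M / delta) / (2 eps^2).
N = math.ceil(math.log(2 * M / delta) / (2 * eps**2))
print(N, bad_prob_bound(M, N, eps))   # N large enough => Eout(g) ≈ Ein(g)
```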