Hi dimant, thank you for the great question!
You are right: sadly, CV *over*estimates the error of f, the classifier trained on all the data, so the true error of f will usually be slightly lower. In this sense CV gives a conservative estimate: f is actually somewhat more accurate than the CV estimate suggests.
There is no way to measure the error of f directly: to measure it, we would need to train on all the data, but then no data would be left for testing. The closest we can get is to take
k := n
i.e. one fold per data point, so that each model is trained on n-1 of the n examples.
This method, called leave-one-out CV (LOOCV), gives a nearly unbiased estimate of the error of the classifier trained on all the data, because each fold's training set differs from the full data set by only a single point (though the estimate can still have noticeable variance). As far as I know, Billy is preparing an exercise on LOOCV.
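In the meantime, here is a minimal sketch of the idea in plain Python, assuming a toy 1-D dataset and a 1-nearest-neighbour rule just for illustration (the names `loocv_error` and `nn_predict` are made up for this example; any classifier could be plugged in):

```python
def nn_predict(train, x):
    """Predict the label of x using the single nearest training point."""
    nearest = min(train, key=lambda p: abs(p[0] - x))
    return nearest[1]

def loocv_error(data, predict):
    """Leave-one-out CV: hold out each point once, train on the rest."""
    errors = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]  # all points except the i-th
        if predict(train, x) != y:
            errors += 1
    return errors / len(data)  # fraction of held-out points misclassified

# Toy 1-D dataset: (feature, label) pairs.
data = [(0.0, 'a'), (0.2, 'a'), (0.5, 'a'), (0.9, 'b'), (1.1, 'b')]
print(loocv_error(data, nn_predict))
```

Note that the loop runs n times, once per data point, and each run trains on n-1 points; that is exactly why the LOOCV estimate sits so close to the error of the classifier trained on all n points.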