May 28th – Workshop Generalization and Overfitting

Update: A reading list is now available.

A huge part of the recent success of highly parameterized ML models is due to their apparent ability to generalize to unseen data. This ability is seemingly in tension with mathematical results from traditional statistics (e.g. the bias-variance trade-off) and statistical learning theory (e.g. PAC theorems), which rely heavily on either strong assumptions about the underlying probability distribution or restrictions on the hypothesis class. The predominant engineering view holds that ML theory fails here and that contemporary ML models generalize well even beyond the classical overfitting regime.
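As a minimal illustration of the classical picture the workshop questions (not part of the workshop material, and with all function names and parameters chosen here for the sketch): a high-degree polynomial fit to few noisy samples drives training error toward zero while held-out error grows, which is the textbook overfitting regime.

```python
# Sketch of classical overfitting via polynomial least squares.
# All choices (target function, noise level, degrees) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Noisy samples of a smooth target function."""
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.1, n)
    return x, y

def fit_poly(x, y, degree):
    """Least-squares polynomial coefficients (highest power first)."""
    return np.polyfit(x, y, degree)

def mse(coeffs, x, y):
    """Mean squared error of the fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)

for degree in (3, 15):
    c = fit_poly(x_train, y_train, degree)
    print(f"degree {degree:2d}: train MSE {mse(c, x_train, y_train):.4f}, "
          f"test MSE {mse(c, x_test, y_test):.4f}")
```

With 16 coefficients against 20 samples, the degree-15 fit nearly interpolates the noise, so its train-test gap is large; the puzzle the workshop addresses is why heavily overparameterized modern models often escape this behavior.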

This workshop aims to shed light on the generalization-overfitting tension and will address the following questions:

  • What measures of generalization and overfitting are used in theory and in practice?
  • Do ML models really generalize well?
  • Are ML models really overfit?
  • What is overfitting anyhow?
  • Which theoretical explanations exist for the generalization-overfitting phenomena?
  • Which pragmatic explanations exist for the generalization-overfitting phenomena?

The workshop takes place (via WebEx and in person) on May 28th at HLRS in Stuttgart, Room Berkeley/Shanghai. The WebEx stream will start at 10am CET.

WebEx Link:

Should you want to join in person, please write to

Participants (confirmed):

Schedule (Version May 27th)