Michael Herrmann

Researcher

Neural Networks’ Vulnerability to Adversarial Examples

Recent advances in video and image manipulation techniques have revealed a weakness of machine learning methods. Deep neural networks (DNNs) in particular, which have been highly successful in pattern recognition tasks (e.g. face recognition or road sign classification), have been shown to be vulnerable to so-called adversarial examples: such models misclassify perturbed inputs that differ only slightly from correctly classified examples drawn from the training data set. The perturbation is the addition of a noise signal whose presence is imperceptible to human beings. Since manipulated videos or photos can spread misinformation and undermine trust in media, techniques such as GANs (Generative Adversarial Networks) are considered a threat to real-world systems.
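
To make the construction concrete, the following is a minimal sketch in Python (with PyTorch) of the Fast Gradient Sign Method (FGSM), one standard way such perturbations are generated; the model, tensors, and epsilon value are illustrative placeholders, not part of any system discussed on this page.

    # Minimal FGSM sketch (illustrative; model and data are toy placeholders).
    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, label, epsilon=0.03):
        # Differentiate the loss with respect to the input itself.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # Step in the direction that increases the loss; a small epsilon
        # keeps the change below the threshold of human perception.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Illustrative usage with an untrained toy classifier:
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)     # stand-in for an image
    label = torch.tensor([3])        # stand-in for the true class
    x_adv = fgsm_perturb(model, x, label)
    print((x_adv - x).abs().max())   # perturbation is bounded by epsilon

On a trained network the same one-step procedure can flip the predicted class while leaving the input visually unchanged.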

  • Adversarial examples elicit undesired outputs from trained neural networks, even for inputs within the training data distribution, despite theoretical guarantees from statistical learning theory. What is the (theoretical) explanation?
  • From a philosophy of science point of view, this may shed light on the epistemological power of machine learning algorithms: do adversarial examples call the idea of generalization into question?
  • The existence of successful adversarial examples suggests that machine learning techniques may not learn the true underlying concepts that determine the correct output label.
  • How should we evaluate, in a security-critical context, the fact that ML systems can be fooled by a potential adversary? To what extent can issues of (mis)trust in real-world AI systems be resolved by technical and non-technical efforts?

The Epistemology of Machine Learning

Machine learning faces reliability and transparency problems, and we are in urgent need of an epistemology of data science. Research at the intersection of philosophy and machine learning already exists, but it focuses on ethical and political questions. What is lacking is a study of these methods in their formal epistemological capacity, as attempts to make justifiable inductive inferences. I therefore focus on engaging with the mathematical details of machine learning methods. For example, there is a gap between theory and practice that has been called the paradox of deep learning: neural networks perform surprisingly well in practice, much better than they should in theory.
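
One way to state this paradox precisely is via classical uniform-convergence bounds from statistical learning theory; the following is a standard textbook form of the VC bound (constants vary across sources). With probability at least 1 − δ over an i.i.d. sample of size n, every hypothesis h in a class \mathcal{H} satisfies

    R(h) \;\le\; \widehat{R}_n(h) + \sqrt{\frac{\mathrm{VCdim}(\mathcal{H})\left(\ln\frac{2n}{\mathrm{VCdim}(\mathcal{H})}+1\right) + \ln\frac{4}{\delta}}{n}}

where R is the true risk and \widehat{R}_n the empirical risk. For modern deep networks the VC dimension grows with the parameter count and typically exceeds n, so the bound is vacuous, while the test error observed in practice remains low; this is precisely the theory-practice gap at issue.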

Scientific Interests
  • Mathematics as a tool: its instrumental role in ML methods
  • Model-driven vs. data-driven science and their styles of reasoning
  • Role of values in computer-intensive methods
  • Philosophy of Computer Simulation and Statistics


Work in Progress
  • Morals About (Artificially) Injecting Pseudorandomness in Computations of Deterministic Quantities (Abstract)

Publications
  • Michael Herrmann (2024): Der Wandel von Statistik zu Maschinellem Lernen – Ein Kuhn’scher Paradigmen-Konflikt? SieB – Siegener Beiträge zur Geschichte und Philosophie der Mathematik, Bd. 17, hrsg. von Ralf Krömer (Bergische Universität Wuppertal) und Gregor Nickel (Universität Siegen)
  • Michael Herrmann (2019): Generieren wir eine Logik der Entdeckung durch Machine Learning? Steuern und Regeln: Jahrbuch Technikphilosophie 2019, S. 103-124
  • Christian Bischof, Nico Formanek, Petra Gehring, Michael Herrmann, Christoph Hubig, Andreas Kaminski und Felix Wolf (Hg.) (2017): Computersimulationen verstehen. Ein Toolkit für interdisziplinär Forschende aus den Geistes- und Sozialwissenschaften. Darmstadt: TU Prints, S. 35-99
  • Nico Formanek & Michael Herrmann (2017): Was ist eine numerische Lösung? (draft on demand)

Recent Talks
  • Monte-Carlo-Integration – Instrumentelle Rechtfertigung stochastischer Mathematik und Epistemologie der Iteration, Kolloquium CSS Lab, RWTH Aachen University, January 2021
  • Preliminaries to Trust in Machine Learning Algorithms and an In-Principle Argument for the Influence of Non-Epistemic Values in Machine Learning, Studienkolleg, University of Tübingen, 16.12.2021
  • Neural Network’s Vulnerability to Deepfake Detectors, Spring Workshop Trust in Information, Stuttgart, 10.06.2021
  • Epistemic Ruptures: How to evaluate methodological shifts in Machine Learning? Colloquium HLRS, 23.01.2023
  • Fiktionen, Modelle und Idealisierungen, Workshop Fiktion und Computersimulation, 10.03.2023
  • Paradigm Shift and a Reaction to the “Alchemy”-Debate in ML, Spring Workshop, HLRS, 27.03.2023
  • Book discussion of “The Shortcut: Why Intelligent Machines Do Not Think Like Us”, Spring Workshop with Nello Cristianini, HLRS, 04.04.2023
  • Mathematical Practice – Differences between Statistics and Machine Learning, Models, Metaphors and Simulations. Epistemic Transformations in Literature, Science and the Arts – joint conference of the SLSAeu (European Society for Literature, Science and the Arts) and the ELINAS Research Center for Literature and Natural Science, Erlangen-Nürnberg, 18.05.2023
  • Methodological Disagreement between Statistics and Machine Learning – a Case for a Kuhnian Paradigm Shift? 23. Rheinisch-Westfälisches Seminar zur Geschichte der Mathematik, Paderborn, 07.07.2023
  • Machine Learning as a Post-Statistical Style of Reasoning, Colloquium, University of Darmstadt, 02.04.2024

Teaching
  • Lectureship: Modellierung, Simulation und Optimierung I and II, University of Stuttgart, 01/2019 – 07/2020