General information about the courses of Prof. Dr. C. Nagel
- Lecturer: Christian Nagel
In this learning module, the core algorithms of artificial intelligence and their applications are introduced. Students explore the fundamental principles of machine learning through supervised, unsupervised, and reinforcement learning techniques. The module demonstrates how to identify patterns in data and use these models to predict unseen outcomes. Additionally, theoretical knowledge is reinforced with practical exercises addressing real-world challenges.
Upon successful completion, participants will have a solid understanding of learning systems and their practical applications. They will be able to:
- Understand the overall concept of learning from data through optimization
- Differentiate between various data learning methods: supervised, unsupervised, and reinforcement learning
- Apply the mathematical foundations and key algorithms to independently train machine learning models
- Comprehend the structure of neural networks and the building blocks of deep learning for solving real-world problems
- Design and monitor a machine learning training process
- Evaluate and validate machine learning models using diverse loss functions
- Identify common pitfalls in model training and resolve them effectively
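The outcomes above center on learning from data through optimization. As a minimal sketch (a hypothetical illustration, not course material), the following fits a line y = w*x + b by gradient descent on the mean squared error loss:

```python
# Hypothetical sketch: supervised learning as optimization.
# We fit y = w*x + b by gradient descent on the mean squared error (MSE).

def mse(w, b, data):
    """Mean squared error of the model y = w*x + b over (x, y) pairs."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def train(data, lr=0.01, steps=2000):
    """Plain gradient descent on the MSE loss; returns the fitted (w, b)."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # Partial derivatives of the MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free data generated by y = 2x + 1; training should recover it.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The same loop structure (compute loss, compute gradients, update parameters) underlies the training of far larger models; only the model and loss change.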
- Lecturer: Christian Nagel
In this module, students learn to use more advanced artificial intelligence algorithms and their applications to structured, unstructured, and temporal data. The basic idea and mathematical background of neural networks are introduced. Students learn how to train simple neural networks to learn patterns from data for regression and classification tasks. Furthermore, deep learning and its most common architectures are introduced, including convolutional and recurrent connections. Students learn how to train deep learning networks effectively by choosing optimal hyperparameters and how to avoid overfitting; to this end, methods such as regularization and dropout are explained. A further goal of this module is to introduce unsupervised learning and its application to clustering problems. The combination of unsupervised learning with neural networks is illustrated by introducing autoencoders. In addition, it is shown how to use unsupervised learning methods to reduce the dimensionality of datasets via feature selection and PCA techniques. After successfully completing this module, students know:
- How to handle structured, unstructured and temporal data
- What a neural network is and how it can be trained using backpropagation
- How to use different optimizers for neural networks
- The most important deep learning layers, such as convolutions
- How to train neural networks effectively and avoid overfitting
- The basic principles of unsupervised learning and their applications to real-world problems
- How to use feature selection and PCA methods to reduce the dimensionality of datasets
- Different forms of collaborative group work
- How to gather knowledge and share it within their learning group
- How to summarize and present the most important information on a specific topic
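One of the techniques named above, PCA-based dimensionality reduction, can be sketched in a few lines. This is a hypothetical illustration (not course material), assuming NumPy is available: it projects 3-dimensional data onto its top two principal components via the eigendecomposition of the covariance matrix.

```python
# Hypothetical sketch: dimensionality reduction with PCA.
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center the data
    cov = np.cov(Xc, rowvar=False)             # covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues, ascending
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top

rng = np.random.default_rng(0)
# 100 points that are almost planar: the third direction carries
# only a small amount of noise variance.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 3)) \
    + 0.01 * rng.normal(size=(100, 3))
Z = pca_reduce(X, 2)
print(Z.shape)  # (100, 2)
```

Because the data is nearly planar, the two retained components preserve almost all of the variance, which is exactly the situation in which PCA-based reduction is useful.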
- Lecturer: Christian Nagel
- Lecturer: Christian Nagel
This lecture gives you a comprehensive introduction to the fundamentals and applications of machine learning methods. After completing the course, you will be able to understand the basic principles of machine learning and apply this knowledge to solve practical problems.
The following areas are covered in depth:
- Different approaches to learning from data
- The mathematical foundations and key algorithms you need to train machine learning models independently
- Neural networks and their applications, for example in computer vision and natural language processing
- Methods for evaluating and validating machine learning models
- The most common pitfalls in model training and effective troubleshooting strategies
- Lecturer: Christian Nagel
This Master’s seminar studies how to build and improve LLM-based systems when compute is limited, using statistical principles for evaluation, optimization, and decision-making under uncertainty. Rather than training models from scratch, we focus on post-training and inference-time methods and on how to measure what works with reliable, sample-efficient experiments. We cover parameter-efficient adaptation, retrieval augmentation, preference-based optimization, and agentic workflows, with emphasis on benchmarks, uncertainty-aware evaluation, and cost-quality trade-offs.
Topics include:
- Post-training and alignment (instruction tuning, RLHF framing)
- Inference-time reasoning and controllable generation
- Parameter-efficient fine-tuning (PEFT) (e.g., LoRA, DoRA)
- Quantization and efficient adaptation (e.g., QLoRA, recent quantization methods)
- Constrained decoding and logit masking
- Retrieval-Augmented Generation (RAG) as non-parametric memory
- Preference-based evaluation and optimization (e.g., ORPO and ranking-based methods)
- Agentic systems: reasoning–acting loops, tool use, iterative self-improvement, and security
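One of the seminar topics, constrained decoding via logit masking, can be illustrated without any actual LLM. The following is a hypothetical toy sketch (vocabulary, logits, and allowed set are all invented): tokens outside an allowed set get their logits set to negative infinity, so the softmax assigns them zero probability and they can never be sampled.

```python
# Hypothetical sketch: constrained decoding by logit masking.
# No real LLM is involved; the 5-token vocabulary is a toy example.
import math

def masked_softmax(logits, allowed):
    """Softmax over logits with every token outside `allowed` masked out."""
    masked = [l if i in allowed else float("-inf")
              for i, l in enumerate(logits)]
    m = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of 5 tokens; suppose a grammar permits only tokens 1 and 3.
logits = [2.0, 1.0, 3.0, 0.5, -1.0]
probs = masked_softmax(logits, allowed={1, 3})
print([round(p, 3) for p in probs])  # probability mass only on indices 1 and 3
```

Note that token 2 has the highest raw logit but receives zero probability: the constraint overrides the model's unconstrained preference, which is the point of the technique.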
- Lecturer: Christian Nagel
The lecture covers the fundamentals of knowledge-based systems.
This includes different forms of knowledge representation, such as:
- Facts and rules
- Clauses
- Constraints
- Semantic networks
- Neural networks
In addition, various inference strategies and classical search algorithms are presented, such as:
- Depth-first and breadth-first search
- Rule-based systems with forward/backward chaining
- Unification
- Constraint algorithms
- Game-tree search
- Learning strategies in neural networks
- Genetic algorithms
Fundamentals of propositional and predicate logic
A typical AI programming language, illustrated in this lecture using Prolog and Yacss as examples
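Forward chaining, one of the inference strategies listed above, is easy to sketch. The following is a hypothetical illustration in Python rather than Prolog (the facts and rules are invented): rules fire whenever all their premises are known facts, adding their conclusion, until no rule can fire anymore.

```python
# Hypothetical sketch: forward chaining in a rule-based system.
# Each rule is a pair (premises, conclusion); premises is a set of facts.

def forward_chain(facts, rules):
    """Derive every fact reachable from `facts` by repeatedly firing rules."""
    known = set(facts)
    changed = True
    while changed:  # iterate until a fixed point is reached
        changed = False
        for premises, conclusion in rules:
            # A rule fires if all premises hold and its conclusion is new.
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

# Toy knowledge base: two chained rules about animal classification.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]
print(forward_chain({"has_fur", "gives_milk", "eats_meat"}, rules))
# derives both "mammal" and "carnivore"
```

Backward chaining, by contrast, starts from a goal such as "carnivore" and works backwards through the rules to the known facts; Prolog's resolution strategy is of that second kind.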
- Lecturer: Christian Nagel
Other courses

In this module, students learn to use more advanced artificial intelligence algorithms and their applications to structured, unstructured, and temporal data. The basic idea and mathematical background of neural networks are introduced. Students learn how to train simple neural networks to learn patterns from data for regression and classification tasks. Furthermore, deep learning and its most common architectures are introduced, including convolutional and recurrent connections. Students learn how to train deep learning networks effectively by choosing optimal hyperparameters and how to avoid overfitting; to this end, methods such as regularization and dropout are explained. A further goal of this module is to introduce unsupervised learning and its application to clustering problems. The combination of unsupervised learning with neural networks is illustrated by introducing autoencoders. In addition, it is shown how to use unsupervised learning methods to reduce the dimensionality of datasets via feature selection and PCA techniques. After successfully completing this module, students know:
- How to handle structured, unstructured and temporal data
- What a neural network is and how it can be trained using backpropagation
- How to use different optimizers for neural networks
- The most important deep learning layers, such as convolutions
- How to train neural networks effectively and avoid overfitting
- The basic principles of unsupervised learning and their applications to real-world problems
- How to use feature selection and PCA methods to reduce the dimensionality of datasets
- Different forms of collaborative group work
- How to gather knowledge and share it within their learning group
- How to summarize and present the most important information on a specific topic
- Lecturer: Dominik Rößle
- Lecturer: Torsten Schön
- Lecturer: Jörg Hunsinger
- Co-lecturer: Christian Nagel
The lecture covers the fundamentals of knowledge-based systems.
This includes different forms of knowledge representation, such as:
- Facts and rules
- Clauses
- Constraints
- Semantic networks
- Neural networks
In addition, various inference strategies and classical search algorithms are presented, such as:
- Depth-first and breadth-first search
- Rule-based systems with forward/backward chaining
- Unification
- Constraint algorithms
- Game-tree search
- Learning strategies in neural networks
- Genetic algorithms
Fundamentals of propositional and predicate logic
A typical AI programming language, illustrated in this lecture using Prolog and Yacss as examples
- Lecturer: Stefan Hahndel
- Co-lecturer: Christian Nagel