General information about the courses of Prof. Dr. Ignacio Alvarez
- Lecturer: Ignacio Alvarez

Driver states such as drowsiness, cognitive overload, or inattention are critical to road safety, yet their recognition is often complicated by ambiguous or conflicting signals. Human behavior and states are inherently complex and uncertain, a fact that traditional driver monitoring systems overlook because they typically output a single predicted state. Recent advances in large language models (LLMs) offer a unique opportunity to interpret such ambiguity through common-sense reasoning over multimodal inputs and to provide multiple context-aware interpretations and interventions.
In this project, students will design and prototype an in-vehicle assistant that detects and assesses a chosen driver state (drowsiness, attention, workload) in an ambiguity-aware way. The system will utilize multimodal inputs such as facial expressions, posture, eye tracking, and driving behavior, along with contextual cues (time of day, traffic, driving duration), to resolve data conflicts and ambiguity via LLM reasoning. Based on its interpretation, the assistant will present its top inferred states together with confidence levels and appropriate next steps, such as initiating an intervention or requesting further input. The system will also incorporate user feedback to refine its reasoning process and improve ambiguity resolution. Students will evaluate the system's usability and its perceived effectiveness in conveying ambiguous and complex states, exploring how LLM-driven reasoning can support more transparent and explainable human-vehicle interaction.
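A minimal sketch of how such ranked, confidence-annotated state estimates might be requested from and parsed out of an LLM. The prompt wording, JSON schema, and state labels here are illustrative assumptions, not part of the project brief:

```python
import json

def build_state_prompt(observations: dict, context: dict, k: int = 3) -> str:
    """Compose a prompt asking an LLM to rank the k most likely driver
    states, each with a confidence value and a suggested next step."""
    return (
        "You monitor a driver. Sensor signals may conflict; reason about the ambiguity.\n"
        f"Sensor observations: {json.dumps(observations)}\n"
        f"Context: {json.dumps(context)}\n"
        f"Answer with a JSON list of up to {k} objects with keys "
        "'state', 'confidence' (0-1), and 'next_step'."
    )

def parse_ranked_states(llm_reply: str) -> list:
    """Validate the model's JSON answer and sort it by confidence."""
    states = [s for s in json.loads(llm_reply) if 0.0 <= s["confidence"] <= 1.0]
    return sorted(states, key=lambda s: s["confidence"], reverse=True)

# Canned reply standing in for a real LLM API call:
reply = ('[{"state": "distracted", "confidence": 0.2, "next_step": "ask follow-up"},'
         ' {"state": "drowsy", "confidence": 0.7, "next_step": "suggest rest stop"}]')
ranked = parse_ranked_states(reply)
print(ranked[0]["state"], ranked[0]["confidence"])  # drowsy 0.7
```

In a real prototype the canned `reply` would come from an actual model call; the validation and confidence-based ranking stay the same.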
- Lecturer: Ignacio Alvarez
- Lecturer: Karthik Sai Pasupuleti
In this project, students will design and prototype an automotive agent powered by LLMs to detect and respond to driver sleepiness. The system will utilize multimodal inputs, such as facial expressions, voice tone, and driving behavior, to compute a sleepiness likelihood metric. Based on the metric and additional contextual awareness signals (e.g., time of day, driving duration), the agent will propose tailored interventions, such as adjusting cabin temperature, suggesting a rest stop, or initiating engaging conversations. Students will evaluate the system's usability, effectiveness, and user satisfaction, exploring the interplay between AI decision-making and driver interaction.
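One simple way to realize the sleepiness likelihood metric described above is a weighted fusion of normalized per-modality scores, with the intervention threshold gated by context. The modality names, weights, and thresholds below are arbitrary illustrative assumptions:

```python
def sleepiness_likelihood(signals: dict, weights: dict) -> float:
    """Weighted average of normalized (0-1) per-modality scores;
    modalities missing from `signals` are simply skipped."""
    used = [m for m in signals if m in weights]
    den = sum(weights[m] for m in used)
    return sum(weights[m] * signals[m] for m in used) / den if den else 0.0

def choose_intervention(score: float, ctx: dict) -> str:
    """Escalate sooner at night or on long drives (hypothetical policy)."""
    threshold = 0.5 if ctx.get("night") or ctx.get("hours_driving", 0) > 2 else 0.7
    if score >= threshold:
        return "suggest rest stop"
    if score >= threshold - 0.2:
        return "start engaging conversation"
    return "no action"

weights = {"eye_closure": 0.5, "voice_flatness": 0.2, "lane_deviation": 0.3}
score = sleepiness_likelihood({"eye_closure": 0.8, "lane_deviation": 0.6}, weights)
print(round(score, 3), choose_intervention(score, {"night": True}))  # 0.725 suggest rest stop
```

Skipping missing modalities and renormalizing the weights keeps the metric usable when a sensor (here, voice) drops out.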
- Lecturer: Ignacio Alvarez
In this course, students will customize AI agents to exhibit emotionally-aware behaviors that align with predefined user personas. Leveraging conversational design principles, they will develop tailored conversational flows, focusing on tone, empathy, and adaptability of the AI agent. Usability testing will be conducted to assess the assistant’s emotional relevance, user satisfaction, and the effectiveness of its personality in different in-car scenarios.
- Lecturer: Ignacio Alvarez
Within this seminar, we will analyze aspects of aggression, vandalism, harassment, and abuse directed at autonomous robots (e.g. delivery service robots, automated vehicles). Our goal is to develop interface and technological solutions that mitigate these risks while ensuring the vehicles remain safe and functional for all users. Based on an individual literature review (related work), students (in groups of two or three) will identify key challenges and explore innovative concepts. Possible topics will be developed together in a dedicated brainstorming session and refined/substantiated by the individual teams (with feedback from the lecturer).
- Lecturer: Ignacio Alvarez
- Co-lecturer: Anna Preiwisch
In this project, students will design and implement a prototype in-vehicle game that incorporates input from an AV safety model, such as an AV agent equipped with RSS in the CARLA driving simulator, to provide feedback on safety-related actions. Students will evaluate their game for usability, engagement, and its effectiveness in increasing safety awareness in driving scenarios. Deliverables will include a functional game prototype, an evaluation report, and insights into the application of gamification in AV safety education.
- Lecturer: Ignacio Alvarez

Generative AI can rapidly produce complex driving scenarios, but these outputs are often difficult for users to understand, control, or evaluate. Scenario descriptions appear as text or parameter lists that lack narrative structure, visual clarity, and transparent causal reasoning. This creates barriers for researchers, designers, and other stakeholders who must interpret scenario logic, detect unrealistic elements, or communicate scenario details to others. Treating scenarios as stories, i.e. as visual, temporal, and causal sequences, offers a more intuitive way to generate, refine, and understand AI-driven simulations.
In this project, students will design interactive interfaces that bridge natural-language generation, storyboard visualization, and explainable AI. They will develop a prototype that integrates conversational scenario creation, storyboard-based scenario representation, and explanatory overlays or narratives. Students will evaluate the impact of these interfaces on scenario comprehensibility, user control, trust, and error detection.
Deliverables will include literature reviews, UX concept design and interaction flows, prototype of integrated xAI scenario creation, usability analysis, scientific paper, and a mini-conference poster or demo.
Research questions / Possible topics for exploration:
- How does a conversational interface influence user creativity, control, and precision when generating AI-driven scenarios?
- Do storyboard-based visualizations improve user comprehension, communication, and memorability of scenarios compared to text-only descriptions?
- Which explanation modalities (causal graphs, annotated storyboards, narrative summaries) best support error detection and trust in AI-generated scenarios?
- How does integrating conversation, visualization, and explanation into a single workflow affect perceived usability and mental model clarity?
- Lecturer: Ignacio Alvarez
- Lecturer: Atharva Mahindrakar

As vehicles become increasingly intelligent and assistive, the challenge of designing intuitive, safe, and low-distraction interaction methods becomes more critical. Voice, visual, and touch interfaces, although common and widely used, can be intrusive or unreliable in certain situations. Electromyography (EMG) sensors offer a promising alternative by detecting subtle muscle activations that reflect a driver's intent or emotional state. These muscle-based gestures can enable silent, quick, and low-distraction communication between driver and vehicle systems, supporting a more natural, intuitive, and adaptive human-vehicle interaction.
In this project, students will explore how EMG-based gestures can be leveraged to improve in-vehicle interaction and feedback. They will design such gestures and investigate how they can support communication and feedback with an LLM-based in-cabin assistant: how drivers might express intent, preferences, or reactions to system assistance through adaptive gestures, and how LLMs can interpret or adapt to such inputs to enable natural, context-aware interactions. Students will analyze how this interaction modality affects overall usability, trust, and satisfaction, and examine how it complements or interferes with other modalities such as voice or touch.
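As a rough illustration of the sensing side, a gesture can be flagged from a raw EMG trace with a moving-RMS amplitude envelope and a dwell-time threshold. Window size and thresholds here are arbitrary assumptions; real EMG pipelines add filtering and per-user calibration:

```python
def rms_envelope(samples, window=5):
    """Moving RMS over a trailing window: a crude EMG amplitude envelope."""
    env = []
    for i in range(len(samples)):
        w = samples[max(0, i - window + 1): i + 1]
        env.append((sum(x * x for x in w) / len(w)) ** 0.5)
    return env

def detect_gesture(envelope, on=0.6, min_len=3):
    """Flag a gesture once the envelope stays above `on` for
    `min_len` consecutive samples (a simple dwell-time debounce)."""
    run = 0
    for v in envelope:
        run = run + 1 if v >= on else 0
        if run >= min_len:
            return True
    return False

burst = [0.0] * 5 + [1.0] * 5 + [0.0] * 5  # synthetic muscle activation
print(detect_gesture(rms_envelope(burst)))  # True
```

The dwell-time requirement is what keeps brief, involuntary muscle twitches from being misread as intentional gestures.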
- Lecturer: Ignacio Alvarez
- Co-lecturer: Vanchha Chandrayan
In this course, students will customize AI agents to exhibit emotionally-aware behaviors that align with predefined user personas. Leveraging conversational design principles, they will develop tailored conversational flows, focusing on tone, empathy, and adaptability of the AI agent. Usability testing will be conducted to assess the assistant’s emotional relevance, user satisfaction, and the effectiveness of its personality in different in-car scenarios.
- Lecturer: Ignacio Alvarez
Within this seminar, we will analyze aspects of aggression, vandalism, harassment, and abuse directed at autonomous robots (e.g. delivery service robots, automated vehicles). Our goal is to develop interface and technological solutions that mitigate these risks while ensuring the vehicles remain safe and functional for all users. Based on an individual literature review (related work), students (in groups of two or three) will identify key challenges and explore innovative concepts. Possible topics will be developed together in a dedicated brainstorming session and refined/substantiated by the individual teams (with feedback from the lecturer).
- Lecturer: Ignacio Alvarez
- Co-lecturer: Anna Preiwisch

Moodle for UXD Bachelor Elective - Introduction to Vibe Coding for User Experience Designers:
The course provides a practical introduction to the integration of generative AI and "vibe coding" into UX workflows.
Students will:
- Explore foundational concepts of Generative AI, focusing on LLMs.
- Learn techniques of prompt engineering to effectively direct AI tools.
- Apply AI-driven methods to enhance UX research processes, from data collection to synthesis and prototyping.
- Utilize AI-assisted development platforms, such as ChatGPT, Replit or Cursor, to prototype interactive user interfaces, tools and applications.
- Undertake a comprehensive final group project, culminating in designing, building, and presenting a software tool that addresses a defined UX issue.
- Lecturer: Ignacio Alvarez
- Co-lecturer: Vanchha Chandrayan

Emotions like anger, frustration, or anxiety can compromise driving performance by impairing situational awareness, reaction times, and decision making. Recent advancements in large language models (LLMs) have shown reasoning capabilities in detecting complex emotions using multiple modalities. By integrating multimodal sensor data and contextual driving information, LLMs can also be leveraged to detect vehicle occupant emotions and trigger actions to mitigate their impact on driving performance.
In this project, students will design and prototype an automotive in-cabin assistant that detects driver emotions and responds with context-aware recommendations to support safer driving. The system will utilize multimodal inputs (voice, facial features, heart rate, driving behavior) and contextual signals such as traffic conditions, driving duration, and weather. Based on this information, the assistant will generate appropriate responses, such as offering calming suggestions or initiating empathetic conversations. Students will evaluate the system's usability and its perceived appropriateness and supportiveness, and investigate how LLM-driven emotion recognition and context-aware responses can enhance human-vehicle interaction.
- Lecturer: Ignacio Alvarez
- Lecturer: Vanchha Chandrayan

This course is a hands-on exploration of how Generative AI is fundamentally reshaping the field of User Experience. We will move beyond traditional UX workflows to embrace a new paradigm where designers augment their skills with AI to not only design but also build and deploy intelligent applications.
We will start from first principles, demystifying how Large Language Models (LLMs) work. From there, you will learn to apply AI tools across the entire UX lifecycle: from collecting and analyzing user data to prototyping interactive UIs and creating functional web and Python-based applications.
The course culminates in the design and deployment of an AI Agent, a specialized AI system that can autonomously solve a problem of your choosing. This course is designed for a diverse cohort and assumes no prior coding expertise. You will learn to act as a creative director, guiding powerful AI tools to augment your unique skills—whether in design, research, or engineering—and bring sophisticated, data-driven ideas to life.
- Lecturer: Ignacio Alvarez
Other courses

The "Dissertation Seminar Human-Computer Interaction" provides a structure for you to start the process of researching and writing up your topic, in a group-learning based environment at THI. Research and high-level academic writing skills are also covered.
- Lecturer: Jakob Peintner
- Lecturer: Andreas Riener
- Co-lecturer: Ignacio Alvarez
- Co-lecturer: Tobias Huber
- Lecturer: Andreas Riener
- Co-lecturer: Ignacio Alvarez
- Co-lecturer: Bernhard Axmann
- Co-lecturer: Dmitri Hubbe
- Co-lecturer: Jakob Peintner
- Co-lecturer: Stefan Stoll
- Lecturer: Andreas Riener
- Co-lecturer: Ignacio Alvarez
- Co-lecturer: Anatoli Djanatliev
- Co-lecturer: Jan-Philipp Göbel
- Co-lecturer: Joaquin Sack
This course room provides all information, including dates, for the Bachelor's thesis seminar (for UXDB).
- Lecturer: Andreas Riener
- Co-lecturer: Ignacio Alvarez
- Co-lecturer: Tobias Huber
- Co-lecturer: Simon Nestler
- Co-lecturer: Veronika Ritzer
- Co-lecturer: Ingrid Stahl
- Co-lecturer: Christian Sturm
- Co-lecturer: Fuat Yüksel