How drivers and cars can understand each other better

By Sven Prawitz | Translated by AI | 4 min reading time


The goal of the "Karli" research project is to determine how a vehicle should communicate with its occupants depending on the degree of automation. This could, for example, reduce motion sickness.

Researchers are working on extracting relevant information from passenger monitoring and providing it as context for various vehicle functions, such as AI assistants and safety systems.
(Image: Fraunhofer IOSB/Zensch)

"Caution: if you continue reading now, you may feel sick on the winding road. We'll reach the highway in five minutes; it will be better then." Or: "It's about to rain, and automated driving will have to be suspended. Please prepare to drive yourself for a while. I'm sorry, but you'll have to stow your laptop securely now. Safety comes first."

This is how cars could communicate with their drivers in a few years. As vehicles become increasingly automated, their interaction with people also needs to be rethought. A research team from the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB and the Fraunhofer Institute for Industrial Engineering IAO has taken on this task in the "Karli" project, together with ten partners, including Continental, Ford and Audi, as well as several small and medium-sized enterprises and universities. Karli stands for Artificial Intelligence (AI) for Adaptive, Responsive and Level-Conform Interaction in the vehicle of the future.

Communication adapted to the level of automation

Depending on the level of automation, drivers must either keep their attention on the road or are free to engage in other activities; they may have about ten seconds to retake the wheel, or in some cases need not intervene at all. These differing requirements, and the ability to switch between levels depending on the road situation, make it a complex task to define and design appropriate interactions for each level. Furthermore, the interaction design must ensure that drivers always know the current level of automation, so they can fulfill their role in it.

The applications developed in the Karli project have three main focuses:

  • Warnings and indications are intended to promote behavior adapted to the level of automation. For example, they should prevent the driver from being distracted at a moment that requires attention to the road. How users are addressed is adapted to the respective level: visually, acoustically, haptically, or through a combination of these channels. The interaction is controlled by AI agents, whose performance and reliability the partners evaluate.

  • Predicting and minimizing motion sickness, one of the biggest problems of passive travel: between 20 and 50 percent of people suffer from it. Based on the accelerations expected on winding routes, occupants receive tips matched to their current activities at the right time.

  • Generated User Interfaces, or "GenUIn" for short, are the third focus of the Karli project. GenUIn creates personalized prompts and gives tips on how to reduce nausea, should it still occur. These tips can relate to the current activity, which is detected by sensors, and also take into account what options are available in the current context. By expressing their wishes, users can gradually personalize the entire in-vehicle interaction to their needs. The automation level is always taken into account: hints can be short and purely verbal if the driver is concentrating on the road, or detailed and delivered through visual channels if the vehicle is currently performing the driving task.
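The level-conform logic described in these three focuses can be illustrated with a minimal sketch. All names, levels and example texts here are illustrative assumptions, not project code: the function merely picks a hint channel and verbosity from the current automation level and the detected occupant activity.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """SAE-style automation levels, as referenced in the article."""
    L2_ASSISTED = 2     # driver must monitor the road at all times
    L3_CONDITIONAL = 3  # driver must be ready to take over on request
    L4_HIGH = 4         # no intervention needed within the operating domain

@dataclass
class Hint:
    channel: str  # "acoustic", "visual", or "haptic"
    detail: str   # "short" or "detailed"
    text: str

def level_conform_hint(level: Level, occupant_activity: str) -> Hint:
    """Choose how to address the user, adapted to the automation level."""
    if level is Level.L2_ASSISTED:
        # Eyes must stay on the road: keep hints short and purely acoustic.
        return Hint("acoustic", "short", "Winding road ahead.")
    if level is Level.L3_CONDITIONAL and occupant_activity == "reading":
        # Reading while a takeover may be requested risks motion sickness.
        return Hint("acoustic", "short",
                    "You may feel sick on this road; highway in five minutes.")
    # The vehicle is driving: detailed hints via the visual channel are fine.
    return Hint("visual", "detailed",
                "Rain expected soon. Please prepare to stow your laptop.")

hint = level_conform_hint(Level.L3_CONDITIONAL, "reading")
```

A production system would of course derive these rules from the project's evaluated AI agents rather than hard-coded branches; the sketch only shows the shape of the decision.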

Artificial intelligence and vision-language models

Various sensors are used to capture activities in the car, with optical sensors from interior cameras playing a central role. These are mandatory anyway under current legislation on automated driving to ensure the drivers' fitness to drive. The researchers then combine the visual data from the cameras with large language models to create so-called vision-language models (VLMs). These form the basis for modern assistance systems in (semi-)autonomous vehicles, semantically capturing situations in the interior and reacting to them. Frederik Diederichs compares the interaction in the vehicle of the future to a butler who stays in the background but knows the context and offers the occupants the best possible support.
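How such a pipeline could be wired together is sketched below. The `vlm` callable is a stand-in stub, not a real model API; in practice an actual vision-language model would answer the prompts. Note that only the textual answers, not raw images, are passed downstream, in line with the transparency and anonymization goals discussed next.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InteriorContext:
    description: str  # semantic scene summary from the VLM
    activity: str     # e.g. "reading", "sleeping", "driving"

def monitor_interior(frame: bytes,
                     vlm: Callable[[bytes, str], str]) -> InteriorContext:
    """Turn one interior-camera frame into semantic context.

    `vlm` takes an image plus a text prompt and returns a textual answer.
    The raw frame never leaves this function; downstream features such as
    AI assistants or safety systems see only the derived context.
    """
    description = vlm(frame, "Briefly describe what the passenger is doing.")
    activity = vlm(frame, "Answer with one word: reading, sleeping, or driving?")
    return InteriorContext(description=description, activity=activity)

# Trivial stub standing in for a real vision-language model.
def stub_vlm(frame: bytes, prompt: str) -> str:
    if "one word" in prompt:
        return "reading"
    return "The passenger is reading on a laptop."

context = monitor_interior(b"<jpeg bytes>", stub_vlm)
```

Keeping the model behind a narrow interface like this also makes it explicit which questions the system is allowed to ask about the camera image.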

"Decisive factors for the acceptance of such systems are trust in the service provider, data security and a direct benefit for the drivers," says Frederik Diederichs. Therefore, the best possible anonymization and data security as well as a transparent and explainable collection of data are essential. "Not everything that is in the field of vision of a camera is evaluated. It must be transparent what information a sensor collects and what it is used for." In another project (Anymos), the Fraunhofer researchers are working on anonymizing camera data, processing it in a data-efficient way, and effectively protecting it.

Production use from 2026

Another topic of the research project, which is funded by the Federal Ministry for Economic Affairs and Climate Action, is data efficiency. The project participants have chosen an approach that, by their own account, requires only a small amount of high-quality AI training data, collected empirically and generated synthetically. Based on this database, automobile manufacturers can decide which data they will need to collect later in series operation in order to run the system. "This keeps the data effort manageable and makes the project results scalable," explains Diederichs.


Just recently, Diederichs and his team put a mobile research laboratory based on a Mercedes EQS into operation to study user needs during automated Level 3 driving on real roads. The findings from the Karli project are tested and evaluated there in practice, so the first functions could appear in production vehicles from 2026. Diederichs sees a significant competitive advantage in a positive user experience with automated driving when car manufacturers "respond to user requirements with AI and individualize the interaction".

Project Karli: The participants

  • All-round team

  • Audi

  • Continental

  • Ford

  • Fraunhofer IAO

  • Fraunhofer IOSB

  • Stuttgart Media University (Hochschule der Medien)

  • Invensity

  • Semvox

  • Studiokurbos

  • TWT

  • University of Stuttgart IAT