Enhancing Trust in Neuro-Symbolic Explanations via Calibrated Linguistic Uncertainty

Announcement
Type: Master's thesis
Posting: Master Thesis Formalizing and Verifying LLM Based Abductive Reasoning for System Explanations-1.pdf
Supervisors: If you are interested or have questions, please contact:

Nicolas Schuler (email: nicolas.schuler@kit.edu, phone: +49-721-608-46537), Vincenzo Scotti (email: vincenzo.scotti@kit.edu)

Motivation

Critical AI decision-making demands formally trustworthy explanations. Current neuro-symbolic pipelines use Multimodal Language Models (MLMs) to translate visual data into logic, but often discard valuable uncertainty by forcing binary decisions. While MLMs naturally express confidence through linguistic markers (e.g., "likely"), these cues remain uncalibrated and unused in reasoning. This thesis aims to bridge this gap by extracting, calibrating, and propagating linguistic uncertainty into a probabilistic logic framework to enhance the robustness of explanations for self-adaptive systems.

Tasks

  • Investigate and compare different strategies for extracting epistemic uncertainty from Multimodal Language Models, ranging from verbalized markers to logit-based analysis.
  • Design and evaluate a calibration methodology to map these linguistic cues to reliable probability distributions (an extraction-and-calibration sketch follows this list).
  • Integrate the most promising approach into the existing logic pipeline to analyze the trade-offs between explanation complexity, logical validity, and user trust in the resulting probabilistic explanations (a propagation sketch follows as well).
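To make the first two tasks concrete, here is a minimal sketch, assuming that verbalized markers are read off the MLM's textual answer and calibrated with isotonic regression from scikit-learn; the marker lexicon, the prior scores, the verbalized_score helper, and the tiny dev set are hypothetical placeholders for whatever extraction strategy the thesis ends up comparing.

  # Illustrative sketch only: map verbalized uncertainty markers in an MLM answer
  # to heuristic scores, then calibrate those scores against observed correctness
  # on a held-out set. Lexicon, priors, and data below are hypothetical.
  import re
  from sklearn.isotonic import IsotonicRegression

  # Hypothetical lexicon: verbalized markers -> rough prior confidence.
  MARKER_PRIOR = {
      "certainly": 0.95, "definitely": 0.95, "likely": 0.75,
      "probably": 0.70, "possibly": 0.45, "unlikely": 0.20,
  }

  def verbalized_score(answer: str, default: float = 0.6) -> float:
      """Return the prior of the first uncertainty marker found in the answer."""
      for marker, prior in MARKER_PRIOR.items():
          if re.search(rf"\b{marker}\b", answer.lower()):
              return prior
      return default

  # Hypothetical dev set: (MLM answer, whether the grounded fact was actually correct).
  dev_set = [
      ("The pedestrian is likely crossing.", 1),
      ("There is possibly an obstacle ahead.", 0),
      ("The lane is definitely clear.", 1),
      ("It is unlikely that the light is red.", 0),
      ("The vehicle is probably braking.", 1),
  ]

  raw_scores = [verbalized_score(text) for text, _ in dev_set]
  labels = [label for _, label in dev_set]

  # Isotonic regression learns a monotone map from raw marker scores to calibrated probabilities.
  calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
  calibrator.fit(raw_scores, labels)

  new_answer = "The pedestrian is likely crossing the street."
  p_calibrated = float(calibrator.predict([verbalized_score(new_answer)])[0])
  print(f"calibrated probability: {p_calibrated:.2f}")

The same calibrator interface would also accept logit-based confidence scores in place of verbalized_score, which is what makes the two extraction strategies directly comparable.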
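For the third task, a minimal propagation sketch, assuming ProbLog as the probabilistic logic backend (the actual pipeline's formalism may differ); the predicates, the rule, and the probability values are hypothetical.

  # Illustrative sketch only: attach a calibrated probability to a logical fact
  # and query a small ProbLog program. Assumes the `problog` Python package;
  # predicates, rule, and numbers are hypothetical.
  from problog import get_evaluatable
  from problog.program import PrologString

  p_crossing = 0.78  # calibrated probability from the previous sketch (hypothetical value)

  model = f"""
  {p_crossing}::pedestrian_crossing.
  0.9::vehicle_moving.

  % A hazard is explained if a pedestrian crosses while the vehicle is moving.
  hazard :- pedestrian_crossing, vehicle_moving.

  query(hazard).
  """

  result = get_evaluatable().create_from(PrologString(model)).evaluate()
  for query, probability in result.items():
      print(query, probability)

Because the calibrated probability weights every explanation that depends on the fact, this is the point where the trade-offs between explanation complexity, logical validity, and user trust become measurable.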

Tools / Technology

Multimodal LLMs, Probabilistic Logic Programming, Uncertainty Calibration, Neuro-Symbolic AI

Note: Thesis offered in German or English