Abstract
Scientific understanding has long implied mechanisms and explanations, not merely correct predictions. As black-box AI systems can increasingly generate hypotheses and guide experiments, new questions arise. If an AI proposes a hypothesis that experiments confirm, but no human can explain why, has science advanced? Does opacity disqualify AI-driven results from “proper” science, or does it force us to rethink what understanding now means? When something is said to be “understood”, who needs to understand it? And what are the implications for the training of young researchers?
Moderator: Heiner Linke