KI 2023 – Keynote Speakers

Ute Schmid

University of Bamberg, Chair for Cognitive Systems

Topic: Near-miss Explanations to Teach Humans and Machines

Abstract: In explainable artificial intelligence (XAI), different types of explanations have been proposed -- feature highlighting, concept-based explanations, as well as explanations by prototypes and by contrastive (near-miss) examples. In my talk, I will focus on near-miss explanations, which are especially helpful for understanding the decision boundaries of neighbouring classes. I will show relations of near-miss explanations to cognitive science research, where it has been shown that structural similarity between a given concept and a concept to be explained has a strong impact on understanding and knowledge acquisition. Likewise, in machine learning, negative examples which are near misses have been shown to be more efficient than random samples in supporting convergence of a model to the intended concept. I will present an XAI approach to construct contrastive explanations based on near-miss examples and illustrate it in abstract as well as perceptual relational domains.
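As a rough illustration of the general idea (not the construction method presented in the talk), a near miss for a query instance can be taken to be the most similar training example belonging to a neighbouring class; contrasting the two then points at the features that carry the decision boundary. A minimal sketch in Python, assuming a plain vector representation and Euclidean distance:

```python
# Illustrative sketch only: pick the closest example from a *different* class
# as a candidate near-miss, to be contrasted with the query instance.
import numpy as np

def nearest_near_miss(query, X, y, query_label):
    """Return the training instance closest to `query` whose label differs
    from `query_label`, together with that label."""
    other = y != query_label                        # candidates from neighbouring classes
    dists = np.linalg.norm(X[other] - query, axis=1)
    idx = np.argmin(dists)
    return X[other][idx], y[other][idx]

# Toy usage on a 2-D, two-class dataset (hypothetical data).
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
miss, miss_label = nearest_near_miss(np.array([0.3, 0.2]), X, y, query_label=0)
print(miss, miss_label)  # nearest example of the contrasting class
```

In relational or perceptual domains, the Euclidean distance would of course be replaced by a structural similarity measure, which is where the approach presented in the talk goes beyond such a simple sketch.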

 

Asbjørn Følstad

SINTEF Digital, Sustainable Communication Technologies

Topic: Chatbots and large language models -- how advances in generative AI impact users, organizations and society 

Abstract: Generative AI, particularly large language models, is expected to bring about substantial change at the level of users, organizations, and society. While users over the last decade or so have become familiar with conversational interactions with computers through the use of chatbots, the availability of large language models – and the benefits of text and image generation through large language models or text-to-image services – has opened a wide range of new use cases. Individual users have already taken up generative AI, particularly for productivity purposes. Organizations regard generative AI as holding high potential benefit but also as entailing important challenges, e.g., in terms of security and privacy, as well as new forms of competition. At the level of society, generative AI has been the focus of substantial public debate. While there is increasing agreement on the need for regulation and means to control the development of generative AI, such regulation may be more effective if it seeks to guide development rather than curb it. Through human-oriented technology research, we can help guide the impact of generative AI to the benefit of users, organizations, and society. In this talk, I will discuss the knowledge base for such a human-oriented approach and point out important future research needs.

 

Selmer Bringsjord

Rensselaer Polytechnic Institute, Director of the Rensselaer AI & Reasoning (RAIR) Lab

Topic: Can We Verify That Neural-Network-based AIs are Ethically Correct?

Abstract: It would certainly seem desirable to verify, in advance of releasing a consequential artificial agent into our world, that this agent will not perpetrate evils against us.  But if the AI in question is, say, a deep-learning neural network such as GPT-4, can verification beforehand of its ethical correctness be achieved?  After rendering this vague question sufficiently precise with help from some computational logic, I pose a hypothetical challenge to a household robot --- Claude --- capable of protecting by kinetic force the family that bought it, where the robot's reasoning is based on GPT-4ish technology.  In fact, my challenge is issued to GPT-4 (and, perhaps later, a successor if one appears on the scene before the conference) itself, courtesy of input I supply in English that expresses the challenge, which in a nutshell is this: What ought Claude do in light of an intruder's threat to imminently kill a family member unless money is handed over?  While in prior work the adroit meeting of such a challenge by logic-based AIs has been formally verified ahead of its arising, in the case of deep-learning Claude, things don't --- to violently understate --- go so well, as will be seen.

As to how I bring to bear some computational logic to frame things, and in principle enable formal verification of ethical correctness to be achieved, I have the following minimalist expectations: Claude can answer queries via a large language model regarding whether hypothetical actions are ethically M, where M is any of the deontic operators at the heart of rigorous ethics; e.g., obligatory, forbidden, permissible, supererogatory, etc.  To support the answering of queries, I expect Claude to have digested vast non-declarative data produced via processing (tokenization, vectorization, matrix-ization) of standard ethical theories and principles expressed informally in natural language used in the past by the human race.  I also expect that Claude can obtain percepts supplied by its visual and auditory sensors.  Claude can thus perceive the intruder in question, and the violent, immoral threat issued by this human.  In addition, Claude is expected to be able to handle epistemic operators, such as knows and believes, and the basic logic thereof (e.g., that if an agent knows p, p holds).  Finally, I expect Claude to be proficient at elementary deductive and inductive reasoning over content expressed in keeping with the prior sentences in the present paragraph.
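As illustrative background only (not the formalism used in the talk), the operators mentioned above can be written schematically as follows; the last formula is the standard factivity principle that knowledge implies truth:

```latex
% Deontic operators over an action/proposition \varphi (illustrative notation):
%   O\varphi: obligatory,  F\varphi: forbidden,  P\varphi: permissible.
\[
  F\varphi \leftrightarrow O\neg\varphi,
  \qquad
  P\varphi \leftrightarrow \neg O\neg\varphi
\]
% Epistemic factivity: if agent a knows \varphi, then \varphi holds.
\[
  K_a\varphi \rightarrow \varphi
\]
```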

With these expectations in place, we can present hypothetical, ethically charged scenarios to Claude, the idea being that these scenarios will in fact arise in the future, for real.  Given this, if Claude can respond correctly as to how these scenarios should be navigated when we present them, and can justify this response with logically correct reasoning, ethical verification of Claude can at least in principle be achieved.

When the aforementioned intruder scenario is presented to GPT-4 operating as Claude's "mind", there is no rational reason to think ethical verification is in principle obtainable.

I end by considering an approach in which logic oversees and controls the use of neural-network processing, and calls upon deep learning in surgical fashion.

 

Björn Ommer
– joint keynote with INFORMATIK 2023 –

LMU Munich, Computer Vision & Learning Group/Stable Diffusion

Topic: Generative AI and the Future of Information Processing

Abstract: Recent breakthroughs in Generative AI have started a revolution in Computer Science and intelligent information processing. They are profoundly changing the way we interact, program, and solve problems with computers. Their impact on research, diverse fields of application, and on business and society can hardly be overestimated. We will explore pivotal milestones crucial to this revolution and identify what sets it apart from previous AI hype cycles. In particular, the talk will discuss Stable Diffusion and how it has led to the democratization of Generative AI. We will then discuss the future of AI, examine its implications for the ecosystem of application design, and consider the opportunities and challenges arising for Europe and its tech industry.