Declarative AI 2024

Rules, Reasoning, Decisions, and Explanations

Bucharest, Romania

16 - 22 September 2024

Keynotes


Jan Van den Bussche, Professor at Hasselt University (Belgium)

16 September 2024

Shapes Constraint Language: Logic, Queries, and Provenance

Abstract: The Shapes Constraint Language SHACL is a W3C-recommended formalism for expressing properties (shapes) of nodes in graph data, as well as integrity constraints on graph data based on such shapes. We will look at SHACL both as an expressive logic and as a query language. What is the expressiveness of the logic? Can we retrieve shapes, or validate constraints, through database query processing? Can we use shapes as a means to extract relevant subgraphs? Can we formalize the provenance of shapes through subgraphs that explain why a node satisfies a shape? This talk synthesizes joint work done in the past few years with collaborators Bart Bogaerts, Thomas Delva, Anastasia Dimou, and Maxime Jakubowski.
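To make the notion of a shape concrete, here is a minimal, hypothetical sketch (not taken from the talk) that checks a toy RDF graph against a single SHACL node shape using the rdflib and pySHACL Python libraries; the ex: namespace, the PersonShape, and the sample data are illustrative assumptions.

```python
# Minimal sketch (illustrative only): validating a toy graph against one SHACL
# node shape with rdflib and pySHACL. The ex: vocabulary is a made-up example.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;          # every ex:Person node must conform
    sh:property [
        sh:path ex:name ;               # the ex:name property ...
        sh:datatype xsd:string ;        # ... must be a string
        sh:minCount 1 ;                 # ... and must be present
    ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .

ex:alice a ex:Person ; ex:name "Alice" .
ex:bob   a ex:Person .                  # violates sh:minCount 1
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)      # False: ex:bob has no ex:name
print(report_text)   # human-readable validation report
```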

About: Jan Van den Bussche is professor of databases and theoretical computer science at Hasselt University in Belgium. He received his PhD from the University of Antwerp in 1993 under the supervision of Jan Paredaens. He has served as PC chair, and chair of the council, of the International Conference on Database Theory, and as PC chair, and chair of the executive committee, of the ACM Symposium on Principles of Database Systems. His main research interest is in models and query languages for a wide variety of data. In the Semantic Web field, his recent interests include intelligent architectures for the decentralized Web and shapes in graph data.

Alessandra Mileo, Associate Professor at Dublin City University (Ireland)

17 September 2024

Neuro-Symbolic AI and Human-Centred Explainability

Abstract: Neuro-Symbolic AI is a fast-growing area of research. However, there is still much untapped potential in leveraging neuro-symbolic approaches to address the need for explainability and confidence, which are key requirements when AI is used to support human experts in high-stakes decision making. In this talk I will discuss how Deep Representations, Knowledge Graphs, Cognitive Reasoning and Human Experts can be used as key ingredients in the design of a Neuro-Symbolic cycle for human-centred explainability. I will discuss challenges in the design of such a cycle as well as opportunities for the adoption of Neuro-Symbolic AI in real-world scenarios.

About: Alessandra Mileo is Associate Professor in the School of Computing, Dublin City University. She is also a Principal Investigator in the Insight Centre for Data Analytics, a Funded Investigator in the Advanced Manufacturing Research Centre, and a Fellow of the Higher Education Academy (FHEA). Dr. Mileo has secured over 1 million euros in funding across national (SFI, IRC), international (EU, NSF) and industry-funded projects, has published 100+ papers, and is an active PC member for over 20 conferences and journals. She is a member of the European AI Alliance, the Italian Association for AI (AIIA), AAAI, and the Association for Logic Programming (ALP), and has been a Steering Committee member of the Web Reasoning and Rule Systems Association (RRA) since 2015. Her current research agenda focuses on Explainable Artificial Intelligence, specifically leveraging Neuro-Symbolic Learning and Reasoning as well as Knowledge Graphs to support high-stakes decision making, with applications in diagnostic imaging and additive manufacturing.

Marko Grobelnik, Co-lead of the Artificial Intelligence Lab at Jozef Stefan Institute (Slovenia)

17 September 2024

LLMs as a Revitalizing Pill for Symbolic AI

Abstract: In recent years, symbolic AI has faced challenges in adapting to complex, unstructured data environments. However, the emergence of large language models (LLMs) presents an opportunity to rejuvenate symbolic AI methodologies. This keynote will explore how LLMs can act as a revitalizing force for symbolic AI, offering new avenues for integrating data-driven learning with structured, rule-based systems. We will discuss techniques for leveraging LLMs to extract, refine, and operationalize symbolic representations that can enhance reasoning, decision-making, and knowledge inference. By bridging the gap between the flexible, statistical nature of LLMs and the rigorous, formal structures of symbolic AI, this approach aims to create hybrid systems that retain the strengths of both paradigms. Through case studies and practical applications, the talk will demonstrate how these revitalized symbolic AI systems can achieve greater transparency, interpretability, and robustness in complex AI-driven tasks.
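As a deliberately simplified illustration of operationalizing symbolic representations extracted from an LLM (this is not from the keynote; the triple-like output format, the predicates, and the rule syntax are all assumptions), the sketch below parses LLM-style text into facts and rules and applies naive forward chaining:

```python
# Hypothetical sketch: treat text that an LLM might produce as symbolic facts
# and rules, then apply them with plain forward chaining. The "fact:"/"rule:"
# output format and the unary-predicate syntax are assumptions for illustration.
llm_output = """
fact: human(socrates)
rule: mortal(X) :- human(X)
"""

facts, rules = set(), []
for line in llm_output.strip().splitlines():
    kind, _, body = line.partition(": ")
    if kind == "fact":
        facts.add(body.strip())
    elif kind == "rule":
        head, _, cond = body.partition(" :- ")
        rules.append((head.strip(), cond.strip()))

# Naive forward chaining over unary predicates: human(socrates) matches
# human(X), so mortal(socrates) is derived.
changed = True
while changed:
    changed = False
    for head, cond in rules:
        cond_pred = cond.split("(")[0]
        for fact in list(facts):
            if fact.startswith(cond_pred + "("):
                arg = fact[fact.index("(") + 1:-1]
                derived = head.replace("(X)", f"({arg})")
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(sorted(facts))   # ['human(socrates)', 'mortal(socrates)']
```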

About: Marko Grobelnik is a researcher in the field of Artificial Intelligence (AI). He co-leads the Artificial Intelligence Lab at Jozef Stefan Institute, co-founded the UNESCO International Research Center on AI (IRCAI), and is the CEO of Quintelligence.com, which specializes in solving complex AI tasks for the commercial world. Marko is a co-author of several books, a co-founder of several start-ups, and has been involved in over 100 EU-funded research projects in various fields of Artificial Intelligence. His organisational activities include serving as general chair of the LREC 2016 and TheWebConf 2021 conferences. Marko represents Slovenia in the OECD AI committees (AIGO/ONEAI), the Council of Europe Committee on AI (CAHAI/CAI), NATO (DARB), and the Global Partnership on AI (GPAI). In 2016 he became Digital Champion of Slovenia at the European Commission.

Stefan Borgwardt, Research Group Leader at TU Dresden (Germany)

18 September 2024

Explaining Description Logic Reasoning

Abstract: While logic-based reasoning is explainable in theory, understanding the explanations often requires expert training and a lot of time. For explaining consequences entailed by description logic ontologies, the main form of explanation is the so-called justification, which pinpoints the axioms responsible for an entailment. However, with large justifications or expressive logics, it can be hard to see why the entailment follows from the justification. In recent years, we have studied approaches for computing proofs, which consist of simple steps for explaining an entailment, but may contain many such steps. We investigated different methods of computing proofs that are optimized according to quality criteria such as the size of the proof. In addition to evaluating these ideas on existing description logic ontologies, we have conducted a series of user studies to find out how best to present proofs to users of description logic ontologies. Moreover, we implemented our algorithms in the Protégé plug-in Evee, which can not only explain why an entailment holds, but also why an expected entailment does not follow from the ontology.
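As an illustrative toy example (not taken from the talk), the following shows a small set of concept inclusion axioms, a justification for an entailed subsumption, and a proof that decomposes the entailment into simple steps:

```latex
% Toy example (illustrative only): a justification and a step-wise proof.
% Ontology O with concept inclusion axioms:
%   O = { A \sqsubseteq B,  B \sqsubseteq C,  C \sqsubseteq D,  E \sqsubseteq F }
\[
  \mathcal{O} \models A \sqsubseteq D
  \qquad\text{with justification}\qquad
  \mathcal{J} = \{ A \sqsubseteq B,\; B \sqsubseteq C,\; C \sqsubseteq D \}
\]
% A proof decomposes the entailment into simple steps,
% here two applications of transitivity of \sqsubseteq:
\[
  \frac{A \sqsubseteq B \quad B \sqsubseteq C}{A \sqsubseteq C}
  \qquad
  \frac{A \sqsubseteq C \quad C \sqsubseteq D}{A \sqsubseteq D}
\]
```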

About: Stefan Borgwardt is a junior research group leader affiliated with the Faculty of Computer Science at TU Dresden, Germany. He obtained two Master's-level degrees in Computer Science and Mathematics in 2010, and a PhD in Computer Science in 2014. Since then, his research has focused on ontology languages with quantitative features, such as temporal and fuzzy description logics, and on explaining logical reasoning. Currently, he is a principal investigator in the Center for Perspicuous Computing, pioneering interdisciplinary research on user-centric description logic explanations. Together with his co-authors, he has received Best Paper Awards at multiple conferences, including JELIA and RuleML+RR. He has been a member of the steering committees of the DL workshop and the KR conference, and has helped organize several international workshops and conferences.