Short Biography:
Cynthia Rudin is the recipient of the IJCAI-25 John McCarthy Award, the 2024 INFORMS Society on Data Mining Prize, and the $1M 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award and a 2022 Guggenheim Fellow. She is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, AAAI, and AAAS.
Prof. Rudin is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for AAAI, ACM SIGKDD, DARPA, the National Institute of Justice, the National AI Advisory Committee's subcommittee on Law Enforcement (NAIAC-LE), and the National Academies of Sciences, Engineering and Medicine.
Some of her collaborative projects are: (1) Sparse models: she develops practical code for optimal decision trees and sparse scoring systems, used to create models for high-stakes decisions. Some of these models are used to manage treatment and monitoring for patients in hospital intensive care units. (2) Power reliability: she led the first major effort to maintain a power distribution network with machine learning (in NYC). (3) Crime series analysis: she developed algorithms for crime series detection, which allow police detectives to find patterns of housebreaks. Her code was developed with detectives in Cambridge, MA, and later adopted by the NYPD. (4) Interpretable neural networks: her lab developed the ProtoPNet framework, which uses case-based reasoning. (5) Observational causal inference: she develops matching methods with the Duke Almost Matching Exactly Lab. (6) Data visualization: her lab developed the popular PaCMAP algorithm for dimension reduction for data visualization.
Title:
Many Good Models Leads To…
Abstract:
As it turns out, having many good models leads to amazing things! The Rashomon Effect, coined by Leo Breiman, describes the phenomenon that there exist many equally good predictive models for the same dataset. This phenomenon happens for many real datasets, and when it does, it sparks both magic and consternation, but mostly magic. In light of the Rashomon Effect, my collaborators and I propose to reshape the way we think about machine learning, particularly for tabular data problems in the nondeterministic (noisy) setting. I'll address how the Rashomon Effect impacts (1) the existence of simple-yet-accurate models, (2) flexibility to address user preferences, such as fairness and monotonicity, without losing performance, (3) uncertainty in predictions, fairness, and explanations, (4) reliable variable importance, (5) algorithm choice, specifically, providing advance knowledge of which algorithms might be suitable for a given problem, and (6) public policy. I'll also discuss a theory of when the Rashomon Effect occurs and why: interestingly, noise in data leads to a large Rashomon Effect. My goal is to illustrate how the Rashomon Effect can have a massive impact on the use of machine learning for complex problems in society.
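To make the Rashomon Effect concrete, here is a minimal, illustrative sketch (not code from the speaker's work): it fits a small, arbitrary zoo of off-the-shelf models to one tabular dataset and counts how many land within a margin epsilon of the best test accuracy. The dataset, the choice of models, and the value of epsilon are assumptions made purely for illustration.

```python
# Illustrative sketch of the Rashomon Effect (assumed setup, not the speaker's code):
# count how many models from a small, arbitrary "zoo" reach test accuracy within
# epsilon of the best model found.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small zoo of models differing in class and hyperparameters.
models = (
    [LogisticRegression(C=c, max_iter=5000) for c in (0.01, 0.1, 1.0, 10.0)]
    + [DecisionTreeClassifier(max_depth=d, random_state=0) for d in (2, 3, 4, 5)]
    + [RandomForestClassifier(n_estimators=n, random_state=0) for n in (10, 100)]
    + [GradientBoostingClassifier(max_depth=d, random_state=0) for d in (1, 2, 3)]
)

accs = [m.fit(X_tr, y_tr).score(X_te, y_te) for m in models]
best, epsilon = max(accs), 0.01  # "Rashomon set": models within epsilon of the best
rashomon = [type(m).__name__ for m, a in zip(models, accs) if a >= best - epsilon]
print(f"best accuracy {best:.3f}; {len(rashomon)} of {len(models)} models in the Rashomon set")
```

On many real tabular datasets, a surprisingly diverse subset of such a zoo falls inside the margin, which is exactly the phenomenon the talk builds on.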
Short Biography:
Elias Bareinboim is an associate professor in the Department of Computer Science and the director of the Causal Artificial Intelligence (CausalAI) Laboratory at Columbia University. His research focuses on causal and counterfactual inference and their applications to artificial intelligence, machine learning, and data science in biomedical and social domains. His scientific contributions include the first general solution to the problem of 'data fusion,' providing practical methods for combining data generated under different experimental conditions and affected by various biases. Bareinboim currently serves as the editor-in-chief of the Journal of Causal Inference (JCI), the first journal dedicated to causal inference research, and as an action editor of the Journal of Machine Learning Research (JMLR), the premier journal focused on machine learning.
Short Biography:
Francisco Herrera received his M.Sc. in Mathematics in 1988 and Ph.D. in Mathematics in 1991, both from the University of Granada, Spain. He is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada and Director of the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI). He is an academician of the Royal Academy of Engineering (Spain).
He has supervised more than 65 Ph.D. students. He has published more than 650 journal papers, which have received more than 171,000 citations (Google Scholar, h-index 191). He has been named a Highly Cited Researcher in the fields of Computer Science and Engineering (2014 to present, Clarivate Analytics). He serves on the editorial boards of a dozen journals.
His current research interests include, among others, computational intelligence, information fusion and decision making, trustworthy artificial intelligence, general-purpose artificial intelligence, and data science.
Title:
Not Just a Trend: Institutionalizing XAI for Responsible and Compliant AI Systems
Abstract:
As artificial intelligence (AI) systems increasingly mediate decisions in high-stakes domains—from healthcare and finance to public policy—the demand for explainable AI (XAI) has grown rapidly. Yet many current XAI approaches remain disconnected from the practical needs of stakeholders and the requirements of emerging regulatory frameworks. This talk argues that XAI must not be treated as a passing trend or optional technical add-on, but as a foundational principle in the design and deployment of AI systems. We critically examine the state of the field, exposing the gap between model-centric explainability and stakeholder-centric accountability. In response, we propose a framework that aligns explainability with legal, ethical, and social responsibilities, emphasizing co-design with affected users, sensitivity to institutional contexts, and governance over opacity. Our goal is to advance XAI from superficial compliance toward deeply integrated transparency that fosters trust, accountability, and responsible innovation.
Short Biography:
Mirella Lapata is professor of natural language processing in the School of Informatics at the University of Edinburgh. Her research focuses on developing AI systems that not only follow patterns but reason, generalize, and adapt to novel situations. She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Sparck Jones award and a Fellow of the Royal Society of Edinburgh, the ACL, and Academia Europaea.
Mirella has also received best paper awards in leading NLP conferences and has served on the editorial boards of the Journal of Artificial Intelligence Research, the Transactions of the ACL, and Computational Linguistics. She was president of SIGDAT (the group that organizes EMNLP) in 2018. She has been awarded an ERC consolidator grant, a Royal Society Wolfson Research Merit Award, and a UKRI Turing AI World-Leading Researcher Fellowship.
Title:
Compositional Intelligence: Coordinating Multiple LLMs for Complex Tasks
Abstract:
Recent years have witnessed the rise of ever larger and more sophisticated large language models (LLMs) capable of performing every task imaginable, sometimes at (super)human level. In this talk, I will argue that in many realistic scenarios, solely relying on a single general-purpose LLM is suboptimal. A single LLM is likely to under-represent real-world data distributions, heterogeneous skills, and task-specific requirements. Instead, I will discuss multi-LLM collaboration as an alternative to monolithic generative modeling. By orchestrating multiple LLMs, each with distinct roles, perspectives, or competencies, we can achieve more effective problem-solving while being more inclusive and explainable. I will illustrate this approach through two case studies: narrative story generation and visual question answering, showing how a society of agents can collectively tackle complex tasks while pursuing complementary subgoals. Additionally, I will explore how these agent societies leverage reasoning to improve performance.
Short Biography:
Nuria Oliver is Director of the ELLIS unit Alicante Foundation (https://ellisalicante.org), known as The Institute of Humanity-centric AI. She is co-founder and vice-president of ELLIS (https://ellis.eu). Previously, she was Chief Scientific Advisor to the Vodafone Institute, Director of Data Science Research at Vodafone, Scientific Director at Telefónica, and a researcher at Microsoft Research. She holds a PhD from the Media Lab at MIT and an Honorary Doctorate from the University Miguel Hernández. She is also Chief Data Scientist at DataPop Alliance. She is an IEEE Fellow, ACM Fellow, EurAI Fellow, ELLIS Fellow, and an elected permanent member of the Royal Academy of Engineering of Spain. She is also a member of the CHI Academy, the Academia Europaea, and a corresponding member of the Academy of Engineering of Mexico. She is well known for her work in computational models of human behavior, human-computer interaction, mobile computing, and big data for social good. She is a named inventor on 40 patents. She is passionate about the potential of AI to be a driver for Social Good.
Title:
Towards a fairer world -- Uncovering and addressing human and algorithmic biases
Abstract:
In my talk, I will first briefly present ELLIS Alicante (https://ellisalicante.org), the only ELLIS unit that has been created from scratch as a non-profit research foundation devoted to responsible AI for Social Good. Next, I will provide an overview of AI with a focus on the ethical implications and limitations of today’s AI systems, including algorithmic discrimination and bias. On this topic, I will present a few examples of our work on uncovering and mitigating both human and algorithmic biases with AI. On the human front, I will present the body of work that we have carried out in the context of AI-based beauty filters that are so popular on social media [1,2,3]. On the algorithmic front, I will explain the main approaches to address algorithmic discrimination and I will present three novel methods to achieve fairer decisions [4,5,6].
Short Biography:
Pedro Domingos is a professor of computer science at the University of Washington and the author of "The Master Algorithm" and "2040". He is a winner of the SIGKDD Innovation Award and the IJCAI John McCarthy Award, two of the highest honors in data science and AI, and a Fellow of AAAS and AAAI. His research spans a wide variety of topics in machine learning, artificial intelligence, and data science. He helped start the fields of statistical relational AI, data stream mining, adversarial learning, machine learning for information integration, and influence maximization in social networks.
Title:
Tensor Logic: A Simple Unification of Neural and Symbolic AI
Abstract:
Deep learning has achieved remarkable successes in language generation and other tasks, but is extremely opaque and notoriously unreliable. Both of these problems can be overcome by combining it with the sound reasoning and transparent knowledge representation capabilities of symbolic AI. Tensor logic accomplishes this by unifying tensor algebra and logic programming, the formal languages underlying deep learning and symbolic AI, respectively. Tensor logic is based on the observation that predicates can be compactly represented as Boolean tensors, and this representation extends straightforwardly to numeric ones. The two key constructs in tensor logic are tensor join and project, numeric operations that generalize database join and project. A tensor logic program is a set of tensor equations, each expressing a tensor as a series of tensor joins, a tensor project, and a univariate nonlinearity applied elementwise. Tensor logic programs can succinctly encode most deep architectures and symbolic AI systems, and many new combinations. In this talk I will describe the foundations and main features of tensor logic, and present efficient inference and learning algorithms for it. A system based on tensor logic achieves state-of-the-art results on a suite of language and reasoning tasks. How tensor logic will fare on trillion-token corpora and associated tasks remains an open question.
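As a rough illustration of the join-project-nonlinearity pattern described above (a sketch under assumed encodings, not the speaker's implementation), the Datalog rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z) can be written as a single tensor equation: join the parent tensor with itself over the shared index Y, project Y out, and apply an elementwise step nonlinearity to return to a Boolean tensor.

```python
# Illustrative sketch (assumed encoding, not the speaker's code): one tensor logic
# equation in NumPy for the rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
import numpy as np

people = ["ann", "bob", "cal", "dee"]
parent = np.zeros((4, 4))   # parent[i, j] = 1  <=>  person i is a parent of person j
parent[0, 1] = 1            # ann -> bob
parent[1, 2] = 1            # bob -> cal
parent[1, 3] = 1            # bob -> dee

# Tensor join over the shared variable Y and projection (summing Y out) via einsum,
# followed by an elementwise step nonlinearity mapping counts back to {0, 1}.
grandparent = (np.einsum("xy,yz->xz", parent, parent) > 0).astype(float)

for i, j in zip(*np.nonzero(grandparent)):
    print(f"grandparent({people[i]}, {people[j]})")   # ann/cal and ann/dee
```

With real-valued tensors and smooth nonlinearities in place of the step function, the same equation form can also express neural network layers, which is how, per the abstract, tensor logic programs encode deep architectures as well as symbolic systems.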
Short Biography:
Sašo Džeroski is Head of the Department of Knowledge Technologies at the Jozef Stefan Institute and full professor at the Jozef Stefan International Postgraduate School, both in Ljubljana, Slovenia. He is a fellow of EurAI, the European Association for Artificial Intelligence, in recognition of his "Pioneering Work in the field of AI". He is a member of the Macedonian Academy of Sciences and Arts and a member of Academia Europaea. He is past president and current vice-president of SLAIS, the Slovenian Artificial Intelligence Society.
His research interests focus on explainable machine learning, computational scientific discovery, and semantic technologies, all in the context of artificial intelligence for science. His group has developed machine learning methods that learn explainable models from complex data in the presence of domain knowledge: these include methods for multi-target prediction, semi-supervised and relational learning, and learning from data streams, as well as automated modelling of dynamical systems.
Professor Džeroski has led (as coordinator) many national and international (EU-funded) projects and has participated in many more. He is also the technical coordinator of the Slovenian Artificial Intelligence Factory. The work of Professor Džeroski has been extensively published and is highly cited: with more than 26,500 citations and an h-index of 76 (in the Google Scholar database), Prof. Džeroski is the most frequently cited computer scientist in Slovenia (according to the 2025 ranking by Research.com).
Title:
Artificial Intelligence for Science
Abstract:
Artificial intelligence is already transforming science, with its future impact expected to be even greater. Realizing this potential requires addressing key scientific challenges, such as ensuring explainability (of models and their predictions), learning effectively from limited data, and integrating data with prior domain knowledge. It also requires support for open and reproducible science through the formalization and sharing of scientific knowledge.
I will present an overview of my research on the development of AI methods suitable for use in science. These include explainable machine learning methods, such as multi-target prediction and relational learning, that deliver accurate yet interpretable models suitable for complex scientific domains. These methods have been applied in environmental science, life science, and materials science.
Learning from limited data is critical in science. I will discuss two complementary approaches: semi-supervised learning, which leverages unlabeled data directly, together with labeled data, and foundation models, which use representations learned from vast unlabeled data to support downstream tasks with minimal supervision, i.e., limited amounts of labeled data. Both paradigms expand AI’s reach into data-scarce scientific problems.
I will then present our work on automated scientific modeling, where we learn interpretable models of dynamical systems — such as process-based models and differential equations — from time series data and domain knowledge. Finally, I will highlight the role of ontologies and semantic technologies in experimental computer science, including machine learning and optimization. In these areas, we have developed ontologies for the representation and annotation of both data and other artefacts produced by science, such as algorithms, models, and results of experiments.