The CIRSS speaker series continues in Spring 2025 with a new theme of “Generative AI and the Future of Research.” Our speakers will share their research on the opportunities and risks associated with the rapidly evolving landscape of generative AI usage in scholarship.
We meet most Wednesdays, 9am-10am US Central Time, on Zoom. This event is open to the public, and everyone is welcome to attend. The series is hosted by the Center for Informatics Research in Science and Scholarship (CIRSS) at the School of Information Sciences at the University of Illinois Urbana-Champaign, and our Spring series is led by Yuanxi Fu and Timothy McPhillips. If you have any questions, please contact Janet Eke.
Participate: To join a live talk, follow the “Join Here” link for the current week below to access the iSchool event page for the talk. From there, click the “PARTICIPATE online” button to join the live Zoom session. Recordings of past talks can be found via the “Recording” links below if available.
Follow: To receive weekly updates on upcoming talks, subscribe to our CIRSS Seminars mailing list at https://lists.ischool.illinois.edu/lists/info/cirss-seminars. To add events to your calendar, subscribe via Google Calendar or Outlook.
Spring 2025 Speakers

James Zou, Stanford University
Wednesday January 22, 2025, 9am-10am CT
Title: AI scientists for biomedical discoveries
Abstract: This talk will explore how generative AI agents can enable scientific discoveries. First, I’ll introduce the Virtual Lab—a collaborative team of AI scientist agents conducting in silico research meetings to tackle open-ended R&D projects. The Virtual Lab designed new nanobody binders to recent Covid variants that we experimentally validated. Then I will discuss how generative AI can expand researchers’ creativity by designing and experimentally validating new small-molecule drugs. I will conclude by discussing some interesting open problems and opportunities in designing and optimizing multi-agent interactions.
Bio: James Zou is an associate professor of Biomedical Data Science, Computer Science, and Electrical Engineering at Stanford University. He works on advancing the foundations of machine learning and on in-depth scientific and clinical applications. Many of his innovations are widely used in the tech and biotech industries. He has received a Sloan Fellowship, an NSF CAREER Award, two Chan Zuckerberg Investigator Awards, a Top Ten Clinical Achievement Award, several best paper awards, and faculty awards from Google, Amazon, and Adobe. His research has also been profiled in the popular press, including The New York Times, The Wall Street Journal, and WIRED.

Mario Krenn, Max Planck Institute for the Science of Light
Wednesday January 29, 2025, 9am-10am CT
Title: Towards an Artificial Muse for New Ideas in Science
Abstract: Artificial intelligence (AI) is a potentially disruptive tool for physics and science in general. One crucial question is how this technology can contribute at a conceptual level to help acquire new scientific understanding or inspire surprising new ideas. I will talk about how AI can be used as an artificial muse in physics, suggesting surprising and unconventional ideas and techniques that the human scientist can interpret, understand, and generalize to their fullest potential.
[1] Krenn, Kottmann, Tischler, Aspuru-Guzik, Conceptual understanding through efficient automated design of quantum optical experiments. Physical Review X 11(3), 031044 (2021).
[2] Krenn, Pollice, Guo, Aldeghi, Cervera-Lierta, Friederich, Gomes, Häse, Jinich, Nigam, Yao, Aspuru-Guzik, On scientific understanding with artificial intelligence. Nature Reviews Physics 4, 761–769 (2022).
[3] Krenn et al., Forecasting the future of artificial intelligence with machine learning-based link prediction in an exponentially growing knowledge network. Nature Machine Intelligence 5, 1326 (2023).
[4] Gu, Krenn, Interesting Scientific Idea Generation Using Knowledge Graphs and LLMs: Evaluations with 100 Research Group Leaders. arXiv:2405.17044 (2024).
Bio: Dr. Mario Krenn is the research group leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany. His work uses artificial intelligence to augment human creativity in scientific discovery, with a particular emphasis on quantum physics. Dr. Krenn has introduced AI systems that design quantum experiments and hardware, several of which have been realized in laboratories, and developed algorithms to inspire unconventional ideas in quantum technologies. His ERC Starting Grant project, ArtDisQ, aims to transform physics simulators to accelerate the discovery of advanced quantum hardware. He believes that understanding the qualities of great human scientists — such as creativity, curiosity, and the ability to uncover surprising insights — is essential for advancing the development of artificial scientists.

Sayash Kapoor, Princeton University
Wednesday February 12, 2025, 9am-10am CT
Title: Can AI automate science?
Abstract: The promise of AI has led to its rapid adoption across scientific fields. Companies have even promised to build AI agents that can automate all of science. In this talk, I will go over three reasons to temper the hype around AI use in automating science. First, existing AI adoption has been plagued by severe reproducibility failures that lead to overoptimistic results across dozens of fields. Second, while AI has been claimed to automate all of science, recent empirical work shows that current models fall well short of accomplishing far simpler tasks, such as reproducing a paper’s results even when the code and data are provided. Still, research tools like those for automating reproducibility are a promising avenue for improving the quality of scientific outputs. Third, even if AI can solve specific scientific tasks to help improve the quality of research, the far harder task is updating scientific epistemologies. While AI could play a meaningful role in improving scientific research, the uncritical embrace of AI risks undermining rather than advancing scientific progress.
Bio: Sayash Kapoor is a computer science PhD candidate at Princeton University, a Senior Fellow at Mozilla, and a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values. He is a coauthor of AI Snake Oil, one of Nature’s 10 best books of 2024. He has written for outlets such as WIRED and The Wall Street Journal, and his work has been featured in The New York Times, The Atlantic, The Washington Post, Bloomberg, and many others. Kapoor has been recognized with various awards, including a best paper award at ACM FAccT, an impact recognition award at ACM CSCW, and inclusion in TIME’s inaugural list of the 100 most influential people in AI.

Diyi Yang, Stanford University
Wednesday February 19, 2025, 9am-10am CT
Title: Enabling and Evaluating Human-AI Interaction
Abstract: Recent advances in large language models (LLMs) have revolutionized human-AI interaction, but their success depends on addressing key challenges such as privacy and effective collaboration. In this talk, we first share how language agents can empower humans to learn diverse social skills, such as listening and conflict resolution, demonstrating the societal impact of human-AI interaction. We then present PrivacyLens, a general framework for evaluating privacy leakage in LLM agents’ trajectories. By evaluating a variety of LLMs, PrivacyLens reveals contextual and long-tail privacy vulnerabilities. The last part introduces Co-Gym, a novel platform for studying human-agent collaboration. Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in multiple human-AI interaction tasks. Overall, this talk highlights how to develop AI systems that are trustworthy and capable of fostering meaningful collaboration with human users.
Bio: Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences.

Francesca Toni, Imperial College London
Wednesday February 26, 2025, 9am-10am CT
Title: Argumentative Explanations for Veracity-Checking
Abstract: AI has become pervasive in recent years, and the need for explainability is widely agreed to be crucial for the safe and trustworthy deployment of AI systems, especially given the plethora of opportunities for misinformation, hallucinations and malicious behaviour in data-driven AI. In this talk I will overview approaches based on computational argumentation for explaining veracity-checking in a number of incarnations, including for fact checking, for detecting scientific fraud, and for claim verification. I will advocate computational argumentation as ideally suited to support explainable veracity checking that can (1) interact to progressively explain outputs and/or reasoning as well as assess grounds for contestation provided by humans and/or other machines, and (2) revise decision-making processes to redress any issues successfully raised during contestation.
Bio: Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI (XAI) at the Department of Computing, Imperial College London, UK, as well as the founder and leader of the CLArg (Computational Logic and Argumentation) research group and of the Faculty of Engineering XAI Research Centre. She holds an ERC Advanced Grant on Argumentation-based Deep Interactive eXplanations (ADIX). Her research interests lie within the broad area of Explainable AI, at the intersection of Knowledge Representation and Reasoning, Machine Learning, Computational Argumentation, Argument Mining, and Multi-Agent Systems. She is a EurAI Fellow, an IJCAI Trustee, a member of the Board of Directors of KR Inc., a member of the editorial board of the Argument and Computation journal, an Editorial Advisor for Theory and Practice of Logic Programming, and an associate editor for the AI journal, as well as general chair for IJCAI 2026.

Haohan Wang, University of Illinois Urbana-Champaign
Wednesday March 5, 2025, 9am-10am CT
Title: Toward Agentic AI Scientist for Biomedical Discovery
Abstract: Recent advancements in machine learning have transformed the landscape of computational genomics, particularly in the identification of disease-associated genes from complex datasets. In this context, we introduce GenoAgent, an innovative AI-driven framework designed to accelerate scientific discovery in genomics by automating the identification of disease-associated genes from complex gene expression datasets. This framework utilizes Large Language Models (LLMs) to simulate roles traditionally filled by human experts, such as project managers, data engineers, and domain experts. These LLMs collaborate effectively, leveraging context-aware planning, iterative correction, and expert consultation to enhance the efficiency and scalability of research processes. GenoAgent reduces the dependency on extensive human expertise, thereby streamlining the exploration and analysis of genomics data. To evaluate and support the development of GenoAgent, we have curated the GenoTEX benchmark dataset. I will also introduce our recent advances in multi-agent research and prompt optimization that enable a future agentic AI scientist.
Bio: Haohan Wang is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign. His research focuses on the development of AI methods for computational biology and healthcare applications. In his work, he uses statistical analysis and deep learning methods, with an emphasis on data analysis using methods that are minimally influenced by spurious signals. Wang earned his PhD in computer science through the Language Technologies Institute of Carnegie Mellon University. In 2019, Wang was recognized as Next Generation in Biomedicine by the Broad Institute of MIT and Harvard for his contributions to addressing confounding factors with deep learning.

Daniel S. Weld, University of Washington
Wednesday March 26, 2025, 9am-10am CT
Title: Intelligence Augmentation for Scientific Researchers
Abstract: Recent advances in Artificial Intelligence are powering revolutionary interactive tools that will transform the very nature of the scientific enterprise, leading to increasingly automated scientific discovery. We describe several large-scale projects at the Allen Institute for AI aimed at developing open models, agentic platforms, and novel interactions that amplify the productivity of scientists and engineers.
Bio: Daniel S. Weld is Chief Scientist and General Manager of Semantic Scholar at the Allen Institute for Artificial Intelligence and Professor Emeritus at the University of Washington. After formative education at Phillips Academy, he received bachelor’s degrees in both Computer Science and Biochemistry from Yale University in 1982, and a Ph.D. from the MIT Artificial Intelligence Lab in 1988. He received a Presidential Young Investigator’s award in 1989 and an Office of Naval Research Young Investigator’s award in 1990, and he is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the Association for Computing Machinery (ACM). Dan was a founding editor of the Journal of AI Research, an area editor for the Journal of the ACM, and a member of the editorial board of the Artificial Intelligence journal. Weld is a Venture Partner at the Madrona Venture Group and has co-founded several companies, including Netbot (sold to Excite), AdRelevance (sold to Media Metrix), and Nimble Technology (sold to Actuate).

Marcel Binz, Institute for Human-Centered AI, Helmholtz Munich
Wednesday April 2, 2025, 9am-10am CT
Title: Foundation models of human cognition
Abstract: Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this – it is a unified system whose processes are deeply intertwined. In this talk, I will present my ongoing work on foundation models of human cognition: models that do not merely predict behavior in a single domain but instead offer a truly universal take on our mind. Furthermore, I will outline my vision for how to use such behaviorally predictive models to advance our understanding of human cognition, as well as how they can be scaled to naturalistic environments.
Bio: Dr. Marcel Binz is a research scientist and deputy head of the Institute for Human-Centered AI at Helmholtz Munich. His research employs state-of-the-art machine learning methods to uncover the fundamental principles behind human cognition. He believes that to get a full understanding of the human mind, it is vital to consider it as a whole and not just as the sum of its parts. His current research goal is therefore to establish foundation models of human cognition – models that do not merely simulate, predict, and explain human behavior in a single domain but instead offer a unified take on our mind.

Yang Zhang, University of Illinois Urbana-Champaign
Wednesday April 9, 2025, 9am-10am CT
Title: Human-AI Collaboration for Social Good: Collective Design, Calibration, and Interaction
Abstract: Human-AI Collaboration centers on designing collaborative frameworks that integrate Artificial Intelligence (AI), particularly Large Language Models (LLMs), with human intelligence (HI) to tackle complex, real-world problems in social contexts. Human intelligence contributes unique capabilities such as reasoning, problem-solving, abstract thinking, and the ability to learn from experience. These attributes provide valuable context, domain expertise, and human-centered insights essential for understanding the intricate social and environmental factors that shape societies. Conversely, AI excels at processing large-scale data, identifying latent patterns, and making predictions, offering scalability and computational power for addressing complex issues. Motivated by the complementary yet distinct strengths of AI and HI, my research is built upon three core thrusts: human-AI collaborative design, calibration, and interaction. The design thrust leverages human-in-the-loop mechanisms to optimize neural architectures and hyperparameters efficiently, focusing on adaptive solutions for resource-constrained scenarios such as disaster response and urban monitoring. The calibration thrust addresses fairness, robustness, and cross-domain generalization by integrating AI and human intelligence through collective intelligence frameworks, ensuring alignment with diverse and dynamic domain requirements. The interaction thrust fosters seamless collaboration between humans and AI, integrating small and large language models with HI to maintain explainability, interpretability, and responsiveness in high-stakes applications. In this talk, I will discuss the technical contributions of these three thrusts, highlighting their impact on addressing societal challenges through human-AI collaboration.
Bio: Yang Zhang is a Teaching Assistant Professor at the School of Information Sciences at the University of Illinois Urbana-Champaign (UIUC) and a senior researcher at UIUC’s Social Sensing and Intelligence Lab. He is also a faculty affiliate of Illinois Informatics at UIUC. Previously, he was a Postdoctoral Research Associate at UIUC and a W. J. Cody Research Associate at Argonne National Laboratory. Yang earned his Ph.D. in Computer Science & Engineering from the University of Notre Dame, an M.S. in Data Science from Indiana University Bloomington, and a B.S. in Software Engineering from Wuhan University. His research focuses on human-centered AI, human-AI collaboration, deep learning, and generative AI. He has authored over 80 peer-reviewed conference and journal papers published in top venues such as ACM CSCW, ACM Web Conference, AAAI, IJCAI, and IEEE BigData. His work has been recognized with prestigious honors, including the Outstanding Graduate Research Award from the University of Notre Dame and the W. J. Cody Research Associateship at Argonne National Laboratory.

Harlin Lee, University of North Carolina at Chapel Hill
Wednesday April 16, 2025, 9am-10am CT
Title: Generative Models in Three Healthcare Modalities
Abstract: In this talk, I will present several ongoing projects that leverage generative models across different aspects of healthcare. The first focuses on the analysis of electrophysiological signals—such as EEG, EOG, and respiratory data—in the context of pediatric sleep. The second explores the application of generative models to DNA sequences for identifying genetic risk factors associated with psychiatric disorders. Lastly, I will introduce new initiatives involving large language models (LLMs) aimed at supporting and accelerating clinical research at UNC.
Bio: Harlin Lee is an Assistant Professor at the School of Data Science and Society, University of North Carolina at Chapel Hill. She received degrees in electrical engineering and computer science from MIT, and additional degrees in machine learning and in electrical and computer engineering from CMU. She completed postdoctoral studies in applied math at UCLA. Her research interests include graphs, manifolds, optimal transport, nonconvex optimization, statistical signal processing, machine learning, and applications in healthcare.

Lucy Li, University of California Berkeley
Wednesday April 23, 2025, 9am-10am CT
Title: Language Models for People and Culture – From Pretraining to Application
Abstract: Given the widespread use of language models (LMs) today, it is imperative that we examine assumptions around their supposed “general-purpose”, one-size-fits-all nature. In this talk, I’ll discuss three research projects that share this underlying theme. First, I’ll show how various notions of text “quality” used during LM pretraining data curation can result in language from different social groups being filtered at disparate rates. Next, I’ll present findings from a crowdsourcing study, in which we surface a range of expectations around what people believe “fair” or “good” model behavior should look like. Then, I’ll discuss experiments in which we interrogate the extent to which large LMs can assist cultural analytics scholarship. I’ll conclude with some thoughts on what I believe will shape the broader AI community in the future.
Bio: Lucy Li is a PhD candidate at the University of California, Berkeley, affiliated with Berkeley AI Research and the School of Information. Her research intersects natural language processing with computational social science and digital humanities (e.g., cultural analytics). She has worked with Microsoft Research’s Fairness, Accountability, Transparency, and Ethics (FATE) team and the Allen Institute for AI, and has led collaborations with colleagues in education, psychology, and English literature. Her recognitions include EECS Rising Stars, Rising Stars in Data Science, an American Educational Research Association (AERA) Best Paper Award, and an NSF Graduate Research Fellowship.

Blessing Ogbuokiri, Brock University
Wednesday April 30, 2025, 9am-10am CT
Title: Trustworthy and Responsible Large Language Models: Principles, Pitfalls, and Progress
Abstract: Large Language Models (LLMs) like GPT-4, Gemini, and LLaMA are transforming how we interact with information, yet questions about their trustworthiness and responsible use remain critical. In this talk, we explore foundational principles for building and evaluating responsible LLMs, uncover common pitfalls in their development and deployment, and highlight recent progress toward more transparent, fair, and accountable AI systems. Drawing on both technical and ethical perspectives, the session will offer insights for researchers and practitioners working across computer science, information science, and interdisciplinary fields. Attendees will gain a deeper understanding of what it takes to build LLMs we can trust—and how to navigate the complex trade-offs involved.
Bio: Blessing Ogbuokiri is an Assistant Professor in the Department of Computer Science at Brock University, Canada, and Director of the Responsible and Applied Machine Learning Laboratory (RAML Lab). He was previously a postdoctoral fellow and instructor at York University’s Africa-Canada Artificial Intelligence and Data Innovation Consortium Lab in Toronto. He received his Ph.D. in Computer Science from the University of the Witwatersrand, South Africa. His research interests include Responsible AI, machine learning, NLP, and theoretical computing. His research focuses on the intersection of Responsible AI and health—from building models using machine learning algorithms to predict diseases to applying NLP techniques for sentiment analysis, text classification, and natural language understanding—to help communities and governments tackle infectious disease outbreaks. A recipient of the Black Scholar Research Grant (2025) and the Google DeepMind AI Award (2018), he collaborates widely across disciplines and served as a co-chair of the Affinity Workshops at NeurIPS 2023. He also organizes the Black in AI Workshop and developed the Responsible AI course at Brock University, reflecting his commitment to inclusive, equitable, and impactful AI systems. Visit him at BrockU.ca.