CIRSS Speaker Series, Fall 2025: The AI Disruption

The CIRSS speaker series continues in Fall 2025 with the new theme of “The AI Disruption.” Our speakers will discuss how recent advances in AI have reshaped their research — what has been made easier and what has become more difficult — and reflect upon its broader disruptive impact on society.

We meet 2-3 Wednesdays a month, 9am-10am US Central Time, on Zoom. This event is open to the public, and everyone is welcome to attend. The series is hosted by the Center for Informatics Research in Science and Scholarship (CIRSS) at the School of Information Sciences at the University of Illinois at Urbana-Champaign, and our Fall series is led by Yuanxi Fu and Timothy McPhillips. If you have any questions, please contact Timothy McPhillips.

Participate: To join a live talk, follow the “Join Here” link for the current week below to access the iSchool event page for the talk. From there, click the “PARTICIPATE online” button to join the live Zoom session. Recordings of past talks can be found via the “Recording” links below if available.

Follow: To receive updates on upcoming talks, subscribe to our CIRSS Seminars mailing list at https://lists.ischool.illinois.edu/lists/info/cirss-seminars. You can also subscribe to our event calendar via Google Calendar or Outlook to add talks to your own calendar.

Fall 2025 Speakers

Xuan Wang, Virginia Tech
Wednesday September 10, 2025, 9am-10am CT
Title: Towards Small, Open-Source, Multi-Modal Language Model Agents for Science and Society

Abstract: Recent advances in large language models have shown impressive capabilities across scientific domains and societal applications, but their size and proprietary nature often limit accessibility and reproducibility. In this talk, I will present our work on developing small, open-source, multi-modal language model agents that can reason, plan, and act in diverse scientific and societal contexts. I will discuss methods for designing small but highly effective language models, integrating multi-modal inputs, and coordinating multi-agent interactions to achieve complex tasks. I will also highlight real-world applications in science and society, emphasizing transparency, reproducibility, and accessibility.

Biosketch: Dr. Xuan Wang is an Assistant Professor in the Department of Computer Science at Virginia Tech. Her research interests are in natural language processing, data mining, AI for sciences, and AI for healthcare. She was a recipient of the NSF CAREER Award 2025, Nvidia Academic Grant 2025, Cisco Research Award 2025, NSF NAIRR Pilot Award 2024 – 2025, and NAACL Best Demo Paper Award 2021. She received a Ph.D. degree in Computer Science, an M.S. degree in Statistics, and an M.S. degree in Biochemistry from the University of Illinois Urbana-Champaign in 2022, 2017, and 2015, respectively, and a B.S. degree in Biological Science from Tsinghua University in 2013.

Pascal Hitzler, Kansas State University
Wednesday October 1, 2025, 9am-10am CT
Title: How Disruptive Is It Really?

Abstract: There are certainly disruptions happening in and out of AI at the moment. But how big are the disruptions really? How much of it is overhype? In this presentation, we will attempt to take a level-headed and long-term perspective, demystify some of AI, acknowledge clear advances, and highlight foundational obstacles in the path of further advancing AI that will likely not be fully overcome anytime soon.

Bio: Pascal Hitzler is University Distinguished Professor and endowed Lloyd T. Smith Creativity in Engineering Chair at the Department of Computer Science at Kansas State University, one of the Directors of the Institute for Digital Agriculture and Advanced Analytics (ID3A), and Director of the Center for Artificial Intelligence and Data Science (CAIDS). Until July 2019 he was endowed NCR Distinguished Professor, Brage Golding Distinguished Professor of Research, and Director of Data Science at the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio, U.S.A. He is director of the Data Semantics (DaSe) Lab. From 2004 to 2009, he was Akademischer Rat at the Institute for Applied Informatics and Formal Description Methods (AIFB) at the University of Karlsruhe in Germany, and from 2001 to 2004 he was postdoctoral researcher at the Artificial Intelligence institute at TU Dresden in Germany. In 2001 he obtained a PhD in Mathematics from the National University of Ireland, University College Cork, and in 1998 a Diplom (Master equivalent) in Mathematics from the University of Tübingen in Germany. His research record lists over 400 publications in such diverse areas as neurosymbolic artificial intelligence, semantic web, knowledge graphs, knowledge representation and reasoning, denotational semantics, and set-theoretic topology. He was founding Editor-in-chief of the Semantic Web journal, the leading journal in the field, and is founding Editor-in-chief of the new Neurosymbolic Artificial Intelligence journal. He is co-author of the W3C Recommendation OWL 2 Primer, and of the book Foundations of Semantic Web Technologies by CRC Press, 2010, which was named as one out of seven Outstanding Academic Titles 2010 in Information and Computer Science by the American Library Association’s Choice Magazine, and has translations into German and Chinese. 
He is a founding steering committee member of the Neural-Symbolic Learning and Reasoning Association. More information about him is available here.

Nihar B. Shah, Carnegie Mellon University
Wednesday October 15, 2025, 9am-10am CT
Title: LLMs in Science: The Good, The Bad and The Ugly

Abstract: As LLMs become increasingly integrated into academic workflows, their influence is both promising and precarious. In this talk, we will explore three facets of this evolving intersection.

  • The Good: LLMs executing aspects of peer review that are difficult for human reviewers.
  • The Bad: Vulnerabilities in the review process to fraud such as identity theft and collusion rings.
  • The Ugly: Methodological pitfalls of autonomous “AI scientists.”

Bio: Nihar B. Shah is an Associate Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU). His research is on the evaluation of science and the science of evaluation. His group develops computational tools with strong theoretical guarantees, and also designs and conducts controlled experiments for evidence-based policy design. His work has been used in the review of well over a hundred thousand papers and thousands of proposals. He is a recipient of the Young Alumnus Medal from the Indian Institute of Science, a JP Morgan faculty research award, a Google Research Scholar Award, an NSF CAREER Award 2020-25, the 2017 David J. Sakrison memorial prize from EECS Berkeley for a “truly outstanding and innovative PhD thesis”, the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, and several Best Paper Awards.

Rishi Bommasani, Stanford University
Wednesday October 22, 2025, 9am-10am CT
Title: Technocratic AI Policy

Abstract: AI will have a better impact on society if it is responsibly governed. In this talk, I will argue that designing wise policy for emerging technologies like AI hinges on building a strong bilateral relationship between AI researchers and AI policymakers. To that end, I will discuss a collection of efforts that have led to some of the key outcomes in global AI policy. I hope this talk will offer a fuller vision for how academic AI researchers can achieve large-scale impact and advance better societal outcomes.

Bio: Rishi Bommasani is a Senior Research Scholar at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), where he researches the societal and economic impact of AI. Rishi received his PhD from Stanford CS, advised by Percy Liang and Dan Jurafsky and supported by the Stanford Lieberman Fellowship and NSF Graduate Research Fellowship. His research has won multiple paper awards and has been featured in Science, Nature, the New York Times, the Wall Street Journal, and the Washington Post. He has led important industrial AI policy efforts such as the annual Foundation Model Transparency Index, the California Report on Frontier AI Policy, and the EU AI Act Code of Practice.

Jeff Clune, University of British Columbia
Wednesday October 29, 2025, 11am-noon CT (note the later time)
Title: Open-Ended, Quality Diversity, and AI-Generating Algorithms in the Era of Foundation Models

Abstract: Foundation models (e.g. large language models) create exciting new opportunities in our longstanding quests to produce open-ended and AI-generating algorithms, wherein agents can truly keep innovating and learning forever. In this talk, I will introduce quality diversity, open-ended, and AI-generating algorithms and share some of our recent work harnessing the power of foundation models to unleash their potential. I will cover our recent work including OMNI (Open-endedness via Models of human Notions of Interestingness), Video Pre-Training (VPT), Automatically Designing Agentic Systems (ADAS), the Darwin Gödel Machine, and The AI Scientist.

Bio: Jeff Clune is a Professor of Computer Science at the University of British Columbia, a Canada CIFAR AI Chair at the Vector Institute, and a Senior Research Advisor at DeepMind. Jeff focuses on deep learning, including deep reinforcement learning. Previously he was a research manager at OpenAI, a Senior Research Manager and founding member of Uber AI Labs (formed after Uber acquired a startup he helped lead), the Harris Associate Professor in Computer Science at the University of Wyoming, and a Research Scientist at Cornell University. He received degrees from Michigan State University (PhD, master’s) and the University of Michigan (bachelor’s). More on Jeff’s research can be found at JeffClune.com or on X (@jeffclune). He has won the Presidential Early Career Award for Scientists and Engineers from the White House, had two papers in Nature, one in Science, and one in PNAS, won an NSF CAREER award, received multiple Outstanding Paper of the Decade and Distinguished Young Investigator awards, a Test of Time award, and had best paper awards, oral presentations, and invited talks at the top machine learning conferences (NeurIPS, CVPR, ICLR, and ICML). His research is regularly covered in the world’s top press outlets.

Chaowei Xiao, Johns Hopkins University
Wednesday November 5, 2025, 9am-10am CT
Title: Towards Secure and Safe AI Agents: From Model to System

Abstract: Immense efforts are underway to align AI with human values and ensure its responsible use. Yet a profound question remains: is AI truly safe? In this talk, I will share our Dual Pathways Principle that unites the model and system perspectives to build secure and safe AI agents. I will introduce our recent work, which integrates security and human-centric principles to build secure and safe AI. Then, I will discuss why ensuring AI safety demands a system-level approach and present our security-by-design approaches for building secure and safe AI agents. Combining them together, I aim to lay out a pathway toward secure and safe AI agents.

Bio: Chaowei Xiao is an Assistant Professor at Johns Hopkins University and a Researcher at NVIDIA. His research focuses on building next-generation secure and trustworthy AI and Agents. He has received several prestigious honors, including the Schmidt Science AI2050 Early Career Award, the Argonne National Lab Impact Award, and multiple industry faculty awards from Amazon and Apple. His work has won various awards including the USENIX Security Distinguished Paper Award (2024), MobiCom Best Paper Award (2014), EWSN Best Paper Award (2021), ACM Gordon Bell Prize Finalist (2024) and Bell Special Prize (2023). His research has been cited around 20,000 times and featured in leading media outlets including Nature, Wired, Fortune, and The New York Times. He also holds multiple patents, and his research has been exhibited at the London Science Museum. Before joining JHU, he was an Assistant Professor at the University of Wisconsin–Madison. His group at JHU has multiple PhD, postdoc and intern openings. Interested applicants are encouraged to contact him.

Nataliya Kosmyna, MIT Media Lab

Abstract: For millions of years, human intelligence set the standard. But now, the lightning pace of tech has left us gasping, struggling to keep up with our own cognitive demands. AI has pushed civilization into overdrive, yet what we are ultimately doing is burning terawatts of power on data centers and excluding humans from this growth. We have built systems that are prefixed ‘smart’, but not smart enough to break free from their own inefficiency.

In this talk, Dr. Nataliya Kosmyna will argue that we need to start creating more seamless AI interfacing directly with our brains, achieving the same outcomes with the brain’s energy consumption levels.

Technology should amplify our creativity, not snuff it out. It should fuel social interactions, not isolate us. The goal is not to replace human thought, but to propel us into a Type II civilization.

Instead, we are trapped in a dystopian remix of 1984 — 2025’s version — where digital censorship and surveillance threaten to choke innovation in nations that refuse to play along.

This talk will explore critical questions: What should define ownership in the age of AI and at what cost?

It is time to reclaim the conversation — because true evolution should never be about creating more artificial intelligence. It is about evolving the most powerful source of intelligence: Your Mind.

Bio: Dr. Kosmyna is a Research Scientist at MIT Media Lab’s Fluid Interfaces group and a Visiting Faculty Researcher at Google. She has over 15 years of experience in developing and designing end-to-end brain-computer interfaces (BCIs). Coming from a background in artificial intelligence, neuroscience and human-computer interaction (HCI), she is passionate about the idea of creating a partnership between AI and human intelligence, a fusion of the machine with the human brain.

Nataliya obtained her Ph.D. in 2015 in the domain of non-invasive Brain-Computer Interfaces (BCIs). Most of her projects focus on BCIs in the context of consumer-grade applications. Nataliya is a public speaker, the author of multiple research papers, and a reviewer for numerous professional journals and conferences. Dr. Kosmyna often collaborates with teams from Boston Dynamics, Microsoft Research, and NASA. Additional information about Nataliya’s work is available on her MIT people page.

Katie Atkinson, University of Liverpool
Wednesday November 19, 2025, 9am-10am CT
Title: Which legal tasks can be, should be, and should not be, undertaken by AI?

Abstract: There is a wealth of academic literature on the topic of AI and law, but it is only in recent years that AI has started to be deployed in legal work in practice. Some legal tasks are well suited to automation through the use of AI tools, whilst other tasks are more challenging for AI, and importantly, legal AI applications need to be demonstrated to be trustworthy, in order to support confident deployment. In this talk, I will showcase work from the field of AI and law directed at supporting several different legal tasks, before focussing specifically on explainable approaches to legal decision-support that involve modelling the arguments featuring in reasoning about legal cases. I will cover examples of the use of AI in real world legal applications and discuss how both the private and public sectors’ legal work is being transformed through the capabilities offered by AI and law research and technologies. I will also be highlighting the challenges this transformation poses for governance of the use of AI in legal work and the regulatory responses being developed.

Bio: Katie Atkinson is Professor of Computer Science, Associate Pro-Vice-Chancellor and Director of the Interdisciplinary Centre for Sustainability Research at the University of Liverpool, UK. She has been conducting foundational and interdisciplinary research on artificial intelligence for over 20 years, with her key areas of research being in the fields of computational models of argument, AI and Law, and AI for chemistry. Katie has published over two hundred articles in peer-reviewed conference proceedings and journals, and has also applied her work in a variety of industrial projects with large and small law firms. In 2016-2017 Katie served as President of the International Association for AI and Law; since 2020 she has served as a member of the Lawtech UK Panel; and in 2024 she was appointed to the Artificial Intelligence Advisory Board for the European Commission for the Efficiency of Justice. She also holds the roles of Co-Editor-in-Chief of the Artificial Intelligence and Law journal and President of The Foundation for Legal Knowledge-Based Systems (JURIX).

Karen Levy, Cornell University

Abstract: In this talk I’ll discuss some recent projects on AI deployments in a variety of workplaces, including the long-haul trucking industry and brick-and-mortar retail. While discussions about AI and work often focus on worker displacement and deskilling, I argue for a simultaneous focus on the preservation of quality and dignity in AI-impacted workplaces, as well as attention to the managerial and relational impacts of AI across work environments.

Bio: Karen Levy is an associate professor of Information Science at Cornell University and associated faculty at Cornell Law School. She is the author of Data Driven: Truckers, Technology, and the New Workplace Surveillance (Princeton University Press 2023). Levy is a New America Fellow and a Fellow of the Canadian Institute for Advanced Research’s program on Innovation, Equity, and the Future of Prosperity.