CS Colloquium (BMAC)
 

The Department of Computer Science at Colorado State University, in cooperation with ISTeC (Information Science and Technology Center), offers the CS Colloquium series as a service to all who are interested in computer science. When in-person meetings are possible, most seminars are scheduled for Mondays, 11:00AM -- 11:50AM, in CSB 130 or the Morgan Library Event Hall. For help finding the locations of our seminar meetings, consult the online CSU campus map.

For questions about this page or to schedule talks, please contact Sudipto Ghosh (sudipto.ghosh AT colostate dot edu). Here is a list of past seminar schedules.

CS501 information for students is available directly on Canvas.

 

Upcoming Events





CS Colloquium Schedule, Fall 2024



August
19

Computer Science Department Colloquium
Introduction to the Graduate Program

Speaker: Sanjay Rajopadhye, Professor and Graduate Director, Computer Science Department

When: 11:00AM – 11:50AM, Monday August 19, 2024
Where: CSB 130 map

Abstract: Dr. Rajopadhye introduces the Computer Science graduate program at CSU.




August
26

Computer Science Department Colloquium
Lightning Talks -- Round 1

Speaker: CS Faculty: Chuck Anderson, Darrell Whitley, Mohammed Safayet Arefin, Nate Blanchard, Nikhil Krishnaswamy, Ravi Mangal, Sangmi Pallickara, Shrideep Pallickara

When: 11:00AM – 11:50AM, Monday August 26, 2024
Where: CSB 130 map

Abstract: Faculty present brief talks on their research activities.




September
9

Computer Science Department Colloquium
Lightning Talks -- Round 2

Speaker: CS Faculty: Bianca Trinkenreich, Craig Partridge, Ewan Davies, Indrajit Ray, Indrakshi Ray, Sudeep Pasricha, Sanjay Rajopadhye, Vinayak Prabhu, Fabio de Abreu Santos

When: 11:00AM – 11:50AM, Monday September 9, 2024
Where: CSB 130 map

Abstract: Faculty present brief talks on their research activities.




September
16

Computer Science Department Colloquium
Lightning Talks -- Round 3

Speaker: CS Faculty: Louis-Noel Pouchet, Asa Ben-Hur, Bruce Draper, Francisco Ortega, Marcia Moraes, Sarath Sreedharan, Sudipto Ghosh, Yashwant Malaiya

When: 11:00AM – 11:50AM, Monday September 16, 2024
Where: CSB 130 map

Abstract: Faculty present brief talks on their research activities.




September
23

Computer Science Department Colloquium
Human-Aware AI – A Foundational Framework for Human-AI Interaction

Speaker: Sarath Sreedharan, Assistant Professor of Computer Science, Colorado State University

When: 11:00AM – 11:50AM, Monday September 23, 2024
Where: CSB 130 map

Abstract: We are living through a revolutionary moment in AI history. We are seeing the development of impressive new AI systems at a rate that was unimaginable just a few years ago. However, AI's true potential to transform society remains unrealized, in no small part due to the inability of current systems to work effectively with people. A major hurdle to achieving such coordination is the inherent asymmetry between the AI system and its users. In this talk, I will discuss how the framework of Human-Aware AI (HAAI) provides us with the tools required to bridge this gap and support fluent and intuitive coordination between the AI system and its users. We will discuss how HAAI, a framework originally developed to model explanatory dialogue, has since been shown to be capable of modeling and addressing diverse challenges associated with human-AI interaction. In particular, we will look at how HAAI can be used to achieve value alignment, calibrate user trust, provide effective assistance, and even generate deceptive behavior.

Bio: Sarath Sreedharan is an Assistant Professor at Colorado State University. His core research interests include designing human-aware decision-making systems that generate behaviors aligned with human expectations. He completed his Ph.D. at Arizona State University, where his doctoral dissertation received one of the 2022 Dean’s Dissertation Awards for the Ira A. Fulton Schools of Engineering and an Honorable Mention for the ICAPS-23 Outstanding Dissertation Award. His research has been published in premier conferences, including AAAI, ICAPS, IJCAI, AAMAS, IROS, HRI, ICRA, ICML, ICLR, and NeurIPS, and in journals such as AIJ and AI Magazine. He has presented tutorials on his research at various forums and is the lead author of a Morgan & Claypool monograph on explainable human-AI interactions. He was selected as a DARPA Riser Scholar for 2022 and a Highlighted New Faculty at AAAI-23. His research has won multiple awards, including the Best System Demo and Exhibit Award at ICAPS-20 and the Best Paper Award at the Bridging Planning & RL workshop at ICAPS 2022. He was also recognized as an AAAI-20 Outstanding Program Committee Member, a Highlighted Reviewer at ICLR 2022, a Distinguished Program Committee Member at IJCAI 2022 and 2023, and a Top Reviewer at NeurIPS 2022.




September
30

Computer Science Department Colloquium
Concept-based Formal Analysis of Neural Networks via Vision-Language Models

Speaker: Ravi Mangal, Assistant Professor of Computer Science, Colorado State University

When: 11:00AM – 11:50AM, Monday September 30, 2024
Where: CSB 130 map

Abstract: As deep neural networks (DNNs) demonstrate growing capabilities to solve complex tasks, there is a push to incorporate them as components in software and cyber-physical systems. To reap the benefits of these learning-enabled systems without propagating harms, there is an urgent need to develop tools and methodologies for evaluating their safety. Formal methods are a powerful set of tools, rooted in formal logic, for analyzing behaviors of software systems. However, formal analysis of learning-enabled systems is challenging—DNNs are notoriously difficult to interpret and lack logical specifications, the environments in which these systems operate can be difficult to model mathematically, and existing formal methods do not scale to these complex systems.

In this talk, I will focus on addressing the challenges in interpreting, specifying, and formally verifying DNN behavior. First, I will present a logical specification language designed to facilitate writing specifications about vision-based DNNs in terms of high-level, human-understandable concepts. I will then demonstrate how we can leverage vision-language models such as CLIP to encode these specifications and to design an efficient procedure for verifying vision models with respect to these specifications. I will conclude by describing some open problems.

Bio: Ravi Mangal is an assistant professor in the Department of Computer Science at Colorado State University. He is interested in all aspects of designing and applying formal methods for assuring the correctness and safety of software systems. His current research focuses on developing formal methods for Trustworthy Machine Learning, i.e., for safety, robustness, and explainability analysis of machine learning models, as well as formal safety analysis of systems with such learning-enabled components. Previously, he was a postdoctoral researcher at Carnegie Mellon University in the Security and Privacy Institute (CyLab), and before that, he graduated with a PhD in Computer Science from the Georgia Institute of Technology.




October
7

ISTeC Distinguished Lecture, in conjunction with the Department of Computer Science and Department of Electrical and Computer Engineering Seminar Series
Compositional Verification and Run-time Monitoring for Learning-Enabled Autonomous Systems

Speaker: Corina Pasareanu, Principal Scientist (CMU CyLab), Technical Professional Leader -- Data Science (NASA Ames/KBR)

When: 11:00AM – 11:50AM, Monday October 7, 2024
Where: LSC Room 386 map

Abstract: Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size, lack of formal specifications, and sensitivity to small changes in the environment. We present compositional techniques for the formal verification of safety properties of such autonomous systems. The main idea is to abstract the hard-to-analyze components of the autonomous system, such as DNN-based perception and environmental dynamics, with either probabilistic or worst-case abstractions. This makes the system amenable to formal analysis using off-the-shelf model checking tools, enabling the derivation of specifications for the behavior of the abstracted components such that system safety is guaranteed. We also discuss how the derived specifications can be used as run-time monitors deployed on the DNN outputs. We illustrate these ideas in a case study from the autonomous airplane domain.

Bio: Corina Pasareanu is an ACM Fellow and an IEEE ASE Fellow, working at NASA Ames. She is affiliated with KBR and Carnegie Mellon University's CyLab. Her research interests include model checking, symbolic execution, compositional verification, probabilistic software analysis, autonomy, and security. She is the recipient of several awards, including the ETAPS Test of Time Award (2021), ASE Most Influential Paper Award (2018), ESEC/FSE Test of Time Award (2018), ISSTA Retrospective Impact Paper Award (2018), ACM Impact Paper Award (2010), and ICSE 2010 Most Influential Paper Award. She has served as Program/General Chair for several conferences, including ICSE 2025, SEFM 2021, FM 2021, ICST 2020, ISSTA 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She is on the steering committees for the ICSE, TACAS, and ISSTA conferences. She is currently an associate editor for IEEE TSE and for STTT, Springer Nature.




October
8

Sponsored by Colorado State University’s Information Science and Technology Center (ISTeC), in conjunction with the Department of Computer Science and Department of Electrical and Computer Engineering Seminar Series
Attacks and Defenses for Large Language Models on Coding Tasks

Speaker: Corina Pasareanu, Principal Scientist (CMU CyLab), Technical Professional Leader -- Data Science (NASA Ames/KBR)

When: 10:00AM – 10:50AM, Tuesday October 8, 2024
Where: CSB 130 map

Abstract: Modern large language models (LLMs), such as ChatGPT, have demonstrated impressive capabilities for coding tasks, including writing and reasoning about code. They improve upon previous neural network models of code, such as code2seq or seq2seq, that already demonstrated competitive results when performing tasks such as code summarization and identifying code vulnerabilities. However, these previous code models were shown vulnerable to adversarial examples, i.e., small syntactic perturbations designed to “fool” the models. In this talk we discuss the transferability of adversarial examples, generated through white-box attacks on smaller code models, to LLMs. Further, we propose novel cost-effective techniques to defend LLMs against such adversaries via prompting, without incurring the cost of retraining. Our experiments show the effectiveness of the attacks and the proposed defenses on popular LLMs.

Bio: Corina Pasareanu is an ACM Fellow and an IEEE ASE Fellow, working at NASA Ames. She is affiliated with KBR and Carnegie Mellon University's CyLab. Her research interests include model checking, symbolic execution, compositional verification, probabilistic software analysis, autonomy, and security. She is the recipient of several awards, including the ETAPS Test of Time Award (2021), ASE Most Influential Paper Award (2018), ESEC/FSE Test of Time Award (2018), ISSTA Retrospective Impact Paper Award (2018), ACM Impact Paper Award (2010), and ICSE 2010 Most Influential Paper Award. She has served as Program/General Chair for several conferences, including ICSE 2025, SEFM 2021, FM 2021, ICST 2020, ISSTA 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She is on the steering committees for the ICSE, TACAS, and ISSTA conferences. She is currently an associate editor for IEEE TSE and for STTT, Springer Nature.




October
14

ISTeC Distinguished Lecture and Computer Science Department Colloquium
The Incredible Machine: Developer Productivity and the Impact of AI on Productivity

Speaker: Thomas Zimmermann, Sr. Principal Researcher, Microsoft Research

When: 11:00AM – 11:50AM, Monday October 14, 2024
Where: LSC Room 386 map

Abstract: Developer productivity is about more than an individual’s activity levels or the efficiency of the engineering systems, and it cannot be measured by a single metric or dimension. In this talk, I will discuss a decade of my productivity research. I will show how to use the SPACE framework to measure developer productivity across multiple dimensions to better understand productivity in practice. I will also discuss common myths around developer productivity and propose a collection of sample metrics to navigate around those pitfalls. Measuring developer productivity at Microsoft has allowed us to build new insights about the challenges remote work has introduced for software engineers, and how to overcome many of those challenges moving forward into a new future of work. Finally, I will talk about how I expect that the AI revolution will change developers and their productivity.

Bio: Thomas Zimmermann is a Sr. Principal Researcher at Microsoft, where he works on cutting-edge research and innovation in data science, machine learning, software engineering, and digital games. He has over 15 years of experience in the field, with more than 100 publications that have been cited over 25,000 times. His research mission is to empower software developers and organizations to build better software and services with AI. He is best known for his pioneering work on systematic mining of software repositories and his empirical studies of software development in industry. He has contributed to several Microsoft products and tools, such as Visual Studio, GitHub, and Xbox. He is an ACM Fellow, an IEEE Fellow, recipient of the IEEE TCSE Edward J. McCluskey Technical Achievement award, and Co-Editor in Chief of the Empirical Software Engineering journal. https://thomas-zimmermann.com/




October
15

Computer Science Department Colloquium, sponsored by ISTeC
The Lord of the Models: The Fellowship of Trust in AI

Speaker: Thomas Zimmermann, Sr. Principal Researcher, Microsoft Research

When: 10:00AM – 10:50AM, Tuesday October 15, 2024
Where: CSB 130 map

Abstract: In the realm of software, an AI revolution is afoot, transforming how we create and consume our digital world. In this talk, I shall share initial observations on the evolution of software engineering and AI’s profound impact on developers. Like the forging of powerful artifacts, AI-driven tools are reshaping development processes, bringing unprecedented efficiencies yet also presenting new trials. Central to this grand transformation is the vital role of trust in AI-based software tools. Understanding and nurturing this trust is paramount for their successful adoption and integration. Moreover, I will reveal why the research community stands as a pivotal fellowship in this epic journey, guiding us through the challenges and triumphs of the AI age. Join us as we embark on this transformative quest, bridging trust and innovation in the dawn of AI and software engineering. (This text has been rephrased by the author using ChatGPT to reflect a different style while maintaining the original meaning and contents.)

Bio: Thomas Zimmermann is a Sr. Principal Researcher at Microsoft, where he works on cutting-edge research and innovation in data science, machine learning, software engineering, and digital games. He has over 15 years of experience in the field, with more than 100 publications that have been cited over 25,000 times. His research mission is to empower software developers and organizations to build better software and services with AI. He is best known for his pioneering work on systematic mining of software repositories and his empirical studies of software development in industry. He has contributed to several Microsoft products and tools, such as Visual Studio, GitHub, and Xbox. He is an ACM Fellow, an IEEE Fellow, recipient of the IEEE TCSE Edward J. McCluskey Technical Achievement award, and Co-Editor in Chief of the Empirical Software Engineering journal. https://thomas-zimmermann.com/




October
21

Computer Science Department Colloquium
Sampling Colorings with Markov Chains

Speaker: Charlie Carlson, Postdoc at the University of California Santa Barbara

When: 11:00AM – 11:50AM, Monday October 21, 2024
Where: CSB 130 map

Abstract: We review the history of sampling random k-colorings with and without Markov chains. We also present a new result for sampling random k-colorings in graphs with maximum degree ∆; our results hold without any further assumptions on the graph and are stronger than all previous results for general graphs. This talk is based on a paper with Eric Vigoda that will appear at the Symposium on Discrete Algorithms in 2025.

The Glauber dynamics is a simple single-site update Markov chain. Jerrum (1995) proved an optimal O(n log n) mixing time bound for the Glauber dynamics whenever k > 2∆, where ∆ is the maximum degree of the input graph. This bound was improved by Vigoda (1999) to k > (11/6)∆ using a “flip” dynamics which recolors (small) maximal 2-colored components in each step. Vigoda’s result was the best known for general graphs for 20 years, until Chen et al. (2019) established optimal mixing of the flip dynamics for k > (11/6 − ε)∆ where ε ≈ 10⁻⁵. In this talk we present the first substantial improvement over these results. We prove an optimal mixing time bound of O(n log n) for the flip dynamics when k ≥ 1.809∆. This yields, through recent spectral independence results, an optimal O(n log n) mixing time for the Glauber dynamics for the same range of k/∆ when ∆ = O(1). Our proof utilizes path coupling with a simple weighted Hamming distance for “unblocked” neighbors.
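The single-site update rule described above is simple enough to sketch directly. The following Python snippet is an illustrative sketch of the basic Glauber dynamics only (function names, the greedy initialization, and the example graph are my own assumptions, not code from the talk or paper): pick a uniformly random vertex and recolor it with a uniformly random color not used by any neighbor. For k ≥ ∆ + 1 such a color always exists, and for k > 2∆ the chain mixes rapidly.

```python
import random

def glauber_step(graph, coloring, k, rng=random):
    """One Glauber update: pick a uniformly random vertex and recolor it
    with a uniformly random color not appearing on any of its neighbors.
    `graph` is an adjacency list; `coloring` is modified in place."""
    v = rng.randrange(len(graph))
    forbidden = {coloring[u] for u in graph[v]}
    allowed = [c for c in range(k) if c not in forbidden]  # nonempty when k >= max degree + 1
    coloring[v] = rng.choice(allowed)

def sample_coloring(graph, k, steps, seed=0):
    """Run the chain for `steps` updates, starting from a greedy proper coloring."""
    rng = random.Random(seed)
    coloring = []
    for v in range(len(graph)):
        used = {coloring[u] for u in graph[v] if u < v}
        coloring.append(next(c for c in range(k) if c not in used))
    for _ in range(steps):
        glauber_step(graph, coloring, k, rng)
    return coloring
```

Since every update chooses an allowed color, the coloring stays proper throughout; for example, on a 5-cycle (∆ = 2) with k = 5 > 2∆ colors, a few thousand steps yield an approximately uniform random proper 5-coloring.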

Bio: Charlie Carlson is a postdoc at the University of California Santa Barbara. She is interested in many different topics in theoretical computer science and mathematics, such as approximate counting, spectral graph theory, randomized algorithms, and combinatorial optimization. She is currently interested in new methods for analyzing the mixing time of Markov chains. In January she will start as a postdoc at the Simons Laufer Mathematical Sciences Institute. Before her postdoc at Santa Barbara, Charlie graduated with a PhD in computer science from the University of Colorado Boulder, where she was advised by Alexandra Kolla.




October
28

Computer Science Department Colloquium
Mixed-Reality Decision Support for Human-Machine Teaming

Speaker: Bradley Hayes, Associate Professor of Computer Science, University of Colorado Boulder

When: 11:00AM – 11:50AM, Monday October 28, 2024
Where: CSB 130 map

Abstract: Clear and frequent communication is a foundational aspect of collaboration. Effective communication not only enables and sustains the shared situational awareness necessary for adaptation and coordination, but is often a requirement given the opaque nature of decision-making in autonomous systems. In this talk I will share my lab's recent work using mixed reality at the intersection of human-machine communication and human-aware optimization under partially observable conditions. Through these advances we are able to realize human-autonomy teams that are greater than the sum of their parts, enabling autonomous systems to operationalize psychological insights about human cognition for effective communication and to distill and disseminate knowledge from machine learning models for real-time multimodal decision support. 

Bio: Bradley Hayes is an Associate Professor of Computer Science at the University of Colorado Boulder, where he directs the Collaborative AI and Robotics (CAIRO) Lab. Brad's research exists at the intersection of Explainable AI and Human-Robot Interaction, developing techniques to create and validate autonomous systems that learn from, teach, and collaborate with humans to improve efficiency, safety, and capability. His work has been recognized with best paper nominations and awards from the University of Colorado Boulder, the ACM/IEEE International Conference on Human-Robot Interaction, the International Conference on Autonomous Agents and Multi-Agent Systems, and the IEEE International Symposium on Robot and Human Interactive Communication. Prior to joining the faculty at CU Boulder, Brad conducted research on the algorithmic foundations of human-robot interaction at the Yale Social Robotics Lab and the Massachusetts Institute of Technology Interactive Robotics Group.




November
4

Computer Science Department Colloquium
Towards Building Adaptive Models for Autonomous Cyber Defense

Speaker: Aritran Piplai, Assistant Professor, University of Texas at El Paso

When: 11:00AM – 11:50AM, Monday November 4, 2024
Where: CSB 130 map

Abstract: Traditional rule-based and supervised learning methods rely on historical signatures to detect cyber-attacks. While modern AI models excel at generalizing and identifying novel threats from large datasets, a key challenge remains: detecting these threats rapidly when data is limited. In this talk, I will present our previous work in which we demonstrated how reinforcement learning (RL) can adapt attack detection policies by integrating descriptions of novel attacks into a knowledge graph (KG). This approach helped enhance detection accuracy, even in data-scarce environments, by dynamically adjusting strategies based on evolving attack descriptions. The results were promising, enabling the identification of unseen threats and better adaptation to changing attack landscapes.

However, challenges persist in keeping up with rapidly evolving malware and attack patterns. To address these, I will briefly explore new research directions, including meta-learning, which shows promise for enhancing detection in few-shot learning tasks by leveraging similarities between known and novel attack types. Yet, its application in cybersecurity is complicated by the significant differences between novel and previously observed attacks. I will discuss the challenges and potential solutions for addressing rapidly evolving cyber threats.

Bio: Dr. Piplai is an Assistant Professor at the University of Texas at El Paso, specializing in automated cyber defenses, cybersecurity knowledge graphs, cyber threat intelligence, reinforcement learning, and adversarial learning. He holds a Ph.D. in computer science from UMBC and a bachelor’s degree in computer science and engineering from Jadavpur University, India. Dr. Piplai has served on program committees for major conferences such as ICML, AAAI, EMNLP, and the International Conference on Big Data. He was also the session chair for the IEEE International Conference on Machine Learning Applications in 2022. He also has industry experience at Amazon Science and Samsung Research, where he has worked on large volumes of cybersecurity data. At UTEP, his research focuses on detecting evolving cyber-threats and malware by leveraging large language models (LLMs), natural language processing, and knowledge graphs to generate textual descriptions of cyber-attacks. He then uses this information to guide downstream machine learning and reinforcement learning models, building adaptive systems for cyber defenses and malware detection.




November
5

Computer Science Department Q&A Session
Ask me Anything about the Linux Kernel community and mentorships

Speaker: Shuah Khan, Linux Kernel Maintainer & Fellow at The Linux Foundation

When: 11:00AM – 12:15PM, Tuesday November 5, 2024
Where: EDDY 103 map

Abstract:

Bio: Shuah Khan is an experienced Linux Kernel developer, maintainer, and contributor, and the author of A Beginner’s Guide to Linux Kernel Development (LFD103). She leads the Mentorship program aimed at increasing diversity in open source and providing equitable access to learning resources, and serves on the Linux Kernel Code of Conduct committee and the Linux Foundation Technical Advisory Board.




November
11

Computer Science Department Colloquium
The future of computer vision: A retrospective of the computer vision lab and a roadmap for the future

Speaker: Nathaniel Blanchard, Assistant Professor of Computer Science, Colorado State University

When: 11:00AM – 11:50AM, Monday November 11, 2024
Where: CSB 130 map

Abstract: Over the past five years, the field of computer vision has evolved rapidly, and the computer vision lab at CSU has evolved with it. The lab produces state-of-the-art computer vision work, but its measure of success is not merely performance on a dataset: successful computer vision work requires vision systems that enable or enhance real-world use. I argue that evaluations of these systems should therefore center on measuring vision's positive impact on applications. In this talk, I discuss the lab's breakthroughs and contextualize those successes in how they further the fields of AI for education, affective computing, and environmental sciences. Examples include 6D pose for group-centric learning, internal mental-state detection for learner modeling, and smoke opacity detection for environmental monitoring. I conclude with a roadmap for the lab's next five years and beyond: establishing a gold standard for state-of-the-art user-focused systems.




November
18

Computer Science Department Colloquium
Addressing the Spiritual Crisis of Modern Society through Online Spiritual Care

Speaker: C. Estelle Smith, Assistant Professor, Department of Computer Science, Colorado School of Mines

When: 11:00AM – 11:50AM, Monday November 18, 2024
Where: CSB 130 map

Abstract: Spiritual care is a vital form of care that—surprisingly—has far less to do with religion than one might initially assume. As recently as 10–20 years ago, mental health was a taboo topic, yet modern society has become increasingly aware of dramatic incidence rates of mental illness across the world, while technology innovators and Human-Computer Interaction researchers have embraced the opportunity to design and study technologies that impact mental health. In a similar way, there is today virtually no discussion of spiritual crises in public discourse. Yet millions of people are leaving organized religion across the Western world, while a simultaneous epidemic of loneliness and isolation has become a serious public health concern. Numerous traumatic events continue to rock our world, fragment society, and invoke a state of widespread spiritual crisis, yet many people do not know where to turn for support, how to cope, or how to talk about it openly without fear of stigmatization. In this seminar, Professor Estelle Smith will introduce the basic tenets of professional spiritual care by explaining what it is, what it is not, and how it complements and improves other forms of clinical care. She will then synthesize insights from her ongoing research into online support communities situated across platforms like Reddit and CaringBridge, including empirical results on how these platforms already mediate some forms of spiritual care, as well as design directions for improving their governance and UI/UX for better spiritual health. Computer scientists and technology designers can and must do better to build technologies that foster healthy human societies; spiritual care offers a powerful lens for serving this goal and healing our communal wounds moving forward.

Bio: C. Estelle Smith, Ph.D., is an Assistant Professor in the Department of Computer Science at the Colorado School of Mines. Dr. Smith is developing a new interdisciplinary research area at the intersection of Human-Computer Interaction (HCI) and Spiritual Care (see bit.ly/sacredtech). Her work focuses on improving the spiritual health and wellbeing of patients, caregivers, and online community users by understanding and supporting the effective governance and UI/UX design of online spaces. With numerous publications and paper awards at venues like CSCW, CHI, and TOCHI, Dr. Smith regularly collaborates with online communities across platforms like Reddit, CaringBridge, and Wikipedia.




December
2

Computer Science Department Colloquium
Privacy-preserving AI: Challenges and Approaches

Speaker: James B. D. Joshi, Professor, DINS, School of Computing and Information, University of Pittsburgh

When: 11:00AM – 11:50AM, Monday December 2, 2024
Where: CSB 130 map

Abstract: The age of AI is upon us, presenting unprecedented opportunities to solve societal problems and accelerate scientific discovery and innovation. At the same time, AI may be used as a powerful weapon to inflict unimaginable harm on individuals and society, including the erosion of democratic foundations of our society such as the right to privacy and human dignity. In this talk, I will focus on the challenges of ensuring privacy in the age of AI and discuss various approaches and research directions. I will also overview some of our recent work on privacy-preserving AI techniques that employ computing over encrypted data, as well as some national-level efforts related to privacy-preserving AI, such as national strategies and funding opportunities.

Bio: Dr. James Joshi is a professor in the School of Computing and Information at the University of Pittsburgh, and the director/founder of the Laboratory of Education and Research on Security Assured Information Systems (LERSAIS). He served as a Program Director in the CNS division and its SaTC program at the NSF, and currently serves as an “Expert” in the NSF TIP Directorate. While at NSF, he also served as Co-Chair of the Privacy Interagency Working Group of the Networking and Information Technology R&D (NITRD) program, as well as on the NITRD Fast Track Action Committees for (1) Advancing Privacy-Preserving Data Sharing and Analytics (PPDSA) and (2) the Digital Assets R&D Agenda. He also served on the NITRD CSIA Cybersecurity R&D National Strategy Task Force and contributed to writing the Federal Cybersecurity R&D Strategy published in December 2023. He is an IEEE Fellow, an ACM Distinguished Member, a Fellow of the Asia-Pacific Artificial Intelligence Association, and an IEEE CS Golden Core member. His research focuses broadly on cybersecurity and privacy, including advanced access control models, security and privacy of distributed systems and AI/ML, and trust management. He received the NSF CAREER award in 2006, and earlier established and managed the NSF CyberCorps Scholarship for Service program at Pitt in 2006. He has served as program co-chair and/or general co-chair of several international conferences and workshops. He currently serves as the founding Steering Committee chair of the following co-located international IEEE conferences: Collaboration and Internet Computing (CIC); Trust, Privacy and Security of Intelligent Systems and Applications (TPS); and Cognitive Machine Intelligence (CogMI). He served as Editor-in-Chief of IEEE Transactions on Services Computing from 2017 to 2021.
He has published over 140 articles as book chapters and papers in journals, conferences, and workshops, and has served as a special issue editor for several journals, including IEEE TSC, Elsevier Computers & Security, ACM TOPS, Springer MONET, IJCIS, and Information Systems Frontiers. He established the first undergraduate degree program in Computer Science and Engineering in Nepal while he was at Kathmandu University.