IDEAS: Establishing and Sharing Strategic Guidance for DS/AI Across the Academic Mission
Turning experimentation into shared direction for teaching, research, and operations.
Universities are awash in experimentation with AI and data science—but too often, lessons stay siloed within individual courses, labs, or units. HAIL’s IDEAS goal focuses on transforming distributed innovation into shared, actionable guidance that helps Pitt move forward with clarity, responsibility, and impact.
- Digital Leadership
Digital Leadership is the ability to guide organizations and communities through rapid technological change by aligning data, AI, and digital tools with human values, strategy, and real-world impact. It goes beyond adopting technology: it requires shaping how decisions are made, ensuring systems are ethical, transparent, and trustworthy, and empowering people to use data with confidence and purpose.
Grounded in principles of responsible data science, digital leadership emphasizes using data and algorithms to drive better outcomes while actively managing risks like bias, misuse, and loss of trust. It brings together technical expertise, critical thinking, and ethical awareness to create solutions that are not only innovative but also equitable and accountable in the digital age.
To support this work, we are developing a Digital Leadership taxonomy alongside a growing collection of articles that will help define, identify, and tag key skills, competencies, and practices. Together, these resources will create a shared language and framework for understanding and advancing digital leadership across contexts.
- Frameworks for GenAI Use
Frameworks for GenAI Use is a key component of the HAIL AI Playbook, translating principles of responsible data science into practical, actionable guidance for generative AI. This section of the Playbook focuses on how GenAI can be thoughtfully integrated into research, teaching, operations, and decision-making—moving from experimentation to intentional, high-impact use.
Within the Playbook, these frameworks provide structured approaches to help individuals and teams determine when and how to use GenAI, evaluate and verify outputs, and manage risks such as bias, hallucination, privacy concerns, and misuse. They are designed to support consistency across contexts while remaining flexible enough to adapt to different domains and evolving technologies.
As part of the HAIL AI Playbook’s living structure, this work will continue to grow through real-world case studies, pilot programs, and community contributions. Together, these frameworks will help establish a shared foundation for trustworthy, transparent, and effective use of generative AI—advancing digital leadership across the University of Pittsburgh and the broader ecosystem.
- Contextual & Responsible AI Practices
Contextual & Responsible AI Practices is a focus area within the AI Leadership Lab that helps participants use AI thoughtfully in real-world settings. It centers on understanding context—who is affected, what data is used, and what decisions are being made—and using that awareness to guide responsible choices. Through practical examples and hands-on work, participants learn to spot risks such as bias or misuse, evaluate AI outputs, and apply AI in ways that are transparent, accountable, and aligned with their goals.
- Double Loop Framework for Aligning Computation with Principles
The Double Loop Framework for Aligning Computation with Principles helps teams ensure AI systems are not just effective, but aligned with human values and responsible practices. It uses a two-part approach: one loop focuses on improving technical performance, while the second steps back to question whether the goals, assumptions, and outcomes align with principles such as fairness, accountability, and real-world impact. Together, they create a continuous cycle of building, evaluating, and refining both the system and its purpose.
This framework supports more thoughtful decision-making by encouraging teams to ask not just how systems work—but whether they should be used at all.
- AI Teaching Field Notes
AI Teaching Field Notes is a collection of short, practice-based reflections from University of Pittsburgh faculty exploring the role of artificial intelligence in teaching and learning. Rather than offering prescriptive guidance, it captures how instructors across disciplines are actively navigating AI in their classrooms—what they are trying, what is working, and where challenges remain.
The collection reflects a wide range of approaches, from integrating AI tools into assignments to intentionally limiting their use. By documenting these real-world experiences, AI Teaching Field Notes creates a shared space for insight, experimentation, and evolving practices around responsible AI in education.
- Alexandros Labrinidis - AI Teaching Field Notes: Learning to Trust (But Verify)
- Ansuman Chattopadhyay - AI-Powered Research Tools for Life Sciences: Now Being Tested at Pitt
- AI Playbook
The AI Playbook is a living framework developed through the DataSci+AI Forum to capture real-world signals, identify emerging patterns, and translate them into shared principles for responsible AI use across research, teaching, and operations at the University of Pittsburgh. It helps align stakeholders, guide decision-making, and turn cross-disciplinary insights into actionable strategies.
Pathways to Engage
| Faculty & Researchers | Students | External Partners |
|---|---|---|
| Turning experimentation into shared academic guidance. | Turning learning experiences into guiding insights. | Turning applied use cases into institutional direction. |

HAIL Advisory Board