Working Groups

Working Groups are informal organizations within HAIL that can be convened around themes, specific infrastructure, or a combination of both.

Each Working Group needs a leader and members. There is the hope, but not the expectation, that Working Groups will grow into future Communities of Practice.

Explainable AI

Advances in machine learning (ML) and artificial intelligence (AI), particularly in supervised learning, have introduced challenges in understanding and trust, largely due to the opaque nature of many methods like deep learning.

This lack of transparency is especially critical in high-stakes applications such as credit scoring, hiring, and healthcare. To address this, there is a growing emphasis on explainable AI (XAI), with new approaches like the Tsetlin Machine (TM) gaining traction.

The TM, introduced by Ole-Christoffer Granmo at the University of Agder, uses logic-based learning rather than arithmetic, enabling interpretable models and reducing the need for extensive expertise or post-hoc explanations. TMs have demonstrated benefits in accuracy, scalability, and efficiency across diverse tasks, from classification to natural language processing.
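To give a flavor of why clause-based models are interpretable, here is a simplified sketch of the kind of propositional classifier a Tsetlin Machine learns. The clauses and features below are hand-written, hypothetical examples for illustration only; a real TM learns its clauses from data via teams of Tsetlin automata, which this sketch does not implement.

```python
# Simplified illustration of clause-based (propositional) classification,
# the style of model a Tsetlin Machine produces. The clauses here are
# hand-crafted; a real TM would learn them from data.

def clause(literals):
    """A conjunctive clause over binary features.

    `literals` is a list of (feature_index, expected_value) pairs.
    The clause fires (returns 1) only if every literal is satisfied.
    """
    def evaluate(x):
        return int(all(x[i] == v for i, v in literals))
    return evaluate

# Hypothetical binary features: [has_fever, has_cough, vaccinated]
positive_clauses = [
    clause([(0, 1), (1, 1)]),   # readable rule: fever AND cough
]
negative_clauses = [
    clause([(2, 1), (0, 0)]),   # readable rule: vaccinated AND NOT fever
]

def predict(x):
    # Clauses vote: positive clauses add evidence, negative clauses subtract.
    score = (sum(c(x) for c in positive_clauses)
             - sum(c(x) for c in negative_clauses))
    return int(score > 0)

print(predict([1, 1, 0]))  # fever and cough -> 1
print(predict([0, 0, 1]))  # vaccinated, no symptoms -> 0
```

Because each clause is a plain conjunction of feature conditions, the learned model can be read directly as a set of if-then rules, which is what removes the need for post-hoc explanation methods.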

The Explainable AI Working Group at Pitt is exploring XAI methods to tackle real-world challenges, such as tracking political violence and assessing recovery potential for cardiac arrest survivors.

These efforts align with responsible data science principles, emphasizing transparency and communication. The group aims to expand its scope into new domains and foster best practices in explainable and responsible AI.

The Responsibility of Human Data

Biometric data, including physical traits like fingerprints and facial features as well as behavioral patterns like keystroke dynamics and voice, is increasingly integrated into daily technology. While it enhances security and personalizes experiences, its inherent ties to individuals raise significant privacy and ethical concerns, including issues of consent, surveillance, and potential biases.

Regulations like the Biometric Information Privacy Act (BIPA) in Illinois and similar efforts in other states highlight the growing need for stringent data privacy measures. The responsible use of biometric data requires informed consent, secure storage, and safeguards against discriminatory outcomes.

Non-compliance with privacy laws can result in severe legal and ethical repercussions. To address these challenges, a working group of experts in privacy, ethics, and health sciences is developing guidelines for the ethical use of biometric data.

Their goal is to balance the benefits of biometrics with robust privacy protections, ensuring compliance with laws, promoting transparency, and fostering responsible data practices to benefit individuals and society.

Get Involved

If you would like to join an existing RDS@Pitt Working Group or apply to form a new one, please fill out this survey. If you plan to apply to form a new Working Group, please prepare a 250- to 500-word statement of intent that lays out:

  1. The vision for the group as well as its central, organizing topic, why it is important, and how it relates to responsible data science;
  2. A clearly identified leader of the proposed Working Group; and
  3. A potential membership list that includes members from at least two different units across campus.

Working Groups receive administrative support to facilitate meetings and are invited to be ad hoc members of the Steering Committee.