By Caleb Finamore
Editor's Note: this is the final part in our mini-series on the EAAMO conference.
Part 1 | Part 2
Last month, Pitt hosted the 2025 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. Over the course of the week, scholars from all over met with the goal of advancing understanding both of how technology is embedded in society and of how techniques from algorithms, optimization, and mechanism design can improve equity and access to opportunity for underserved communities, through the lenses of the social sciences and humanistic studies.
I sat in on the first day of the Principles and Performance Workshop, serving as a note-taker for and contributor to the Public Sector AI Procurement breakout group. Our task was to review, discuss, and ask questions about the AI Factsheet for Third Party Systems, a proposal from City Detect, Inc. to the city of San Jose, CA. A committee would then review our review, using our discussion to inform larger frameworks for the procurement of AI products for governmental, commercial, and educational purposes. The way I describe the process makes it sound a little absurd, but I think the reality is that most things we do are a little absurd. I think that’s ok.
Absurdities in mind, the breakout group was an endless wealth of discussion, the pipe dream of every undergraduate professor who has ever stared into the void of sixty-some laminated eyes, forced to ask the dreaded question, “so who did the reading?” I arrived prepared to speak, but I did not expect to, and I did not have to. Aside from occasionally googling facts, figures, and the million-dollar brand identities of Texas cities, my role was to listen to and record the conversation. To sit, to look at people looking at other people, to clatter on my old, gunked-up keyboard until it let me type the letter p, to smile politely in a way which says "I know you didn't mean to do that, it's okay to look away" whenever anyone accidentally made eye contact with me. Following are my unedited notes on the session, left that way in recognition of my role in this sphere: not leading, not participating, but transcribing as transparently as possible. I would like to do it again sometime.
- Began by raising issues:
- People won’t be the biggest fan of installing a bunch of cameras to be surveilled
- What about using existing cameras?
- Treat it like a police-style model: must demonstrate exhaustion of lower-level resources before escalation
- Can you do anything about it regardless?
- Detroit identified a lot of blight, but hasn’t reached its goal of dealing with it
- Moved to addressing quality and ownership of data
- State data must be public, but private corporation data can be private, so where would the data for this be stored?
- However, even public data can be redacted if it’s used in connection to private stakeholders
- Raised concern with evaluation metrics disclosed by City Detect
- “F1-score” is not specific, and the lack of specificity is concerning given that 85% is not a particularly high bar
- If you were a potential client, what metrics would you be looking for? Would that change based on your background?
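The group's concern about "F1-score" being underspecified can be made concrete with a quick sketch. This is my own illustration, not anything from City Detect's factsheet: F1 is the harmonic mean of precision and recall, so an 85% F1 can arise from very different error profiles, and the number alone doesn't tell a city whether a blight detector over-flags properties or misses them.

```python
# Illustrative sketch (hypothetical numbers, not from the factsheet):
# the same F1-score can hide different precision/recall tradeoffs.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Two hypothetical detectors with (roughly) the same F1:
balanced = f1(0.85, 0.85)    # errs evenly in both directions
skewed = f1(0.7907, 0.92)    # more false flags, fewer misses

print(round(balanced, 2), round(skewed, 2))  # both round to 0.85
```

Which of those two detectors is acceptable depends entirely on who bears the cost of a false flag versus a missed case, which is exactly the client-dependent question raised above.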
- Concern about order of operations: soliciting RFPs to solve a blight problem vs. being solicited by corporations
- First mover advantage: “hey, I’m corporation X, here’s how I solve this problem”
- Political pressure, when solicited, to look forward-thinking as a government
- What metrics for accountability exist and are presented?
- What metrics exist within AI at large?
- Not a metric per se, but making sure the people who are receiving reports and outputs have the technical skills to parse the output they’re receiving
- Comprehensive audit: testing inputs and outputs, looking for patterns
- Who is the burden of proof on?
- Will the model present proof of the blight? Will citizens have means to contest the blight?
- What happens if the model is wrong? Who bears the financial responsibility?
- Answers to above questions will vary heavily depending on local legislation and procedures
- Expressed concern about City Detect’s human review
- Human review cannot be comprehensive, because if it were, you could just employ the human; so if the review isn’t comprehensive, is it good?
- Raised concern that this feels like a way to exacerbate economic inequalities
- People who will have blights won’t have the money to fix them, so it could easily turn into liens and compounded fines
- Agreed we would like to see a more detailed procedure for use cases of the model
- Opened discussion of speed at which they deploy new models
- Why so many models?
- Why is the rate so fast?
OUT OF TIME
