
Leading With Values: What Every Organization Can Learn From Pittsburgh’s Responsible AI Journey

HIKE2

Before an organization can ask what AI should do, it needs to answer who AI should serve — and why. That’s the quiet but powerful lesson embedded in how the City of Pittsburgh has approached artificial intelligence: not as a technology strategy, but as a values exercise first.

At Innovation Summit 2026, two of Pittsburgh’s civic technology leaders pulled back the curtain on a multi-year journey to build a responsible AI framework inside a major American city government. Their story isn’t just a public sector case study — it’s a blueprint for any organization navigating the tension between AI’s potential and the human trust required to deploy it well. The throughline? You don’t lead with the tools. You lead with what you believe.

City of Pittsburgh presentation at Innovation Summit 2026

Meet the Speakers

Chris Belasco is the Chief Data Officer for the City of Pittsburgh, where he oversees the city’s data governance, technology standards, and AI strategy.

Andrew Hayhurst is the Senior Manager of Innovation at the City of Pittsburgh, focused on workforce enablement, pilot programs, and translating technology strategy into operational practice.

Together, they’ve spent the past several years building one of the more thoughtful municipal AI frameworks in the country — not by moving fast, but by listening carefully.

Start With Values, Not Tools

When ChatGPT launched in late 2022, city employees in Pittsburgh — like workers everywhere — started using it. Directors raised alarms. There was serious internal pressure to do what several other major cities had done: block AI entirely. Chicago had famously instructed employees to immediately leave any meeting where an AI tool was present. Allegheny County took a similar stance.

Pittsburgh chose a different path.

“We chose to prepare by establishing guide rails, training our employees, and really focusing on trying to bring experimentation to the workforce so that we could do this responsibly and correctly,” Hayhurst explained.

The framing the team chose for their AI standards was itself intentional. They deliberately avoided the word “policy” — a word that implies punishment and restriction — and instead published standards and guidelines. The message to employees was clear: you can use AI, you should be exploring it, but here is how to do it in a way that aligns with our values.

That values-first orientation grew out of an informal conversation at the prior year’s HIKE2 Innovation Summit, when Tia Christopher of Orange Peel Collaborative told the team simply: “You can do this by leading with your values.” It was a quiet moment of permission that shaped everything that followed.

The core value Pittsburgh kept returning to? AI should enhance public servants — not replace them. This wasn’t just a talking point. It shaped procurement requirements, training design, use case prioritization, and how they communicated change to a workforce that had real concerns about job security. When city council members raised worries about AI displacement, the Pittsburgh team didn’t push back — they recognized it as value alignment.

Andrew Hayhurst

Building the Scaffolding: From Guidance to Governance

Pittsburgh’s approach offers a practical model for how organizations can move from reactive AI scrambling to structured, confident adoption. Their journey unfolded in deliberate phases:

2023–2024 — Establish the baseline. With AI tools already in use, Pittsburgh published its first AI usage standards in early 2024, establishing which tools were acceptable, what data was appropriate to input, and what accountability looked like. Critically, they also launched monthly AI workshops — beginning with something they called ChatGPT 101 — to demystify the tools and build comfort across a workforce with a wide range of technical fluency.

2024–2025 — Operationalize and refine. As Microsoft’s Copilot Chat became available on the government tenant, Pittsburgh steered employees toward it — not because it was the only tool, but because it kept sensitive city data within a secure ecosystem. They published an updated version of their AI standards in late 2025, incorporating lessons learned from the first year of practice. They also built a formal request process so employees could get new AI tools reviewed and approved, rather than creating a shadow IT problem.

Ongoing — Scale responsibly. Pittsburgh now runs an AI working group that surfaces use cases from across departments, maintains an AI registry for public transparency, and has developed a prioritized list of “productivity plays” — a term they chose deliberately to signal helpfulness rather than threat.
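To make the registry idea concrete, here is a minimal sketch of what one public AI-registry entry might capture. The field names are illustrative assumptions for this article, not Pittsburgh's actual published schema.

```python
# Hypothetical shape of a single AI-registry entry; field names are
# assumptions, not the city's real registry format.
from dataclasses import dataclass, asdict
import json

@dataclass
class RegistryEntry:
    tool: str
    department: str
    purpose: str
    data_classes: list   # kinds of data the tool is approved to touch
    human_in_loop: bool  # does a person review outputs before action?

entry = RegistryEntry(
    tool="Copilot Chat",
    department="Innovation & Performance",
    purpose="Drafting and summarization for internal documents",
    data_classes=["public", "internal"],
    human_in_loop=True,
)

# Serializing entries to JSON is one simple way to publish them for
# public transparency.
print(json.dumps(asdict(entry), indent=2))
```

Even a schema this small forces the governance questions the standards raise: which data classes a tool may see, and whether a human stays in the loop.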

That language intentionality matters more than it might seem. Hayhurst put it plainly: “With the fear that we have throughout a lot of city employees — the idea that we’re going to take their jobs — we wanted to make sure that we’re using words that didn’t scare them away.”

From Strategy to Practice: Real Use Cases in a Complex Environment

One of the most grounding parts of the session was seeing how Pittsburgh’s values show up at the use case level — not just in slide decks.

The team developed 22 “productivity plays” and ran them through an impact-difficulty matrix to identify where to focus first. Quick wins included a recycling coach chatbot (low risk, high resident impact), park usage analytics, and automated meeting summaries — the kind of AI tools that make employees’ workday tangibly easier without touching anything sensitive.
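The impact-difficulty triage described above can be sketched in a few lines. This is a hypothetical scoring model with made-up 1–5 scales; the city's actual rubric and scores are not public.

```python
# A minimal impact-difficulty prioritization sketch. Scores and
# thresholds are illustrative assumptions, not the city's real rubric.
from dataclasses import dataclass

@dataclass
class Play:
    name: str
    impact: int      # 1 (low) .. 5 (high) benefit to residents/staff
    difficulty: int  # 1 (easy) .. 5 (hard) to implement responsibly

def quick_wins(plays, min_impact=4, max_difficulty=2):
    """Keep high-impact, low-difficulty plays; highest impact first."""
    return sorted(
        (p for p in plays if p.impact >= min_impact and p.difficulty <= max_difficulty),
        key=lambda p: (-p.impact, p.difficulty),
    )

plays = [
    Play("Recycling coach chatbot", impact=5, difficulty=2),
    Play("Automated meeting summaries", impact=4, difficulty=1),
    Play("Permit navigator", impact=5, difficulty=4),
]
print([p.name for p in quick_wins(plays)])
# The permit navigator drops out: high impact, but too hard for a quick win.
```

The point of a matrix like this is less the math than the conversation it forces: every play gets an explicit impact and difficulty judgment before anyone writes a line of code.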

More complex use cases required more care. When the city’s planning department wanted to add a chatbot to help residents navigate the 2050 comprehensive plan documents, the team flagged a critical risk: the tool was launching during a mayoral primary. Rather than press forward, they slowed implementation, brought in teams to stress-test the guide rails, and rolled it out only after the election. The decision to slow down in service of public trust is exactly the kind of judgment that a values framework makes possible.

Chris Belasco

A permit navigator — a chatbot to help residents understand the minimum requirements to build in Pittsburgh — is now on the roadmap. The city currently processes 5,000 permits but cycles through 25,000 reviews because applications are so often incomplete. A pre-submission AI tool that helps residents submit better applications could dramatically reduce that backlog while keeping plan reviewers focused on high-value work. As Belasco noted: “There’s no way we can process that volume with fewer plan reviewers. We need all the plan reviewers we have. We need more. And so how do we make these things possible for the workers that we have?”
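The completeness-checking idea behind the permit navigator can be illustrated with a deliberately simple sketch. The required-field names here are invented for illustration; Pittsburgh's actual permit requirements are far richer.

```python
# Hedged sketch of a pre-submission completeness check, the kind of
# screening a permit navigator might run before an application is filed.
# Field names are illustrative assumptions, not the city's real schema.
REQUIRED_FIELDS = ["applicant_name", "parcel_id", "site_plan", "zoning_district"]

def missing_items(application: dict) -> list:
    """Return the required fields the applicant still needs to supply."""
    return [f for f in REQUIRED_FIELDS if not application.get(f)]

draft = {"applicant_name": "A. Resident", "parcel_id": "0001-X-00100"}
gaps = missing_items(draft)
if gaps:
    print(f"Before submitting, please provide: {', '.join(gaps)}")
```

Catching gaps like these before submission is what could shrink the review cycle: fewer incomplete applications means fewer of the repeat reviews that inflate 5,000 permits into 25,000 reviews.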

Even small internal tools reflect the same philosophy. The team built a solution to digitize handwritten fitness test forms collected in the field — because it turned out that pen and paper worked better than tablets in bad weather. Rather than force a technology solution that didn’t fit, they found a way to bridge the analog and digital while saving staff from tedious manual data entry. They’re applying the same approach to utility invoices and card-catalog-era records.

The Ecosystem Matters as Much as the Strategy

One of the most underappreciated elements of Pittsburgh’s approach is how openly they’ve built it — not in isolation, but in relationship with a broad ecosystem of partners.

Nationally, they’re members of the GovAI Coalition (led by the City of San Jose), a participant in Bloomberg Philanthropies’ City AI Connect, and connected to Partners for Public Good. Locally, they’ve partnered with Pitt Cyber’s Beth Schwanke, CMU faculty focused on responsible AI, and the Western Pennsylvania Regional Data Center. They’ve also initiated community listening sessions with residents who have historically been excluded from technology decisions — deliberately building in the kind of community voice that shapes whether technology actually serves people or just processes them.

This isn’t just civic generosity. It’s a recognition that responsible AI at scale requires input from people who experience the consequences of the systems being built. The City of Pittsburgh is also actively investing this knowledge back into the region, hosting workshops for smaller municipalities throughout Allegheny County that don’t have dedicated AI resources.

Watch the Full Session

The full Innovation Summit 2026 session — including their live Q&A on permitting AI, procurement disclosure requirements, and startup partnerships through PGH Lab — is available to watch here:

What This Means for Your Organization

Whether you’re a financial services firm navigating explainability requirements, a SaaS company scaling AI-assisted features, a law firm cautiously exploring document automation, or a public agency trying to build constituent trust — the Pittsburgh story surfaces something universal: the organizations that will get AI right are the ones that start by getting clear on what they value.

That’s the work that makes everything else — the tools, the training, the governance — coherent rather than chaotic.

If your organization is trying to figure out where to start, or how to move from ad hoc AI experimentation to a responsible, scalable strategy that your workforce actually trusts, we’d love to think through it with you.

Get in touch with the HIKE2 team →