AI Agents in Action: Shaping the Future of Human-Machine Collaboration

HIKE2

As AI agents shift from buzzword to business imperative, organizations are asking not just if they should implement them—but how. During Innovation Summit 2025, Ian Gotts (Elements.cloud), Heather Maples (Dollar Bank), and Andy Stahl (Databricks) pulled back the curtain on where companies really are in their AI journeys, what makes agents effective, and what foundational work can’t be skipped. Whether you’re piloting internal tools or planning customer-facing automation, the insights shared here offer a grounded, strategic roadmap for getting started—and getting it right.

Key Summary:

  1. Don’t Skip the Foundation: Data Quality and Governance Are Non-Negotiable
    AI agents are only as good as the data they rely on. The panel stressed that success starts long before implementation—with ensuring your data is clean, governed, and contextually complete. This is especially critical when integrating multiple sources like PDFs, audio, or system logs.
  2. Start Internally, Not Externally
    High-risk, customer-facing deployments are tempting—but premature. Many organizations are finding success by first using agents for internal training, service desk support, or HR tasks. These lower-risk applications build organizational confidence and refine processes before scaling outward.
  3. Small, Purpose-Built Agents Outperform General Solutions
    Rather than relying on massive, general-purpose models, the panel advocated for building compound agents tailored to specific tasks and datasets. This modular approach increases speed, reduces hallucination risk, and allows for precise tuning as needs evolve.

  4. Change Management Is Just as Important as Code
    Building an AI agent is the easy part—getting it adopted is the real challenge. From executive buy-in to employee trust, organizations must address the human side of transformation. Framing AI as a tool that enhances (not replaces) roles is key to widespread engagement.

The Real Barriers to AI Agent Success

The panel opened with a key observation: building AI agents is relatively easy. The challenge lies in getting people to trust and adopt them.

Andy Stahl explained that many organizations start with excitement—deploying a chatbot or integrating a co-pilot—but quickly run into resistance. Whether it’s a customer unwilling to rebook a flight with a bot, or an employee frustrated by irrelevant suggestions, adoption falters when agents feel disconnected from real-world use cases.

To drive meaningful engagement, Stahl advised starting with user experience. “Think about AI as the new UI,” he said, advocating for interfaces that make AI outputs useful, intuitive, and embedded into existing workflows. The most successful agents don’t announce themselves as AI—they quietly solve real problems.

Why “Good Enough” Data Isn’t Good Enough

Data hygiene, governance, and structure emerged as dominant themes. All three panelists agreed: without the right data foundations, AI agents will fail.

Stahl warned against underestimating data complexity. Structured data (like tables in CRM systems) is only part of the picture. The real challenge lies in unstructured data such as PDFs, contracts, voice transcripts, and scanned documents, and in connecting those sources across disparate systems. He emphasized the importance of building ontologies (conceptual maps of data) and audit trails so that agents deliver consistent, traceable results.
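To make the idea of ontologies and audit trails concrete, here is a minimal Python sketch. Everything in it is illustrative rather than drawn from the panel: the ONTOLOGY map, the keyword-based relevance check, and the in-memory audit_log stand in for whatever knowledge model, retrieval layer, and logging store an organization actually uses.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative ontology: maps document types to the business concepts they describe,
# so reviewers can see what kind of evidence backed an answer.
ONTOLOGY = {
    "crm_record": {"customer", "account"},
    "contract_pdf": {"customer", "obligation", "renewal_date"},
    "voice_transcript": {"customer", "complaint"},
}

@dataclass
class AuditEntry:
    question: str
    answer: str
    source_ids: list[str]   # documents the agent actually drew on
    concepts: set[str]      # ontology concepts those documents cover
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AuditEntry] = []

def answer_with_audit(question: str, documents: list[dict]) -> str:
    """Answer from cited documents and append a traceable audit entry."""
    # Toy relevance check; a real agent would use retrieval plus a model.
    terms = [w for w in question.lower().split() if len(w) > 3]
    cited = [d for d in documents if any(t in d["text"].lower() for t in terms)]
    answer = " / ".join(d["text"] for d in cited) or "No supporting documents found."
    concepts = set().union(*(ONTOLOGY.get(d["type"], set()) for d in cited)) if cited else set()
    audit_log.append(AuditEntry(question, answer, [d["id"] for d in cited], concepts))
    return answer

docs = [
    {"id": "c-17", "type": "contract_pdf", "text": "Renewal date is 2026-01-31."},
    {"id": "t-42", "type": "voice_transcript", "text": "Customer reported a billing complaint."},
]
print(answer_with_audit("When is the renewal date?", docs))
print(audit_log[-1])
```

The point of the pattern is the audit entry itself: every answer carries the documents and concepts that produced it, which is what makes results traceable when regulators or internal reviewers come asking.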

Heather Maples echoed this from a banking perspective. At Dollar Bank, concerns about customer data security, regulatory compliance, and internal trust are top priorities. For AI agents to succeed, she said, “PII must be locked down like the Vatican,” and responses must be reliable enough to uphold the institution’s reputation.

Smart Starting Points: Internal Agents and Incremental Wins

Rather than aiming for fully autonomous, customer-facing agents on day one, the panel recommended more realistic entry points.

Maples shared that Dollar Bank is starting with agents that assist with employee training and internal service resolution. These low-risk scenarios allow the organization to introduce AI incrementally while improving efficiency and building internal confidence.

Stahl supported this “crawl-walk-run” approach, noting that many companies waste time by chasing high-risk, high-complexity use cases too early, such as contract summarization or multimodal document analysis. These projects often stall due to legacy systems, messy data, and a lack of internal expertise.

Instead, organizations should start with internal agents that address common business problems and scale up gradually, learning as they go.

Building for Governance, Scalability, and Change

The panel also addressed engineering and design concerns. Stahl outlined a compound agent model: multiple smaller agents, each trained for a specific task, rather than one massive, generalized model. This architecture improves speed, reduces cost, and limits risk.
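As a rough illustration of that modular idea, the sketch below routes each request to a small, task-specific agent. The router logic, agent names, and handlers are hypothetical; a production system would put real retrieval and model calls behind each agent.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal sketch of a "compound agent": a router dispatches each request to a small,
# purpose-built agent instead of sending everything to one general-purpose model.
# The agents, keywords, and handlers below are illustrative, not from the panel.

@dataclass
class TaskAgent:
    name: str
    handle: Callable[[str], str]   # each agent only covers its own narrow task

def training_agent(question: str) -> str:
    return f"[training] retrieved policy guidance for: {question}"

def service_desk_agent(question: str) -> str:
    return f"[service desk] opened a ticket for: {question}"

AGENTS = {
    "training": TaskAgent("training", training_agent),
    "service": TaskAgent("service", service_desk_agent),
}

def route(question: str) -> str:
    """Naive keyword routing; in practice a classifier or small LLM would pick the agent."""
    key = "training" if "policy" in question.lower() else "service"
    return AGENTS[key].handle(question)

if __name__ == "__main__":
    print(route("What is the remote-work policy?"))
    print(route("My laptop will not connect to the VPN."))
```

Because each agent sees only its own narrow task and dataset, it can be tuned, audited, or swapped out independently, which is the property the panel pointed to when citing gains in speed, cost, and reduced hallucination risk.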

Ian Gotts added that organizations must also rethink documentation and business processes. Agents need well-structured data and clearly defined tasks to function properly. If internal systems and workflows are poorly documented, agents are bound to struggle.

Maples noted that as Dollar Bank implements Salesforce from scratch, they’re treating it as a greenfield opportunity to design with agents in mind. “We’re redefining process now so that we’re ready to add agents later,” she said.

What AI Agents Mean for Future Careers

The session concluded with a look toward the future of work. Gotts and Stahl emphasized that success with agents will require more than coding. Skills in business analysis, data governance, psychology, and change management will be just as important, if not more so.

Stahl advised students and professionals alike to study two areas in particular: financial accounting (to understand business impact and risk) and cognitive psychology (to design systems that people actually use). “You can build anything,” he said, “but if no one uses it, what’s the point?”

Maples underscored the importance of continuous learning. “It’s hard to keep up,” she admitted, “but the goal isn’t to master every new tool. It’s learning how to teach and collaborate with AI.”

AI agents are not just another tech trend; they’re a new paradigm for how we interact with systems, data, and each other. But they’re only as smart as the ecosystems we build around them.

This panel made one thing clear: success with agents doesn’t begin with a flashy use case. It starts with clean data, well-understood processes, thoughtful governance, and above all, a clear purpose. Organizations that take the time to build these foundations today will be the ones leading tomorrow’s intelligent enterprise.