Article

The Cost of Standing Still: When Does “Not Yet” Become “Too Late”?

April 8, 2026 | Brett Schandelson

Every organization that has ever delayed a technology decision has told itself the same story: we’re being prudent. We’re waiting for the right moment. We’ll move when the technology matures, when the risks are clearer, when we have more budget, when the timing is better.

Sometimes that’s wisdom. Sometimes it’s BlackBerry watching the iPhone launch and deciding to wait.

The gap between those two outcomes—prudent patience and costly inaction—is what this panel was built to explore. Across four disciplines and decades of combined experience in financial services, government, robotics, and enterprise analytics, the panelists shared a surprisingly consistent message: the cost of waiting is rising faster than most organizations realize, the cost of switching tools is falling faster than most assume, and the organizations winning right now aren’t necessarily the boldest movers—they’re the ones with the clearest sense of why they’re moving and where they’re headed.

Here are the key takeaways.

About the Session

The session was moderated by Brett Schandelson, Director of Legal Analytics & Technology Solutions at HIKE2, who brought firsthand experience deploying AI inside a chronically underfunded state public defender agency—and beating the entire state government to production in the process.

Joining him on the panel:

Bill Fortwangler, Executive Vice President and Chief Information Officer at Dollar Bank, with nearly 40 years of experience navigating technology transitions across nine organizations—from the early PC era through cloud and now AI.

Gabriel Goldman, Senior Commercialization Specialist at the National Robotics Engineering Center (NREC) at Carnegie Mellon University, where he has spent 17 years helping companies solve real-world operational problems through applied robotics and automation.
Ryan Harter, Director of Strategic Alliances at ThoughtSpot, an agentic analytics platform helping enterprises move toward truly self-service, AI-powered data insight.

Timing Is a Strategy Decision, Not Just a Risk Decision

The panel’s central question—when does “not yet” become “too late”?—doesn’t have a universal answer, and the panelists were honest about that. But they were equally honest that the calculus is changing in ways that make inaction increasingly costly.

Ryan Harter described watching tech startup clients lose customers to faster-moving competitors while they debated the right moment to add AI to their product. In one case, a company’s deliberation cost it not just customers but investors—who read the hesitation as a signal about the organization’s ability to compete. The market doesn’t wait for internal alignment.

Bill Fortwangler offered the institutional perspective: Dollar Bank has been around since 1855, and Fortwangler has watched nearly 40 years of technology cycles—PCs, distributed systems, SaaS, the cloud, and now AI. His read on what’s different this time: there’s less control. Previous technology generations were more deterministic—you deployed, it worked or it didn’t, and you knew what you got. AI systems are probabilistic; they can drift, and their outputs can change even when the inputs haven’t. That requires a different kind of governance—but it also requires a different kind of urgency. Waiting for perfect controls before moving is a strategy for getting left behind.

Brett Schandelson’s government experience added the most striking data point on the cost of waiting: a state agency that went from zero to a working beta in 30 days by moving decisively with a guidance framework rather than a policy moratorium. The same initiative, pursued through traditional government procurement and approval channels, would have taken months to years.
That speed differential—measured in the value delivered to the people the agency served—is the real cost of standing still.

“The cost of switching is going down. The cost of waiting is going up. Our architectures are less monolithic. It’s not a big SAP implementation that’s going to take six years and cost $5 million. Everything is modular. When you know what your North Star is—go. Don’t worry about picking the best tool.”
— Ryan Harter, Director of Strategic Alliances, ThoughtSpot

The practical implication of this framing: timing decisions should be evaluated not just on the risk of moving, but on the quantified cost of not moving—lost revenue, competitive displacement, operational drag, and the compounding disadvantage of having competitors build capability and organizational learning while you deliberate. Most organizations are good at estimating the cost of a failed implementation. Very few have done the same analysis on the cost of inaction.

Governance That Protects vs. Governance That Paralyzes

Every panelist had experienced governance structures that were designed as protection but functioned as barriers—and every panelist had strong, specific views on how to tell the difference.

Fortwangler drew a sharp distinction between governance that makes sense and governance that doesn’t. A conversational AI agent deployed in a customer-facing context, where outputs can change in real time based on live data, needs rigorous oversight and quality-management controls. A software product that was built using AI-assisted coding—static, tested, deployed—does not require a different procurement process than any other software product. Applying the same governance intensity to both is a failure of proportionality, not an abundance of caution.

He was equally candid about an internal Dollar Bank example: a public affairs AI tool stuck in a third-party risk evaluation pipeline that was designed for a different category of risk. The CEO wants to move fast.
The governance process isn’t keeping pace. That friction—well-intentioned but misdirected—is a version of the cost of standing still playing out inside organizations that believe they’ve already addressed the governance question.

Gabriel Goldman offered an instructive parallel from the robotics world. About 15 years ago, the challenge in autonomous vehicles wasn’t the vehicle’s capability—it was the absence of an independent verification system that could confirm that what the sensors were reporting was true. A vehicle that lost one sensor heartbeat could veer off course, and there was no box that could catch it. The solution was to build a separate monitoring layer that could validate sensor data in real time. That box exists in robotics now. In enterprise AI, Goldman observed, it doesn’t yet—there’s no widely adopted independent truth-verification layer for AI outputs. That’s a genuine risk gap, and it’s also, he noted, a significant commercial opportunity for whoever builds it.

“Without a strategy documented, you’re going to be floundering. We formed an AI governance working group. We figured out what’s the easiest thing to tackle first, then second, then third. You need to outline that over a roadmap—and then your boss will tell you to do it faster.”
— Bill Fortwangler, EVP & CIO, Dollar Bank

The session’s consensus on governance: document and publish your AI strategy and acceptable use framework before deploying broadly, build governance into the workflow rather than placing it at the gate, apply oversight intensity proportional to actual risk rather than uniformly, and build the cultural expectation that governance is a safeguard that enables speed rather than a barrier that prevents it.
Start Where You Already Have Data—and Build From There

One of the most consistently practical themes of the session was the advice to start not with the most ambitious AI vision, but with the data and systems that already exist inside the organization—and to build AI capability on that foundation before purchasing anything new.

Fortwangler outlined Dollar Bank’s three-horizon approach with clarity. First: enable the personal assistant and copilot capabilities already embedded in platforms the bank has purchased—ServiceNow, Salesforce, HRIS systems. These capabilities come at minimal marginal cost and provide immediate value, while also building organizational literacy and comfort with AI-assisted workflows. Second: identify the data pipelines and governance structures needed to connect those systems. Third: create custom agents—but only once the foundation is solid enough to support them. Most organizations, he argued, are trying to skip to step three without having done steps one and two.

Gabriel Goldman reinforced this from the robotics side with a principle his team applies on every engagement: go on-site and try to do the process yourself before recommending any technology. The goal isn’t to look good at a task—it’s to understand what the work actually is at the task level, which almost always reveals smaller, lower-risk, less expensive automation opportunities that deliver real value before any large-scale investment is required. Organizations that skip this step tend to either over-invest in capabilities they don’t yet need or under-invest in the foundational problems that would actually move the needle.

Ryan Harter added the data governance dimension that enables everything else: before AI can reliably surface insights, the semantic layer underneath has to be consistent.
If one team calls it “sales” and another calls it “revenue,” and the AI has to guess which one is right, the output will be wrong or inconsistent—and wrong AI outputs are worse than no AI outputs, because they erode trust in ways that take far longer to repair than the original data problem would have taken to fix.

The through-line: an AI strategy built on a weak data foundation is just faster noise. The organizations making the most durable progress are the ones that treated data governance as the prerequisite, not the afterthought.

Overcoming Resistance: Turn Your Skeptics Into Champions

The session’s most memorable audience exchange came from a Takeda Pharmaceuticals executive who raised what turned out to be a universal concern: what do you do with employees who are two or three years from retirement, highly skilled and experienced, and who perceive AI not just as change but as a personal threat?

The answers from the panel were practical and varied—and together they form a useful playbook for any organization navigating similar resistance.

Fortwangler’s approach at Dollar Bank: mandatory training, monitored adoption, and a clear message that AI tools are enhancements to existing platforms, not replacements for people. When employees retire, the automation they helped embed reduces the need for like-for-like backfilling—the organization repurposes and restructures rather than replaces. That framing shifts the narrative from “AI takes your job” to “AI means your successor doesn’t face the same bottlenecks you did.”

Brett Schandelson’s approach in his former state agency was more counterintuitive: he deliberately piloted the AI tool with the most skeptical unit—the one with the most complex cases and the strongest resistance to change. He mandated their participation. The result: the unit with the most to complain about had the most to gain.
Staff who had been spending hours on document processing, interpretation, and data entry found themselves reviewing the AI’s output and taking action in a fraction of the time. The biggest skeptics became the loudest champions, and their credibility with the rest of the agency accelerated adoption across 200 users in ways no top-down rollout could have matched.

Ryan Harter reframed skeptics as assets: their resistance is rooted in experience. They know what’s been tried and failed. They know what the best version of the output should look like. That makes them the ideal test cases—the people who will catch the errors that less experienced users would miss, and whose validation carries the most weight organizationally. Putting skeptics in charge of proving the AI wrong turns their energy from resistance to engagement—and often reveals the most important improvements before anything goes to production.

Three Things to Take Back to Your Team

The session closed with a question every leader in the room was implicitly carrying: given all of this, what should I actually do when I get back to my organization? The panel’s closing advice was refreshingly calibrated—neither a call to reckless speed nor a defense of cautious waiting. Here’s the distillation:

1. Start with your why. Align on what you want the technology to do and for what purpose. Organizations that skip this step end up chasing tools instead of outcomes. For mission-driven entities—government, nonprofits, legal services—this alignment is especially powerful because it connects technology decisions to values people already hold.

2. Build a roadmap, even if you’re not ready to move. You don’t have to adopt today, but you should know where you’re heading. A clear strategy lets you evaluate incoming tools against your actual direction rather than reacting to market noise.

3. Don’t let analysis paralysis cost you more than a wrong turn would.
The architecture of today’s AI tools is modular, not monolithic. Migrating from one platform to another is easier than it was two years ago—and it will be easier still next year. The cost of an imperfect start is recoverable. The cost of not starting may not be.

Watch the Full Session

The full panel video includes a deeper exchange on the specific three-horizon AI strategy Dollar Bank built and is executing against, Gabriel Goldman’s firsthand account of how CMU’s robotics lab approaches automation readiness assessments with commercial clients, Brett Schandelson’s detailed story of going from moratorium to production beta in 30 days inside a state government agency, and a rich audience Q&A, including the Takeda exchange on managing near-retirement resistance to AI adoption.

What Is Waiting Costing Your Organization?

Most organizations have a reasonably good sense of the risks of moving too fast on AI. Very few have done an honest accounting of what it’s costing them to wait. Lost competitive position, widening capability gaps, compounding operational drag, and the talent and cultural cost of being seen as an organization that isn’t serious about the future—these are real costs, and they add up faster than most planning cycles account for.

HIKE2 helps organizations across financial services, government, insurance, law, and high-tech move from AI uncertainty to AI strategy—building the roadmaps, governance frameworks, and change management approaches that let you move with speed and with confidence. If you’re ready to stop deliberating and start building, let’s talk.
Get in touch with the HIKE2 team →