Video: Shared Responsibility Model: Best Practices for Ethical and Secure Cloud Innovation

April 16, 2025 | Adam Franklin, Kuldeep Singh

In a cloud-first, AI-driven world, understanding who is responsible for protecting data, systems, and business operations is more critical, and more complex, than ever. At Innovation Summit 2025, experts from HIKE2 and Red Argyle Solutions unpacked the realities of the Shared Responsibility Model, providing a clear framework for navigating governance, security, and compliance in a rapidly evolving tech landscape. Whether you’re managing a Salesforce environment, deploying AI tools, or orchestrating multi-cloud integrations, these practical insights will help you align people, process, and technology for resilience and trust.

Key Takeaways:

Shared Responsibility Means Constant Vigilance, Not Delegation
Cloud providers offer secure platforms, but the ultimate responsibility for data integrity, user access, and compliance still rests with the customer. Organizations must clearly define, and regularly audit, their role in protecting and governing their own assets.

Mature Models Require More Than Technology; They Require Cultural Change
Building a resilient shared responsibility model demands not just technical controls but also education, clear policies, and cross-team collaboration. Embedding a security-first mindset across your workforce is as crucial as the tech stack you choose.

Global Operations Demand Layered Governance Strategies
For multinational organizations, a single compliance model isn’t enough. Companies need scalable governance frameworks that adapt to different regulatory environments, such as GDPR or emerging AI regulations, without creating bottlenecks.

AI Expands Risk, and Responsibility
The integration of AI into everyday tools increases exposure to new types of risk, including data leakage and bias.
Organizations must scrutinize vendors’ terms of service, implement strict access controls, and rethink data governance policies to keep up with evolving AI challenges.

Moderator: With the ubiquity of technology, and the pace at which it’s moving, simply keeping up is no longer enough. So we’re going to talk about things like compliance, security, and governance as they relate to organizations. I’ve got people way smarter than me, and we’re going to talk about it here today! We’ve got Adam, Kuldeep, and Gary. I’ll turn to them to do a left-to-right introduction of themselves. Who’s left? Uh, your left, Gary.

Gary Polmateer: Hey, I’m Gary Polmateer, CEO of Red Argyle. I’ve been working with Salesforce since the late ’90s (yeah, that has a bit of a stink to it), and since 2008 I’ve focused mostly on enterprise tech companies.

Kuldeep Singh: All right, I’ll introduce myself. I am a data and AI governance strategist. I’ve been in the data industry for about 24 years, mostly in financial services for about 15 of those. I’ve also done some retail and some philanthropic work, but all of it has been in data analytics.

I work for HIKE2 and am based out of Denver. As a strategist, I look at the capabilities we take to market in this space, how we keep up with our clients, and how we modernize the capabilities they need. More and more, I’m seeing data shift toward becoming AI-ready, and that excites me. It’s a great time to be in the industry, and I’m excited about where data and analytics are heading over the next few years.

Adam Franklin: Hi, my name is Adam Franklin. Relatively speaking, I’m the baby of this group; I’ve been involved in tech for about 15 years. Prior to that I was in a completely unrelated industry. I’m also with HIKE2, functioning as a technical architect primarily focused on Salesforce and cloud-based solutions, though I occasionally work with firms on things far outside that stack.

Setting the Stage

Moderator: All right, thanks, everyone.
I’m going to lob some questions at you, and then, when we’ve got 15 or 20 minutes left, we’ll let the audience lob questions at you. Keep your questions in mind; we’ll get some audience participation as well.

1. The Shared-Responsibility Model

Moderator: Kuldeep, I’m going to throw the first one at you. Beyond my terrible, clinical explanation of what the shared-responsibility model is, can you ground us in your definition, your experience with it, and how you’ve seen it present in the spaces you specialize in?

Kuldeep: Sure. First, the shared-responsibility model is not a shared-services model. Years ago, in financial services, we were used to shared services. About a decade back, while leading data quality and business intelligence for a large wealth- and investment-management firm, we started talking with Google, Microsoft Azure, and others. Cloud was still new to financial services, and I realized very quickly that, while the cloud provider offers great capabilities, I still bear most of the responsibility for my data.

The key is absolute clarity about who is responsible for what. Whether you’re dealing with infrastructure as a service, platform as a service, or any variation, there’s always a platform, an application, your data, and your access management. For me, as a data person, the responsibility for the data always lies with me. The provider supplies tools and infrastructure, but I must audit exactly what I, and they, are responsible for. That clarity is how we protected a global tech organization.

2. Salesforce & Shared Responsibility

Moderator: Gary, you run a Salesforce consultancy. Where do you see the shared-responsibility model surface in Salesforce implementations?
Feel free to lean into the security angle.

Gary Polmateer: Salesforce’s number-one value is trust, and that can lull people into thinking, “I’ll just put all my data there; it’s bullet-proof forever.” At the platform level, that’s largely true; I’ve never heard of anyone breaking into a Salesforce data center and exfiltrating data.

But Salesforce is remarkably admin- and configurator-friendly, which makes it easy to “drive your car into a ditch.” Think about Experience Cloud sites with public endpoints but poorly configured sharing models, or twenty-year-old orgs with rusty brakes: outdated code, layers of admins, and data you should have purged years ago.

So I spend a lot of time on basics:

- Audit your system.
- Maintain data-retention and destruction policies.
- Ask whether Salesforce is still doing what you bought it for.

That’s the essence of shared responsibility in a Salesforce context.

3. Integration Architecture & Shared Responsibility

Moderator: Adam, as an architect focused on integrations across multiple apps and systems, what does shared responsibility look like when you’re orchestrating everything, and what benefits do you see?

Adam Franklin: There are obvious benefits, avoiding risk and staying compliant, but also speed and efficiency. Clear policies remove uncertainty: if governance has milestones, you know the path for every change. That predictability lowers organizational anxiety and makes it easier to reason about large, complex implementations where information is scattered.

A huge but overlooked benefit is education. In a SaaS-heavy world where anyone can spin up a new tool, educating everyone in the institution creates opportunities for continuous improvement. People closer to the day-to-day work often spot issues or innovations the “experts” miss.

4. Organizational Maturity

Moderator: Let’s talk about maturity. What drives an organization from “version 1” to “version 5” of a shared-responsibility model: size, age, ambition?

Kuldeep: Great question.
We’re living in an era where the pace of change outstrips anything before. Back in 2013, at a global investment bank, we ran maybe 50 PCs for data science and ML. The company loved innovation, but the next question was always, “How do we take this to market?” We had no answer.

Most organizations still leverage only 5–10% of their data. As generative and agentic AI move from buzzwords to reality, the volume, variety, and velocity of data explode: structured, unstructured, semi-structured. That stretches any shared-responsibility model.

Key maturity drivers:

- Data scale and diversity.
- Regulatory stretch (GDPR, CCPA, MiFID II, etc.).
- Business demand for new AI-driven capabilities.

Even though cloud makes it easy to spin up data lakes, policies often lag. Mature organizations revisit frameworks, expand stewardship, and re-align governance to new risks. Highly regulated industries (financial services, healthcare, legal) are furthest along, but even they’re still figuring it out.

5. Global Scale & Geography

Moderator: Let’s layer in scale. Global organizations operate under many jurisdictions. Gary, how do you handle multiple models across countries?

Gary Polmateer: Start with the basics. I love the Center for Internet Security’s Critical Security Controls: domains with 30–50 questions each covering physical infrastructure, cloud, backup, reporting, data management, and so on. They map a zero-to-100 maturity ladder:

1. No awareness.
2. Informal practice.
3. Formal process.
4. Reportable.
5. Automated and governed.

Even reaching level 2 or 3 can meet GDPR. For example, for a data-take-out request, build a manual but documented procedure; that’s good enough. The point is to chip away at complexity over time.

On the horizon: the EU Digital Operational Resilience Act (DORA) and the Digital Markets Act (DMA). Awareness is step one; specialists can help with the details.

6. AI & the Future of Shared Responsibility

Moderator: We can’t skip AI. How will AI challenge and shift responsibility models as organizations adopt it?

Adam’s View

Does the paradigm change?
Probably not. Vendor and consumer still share responsibility, but new questions arise, ethical as well as technical. Do you even know which AI tools your employees are using? Do you have policy and controls? We’re still mid-stream in technological change, so stay sensitive and iterate.

Kuldeep’s View

AI stretches existing bounds. Example: a retail bank wanted to compress a one-month mortgage-loan process. Great, until you hit fair-lending regulations, GDPR, CCPA, and 31 different U.S. state regulations. Clients now ask about AI governance:

- Defensive controls (security, regulation) remain the same.
- Data lineage and quality become crucial.
- Cloud providers help with bias testing, explainability, and content filtering, but compliance and data trustworthiness still belong to you.

Gary’s View

From a cyber-security angle, AI enlarges the attack surface. Terms of service matter: data residency, model-training rights, retention, tokenization. Read them! Even simple OAuth connections can over-scope permissions (“read my entire inbox”). Vendors are improving, but stay skeptical, minimize data exposure, and leverage tools like Salesforce’s Einstein Trust Layer for anonymization and tokenization.

7. Audience Q&A (excerpt)

Audience Member: In multi-cloud architectures, what if the service provider wants to use our data to train their model? Sometimes the ToS is vague. How do we navigate that? Cut everything off until we finish due diligence?

Gary: We evaluate risk provider by provider. We lock down new AI integrations with SSO and fine-grained IAM, then roll out training and awareness. Training is often the cheapest cyber-insurance.

Kuldeep: Control what you can: inputs (your data, your prompts) and outputs. Re-examine risk frameworks, map new risks, and build or revise policies before you launch.

Adam: Education is the third pillar.
Your “productivity hack” can be your company’s data breach unless people understand the stakes.

Closing

Moderator: This is the most people I’ve ever talked to about cyber-security; kudos for sticking it out! It may not be the flashiest topic, but it can save you the most money.

If there are no other questions, let’s wrap it up. Thanks, everyone!
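Gary’s warning about over-scoped OAuth connections lends itself to a concrete check. The sketch below is a minimal, hypothetical illustration in Python: the scope URIs are real Google OAuth scope names, but the allow-list and the review policy are assumptions for demonstration, not anything the panel prescribed.

```python
# Hypothetical least-privilege check: given the OAuth scopes a third-party
# AI tool requests, flag any scope that a security review has not approved.
# The allow-list below is illustrative; adapt it to your own review process.

APPROVED_SCOPES = {
    "https://www.googleapis.com/auth/gmail.send",  # send mail only, no reading
    "https://www.googleapis.com/auth/drive.file",  # only files the app created
}

def overscoped(requested_scopes):
    """Return the requested scopes that are not on the approved allow-list."""
    return sorted(set(requested_scopes) - APPROVED_SCOPES)

# A plug-in asking to read the entire inbox gets flagged for review:
request = [
    "https://www.googleapis.com/auth/gmail.send",
    "https://www.googleapis.com/auth/gmail.readonly",  # reads all mail: too broad
]
print(overscoped(request))  # -> ['https://www.googleapis.com/auth/gmail.readonly']
```

A check like this is cheap to run whenever a new integration is connected, and it turns “read the terms of service” from advice into a gate.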