
The Shared Responsibility Model Part 2: Best Practices for Ethical and Secure Cloud Innovation

HIKE2

As organizations rely increasingly on cloud technologies to drive business-critical solutions, understanding the evolving balance of responsibilities between service providers, organizations, and key stakeholders is key. This webinar explores the stakeholder roles and the shared responsibility model, emphasizing how all parties can collaborate to ensure secure, compliant, and agile cloud environments. Participants will discover cutting-edge strategies to protect data, enhance operational resilience, and stay ahead of evolving risks while leveraging the potential of cloud technologies.

Transcript:

Thank you so much for joining us today. My name is Kalia Garrido, and I head up strategic marketing and events here at HIKE2. If you haven't heard of us already, HIKE2 is a best-in-class innovation consultancy, and we specialize in digital transformation strategy, design, and implementation. Our main areas of expertise are AI, cloud solutions, and data and analytics. And you are in for a treat today, because we have some seriously smart folks who are ready to share their expertise with this group.

But before we begin, I'll do some housekeeping. This is, of course, a webinar, so your cameras and microphones are off, but we do encourage your engagement via the chat or the Q&A, and we'll reserve a little bit of time at the end of the session for live questions. We are, of course, recording; this session will be posted on our YouTube channel along with a plethora of other innovation-forward content, and I will share the link here in the chat soon.

This is part 2 of our three-part series on the Shared Responsibility Model. Today we are going to be chatting at a mostly intermediate level, focusing on policies within the shared responsibility model.

Now I have an exciting announcement about our third and final session in this series: it will be happening live, in person, at the Innovation Summit on March 26th and 27th. This session specifically will be on the 26th, and it will be held in Pittsburgh, Pennsylvania.

We would love for you to come join us there and meet our experts live. I will share a link on how you can register for that session in just a moment. But first, allow me to introduce my favorite partner in crime, Morgan Llewellyn, the moderator of today's session and the HIKE2 AI Practice Leader. Morgan, who also holds a PhD, brings over 20 years of experience and renown in the strategy and innovation of AI solutions. He has successfully implemented advanced AI solutions and technology for government agencies, Fortune 100 companies, and, here at HIKE2, Morgan drives AI strategies and ensures that our company remains a leader in AI technology and applications. And so, Morgan, with that, please do take it away, and thank you for joining us today.


Morgan Llewellyn: Yeah, thank you so much, Kalia, and thank you, everyone, for coming to the talk today. So today, as Kalia mentioned, we're going to be talking about the shared responsibility model. We have a previous recording on this, a version-one, getting-started, early-adopters introduction to what the concept is and why it's relevant. So this talk is going to focus more on organizations that are starting to move away from that early adoption or basic implementation of a shared responsibility model. Just to recall: the shared responsibility model is about shared trust between the cloud companies who are providing the infrastructure and the actual companies who are responsible for maintaining that trust with what they're putting in, how they're accessing it, how they're sharing that information, and so on.

And so, as we go to a more advanced implementation and adoption of the shared responsibility framework: Garry, I wanted to start off the call with you. First, can you introduce yourself? Because the wealth of experience you bring from a security perspective is truly amazing. And second, can you talk a little bit about what differentiates folks from these early adopters? What are the signals or triggers that suggest maybe they're outgrowing a basic or early implementation?


Garry Polmateer: Sure. Well, hi, everyone, I'm Garry Polmateer. I'm the CEO at Red Argyle, a Salesforce SI; we focus on UX, Salesforce Experience Cloud, and security, if you can imagine that. I've been in the Salesforce ecosystem since 2008, got a couple of certifications, and I'm a Salesforce Community MVP Hall of Famer. So an old-timer, if you will, and thanks a lot for having me today. So, talking about growth: I was trying to think of a succinct way to answer this question, and I found a couple of tongue-in-cheek answers. One of them was: if the people who founded your company wrote the security and privacy policy, you've probably outgrown it. That was my personal experience. When we got Red Argyle running, my partner and I wrote all of the policy, and it was very homegrown. But the real things that we see out in the world, working with customers at the enterprise scale, are two things. One is cadence.

So security and privacy policy is not a one-time exercise; it's an ongoing effort. When we were prepping for this call, I said: if you don't have a policy yet that controls how often you update your policies, that's a great way to start getting everything on a cadence, to make sure this effort, and the shared security or shared responsibility model falls within that, covers the way you're working with your software vendors, integrating, and managing your data and privacy expectations. So one is cadence. The second is alignment: validating that just because one person over in IT wrote a policy to control something, everyone else in the organization actually has the visibility and expectations to maintain it. So I think putting together some form of reliable cadence and some form of reliable communication infrastructure can help take those original policies and bring them up to what you'd consider the right scale and standard.


Morgan Llewellyn: I think those are a couple of really amazing suggestions on when it may be time to reevaluate. When it comes to cadence, real quick, Garry: are there goalposts that you see as roughly best-in-class? Is it quarterly? Annual? Biannual? What do you see?


Garry Polmateer: Yeah, at the galactic scale, if you look at a lot of the regulations, they tend to require some form of annual cadence for the regulators looking at an organization they're overseeing. So I think that's your required standard for the overall policy frameworks. But within that, I think there's a lot of room for interpretation, based both on the size of your organization and on what your actual business needs are, along with the risk factor you're working with. An example is a backup policy. You might only have to look at that once a year to make sure it's still adequate, but at the practice level you probably want to be running your backup every day, and you probably want to be doing a test restore every week or every month. So again, within a family of policies, I think there are a few cadences that make sense, and it just comes down to the right size for your organization.
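Garry's layered cadences lend themselves to a simple policy-as-code check. The sketch below is purely illustrative; the activity names and windows are assumptions for the example, not anything Red Argyle or HIKE2 ships:

```python
from datetime import datetime, timedelta

# Hypothetical cadence rules mirroring Garry's example: the policy document
# itself is reviewed annually, backups run daily, test restores run monthly.
CADENCES = {
    "policy_review": timedelta(days=365),
    "backup_run": timedelta(days=1),
    "test_restore": timedelta(days=30),
}

def overdue_activities(last_run: dict, now: datetime) -> list:
    """Return the names of activities whose cadence window has lapsed."""
    return [
        name for name, max_age in CADENCES.items()
        if now - last_run.get(name, datetime.min) > max_age
    ]
```

A check like this could run on a schedule and page the compliance owner, which is one lightweight way to make "a policy about updating your policies" enforceable rather than aspirational.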


Morgan Llewellyn: And let me ask you one more question, because I think this is really valuable insight. When you run this cadence, who should typically be involved in the annual review? Who's responsible for it? Are there other people who need to be involved? How would you structure that review in terms of ownership?


Garry Polmateer: Yeah, when I work with customers, we typically see a few stakeholders. We'll see an IT presence, a business presence, usually some kind of legal presence, and then we will be the vendor presence. The fifth is usually company project management, or an internal compliance team, or something along those lines. So usually it's stakeholder groups like that; it ends up being a functional team where we all have our own specific roles on the project.


Morgan Llewellyn: Okay, yeah, thank you so much. I think those are great insights to really get us started. Let me transition now. Kuldeep, let's bring you into the conversation. Can you introduce yourself? And then, similarly, your focus is on the data and data governance side. Along the lines of what Garry was saying, can you talk about the signs, signals, or business events that start to show it's time to reevaluate governance from a data perspective? What are you putting into the shared responsibility model from a data perspective? How are you sharing that data across your organization and outside of it? Any thoughts or insights you could share?


Kuldeep Singh: Yeah, absolutely, Morgan. First of all, thank you for having me here. A brief introduction: I'm with HIKE2, and my role there is strategist, focused on data and analytics. My focus is on modernizing the capabilities that we take to the market to help our clients succeed, and I'm also an advisor to management at every level.

And we are modernizing our capabilities, keeping up with the times, because everything is moving so quickly. My background is in data and analytics, about 24 years now; sometimes I say 20, I've forgotten exactly, somewhere between 20 and 24. But it's been a fascinating journey for me. All of what we are going to talk about today has evolved over that period of time, and it's just been fascinating to see where we are heading.

As far as the topic itself: great question, Morgan. When I start thinking about policies and data governance, and now I work a lot with AI governance as well, I start thinking about the demand signals. As much progress as we have made with AI and data, many organizations are still struggling a little bit, especially in the industries I work with, financial services and healthcare. Many of them were reluctant even to adopt the cloud in the first place.

And now that they have adopted the cloud, the SEC and the other regulators they work with apply heavy scrutiny. So some of them have even taken their risk models and moved them back on-prem, just because of that scrutiny and the lack of a policy that specifically addresses today's advances on top of the policies they already had. So the demand can come from many places. Some organizations, like I said, are starting off. Some are just reacting to what's happening in the industry, specifically to regulations. Even in the United States, the last I checked, 25 states have very well-written, documented guidelines, standards, and policies that they expect those doing business in their territory to adhere to. To name a few, there's the Texas Data Privacy and Security Act and the Oregon Consumer Privacy Act. Colorado has one, Utah has one, Minnesota, Montana; I can keep naming states.

And then you step away from that and look at the global scene. The last I checked, and I check this obsessively, about 120 countries now have such laws; laws were already in place for data, and AI laws are now arriving on top of them. So there's a lot happening. Other companies might just be outgrowing what they have. Look at large investment banks and wealth managers.

They've been working with AI, data, customer relationship management, and the interaction of different varieties of data for a while. However, they're trying to keep up with the regulators, and they're trying to keep up with their markets. And then, like I said, if you enter new markets, especially if you go into Europe and you're dealing with European Union countries, and the hundred other countries across the globe that have laws now, those are some of the demands that are coming. The other thing I would say before I pause here, Morgan, is expansion in data volume. We have been talking about volume, variety, and velocity ever since we defined big data, fifteen years back or maybe even before that. However, that's in a different gear now: third-party APIs, data packs and third-party data, IoT devices producing a lot of data. There has always been a lot of data; it just continues to grow.

And then, especially in the LLM world, there's a need for a multimodal approach to how you consume data, on both the structured and the unstructured side. And that's where, when you step back, the policies were written and are reviewed every year, but organizations at any point on that spectrum, from starting out, to entering a new market, to outgrowing what they have, need to step back, take a look, and continue to reinvent and republish their policies, and make sure the regulators are in sync with what they're expecting. So I'll just pause here and see if there are any follow-up questions, Morgan.


Morgan Llewellyn: I think what you're saying is really interesting. If we take what Garry was talking about, he's saying there are internal drivers that may be disrupting and causing you to rethink your policy. And I think what you're suggesting is that there are external factors, too: we've got state regulations and government regulations that are also driving a necessity to reevaluate.

And I think that comes back nicely to your original point, Garry: cadence. Whether it's internal drivers causing you to rethink your model or external drivers causing you to rethink what you're doing from a security perspective, having some sort of standard cadence to reevaluate is one way to bridge the two. And that's where you're going to see a level of sophistication that maybe you don't see in early adopters but will start to recognize in more mature organizations. So I think this internal/external framing is really nice, and Garry, that cadence ties it together well. Let's bring in our third guest. Adam Franklin is with us, and from the AI side, call it the innovation side, the thing I'm most interested in, where I think there's exponential opportunity, is integrations. When you're talking about data, all these new data sources and the data volumes you mentioned, Kuldeep, a lot of it is coming from integrations. And as you're thinking about security threats to your platform, a lot of integrations can introduce new threats, or new capabilities.

Adam, what are you seeing from an integrations perspective? What does early adopters' shared responsibility look like? What triggers should people watch for as they start to advance, and what are the first steps they should take as they move from that beginner stage to a more mature organization?


Adam Franklin: Yeah. So first of all, let me wish a happy Data Privacy Week to all those who are celebrating it. And just a quick intro: Adam Franklin, Technical Architect with HIKE2. Kalia has explained what we do pretty well. I am mostly focused on cloud solutions, a lot of time in Salesforce and a lot of time talking about all the other things that Salesforce touches and integrates with. So when I think about this question of outgrowing where you started, it starts by recognizing some of what Garry was saying, which is that this is the natural evolution of things. If your business is doing well, you're going to outgrow those policies; you're going to discover that you need to go deeper. Thematically, probably the first place a lot of organizations end up is discovering that they don't have the level of knowledge and insight they need to be confident they're fulfilling all of their responsibilities as they relate to integration. So what does that mean? It means they can't necessarily answer questions like: How well do you understand your integrations in terms of their ongoing health and performance? How well do you know what tools are being used? In a lot of businesses nowadays you'll see different tooling used across different business units, and it may not always be well governed. And, critically, how is access being managed and recorded? To go into a little more depth on the first of those: when I think about understanding the ongoing health and performance of the integrations in the enterprise, what I see is that organizations generally know what's integrated, how it's integrated, and why they integrated it, but they can't really tell you, on a day-to-day basis, what's going on with that integration.
And so, in terms of policy, where this usually leads is introducing policies around things like observability or application performance monitoring, with tools like Dynatrace, Datadog, or Splunk, so that at the enterprise level you can actually answer those questions a little better.

The second one I mentioned was tooling. Sometimes enterprises just have a variety of middleware in use, and they don't necessarily know exactly what's going on with all those tools; different integrations are being realized in different ways. I think it becomes more and more important, not to say that all integrations must be executed using the same tool, I would argue that's an anti-pattern, but to introduce some level of tooling governance, so that there's at least a review of those tools to make sure that, one, you're not buying redundant tools for no reason and, two, the tools you are selecting can meet your overall security requirements. Thinking back to the shared responsibility model: when we start integrating with other systems, if we're buying tooling to achieve those integrations, are those tools actually able to fulfill our obligations in terms of data security?
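The day-to-day health question Adam raises can be made concrete without committing to any particular APM vendor. This is a minimal sketch that assumes run records shaped like `{"integration": str, "ok": bool}`; real observability tooling would also track latency, throughput, and traces:

```python
from collections import Counter

def health_summary(run_log: list, error_threshold: float = 0.05) -> dict:
    """Summarize per-integration error rates from a list of run records.

    Each record is assumed to look like {"integration": str, "ok": bool};
    the 5% default threshold is an arbitrary illustrative policy choice.
    """
    totals, errors = Counter(), Counter()
    for run in run_log:
        totals[run["integration"]] += 1
        if not run["ok"]:
            errors[run["integration"]] += 1
    return {
        name: {
            "runs": totals[name],
            "error_rate": errors[name] / totals[name],
            "healthy": errors[name] / totals[name] <= error_threshold,
        }
        for name in totals
    }
```

Even a crude summary like this, fed from whatever logs the middleware already emits, gives an enterprise a first answer to "what's going on with that integration today?"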


Morgan Llewellyn: You bring up a really good point, and this is something I've seen with organizations. When you start out as a small company, it's easy to know how many integrations you have: six, ten, whatever that number is. But, as a victim of their own success, I've seen very large, very sophisticated organizations with no idea how many integrations they actually have. It just explodes and expands. So, an open question for the group: as organizations become more sophisticated, what policies or frameworks can they put in place to prevent this explosion of unruly integrations, where there's zero insight into what we're integrated with and what we're sharing, and to keep that top-layer observability? Any suggestions or best practices folks have seen?


Garry Polmateer: Some of my more mature customers have a software selection and approval process, where essentially any new software being considered by the organization has to be put through diligence. And that doesn't have to be an enterprise-only set of rules; I would highly suggest that even small and medium businesses have some level of expectation in that decision-making process to vet the software properly. When we talk about the shared responsibility model, most of my experience is with Salesforce, and we all know how Salesforce works, how Salesforce Trust works, and all of those things. But then we go to, say, an integration platform like Workato.


I may or may not know all the details: how do they encrypt their data at rest? What is their logging like? How do they maintain granular security controls when you're moving data from one platform to another? That's where having at least a basic diligence process can help you understand the impacts of adopting some of those new technologies.
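A basic diligence process like the one Garry describes can start as little more than a required-controls checklist applied to each vendor questionnaire. The control names below are illustrative assumptions, not a complete security assessment:

```python
# Hypothetical required controls for vendor diligence; a real program would
# draw these from the organization's own security and privacy policies.
REQUIRED_CONTROLS = [
    "encryption_at_rest",
    "audit_logging",
    "granular_access_controls",
]

def diligence_gaps(vendor_answers: dict) -> list:
    """Return the required controls the vendor has not affirmed.

    vendor_answers maps control name -> bool; missing answers count as gaps.
    """
    return [c for c in REQUIRED_CONTROLS if not vendor_answers.get(c, False)]
```

The value is less in the code than in the ritual: every new tool, enterprise or SMB, gets the same questions asked before it touches production data.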


Morgan Llewellyn: So you've got a software selection process. What about organizations that have already selected a software, let's say Salesforce? Now you're not actually selecting another software; maybe you're just integrating to a data source.

Now, maybe you can think of that as a software selection too. But Kuldeep, Adam, any thoughts? How do you do that same level of vetting? How do you think about integrations and data acquisition in order to maintain that visibility and understanding of those integration points?


Kuldeep Singh: Yeah, I can provide more of what I'm seeing from a governance perspective. If I go back to the industry where I've spent a lot of my time, especially large global organizations: they are bringing in software, they're bringing in technology, some lean more toward build versus buy, and so many years later we're still struggling to unify that quintessential customer data from various sources. And the promise of advisors utilizing intelligence through LLMs and through AI, that lure is there. Like I said, whether you're reacting or you want to be the first adopter: people built robo-advisors back in the early 2010s and thereafter, and now they're bringing in these new foundational models, which require them to really unify their customer data, their activity data, their transaction data. And people are struggling, because the governance practices put in place ten years back don't necessarily align with what is necessary today, whether that's how agile governance needs to be or how granular it needs to be. Fair lending rules in banking require banks to be fair, but those requirements were written for business rules, not for artificial intelligence.

So as you bring in new technologies and look at AI as an embedded component of how you make decisions, you'll have to look at various principles, such as fairness, transparency, integrity of your data, explainability, and an understanding of how bias is being introduced into your models and all the way into your prompts, all the way through.

So, taking it back, I think I'm going all over, but let me connect this for you, Morgan. As you start thinking about the principles you have to adhere to, and the demands of customer unification, because you want to be first to market and you want to take your robo-advisor, or whatever you're building for your clients or on their behalf, and innovate there: what I'm noticing is that these organizations have to step back and revisit their principles, then revisit the policies that were created on top of those principles, and carry both forward as they're edited or created. What are the acceptable use policies? Whether the policies live in the risk function or the privacy function or anywhere else, you're bringing all of that together. And then you're also looking at the newer tools available in the cloud to integrate; we've been talking about integration here.

What are the cloud-native integration frameworks available to you? What APIs can you use? That's the advancement in APIs and in how you can bring in third-party data. How do you utilize technologies such as Databricks, Snowflake, or any of the data lake and data fabric providers out there? It's really about revisiting your principles, your policies, and your tools, bringing it all back, and seeing how you have to reinvent yourself. To Garry's point earlier: reinventing is needed now across the industry, especially in highly regulated organizations. There is a need to be agile, and not everything has to be done tomorrow, but bring it back to that cadence: how quickly and how thoroughly you want to keep progressing and keep reinventing yourself.


Morgan Llewellyn: If I can jump in, I think you said something really interesting: the governance policy they had from ten years ago isn't relevant today at some level, or at least needs to be updated. As I think about the shared responsibility model and what's changed in the technology and data world, there really has been a shift: traditionally, we haven't thought of unstructured documents as data sources, frankly. So when we think about data sharing and data ingestion, we're typically thinking about how to share something from a structured SQL or NoSQL database into wherever we're putting it.

And it seems like one of the real needs to reevaluate the shared responsibility model is that now, with LLMs and these new capabilities, you're throwing in unstructured documents. What does that mean for the sharing of information, not just at the document level, but within the document?

With LLMs and different pieces of information in play, any insights, Kuldeep, into how folks are thinking about a shared responsibility model now that they're incorporating unstructured documents?


Kuldeep Singh: Yeah, I think there are two ways to think about it, in my opinion, Morgan. One is: what is your responsibility? Not yours personally, Morgan, but mine, as an adopter of the shared responsibility model in my organization. We spoke about that a little in our first webinar, where we were really differentiating who is responsible for what.

So that hasn't changed. The vendors, whether it's Azure or AWS or any of the providers of these foundational models, have made certain things easy for us by creating those models, and they also provide guardrails for things like stereotyping, toxicity, and racial bias that can be introduced through documents or whatever data is coming in. So they've given us frameworks we can utilize to accelerate. However, it goes back to the basics:

how is the data sourced? How do you define your data? And I'm talking about both structured and unstructured.

When you talk about tokenization, about taking your documents and breaking them down into feature sets, into the elements that are eventually fed into a model, those basics have not changed. What hasn't changed is the security aspects, the ability to define and discover.

So that's all part of your core governance framework, whether the data is structured or unstructured. Of course you have to morph it, because most people were thinking in terms of structured data when they came up with data governance and the policies related to data. But that governance is necessary for your unstructured assets, too. So you take the same policies, the same cadence, the same governance styles; you just have to expand them. There's a lot more to do now, and I can go into more of the details.

However, I would always advise people: start from the basics. Understand the policies that you're required to adhere to.


Then, as you bring in new ways of working, ask what additional risk is being added to your work. If there are new risks, you define them; if there are risks that need to be edited into your existing policies, you go back and edit your policies. So hopefully I answered your question, Morgan: the basics haven't changed.

The technology is there, and it's continuing to evolve through the cloud service providers. However, the responsibility of making sure that your data is clean, so you're not feeding garbage in, that you know what you're feeding in, and all of the things Adam spoke about, whether it's security aspects, access aspects, and all of that, hasn't changed. That is your responsibility, and you still have to do it.


Morgan Llewellyn: Yeah, that's fantastic. Let me take it back to you, Garry, and talk about some other things we've discussed in the past in other settings. Cadence is a great suggestion, something easy to do as organizations get more sophisticated, more complex, more integrated, et cetera. What other recommendations might you have on how to evaluate your threats? How do you even understand what threats you have? I don't know if this question is too leading, but I'm curious: how do you identify your threats, and, to Kuldeep's earlier point that you don't have to do everything, how do you rank and prioritize where your risks and threats are?


Garry Polmateer: Yeah. One thing Kuldeep said, and I feel like I'm beating the drum for this all the time: back to basics. It's such an overwhelming topic; you think, there are 18,000 kinds of threat that can come into my organization, what's the most effective way to handle them? So I always want to encourage people: this is not an impossible task, and it's always about risk minimization, not elimination. If we're doing anything that involves moving data from point A to point B to accomplish a function, there's going to be risk. So the effort is: what is the smallest amount of effort that can have the biggest impact with the resources we have in hand right now?

And a couple of areas where I've seen a lot of progress made are utterly simple. Here's one story. A customer, a healthcare company, says, "Hey, we can reduce our support costs by using chat." So the powers that be said, all right, let's build chat, and they self-implemented a chat product. Guess what was happening? Their customers were typing tons of PHI, protected health information, social security numbers, doctors' names, into the chat and completely destroying their compliance posture for that type of information.

So instead of implementing a complicated technical solution, the organization said, all right, here's what we're going to do: in order to get to a chat session, you have to check three boxes that say, "I am aware I can't put any of this info in here." And when the chat starts, their retrained customer service agents lead conversations away from where things could get risky, or move them to another channel where people can talk freely and in a secure way. So when I think about places to get started, it's not always "let's do a million-dollar logging enhancement project," which very well could be warranted in certain circumstances, but also going back to: do we have good training and common-sense mechanisms in place to minimize risk? It's very easy to get completely blinded by the technical complexity when there's still more unaddressed risk on the human side every day than on the tech side. They're both important, for sure, but don't forget to consider both sides.
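Garry's fix was deliberately non-technical, but a very small technical screen can complement that kind of training. As a purely illustrative sketch (the patterns, keyword list, and `flag_sensitive` function are invented for this example, not taken from any real product; a real deployment would use a vetted PII/PHI detection library), a chat front end could flag obviously sensitive text before it ever reaches an agent:

```python
import re

# Hypothetical patterns for illustration only; real PHI detection
# needs a much broader, vetted rule set.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHI_KEYWORDS = {"diagnosis", "prescription", "medical record"}

def flag_sensitive(message: str) -> list[str]:
    """Return a list of reasons a chat message looks sensitive."""
    reasons = []
    if SSN_RE.search(message):
        reasons.append("possible SSN")
    lowered = message.lower()
    for keyword in sorted(PHI_KEYWORDS):
        if keyword in lowered:
            reasons.append(f"keyword: {keyword}")
    return reasons

# A flagged message might trigger a warning banner or a handoff
# to a secure channel rather than a hard block.
print(flag_sensitive("My SSN is 123-45-6789"))  # prints: ['possible SSN']
```

The point is the same as Garry's checkboxes: a cheap guardrail at the point of entry, not a million-dollar project.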


Morgan Llewellyn: Let me ask another question. Where does penetration testing, pen testing, come in? Is that only for a super sophisticated, highly complex organization? Is it for that middle, advancing stage? What do you typically recommend for when organizations should start considering pen testing?


Garry Polmateer: I'll say, some of the scariest security vulnerabilities I've come across in my career were the result of a company investing in penetration testing to find them. As with so many things, it's like going to the doctor for your annual checkup: go to the penetration tester to get your annual checkup on whether all the security controls you think you have in place are actually working. And there are a lot of levels of complexity to that. I'm not saying this in a self-serving capacity, just in a fresh-set-of-eyes capacity: working with a neutral third party who's a specialist does make a lot of sense, because they're not overly familiar with what's going on and they can bring an unbiased outside opinion to the conversation. But penetration testing can happen at many levels, and it doesn't always have to be hiring a white-hat hacker to go after your public endpoints. It can also be a simulated attack internally: one user goes in and deletes a record, then reports it to IT and says, "Hey, can you run a log trace on this activity to see exactly what happened?" Whether IT can produce that or not is a simple penetration test of "did the system work the way we thought?" Or run a test restore: I deleted a record, can you restore it? So there are very basic ways to have some checks and balances, all the way up to the more traditional white-hat activities.
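Garry's "delete a record, ask for the log trace, then test the restore" drill can be pictured with a toy example. Everything here is hypothetical: the `AuditedStore` class and record IDs are made up to stand in for whatever CRM or database a team actually runs, and the real drill would be performed against that production system's own audit and recovery tooling.

```python
# A toy store with an audit log and a "trash" area, standing in for a
# real system with audit trails and soft-delete/restore capability.
class AuditedStore:
    def __init__(self):
        self.records = {}
        self.audit_log = []
        self._trash = {}

    def create(self, rid, data):
        self.records[rid] = data
        self.audit_log.append(("create", rid))

    def delete(self, rid):
        self._trash[rid] = self.records.pop(rid)
        self.audit_log.append(("delete", rid))

    def trace(self, rid):
        """Can IT answer 'what happened to this record?'"""
        return [entry for entry in self.audit_log if entry[1] == rid]

    def restore(self, rid):
        self.records[rid] = self._trash.pop(rid)
        self.audit_log.append(("restore", rid))

# The drill: one user deletes a record, then we verify both controls.
store = AuditedStore()
store.create("acct-42", {"name": "Acme"})
store.delete("acct-42")
assert ("delete", "acct-42") in store.trace("acct-42")  # the log caught it
store.restore("acct-42")
assert "acct-42" in store.records                       # the restore worked
```

If either assertion fails in the real-world equivalent, you've just run a cheap, useful penetration test without touching a public endpoint.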


Morgan Llewellyn: Yeah, that is super interesting and a nice perspective to put on it, because I think commonly we think of pen testing as "I'm going to try to hit an API endpoint and see if I can get in." But to your point, there are other types. Even bringing it back to data governance: is your data governance policy being enforced? Are you able to look at CRUD operations and see whether something that was deleted shows up in the logs? I think that's a really good callout, and it shows that it doesn't have to be this massive, scary thing; it can be more structured and pointed toward where the organization needs to go. Adam, I know you've been sitting there being thoughtful. If we can bring it back to integrations: what do you see organizations doing as they're moving past some of these early implementations? Garry, you made the joke that if your policy was made by your founders, it's probably time to revisit it. For a lot of tech companies and organizations, the same could be said for integrations: if your integration was first set up by your founding team, maybe it's time to reevaluate it. What do you see organizations doing as they take a look at those early integrations? Where do you see low-hanging fruit to improve observability, monitoring, even documentation?


Adam Franklin: Yeah, I'd like to continue with this idea of going back to the basics. I don't know if maybe it's just my fundamental intellectual limits, but I love the basics, and I think that's where you start. The whole principle of least privilege is a great starting point, and it touches on what Garry was mentioning around pen testing: can you actually verify that people can't do the things they're not supposed to do, and can you make sure they can do the things they need to do? That matters for data governance too. If I can't see certain data, I might be creating a lot of duplicate data that starts to degrade data quality. As it relates to observability, within Salesforce it's possible to make it a point of policy to introduce some basic practices around ensuring there's slightly more robust logging in place, even if you're not ready to start buying the big enterprise tools.

There are well-known open source tools that work very well and will give the organization significantly greater observability. Another valuable area, particularly if you're thinking about identity and access management and who's getting access to what, is implementing a policy of paying closer attention to the OAuth tokens that are constantly being granted, just to make sure grants are going to the systems and individuals you expect. A lot of this is really just verification at the end of the day: have we intentionally designed our system? Have we identified what we think should happen? And is it actually happening, or are we seeing a lot of outliers? Those are all relatively low-hanging, easily accessible things that any organization could start to adopt.
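Adam's OAuth-grant review boils down to comparing what was actually granted against what you intended to grant. A minimal sketch, assuming a hypothetical token inventory already exported from your identity provider's admin tooling (the app names, users, and `find_outliers` helper are all invented for illustration):

```python
# Hypothetical export of OAuth grants; in practice this would come from
# your identity provider's or platform's connected-app reporting.
granted_tokens = [
    {"app": "reporting-tool", "user": "alice@example.com"},
    {"app": "unknown-sync-bot", "user": "bob@example.com"},
]

# Apps the organization has intentionally approved.
expected_apps = {"reporting-tool", "erp-connector"}

def find_outliers(tokens, allowlist):
    """Return grants made to apps nobody intentionally approved."""
    return [t for t in tokens if t["app"] not in allowlist]

for grant in find_outliers(granted_tokens, expected_apps):
    # Each outlier is a prompt for a human review, not an automatic revoke.
    print(f"review: {grant['app']} granted to {grant['user']}")
```

Run on a cadence, even a simple diff like this surfaces the "outliers" Adam mentions before they accumulate.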


Morgan Llewellyn: So I know we're coming up on time here, and we want to leave some time for questions. Does anyone have anything we haven't talked about in terms of practical triggers? If you're at a certain point in your organization, maybe you're starting to go global or international, are there other triggers folks should be watching for that signal it's probably time to reevaluate your security and trust models? And any other best practices or low-hanging-fruit opportunities for organizations that are starting to hit those marks? I'll open it up to the group. Maybe I'll pick on you, Garry, because I see you nodding along.


Garry Polmateer: Yeah, just a really easy one. We all just finished our 2025 planning, and we're starting to execute our growth plans for the year. A simple question: are you looking to get into any new industries, verticals, or geographies that could require your organization to meet new security and compliance standards? If there's even a "maybe in 2026 I'm going to be international and working in the EU," now's a great time to understand what you're signing up for to do business in that part of the world. So there's a quick one.


Kuldeep Singh: And Morgan, if I can jump in and add a couple of thoughts here: many of the organizations that I deal with and work with, unless they're global players who've been doing AI for a long time, are in the early stages of thinking about how to do this responsibly.


And to them I say: if you're in the early stages, or, like I mentioned earlier, you're reinventing yourself, there are frameworks out there which are not necessarily certifiable. Garry, you had mentioned NIST in a past conversation. If you look at the NIST AI Risk Management Framework, it provides a really clean, nice guideline on how to take the policies and systems you need to put in place and tie them back to the value your organization is adding to humanity, to the markets, to the areas you work in, connecting all of that to your organization's brand, products, use cases, and the business value you take into the market.

Start there. Many organizations, especially the very mature ones, are used to ISO standards. If you want to start thinking, "hey, we're going to be certified in 24 months," then start looking at ISO. There are many standards; the AI-specific one we're looking at is ISO/IEC 42001, which will allow you to certify and stand out in the market as a leader. That's one thing. The second thing I would say is, you don't have to do everything by yourself. Technology is evolving so fast. Look at your data value chain: if data were your business, how would you look at the value stream of data, from where it was created to where it was retired and everything in between? Start mapping it to the technologies that are available today and try to make them native, so that you're not operating in a disconnected manner, living on islands, trying to do data governance here and engineering there. Today's technologies and tools let those functions work hand in hand, with a good handshake, native to what you're doing, so they're not thought of separately. So those are two things I advise people on, and I adopt them myself.


Morgan Llewellyn: Adam, you want to take us home?


Adam Franklin: Sure. I've actually been thinking a lot lately about, you all remember, the CrowdStrike outage earlier this year and the impacts that had. What we're seeing now is some response to that, with additional legislation being proposed or adopted. And going back to some of what Garry was talking about earlier, it's a good opportunity to start taking a close look at your incident response and recovery plans.


Both from a business continuity perspective and from a legal compliance perspective. It's always nice low-hanging fruit to say, "I'm not going to get sued this year." So I think that's an area where, if companies aren't already asking that question and making sure they're covered, it's a great time of year to sit down, take a few hours, and think about it.


Morgan Llewellyn: I think that's an excellent suggestion. And as we close, I'll throw one additional thing into the pot. We talked about legislation changes, and I think certifications are a great idea. One other thing, on this cadence point: pay attention to what your big platform providers are doing. There's so much innovation happening so quickly that sometimes it's necessary to do a little homework to understand the new capabilities on your platform, in terms of integration, monitoring, data security, even IAM. So pay attention to what your big platforms are doing, and as you look at that cadence of reevaluating external and internal needs, also take a look at how your technology might be forcing change. And with that, Kalia, I'll throw it back to you. I want to thank everyone; this has been a lovely session. I learned a ton, and I hope others did as well. Kalia, why don't you go ahead and wrap us up?


Kalia Garrido: Yeah, we don't have any questions coming through in the chat, but I totally echo your sentiments, Morgan. This was a great session. It's a wide world out there, and there's a lot of new stuff to learn, so we appreciate you all coming together to keep us abreast of what's happening. Like I said in the beginning, if you want to join these folks live in person, we'll be in Pittsburgh for the Innovation Summit 2025, happening at the end of March. I've included some links in the chat, but you can always reach us at HIKE2.com, go to our Events tab, and see all the cool things we're up to. So Morgan, Kuldeep, Garry, and Adam, thank you so much for sharing your expertise on version 2 of this series. We've got one more to go, and we're doing it in person, so we hope to see you there. Thanks, everybody.