Video: Data and AI Governance Framework | November 14, 2024 | HIKE2

As artificial intelligence continues to evolve at an unprecedented pace, organizations are grappling with how to implement effective governance frameworks that ensure both compliance and innovation. How can businesses balance risk management with the speed of AI-driven transformation? What role does responsible AI play in shaping competitive advantage? In this Data & AI Governance Framework webinar, industry experts Kuldeep Singh (Chief Data Officer) and Morgan Llewellyn (AI Practice Principal) dive into these critical questions. They explore why AI governance is more important than ever, discuss best practices for integrating governance into business strategy, and share real-world insights on navigating compliance, ethics, and ownership in the AI era. If your organization is adopting AI, or even if you're just beginning to think about it, this webinar is packed with actionable insights and expert perspectives to help guide your governance strategy. Watch the full discussion to learn how AI governance can drive innovation while ensuring trust, transparency, and accountability.

Transcript:

Kalia Garrido: I would like to officially welcome everybody to our event. Today we are going to be talking about data and AI governance frameworks. My name is Kalia Garrido, and I head up marketing and events here at Great Data Minds, which is a HIKE2 company, if you haven't already heard of us. Great Data Minds is a collective of passionate data activists, and we are on a mission to modernize the world of data. We do this in a couple of different ways. The first is our services arm, where we do our strategic planning, education, and the deployment of critical data projects; that's at HIKE2. And then there's the data and analytics community content and conversation, just like what we're here to do today. You can find more about what we're up to at HIKE2.com/events.
You can also find us listed on Eventbrite, and all of our sessions are recorded and then posted back out on YouTube, so you can certainly follow us there. A little bit of housekeeping for today's session. This is a webinar, of course, so your cameras and microphones are off, but we highly encourage collaboration and engagement via the chat, the Q&A, or in a little bit of time that we'll reserve at the end of this session, if you'd like to ask some questions of Morgan and Kuldeep. And like I said, we are recording, so you can certainly find us later on YouTube as well. So, some introductions of my esteemed colleagues. I have Morgan Llewellyn joining us today. He holds a PhD and is our AI Practice Principal, with years of experience and expertise in AI strategy and innovation. Here at HIKE2, Morgan leads our AI strategy and helps the company stay at the forefront of AI technology. He has implemented cutting-edge AI solutions for government agencies, Fortune 500 companies, and a whole bunch of legal entities all the way here and back. So, Morgan, thank you for joining us. Morgan Llewellyn: Yeah, it's a pleasure to be here. Thank you, Kalia. Kalia Garrido: Awesome, all right. And we also have, of course, the one and only Kuldeep Singh. He is our Chief Data Officer here at HIKE2, with years of experience in data and digital strategy across sectors like finance, retail, and philanthropy. At HIKE2 he drives business value by using AI, automation, and data governance to improve decision making and operational efficiencies. Kuldeep, thank you so much for joining us and sharing your expertise, and I will pass the floor to you. Kuldeep Singh: Yeah, sure. Thank you so much, Kalia, for the introduction and for organizing. And I'm really excited to be here with Morgan. I think this is the first one that Morgan and I are actually together on, and I'm looking forward to this one and many more to come on this specific topic and others. Right?
So, this topic that we're going to cover today, data and AI governance: I've been passionate about it, I've been intrigued by it, for decades now, especially the data governance piece. And as Morgan and I were chatting, AI governance in the past decade has continued to evolve, and it is evolving even now, and so is data governance, right? So I'm really passionate about this topic, very heavily interested in the research and in seeing what some of the changes across the globe are. So today we're going to cover several facets of data and AI governance. We may not be able to cover everything in the time we have, but we're going to talk about the why, right? Why are we talking about it today, when this has continued to exist for decades? We'll cover some of the principles, I will talk about the frameworks, and we'll also talk a little bit about the use cases and what we are seeing in the market, with our clients and the companies that we work for adopting some of these frameworks and practices. And then finally we will cover: how do you enable this, not just for a project or a POC, but for the entire organization, really the entire enterprise? So we'll continue to talk about other things, such as the foundational elements, as I mentioned: the principles, the foundational elements of achieving something that people haven't typically thought about achieving through governance, which is the strategic goals. Right? Governance was always seen as a practice of compliance, and it still is: risk management, enterprise risk, etc. But how do you achieve your strategic goal of growth through artificial intelligence governance and data governance? So we'll talk a little bit about that.
So, as an introduction to the topic: we live in an era where data is often termed the new oil, and effectively managing and governing this asset is really no longer optional. We can now confidently say that data is the fuel that will drive artificial intelligence, and artificial intelligence is the catalyst that will drive a company's growth. Most organizations that are driving growth have already embraced artificial intelligence, and some organizations might be more in the early stages of it. But before jumping deep into the conversation, I also wanted to say that the firms that collect, store, manage, clean, and use trustworthy data to fuel their AI algorithms will have a unique perspective on their markets, and they'll have insights that nobody else has access to. But as data and AI drive growth, the key thing that we wanted to stress today is that all of that has to be done responsibly. Responsible AI with trusted data will really drive the ultimate competitive advantage for the firms that are in the midst of adopting this. So today's competitive landscape demands not just data and AI, but doing it responsibly. So I thought, before we get started, Morgan: what's the definition of responsible AI? I'll just cover that, and then we'll get into our topics here. For the audience, responsible AI is the practice of developing, using, and deploying artificial intelligence systems in a way that is ethical, transparent, and accountable. So that is very necessary.
So the frameworks that we're going to talk about today are going to be about data governance and responsible AI. Many organizations already have data governance practices and AI governance practices, but they are also at a crossroads right now because of all of the advances that we have seen with AI, moving into Gen AI in the past few years, and everything that's to come in the next few years. So this is a hot, emerging topic. There's a lot of information coming out of Europe and the United States, and we're going to continue to see evolution in this space, which is very exciting. So, Morgan, I have a quick question. As organizations are thinking about data and AI governance, why is this re-emerging as a hot topic right now? Why should organizations think about it? Morgan Llewellyn: That's a good question. So if we look at how organizations have traditionally or historically leveraged data governance in general, it's: how do you combine information to get new insights, get new value for the organization, and be able to manage the risk and the sharing of that information across the organization? So it's really this focus on being able to facilitate the transfer of information between business stakeholders, etc. I think, as we look at what's happening in generative AI, and just in technology in general, the speed of development, the speed of advancement, is so fast that good governance allows you to innovate quicker, faster than what you would be able to do otherwise if you didn't have a good governance policy. Let me just give an example. Suppose you have an organization that doesn't understand what data they have and how they can use it.
Then when some of these Gen AI applications come in, sometimes the easy answer is no, you can't use our data to go do this, because there's not an understanding of the security risk. And what we see is organizations that do have good governance, that do understand what information is sensitive, what information is permissible to be used in Gen AI, what the rules are around how Gen AI is going to be leveraged within the organization, what's allowed, what's disallowed: organizations that have that understanding, that have the framework in place, are able to move a whole lot faster. And so they're able to innovate a whole lot faster, and that separates them from the other groups. So I think that's why we're starting to see a renewed focus, a renewed energy, in the governance space: because it's not just about risk. It's about speed at this point, and an understanding that a good risk policy actually allows you to move faster. It's not about saying no quicker. It's actually about being able to say yes quicker. And so I think that's what we're seeing. Kuldeep Singh: Yeah, that's a great perspective, Morgan. And another thing that we are seeing is the emergence of new regulation. Right? I mentioned it earlier: I track Fairly AI, a website that tracks all sorts of regulations, governance rules, and compliance being introduced across the globe. Last I checked, a few weeks back, there was a long list of new regulations, and when I checked again last night, a number of them had actually passed and are now policy with the governments of certain countries. So how is compliance driving some of the innovation, or maybe even the need for these frameworks and principles and their evolution? Morgan Llewellyn: So I think the speed at which we're seeing changes in law around the usage of data and AI, right?
But really around the usage of data within AI, and what AI can and can't do: I think what it's driving is really a flexibility within governance policy itself. To your point, this is an evolving and really an expanding market when it comes to governance over AI, and what organizations need is a flexible approach to governance, to be able to bend and react to the changing policy that's out there. So I think that's one thing we're seeing. But one talking point I always like to stress when it comes to governance in AI, or even governance in data: this is something I actually welcome in the space. I do welcome additional governance, I do welcome additional policy, because I think it takes the uncertainty away from the conversation. It takes that bullet out of the chamber. If there's a clear list of policies or rules that AI has to play by, it makes it a whole lot easier as you position what your product does against that policy. If it's unknown what you're going to have to do, I think that's where you get into this nebulous world of unknown fears. So I really welcome policy in this space. But because the policy is ever evolving, it necessitates having a flexible framework for how you bend and adjust your governance policy within your organization to changes in the law. Kuldeep Singh: Yeah, very well said. And towards the end of this conversation, one of the sections actually focuses on the change management aspect that companies have to adhere to, or start working towards, including the enablement function. Because, like you mentioned, it's an ever-evolving, ever-changing ecosystem, and it's not a one-and-done thing where you set something up and then it can be forgotten about. So that's great.
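Editor's note: to make the speed-through-governance point above concrete, here is a minimal, hypothetical sketch (not from the webinar; the dataset names, sensitivity levels, and approval flag are illustrative assumptions) of the kind of catalog check that lets an organization answer a Gen AI data request quickly:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sensitivity taxonomy; real taxonomies vary by organization.
class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass
class Dataset:
    name: str
    owner: str                 # the data steward accountable for this asset
    sensitivity: Sensitivity
    genai_approved: bool       # explicitly cleared for generative AI use

def may_use_in_genai(ds: Dataset,
                     max_level: Sensitivity = Sensitivity.INTERNAL) -> bool:
    # A request can be answered immediately when the catalog already records
    # a sensitivity level and an explicit Gen AI approval flag per dataset.
    return ds.genai_approved and ds.sensitivity.value <= max_level.value

crm_notes = Dataset("crm_notes", "sales-ops", Sensitivity.CONFIDENTIAL, False)
product_docs = Dataset("product_docs", "marketing", Sensitivity.PUBLIC, True)

print(may_use_in_genai(crm_notes))     # False: not cleared, and too sensitive
print(may_use_in_genai(product_docs))  # True: cleared and public
```

With tags like these already maintained in a catalog, "can this data go into a Gen AI application?" becomes a lookup rather than a drawn-out review, which is the speed advantage Morgan describes.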
So I want to bring us into the framework talk now. When you think about data governance, the audience might be familiar with the DAMA principles. There's DCAM now, and the Data Governance Institute has its own framework. Then you move into the artificial intelligence realm: NIST has a framework, there's the Artificial Intelligence Governance Institute in Europe that's responding to the EU AI Act, and there's responsible AI. There are all these organizations publishing their frameworks. But frameworks being frameworks, one thing that I've noticed, Morgan, is that when you actually take one of these frameworks into an organization to implement it, that's where the experience comes in. And with our experience over the last couple of decades, really implementing and also dealing with the failures and the successes of these programs, one thing I would bring up (and this is not a sales conversation) is that HIKE2 has a data and AI governance framework that is a culmination of all of these different principles that already exist from the organizations I just spoke about, with our experience infused into it. Moving a little bit deeper, and then, Morgan, I have a quick question for you on this. When we start thinking about frameworks: for organizations that are considering this, whether they already have a data governance framework or an AI framework that is either working or in early stages, if you're struggling with implementing something, what I would do as a practitioner is step back and think: what are some of the core principles that I'm trying to facilitate or govern? Those principles need to be identified. And when I think about a good framework,
and what are some of the principles that define that framework, I think about it in a few different buckets, starting with the human aspects: the people, roles, ownership, and stewardship. That's followed by more of a defensive approach, which is who has access to data, information security, compliance, and regulations. Then you pivot to more of a trust relationship with your data, which is where we're talking about data quality, lineage, and provenance, eventually heading into the infrastructure and scalability, more on the automation side that artificial intelligence is affording us. But with artificial intelligence, if you think about a principle that a lot of people talk about now, that is very well written about, and that you can go to conferences on just that specific topic, it is ethics and bias, which really define responsible AI. So the principles of ethics and bias are critical, which is stretching that definition of governance that some of us in the room might be familiar with. Data governance stretched us towards privacy, and it stretches towards information security and data classification. And AI is really stretching us into the realm of ethics and bias, responsible AI, and explainability: more responsibility around your decision making, so it doesn't become a black box. So that brings me to a question, Morgan, that I think you can help me with, around Gen AI, because Gen AI is now stretching this even more. AI has been around for years; people say it was there decades back. However, with Gen AI, at least the ones that we are touching as consumers, as practitioners: how is it really stretching these principles? Morgan Llewellyn: Yeah, that's an interesting question.
So, to answer that, let me relate it to organizations and how we typically think about data governance. In a complex organization, you've got a couple of different kinds of governance models: you've got a federated model, and you have a centralized model. Most complex organizations are going to be more federated, or that kind of hybrid approach between a federated and a centralized organization. Let's just suppose it's federated. So you've got this federated organization where different departments own a technology: they own the CRM, or the SAPs, or whatever that technology might be. So different pieces of the organization are having data come in, and those are essentially the data stewards. And those data stewards are saying, okay, here's what information we're going to share with the broader organization; maybe we're going to do some tagging of what's sensitive, what can be shared, the rules around sharing. And then, on the other side of governance, you have the requesters. Okay, well, Kuldeep has an interesting piece of information, and if I combine it with, say, my HR information, then I'm going to have some new insight. And so that's typically how we've thought about this governance model: you've got people who are the data stewards, who more or less, quote unquote, own the data, and then you've got these requesters, and you put in place a framework to allow the sharing of information between them. Well, now, what Gen AI does is: it still needs all that, right? I still need to request data from your organization to put into Gen AI to get some new insight. But what's really changed, and what's a little bit different, is that now I'm creating a lot more data than maybe what I traditionally did with just an ML model.
Maybe an ML model produced some scores, and if we're being honest, in and of themselves the scores aren't that interesting as new data, other than how you're implementing them in your business, making decisions off of them, etc. Rarely do you go and leverage those scores in different places within the organization. But with Gen AI, now the requester is taking information from Kuldeep, and I'm going to go and generate a lot more information. And who's the owner of that new information? That's a bit of a nuance with Gen AI. Just like we have to have an owner over the source of truth created by your CRM, we now need an owner over what's being generated, what's being created, by the Gen AI. Who's the owner of that information? And how do you even share, or make that shareable, across the organization as well, just like we did before? It's almost like ETL, where we're creating this whole new cycle of data that has to be managed, whereas previously we didn't. So that's what we're seeing with Gen AI. It's really just adding a layer of complexity at the end of what we thought was traditionally the process; we have this whole new process being created. Kuldeep Singh: Yeah, I think you bring up a really interesting concept: the owner. Who's the owner of the artifacts, of the outputs that are coming out of the Gen AI? We've had chief data officers; we have chief data and analytics officers; now we are seeing AI officers. Is it going to be the chief privacy officer? And I'm not talking about a use case here; I'm talking about the enterprise, a big JP Morgan. How are big organizations thinking about it, in your experience?
Morgan Llewellyn: Yeah, I think that's a good question. In smaller organizations, having it centralized makes sense, because it's manageable. But in these larger organizations, where you do have ownership of technology in different departments or different silos, I think it gets a little bit harder to have a single owner. And I think there's a little bit of nuance, and the world is still figuring this out: there's a little bit of a source-of-truth problem here, too. If I'm taking information from you, from, say, the CRM, and I'm passing it through Gen AI, some additional information is being added to that, but it also contains some of your information. How do we untangle the two? Do you still own a piece of that output, or do I own the output because I created it? I think there's some nuance there, and it's a little bit unknown. Really, this comes back to having a policy being the best approach, a policy to be able to facilitate this. Whether the answer is that Morgan owns it because Morgan created it, or whether it's a different answer, say that one part of the organization owns the Gen AI LLM APIs and so they're the owners (I think that's a little bit difficult, but that could be an answer as well), having a policy over who's the owner is the most important part. And over what can be put into the model, right? Just like we had the conversation around traditional AI, what can and can't you put into a machine learning model, you have to have that same policy on what you can put into the Gen AI model. And then for the output, just having who is the owner of the output: is it the person who basically requested it for their business purpose? Is it someone else in the organization?
As long as you have those things figured out, I think you're at least going in the right direction. Kuldeep Singh: Yeah, I think this is a great call-out: that accountability. I didn't mention this explicitly when I spoke about the principles, but there's the whole concept of being transparent, making sure that there is integrity, making sure that whatever is coming out on the other end is explainable. But if you don't have that accountability, if you don't have that person who owns that decision, then you can talk about transparency forever; when things go wrong, who's answerable? Some of those things have to be thought about. So as we wrap this principles section, I want to jump into the framework, because principles define your framework. But people have to think about that accountability, being transparent, being able to explain. That's critical. One last thing. I was at a conference, Morgan, just a few weeks back, and I was talking to a practitioner from a large investment bank. And they were telling me: we want to interact with these LLMs that are available through tech companies. They build these powerful models, and now I am interacting with them. How do I really control them, how do I get good governance, how do I trust them? Because I'm sharing my information with a model that was built with generic data, and now I have all of this information that is very proprietary to me. So how do I actually build governance on things that I don't really control, which are these models from Microsoft, Google, Meta, any of the ones that are out there?
Are there any thoughts around that? And then we'll move into the framework. Morgan Llewellyn: Yeah, I think there are different ways you can approach that question. The first is: how do you think about governance in terms of model integrity and model accuracy? How do you govern the call to the LLM? I think that's one approach. The second one is from a security aspect: what information do you provide to the model? How do you ensure that your information is not being retained, if that's a requirement? And also, I think there's a lot of sensitivity around fine-tuning of LLMs as well. If you're creating an LLM, is that appropriate? Can that be done? So I think there are a few different ways to slice and dice that question. What I would say in general (I don't want to go too far down the rabbit hole) is that when it comes to LLMs, it is different from our traditional machine learning, statistical modeling approach, because the model is basically already trained, and we haven't validated the use case that we're actually intending to use it for. So putting in place a good architecture that limits how that LLM is being called, specific to the use case it's intended to be used for, is really important: basically putting guardrails around the model so that you're not running into issues of creating nonsensical answers, which, if you logged into LinkedIn last year, you would have seen lots of fun examples of. So, well-architected calls to the LLM that in some ways mimic the testing process done in traditional ML, that's one thing. And then just awareness around your data: what is your use case?
What data can you put into the LLM? It's always a good policy to think about data retention and not using your data for additional model training. And then, depending on your organization (I do see a lot of interest out there in fine-tuning LLMs), before you do that, just pause and think: can I use information from one client across all my clients? Think about how that information is being used, and how the output of those fine-tuned models would be used. In some cases it might be okay; in other cases I'd steer clear of it. Kuldeep Singh: Yeah, thank you for that. There's a lot to think about, a lot to ponder, a lot to unpack for organizations that are implementing. But the good news is that there are well-defined theories out there, and it has to be specific to your organization and your organization's needs. So, thinking about your organization's needs, that brings me to the conversation about the framework itself. As we were building the framework that we are sharing with some of our clients, we thought: let's simplify the framework, let's not put too much jargon into it. And when you think about a simple framework, it brings you back to the same questions that you've been asking anyway as you create your business case: Why are we doing it? What are we doing? How are we doing it? And where should we do it? The why question is all about your strategic alignment and vision, which was missing, especially in the data governance realm, because governance was seen as an enforcing function, very tied to regulations and compliance. And that's still very critical; I don't want to come across as beating down on regulations and compliance. That's where governance grew, and it did well for many companies, many industries actually, right?
However, there are questions about all of the investment that we're putting into governance: how does it really align with our strategy and our vision, and how are we going to measure the things that lead up to the return on investment, whether it's performance, productivity, or risk management (KPIs that you might already have), or meeting compliance? The next part, the what, is where you actually do things: all of the principles I spoke about, ownership, lineage, quality, access controls, privacy, security, to create reliable and organized data ecosystems. Then you move into the next phase: how are you going to do it? We'll talk about change management, as I mentioned earlier, but this stage really centers around the how. It centers around change management and adoption: fostering a data-centric culture through targeted training, policy development, bringing in automation, bringing in the right tools and technology, and making sure that the operating model is very well understood. And from there you move to the where. Where are you going to apply all of that? So you've described your why, you've described what you are going to do and how you are going to enable the entire organization, but where do you do it? That's where organizations need to really focus on their information management, or information governance: where data is being created, where information is being used, who's touching the data, and how it flows through the organization itself, even before it hits your DataOps, MLOps, AIOps, and eventually some kind of insight that comes out on the other side, which then flows back into your organization. So that needs to be well understood: your information management practices,
including the underlying repositories: the application repositories, your cloud data stores, your EDWs, your data lakes, your data lakehouses, all of that. And then the next bit I want to talk about: all of what I've spoken about thus far, people in the room who have been dealing with data governance will very clearly recognize as the why, what, how, and where. But as Morgan mentioned, as we move into more of the AI realm, especially the Gen AI realm, we start talking about responsible AI. Morgan, you alluded to some of the foundational pillars that need to be thought about, but can you again articulate: what are some of the foundational pillars that organizations need to think about as they take their data governance and extend it, bringing in more AI governance and calling it data and AI governance, versus separate data governance and AI governance? Morgan Llewellyn: Yeah, I think with AI, one of the things that we're always worried about is bias. So when AI is being used to help with decisions, understanding whether these models are biased is really important. I think that's one thing to consider. That said, I think the right place to start with generative AI, or AI, is really no different from where you would start with just data governance: being able to understand what data you have, what sensitive data you have, who can use and who can see that information, and how that information can be used in decisions, whether that's through dashboards or through ML; that process looks very similar. I think where AI starts to differentiate a little bit from traditional data governance is around the automation.
and, what I would say, the ability to have, at some level, verifiable bias in the output, consistently verifiable bias. So that’s one of the things. If you’re looking at implementing governance around AI, the thing I would stress is: hey, piggyback on what you’re already doing on data governance. And if you’re doing nothing on data governance, think about who owns the data, what can be shared, and how you make a request to that organization. If you already have that in place, then really what you need to do is start thinking about bias, and what bias means in terms of race, gender, age, those types of things, and location in some places as well. The other thing to think about is consistency of the output. So not just bias at a single point in time, but ensuring that your results are consistent over time. That’s another type of bias that, with gen AI and some of its creativity, you really have to think about. So those are the things I’d really think about when it comes to biases: make sure the output is consistent, and make sure you’re not hitting any of those guardrails. The other thing I would consider is that whatever your policy is on being able to delete or retain information, that should be implemented on whatever’s being generated by the gen AI, too. It has to follow through, just like the information you’re collecting about an individual up front in, say, your CRM. Any information you’re generating about that person through gen AI should be under the same policy. Kuldeep Singh: Thank you for that. You know, one thing as I’m listening, Morgan: some of the core principles that were always there continue to exist, right? They’re not going away. A question that I’ve been asked multiple times, and I’d be interested in hearing your perspective on, concerns the highly regulated industries, right?
When we go into financial services or healthcare, and you work very heavily with legal tech, these are highly regulated, highly compliant industries; they have to adhere to all of these compliance requirements. So they already have their risk frameworks, right? Some organizations may not, but generally speaking, if you’re SEC-facing, or you’re reporting up to some compliance body, you have your enterprise risk function, and you have the KPIs that you monitor the health of your organization with. And now you’re pulling in all of this information through gen AI and all of the advancements that are happening. Are you thinking that you need to continue to leverage what is existing, and, as a result of all of the advancements, make small adjustments to the frameworks, the policies, and the governance vehicles that are already available? Morgan Llewellyn: That’s what I would encourage people to do, yes, absolutely. It’s not a throw-everything-out exercise. It’s: how do you iterate and improve, how do you isolate the risk of gen AI, and how do you mitigate that risk? I think that’s really what organizations are trying to understand today. And if we take a slightly positive approach, a number of those risks can be mitigated through just good data governance, or what we would historically have called good data governance, right? Access requests, sharing of information, good metadata tagging, et cetera. Where the additional risk comes in is making sure that you have security on your endpoints, as well as who owns the output coming from that information and how it is tied to your data policy. Kuldeep Singh: No, that’s great. And the last question on this segment, which is around the principles and the frameworks:
one thing that I’m hearing, especially coming out of SaaS organizations, or tech organizations really, is that as companies are thinking about implementing an AI governance framework, they don’t have to start things from scratch, right? There is governance as a service. There are Shapley additive explanations (SHAP), right? There’s LIME, which is local interpretable model-agnostic explanations. All of those concepts already exist, and they’ve continued to exist, right? The explainability of your model becomes important because, at the end of the day, the decisions that you’re making in organizations have to be explained. Somebody has to explain them. So that is a very scientific need. It’s not optional. It’s not something that you’re going to wing. So as organizations are starting to put these frameworks in place, are you thinking that there’s an industry our attendees can lean into, where governance as a service might become an offering that they can just use and implement for their specific needs? Is the industry heading in that direction? Morgan Llewellyn: I think that’s a good question. So here’s a thought that’s always rattling around in the back of my head. Kuldeep Singh: Hmm. Morgan Llewellyn: I worked in big tech for a while. Big tech is a little bit different from a financial organization, a healthcare organization, a government, or a law firm, in the sense that the people who own the data are not as varied, right? At some level you’ve got your product or your engineering team. They are an IT company where the collection of data is part of the product, right? And so it’s a little bit more centralized. And so they can facilitate information sharing, given that they are teams of software engineers who think only in terms of automation.
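As an editorial aside: the SHAP technique Kuldeep mentions can be made concrete with a minimal, self-contained sketch. It computes exact Shapley attributions for a tiny model by enumerating feature subsets (real SHAP libraries approximate this efficiently); the toy credit-scoring model and its weights are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model with a few features: for each
    feature, average its marginal contribution over every subset of the
    other features, with 'absent' features set to baseline values.
    This is the idea behind SHAP; libraries approximate it at scale."""
    n = len(x)
    phi = [0.0] * n

    def value(subset):
        z = [x[j] if j in subset else baseline[j] for j in range(n)]
        return model(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical linear scoring model; for a linear model the Shapley
# attribution of each feature is just weight * (x - baseline).
model = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phi = shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0])
```

A useful sanity check on any Shapley implementation is the efficiency property: the attributions sum to the difference between the model's output at `x` and at the baseline.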
They’re just better equipped, from the way that they’re structured, from the way that the data comes into the organization, as well as the skills of the people, to have tech-forward solutions. If you pivot to a healthcare organization, a financial services firm, or a law firm, while they’re capable of taking advantage of some of these tools, there is a natural human element that just exists within these organizations. They are, at some level, a little bit more complex. They’re a little bit more relationship-based. They are a little bit less, for lack of a better term, technically savvy. And so there is a difference. When I think about successful organizations that are more complex like this, there tends to be a little bit more of a human element to it. And I think that’s important to call out. Yes, coming out of tech, there are a lot of really great tools out there. However, when you try to map those tools onto these more complex organizations, you do need some sort of customization, some sort of fine-tuning. And maybe that’s where your question is going: I think that’s where there is a bit of an opportunity for some sort of service-based approach. How do you take a tech-forward tool, map it to an organization, and customize it for that organization and for those people? Kuldeep Singh: Yeah, very well said. And just a quick example: I used to work for a wealth management firm that was trying to really bring in gen AI. What they really wanted to do was build a much more robust relationship between the client and the financial advisor, right?
And there were a lot of questions being asked of the financial advisors, and the firm was able to capture some of these questions and interactions and curate what the firm actually believed in. These curated interactions were constantly evolving, because the interactions between clients and financial advisors were happening across a large base of advisors and millions of clients, actually. So somebody was actually curating these interactions and making sure they were relevant, and they were then fed into the LLMs to produce some automation and automated responses, right? But the human in the loop is actually becoming a thing. HITL, I think, is what it’s called. And how do you make the lives of humans easier as they are asked to govern more, and things like that? Great. I think this is where I would pause for a second. I have a couple more things that I want to cover, Kalia, but I was wondering if there are any questions that we can cover from the audience, because I don’t want to miss out on that opportunity. Kalia Garrido: I don’t see any questions coming in right now, except for one question about logistics: whether there’ll be any slides to review afterward. But if anybody does have any questions, this is certainly the time to go ahead and ask them, and I’ll just keep an eye on that chat, Kuldeep, as you continue on. Kuldeep Singh: Yep. And my response to that, will we share slides? Yes, we have very specific slides for our frameworks and the principles that we believe in, and we’re happy to share some of that information. But also, if you are interested in going to any level of depth, because we are just touching the surface here, we’re happy to engage after this conversation as well. So thank you for that, Gil.
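The human-in-the-loop pattern Kuldeep describes at the wealth management firm can be sketched roughly as follows. The function names and the curated Q&A are hypothetical stand-ins, and the `llm_draft` callable stands in for a real model call: vetted answers are served directly, while anything new is drafted and parked for human curation rather than sent to the client.

```python
def answer_from_curated(question, curated_qa, llm_draft, review_queue):
    """HITL gate: if a firm-vetted answer exists for this question,
    serve it; otherwise draft one with the model and queue it for
    human review instead of sending it out."""
    key = question.strip().lower()
    if key in curated_qa:
        return curated_qa[key]       # approved, firm-vetted answer
    draft = llm_draft(question)      # stand-in for an LLM call
    review_queue.append((question, draft))
    return None                      # nothing goes out un-reviewed

curated = {"what is my advisory fee?":
           "Your fee schedule is in section 2 of your agreement."}
queue = []
ans = answer_from_curated("What is my advisory fee?", curated,
                          lambda q: "draft answer", queue)
ans2 = answer_from_curated("Can I day-trade?", curated,
                           lambda q: "draft answer", queue)
```

The design choice here is that the default path is review, not generation: the model only ever widens the curated set through a human, which is what keeps the client-facing surface safe.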
So, Morgan, I took us in the direction of the financial advisor and the wealth management firm, and how they were curating their prompts and their questions, and how there’s a separate department looking at the interactions and making sure that they’re safely curated and no harm is caused to the client, because at the end of the day these financial advisors have to answer to the clients based upon the decisions that they’re making. Are there other such examples that you’ve come across, where clients have been concerned? They say, hey, we want to use LLMs, we see the benefit, we are heading in that direction, but then we worry about all of that. Are there any specific examples that could better describe the fear, maybe? Morgan Llewellyn: Yeah. When I think about LLMs and generative AI and how folks use them, I think all too often we focus on the generative part, right? How do we use an LLM to almost be that last mile? I want to generate something that I’m going to put in front of a customer, or generate something that needs to be legally defensible. When we’re dealing with those use cases, absolutely, we see folks a little bit apprehensive, as they should be, if that’s not well tested, well validated, well vetted. What I encourage folks to do, when they’re thinking about generative AI, is to focus on the initial risk-mitigating use cases. Don’t focus on the most risky use cases where you’re asking it to do that ultimate customer interaction, unless it’s something more or less harmless, like an email response saying, hey, we’ll follow up here shortly.
But you can use LLMs, because LLMs are so great at understanding context and so great at understanding content, earlier in your data stack, to produce better quality data and better insights for your organization. I think those are risk-mitigating approaches: you shift left, get deeper into your data stack, and use it to really address the data quality and data extraction issues that you’ve historically dealt with. So do we run across folks who want to use LLMs for what could be risk-enhancing use cases? Sure. What we tend to do is focus on risk-mitigating use cases initially, and there are just a number of them out there. Kuldeep Singh: Yep, that is well said. The point I wanted to make here, through the examples you provided, Morgan, is this. If you remember, some years back, before we were talking about gen AI openly like we do now, it came out that Target could tell which customer was pregnant based upon their spending habits. It’s as simple an example of responsible AI, and of high risk, I think you mentioned, as you can get. I don’t think the Target innovation team was thinking about the adverse impact of that. They were thinking, hey, we know, and now we can better serve you. So the intent was: we can better serve you with the products that you need during this very important phase of your life. But did they actually ask that segment, those people, those individuals, those customers, whether they wanted to be identified and known by the organization?
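Morgan’s “shift left” suggestion, using an LLM deeper in the data stack for extraction with a validation gate before anything is loaded, might look like the sketch below. A regex stands in for the model call so the example stays self-contained, and the invoice schema is hypothetical; the point is the gate, not the extractor.

```python
import re

def extract_fields(text):
    """Stand-in for an LLM extraction call: pull structured fields out
    of free text. A regex plays the model's role here; in practice this
    is where the LLM sits, shifted left into the data stack rather than
    in front of a customer."""
    m = re.search(r"invoice\s+#?(\d+).*?\$([\d,]+\.\d{2})", text, re.I | re.S)
    if not m:
        return None
    return {"invoice_id": m.group(1),
            "amount": float(m.group(2).replace(",", ""))}

def load_if_valid(text, sink):
    """Validation gate: only well-formed, positive-amount records reach
    the warehouse; everything else would be routed to human review."""
    rec = extract_fields(text)
    if rec and rec["amount"] > 0:
        sink.append(rec)
        return True
    return False

warehouse = []
ok = load_if_valid("Invoice #4821 ... total due $1,250.00", warehouse)
```

Because the model output never reaches a downstream consumer without passing the gate, a bad extraction degrades into a review task instead of a customer-facing error, which is exactly why this class of use case is risk-mitigating.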
So as simple as that example is, you can now take it as a core principle and start working backwards towards your frameworks. And, like you said, what do you want to work on first? How are you going to get better at it? What are some of the risky ones? Eventually, I think the point is: what are the values of your organization? That is the core principle behind these frameworks. What are your values? What is your brand? Because these use cases and these products and services are what people are going to touch, and there is going to be a level of sophistication needed to be able to explain why certain decisions are made. Morgan Llewellyn: Yeah, let me give you a nice little example of how you can use AI, or machine learning and generative AI, to actually reduce risk and enhance your governance. A lot of organizations, again, very complex organizations, have a duty to notify if there’s a data breach, right? However, who you notify in the event of a data breach is a very complicated question, because that information spans a number of unstructured documents, a number of databases, et cetera. So how do you use AI to understand who the key contacts are within an organization? Then, in the event of a data breach, or in the event that you just need to delete data because it has expired, per either a legal requirement or your own firm’s SLAs, you’re able to notify a client that you’re going to do something with their data. That’s an application of AI that we’ve done: identifying who the key contacts are, so that in the event of something, you know who to contact. It’s a very simple, risk-mitigating application, but it also aligns with your existing data governance policy. Right?
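A toy version of the contact-identification idea Morgan describes, building the notify-on-breach list ahead of time from unstructured documents, might look like this. A production system would use NER or LLM extraction plus entity resolution; a regex and the example documents here are just stand-ins to keep the sketch self-contained.

```python
import re

EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def key_contacts(documents):
    """Scan unstructured documents for contact addresses and dedupe
    them, so a notification list exists before an incident happens."""
    contacts = set()
    for doc in documents:
        contacts.update(addr.lower() for addr in EMAIL.findall(doc))
    return sorted(contacts)

docs = [
    "Primary contact: Jane.Doe@example.com (data protection officer)",
    "Escalate to jane.doe@example.com or ops-alerts@example.com.",
]
contacts = key_contacts(docs)
# → ['jane.doe@example.com', 'ops-alerts@example.com']
```

The output feeds the same notification and deletion workflows the existing data governance policy already defines, which is the alignment Morgan is pointing at.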
So how do we use these new tools to provide a better policy? Kuldeep Singh: Yep, thank you for that. So I’m going to move us forward. I know we only have a few minutes left, and then we’ll pause for questions again. So, in summary, what we covered today is: why is this topic relevant? Why is there so much interest now? The interest has been there for years, but it’s elevated now, so we spoke a little bit about that. We gave a few examples of how our attendees can think about these frameworks. We spoke about some core principles. We moved into the frameworks. We tied it to the values of your organization and to really engaging the whole organization. It’s not just the data group, or the AI group, or the gen AI group, or the risk organization. As you’re thinking at the enterprise level, it needs to be thought of top down and bottom up, right? So that’s your board, your executives, your practitioners, your engineers, and your data center or cloud operations. Everybody’s involved in creating this framework. You can’t just do it centrally and then fail to provide perspective on how these frameworks need to be implemented. But that brings me to my next point, which is: how do you enable an organization? When we think about aligning governance with business transformation, a key part is seeing governance not as just another compliance task but, as we’ve been talking about, as a strategic enabler. If you frame it, Morgan, in terms of real business goals, growth, improving customer service, and running operations smoothly, it becomes clear that governance directly supports the bigger picture.
And one thing that I have noticed, because I’ve been doing data governance for a long time, is that governance has a branding issue, right? People think of it as something that they have to adhere to, an enforcer. Whereas I think the message is changing to: no, it’s not just an enforcer; it is critical for the viability of your organization. Good governance of your data and responsible governance of your AI, as I mentioned earlier, will drive your ultimate competitive advantage, especially in the years ahead. So what are some of the things that organizations need to think about? A couple that come to mind, Morgan, are change management, the enabling of the whole organization, and the education, fluency, and literacy around all of that. What have you seen in the clients you work with? How are they thinking about all of that? Morgan Llewellyn: Yeah, a couple of things. When thinking about messaging, I like to focus on sharing of data. Good governance allows you to share necessary information across groups, and it gives you access to, and an understanding of, what data can be shared and how that shared data can be used. So focusing on the sharing aspect, I think, is a really important piece. The second thing is getting started in governance, right? Whether you’re getting started net new, or getting started with gen AI, I think getting started with data governance has a lot of similarities to how you get started with a lakehouse or a traditional data warehouse, et cetera. It is one of those things where you start small and identify where the value is. So coming back to governance: where is the value?
There are probably a couple of risk-mitigating things you have to deal with, such as who to inform, or deleting information that’s being retained for customers who have left, right? So there are pieces like that. But then there are also a few use cases around sharing: if we just share this information with this organization, or this information with this department, here’s an opportunity for us to get better as an organization. And so, rather than trying to govern everything all at once, how do you identify: here are the pockets, here are the sources, here are the systems where we want to start putting some governance in place, and even here are some of the fields that we’re most interested in sharing, because if we can share that information and combine it with another source or another system, we can generate some value. So, starting small. It’s the quintessential approach to anything in IT: start small, identify where the value is, and then build on top of that. Kuldeep Singh: Yeah, very well said. When I think about it, and I recently spoke about this, the foundation of good responsible AI really starts with the foundation models and the governance around them, right? And then data is a big deal: how do you govern it? So the whole governance piece that we just spoke about. Then there’s maybe a human component. No, not maybe: there is a human component, which you brought up, Morgan. And the last bit is the use case itself, right? You have to drive this forward, because AI has to deliver value. Value can be in your back office, or it can be in the products and services you’re delivering to the market. And if you’re thinking about implementing AI, some use cases are very large, right?
Like the example I was talking about: FAs, millions of clients, and trillions in assets under management. However, organizations should think about what those use cases are, what transformational capabilities they’re building, and then start aligning these principles and frameworks around that. But also, because governance has had a branding issue, start thinking about a center of enablement, right? Executive support, education, literacy, engaging with key stakeholders, feedback loops, and making sure that your external stakeholders, like your regulators and your customers, and your internal stakeholders, people in the back office and your executives, are all involved. It’s a cross-functional group of people who are really helping you shape and evolve these frameworks over a period of time. So that center of enablement is a huge thing. And one last thing I’ll say, Morgan, is about change management. This isn’t a done thing, right? Governance has to be part of your change management. Things are evolving too fast for it to be a one-and-done exercise, and then we miss out on things. The cost of missing out is too big, especially as we move into gen AI and quantum and a whole bunch of other things that are coming our way. It could be a matter of survivability, of your competitive positioning. So the stakes are pretty high. But let’s boil it down: use cases, one thing at a time, add value, and then continue to move forward. Any parting statements, Morgan? I know we have to wrap in a minute, so I’m just racing through this section. Morgan Llewellyn: Yeah, nothing on my side. I mean, we’ve got a couple of minutes.
Maybe we can open it up, if there are any questions, or anything we want to wrap up with. Kalia Garrido: I don’t see any specific questions coming through, but I want to thank you both for this great discussion. There’s a lot to unpack there, and it was awesome getting a peek inside each of your brains for how this applies to your respective worlds. So, of course, as always, thank you to our attendees for joining. If you’d like to see a recording of this video, you can check out our YouTube channel, and we’ve also dropped some links into the chat for where you can find our other events. We wish everybody a wonderful day. Again, Kuldeep, thank you for leading us through. Morgan, thank you for sharing your expertise. Kuldeep Singh: All right. Thank you, Kalia, I really appreciate it. Enjoyed it, Morgan; we should do this again. Morgan Llewellyn: Thank you, everyone. Thank you, Kuldeep. Thank you, Kalia. Kalia Garrido: Bye, everyone. Thanks.