Podcast
Law & Coder Episode 2 – AI Bots in Legal Marketing: Why Human Connection Matters
February 12, 2025
HIKE2

In this episode, join hosts Christina Natale and Morgan Llewellyn as they highlight the rapidly evolving landscape of AI in the legal industry and underscore the importance of staying ahead of technological advancements to maintain a competitive edge. Some of the key points covered:

AI Competition Drives Innovation – New players like DeepSeek challenge established AI firms, accelerating advancements in legal AI applications.
Democratizing Legal Research – AI enables smaller firms to access judge-specific rulings, case precedents, and opposition strategies, leveling the playing field.
Balancing AI Implementation and Human Connection – Successful adoption requires a mix of specialized and broad-access AI tools, rigorous testing, human oversight, and structured data access to ensure accuracy and compliance.

Tune in now to gain insights into the critical role of AI in law firms today.

Christina Natale: Welcome back to the Law & Coder Podcast. I'm Christina Natale, Director of Industry Solutions at HIKE2. And for anyone who has not been with us before: welcome, not welcome back, to my co-host, Morgan Llewellyn. Hi, Morgan!

Morgan Llewellyn: Hey, everyone, nice to meet you all. My name is Morgan Llewellyn. For Law & Coder, I suppose I'm the coder half of the buddy cop duo here. Thanks, everyone, for tuning in.

Christina Natale: I love the image of us as a buddy cop duo. I swear I noodled on this forever, and it's the perfect dynamic for us. Big news: our first episode is officially live on the Internet. So I'm excited, because we might have two listeners today, like Morgan's mom and someone else. I don't know. What do you think, Morgan?

Morgan Llewellyn: Yeah, you know, I think we've got one great listener, and hopefully we can have two.
No, I mean, we've got really good feedback on that first podcast. So thanks to everyone who's listening, thanks to everyone who's tuning in for the first time. And as always: thanks, Mom, love you.

Christina Natale: Love you, Morgan's mom. I was thinking that for our second episode we could do a little bit less of an outline. I always love to be super prepared and do all of my research, but one of the things I really love about how we communicate and talk about trends in legal tech and innovation is that one of us will just come to the other like, "Oh, did you hear about this?" or "Do you have any thoughts on this?" And I just think that'll be a really cool second episode for us: diving in to talk about current events rather than broad topics, and why they matter, what's going on, because if I'm a legal audience I might not have heard about them, or I might just not understand how they affect me. So why don't you start talking about something that's going on in current events, and then let's see how we can explain it, or better communicate it, to our legal listeners.

Morgan Llewellyn: Yeah, so let's just dive in and talk about some current events. For those who are listening a year in the future, this will kind of date us and tell you when we released the podcast, but one of the big things to come out over the past year, or past week, is DeepSeek.

Christina Natale: Feels like a year. It's been the longest week of all time.

Morgan Llewellyn: It does. But while it's topical today and current today, I think this is something that has been building for a while, and something that's still going to be relevant a year from today.
And so what's kind of fun, and what's interesting here, is: why is DeepSeek important? What does it mean for the AI industry as a whole? And then let's bring it back to legal. Why is DeepSeek, and not DeepSeek the model in particular, but the concept that more competitors are coming to market, important to legal? That, I think, is a really important conversation that maybe not enough people are having.

Christina Natale: Okay, well, let's do it. Last time I asked you on the fly to explain it to me like I'm a lawyer, and you thought you were in trouble. But let's start there. What's going on with DeepSeek? Explain it to me like I have no idea. Thirty seconds, go, high level.

Morgan Llewellyn: Well, I don't think I've got 30 seconds, but let's talk about what we're seeing with large language models. Large language models are kind of the workhorse of generative AI, and really what's causing a lot of the conversation, disruption, whatever you want to call it, and DeepSeek is another variant of them. They're getting more powerful. And if we rewind a couple of years, we had one really best-in-class large language model: when OpenAI first came out, it was head and shoulders above the rest. But that performance gap has slowly been eroded, to where I think we are seeing parity among a number of different models. Not on every single task, but on a lot of tasks you'll have parity between, say, OpenAI and Anthropic, or even some of the Llama models coming from Meta. And so I think what we're seeing with DeepSeek is really just another entry among these larger models.
There's just going to be a convergence in the performance of these models; there's going to be less of that gap. They're all going to what I like to call parity over a lot of what we're seeing. And what's really interesting about DeepSeek, though, and what I think rocked the financial markets this past week, is that it came from outside the U.S. Why was that important to the financial markets? They're saying: well, look, if we don't need Nvidia processors, if we don't need the data centers in the U.S., what does that mean for some of this nuclear-powered AI buildout, the need for additional power, all these other things? And I think that's what really started rocking the markets. It was less about another LLM, because we're going to get a lot more LLMs; every cloud provider has their own, as we've expected for a while, and they're going to continue to get better. What really rocked the market was: someone outside the U.S. could be doing this, and what does that mean for the picks and shovels of the AI industry? And I think that's really what DeepSeek is: people reevaluating what it means for the broader market if it's not all done here in the U.S. So that's kind of what we're seeing. What do you think, Christina? Does that make sense?

Christina Natale: Yeah, it does make sense. You said picks and shovels; once again, it's like you read my mind. But you didn't really answer the second half of the maybe deeper question, which is: could you explain a little bit more about why the model matters? I know you said it's probably moving toward parity, but for someone who doesn't understand what that means:
Like, what part of the AI engine or system or platform that they're using is the model, and why does it matter?

Morgan Llewellyn: Yeah, and I think that's kind of the point: if everything's moving to parity, it doesn't really matter. If, for 90% of your activities, DeepSeek is giving you a similar output to what you can achieve with OpenAI or something like that, it really doesn't matter. What the models are doing is understanding the context of what you're feeding them and then generating that predictive outcome. That's essentially what the model does. And the point is, if everything's moving to parity, perhaps the model doesn't matter as much; it comes down to other things. It's going to come down to latency, which I don't think we talk about enough. Latency is the amount of time it takes to deliver results, and I think that's going to be a really important point that brings in some of these smaller LLMs people are talking about; we can get into that and what it means for legal. But there are going to be other factors: cost, latency, bias is another potential one, and just flexibility. So when I talk about parity, it's less about the quality of the output, and you're going to start seeing folks differentiate more on some of these other dimensions.

Christina Natale: Okay. You mentioned Claude and Anthropic as some of the examples. Where I would go in the legal direction is: we talk a lot with our clients about whether they want to pilot or get started with a super legal or industry-specific tool. Like, do I want to try Clearbrief? Do I want to try vLex? Do I want to use...

Morgan Llewellyn: Not a sponsor, by the way.
Christina Natale: No, we are not endorsing any specific tech. But think about even Lexis AI, those tools that we've been told are specific to a legal problem, a legal use case, to lawyers using them, versus this, I don't want to say Wild West, but this world where you can kind of ask anything and see where it takes you, and what sources are available for that. I have a lot of thoughts; I tend to think that most of our problems in the legal industry don't necessarily require a legal-specific tool. But where do you see it? Do you think there are both advantages and pitfalls to using a more general AI tool versus a specialized legal tech tool?

Morgan Llewellyn: Yeah, let's talk about that. It's a good question. When we think about where to use a general model versus an industry-specific one, think about a fine-tuned model, or a model trained for a specific purpose. You might hear the term "fine-tuned," or hear people talking about small language models. Large language models are trained on, think of the history of time, every public document ever, is the easiest way to think about it. Where you see folks promoting these small models is the ability for them to be trained on information that might not be publicly available. That's really where you see folks...

Christina Natale: So, like, access to that information, or...

Morgan Llewellyn: Access to different information, which then ends up changing the predictions, how the words you're predicting should follow the question. I think the open question out there is: to what degree? And this is really where it's relevant to legal.
The question is: to what degree do you need a specialized model, versus to what degree do you need a model that understands human language and has access to the relevant documents, the relevant content and context, you're looking for? And I think that's really where, in legal, you start to see a difference of opinion. Are you looking at a large language model that has access to the history of every brief you've ever written, or every brief that's ever been written? Can a large language model with access to that perform just as well as a specially trained model that's been trained on all of it? That's where we're starting to see differing opinions: some people are in the camp of "no, I need a specially trained model that understands these things," versus "I can live with a large language model that has access to the latest brief, because that's really what I'm interested in understanding; I don't need a model that's been trained on the entire history of briefs."

Christina Natale: Sure. That's really interesting, because I think a lot about when I was practicing in complex litigation. When you say access to documents, you think about documents that we might feed the model for context, not as they're training the model, but like when I'm asking ChatGPT to review this document and use it as the context, or "I'm this user, here's my background," giving it that understanding. And thinking as a lawyer about the briefs I wrote and the legal research I would do, you almost have a double access issue. Is it case law that's in the public domain, or is there some sort of paywall? Can I access that case law? And you think about the infamous ChatGPT lawyer, who asked ChatGPT, "write me a brief," and then asked, "are these citations real?" and ChatGPT said yes, they're real. It's kind of silly and overplayed at this point, but I think it's actually an important conversation to have. Is it that the model didn't understand the question, or was not trained properly, or is it that it doesn't even have access to those sorts of resources? Was it a bad prompt? There are so many questions I have, but the main one is this question of access. Does ChatGPT, does an open model, have access to something like all of the case law, Shepardizing, and key numbers that Westlaw or LexisNexis have?

Morgan Llewellyn: Well, I think that's more about how you architect a solution and less about the model. So again, with DeepSeek, what we're really seeing is that models today are made to be replaced. Any model you're adopting today, whether it's a large language model or one of these smaller language models, doesn't really matter; the model you're implementing today is going to be deprecated in six months or a year. It should be deprecated in that amount of time, just given the speed of advancement we're seeing. So it's less about the large language model of today. It's more about how you're architecting a system, and, what you're alluding to, how you're providing it access, and restricting access, to the right information. That's what's more important; it's less about the model. If models are going to parity, at some level the model becomes a commodity, and it's more about how you provide that model access and have a well-architected system. I'm not saying there aren't edge cases out there where, yes, you do want a fine-tuned model.
One hundred percent, there are, and some people will fight tooth and nail over that opinion. I'm just saying, for the majority of use cases, we're seeing parity in these large language models, and so how you provide access to information is probably more important than the actual model you're using.

Christina Natale: Okay, one last follow-up question, and then I'm ready for your next current event. I've heard a lot of people use the phrase "training the model," and I know what you mean about how the company trains the model that exists, that we're asking questions of. But I hear a lot of people describe the way they build that context, and they'll say, "Oh, well, you have to train the model." Is that a real thing? Are they training the model, or are they just teaching it? Maybe that's just semantics, but they're really just giving it the proper context and telling it to either remember that context or refeeding it, right?

Morgan Llewellyn: Yeah. So when people talk about training a model: there are very few organizations out there that are actually training LLMs and building LLMs from scratch. What most people are referring to, and how most organizations, particularly in legal, should be thinking about it, is: what information do I give the LLM access to? The LLM is trained, the algorithm is trained. What it needs is the right information, and it also needs the right prompt, the right prompt for how to understand this information and what you want to do with it. And on the prompt side, just a brief aside: it's about the consistency and reliability of the output of your LLM, and in order to get that consistency and reliability, you need to have consistency and reliability in your prompt as well.
And so you don't want a hundred different people running a hundred different prompts over the same document, because you could end up with a hundred different outputs, and that would be bad from a business perspective. If I give you an input, I want to see the same output time after time.

Christina Natale: Reliability, accuracy. We've got to know what we're going to get if we ask the same question, like you said, in the same way. And I've seen, as we did POCs of some very early AI tools with some of our law firm clients, that's the first thing people say: if they don't get the answer they expect the first time, they're immediately out. "Nope, that is not accurate, it's not reliable, I'm not ever using that again, I'm not trying it again." The second thing: maybe the first time they ask a question they get a really good answer, the answer they expect, and then they're like, "Okay, cool, now I trust this." And I think that's the second part of it: okay, but what about when you ask it the same thing? Or when you tell your paralegal, "Okay, this is good, this is an approved use," or your lawyers, "Okay, this is a use you can use it for," and then we learn down the road that they're getting different answers, which is...

Morgan Llewellyn: So I think this is actually another, I don't know if it's a current event or a current topic, but it's a relevant topic that you're touching on, and it's particularly important for law firms. It's this idea of testing. We're going to be spending a lot less time building, writing code, and what we're going to end up doing is spending a lot more time testing.
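Morgan's point about prompt consistency can be sketched in a few lines of code. This is an editorial illustration, not anything from the episode: `build_review_prompt` is a hypothetical helper showing the idea of centralizing one approved template, so that every user sends the identical prompt for the same document instead of a hundred ad-hoc variants.

```python
# Minimal sketch (hypothetical helper, not a real product API):
# centralize ONE approved prompt template so identical inputs
# always produce the identical prompt string.

APPROVED_TEMPLATE = (
    "You are reviewing a legal document.\n"
    "Task: summarize the key obligations in plain English.\n"
    "Document:\n{document}\n"
    "Answer using only the document above."
)

def build_review_prompt(document: str) -> str:
    """Every caller goes through this one function, so the same
    document always yields the same prompt text."""
    return APPROVED_TEMPLATE.format(document=document.strip())

# Two different users asking about the same document now send the
# same prompt -- a precondition for getting the same output back.
doc = "The tenant shall pay rent on the first of each month."
assert build_review_prompt(doc) == build_review_prompt(doc)
```

In practice a firm would pair a shared template like this with deterministic decoding settings on whatever model it uses, so that a repeated question returns a repeatable answer.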
And so, all the time we've put into development: it's not like you're going from 100 hours to 10 and just saving 90 hours. When you're using essentially statistics-based algorithms to predict what you're going to do, you end up spending a whole lot more time on testing and exception handling. So it's not that you're going to go from 100 hours to 10; I think there's a misconception out there. Sure, you're going to reduce your overall hours, but it's not 100 to 10. It's more like 100 to 50, because now you're going to be spending a lot more time on testing.

Christina Natale: All right, I'm ready for your next trend. Or was that it? Was that your second one?

Morgan Llewellyn: Oh, well, that was just kind of a bonus, but I think it's relevant, because I do think there is this thought out there that AI is making coding easier, and so, whether it's an implementation or a development, "I'm going to save a ton of time." You're actually going to be spending time on other things as well.

Christina Natale: I think that's a good point, and I would just add: you said "making coding easier." It's funny, sometimes we don't even realize when we answer questions that it's from our own perspective, whatever hat we're wearing. And I think the last real tie to legal that I'd add is that this is really any industry. Regardless of the industry, we think it's good.
This new thing is going to make everything that much easier and more efficient. And to your point, there's still a lot to be done to make sure both that we've found the right uses for us and for our industry, and that we're getting the best possible results, but also knowing where to keep a human in the loop, to keep an expert in the loop, and where this is a great first draft that this product has given me. Now it's my job, it is my duty, to review it and correct it and make sure I get it over the finish line.

Morgan Llewellyn: Yeah, again, just another place for that human in the loop. I think we need to start thinking about agents in the loop and make it a little more agent-centric, just to be provocative. But I do think, again, on this idea of spending time in different places: if it's becoming easier to disrupt workflows, we're actually going to be spending a whole lot more time on workflow redesign, adoption, and change management. All those human-centric activities are going to be necessary, because it's becoming a whole lot easier to disrupt workflows, make them more efficient, et cetera. So that's just another bucket of time where, as you scale back in one, you're going to be scaling up in the other. And what's interesting, and important to talk about: if you think about it as one-time versus recurring costs, these are one-time costs we're talking about, the testing, the workflow redesigns. You are saving time overall in that process efficiency, but there is this one-time cost you're going to have to pay.

Christina Natale: Yeah, I think it's interesting, because rather than all this talk about AI coming for our jobs and what have you,
the way what you're saying makes sense to me, anyway, is that we're actually just reversing the cycle. We thought it would be: okay, we're going to do this work and use AI to make it better. Sometimes now it might even be starting with the AI work product and making that better. And you almost get into this cycle where it's making us better. It should; the goal is to make us better rather than replace us, by switching that order, or adding AI into the cycle in different ways, but always, like you said, remembering the role of the human, the expert, in getting it there.

Morgan Llewellyn: I've got another kind of fun AI trend that I'm seeing, that I think pertains really well to legal, and that's the application of AI agents in sales and marketing. And what I'd like to do is tell a little story first.

Christina Natale: Okay, I love it. A horrible story!

Morgan Llewellyn: Let's tell a little story, and then really bring it back to legal: how I think this is relevant for legal, and how marketing and BD should be thinking about the application of agents. So if we go way back in time, to when email first came out...

Christina Natale: You were what, like 20 at that point?

Morgan Llewellyn: Was probably 40, yeah, I was probably 40. When email first came out, you had the creation of spam, and initially spam was super effective, because, hey, it's an email, I don't have that many emails, I don't have that many contacts. But what people started to realize was: maybe I'll just click on an email that makes sense to me. "Oh, Christina sent me an email.
I'm going to click on Christina's. Bob sent me an email? I don't know any Bobs; I'm not going to click on that email." And we see the same thing with our phones: we tend to pick up phone calls from people we know, and we ignore those we don't. So we have this kind of spam filter plugged into our minds already: who are we going to spend our time with, who are we going to pay attention to? And I think we're seeing a little bit of that in the AI and agent space as well. A few years ago, we developed a highly automated system, essentially an agent before agents existed, using LLMs and vector databases to reach out.

Christina Natale: Morgan said "vector database," everybody, take a shot.

Morgan Llewellyn: Yeah. So we developed the system, and it was fully automated, and we knew it was giving great job recommendations. It was a fully automated job recommendation: hey, here's a job, here's a great candidate, let's invite that person to apply. We knew we were giving great recommendations. What we assumed was that we'd give great recommendations and no one would engage with them. We just made that assumption: we'll figure that problem out next. Let's first get the recommendation engine going, and then we'll figure out the engagement problem. Well, what happened was, we got it out there and ended up getting something like 80% open rates on one-off emails, across the board; it didn't matter the role, the company, et cetera. We were just getting great engagement. And come to discover, we had put in a business rule, just to make sure it was relevant, a business rule to make sure the person was already engaged with the brand they were being asked to apply to. And so it's the same idea:
There's an existing relationship between the agent, the automation, and the person you're reaching out to. And I think what a lot of organizations are missing, and this is how it comes back to legal, is that you can't just stand up a bot, an agent, and start spamming business development ideas and sales ideas. There needs to be an existing relationship, an existing connection, first, before these bots are going to be truly effective. So if I'm sitting in legal right now, in business development or marketing: how do I get more in-person connection with my highest-priority prospects, which can then allow me to really bullhorn the impact and value of highly tailored, personalized agents? You need that existing relationship. So it's not that we're going to move completely to bots; back to this human idea, I don't think we can move completely to bots. It's a combination: how do we continue to do really great in-person relationship building, and then use bots to make that more effective and really move things forward? I think that's a really interesting and emerging trend for legal.

Christina Natale: Okay. First, just a kind of reaction, and then I'd like to think through another connection to legal. The first is that it makes me think of how, on my iPhone, there's a setting, you go into your phone settings and you literally just click "Silence Unknown Callers." It's kind of like putting on a spam filter. If I don't have, like you said, an existing connection to the sender, the content, whatever it is, it's not going to make it to my inbox, never mind that I'm not going to read it even if it does. I'm like, "Nope, next," moving on. So that's number one.
The second thing you said that really spoke to me was your experience with that HR, recruiting-type example you gave: finding the right voice and messaging for the audience you're trying to reach. It makes me think of a trend today; I've seen a lot in legal blogs recently about basically feeding every opinion a judge has ever written into a model and saying, "Okay, now here's my brief; rewrite it in the voice of this judge." And to the point: it's a very different context, but it's the same idea of, how do I best voice this message in a way that's going to speak to its intended audience? No matter what the message is or who the audience is, I think that carries across a lot of broad applications. But that's what I'm thinking about. Do you have any thoughts on that?

Morgan Llewellyn: Yeah, I think it's an interesting question. So, gaming: are people gaming the system? Are they using technology to game the system? Am I using a piece of technology to give me an unfair advantage? It's almost an ethical question: does this piece of technology give me an unfair advantage when presenting to a specific judge, because I can understand every ruling, every judgment they've ever made, understand what they like and what they don't like, and basically incorporate that? The thing I've seen in the past is that when we first think about these things in the context of AI and automation, we think they're novel problems, new problems. But here, I'll throw it back to you.

Christina Natale: I'm ready.

Morgan Llewellyn: As an attorney, wouldn't you already be trying to do this? Aren't you already trying to position the case to what you already know about that judge?
And if you had more time, more money, a larger law firm, aren't you already doing this? You have, at some level, almost an unfair advantage, because you have all these massive resources. Aren't we already doing this, just with time and money?

Christina Natale: Sure. So a few things I'll say. When I started in private practice, it was small law firms but really high-volume and high-stakes: complex, federal, massive, nationwide litigation. And even then, I had no paralegal, no associates under me, and I was doing literally the old-school manual case law research. And I really prided myself: I write a really good brief, I'm really good at legal research. Even then, when I did my Lexis search or my Westlaw search, if I got it really tailored to the types of case law I was looking for, I'd go into the filters and narrow first to that circuit or district, and then see if there were any opinions by that judge. And God forbid I found an opinion written by that judge: I'm going to cite it. I'm going to say that judge's words back to them. And I've heard someone say something like, if you don't have precedent, you lose. But here's the thing, and this is where I want to go with this; I really love that you asked me this question. Yeah, but also no. If I'm an associate and a partner tells me, "Here is the argument we are going to make; go write me this brief, go find me the research," and I do the research and come back, and my memo says, "No one has ever ruled on this before, there's no precedent, we lose," I'm either getting fired or I'm never getting work from that partner again; my billables are zero, right?
Because that's what precedent is, right? And I'm not going to get political on jurisprudence and the Supreme Court and whatever, but seriously, all any of us are doing is taking something that's been done before and trying to argue that it supports our point of view. And so, to your point about gaming the system: what if everybody's got this advantage? I think it becomes kind of a really cool thing, again, to the point. If everyone's doing that, if that becomes the new model, we have to adapt. We have to come up the food chain and become better. If I know that my opposition is going to feed every opinion the judge has ever written into an AI and ask it to write the brief, or the mock order, the proposed order, in the voice of that judge, giving the outcome they want, I can do the same thing. I want that same exact body of work to prove the opposite. So I think it almost forces you to come up the food chain. And again, not to toot my own horn, but that's what I had to do manually. I'm like the grandpa: oh, I had to walk up the hill in the snow three miles to get to work. But I had to do that manually, literally trying to make the same cases say the opposite. So I think it's kind of cool. I don't know that it's good for legal research, or for young lawyers learning the law. But I do think that at a high and complex level, if I'm really good at what I do, if I understand the basics, if I have the outline, it's a really cool way to elevate that counterargument. And maybe you do get a better outcome, because you've convinced the judge that what they said does apply to what you're saying, even if they reached a different outcome.
Morgan Llewellyn: Yeah, I mean, I think again we're talking about the trends, and we touched on a couple of things in that, right? Data access: having access is important, and the LLM needs to have access to information. But the other thing you touched on is, look, it's not just that we're seeing parity in LLMs. We're actually going to start seeing almost parity in our ability to form arguments, if we all have access to the same information. Right? And again, you might see people trying to frame this as an ethical question, but we're already doing it. It's just that it's not a level playing field: if you've got more time and more money, you're able to do a better job. And so where have we seen access to LLMs elevate law? It's not among the exceptional firms; they've already got the time and the money and the resources to do this. Where we're going to see it is, to your point, in the ability to elevate everyone else, because they don't have to spend massive amounts of time. They're going to have access to tools that can do this work for them, which then allows them, to your point, to be a little more novel. Christina Natale: Your strategy, your analysis. It frees you up to make those more strategic moves and think at a really elevated level, which I love. It looks like I've completely lost track of time. I don't even know when we started, and I think we're probably getting pretty close to time here. Do you have any parting thoughts before we leave today? Morgan Llewellyn: No, I think we touched on a couple of really important things for legal, right? The first is, I think, while agents are really interesting, and automation is only going to happen more in professional services like ours, those relationships become more important.
Your relationships almost become your moat, right, if you want to think about it that way, because relationships are going to give you access to the ability to have meaningful tools, meaningful automation from a business development perspective. I think that's pretty interesting. And then, as far as current events go, we're just going to see more LLMs. Don't focus on the most recent LLM; focus on how you replace that LLM, and how you get consistent results, good results. I think that's important. And then data access. Again, it's less about the LLM: how are you architecting a system to share information with an LLM, to share information across different tools in your organization and different people in your organization, so you can really fully use that LLM? Maybe that LLM sits within the four walls of your firm; maybe it sits in a very secure cloud. There are a lot of different ways to architect the security of LLMs. But how you're providing the ability to share information within your organization, I think, is absolutely key. Christina Natale: It's been a pleasure, as always. Like I always say as we wrap up our conversations, doing deliverable work all day, every day, like any of us, gets tiring. But it's so refreshing, such a nice break in my day, to just spend, whatever, 15 or 30 minutes noodling on really exciting and interesting stuff going on in the world. So, as always, I appreciate you being here with me, and we'll be back next time. I think we teased last time that we'd be talking about risk versus reward in innovation, and I'm glad we were able to take this quick detour to tackle something that's actually super relevant and current.
So next time we'll come back with our risk versus reward discussion, and I really look forward to having that conversation with you. Morgan Llewellyn: I think risk versus reward is going to be like, sorry, we didn't have time for Matt Damon today. I can almost see us having another episode before that. But yeah, sorry, risk versus reward: yes, we'll have you on. Christina Natale: Sorry, mom, that's the thing. Your mom was so excited for risk versus reward. Morgan Llewellyn: Right. Christina Natale: Keep teasing her, keep pushing it further out. She'll be so excited by the time we finally get there. Morgan Llewellyn: Yeah, I will close out with: I love you, mom. But thank you, everyone, for listening. We really appreciate all the support and the really valuable feedback and comments we've gotten. So thank you, everyone. Christina Natale: Thanks, everyone. See you next time.