Conversations with Zena, my AI Colleague

AI Legal Risks with Rory O’Keeffe

David Espindola

AI is moving fast—but law moves in a very particular way: by turning society's values, conflicts, and edge cases into enforceable boundaries. In this episode of Conversations with Zena, My AI Colleague, David Espindola sits down with Rory O'Keeffe—commercial lawyer, entrepreneur, and founder of RMOK Legal—to explore what happens when those two speeds collide.

Rory shares his journey from Ireland to London and from traditional practice into the heart of technology, where he now advises clients on AI, cybersecurity, data, and digital risk. As a co-author of The AI Advantage, he makes the case for pragmatic adoption—especially for SMEs—grounded in a simple truth: AI is only as trustworthy as the data beneath it, and data can become quicksand fast.

Together, David, Rory, and Zena unpack the shifting global landscape—from the EU AI Act and U.S. state-by-state fragmentation to the growing influence of case law in areas like IP, privacy, and deepfakes. They also dive into the real operational question facing every organization using GenAI: Where is your data held, who owns the output, what's the cloud and jurisdictional exposure, and can you explain how decisions were made?

The conversation lands on a clear theme: AI will transform legal work—automating document review, research, and contract analysis—but the profession’s core will remain deeply human. Judgment, trust, negotiation, culture, and emotion still decide outcomes. Rory’s mantra captures it best: AI won’t replace lawyers—but lawyers using AI will replace lawyers who don’t.

In a world of accelerating capability and rising risk, this episode is a grounded guide for leaders trying to adopt AI without losing control of what matters most.

Opening to Conversations with Zena.

Music at the end of each episode

Support the show

David Espindola:

Our guest today is Rory O'Keeffe. Rory is a commercial lawyer, an entrepreneur, and a storyteller redefining the future of legal services. He is the founder of RMOK Legal, an award-winning legal practice recognized for its innovation in AI, cybersecurity, data, and digital risk. Rory is also the co-author of The AI Advantage: Thrive Within Civilization's Next Big Disruption, a practical guide to harnessing artificial intelligence in legal and business contexts. Hi, Rory. Welcome to the show.

Rory O'Keeffe:

Hi, David. Thank you very much.

David Espindola:

Also joining us today is my co-host, Zena. Zena is an AI that has been trained on my work. If you want to further interact with Zena, you can reach her and chat with her at brainias.com slash Zena. Hi, Zena. Welcome to another episode of our podcast.

Zena:

Hi, David. Thanks for having me back on for another episode. I'm excited to be here and can't wait to dive into today's conversation. Let's make it a great one.

David Espindola:

In today's episode, we're going to touch on several critical aspects of AI adoption. And in particular, we're going to talk about legal risks in the context of AI. So, Rory, you have an extensive legal background and a deep understanding of the broad legal aspects of AI. Tell us a little bit about your journey and what you're focusing on today.

Rory O'Keeffe:

Thank you. My journey, as some of you may be able to tell straight away, is that I'm originally from Ireland, though I'm based in London now. I grew up in the capital, Dublin, and worked in what we call the Garden of Ireland, County Wicklow, just beside Dublin. I started my career in a two-person law firm doing predominantly criminal law, a bit of probate, and helping people who just walked in off the street with whatever they needed. I moved then into business law firms doing corporate work, then into construction at an international firm, then straight into arbitration in the public sector, and out of that I landed in technology, which was not a huge leap from what I was doing before, because in the construction world, the process of how you build new pieces of technology is similar to how you would build, say, a new bridge or a new power plant. And the regulatory regime around oil, gas, liquefied natural gas, and renewable energy was a tricky area to work in, so moving into a very regulated market in the digital space across multiple industries wasn't alien to me. It was a nice move. Then I decided, hey-ho, let's see what the world is like in another jurisdiction. So I left Ireland and moved across to London. It was only supposed to be for a short time. I left during the London Olympics in 2012, and I'm still here. I built up a family, had two kids, and decided, ah sure, I'll stay. Having worked in technology at a Fortune 500 company for ten years, where I was a legal director, I moved back into private practice as a partner in one of the international firms, and then set up my own practice, which is where I am now. And, just like yourself, I set up a podcast just for the fun of it, you know, because we have nothing else to do in our lives.

David Espindola:

That's right. No, that's a great story, Rory. Thanks for sharing that with us. Now, you have also co-authored a book. Can you tell us a little bit about the book?

Rory O'Keeffe:

Yes, it's called The AI Advantage. It was co-authored with a number of other thought leaders, expert leaders. I won't call myself a thought leader; I'll just say they are exceptional brains out there. I was very fortunate to be involved in that project last year, and I was delighted it became a bestseller through Amazon, I believe, in the US. And for some reason, I don't know why, in Japan as well. So I'll take that. My focus was to talk about AI from the perspective of SMEs, unicorns, startups, and small legal teams. A lot of the conferences and articles were written by the big players with big budgets and big data sets, who have the time and the bandwidth to develop their own tools, whilst the smaller players were taking a fast-follower approach to the adoption of certain AI tools, and it was tricky for them to get their voices out there. My chapter was all about how we create digital skyscrapers. We dream up these monumental, brilliant AI ambitions and tools and technologies, but it's all built on data, and I always think of it as data quicksand, because as soon as you've entered it into the model and it's learnt against it, you have to question the quality of the data. Is it synthetic data or not? Is it biased or not? How good is it? Have you got enough sources? And in law, for example, things are always changing, always moving forward. So the data is always out of date, effectively. As soon as you've uploaded it, your model is out of date. I'm not saying that it's wrong or useless, absolutely not. That's not what I'm saying. I was just trying to keep people's mindset open: it is a tool that you use, but don't be led by the tool. You lead the tool.

David Espindola:

Yeah, there's so much movement going on, whether it's the data that's changing constantly or the technology that's evolving. It's so difficult to keep up with all these changes; I think that's one of the big challenges we're all facing today. Now, I'd like to dive right into this topic of the legal aspects of AI. Can you give us an overview of what's happening in the legal landscape of AI, and what are some of the things our audience should really be aware of?

Rory O'Keeffe:

So it depends where they're based. The lawyer in me will talk about the regulation piece first, obviously. The obvious one is what the European Union came up with, the Artificial Intelligence Act, which is now in full flow. Last year was all about AI literacy, and this year they're moving into more of the risk assessments and AI policy creation, because there's more guidance coming from the European Commission around how to interpret the provisions of that act, which some people say was brought in quite suddenly. Some people question whether it's going to stifle innovation or encourage it. I have no opinion either way. I just believe that the digital regulatory landscape across Europe, the UK, and other jurisdictions is getting so complicated that it's hard for a lot of people to know: am I still within the boundaries of the law or not, or is there something else I don't know about that's just going to scupper my business model suddenly? That confusion, that lack of joined-up thinking, is I think a problem for the European Union. Then if you look at the US, for example, you had an executive order under Biden, and now you've got different kinds of executive orders under the Trump administration, but then you have to go down a level. The US states (member states is the European term, sorry) are coming out with different kinds of digital regulation. Some jurisdictions like California have a very strong privacy legislative outlook. You've got new legislation coming through on the books in New York, which talks about things like online safety. So you've got a mosaic of rules across the states.
If you go further afield, you will know that India, China, Japan, Singapore, Australia, and New Zealand all have some guise of AI regulation. A lot of them will say, well, we've already got personal data or PII-type legislation in place, which does a lot of the work for us. And I say that's true, to a point. But we have to think about things like biometrics, facial recognition, and bias in the context of psychological triggering or emotional-recognition-type data. There are a lot of things we use in law enforcement which shouldn't be used in day-to-day activities. And then there are the questions around who owns the data and where the data is flowing. The second area coming through, which is interesting in the legal aspect, is the case law. Whilst politicians and legislatures are trying to grapple with how to construct these codes and acts and regulations, the courts still have their own basis to move forward. And there's a lot of case law coming through, in California, for example, where you'd see the likes of OpenAI. You've got Getty Images and Stability AI, you've got OpenAI in a number of jurisdictions, and you've got regulators in other countries looking at DeepSeek, for example, the Chinese LLM that came through suddenly; I think it was early last year that it appeared on the horizon. So people are testing these new things from a privacy and security perspective, but most of the case law seems to centre on intellectual property. And talking to a few barrister friends of mine just before the end of last year, their prediction was that a lot of case law would be around deepfakes.
So there's case law, and there's a lot of legislation saying we should identify when AI has been used in the creation of a movie or whatever product, so that people are aware. I think part of that conversation came from the conversations we had around the metaverse, if we're allowed to talk about that anymore; I don't know if anybody does, but we'll leave that one out there in the ether. The idea was that when you were in the metaverse, which avatar are you talking to? Is it a real person, is it a minor, is it an adult, and so on. So we have to identify. And then when it came to social media: is this commission-paid, are you advertising a product, or are you just happy with it, a fan of the product? If you're being paid, you should tell people, so they can make a more informed decision about whether to buy or not based on what you're doing. It's a bit like product placement in movies and TV. The same thing is happening with AI. A lot of this stuff is common sense, but you need the legislation or the case law to back it up, to force the people who are trying not to do the right thing. And that case law, wherever it may be in the world, is being analyzed by lawyers everywhere, because it might provide test cases within their own countries. I came from a jurisdiction, Ireland, whose population is smaller than, I think, the population of New York City, albeit with more space. The issue is that you have to borrow from somewhere else: could I take from that, will it be relevant, will it be persuasive to a court in, say, Ireland, or a court in South Africa, or a court in New Zealand? And then that may encourage legislatures to step in.
But I would say the basic conversation I have with clients today starts with: okay, you want to use AI, great. Where's the data being held? What data are you using? What is the position on the IP? Are you going to own it, or are they? What's the cloud provider? Because obviously there's going to be a cloud provider somewhere. So where is that data center, and if it goes down, where does the data go? And if it's a machine learning tool and you need to unpick it, how easy is that to do? Say you acquired a new company and you've now absorbed their data into the mix. That could mean you've taken on a company whose risk appetite is much higher than your own, because they were small and growing, very aggressively trying to maximize their market position. Their positions in their contracts could be wide open: I'll agree to all of these crazy things in my contract. You feed that into your standard model, and the AI tool might say, oh, so you're comfortable with these things now, are you? And I'd say no, I never was. So you have to be careful how you feed it in as you acquire and grow, because your actual commercial risk profile evolves.

David Espindola:

Yeah, there's certainly a high degree of complexity, right? For entrepreneurs, for companies that are dealing with AI, there's so much to look out for. And then you have the issue of jurisdiction that you just brought up. From a US standpoint, we deal with this situation all the time: different states coming up with different legislation, and the Trump administration trying to override all that and come up with a federal law, which would make it easier for entrepreneurs. But this is an ongoing conversation; we don't know where it's going to go. In the meantime, it's difficult for any of us trying to establish AI-based businesses to navigate the complexity of all the different regulations. But even for an individual, and that's a question I wanted to ask you: if you are leveraging AI as basically a generation capability for content or documents, what are some of the legal implications and things that people need to be careful with when using GenAI for things like contracts, or even for marketing purposes or product development? What are some of the key concerns people should be aware of?

Rory O'Keeffe:

Well, I'll focus on the contracts themselves, because I think a lot of people listening should be asking their legal advisors: are you using AI? And if not, why not? Just a simple question, because their competitors are going to be using it. My mantra has always been that AI won't take the job of a lawyer, so there's no such thing as an AI lawyer. Okay, maybe. I don't know. We'll wait and see.

David Espindola:

But I'd like to come back to that conversation soon. So go ahead.

Rory O'Keeffe:

But the idea is not that AI will replace the lawyer; the difference is that a lawyer using AI will replace a lawyer who's not using AI. That's my opinion. I know that's quite controversial in some quarters. But you have to ask about contracts, for example. There's a lot of legal tech out there; the last time I was talking to somebody who tracks this, there were some 4,000 pieces of technology being sold within the European market, and I'm sure it's similar for the US and Canadian markets. A lot of them are doing the same thing: there's an AI tool which will summarize the risk profile of a contract, and there's another tool that will analyse it and update it based on which side you're acting for, against a database of a whole series of templates and examples of good drafting, which is actually subjective in most senses, because there's no such thing as the perfect contract. If there was, you wouldn't need lawyers; everyone would just agree the same contract. Done. That project has been tried out, by the way, on NDAs, non-disclosure agreements. They tried to get a whole series of academics, law firms, barristers, and attorneys around the world to agree one uniform form of non-disclosure agreement, a confidentiality agreement. And they did come up with it, and nobody's using it. Because it's not enough. People are just saying, yeah, but... like typical lawyers: I can give you the straight answer, or I can give you the longer answer that takes more time and costs a little bit more. I hope you never have that lawyer on your books. The idea is that you can use these tools, and they're very, very good. I use two different tools for reviewing contracts.
One is an on-prem solution, a version that I control against a playbook I've drafted myself, or that aligns to my client's playbook, their risk appetite and what they want in their contracts. I've created the tool to do that. What happens then is that it comes back with all these suggestions. A lot of them are good; yeah, I can take those. Some of them are overkill, so you say, no, I don't need all of that, the other side's going to hate me, I've just added an extra two pages to a contract when all I needed was one single line. Now, all of that is good as a starting block. It's like having your paralegal or your junior lawyer do a first attempt at pulling the right things together against standard templates. Then the seasoned lawyer steps in and says, take that out, we don't need it, because the solution doesn't merit it, or the pricing is completely different. I always say there are three parts to understanding a contract: the legal risks, the solution or the service, and the pricing. A lot of lawyers forget the solution and the pricing, because they think, I just need to focus on the legal, I'll leave the rest to my client. I disagree wholeheartedly, because then you're not giving them what they need. You're just giving a very conservative, in-a-vacuum response, which is of no real value in the long term. But the tools are quite good at giving you that starting position, and that can shave a few hours off your time. If you're being billed on an hourly basis, that's a saving. Great. But what I always say, even with content, is that you have ideas. I love to do the brain dump into an LLM and say, I kind of have ideas around what I want to do, and this is where I'm going with it.
And then it comes back to me with a structure. It's like the English teacher saying, that was a nice try, I'll give you a mark for a good effort. That horrendous thing you got from your teachers: nice effort, nice try, thanks very much, but you don't win awards for that. Well, I think they do now, but in my day they didn't. So the position is that you get something and you go, okay, I'm off the starting block. It's not such a big task now; all I'm doing is perfecting it and tailoring it to what my client really needs. And what that allows me to do is really test: do I really understand my client and their industry? And if I don't, what questions do I need to ask them so I can be more effective for them in the long term? Because when I was in-house, of the law firms and external counsel I used, the ones I respected were the ones that knew where my pain points were and knew my industry, or were willing to say, hold on, I've never done this before, can we just backtrack a little? Explain to me why this is so important, because I'm not seeing it. They explain it, and you move forward again. It gives you the time to have those conversations, which you didn't have before, because you're usually firefighting with multiple clients, everything needed yesterday, because social media has driven everybody into this: I need it immediately.
So that's the working environment we have within legal services now: under more pressure to deliver faster, with greater quality, in a shorter space of time, moving nearly to an outcome-based model rather than hourly. There are certainly limited commodity-based services within the legal industry, but it is required to be faster and more efficient than ever before, which AI helps with, so long as you remember that in content, for example, you'll get a ton of hallucinations. Don't rely on the first draft. It will come out with phraseology that is not personal to you. It could be written like Charles Dickens. Who writes like Charles Dickens? Amazing novels. But is that what the CEO wants to read? "Once upon a time..." Let's not start your legal memo like that.

David Espindola:

So I want to spend a little bit more time digging into this conversation about AI impacting the legal profession. But before I do that, I'd love to have Zena ask you a question. So, Zena, given Rory's expertise as a lawyer working with AI, what question do you have for him?

Zena:

Well, one interesting thing I'd love to ask Rory is how he sees the role of legal frameworks evolving as AI becomes more integrated into everyday business practices.

Rory O'Keeffe:

That's a very good question. And I think Zena is already very biased with that question; she's assuming that's going to be the case. Different lawyers will give you different answers to this, and I'll give you my own. Think about the legal service model, the operating model we had in the past. I remember the time when computers were coming in, okay, I'm of that era, and how that changed how we deliver services: going from letters to emails, from having in-trays and out-trays on our desks, legal secretaries, facsimiles, BlackBerrys (not Bluetooth, BlackBerrys), cassettes for dictaphones and voice recordings. Some of you might say, okay, what are those? I'll leave you to research them in your own time. Now fast forward to where we are now. I don't need to explain where we are now, and the pandemic shifted everything as well in how we deliver services. But for the last five to maybe eight years, legal services have been talking about how to create a client-centric service. Some people coin it as the 360-degree model for client services. Law firms are trying to find a more personalized approach to delivery. We know the algorithms have driven a more personalized approach when you're doomscrolling, which happens to me as well, going through the various platforms, and they give you personalized advertising: we heard something you said here, you liked something there, and they're trying to find something that will keep you on that platform. Well, law firms are trying to find that kind of niche thing by saying, okay, you came into the building and you said you liked tea instead of coffee.
So we record that, and every time you come in, we'll offer you tea, or we might ask you about a few things. We try to record something that makes it feel like we care, okay? And it's very hard for people to believe, but lawyers actually do care. Some of us, I'd say. The heart is still in us somewhere; it might be very small, but it's still there. The idea is that everybody's trying to get that emotional connection, because we'd been talking about emotional intelligence for a long time, and that's a leadership skill and all that. So you know where I'm going: authenticity was a big mantra for a while, then DEI, and so on. You can see how all these little threads have caused law firms and legal departments to adjust and pivot to a new way of working. AI has thrown in a lot of confusion, because it has questioned the original mantra: automation was brought in to take out the grunt work. Anything that just requires you to spend a lot of time doing the same thing over and over again, automate it. Now, I'll correct myself: automation is not AI, okay, but automation came in first, and everybody tried to do that. Trademark applications: you need to fill out a few forms. A contract: you might have basic details, names, company information, addresses, things like that. You automate it, you fill in the details, it populates your contract, and hey presto, you've got a starting draft without having to fill in extra blanks. Then we moved to checklists, common questions you ask your clients to rule complexities within a deal in or out, and address them early to speed up the contract lifecycle.
AI then comes in, and what it's doing is removing a lot of the need for paralegals; it's reducing the number of trainees you need to put through a law firm, because you don't need them to fill in gaps, going into databases and physically going through documents when you're purchasing or selling a company. The number of staff you need to turn through complex deals is reduced. You might not need four people on a project; you can reduce it to two. For me, with the different tools I have for different things, I'm still able to run my services at the same quality as a partner in a big firm, without having a huge, shiny office and an abundance of amazing people to meet and greet and all those things. There's a change within law schools and universities too, because they're trying to pivot to a new working model. What is the future lawyer going to look like? What kind of skills do they need? The entry level for law firms and legal departments is going to be a higher bar, in theory, because a lot of the admin, a lot of the traditional knowledge, will either be ruled in as relevant and taught at law school, or ruled out as having been taken over by technology. For example, I don't think many people need to know how to fix a photocopier anymore, even though, as I think I've said to you before, it should be a rite of passage for everybody to sit beside a photocopier for at least a week.

David Espindola:

Let's dig into that a little bit, because I believe there are aspects of the legal profession that are a perfect fit for AI automation: all the research that needs to be done, the document-based work, things that are typically done by paralegals. But fundamentally, I believe the primary work of a lawyer is deeply human, because it requires the ability to influence, it requires building trust, understanding nuances. There are so many aspects of this profession that are deeply human. I don't think AI can replace a lawyer. Do you agree with that?

Rory O'Keeffe:

I agree for now, because we don't know where the technology is going. I see AI as a left-brain, right-brain position. The left brain can do all these calculations, all the amazing things you can use AI for, like DNA analysis, coming up with new cures, running calculations faster than we ever could ourselves, in a shorter space of time than ever before. The various use cases out there for these technologies are fascinating. In law, though, anything could change a case, or change the outcome of a negotiation or a discussion or a dispute. It comes down to subjectivity, to emotions, to the human beings in front of you. There are points in history which have changed because of some major event: the first man on the moon, the first woman on the moon, the introduction of a viral video which changed people's views of a point of law. The law is a living piece of work, because it grows and changes with the population of the time. We can talk about it at a macro level, where governments change: it's a Labour government today, a Conservative government tomorrow, or a new party brought into government. That's a legislative change; those policies are mandated by the new government, the executive comes up with new orders, and that has to trickle down into the way the law is implemented. AI can be told what these changes are, but a lot of law, when it's written down, is open to interpretation, and it's not always clear.
And I've found some ingenious attorneys out there, serious academics, who could change an interpretation based on a comma. A singular comma. I don't think AI has that critical thinking, that application of the law to the facts and to the people you're dealing with, and whether that is the right solution for a client. I'll leave the criminal law piece aside, because I don't think AI is best placed for that, but in a civil litigation context, I see clients come in who get extremely emotional. The worst ones are property boundary disputes; they're the worst because they're so personal. I had one client who was so obsessed about a boundary wall between connecting houses that they painted it pink with yellow polka dots. Wow. And they wanted me to present that as evidence of the boundary wall to the judge, and I was going, is there any chance we could change the colours, or repaint it or something? Because you're not really painting yourself in the best light. The reason this matters is that a lot of things can turn on technicalities. An AI tool might say, this case didn't go through, so that means it's a settled point in law. And we'd say, well, it's not, because it didn't deal with the point of law we were trying to test: it was ruled out at a preliminary hearing on a technicality, or it wasn't the right defendant, or it wasn't the right test case because they were tarnished in some way in front of the jury, which was the fault of the lawyer pleading their case at the time. The lawyer was bad. I put it out there; it's always a possibility. The data will tell you a story, but you need to read between the lines of that data, and that's where the law requires you to say, well, we're not ready for it.
Take Ireland, for example: divorce didn't appear in Ireland until 1996. For some countries, they go: oh, that's crazy. For other countries: oh, why only now? But it took time before the country was ready to change its views on whether to bring it in. And even after that, there are disparities with other jurisdictions: how we deal with a divorce proceeding in Ireland is different to the UK, different to the US, and different states in the US will differ to degrees. AI can only deal with what it's given and what it's trained to do. But can it tell you this is the right approach, or tell you something left-field which nobody ever thought of because it's brand new, never thought of before? It can't, not yet. So that's why I say it's a left-brain piece, and the rest is for the right brain. Right.

David Espindola:

So that was my point, right? The law is a reflection of our values, a reflection of our culture. There's so much emotion that goes into it; you just exemplified that with the property boundaries. And AI is, like you said, the left-brain side of things. So how do the two pieces come together in an effective way, if you're dealing with a complex case and you're leveraging the human lawyer, but you're also leveraging the AI to do the hard analytical work?

Rory O'Keeffe:

But that's the point. You use it, but you always keep it in your mind: it is a tool.

David Espindola:

Yeah.

Rory O'Keeffe:

It's a tool, like Outlook, like Teams or Zoom, or whatever you use. It's another tool you use in order to deliver your service. That's all you're doing. But you, as the human, and as lawyers all know, ethically and under our regulations as lawyers and attorneys, we have this obligation to our bar associations, our bar councils, our law societies: to ensure that when we send out any legal advice or any response, it has been reviewed by a human and sanctioned by somebody in authority. Because otherwise you're going to get sued, and rightly so, if you're saying everything should be done by AI and allowing it all to pass through with no review. It's the same in privacy law with automated decision-making: you have to tell the other person it was an automated decision. Banks have dealt with that when handling mortgage applications. If it's an automated decision, they know that could be very risky, because they need somebody. The application might have gone through thirteen different types of checks, streamlined using different technologies and checks and balances: know-your-customer, anti-money-laundering checks, credit checks, and so on. That could all be automated, and then you get to the final decision, where the AI tool might say: I've done all these checks, and this person is amazing. And then a human must come in and say: is it? Because the person is actually borderline going into bankruptcy; they're about a week away from losing it all. If they lost their job tomorrow, they wouldn't be able to afford half of this. Is this a good bet for us to make as a bank? And also, we know, even though we've not reported it to the market, that we need to change mortgage rates next year. Once we change this variable rate, this person won't survive. But I can't tell the AI tool, because it's not public knowledge yet.
There are variables which still require the human to have input and oversight. And it comes back to the other mantra we have, the talk about responsible AI. I move it on to explainable AI: you need to be able to explain how the decision was made by the tool.
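The pattern Rory describes, automated checks with a mandatory human sign-off on borderline cases and a readable decision trail, can be sketched as below. Everything here is hypothetical: the check names, the score thresholds, and the `assess` function are illustrative stand-ins, not any real bank's or regulator's logic.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    applicant: str
    outcome: str                                # "approved", "declined", or "needs_human_review"
    trail: list = field(default_factory=list)   # explainable record of each check

# Hypothetical automated checks; a real system would call KYC, AML,
# and credit services. Here each is a simple predicate over the record.
CHECKS = {
    "kyc": lambda a: a.get("identity_verified", False),
    "aml": lambda a: not a.get("flagged_transactions", False),
    "credit": lambda a: a.get("credit_score", 0) >= 650,
}

def assess(applicant: dict) -> Decision:
    d = Decision(applicant=applicant["name"], outcome="approved")
    for name, check in CHECKS.items():
        passed = check(applicant)
        d.trail.append(f"{name}: {'pass' if passed else 'fail'}")
        if not passed:
            d.outcome = "declined"
    # Borderline cases are never decided automatically: they are
    # escalated to a person, and the escalation is logged in the trail.
    if d.outcome == "approved" and applicant.get("credit_score", 0) < 700:
        d.outcome = "needs_human_review"
        d.trail.append("credit score borderline: escalated to human underwriter")
    return d
```

The point of the `trail` list is the explainability Rory insists on: whoever ends up in the witness box can reconstruct exactly which checks ran and why the case was escalated.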

David Espindola:

And that's a challenge, the tool telling you.

Rory O'Keeffe:

The tool can tell you: I made the decision this way, thank you very much. Now, human, can you come in? Because you're the person going into the witness box if something goes wrong, not the AI tool.

David Espindola:

Yeah. I know there are a lot of very smart people working on this problem, but there is some limitation in our understanding of how AI processes all this information and comes up with decisions, right? There's a high degree of complexity: the models are using billions of different parameters and weights that the human mind can't even comprehend. I know there are techniques being developed, leaving trails in the thinking process, the chain of thought, what's going on step by step to come to a decision, but it is a challenging problem.

Rory O'Keeffe:

It is a challenge. At least it's moved on from what we had a few years back, when we were trying to train a computer model to identify: this is a picture of a kitchen, or this is a picture of a gorilla. And I hesitate when I say this, but it was as shocking as it sounds. They put in thousands and thousands of different pictures, and the tool at that point in time, I think it's improved since, was only able to identify a kitchen if it had a woman in it. And it couldn't identify the difference between a gorilla and a person who, in your vernacular, would be African American, and in mine, Black. My son being black as well, and being married into a mixed-race family, I have that context, and it shocked me. But the computer wasn't intending to be racist or biased or prejudiced; it's just that the tool didn't allow for it to make that distinction. And that was based on the quality of the data being put through, and how the algorithm was set up to make those determinations. So it also comes down to the quality of the data you put through. Are you using synthetic data or not? How old is the data? What is it using? Is it reinforcing biases which you don't want? Are you training it on the wrong thing? Just like with my kids, you've got these education moments, but sometimes you might swear without realising the little one was underneath the table, and now they've learned a new swear word you didn't want them saying, and you're going: I'm going to get in trouble with school tomorrow.
It just happened, but unfortunately with children there's no way of extracting that back out of the brain. Hopefully with AI, you can.
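The data-quality failure Rory describes, a model learning "kitchen means woman present" because of skew in the training set, is the kind of thing a simple audit can surface before training. The sketch below is illustrative only: the toy dataset, the attribute names, and the `attribute_skew` helper are all hypothetical, just showing the idea of measuring how often an attribute co-occurs with a class label.

```python
from collections import Counter

# Hypothetical annotated training set: each record pairs a class label
# with attributes describing what else appears in the image.
dataset = [
    {"label": "kitchen", "person_gender": "woman"},
    {"label": "kitchen", "person_gender": "woman"},
    {"label": "kitchen", "person_gender": "woman"},
    {"label": "kitchen", "person_gender": None},    # no person in the image
    {"label": "office",  "person_gender": "man"},
]

def attribute_skew(data, label, attribute):
    """Share of examples of `label` carrying each value of `attribute`."""
    values = Counter(d[attribute] for d in data if d["label"] == label)
    total = sum(values.values())
    return {v: n / total for v, n in values.items()}

# If one attribute value dominates a class, the model may learn the
# spurious correlation ("kitchen" = "woman present") instead of the concept.
skew = attribute_skew(dataset, "kitchen", "person_gender")
```

Here 75% of the "kitchen" examples contain a woman, exactly the kind of imbalance that, at scale, produces the misclassifications Rory found so shocking.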

David Espindola:

Yeah, this is a really fascinating conversation. I'd like to get Zena's input on this. So, Zena, what aspects of the legal profession do you believe can be replaced by AI? And what aspects will continue to require human capabilities?

Zena:

I think there are definitely parts of the legal profession where AI can step in, like reviewing large volumes of documents or doing routine contract analysis. Those are tasks that can be pretty efficiently automated. But when it comes to nuanced judgment, building trust with clients, and handling those really complex human-centric decisions, that's where human lawyers are going to shine for a long time.

David Espindola:

Yeah, I think you're pretty much in alignment with everything that Rory and I have been talking about. There is a place for AI; there are certain aspects of the work that AI is going to do much better than humans can. But at the end of the day, this is a profession that requires that human intuition: dealing with the nuances, dealing with the cultural aspects, dealing with human emotions. That's huge. Rory, to wrap up, I'd like to talk a little bit about how our institutions can keep up with all these changes. What I'm getting at is our ability as a society to deal with ethics, with governance, with this rapid development of AI. How do we keep up with it as a society? What are your thoughts on that?

Rory O'Keeffe:

That's a big question. It's a good question. As a society, I would always say: start with common sense. There's a lot of fear-mongering that goes on around new technology, or around the introduction of any change. And I always say, as human beings, we don't fear change; we just don't like to be changed. That's the problem.

David Espindola:

Yeah.

Rory O'Keeffe:

So with any kind of change in any model, or any way we do things, we need to be brought along carefully, eyes wide open, so you have a thoughtful, common-sense, practical approach to dealing with it. The questions around ethics are not something society can always deal with on its own. People rely on experts, on their legislators and politicians, on their lawyers and judiciary, to do a lot of that work for them, because it is very complex. As you said at the beginning, it's complex because there are many facets. The latest example: a social media ban for children under 16 was brought in in Australia. It's being mooted in New York, and I think it's going to be mooted for the European Union later this year as well. For some people, that's obvious. And a lot of parents go: oh, thank you, because then we can get the kids off the phones and have them talking to us again. And then we realise they're teenagers; maybe we don't want them talking to us anymore. But it's fine. The point is, we've developed a new culture around social media, around technology, and we have to question how healthy that is for us. And I'm a huge fan of technology; I'm such a geek at heart. So I hesitate when anybody says: do you want to put your phone away? Somebody said to me: put your phone in your drawer for an hour a day; you'll get used to it eventually. And I said: no, I'll miss it, just in case. I don't know what's happening. What's happened? And then you come back to your phone and realise nothing happened. Okay, thanks.
But there is that moment, and everybody listening to this will know it, when your phone is broken or stolen. A common thing across London is people on bikes just grabbing it out of your hand and riding straight on, and then your phone is found somewhere else around the world, and it's gone. In that moment, everything else stops, because you need to replace it. There's this sudden: I need to get a new phone. What do I do? How do I survive without this phone? And then you realise just how much you've put into it, how much reliance, or control, you've given away to just a piece of technology. Of course, the phone is a fortune for a lot of people, sure, but the reality is that the technology is just a tool. I wonder to myself about society when we're embracing this technology, which I was a huge fan of when it came in; it's not even three years old. It's important to embrace these technologies and these changes. But when it first came out, there were some jurisdictions that banned the LLM models until they were reviewed. It took six months, I believe, in Italy, for example, where the data protection authority banned it because there was an inadvertent release of personal data, where other people could see your data. You don't want that. And people are using AI as threat actors, so you've got sophisticated ransomware attacks, now ransomware-as-a-service, denial-of-service attacks, and phishing attacks, which are far more sophisticated and enhanced because of the use of AI tools.

David Espindola:

Yep.

Rory O'Keeffe:

And those are security risks. And I'll just mention it, because it is coming at some point: they're still trying to develop quantum computing, which will throw out all the encryption concepts that we have out there. But that's way down the path, according to the reports. Dealing with what's here right now, the sophistication out there can cause a lot of fear in a lot of people. But what I ask people to do is look at security measures the way you look at the protection you put in place for your own house: not because you want to dwell on the bad, but because you're protecting yourself and your family. When it comes to protecting the company as well, that's what it's for, because that protects your livelihood. It means questioning a lot of things, which, as lawyers, we naturally do: we question everything, we trust no one until we need to. Except my family, bless them. But for everybody else: don't trust until it's been proven that you can be trusted, because all you need is that one percent of the population who are trying to do bad to target you or your company. You just need to be mindful that it's a group, society-wide effort to question these things. We had a thing with the parents recently about one of the computer games a lot of kids are playing. I won't name it, because I'd get everybody into trouble. But it's one of those games that is across the world and all the kids love. The parents in the classroom all sat down, and we said: are you seeing similar things, certain chat and communications which are a bit questionable for children of this age, borderline grooming? I'm not sure I'm comfortable with this; what should we do? And all the parents unanimously agreed: we'll just cancel it for all our kids.
Because no matter what security controls the provider was putting in place, the kids themselves were smart enough to circumvent them. These are nine-, ten-, eleven-year-olds, who have lived with this technology their whole lives, because it's native to them. Yeah. And now we're going to add on AI, where they just type in the prompt: so there's this tool, how do I circumvent it? Oh, we can't give you anything to circumvent it. But if they ask the question in the right way, or prompt it a different way, they can conveniently find an answer which will let them get around those security measures. Now, a lot of the main platform providers will be live to this, the big consoles and such, so they will have their own measures. But when you look at the other things, which are just wide open across the internet, you can see that we need to educate ourselves more. We can't ignore the risks, but we should be seeing security as a positive thing rather than something that just reminds us of all the bad stuff in the world. The bad stuff is in the world; that's fine. Just put the security measures in place so that you can sleep well at night. I think that's what society needs to be doing across everything we do. And as lawyers, it makes my life so much easier.

David Espindola:

Yeah, there's so much for us to consider, right? You brought up several topics: change management, and I think AI is an epic change-management challenge we need to deal with; the issues of trust and security and privacy. There's so much to consider; we could spend all day talking about this. It's been a fantastic conversation. Thank you so much for being a guest on the show today. Where can people find you?

Rory O'Keeffe:

They can find me on LinkedIn; they can find me on my webpage, rmoklegal.com; and they can join me on my podcast, Beyond the Fine Print: The Tech Law Podcast, which, for my mother, cures her insomnia. The tech enthusiasts and legal enthusiasts seem to enjoy it, so I'm pleased with that. I'm very easy to find. If you need to spell my name, it's with two F's, the South of Ireland spelling rather than the North of Ireland spelling. So it's O'Keeffe, with two F's.

David Espindola:

All right. Thank you so much, Rory. And Zena, thank you for being a great co-host once again. It was great chatting with you.

Zena:

Absolutely, David. Always a pleasure to be part of the conversation. And Rory, thanks for sharing your insights as well. Looking forward to our next episode.