Gen AI in the Enterprise: Vern Eastley, Legal AI Advisor

Gen AI in the Enterprise is happy to welcome Legal AI Advisor Vern Eastley of Vern Eastley Advisors! Vern is a software engineer by trade who found himself swept up in the AI wave a few years ago. He quickly recognized the need for AI in the legal space and jumped headfirst into that intersection of law and technology in August of 2023. He’s been “drinking from the fire hose” ever since!

Today, Zach and Vern talk about things devs and architects should know about AI, giving proper attribution to tools like ChatGPT, what using Generative AI in the legal field looks like practically, preventing and dealing with hallucinations, and more.

About Guest Vern Eastley:

Vern is an entrepreneur and visionary who founded Vern Eastley Advisors. Vern’s company helps legal tech companies get noticed, attract ideal prospects, inform and excite them, and turn them into customers. He works with forward-thinking lawyers, teaching them to understand and use AI to run their firms more profitably, differentiate from competitors, and deliver spectacular results. He provides both private consultation and online group instruction.

About The Generative AI In The Enterprise Series:

Welcome to Keyhole Software’s first-ever Podcast Series, Generative AI in the Enterprise. Chief Architect, Zach Gardner, talks with industry leaders, founders, tech evangelists, and GenAI specialists to find out how they utilize Generative AI in their businesses.

And we’re not talking about the surface-level stuff! We dive into how these bleeding-edge revolutionaries use GenAI to increase revenue and decrease operational costs. You’ll learn how they have woven GenAI into the very fabric of their business to push themselves to new limits, beating out the competition and exceeding expectations.

See All Episodes

Partial Generative AI In The Enterprise Episode Transcript

Note: this transcript section was created using generative AI tools like YouTube automated transcripts and ChatGPT. There may be typos, slight content changes, or trims for brevity!

“[Music]

Zach Gardner: Ladies and gentlemen, welcome to the future. My name is Zach Gardner, and I’m the Chief Architect of Keyhole Software. Because this is 2024, we can’t go a day without talking about generative AI. I wanted to expose my brain to many different points of view. I talked to people in healthcare, finance, and every vertical I could think of, but one area was lacking—a legal perspective. Now, I’m not a lawyer by any means. I didn’t even take the LSAT, nor do I have any desire to. Thankfully, I found someone who can discuss the implications of generative AI from a legal standpoint.

Today, with me from Idaho, is Vern Eastley, the Legal AI Advisor at Vern Eastley Advisors. Vern, how’s it going? Long time no talk!

Vern Eastley: It’s going great! Happy to talk to you today, thanks for having me on.

Zach Gardner: Anytime, anytime. So, my disclaimer that everyone who listens to the show knows and loves—if they had a drinking game, they’d have to chug when I say it—so I’m going to go extra slow this time: All the views and opinions expressed in this program are those of the participants and do not reflect those of their employers or their trade organizations. It’s just two dudes talking and having a good time. So, Vern, to get us started, maybe you could introduce yourself. What’s your background, and how did you get into being a legal AI advisor? There’s got to be a good story behind that.

Vern Eastley: Well, your little disclaimer was a great legal intro, so there must be a bit of a lawyer gene in there somewhere! I should start by saying that, just like you, I never took the LSAT and never went to law school. It’s a bit of a strange place I’ve found myself in. I’m actually a software engineer by background, and I’ve run a mobile app development agency for a long time. But about a year ago, I got really heavily into using AI in my work.

The topic was obviously on my mind, but then one day, I had lunch with a buddy, a law librarian who runs a consulting firm. He’s constantly getting inquiries from law firms about AI—they’re a mix of freaking out about it and being excited. We talked for two hours about how AI is affecting law, and I just couldn’t get it out of my mind. I thought, this is the perfect place to combine AI with another industry. So, as a complete legal neophyte, other than perhaps negotiating some contracts with my attorney over the years, I jumped headfirst into this in about August of 2023. I’ve been drinking from the fire hose ever since.

Zach Gardner: Very cool, very cool. That’s good to know, and yeah, that’s how we bonded—through mobile app development, of course. So, for those of us who are not in the legal field—I spend most of my day in healthcare IT, so I have some peripheral knowledge, but definitely haven’t been thinking about it as much as someone like you—what are some of the things that an application architect or developer should be thinking about in terms of our usage of generative AI? Just give me the lowdown as, I don’t know if “dilettante” is the right word, but someone who’s become a part of the legal community.

Vern Eastley: Well, I think my first response would be that, as software engineers, there’s always a class of problems we wish we could solve, but we’re just not that smart. Some things are really hard to boil down to an algorithm that you can express in code, especially if you’re dealing with natural language or nuanced understanding of complicated issues. If it’s hard to build an algorithm to express it, AI is a really great candidate, especially this generative AI that’s all the rage these days.

As a software engineer, there are a lot of examples where I don’t know how to do something with conventional programming, but I can think of all sorts of ways to tackle the problem using AI. Now, whether I’m going to make a mistake and screw something up with that is also an important question to ask, but at least there are things we can try now that were previously beyond our capabilities.

Let me give you an example—you mentioned healthcare. I’ve worked with a client outside the legal field, of course, who is involved with medical transcription and figuring out things like billing codes based on detailed notes left by a doctor. There are firms out there already that can take a transcribed note from a physician and figure out what the ICD-10 codes are for Medicaid billing, or whatever it is, just based on that. It’s not something you can do with just a keyword search; it requires identifying actual concepts and ideas from a conversation. That kind of thing is now within our grasp, and there are lots of things in every industry where problems that used to be pretty much impossible are now at least something we could look into.
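[Editor’s note: to make Vern’s example concrete, below is a minimal sketch of how such a system might ask a model for ICD-10 candidates from a transcribed note. The openai Python client, model name, and sample note are illustrative assumptions, not details of the system Vern’s client actually uses.]

```python
# Minimal sketch: asking an LLM for ICD-10 candidates from a clinical note.
# Assumes the `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

note = (
    "Patient presents with a three-day history of productive cough, "
    "fever of 101.2 F, and wheezing. Assessment: acute bronchitis."
)

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,  # keep output as deterministic as possible for a coding task
    messages=[
        {
            "role": "system",
            "content": (
                "You are a medical coding assistant. From the clinical note, "
                "return a JSON list of candidate ICD-10 codes, each with a "
                "one-sentence rationale."
            ),
        },
        {"role": "user", "content": note},
    ],
)

# These are candidates only: a certified coder still has to validate each code.
print(response.choices[0].message.content)
```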

Zach Gardner: I came across two examples of that this week. One of them was a situation where I knew kind of what I was after—I wanted to do managed identity authentication to access a Key Vault on Azure. I knew I’d done this before, but for the life of me, I couldn’t remember what I did. I was able to ask ChatGPT, “Alright, pretend you’re an expert C# programmer, which I should be with this many years of experience. This is the problem I’m trying to solve—what are some ways you would solve it?” It gave me two or three options, and the first two didn’t work, but the third one was exactly what I was thinking of.

It didn’t require me to copy any code, so to speak—it was like, this is the class name I was after. Then I was able to go out and Google and figure out some additional things from that. So, I don’t think there’s any attribution I need to give ChatGPT. I don’t think its feelings are going to be hurt by any means, but that is something I was curious about as a developer—how do we ensure that we’re giving proper attribution? Does it even matter? Because it’s not an entity that’s going to come after me if I copy one line of code from it. Or do I have to attribute it because it wasn’t something I directly wrote? These are some of the things I think about.
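[Editor’s note: the pattern Zach describes is well documented in the Azure SDK. Here is a minimal sketch in Python (Zach’s own work was in C#, but the shape is the same); the vault URL and secret name are placeholders.]

```python
# Minimal sketch: reading a Key Vault secret via managed identity.
# Requires the azure-identity and azure-keyvault-secrets packages.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential tries several credential sources in order and
# picks up the managed identity automatically when running in Azure.
credential = DefaultAzureCredential()

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net/",  # placeholder
    credential=credential,
)

secret = client.get_secret("example-secret-name")  # placeholder
print(secret.value)
```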

Vern Eastley: That’s a great question. I’m one of those weird programmers who will put attribution in a source comment if I find something on a website that explains how to do something. I do it for my own benefit as much as anything, but programmers coming after me could benefit from seeing what guided my thinking. You can’t just provide a link to a ChatGPT conversation, and while you could put the prompt in, the model’s changing continually—there’s randomness and variation in the answers by design, so you can’t exactly duplicate them. I suppose you could paste the whole response in, but it’s interesting. There’s one other thing I want to touch on in your story—the issue of…oh geez, just having a little brain cramp here. My mind just totally blanked, like the time I was in a piano recital a few months ago and forgot the song. I hope we can edit for a moment here…
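[Editor’s note: the attribution habit Vern describes might look something like the comment below for AI-assisted code. The format is one possible convention, not a standard, and the prompt shown is purely illustrative.]

```python
# One possible attribution-comment convention for AI-assisted code.
# Everything here is illustrative; no standard format exists.

# Source: ChatGPT (record the model version and date -- the model changes
#         over time, so a link alone cannot reproduce the answer)
# Prompt: "Pretend you're an expert programmer. How do I read a secret
#         from Azure Key Vault using a managed identity?"
# Note:   output varies by design, so the resulting code was verified
#         against the official Azure SDK documentation before use.
def read_secret(name: str) -> str:
    ...
```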

Zach Gardner: No problem! Actually, I forgot the second example I wanted to mention. I needed to do some regression analysis in Python. I haven’t done a ton of Python, but I can read it and know what’s going on, though it’s harder for me to write it. There were a couple of different algorithms, a couple of different machine learning models that I could have used. So instead of Googling the advantages and disadvantages of one model over another, I just gave it the list of models and asked, “What are the differences between all of these?” It saved me a bunch of time. Even if it made me 1, 2, 3, 4% more effective, if you’re talking about a professional athlete, for example, that’s the difference between winning a championship and not. I really do see this tool as something that, even if it helps just on the margins, that alone might be worth it.
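[Editor’s note: the comparison Zach describes takes only a few lines with scikit-learn. A rough sketch follows; the synthetic dataset and the three models chosen are illustrative, not the ones he actually compared.]

```python
# Rough sketch: comparing several regression models on one dataset.
# The synthetic data and model choices are illustrative only.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=42)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=42),
}

# 5-fold cross-validated R^2 gives a quick apples-to-apples comparison.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:>13}: mean R^2 = {scores.mean():.3f}")
```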

Vern Eastley: That’s interesting because that’s exactly the point I was hoping to get to. There are a lot of studies out there that claim generative AI is more helpful for neophytes and newbies than it is for experts. The reason they make that claim is that someone who has no idea how to tackle a problem can ask a question and then get some guidance to start making progress really fast. But I don’t think that’s the best way to look at it. In your case, as more of an expert, you don’t need someone to tell you exactly how to do something—you just need to know which of the various paths you could follow to do it.

For example, with Azure authentication or whatever it is, you already know the theory; you just need to know how to get there. I find that when experts, like attorneys or software engineers, use ChatGPT and other systems, they already have the intellectual framework and background—they just need to know the quickest way from point A to point B. They already know where they’re going, whereas with a newbie, they don’t really understand the lay of the land. They’re just typing things in without fully understanding what they’re doing. It creates the illusion of progress, but I don’t think it’s quite the same.

Zach Gardner: Yeah, I hear you. One of the things that actually scares me is that in some instances, the output from ChatGPT is so human-like that it’s hard to tell when it’s being sarcastic, or when it’s joking. I got burned on a blog post the other day—there was a tech blog where someone was making a claim about machine learning and AI. I copied it verbatim and tried it out in code, but it completely failed. I realized that the original post had been sarcastic, but there was nothing in ChatGPT’s answer that tipped me off. I had a good laugh about it, but it reminded me to be more critical and not trust the AI as gospel. Have you come across anything like that?

Vern Eastley: That’s a great point! It’s kind of a cardinal rule in the legal profession, and in software too, that you don’t want to be someone else’s crash test dummy. You want to understand enough of what’s going on so you can see if something is leading you astray. Just like a responsible lawyer won’t take a precedent or argument at face value without confirming it’s still good law, as a programmer, you have to understand enough of what’s happening to recognize when AI might be sending you down the wrong path.

With ChatGPT, the randomness and variation in its output can sometimes create issues, and it may even make up things that sound plausible. This is especially dangerous for someone without a strong foundation in the subject matter. For example, I’ve heard of situations where attorneys relied too heavily on AI-generated documents without thoroughly reviewing them, leading to incorrect legal advice. It’s a reminder that while AI can be a powerful tool, it’s no substitute for human judgment and expertise.

Zach Gardner: Absolutely, and that’s something I think we all need to keep in mind. AI is a tool, but it’s up to us to use it responsibly and make sure we’re not letting it lead us astray. I think that’s a great note to end on. Vern, it’s been a pleasure talking to you today. Thanks for sharing your insights with us.

Vern Eastley: Thanks for having me, Zach. It’s been a great conversation. Take care!”

