
GenAI in the Enterprise: Barron Stone, AI Product Lead at Defense Unicorns

Today on GenAI in the Enterprise, Zach talks to Barron Stone, the AI Product Lead at Defense Unicorns. Barron served in the Air Force for 12 years and still serves as an active reservist. In the Air Force, Barron was an electrical engineer, which means the office was his battlefield. Now at Defense Unicorns, he has continued that trajectory. Defense Unicorns builds and delivers software capabilities for national security missions, and Barron has led the company in developing several AI-based security solutions.

Zach and Barron discuss how Generative AI impacts Barron’s personal and professional life, from solutions for the US military to an animated musical band of sea creatures. Curious what that looks like? Listen to the full episode!


About Guest Barron Stone:

Barron Stone is the AI product lead at Defense Unicorns, delivering open-source AI capabilities for national security missions. On the side, he serves as a reserve officer in the U.S. Air Force. Barron holds B.S. and M.S. degrees in electrical engineering. For fun, he authors online courses with his wife, Olivia, teaching foundational electronics and software development skills to make intimidating technical topics accessible for anyone wanting to learn.

Barron on LinkedIn: https://www.linkedin.com/in/barronstone

About The Generative AI In The Enterprise Series:

Welcome to Keyhole Software’s first-ever podcast series, Generative AI in the Enterprise. Chief Architect Zach Gardner talks with industry leaders, founders, tech evangelists, and GenAI specialists to find out how they utilize Generative AI in their businesses.

And we’re not talking about the surface-level stuff! We dive into how these bleeding-edge revolutionists use GenAI to increase revenue and decrease operational costs. You’ll learn how they have woven GenAI into the very fabric of their business to push themselves to new limits, beating out competition and exceeding expectations.

See All Episodes

Partial Generative AI In The Enterprise Episode Transcript

Note: this transcript section was created using generative AI tools like YouTube automated transcripts and ChatGPT. There may be typos, slight content changes, or character limits for brevity!

[Music]

Zach Gardner: Ladies and gentlemen, welcome to the future. My name is Zach Gardner. I’m the Chief Architect at Keyhole Software, and a few months ago, I set out on a little bit of a journey, a little bit of a quest: generative AI. I had been hearing all about it. I had used ChatGPT, and I had looked into things like LangChain, but I was starving for really, really good insights from people actually using it. I didn’t want to go on this mission alone; I wanted to go on it with people who had been thinking about it longer than I have, who are actually using it, and who have better insights than I, as a dabbler, could ever come up with. Well, the good news is that I scoured the internet and found a few people crazy enough to agree to come onto this videocast today with me. On this videocast, on this little journey, is Barron Stone. He’s the AI Product Lead at Defense Unicorns and has an awesome shirt and an awesome mug. Barron, how’s it going?

Barron Stone: Hey, Zach. Thanks for having me on. I’m Barron Stone with Defense Unicorns. We are a U.S. defense contractor. I feel funny saying that, but I have been asked multiple times, “What industry are you in? Are you a defense company?” Yeah, it’s right there in our name. We build and deliver software capabilities for national security missions. As you said, Zach, I lead our product team developing AI capabilities. We’ve got an open-source tool called LeapfrogAI, which enables you to self-host generative AI models so you can take AI into the disconnected, air-gapped, classified environments where national security missions tend to operate. Then we’re building on top of that to extend to the broader scope of AI, which is more than just generative AI. I know that’s what we’re here to focus on, but there’s a lot more. Delivering AI for national security capabilities is my day job.

On the side, and part of why I got into the defense industry: I’ve been in the Air Force for 12 years and still serve as an active reservist. They let me put on the uniform every once in a while and stay connected to the mission that way. I have a background in electrical engineering. People always ask, “Are you a pilot?” No, I’ve got 12 years in the Air Force, and I’ve never actually flown on a military aircraft. As an engineer, the office is our battlefield: working acquisitions and making sure we’re getting capabilities to the warfighter, making sure they have the best things they need. That’s me in a nutshell.

Zach Gardner: No, really cool. And I forgot to mention this before, but thank you very much for your service to our country. I’ve had uncles in the Air Force, and my grandpa was in the Navy, if I remember right. So there’s a special place in my heart for those who serve. Before we get too deep into the weeds, it’s always good to have a disclaimer: all the views and opinions expressed in this program are those of the participants and do not reflect those of their employers or any trade organizations or branches of the military with which they are affiliated. I think that covers all the bases. It’s just two people talking; we just happen to be recording what we do.

Barron Stone: Yeah, if you don’t like something I say, blame me. It’s coming from here.

Zach Gardner: Totally, totally. Okay, so to get started off with, ChatGPT—I remember it coming out, probably the first time I used it was January 2023. I used it for some fun little things, like making recipes. But it wasn’t until the last couple of months that I really started to see how it could be applied to my professional life as a Chief Architect. To get us started off, I’m curious, what’s your experience been with generative AI? Where did you first start using it, and how are you using it today in your personal and professional life?

Barron Stone: Like you, I started playing around with these tools as they came onto the scene over the past year or two. I’ll start with where I’ve used it personally because personal is more fun than what I do for work. A lot of the creative stuff, like image generation, is fun to play around with. For example, I had a feeling you might ask this question, Zach, so I brought a friend along. This is my rocktopus. I don’t know if you can see it there. Maybe if you hold it close to your face, the camera will focus.

Anyways, this is the first picture I ever painted, back in 2013: an octopus playing a piano. I had in my head all these other sea creatures I wanted to give him for a full band, like a seahorse playing a saxophone and a crab playing the drums. But I’m not a good painter, and I didn’t have the time or energy to execute those ideas. So these ideas had been cooking in my head for over a decade. About a year ago, when Bing made their image creator available for free online, I sat down on a Sunday morning with two cups of coffee, and I was able to get all those ideas out of my head and produce eight other bandmates for my rocktopus. It was very personally satisfying to use AI to get those ideas out.

Another way I’ve been using AI that amazes me is for travel. My wife and I went to Japan a couple of months ago. Neither of us speaks Japanese, so having tools like Google Translate to get around was indispensable. The coolest thing I discovered was the ChatGPT app on my phone. We were walking around Osaka, and I saw this clown character everywhere. I took a picture and asked ChatGPT, “What is this?” It was able to identify it as a mascot for an old restaurant, which has exploded into a marketing icon. The fact that it could take that picture, find the information, and synthesize a concise response blew my mind.

Another personal use is coding. My background is in electrical engineering, so I used to write a lot of code. My career has taken me the management route, so I don’t get to code as much as I’d like to, but I still do projects to scratch that nerd itch. I started dabbling with ChatGPT, asking it to write Python functions. I’ve been very impressed with its capabilities, and I know a lot of our developers are using AI much more in their workflows. I’m just a dabbler, but it’s been really cool.

One area I don’t use AI in my personal life, which I know a lot of people do, is writing content for LinkedIn or cards. I’ve tried generating a LinkedIn post, but by the time I come up with a topic and rework what it generates to have a human touch, I feel like I should have just written it myself. I want my content to be from me, not from ChatGPT. So, I bring that up to say I don’t use it for everything. Some people do, but that’s a personal choice.

Professionally, I use AI to explain things to me like I’m a kid. As new topics come up or new technologies emerge, being able to get distilled information and ask questions to dive deeper has been helpful. I also use LLMs for generating agendas for meetings and planning team offsites. These tools help get past the blank page. While I haven’t used AI much for slide generation or presentations, I find it can help me get started, even if I end up rewriting most of what it generates. I’m still learning every day and trying to adopt AI more into my workflow for content generation and synthesizing information.

Zach Gardner: I’m definitely with you in terms of text-based content. Images are a different story. I’ve never done a single painting in my entire life. Being engineers by trade, we have to think analytically. Our brains just aren’t naturally flowery and creative. Being able to describe to a tool what we’re after, what it should look like, and the cast of characters is helpful. I’ve never heard of a rocktopus, but ever since you said that, I’ve had “Rock Lobster” playing in my brain.

Barron Stone: It’s the rocktopus, all one word. It’s my rocktopus.

Zach Gardner: I dig it. Maybe I should try painting. Who knows? Maybe it’s a hidden talent. But the good Lord saw fit to give me certain gifts, and that wasn’t one of them. So, no, that’s a really good overview. Your journey is very similar to mine. I’m curious if you could talk me through some interesting use cases for generative AI in the defense space, or even just machine learning in general. Let’s not constrain ourselves, as this is a very unique space.

Barron Stone: I’m glad you said that because LLMs are sucking up all the oxygen in the room. People see it as a hammer and try to make everything a nail. AI is so much more than that. There are many use cases for machine learning beyond generative AI. For example, classification: sensors detecting things and identifying what they are, analyzing situations, etc. It’s not necessarily generative AI but certainly machine learning. We should start from use cases and workflows, then work backwards to figure out the best technology to improve things, whether it’s AI, automation, expert algorithms, or statistics.

In the DOD, AI really can be seen as a tool that accelerates decision-making: it helps you get information to decision-makers and get to a decision faster. It’s something called decision advantage, being able to decide and move faster than your adversary. There are other machine learning use cases too, like predictive maintenance, using models to predict when equipment will fail so repairs can happen proactively, and target recognition, analyzing sensor data to identify and track potential threats. But broadly speaking, accelerating decisions is the place where I see AI playing a role.

So that could be synthesizing information from tons of locations, doing analysis, and presenting results. Specifically with generative AI, one of the use cases where I think the tools are really valuable today is transcription: taking audio and turning it into text, potentially doing translation along the way, and then also summarizing it.

An example: if there’s a conflict going on in the world, we’ve got operations happening, there are lots of meetings happening throughout the day as commanders are over video chat or whatever, having these discussions—”what’s going on, what are we doing next?” Humans talking to humans, making a plan—that’s information that needs to be quickly synthesized and then communicated out to the force. It’s very timely. Things are moving fast.

Being able to, as humans are communicating verbally, transcribe that and then summarize that down into whatever format the military uses—lots of specific reporting type formats—but being able to push that out very quickly. Traditionally, that would be a very manual process. You’d have somebody sitting there manually transcribing meeting notes. By the time the human can do that, maybe that information is stale. It also means that person is not doing something else more productive, just a very manual process.
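That transcribe-then-summarize flow can be sketched in a few lines. This is a minimal, hypothetical illustration assuming a self-hosted, OpenAI-compatible chat endpoint of the kind a tool like LeapfrogAI exposes inside the air gap; the URL, model name, and “SITREP” format string are placeholders, not details from the episode.

```python
import json
import urllib.request

# Assumed self-hosted, OpenAI-compatible endpoint inside the air gap (placeholder URL).
API_BASE = "http://localhost:8080/openai/v1"

def build_summary_request(transcript: str, report_format: str) -> dict:
    """Build a chat-completion payload that condenses a meeting
    transcript into a specified reporting format."""
    return {
        "model": "local-llm",  # placeholder: whatever model is deployed locally
        "messages": [
            {"role": "system",
             "content": (f"Summarize the following meeting transcript "
                         f"as a {report_format}. Be concise.")},
            {"role": "user", "content": transcript},
        ],
        "temperature": 0.2,  # keep the summary conservative
    }

def summarize(transcript: str, report_format: str = "SITREP") -> str:
    """POST the payload to the local endpoint and return the summary text."""
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(build_summary_request(transcript, report_format)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    notes = "Commander: convoy departs at 0600; weather is marginal."
    payload = build_summary_request(notes, "SITREP")
    print(payload["messages"][1]["content"])  # prints the transcript passed in
```

The speech-to-text step upstream would come from a separately hosted transcription model; only the summarization request is sketched here, and only the payload-building runs without a live endpoint.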

I say that as a very specific use case of just getting information out faster and synthesizing it. I had another one… it just went poof. You know, that didn’t use to happen to me until either I was on a recording or after my first kid. I don’t know what it is; as a parent, you have these little pieces of your brain that just fly off.

Zach Gardner: So in terms of Defense Unicorns, do you have any open-source products people can check out? Other than Zarf, for the nerdy folks who already know that one.

Barron Stone: Yeah. So, we’ve got a couple of different open-source projects that we build and maintain with the community. You mentioned Zarf, so I’ll just run down the list real quick. Zarf is a tool for packaging and deploying software into air-gapped environments. Cloud-native applications often expect to have internet connectivity, and when you put them in places without it, things tend to break. So Zarf is a tool for bundling up all those dependencies so they can be delivered into the air gap. It solves one piece of the whole software delivery puzzle.

We have another tool called Pepr, which does some Kubernetes stuff that I will not even try to explain. I think I’ve heard the term “admission controller.” I’m sure the engineers will tell me I’m wrong, but it helps with integrating applications into our baseline tech stack. Then we have another tool called Lula, which helps with continuous monitoring for compliance. And then LeapfrogAI, which is the AI one right there in the name. That’s what I mentioned earlier, for being able to self-host generative AI models in these disconnected environments. If you’re operating in a classified environment, or any defense information environment, you can’t necessarily send data off to ChatGPT or wherever those servers reside. A lot of these cloud-based services, you can’t touch. So this is your ability to recreate those capabilities locally, wherever your data is. Don’t go to the AI; bring the AI to you.

Zach Gardner: No, and in my research for this, I’ve definitely gained a better appreciation of how fundamentally different the defense sector is in terms of deployment requirements. I mean, you talk about the authorization to operate, the ATOs; I’ve been through change review boards before, but ATOs are a level beyond that. And then there’s the environment where it’s going to be deployed. If you’re going to deploy software on a ship in the Mediterranean Sea, the operator may only have a 3×3 square area in which to use the software. It has to be easy to use and user-friendly, but also air-gapped, like you were saying. In terms of what we normally do for web-based software, it’s just so foreign to think of that as a requirement I would have to consider from day one.

Barron Stone: Yeah, there’s a saying I picked up somewhere during my time in the Air Force, trying to bring any new capability to the fight: the technology is easy; it’s the people that are hard. People are the hard part. There are still a lot of technical challenges, and that’s where Defense Unicorns is solving many of them with these tools. But adoption-wise, people are going to be the biggest impediment. And I don’t say that to point the finger at anyone. There are certainly a lot of concerns, and a lot we still don’t understand across the board.

So I think the main thing, or the thing that’ll move things forward the best, is better understanding. “AI literacy” is maybe the term, even if it’s a little loaded: what AI is, what it can do, and what it can’t do. Even a couple of weeks ago, I was having conversations with potential mission heroes we were going to work with, and they were asking, “If I’m deploying this LLM and chatting with it, how do I know what it’s telling me is true, or guarantee it’s accurate?” The short answer is, you can’t. There are a lot of things you can do to restrict the responses it gives, to prevent it from wildly hallucinating, and you can run all types of evaluation frameworks to determine how accurate it is within the scope of that framework. But at the end of the day, when you take this non-deterministic system out into production and start using it, you can’t say with 100% certainty that what it’s telling you is accurate.
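The evaluation idea above can be made concrete with a toy example. This is a hypothetical sketch, not a real framework: it scores model outputs by exact match against reference answers, the simplest possible version of “determine how accurate this is within the scope of that framework.” All names and data here are made up for illustration; real evaluation suites use far richer metrics.

```python
def exact_match_score(predictions, references):
    """Fraction of predictions that exactly match the reference answer
    after trivial normalization (case and whitespace). This measures
    accuracy only within the scope of this reference set: it is
    evidence, not a guarantee about production behavior."""
    assert len(predictions) == len(references), "need one reference per prediction"
    norm = lambda s: " ".join(s.lower().split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(predictions)

# Toy reference set: two of the three model answers match.
preds = ["Paris", "4", "the mitochondria"]
refs = ["paris", "4", "Golgi apparatus"]
print(exact_match_score(preds, refs))  # 2 of 3 match, roughly 0.667
```

The point mirrors Barron’s caveat: a score like this bounds your confidence within the test set, but it cannot certify what the system will say in production.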

And in the defense space, we’re making very impactful decisions, and ultimately, the human is responsible for those decisions. We can’t point a finger and say, “Oh, the AI told me to do this.” You made that decision. It’s on you. And when leaders hear that, if they don’t understand AI, it gets very scary. That sounds very risky. I’m not criticizing; I think it’s about understanding where those limitations are, and then, within them, how you can use AI: how you take what it tells you and make yourself more effective and make decisions, without relying on it blindly.

And I think when that understanding comes broadly, people will be more willing to use it. There’s a lot of curiosity, but people are still hesitant on that front. With that comes recognition, too: we live in an imperfect world. We’re operating with imperfect information, often missing information or sometimes incorrect information. So expecting an AI model to be 100% accurate isn’t a fair assessment; the question is whether it’s as good as or better than whatever manual process we’re currently using. I think people also don’t realize that while LLMs are this new thing, and we’re still figuring out the bounds of what they’re good and not good for, we’ve been using analysis tools and machine learning for decades. It’s not anything new. If a tool gives you information, as a commander, you’re responsible for deciding what to do based on that information. So there are a lot of opportunities to use these generative AI models to generate potential courses of action: “Based on this broad swath of information that I, the AI, was able to collect, interpret, process, and synthesize, here are three potential courses of action, A, B, and C, and why I’m recommending them.” But again, the commander is ultimately responsible for the decision.

So it’s important that we design AI systems that inform and improve, that accelerate that decision advantage, while being careful they don’t become something persuasive that steers the human. That gets into some gray areas. But as far as an information tool to accelerate things, people just need to become more comfortable with that. I think it’ll get there, and we’re seeing a lot of interest; people want to figure out how to start using these things. It’s just that once you get it into place and start using it, you need to understand what to trust, what not to trust, and how to make yourself more effective.

Zach Gardner: Very good way to frame it, very good way to sort of put a bow on the conversation. So if people are curious to learn more about you, learn more about Defense Unicorns, what social media platforms are you active on? Where should people go? Where would you—?

Barron Stone: LinkedIn for me personally. I’m very active on there as Barron Stone. If you search my name, I might be the only Barron Stone on LinkedIn, or there’s not a lot of others. The picture is this face, so you’ll find it. Defense Unicorns, as a company, we’re also very active on LinkedIn. We post on there a lot, and our website is defenseunicorns.com if you want to learn more about what we can offer.

Zach Gardner: I dig it, I dig it. Thank you very much, Barron, for the time, and thank you to everyone who stayed through to listen. Barron dropped some knowledge. Always appreciated, and ladies and gentlemen, catch you in the future.

[Music]

