
GenAI in the Enterprise with Dr. Peacock, Chief Architect & CIO

We have a doctor on the pod today! Zach interviews Chief Architect and CIO Christopher Peacock, Ph.D. (or Dr. P for short) on this GenAI in the Enterprise episode. They explore the dynamic landscape of #GenerativeAI, shedding light on both its transformative potential and its inherent risks. Key takeaways include…

  • Potential and Risk: #GenAI offers groundbreaking capabilities, from innovative solutions to streamlined processes, but it also introduces security risks, such as automated denial-of-service attacks.
  • Need for Quality Assurance AI: There’s a pressing need for quality assurance AI to validate data integrity and ensure the reliability of AI-generated content, especially in legal and cybersecurity contexts.
  • Human Oversight and Responsibility: Despite AI’s power, human oversight remains crucial in interpreting and adjudicating AI-generated content, highlighting the ethical responsibility of technology practitioners.
  • Mitigating Risks and Ethical Use: As technology evolves, practitioners must guide AI’s ethical use, mitigating risks like misinformation and cyberbullying while maximizing positive societal impact.


About Dr. Peacock:

Dr. P is presently working as a CIO, as well as Chief Architect on several projects. He is passionate about evangelizing new technology and concepts and about road mapping across industries, all while maintaining a sense of humor. He is ahead of the wave most of the time, with 30+ years of innovative, entrepreneurial, and strategic thought leadership. Dr. P is a catalyst for change, known for his big-picture vision and his drive for success no matter what it takes.

Find Dr. P on LinkedIn: https://www.linkedin.com/in/drpeacock/

About The Generative AI In The Enterprise Series:

Welcome to Keyhole Software’s first-ever Podcast Series, Generative AI in the Enterprise. Chief Architect, Zach Gardner, talks with industry leaders, founders, tech evangelists, and GenAI specialists to find out how they utilize Generative AI in their businesses.

And we’re not talking about the surface-level stuff! We dive into how these bleeding-edge innovators use GenAI to increase revenue and decrease operational costs. You’ll learn how they have woven GenAI into the very fabric of their businesses to push themselves to new limits, beating out the competition and exceeding expectations.


Partial Generative AI In The Enterprise Episode Transcript

Note: this transcript section was created using generative AI tools like YouTube automated transcripts and ChatGPT. There may be typos, slight content changes, or trims for brevity!

[Music]

Zach Gardner: Ladies and gentlemen, welcome to the Future. My name is Zach Gardner, the Chief Architect of Keyhole Software. I set off on a mission about six or seven months ago when generative AI was all anyone could talk about. Frankly, I was a little tired of hearing about it, but maybe subconsciously, I also realized I didn’t know enough about it. As a Chief Architect, I’m expected to know about everything, even things I couldn’t possibly be an expert on, and generative AI was certainly among those topics.

So, I did what any self-respecting Chief Architect would do. I scoured the four corners of the worldwide web. I found people from diverse backgrounds, people in industries I don’t normally interact with, people with insights, and people who have been thinking about this for longer than I have—maybe even programming since before I was born. And don’t let my boyish good looks fool you: if I shaved my beard, I’d look like I was 13 years old. No joke.

Today, I have the esteemed Dr. Peacock with me. He’s a Chief Architect like me and a CIO, unlike me. Dr. Peacock, welcome to the program.

Dr. Peacock: Thank you very much for having me here.

Zach Gardner: Anytime, anytime. And just as a reminder for our more litigious audience members: all views and opinions expressed in this program are those of the participants and do not reflect their employers, trade organizations, yacht clubs, or loyalty programs at various supermarkets and grocery stores. It’s just two dudes talking; we’re here to have a good time, that’s all.

Dr. Peacock: Totally.

Zach Gardner: So, for those in our audience who have not heard of you yet or had the pleasure of meeting you, can you give them a little bit about your background? What industries have you worked in? Where did you come from? What was the first thing you remember programming?

Dr. Peacock: Well, as I said, I’m Dr. Peacock. I have a doctorate in Information Technology with a specialization in algorithms, specifically wireless network algorithms. But I’ve been in IT for a very long time—much longer than you’ve been alive. Despite my good looks, I’m older too. I actually began programming many years ago in an entrepreneurial endeavor: I was working on a program for a game, and we were creating a transactional gaming platform. That’s how I delved into programming. Since then, things have gotten much more serious, and I’ve been involved with organizations ranging from small mom-and-pop shops to globally impactful companies. Currently, I am the Chief Architect for 3BX and the CIO for a couple of other companies, and I was invited here to share my viewpoint in this particular talk.

Zach Gardner: For sure, for sure. One of the things we talked about when we first spoke—that was January, and maybe it’s March now—was cybersecurity. These chatbots are out there; people are using generative AI to create them, and they interact with human beings like you and me. What are some of the risks you’re seeing, or that you’ve heard about, as it relates to chatbots?

Dr. Peacock: Some of the risks that I think are fairly obvious include denial of service attacks and similar issues that bots can enable. The nature of learning bots allows them to find potential security risks much more expediently than a regular human can. Previously, you would have groups or bands of cyber attackers, but now you can do the same thing with one bot that auto-generates attacks. This can overwhelm traditional defenses, and as more automated systems come into play, it becomes a resource battle. You need to consider what resources you can throw at the problem. It’s crucial to find a balance between functionality and protection—where do you draw the line?
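To make that “where do you draw the line” trade-off concrete, here is a minimal sketch of one common defensive control: a token-bucket rate limiter that allows short bursts while capping any single client’s sustained request rate. The class and thresholds are illustrative only, not something discussed in the episode.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: permits short bursts while capping
    the sustained request rate of a single client."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop, delay, or challenge the request

# Illustration: 5 requests/second sustained, bursts of up to 10.
limiter = TokenBucket(rate_per_sec=5, burst=10)
allowed = sum(limiter.allow() for _ in range(100))
print(f"{allowed} of 100 back-to-back requests allowed")
```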

Zach Gardner: Interesting. Now that we have systems that can synthesize information faster than a human with cognitive limitations, I wonder if there will be funding for projects to have chatbots that defeat other chatbots using generative AI.


DARPA is actually doing that with one of their more recent funding opportunities. They’ve recognized that generative AI can generate text, pictures, and code, and that the code it generates might contain security vulnerabilities. So DARPA is funding efforts to have large language models analyze code generated by other models to identify flaws. Have you seen any strategies or insights into combating the proliferation and attack-surface expansion caused by chatbots?

Dr. Peacock: Yes, I was aware of those efforts. Currently, the code generated by bots lacks sophistication, but that will change. Right now, you can identify bot-generated code because of certain structures, but as bots mature, this will become a bigger issue. The ability to fight bots with our own bots will become mandatory. The sophistication of what’s being generated still leaves much to be desired. Many developers use bots to fill gaps, but often the output is vulnerable and not seamless enough for production.
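As a rough sketch of the “fight bots with our own bots” idea, the snippet below asks one model to security-review code produced by another. It assumes the OpenAI Python client and an illustrative model choice; a real review pipeline would need far more rigor than a single prompt.

```python
from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical bot-generated snippet with a classic flaw:
# string-formatted SQL is open to injection.
generated_code = """
import sqlite3

def find_user(conn, name):
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
"""

review = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List concrete "
                    "vulnerabilities in the code and suggest fixes."},
        {"role": "user", "content": generated_code},
    ],
)
print(review.choices[0].message.content)
```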

Zach Gardner: I’ve played around with it a bit too. For instance, I had a project where I needed to do some regression analysis in Python. I could have spent a lot of time researching, but instead, I asked ChatGPT and found it to be an effective starting point. However, copying and pasting directly into my source control without ensuring there are no vulnerabilities is risky and something that would keep a CIO up at night.
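For context, the kind of starting point Zach describes might look something like the ordinary least-squares fit below; the data is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data standing in for a real data set.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1.5, size=100)

# Fit a simple linear model and report the recovered parameters.
model = LinearRegression().fit(X, y)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}")
print(f"R^2={model.score(X, y):.3f}")
```

A perfectly useful draft, but as Zach notes, it still deserves human review before it goes anywhere near source control.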

Dr. Peacock: Exactly. I use these tools to generate content, which then takes me less time to analyze and clean up. It’s a great resource for determining the best way forward, but you have to scrutinize it to ensure it’s truly valuable and usable.

Zach Gardner: There was a court case last year where an eviction lawyer in Florida submitted a brief to a judge that cited court cases the judge had never heard of. Upon investigation, the lawyer admitted he used ChatGPT to generate the brief. This highlights that while these tools are powerful, it still takes a human to adjudicate the responses and interpret what is actually effective.

Dr. Peacock: Exactly. These tools are incredibly powerful, but they require human oversight to ensure their output is valid and useful.

Zach Gardner: …and I think the moral of the story is that these are tools—powerful tools, like my iPhone or this pencil I’m holding. We’ve spent a bit of time on the downsides of these tools, so I’m curious: have you seen any really compelling, positive use cases beyond document analysis and synthesis? Anything else come to mind before we get into some of the more negative stuff?

Dr. Peacock: Well, as you were speaking, I found myself thinking about the algorithms you could use to have a bot check how valid content is. That would actually be a very marketable, commercial thing to do, because I can think of ways it could be used to clean data up and assess the viability of what’s in there. Sorry, that’s a side topic that crossed my mind as you were talking. In terms of good examples, I’ve seen it used to validate raw data and remove outliers, massaging the data down to the point where it’s more usable and viable for developers or the users themselves. That’s a viable use for it. Beyond the reporting and the scripting shortcuts you brought up, checking raw data in transit, or even at rest, would be very valuable: if you check that data in transit, before it’s actually utilized, you can save a lot of time in various verticals and industries. Manufacturing especially comes to mind. So, that’s a potential positive I can think of.
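As a minimal sketch of that outlier-screening idea, here is a conventional interquartile-range filter in pandas; the column name and threshold are assumptions for illustration.

```python
import pandas as pd

def drop_iqr_outliers(df: pd.DataFrame, column: str, k: float = 1.5) -> pd.DataFrame:
    """Keep rows whose value falls inside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    return df[df[column].between(q1 - k * iqr, q3 + k * iqr)]

# Hypothetical sensor readings with one obviously bad value.
readings = pd.DataFrame({"temp_c": [21.2, 21.6, 20.9, 22.1, 98.4, 21.4]})
print(drop_iqr_outliers(readings, "temp_c"))
```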

Zach Gardner: Interesting. I hadn’t thought of that one before. It’s almost like you would have not an adversarial generative AI but maybe a quality assurance AI. Is that a good way of thinking about it?

Dr. Peacock: Quality control, call it QA/QC, that’s what I was trying to get at.

Zach Gardner: Right, right.

Dr. Peacock: Now, you could do the same thing on a much more scaled-down level. I was thinking about exchanges from a device-to-device viewpoint, but if you throw in the human aspect, you could use bots as a quality check on personal interviews and whatnot—scouring for data and capturing it for various data sets. Right now, from the data side of things, you have non-standardized data sets. The bot doesn’t care; as long as it’s programmed appropriately, it could condense all of that into something more usable. It just crossed my mind that if no one’s done that yet, maybe someone will get a business idea out of this videocast.
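One way to picture that normalization step is a small sketch that maps differently named fields from multiple sources onto a single canonical schema; all field names here are invented for illustration.

```python
# Canonical field name -> aliases seen across non-standardized sources.
FIELD_ALIASES = {
    "customer_name": {"name", "full_name", "customerName"},
    "email": {"email", "e-mail", "emailAddress"},
    "signup_date": {"signup", "created_at", "signupDate"},
}

def normalize(record: dict) -> dict:
    """Return the record keyed by canonical field names."""
    out = {}
    for canonical, aliases in FIELD_ALIASES.items():
        for key, value in record.items():
            if key == canonical or key in aliases:
                out[canonical] = value
    return out

raw = {"full_name": "Ada Lovelace", "e-mail": "ada@example.com", "created_at": "2024-03-01"}
print(normalize(raw))
# {'customer_name': 'Ada Lovelace', 'email': 'ada@example.com', 'signup_date': '2024-03-01'}
```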

Zach Gardner: You never know. You know, I did a little thing with some data cleaning—I’m not going to get into the organization I did it with—and we were trying to do something along those lines, but it never came to fruition before I had to leave. There is potential there, though, not just as a standalone offering in itself, even though it could be that, but as a supplement to other services that might be offered. Just a side thought.

Dr. Peacock: Hm, interesting.

Zach Gardner: In some ways the technology kind of reminds me of blockchain. When it came out, it was supposed to be revolutionary; there were going to be so many use cases for it. And then what? We got HODL, and Bitcoin’s up to 70 grand now. That’s about it; there’s not a lot it’s actually used for. With generative AI, one of the things that scares me, given that we’re in March 2024 heading into an election in a few months, is the geopolitical risk. With the 24-hour news cycle, there’s the potential that a news organization could accidentally pick up something completely fabricated because they want to be the first to break the story. You know, if it bleeds, it leads. I’m curious if there are any other existential threats, beyond the ones we’ve talked about, that you’re thinking about as a Chief Architect, a CIO, or even just as an American.


Dr. Peacock: Well, the threat to us as people, I think, trumps them all, and it’s already out there. There are already AI-generated statements attributed to famous people, and generally a lot of people don’t validate what they’re taking in. They make automatic assumptions: “Oh, it’s obviously true because so-and-so said it.” That’s a logic problem we’re not going to get into. I don’t do that, but a lot of people do, just because they heard it. Hearing it doesn’t make it true, but a lot of people make that assumption.

That very thing could be turned around to help wrongly convict someone: slander campaigns and the like, easily. I’m amazed it’s not happening more already; maybe it is and I’m just not aware of it. That kind of thing is an obvious negative. My character could be slandered out there and I wouldn’t even be aware of it. If bad things happen in your career, and I know this has happened to me, you can’t get back into a particular vertical for one reason or another, and you wonder whether it’s because they heard about that problem from five or ten years ago. What if it was based on something that never happened, with no grounds for it? How does the “audience” know? If it’s phrased or positioned the right way, it doesn’t matter; it paints you in a very bad light, and that can have very negative ramifications, not just personally but professionally.

And when you talk about cyberbullying and things like that, it immediately made me think bots are going to come into play there too. Just create a bot to bully somebody. Why not? Someone could pay a few bucks to have a bot built to do it and never have to lift a finger; just let something else do it. That’s a negative side. Sorry, it just came to mind.

Zach Gardner: No, I mean the risks are definitely there and I think we, as technology practitioners, need to be aware of them and keep them front and center in our mind so that when we’re deploying solutions, when we’re talking to people that are thinking about deploying solutions, when we’re talking to people that are thinking about thinking about deploying solutions, we can kind of steer them in the right direction. So, no, I appreciate the insight. A lot of the things that you brought up I hadn’t really thought about before, which is 100% the point of this videocast.

Dr. Peacock: It’s all conceptual, you know. In terms of content, a lot of these things you might not think about, because if you just sit around brainstorming, you have a tendency to go down avenues you’re familiar with. For me personally, I don’t have very many preconceptions, so I can go off in all kinds of weird directions. But those things have to be taken into consideration because, like you were saying, we have to be the gatekeepers, the guards, for these things being used appropriately. Just because something can be done doesn’t mean it should be done. I forget where I heard that. I’m sure Patton said it, or Eisenhower, somebody did, I don’t know who.

Zach Gardner: No, that’s a great way to kind of wrap up the conversation. If people want to learn more about you, learn more about your work, where should they go? What social networks are you active on? Where can people find you?

Dr. Peacock: On the weekends? At home.

Zach Gardner: Good answer.

Dr. Peacock: Honestly, if you do a search for me, you’ll find me. There are a couple of other people who come up higher in the search results—their SEO is better than mine—but only a couple, and then there’s me. Which is a little troubling, considering some of the stuff I’ve done has been cleared, but we won’t go there; that stuff you can’t find, so don’t look for it. Sorry, I digress a little. I’m actually on a lot of different networks, under different facets of my persona or personality. I’m on some of the social networks; you name them, I’m probably there somewhere. Not that I’m there a lot, but I visit. Professionally, I’m on LinkedIn, where I do a lot of networking and things like that. There are a few other professional networks too, much less mainstream, but I’m out there as well. Just do a little search and you’ll find me.

Zach Gardner: Okay, will do. Social networks, they’re a great place to visit, terrible place to live.

Dr. Peacock: Exactly. You could waste so much time and achieve nothing. I normally set a time limit—I’m only going to be there for so long. I fail most of the time, but not by much: I say, okay, I’ve been here half an hour, I’m gone.

Zach Gardner: Good. It’s a lesson we all can learn. So, Dr. Peacock, thank you very much for the time. Appreciate it as always.

Dr. Peacock: My pleasure, and thank you very much for having me. I hope this benefits those who listen.

Zach Gardner: Me too. Ladies and gentlemen, catch you in the future.

[Music]

