
GenAI in the Enterprise: Erik Hermansen, Director of Engineering at Bayer

Erik Hermansen, a Director of Engineering at Bayer, joins us on the Generative AI in the Enterprise podcast today. Erik is a self-taught software engineer who is passionate about bleeding-edge technologies like generative AI and large language models (LLMs). He has a lot of sympathy for folks who feel overwhelmed by all the hype around GenAI. His advice is to slow way down and focus on what can be done with the tech today.

Throughout today’s episode, Erik and Zach discuss the practical implications of GenAI right now. They’ll give advice to engineers and businesspeople alike on how to take advantage of Generative AI and LLMs with bite-sized tasks that can be completed now, not at some unknown date in the future. Listen in as we cut through the hype and excitement and get down to the meat and potatoes.


About Guest Erik Hermansen:

Erik Hermansen is a technology leader who has recently been focused on digital agriculture solutions. In addition to his work as the Director of Engineering at Bayer, he has some side projects going in areas of animation, writing, and voice acting. He is passionate about diversity and fairness in the workplace and is always interested in friendly, open conversation.

You can reach Erik at https://www.linkedin.com/in/erikhermansen/. Connections accepted, just mention LLMs, Zach, or Keyhole in your request.

About The Generative AI In The Enterprise Series:

Welcome to Keyhole Software’s first-ever Podcast Series, Generative AI in the Enterprise. Chief Architect, Zach Gardner, talks with industry leaders, founders, tech evangelists, and GenAI specialists to find out how they utilize Generative AI in their businesses.

And we’re not talking about the surface-level stuff! We dive into how these bleeding-edge revolutionists use GenAI to increase revenue and decrease operational costs. You’ll learn how they have woven GenAI into the very fabric of their business to push themselves to new limits, beating out competition and exceeding expectations.


Partial Generative AI In The Enterprise Episode Transcript

Note: this section was created using generative AI tools like YouTube automated transcripts and ChatGPT. There may be typos, slight content changes, or trimming for brevity!

Zach Gardner: Ladies and gentlemen, welcome to the Future. My name is Zach Gardner, and I’m the Chief Architect at Keyhole Software. About three to four months ago, I set off on a little bit of a quest. I scoured my contacts on LinkedIn to find other people who were as curious about generative AI as I was. Thankfully, for you all today, a few people agreed to talk to me, giving their insights and real-world advice on large language models, highlighting both the positives and the potential drawbacks.

Today, on the program, I have Erik Hermansen. Did I say that right? I got it right?

Erik Hermansen: You did, awesome. That’s a great way to start out an interview – actually pronouncing someone’s name right, just a little pro tip. Thank you very much. I’m the Engineering Director at Bayer.

Zach Gardner: Erik, thank you very much for agreeing to be on the program.

Erik Hermansen: Glad to be here. Just as a disclaimer, all the views and opinions expressed in this program are those of the participants and do not reflect those of their employers or any trade organizations they are affiliated with. It’s just two people having a chat, you know, talking about stuff.

Zach Gardner: So, to get us started off, Erik, for those in the audience who don’t know you as well as I do, give me your background. What was your career path? How does one eventually rise to becoming an Engineering Director at Bayer?


Erik Hermansen: Well, I was a community college dropout, trying to make money to get back to school. This was back in the 90s. The jobs I was getting kept getting better, and finally, I was writing code in one of my jobs. I realized, “Wait a minute, I’m going back to school to learn how to get a programming job, but I already have one.” So, I never actually got a degree. I ran a freelance consulting business for some years.

Eventually, I felt like all the interesting projects were happening inside companies after I left. So, I decided to become a regular employee and see what that was like for a while. I worked as an engineer, and then after a bit, my manager quit. I said to her boss, “Maybe I could take over and do what she was doing,” and that led me into management.

After so many years and decades, here I am, a Director of Engineering, managing managers of engineers. Along the way, I’ve stayed connected to my engineering roots, writing code almost every day, working on personal projects, and open-source projects. It keeps the joy of engineering alive in me and helps me stay in touch with the field.

Zach Gardner: That might have been how we connected, talking about GitHub Copilot and other tools that generate code. Four or five months later, here we are. To start off with generative AI, can you talk to our audience about how an enterprise large language model differs from the foundational work larger companies are doing? If a company isn’t sure how to get started, what would be the baby steps to enter this new world we find ourselves in?

Erik Hermansen: First off, I sympathize with people overwhelmed by all the AI news. Here’s one simple trick to calm down: focus on what can be done today, not what’s promised for next year or what might take us to Skynet. Look at what’s possible today, and the world becomes much simpler. As for the big, scary problems of the future, you’ll be in the best position to defend against them if they ever come up.

With LLMs, there’s a lot you can do today. Start with third-party tools inside your enterprise. Then look at internal use cases you can automate – internal use means less pressure to get it exactly right, allowing you to fail quietly. The more ambitious rung is adding LLM-based features to the products your customers use. Foundational work, like training and releasing models, is a specialized area that commands million-dollar salaries.

Zach Gardner: Speaking of which, with programming, what’s your experience with GitHub Copilot? Have your engineers used it? Where do you see it being used for good, and do you have any concerns about the next generation of programmers being reliant on this tool?

Erik Hermansen: I started using Copilot about a year ago, writing code with it. If you listen to the hype, it’s all going to be no-code solutions, programmers losing jobs, everything push-button. But nothing like that exists now.

I did a Twitch stream, writing code on a personal project with Copilot for months. Later, I reviewed my videos to see how much time I saved using the tool. I found I saved about 30% of my time, which matched GitHub’s reports on productivity increases. A 30% improvement is the closest thing to a silver bullet in software engineering.

New engineers should turn off Copilot for long stretches to make sure they understand the basics. Guessing and hoping things work won’t serve them well. Companies should have good processes in place for code review, code scanning, and code quality to catch major problems.

Zach Gardner: I agree. In my experience, Copilot is excellent for generating model classes quickly. But, if I didn’t have a fundamental understanding of programming, it might be hard to even know how to formulate questions properly. With Stack Overflow, Google, and blogs, we had to learn to find information effectively.


I’ve heard concerns about the next generation of programmers. Will they have the same scar tissue we do? Maybe not, and maybe that’s a good thing. What’s your take on this tool for managers and directors? Should they be cautious, or embrace it?

Erik Hermansen: If your company doesn’t have a code review process or tools like DAST, SAST, or SonarQube, and you rely entirely on an engineer’s output, you’re already in trouble. With processes in place, you can catch major issues.

For new engineers, they’ll be born into this stuff, getting through exams with ChatGPT, blundering through programming. It’s more concerning for the next gen. Companies should adapt their hiring practices and ensure internal processes for code quality. GitHub Copilot might be good for productivity, but it’s not a replacement for a solid understanding of programming fundamentals.

Zach Gardner: Agreed. I read a study where GitHub Copilot generated 500 examples, and 30% had obvious security vulnerabilities. DARPA has a project paying people to create large language models to identify performance issues and security vulnerabilities in code generated by other large language models. Using generative AI to fix its limitations is a good use case.

Are there other good use cases for large language models, or any bad ones you’ve seen?

Erik Hermansen: There’s third-party software, then internal use cases and software you might build. There will be good and weird use cases that make sense internally but not externally. Avoid using an LLM for something a regular algorithm can do, like search and replace. Using an LLM for numerical calculations or math-based predictive models isn’t ideal either.

The cost of LLMs is significant. It might be a penny per prompt, but if every query costs a penny, that adds up quickly. If you chain prompts, it could be a 20-cent transaction. Scaling up can result in significant costs, potentially getting people fired.
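
To put those numbers in perspective, here is a back-of-the-envelope cost sketch in Python. The per-prompt price and prompt count are the illustrative figures from the conversation, not actual vendor pricing:

```python
# Rough LLM cost estimate. All prices are illustrative assumptions
# drawn from the conversation, not real vendor rates.
COST_PER_PROMPT = 0.01          # ~a penny per prompt
PROMPTS_PER_TRANSACTION = 20    # a chained-prompt workflow (~$0.20/transaction)

def monthly_cost(transactions_per_day: int, days: int = 30) -> float:
    """Estimated monthly spend for one LLM-backed use case."""
    per_transaction = COST_PER_PROMPT * PROMPTS_PER_TRANSACTION
    return transactions_per_day * per_transaction * days

print(f"${monthly_cost(10_000):,.2f}")  # 10,000 transactions/day -> $60,000.00/month
```

At even modest enterprise scale, a 20-cent transaction quietly becomes a five-figure monthly line item, which is exactly the kind of surprise being warned about here.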

Zach Gardner: Agreed. High transactional costs are a concern. One anti-pattern I’ve seen is developers doing queries inside loops, resulting in performance issues. A large language model detecting such patterns during code review would be valuable.
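
A minimal illustration of that anti-pattern, using SQLite so it runs standalone (the schema and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ada"), (2, "grace"), (3, "alan")])

def fetch_names_slow(ids):
    # Anti-pattern: one database round trip per loop iteration (N queries).
    return [conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
            for i in ids]

def fetch_names_fast(ids):
    # Fix: a single batched query (one round trip).
    marks = ",".join("?" * len(ids))
    rows = conn.execute(
        f"SELECT name FROM users WHERE id IN ({marks}) ORDER BY id", ids
    ).fetchall()
    return [name for (name,) in rows]
```

The same shape applies to LLM calls: a prompt issued inside a loop multiplies both latency and per-prompt cost, so batching the work into one request is usually the first optimization to reach for.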

PayPal’s architecture requires API calls for all queries, ensuring intentional database hits. Maybe that approach will be applied to LLMs soon.

Erik Hermansen: As companies mature, they might look into use-case-based throttlers or rate limiters. Knowing the ROI of each use case helps control costs. You might set limits on queries per day for specific use cases, keeping costs under control.
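
One way to sketch that idea is a toy per-use-case daily quota. This is an illustrative design, not a production rate limiter, and names like `UseCaseQuota` are invented here:

```python
from collections import defaultdict
from datetime import date

class UseCaseQuota:
    """Toy daily quota per LLM use case (illustrative sketch only)."""
    def __init__(self, daily_limits):
        self.daily_limits = daily_limits      # e.g. {"summarize": 1000}
        self.counts = defaultdict(int)
        self.day = date.today()

    def allow(self, use_case: str) -> bool:
        today = date.today()
        if today != self.day:                 # new day: reset all counters
            self.day, self.counts = today, defaultdict(int)
        if self.counts[use_case] >= self.daily_limits.get(use_case, 0):
            return False                      # over budget: reject (or queue)
        self.counts[use_case] += 1
        return True

quota = UseCaseQuota({"summarize": 2})
print([quota.allow("summarize") for _ in range(3)])  # [True, True, False]
```

Tying each limit to a use case, rather than one global cap, is what lets you reason about the ROI of each feature separately.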

Zach Gardner: One other anti-pattern is Oppenheimer syndrome, where an engineer creates an agent with persistent memory, running it over the weekend to see what happens. This could lead to serious problems, like external threats accessing your network.

NIST is updating their secure software development framework to include recommendations on generative AI. Deploying new iterations of a large language model is different from traditional application development, focusing on data rather than code. It introduces new attack vectors and requires different considerations.

What’s the right combination of words to convince executives or directors to support LLM initiatives?

Erik Hermansen: In a company, you have a compliance organization covering IT security, product security, regulatory compliance, and legal concerns. Are they ready for LLM-based issues? Have a friendly conversation with them to see what they’re prepared for before making requests related to LLMs.

The benefits are there, and while I don’t like fear-based tactics, your competition might be further along in LLM maturity, realizing benefits before you. The opportunity cost of not being ready for LLM-based capabilities is significant.

Zach Gardner: That’s a great place to leave it. Erik, if people want to find out more about you or follow you, where can they go?

Erik Hermansen: They can find me on LinkedIn or follow my work on GitHub. Always happy to connect and discuss these exciting developments in AI and software engineering.

Zach Gardner: Fantastic. Thank you for joining us today, Erik.

Erik Hermansen: Glad to be here. Thanks for having me.

