Debbie Reynolds joins Zach on the podcast today. Debbie has been involved in digital transformation for decades across a variety of industries. She found her niche in data privacy, working on the bleeding edge with corporations as large as McDonald’s, helping them prepare for the General Data Protection Regulation (GDPR). Eventually, she started her own consulting firm, focused on counseling companies on data privacy compliance and strategy.
Debbie’s views on Gen AI are, of course, filtered through the lens of privacy. She cautions her clients, big and small alike, to be wary of the confidentiality of the information they feed into LLMs. Once information is given to an AI model, it can be extracted, even if code is written to suppress it. On the flip side, an absence of information can also be dangerous. Even though AI is a machine, it still carries biases, and those biases can encroach on liberty, run afoul of the law, and even harm people.
Debbie encourages users to remember that it’s the AI platform’s profit and the user’s risk, so be diligent about how you use it. Especially with the impending EU AI Act (which carries penalties of up to 7% of worldwide annual revenue for the most egregious violations), she recommends leveraging Gen AI and LLMs for low-risk use cases like summarizing content or drafting emails. In Debbie’s words, AI “is a source of information, not a source of truth; you as the human have to bring the truth.”
About Debbie Reynolds
Debbie Reynolds, known as “The Data Diva,” has solidified her reputation as a leading authority in the fields of Data Privacy and Emerging Technology. With a focus on industries including AdTech, FinTech, EdTech, Biometrics, IoT, AI, Smart Manufacturing, Smart Cities, Privacy Tech, Smartphones, and Mobile App Development, Debbie has over 20 years of experience navigating the complex landscape of Data Privacy and Data Protection.
Debbie’s contributions to the field have earned her numerous accolades, including being named one of the Global Top Eight Privacy Experts by Identity Review and one of the Global Top 30 CyberRisk Communicators by The European Risk Policy Institute in 2020 and 2021. In 2022, the U.S. Department of Commerce appointed her to the Internet of Things (IoT) Advisory Board, and she served as the IEEE Committee Chair for Cyber Security for Next Generation Connectivity Systems.
Find Debbie on LinkedIn or learn more about her firm, Debbie Reynolds Consulting.
View This Episode On:
- YouTube: https://youtu.be/C1_6fK1z8RU
- Apple Podcasts: https://podcasts.apple.com/us/podcast/generative-ai-in-the-enterprise/id1730289289?i=1000664518136
- Spotify: https://open.spotify.com/episode/38RRNn4TmHagKsgRQNv25h?si=54a30d940bc741c5
- … or wherever you get your podcasts!
About The Generative AI In The Enterprise Series:
Welcome to Keyhole Software’s first-ever Podcast Series, Generative AI in the Enterprise. Chief Architect Zach Gardner talks with industry leaders, founders, tech evangelists, and GenAI specialists to find out how they utilize Generative AI in their businesses.
And we’re not talking about the surface-level stuff! We dive into how these bleeding-edge revolutionaries use GenAI to increase revenue and decrease operational costs. You’ll learn how they have woven GenAI into the very fabric of their business to push themselves to new limits, beating out the competition and exceeding expectations.
Partial Generative AI In The Enterprise Episode Transcript
Note: this transcript section was created using generative AI tools like YouTube automated transcripts and ChatGPT. There may be typos, slight content changes, or sections trimmed for brevity!
Zach Gardner: Ladies and gentlemen, welcome to the future! I’m Zach Gardner, the Chief Architect at Keyhole Software. A few months ago, I decided that I simply did not know enough about generative AI. Sure, I’d used ChatGPT, MidJourney, and DALL-E, but what I really felt like I was lacking was a broad perspective from people from many different backgrounds and fields of expertise. People from inside development but also people that are outside, tangential, or in completely different industries than I normally get to talk to. So, I did what any Chief Architect does: I scoured the four corners of the WWW. I got referrals from a number of different people, and one of the referrals is awesome enough to agree to come on and talk with us today.
Today on the program, we have Debbie Reynolds, who’s the Founder, CEO, and Chief Data Privacy Officer of Debbie Reynolds Consulting LLC. Debbie, how’s it going?
Debbie Reynolds: Great! I’m excited to be here, and I love the fact that you started this exploration; so many people are just as curious about this field as you are.
Zach Gardner: Absolutely. And you know, just my disclaimer, which I’m going to get engraved on my tombstone because I’ve said it so many times: all the views and opinions expressed in this program are the views and opinions of the participants and do not reflect their employers, any trade organizations they’re affiliated with, or any loyalty programs they have for grocery stores near them. We’re just two people, we’re just talking. We’re just getting to know each other, talking about a very important topic.
So, to get us started off, Debbie, for the people that aren’t familiar with you, what’s your background? How does one even get into data privacy in this day and age?
Debbie Reynolds: Those are two really good questions. So, first of all, I am a data person. I started my data life in library science. Maybe, Zach, you’re too young to remember when libraries had card catalogs, but I was turning those into databases back in the day. I’ve been involved in digital transformation in many different types of industries. As I took on more projects as a technologist, I worked with multinational companies that had to do data moves or data transformations where there were laws about how certain data could be used. Over the years, I developed a personal interest in privacy. I read a book in 1995 called “The Right to Privacy,” co-authored by Caroline Kennedy. It was about privacy rights in the US, and I was shocked. My mother had given it to me, I read it, and it really caught my interest.
In the US, we feel like we have so many freedoms—freedom of speech, freedom of various things—but privacy was not in the Constitution. I saw those gaps developing as we moved into the more commercialized internet. About ten years ago, companies that knew me from my previous work started calling me, asking about privacy. They knew something was brewing in the EU with their changing laws. One of the first companies to call me was McDonald’s Corporation. I spoke with their corporate legal department about the General Data Protection Regulation (GDPR) before it came out. I told them it was a big deal that would impact the entire world, not just the EU. After that, I decided to focus on privacy. A couple of years later, PBS contacted me to talk about GDPR, and I made some predictions that all came true. People still call me about that interview, but that’s how I got into privacy.
Zach Gardner: Awesome! Don’t let my boyish good looks deceive you; a good moisturizer is all it takes. I do recall the Dewey Decimal System 100%. I have a 9-year-old and a 6-year-old, so we’ve had to teach them how to locate books. It’s always fun to remember the things from when we were kids. The GDPR is also a very interesting topic, as well as the EU AI Act. If you could talk a little bit about how you’re using generative AI in both your personal and professional life, what are your go-to tools, and where are you staying away from it?
Debbie Reynolds: That’s a good question. First of all, I love technology, but I don’t love everything people try to do with it. I always like to kick the tires on technology to see if it’s something I can use and understand its limits. I really like ChatGPT and the large language models because they can be like a low-level assistant. They can do low-risk, tedious tasks that you don’t want to do. I have a media company, and I use AI for summarizing videos or generating hashtags for posts. It saves time, which is crucial for a smaller business. I also use it for summarizing emails or organizing them better.
One interesting use case is for passive-aggressive emails. You can write an email the way you want and then ask AI to improve it, making it sound more polite. It’s very good for tasks like content creation, spell checking, and other low-level tasks. I don’t love it when people say AI will cure cancer or the world will end in two years. These tools should not replace humans; they are helpers. They should be used for tasks that save time, not for critical decision-making. When talking to companies about privacy, I advise against putting sensitive information into public models because it’s hard to track what happens to that data.
For me, I tell companies to start with low-risk, tedious tasks when using AI. Don’t put personal or sensitive data into these systems. Large language models give answers whether they’re right or not, so accuracy can be an issue. Biases can also be a problem. For example, Amazon had a hiring algorithm that favored men over women. It’s crucial to understand what the AI is doing and how it’s working. When using these tools, companies should play around with them to understand their capabilities and limitations.
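As a rough illustration of that advice, a team might scrub obvious identifiers out of a prompt before it ever reaches a public model. The sketch below is a minimal, assumption-heavy Python example; the regex patterns are deliberately naive, and the actual LLM call is left out because no particular tool or API is endorsed in the episode.

```python
import re

# Very rough patterns for a few common identifiers. Real PII detection needs
# a dedicated library or service; this only illustrates scrubbing a prompt
# before it leaves the company's boundary.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = "Summarize this note from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> "Summarize this note from [EMAIL], SSN [SSN]."
# Only the redacted prompt would then be sent to whatever LLM the team uses;
# that call is intentionally omitted here.
```

Even with scrubbing like this, Debbie’s baseline advice stands: keep proprietary and sensitive data out of public models entirely whenever you can.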
Zach Gardner: So many good things to unpack in there. One of the things you mentioned was being able to summarize large documents. I’ve heard some companies have massive amounts of internal policies, and it can be difficult to navigate them. Being able to use AI for retrieval and summarization can save a lot of time. One of the earliest examples I heard of someone getting in trouble was Samsung. Some employees uploaded confidential information into ChatGPT for summarization, and that data left the company’s control. Samsung ended up blocking the tool at the firewall level. I’m curious about GDPR considerations and what companies should be worried about regarding this technology for proprietary and confidential information.
Debbie Reynolds: I highly recommend that companies do not put proprietary or confidential information into public models. Once that information is in there, it can be difficult to remove, and someone smart enough can extract it. Large language models are designed to give answers, not necessarily the correct ones. An absence of information is also information, which can lead to biases. For example, Amazon’s hiring algorithm favored men because it was fed more male candidates. Companies need to be aware of how these tools can impact their operations and ensure transparency.
When using AI, it’s important to start with low-risk tasks that won’t harm anyone. Regulators don’t care about summarizing documents, but they do care about tasks that could harm someone’s life, liberty, or privacy. Companies should be cautious about the data they put into these systems and how they use it. I always tell people that the company’s profit is their profit, but the risk is yours. You need to figure out your tolerance for risk. Samsung’s decision to block AI at the firewall level is one approach to mitigate that risk.
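Tying Zach’s policy-summarization example to Debbie’s “keep the risk low, keep the data in-house” advice, here is a hypothetical sketch of pulling a few internal policy snippets and building a prompt without handing whole documents to a public endpoint. The snippets, the toy keyword scoring, and the idea of routing the prompt to a self-hosted model are illustrative assumptions, not anything prescribed in the episode.

```python
# Toy keyword-overlap retrieval over internal policy snippets. It shows one
# way a "summarize our policies" workflow can keep documents inside the
# company's boundary.

POLICY_SNIPPETS = [
    "Employees must not upload source code to external services.",
    "Expense reports are due by the fifth business day of each month.",
    "Customer data may only be stored in approved, encrypted systems.",
]

def retrieve(question: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Rank snippets by shared words with the question (very naive scoring)."""
    words = {w.strip("?.,").lower() for w in question.split()}
    def score(snippet: str) -> int:
        return len(words & {w.strip("?.,").lower() for w in snippet.split()})
    return sorted(snippets, key=score, reverse=True)[:top_k]

question = "Where can customer data be stored?"
context = retrieve(question, POLICY_SNIPPETS)
prompt = (
    "Answer using only these policy excerpts:\n- "
    + "\n- ".join(context)
    + f"\n\nQuestion: {question}"
)

# The prompt would then go to a model the company controls (for example, one
# hosted on internal infrastructure) rather than a public endpoint, so the
# policy text never leaves the organization.
print(prompt)
```

The design point is simply that the less sensitive the text you hand to a model, and the more control you have over where that model runs, the lower the risk Debbie describes.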
Zach Gardner: Their profit, your risk. That’s an interesting take on Facebook’s model where the user is the product. You mentioned biases and legal issues. Recently, Congress passed the TikTok bill, which requires TikTok to be divested from ByteDance within 12 months or face a ban in the US. One of the concerns was a bias against certain types of information, like Tiananmen Square or Taiwan. Automation and computing power are great, but human decisions are still necessary. Both in terms of training sets and output controls, there are biases. I’m curious if you’ve come across anything related to the EU AI Act or potential litigation for companies based on their decisions.
Debbie Reynolds: The AI Act is almost final, with just a few more hurdles before it becomes official. Once it’s final, it will have a date when the law goes into effect, and enforcement will begin within two years. The AI Act focuses on companies using AI in ways that could create human harm, particularly high-risk scenarios. Regulators are not concerned with low-risk uses but are very interested in preventing harm. In the US, each federal agency will have an AI chief to ensure AI is not used in harmful ways.
The AI Act has stiff penalties, with companies facing fines of up to 7% of their worldwide annual revenue for egregious violations. This should make companies think carefully about their use of AI. The EU’s best export might be regulation, as seen with the GDPR, which has influenced laws worldwide. The AI Act is expected to have a similar impact. Keeping up with these regulations can be challenging, but it’s essential for companies to stay informed and compliant.
Zach Gardner: Hard to keep up with this stuff. I go to presentations on this every week. This is really good, Debbie. If people want to learn more about you and your work, where should they go? Any final thoughts on generative AI?
Debbie Reynolds: It’s such an interesting frontier. I kind of roll my eyes when I hear people say the world will end in two years or AI will cure cancer. It’s not going to do either of those things. What people need to understand about generative AI is that it will have a horizontal impact on almost any industry, much like the internet. We need to learn more about AI, understand what it can and can’t do. I’m not concerned about AI taking over the world, but I am worried about how people use it. People can be less than virtuous, and we need to understand the technology to use it responsibly.
For those interested, I’m on LinkedIn as Debbie Reynolds. You can find me there, on Twitter as Debbie Reynolds Consulting, and on my website, debbiereynoldsconsulting.com. My podcast, The Data Diva Talks Privacy, is available wherever you get your podcasts. I love talking to people, so feel free to reach out.
Zach Gardner: I love it! Everybody, if you want to get in touch with us or see what we’re doing, check out our website at KeyholeSoftware.com. You can reach out to us there. We have podcasts, blogs, and various other things. This has been fun, Debbie. I hope you come back and talk to us again.
Debbie Reynolds: Absolutely! Thanks so much for having me.
Zach Gardner: Thanks, everybody! Until next time, stay safe out there.