Debbie Reynolds joins Zach on the podcast today. Debbie has been involved in digital transformation for decades across a variety of industries. She found her niche in data privacy, working on the bleeding edge with corporations as large as McDonald’s to help them prepare for the General Data Protection Regulation (GDPR). Eventually, she started her own consulting firm, focused on advising companies on data privacy compliance and strategy.
Debbie’s views on Generative AI are, of course, filtered through the lens of privacy. She cautions her clients, big and small alike, to be wary about the confidentiality of the information they input into LLMs. Once information is given to the AI, it can be extracted, even if code is written to suppress it. On the flip side, an absence of information can also be dangerous. Even though AI is a machine, it still holds biases, and these biases can encroach on liberty, run afoul of the law, and even harm people.
Debbie encourages users to remember that it’s the AI developer’s profit and the user’s risk, so be diligent about how you use it. Especially with the EU’s impending AI Act (which promises some of the stiffest legislative penalties), she prescribes leveraging AI and LLMs for low-risk use cases like summarizing content or drafting emails. In Debbie’s words, AI “is a source of information, not a source of truth; you as the human have to bring the truth.”