Saturday, March 22, 2025

Avoid These Risky Blindspots When Using Gen AI Company Wide

Opinions expressed by Entrepreneur contributors are their own.

Interest in gen AI hasn’t slowed, but company-wide implementation has, as more risks come to light. Recent research in manufacturing found that growing concerns about gen AI risks are leading manufacturers to pause deployment.

This article explains three blindspots that can be catastrophic. But first, know that gen AI isn’t like other technology.

Gen AI works differently from other AI and tech

Three key differences are:

  • Gen AI depends on neural networks, which are inspired by the brain. And we don’t completely understand the brain.
  • Gen AI also depends on large language models (LLMs) with large sets of content and data. What exactly is in the LLM varies among generative AI solutions, as does their approach to disclosure.
  • Scientists don’t know exactly how gen AI works, as MIT Technology Review has reported in depth.

Although gen AI is powerful, it’s full of unknowns. The more we shed light on its “gotchas,” the more you can manage the risks of deploying it.

Related: Why GenAI is the Secret Sauce for Good Customer Experiences

1. Intensifying demand for transparency

The demand for transparency about how companies use gen AI is growing from the government, employees and customers. Not being prepared puts your company at risk of fines, lawsuits, losing customers and worse.

Legislation regulating gen AI has proliferated around the world at all levels. The European Union set the tone with its AI Act. To stay on the right side of this regulation, your company has to disclose when and how it’s using gen AI. You’ll also need to demonstrate that you’re not replacing humans in key decisions or introducing bias.

At the same time, employees and customers want to know when and why they’re dealing with gen AI. If your organization uses gen AI in the hiring process, explain that to both the candidates and the employees involved. (For more about AI in hiring, don’t miss this guide developed by my team and Terminal.io.)

When communicating with customers, your company should disclose using gen AI in any form (voice, text, chat, etc.). One way is in policies, as Medium does here. Another way is to provide cues in the customer experience. For instance, AWS shows when abstracts of related pages are generated by AI.

The good news is that if your business addresses the next two blindspots, transparency will be much easier.

2. Growing list of inaccuracy causes

The longtime saying “garbage in, garbage out” is true for generative AI. What’s new with generative AI is how the garbage can get in and, therefore, cause inaccuracies.

  • Misusing generative AI for math: Generative AI is bad at math and the manipulation of numbers. I shared my recent experience with this problem on LinkedIn here. For any experience involving calculations, number comparisons and the like, you’ll need to supplement gen AI with other solutions.
  • Garbage in the LLM: If the LLM has incorrect, outdated or biased content, then your business is at risk. And the chances of this risk happening are higher now than ever because trusted content sources ranging from The New York Times to Condé Nast are withdrawing their content. Recent research found a 50% drop in data and content available to gen AI technologies. So, demand transparency about the LLM from any gen AI solution you consider before committing to one.
  • Garbage in your content and data: To tailor gen AI for your enterprise, chances are you’ll need to train it on your own content and data. But if that content and data don’t consistently meet your standards, are outdated, or have errors, your company is at risk.

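To make the first bullet concrete, one common pattern is to route arithmetic out of the model entirely and into deterministic code. This is a minimal sketch (not any vendor's API): a safe expression evaluator that an assistant could call for calculations instead of asking the LLM to "do the math." The function name `safe_eval` and the supported operators are illustrative assumptions.

```python
import ast
import operator

# Map AST operator nodes to deterministic arithmetic functions.
# Anything outside this whitelist is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a basic arithmetic expression without using eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval"))

print(safe_eval("2 + 3 * 4"))  # → 14, computed deterministically
```

The design point: the LLM can still phrase the question and the answer, but the number itself comes from code whose behavior you can test and audit.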
My company’s repeated research shows that companies that report a high level of content operations maturity are faster at leveraging gen AI than others because they have practices to document content standards, govern quality, and more.

If your company doesn’t have such practices, you’re not alone. The good news is it’s never too late to catch up. Our team recently helped the world’s largest home improvement retailer define comprehensive content standards for transactional communications across all relevant channels in less than three months.

More good news here. As you close accuracy gaps, you also reduce your company’s risk of unwittingly introducing bias or violating copyright.

Related: Three Use Cases Of Gen-AI Which Can Be Useful For Organisations

3. The extent of maintenance required

Gen AI seems magical at times, but it actually requires vigilant maintenance by both your business and the gen AI solution you choose. If you deploy gen AI without a clear approach to maintenance, you will multiply the risks of blindspots 1 and 2 thanks to problems like these:

  • Drift: This problem is when the real world changes but your gen AI model doesn’t, such as when the content and data in the LLM become outdated. It was correct when you first launched, but now it’s not. Imagine a chatbot giving your customers an inaccurate fact about one of your products because it isn’t aware of that new product feature.
  • Degradation: Also called model collapse, this problem is when your gen AI solution becomes dumber instead of smarter. One cause of degradation is running out of fresh, quality content for the LLM. Recent research shows that LLMs, ironically, break down when fed with content generated by AI.
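One practical guard against drift is a simple freshness check: before content is indexed or retrieved for a gen AI assistant, flag anything whose last update is older than your review cycle. This is a hedged sketch under assumptions (the `last_updated` field, the 180-day cutoff, and the function name `flag_stale` are all hypothetical choices for illustration).

```python
from datetime import datetime, timedelta

# Assumption: your content review cycle — tune to your business.
STALE_AFTER = timedelta(days=180)

def flag_stale(docs, now=None):
    """Return documents whose last_updated date exceeds the cutoff,
    so stale facts (e.g., superseded product specs) get reviewed
    before they reach customers through a chatbot."""
    now = now or datetime.now()
    return [d for d in docs if now - d["last_updated"] > STALE_AFTER]

docs = [
    {"id": "spec-v1", "last_updated": datetime(2023, 1, 10)},
    {"id": "spec-v2", "last_updated": datetime(2025, 1, 5)},
]
stale = flag_stale(docs, now=datetime(2025, 3, 1))
print([d["id"] for d in stale])  # → ['spec-v1']
```

A check like this doesn't fix drift by itself, but it turns "the model quietly went stale" into a reviewable maintenance queue.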

So, gen AI is a uniquely powerful technology that can take your company’s content to new levels of effectiveness. But that power comes with plenty of risks. Take these risks seriously as you plan your gen AI implementation so you’ll have fewer headaches and more success.
