Opinions expressed by Entrepreneur contributors are their own.
Key Takeaways
- AI-generated transcriptions pose significant security and legal risks. They can compromise confidentiality, create discoverable legal evidence and are often inaccurate.
- Financial institutions and publicly traded companies face unique challenges, as AI transcriptions can leak sensitive customer data or produce misleading information that can cause market disruptions.
- While AI transcription can be useful for casual meetings, podcasts or brainstorming, human transcription remains the most secure and reliable option.
Few would dispute that artificial intelligence plays, and will continue to play, a significant role in nearly every aspect of business. Historically, the transcription industry has depended on humans to convert spoken or audio content into written form. However, with the rapid advancement of AI software, we must consider whether AI-generated transcripts are safe and secure.
In a somewhat unconventional approach, I will summarize my findings at the beginning of this article rather than at the end. In my professional opinion, while current AI note-takers and transcription software can be productive, they are not completely safe and secure. Legal and law enforcement transcription is a prime example of this. Additionally, business and educational AI-generated transcripts present their own set of challenges. Why is this the case? Here are several key issues:
- Current laws are unclear regarding AI-generated content
- Privilege and confidentiality issues remain cloudy
- Accuracy issues continue to plague AI-generated copy
- AI isn't a replacement for human experience and insight
What is AI-generated content?
A typical transcription involves a human transcriptionist listening to an audio recording and manually creating a written document. Transcriptions can be categorized as either “verbatim” or “non-verbatim.” Verbatim transcriptions capture every word and sound exactly as spoken, while non-verbatim transcriptions omit filler words and false starts for readability.
AI-generated transcripts, by contrast, are produced by software trained on data gathered from the internet. Legal and medical audio files are prime examples of content these programs struggle with, as they often include complex words and phrases that require specialized knowledge and experience to transcribe accurately.
Furthermore, transcription becomes more challenging when multiple speakers with diverse accents are recorded. AI-generated transcription often struggles to differentiate between these accents and the varying pronunciations of the same word.
Legal issues and challenges for AI-produced content
Imagine the following scenario: During a critical meeting, the conversation moves too quickly for a human associate to capture every word or even the key summary points. To address this, your company implements an AI note-taking program. An AI bot quietly joins your Zoom meeting, instantly generates a transcription of the discussion and automatically shares it with attendees and various apps.
Initially, this process seems productive and even magical. However, the excitement quickly diminishes as sensitive strategic information begins to leak among employees, and even worse, to your competitors. You not only have a meeting transcript, but you may also be creating discoverable legal evidence.
While this example may seem far-fetched, I predict that we will see an influx of AI-related lawsuits and court challenges in the judicial system in the coming months. A recent survey by Thomson Reuters revealed that 80% of respondents believed that AI will have a “high or transformational” impact on their practices within the next five years. The rise in AI-related legal actions can be attributed to several key issues:
- Copyright infringement
- Trademark and trade secret infringement
- Misleading investors on AI performance and/or expectations
- Event-driven lawsuits, e.g., data security incidents and regulatory actions
- Liability and accountability, e.g., false information generated by AI software
In most cases, notes taken by a human associate may be protected under attorney-client privilege. However, legal experts caution that content generated by AI software could be considered a neutral document, making it potentially discoverable by opposing parties in legal proceedings.
Additionally, state laws vary significantly regarding the legality of recording someone without their consent, which could create legal challenges if participants on a Zoom call are located in different states. These issues highlight some of the legal complexities associated with using AI note-taking tools.
Please note that I am not an attorney, and this should not be taken as legal advice. It is essential to consult your legal counsel or attorney for guidance on how these matters may affect your specific situation.
Issues with AI-generated transcriptions
AI-generated transcriptions share many of the concerns we previously mentioned. However, there are concerns specific to our industry. The first involves accuracy. Our company guarantees a 99% transcription accuracy rate. AI-generated transcriptions typically produce accuracy rates in the low-60% range, which is unacceptable for any transcription.
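To put those percentages in context, transcription accuracy is commonly quantified as one minus the word error rate (WER), the share of words that are substituted, deleted or inserted relative to a reference transcript. The short Python sketch below illustrates that calculation under those assumptions; it is not our company's measurement tooling, and the sample sentences are hypothetical.

```python
# Minimal sketch: word error rate (WER) via word-level edit distance.
# WER = (substitutions + deletions + insertions) / words in the reference.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Levenshtein distance over words, computed with dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: two one-word errors in an eight-word sentence.
reference = "the witness stated she arrived at nine fifteen"
hypothesis = "the witness stated he arrived at nine fifty"
wer = word_error_rate(reference, hypothesis)
print(f"WER: {wer:.0%}, accuracy: {1 - wer:.0%}")  # WER: 25%, accuracy: 75%
```

As the example shows, even two small word-level errors in a short sentence drop accuracy sharply, which is why the gap between 99% and the low-60% range matters so much in practice.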
However, there are more significant issues with AI-generated transcriptions. Privacy is a huge issue. In 2024, a venture capital firm conducted a Zoom call with a potential client, which was recorded by Otter.ai. Afterwards, participants received a transcription of the meeting, which also included several hours of private conversation recorded after the official meeting concluded.
A federal class-action lawsuit against Otter.ai was filed in August of this year. The lawsuit accuses the company of “deceptively and surreptitiously” recording private conversations used to train its transcription service without securing the permission of participants.
Our legal and law enforcement clients face significant challenges when it comes to AI-generated transcription. For years, we have touted our Criminal Justice Information Services (CJIS) compliance, meaning our transcriptionists must undergo a background check and training before transcribing files that contain an individual's criminal history. Can you imagine if an AI program revealed such information? The King County, Washington, prosecutor's office issued a directive stating that it will not accept AI-generated police reports, citing CJIS and accuracy concerns.
Trial and deposition transcriptions are another example where AI-generated files pose a serious security threat. Court officials, including court reporters, must certify that the trial and legal transcripts they produce are correct. Appellate courts rely on trial transcripts when reviewing cases. Even the slightest mistake could modify or overturn a criminal conviction or civil verdict.
Publicly traded companies and financial institutions
Financial institutions, primarily banks, and publicly traded companies face unique security challenges when utilizing AI. The Federal Trade Commission (FTC) mandates that financial institutions maintain robust information security programs and ensure that their vendors adequately protect consumer information. AI-generated transcriptions may inadvertently disclose sensitive customer data from bank transactions or accounts.
Publicly traded companies share similar concerns regarding AI-generated content. The Securities and Exchange Commission (SEC) regulates these companies by requiring them to provide accurate and transparent information to their shareholders. Inaccurate or misleading information, including government data about publicly traded companies or their competitors, can significantly affect stock prices, potentially causing market disruptions.
Moreover, such data breaches can invoke SEC cybersecurity disclosure rules, which require companies to report incidents within four business days. However, significant and irreversible damage may have already occurred if sensitive information is leaked due to erroneous data from an AI-generated platform.
Where AI-generated transcription works best
AI-generated transcription can be useful in certain contexts. If accuracy and security are not primary concerns, these tools can effectively meet specific needs. For example, they may be suitable for brainstorming sessions, company award celebrations and informal gatherings. However, it’s always advisable to have someone review and edit the transcription before sharing it with others.
An increasing number of podcasts are now using AI-generated transcripts. Earlier this week, while walking, I listened to a podcast on my phone and noticed a written transcript displayed on my screen. Podcast transcripts provide a clear advantage for individuals who are hearing-impaired. However, I observed multiple errors in the transcript, likely due to variations in accents and word pronunciation.
A thousand or so words later, my views on the safety and security of AI-generated transcription remain unchanged. The potential downsides and legal issues associated with AI-generated transcripts make it unlikely that they will replace human transcription entirely.
However, businesses can still benefit from AI platforms by implementing clear guidelines, internal policies, effective vendor management and industry-standard controls to minimize risk.
In situations where privacy and security are critical, human transcription remains the safest and most secure method for converting audio recordings and files into written format.