AI, Deepfakes, and Fraud: A New Risk Business Owners Need to Understand
Artificial intelligence is changing the way businesses operate. It is improving efficiency, automating processes, and opening the door to new opportunities across nearly every industry. But as with any powerful technology, it is also creating new risks.
One of the fastest-growing threats involves AI-driven fraud, including deepfakes, voice cloning, and highly sophisticated social engineering scams. These attacks are no longer theoretical. They are happening to real companies, in real time, and in some cases costing millions of dollars.
For business owners, executives, and finance teams, this is a risk that deserves attention now, not later.
At BakerHopp Insurance Group, we believe risk management starts with awareness. Below, we explain what deepfake fraud is, why it is on the rise, and how businesses can protect themselves.
Deepfakes Defined – and Why They Matter to Businesses
A deepfake is an AI-generated video, image, or audio recording that convincingly imitates a real person. With today’s technology, criminals can recreate a CEO’s voice, generate a realistic video of an executive, or send emails that appear to come from trusted colleagues.
Fraudsters use this technology to manipulate employees into sending money, sharing confidential information, or approving fraudulent transactions. The scale of the problem is striking:
- Deepfake fraud cases surged more than 1,700% in North America in a single year, driven by the availability of low-cost AI tools.
- Financial losses from deepfake-related scams exceeded $200 million in one quarter of 2025 alone.
- The average loss from a deepfake incident for a business is estimated to be hundreds of thousands of dollars per event, with some cases far higher.
In other words, this isn’t just a technology issue. It’s a financial risk.
Deepfake and AI Fraud in the Real World
Many business owners assume cybercrime means hacking into systems. In reality, today’s most successful fraud attacks often target people, not technology.
Common scenarios include:
Executive impersonation: Criminals use AI to clone a CEO or CFO’s voice and request an urgent wire transfer.
Fake video meetings: Employees receive instructions during what appears to be a legitimate video call with company leadership.
Business email compromise enhanced by AI: Traditional phishing emails often contained spelling errors or unusual wording. AI-generated messages, however, can closely match a company’s communication style, making them much harder to detect.
Vendor or payment-instruction fraud: Fraudsters impersonate a supplier, contractor, or service provider and request a change to payment details. Because the request looks legitimate, the next payment is sent to the wrong account.
Social engineering targeting financial departments: Accounting and treasury teams are frequent targets because they have the authority to move money. Fraudsters often create urgent situations to pressure employees into acting quickly.
These attacks are becoming more sophisticated. In one widely reported case, criminals used a deepfake video call to convince an employee to transfer roughly $25 million during what appeared to be a call with company executives.
Events like this were once rare. Today, they are becoming more common.
Why This Risk Is Growing So Quickly
Several factors are contributing to the rapid growth of AI-enabled fraud:
AI tools are widely available.
Voice cloning and video generation software that once required advanced expertise can now be used with minimal technical skill.
Public information is easy to find.
Company websites, LinkedIn profiles, webinars, and social media provide enough audio and video for criminals to build convincing impersonations.
Businesses rely on speed and trust.
Urgent requests, remote work, and digital communication create opportunities for fraudsters to exploit normal business processes.
Financial interactions happen electronically.
Wire transfers, ACH payments, and online approvals make it easier for criminals to move money quickly.
Experts warn that AI-driven fraud is now one of the fastest-growing forms of cybercrime, with billions of dollars in losses reported each year.
Why Insurance Is Essential, but Not Enough
Many business owners assume their cyber policy or crime policy will cover any fraud event. In reality, coverage depends on how the loss occurred, what controls were in place, and what endorsements are included in the policy.
Deepfake-related losses may fall under several different types of coverage, including:
- Social engineering fraud
- Funds transfer fraud
- Cyber liability
- Crime/fidelity coverage
- Professional liability exposure
- Directors & Officers (D&O) risk if controls are questioned
These policies do not always overlap, and not every policy automatically includes coverage for social engineering or impersonation-based fraud.
Additionally, insurers increasingly expect businesses to have reasonable internal controls in place. If procedures are not followed, coverage disputes can arise.
This is why insurance should be viewed as one part of a broader risk management strategy, not the only protection.
Practical Steps Businesses Should Take Now
The goal is not to eliminate risk entirely – that is rarely possible. The goal is to make fraud more difficult, reduce the chance of loss, and ensure the company is protected if an incident occurs.
The following steps are considered best practices by insurers, cybersecurity experts, and risk advisors:
- Require verification for financial requests.
Any request involving money, payment changes, or sensitive information should be confirmed using a second method of communication, such as a known phone number or in-person verification.
- Use multi-person approval for large transactions.
Dual authorization or segregation of duties can prevent a single employee from completing a fraudulent transfer.
- Train employees to recognize social engineering tactics.
Most attacks succeed because someone believes the request is legitimate. Regular training helps employees understand that urgency, secrecy, and pressure are common signs of fraud.
- Limit unnecessary public exposure of sensitive information.
Not every detail about leadership, financial contacts, or internal processes needs to be publicly available. Publicly posted videos, recordings, and contact details of executives can be harvested to build convincing deepfakes.
- Review internal procedures regularly.
Controls that worked five years ago may not be sufficient today, especially as AI tools become more advanced.
- Review insurance coverage regularly.
Policies should be evaluated to confirm that social engineering, funds transfer fraud, and cyber exposures are properly addressed.
The Bottom Line: AI Has Changed the Risk Landscape
Artificial intelligence is not going away.
Neither are the criminals using it.
Deepfakes, voice cloning, and AI-driven fraud represent a new category of risk that many companies are not yet prepared for. But with the right controls, training, and insurance strategy, these exposures can be managed.
At BakerHopp Insurance Group, we work with business owners and leadership teams to identify emerging risks and make sure their coverage, procedures, and protection strategies keep pace with today’s threats.
If you would like a review of your cyber, crime, or management liability coverage, our team would be glad to help you understand where you stand and what steps may make sense moving forward. Reach out to us today.