The Rise of Deepfake Scams — Can Private Investigators Help? 

Artificial intelligence is reshaping the way crimes are carried out, and one of the most alarming developments is the rise of deepfake scams. These schemes use AI to generate convincing video and audio forgeries that can imitate someone’s face or voice with startling accuracy. A simple phone call that sounds like a family member or a video that appears to show a trusted colleague can be enough to trick people into handing over money or sensitive information. The problem is spreading quickly, and in global cities like Sydney, both individuals and businesses are finding it harder to know what’s real and what’s fabricated.  

What Are Deepfake Scams?

Deepfakes are synthetic media (audio, video or images) that are manipulated or generated using AI so convincingly that they appear real. A face can be swapped, a voice can be cloned or expressions and gestures can be altered. In effect, deepfake technology can create the illusion that a person said or did something they never did.

Fraudsters use this to engineer all kinds of scams. Common examples to watch out for include:

  • a video of a familiar public figure endorsing an investment that seems too good to be true,
  • a cloned voice calling you to feign an emergency and request funds, or
  • synthetic audio in phishing calls posing as your bank or another trusted party.

Deepfakes and associated scams are becoming more sophisticated by the day. They routinely combine mouth movements, vocal intonation, background noise and ambient cues to approximate natural speech. Many victims find it almost impossible to tell the difference, especially when under pressure.

Deepfake Scams in Australia

Australia is already seeing the fallout from this technology. For example, a recent campaign used fake video endorsements of well-known figures such as Dick Smith and Gina Rinehart to pitch sham investment schemes. Many Australians lost thousands trusting those manipulated endorsements.

Research commissioned by Mastercard indicates that throughout late 2023 and 2024, 1 in 8 Australian businesses fell victim to deepfake scams, and 1 in 5 received deepfake threats. The same research found that 36% of the Australians sampled were targeted by deepfake scams last year, and that 22% of those targeted ended up losing money. Between businesses and consumers, total Australian losses reach into the tens of millions.

The threat extends beyond financial scams, however. Just recently, the Federal Court imposed a $343,500 penalty on a Queensland man who created and distributed non-consensual pornographic deepfake images of prominent Australian women. The case was one of the first of its kind to reach judgment in Australia and marks a clear signal from regulators that the misuse of synthetic media for harassment or exploitation will not be tolerated.

Why Deepfakes Are Hard to Detect 

The psychology of trust in visual/audio content

Human beings are wired to believe what we see and hear. A familiar face or a trusted voice usually signals authenticity, so we rarely question it. Deepfakes exploit this instinct by presenting something that feels emotionally and cognitively real. Research shows that memory, emotion and context heavily shape trust: if a voice sounds like your child, or a video resembles a close colleague, people are far more likely to believe it even when the situation seems unusual or illogical.

Limitations of current detection tools 

Although researchers are developing better tools to spot deepfakes, none are foolproof. Common limitations include:

  • Reliance on subtle ‘tells’: Many systems look for unnatural blinking, irregular lighting or mismatched lip movements. Such clues are becoming less reliable as AI improves.
  • Low-quality media: Scams often use compressed or low-resolution files (e.g. short audio clips, social media videos) that are too poor in quality for accurate detection.
  • False results: Detection systems can miss a fake altogether (false negatives) or mistakenly flag genuine content as fake (false positives); the sketch after this list illustrates the trade-off.
  • Need for originals: The most advanced tools usually require access to high quality source files, which victims rarely have when receiving a suspicious call, message or video.
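
To see why false results are so stubborn, here is a minimal, hypothetical Python sketch. The clip names and detector scores are invented purely for illustration; real tools output a confidence score, and everything turns on where the alert threshold is set.

```python
# Hypothetical detector scores (0 = clearly real, 1 = clearly fake).
# Labels record what each clip actually is; the scores are invented
# to illustrate the trade-off, not output from any real tool.
clips = [
    ("genuine_interview.mp4",    0.35, "real"),
    ("compressed_voicemail.wav", 0.49, "fake"),  # compression hides the tells
    ("webcam_call.mp4",          0.52, "real"),  # poor lighting looks suspicious
    ("studio_deepfake.mp4",      0.92, "fake"),
]

def error_counts(threshold):
    """Count false positives and false negatives at a given alert threshold."""
    false_pos = sum(1 for _, score, label in clips
                    if score >= threshold and label == "real")
    false_neg = sum(1 for _, score, label in clips
                    if score < threshold and label == "fake")
    return false_pos, false_neg

for threshold in (0.4, 0.5, 0.6):
    fp, fn = error_counts(threshold)
    print(f"threshold {threshold}: {fp} genuine clip(s) flagged, {fn} fake(s) missed")
```

No threshold in the sketch gets both error counts to zero: lower it and genuine clips get flagged, raise it and fakes slip through. That is why automated scores are best treated as a starting point for investigation rather than a verdict.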

The result is that most people have very limited means to reliably distinguish authentic from fabricated content. Fortunately, experienced investigators and digital forensics experts are stepping in to help bridge these gaps. 

How Private Investigators Combat Deepfake Fraud 

Digital forensics and authentication techniques

Investigators and digital forensics experts may analyse metadata, file headers, encoding artefacts and other digital fingerprints to look for signs of manipulation or inconsistencies. They may also perform media comparison, comparing a suspect audio/video file against known genuine samples to spot subtle divergences in texture, lighting, compression artefacts or signatures. In some cases, experts can try to reverse-engineer edits – for example, attempting to detect splice points, compression boundaries or traces of generative AI tools.
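
To give a feel for the metadata step, here is a minimal Python sketch using the Pillow imaging library to dump an image’s EXIF tags (the file path is a placeholder). A genuine camera photo usually carries device and capture-time tags; a stripped or sparse record doesn’t prove manipulation, but it is a common prompt for closer analysis.

```python
# Minimal EXIF dump for a suspect image, using the Pillow library
# (pip install Pillow). The file path is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF metadata at all - worth a closer look")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{name}: {value}")

dump_exif("suspect_image.jpg")
```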

Tracing perpetrators through online footprints

Deepfake creators may leave traces beyond the media itself. Investigators might examine blockchain or cryptocurrency payment flows linked to a scam. They may also be able to check domain registration records to see who set up associated websites. In some cases, inspecting infrastructure like cloud servers or content delivery networks can help map connections between hosted content and potential operators.
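
As a simple illustration of the domain-records angle, the sketch below pulls a site’s public WHOIS registration data using the third-party python-whois package (the domain shown is a placeholder). A domain registered only days before the first scam contact, often behind a privacy service, is a classic red flag, though investigators weigh it alongside hosting and payment trails rather than in isolation.

```python
# Public registration record for a suspect domain, using the
# third-party python-whois package (pip install python-whois).
# The domain is a placeholder; a real lookup needs network access.
import whois

record = whois.whois("suspicious-investments.example")
print("Registrar:   ", record.registrar)
print("Created:     ", record.creation_date)  # very recent = common scam tell
print("Name servers:", record.name_servers)
```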

Protecting Yourself and Your Business

Practical tips for individuals

  • Don’t rely on voice alone: A cloned voice can be convincing. If something feels off, ask to switch to video and request a simple, unscripted action (e.g. “show me what’s on your desk”).
  • Be cautious in online relationships: Scammers commonly use deepfakes in dating and friendship scams. Always confirm an identity with a live video call or a background check before sharing personal information or money.
  • Resist pressure tactics: If someone insists you act immediately, whether it’s transferring money or sharing details, pause and verify through another trusted channel.
  • Protect your digital accounts: Enable two-factor authentication across banking, email and social platforms. This makes it much harder for an impersonator to hijack your accounts even if they trick you once.
  • Double-check financial pleas: If a family member or friend claims they urgently need money, verify by contacting them directly in another way before sending anything.

Business-level risks

The risks deepfakes pose to Australian businesses are varied and potentially costly. A video or call that appears to come from a senior executive could be used to pressure staff into transferring money or revealing sensitive information. Meanwhile, fabricated endorsements or manipulated product announcements can spread quickly online, damaging customer confidence and confusing investors. Damage from situations like these is difficult to repair: once trust is shaken, staff morale and long-term credibility aren’t easy to win back.

Protecting against these threats means building strong internal safeguards. Clear protocols for authorising payments, routine training so employees know what to watch for, and policies that ensure rapid response to suspicious content are all critical steps. 
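
As a hypothetical picture of what a ‘clear protocol’ for authorising payments can look like, the sketch below encodes a simple dual-control rule: above a set limit, a payment needs a second, independent approver plus a callback to a number already on file. All names, limits and field names are invented for illustration.

```python
# Hypothetical dual-control check for outgoing payments. Thresholds,
# roles and field names are invented; real policy should come from
# your finance and security teams.
from dataclasses import dataclass
from typing import Optional

APPROVAL_THRESHOLD = 10_000  # AUD; larger payments need extra controls

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    approved_by: Optional[str]  # second, independent approver
    callback_verified: bool     # confirmed via a number already on file,
                                # never one supplied in the request itself

def may_release(p: PaymentRequest) -> bool:
    if p.amount <= APPROVAL_THRESHOLD:
        return True
    if p.approved_by is None or p.approved_by == p.requested_by:
        return False  # no self-approval above the threshold
    return p.callback_verified

req = PaymentRequest(amount=45_000, requested_by="a.nguyen",
                     approved_by="j.smith", callback_verified=False)
print(may_release(req))  # False: the callback step hasn't happened yet
```

The value of writing the rule down, or encoding it in a payments system, is that it cannot be skipped under pressure, which is exactly the condition deepfake scammers try to create.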

When to Call a Private Investigator

There are certain red flags that may suggest a deepfake is being used against you. Being alert to these signs can help you determine whether it’s time to seek professional help:

  • A video or audio message from someone you know, but their behaviour or tone seems unusual.
  • Sudden pressure to act quickly, such as transferring money or sharing sensitive information.
  • Requests or instructions delivered through unfamiliar channels or contacts.
  • A gut feeling that something is off and doesn’t align with what you’d normally expect.

When doubts like these arise, professional investigators can help bring clarity. They can draw on forensic methods and experience to assess whether media has been altered, endeavour to trace its source and support you in reporting or escalating the matter appropriately. This work is done within ethical and legal boundaries, ensuring that any evidence gathered can assist in protecting your interests.

Concerned you may be dealing with a deepfake scam or malicious content? Contact Lyonswood today to discuss your situation in confidence.