Scammers are generating fake dating profiles and social media accounts in an attempt to extort people.
OnlyFans models generated by AI image makers have sprouted across the X-rated platform, while Tinder ‘coaches’ are using ChatGPT to help clients get dates. Apps that use AI chatbots to provide users with a ‘companion’ are on the rise too.
These fake accounts are being used for ‘romance scams’, where a scammer catfishes a victim into becoming their partner and uses the relationship to extract money from them.
The Federal Trade Commission said these scams cost Americans $1.3 billion in 2022. Other actors have also used AI-generated images to target businesses.
DailyMail.com spoke to cybersecurity experts for tips on how to recognize an AI profile. These include looking at the hands, teeth and shadows.
This image was generated in Midjourney in seconds in response to a simple prompt
Martin Cheek, fraud and cybersecurity expert at SmartSearch, told DailyMail.com: ‘ChatGPT makes it very easy for cybercriminals to pose as different people without having vast literary skills.
‘[They can] leverage the language model to generate more personalized messages and trick people into falling victim to cyber-attacks.’
AI-generated faces are not perfect. There are still flaws in the technology, and recognizing them can help you work out whether your Tinder match is the real deal.
Romance scams work by using social media or dating apps to lure a person into an online relationship. They will usually use pictures of a very attractive person — such as a model or actor — found online.
Then they begin to ask their target for money using everyday issues such as a car breakdown, medical bills or other expenses.
‘If an online love interest asks you for money, walk away — no matter how compelling the story,’ the FTC says.
Look at the hands
Can you spot what’s wrong?
Just like many artists, AI technology has trouble properly rendering hands
‘With AI, there’s quite a few things that AI is quite bad at replicating – particularly hands,’ said Cyril Noel-Tagoe, Principal Security Researcher at UK cybersecurity firm Netacea.
‘You can sometimes see distortion in those, and sometimes they might be too big or have an extra finger.’
This phenomenon occurs because AI uses pattern-seeking to generate images.
It can detect the pattern of people having hands, and that hands have fingers, but it does not know that there needs to be a particular number of fingers.
Hands drawn by AI will often have bizarre-looking fingers too. Often they are too long or disjointed.
This is because the platform does not understand what hands are and their function, just how they might look.
Look for telltale signs in the background
Shadows can be a giveaway that an image is AI-generated (this one is)
Images created by AI will also be rife with odd lighting and textures.
Vonny Gamot, a senior leader at McAfee, told DailyMail.com: ‘AI generated images often have some tell-tale signs that they are indeed fake, so it’s important to pay close attention.
‘If the outline of the person is blurry, for example, or if small details like shadows seem off, then you’ll know the image isn’t real.’
Because AI-generated images are not lit by real light sources, the shadows in them will not be accurate.
Instead, the AI creates fake shadows from an amalgamation of different pictures, and these will often be inconsistent with how natural lighting behaves.
Sometimes the outline of people in a picture generated by AI will seem blurry too, as the subject of the piece blends in with the background.
Look at tone, hair, eyes, face and teeth (THEFT)
Gamot told DailyMail.com that McAfee, the San Jose, California, security software company, has a checklist for spotting fake images.
Called THEFT, it has McAfee associates check the Tone, Hair, Eyes, Face and Teeth of subjects for any signs of AI generation.
The AI technology has trouble properly displaying all of these features.
Blotchy patches on skin, irregular skin tones, or flickering at the edges of the face are all signs of deep fake images and videos.
This is because the images the generator pulls from are of people with small imperfections and differing skin tones.
The generator creates an ‘average’ of all of them when producing a picture, and these small differences can build into odd coloring.
Is the hair a little TOO perfect?
No one has perfect hair, except fake people in AI-generated images.
Real people almost always have a few irregular strands of hair – flyaways that refuse to be held down.
But in AI images many of these small imperfections will not be there, creating an almost too-good-to-be-true look for the models.
Other glitches may come by way of the eyes.
Eyeglasses will sometimes look asymmetrical or non-functional.
Eyes can look expressionless or could be facing in two different directions.
Likewise, the light reflected in their irises may be rendered in a way that does not match the setting.
Image generators will often create faces that have all the features of a human but are slightly misaligned in a way that makes them look eerie.
Like with hands, the AI can understand what the features of a face are after scanning thousands of pictures of them.
But, the machine is just generating images and does not quite understand why things look a certain way.
This means that while the AI knows a human has two eyes, a nose and a mouth, it cannot tell the relationship between them.
For example, when a person is facing sideways, the program may not properly turn the nose sideways too, as it does not detect them as existing on the same plane.
This can lead to odd-looking faces that have the correct features but at slightly off angles.
Teeth don’t always render well in deep fakes, sometimes looking more like white bars instead of showing the usual irregularities we see in people’s smiles.
The pearly whites will often be too perfect — freakishly white and straight — or have abnormal irregularities.
These can include the teeth being too long, short or a person having too many teeth in their mouth.
Many scammers use ChatGPT to message someone, using the AI chatbot to write responses.
This is a favorite among foreign scammers who may not have strong English themselves but can use the platform to write human-like text for them.
But because ChatGPT was trained on text from across decades, some words and phrases it uses may feel like a blast from the past.
‘If you’re worried that you might be being targeted by a criminal using ChatGPT, you should look out for the AI’s limitations,’ says Thomas Platt, Bot Specialist at Netacea.
‘Ask it about really topical, up-to-date things like what was on TV last night.
‘These AIs are trained on historical data, so if you ask about yesterday’s episode of Love Island, it will struggle.’
Short phrases and repeated words
While AI can produce highly convincing phrases, there are still often giveaways, says Vonny Gamot, Head of EMEA at online protection company McAfee.
Gamot says, ‘There are a few tell-tale signs of an AI-written message. AI often uses short sentences and reuses the same words.
‘Stay on high alert and scrutinize any texts, emails, or direct messages you receive from strangers.’
Ask them to prove who they are
A classic sign of a scammer is that they won’t reveal their face on camera, or verify their identity, says Cheek.
Cheek says: ‘Verifying the identity of who you are engaging with online should be the first step. If they’re reluctant to show their face, ask yourself why.’
Check when they activated their account
A warning sign that you may be dealing with an AI-enabled scammer is the date on their social media or dating account.
Think about how long you have had yours — and ask why someone might have only joined last week.
David Emm of cybersecurity company Kaspersky says: ‘If you spot an account that’s only just been activated, it might be a fake trying to blend in with the crowd. If you can, check when they joined the platform — it’s quite easy to do. This is often a dead giveaway that the account has been created for the sole purpose of scamming you.’