LinkedIn wants your help in tracking down fake accounts – here’s how


By Clare Duffy, CNN Business

In recent months, bots have been top of mind for many who track the social media industry, thanks to Elon Musk’s attempt to use the prevalence of fake and spam accounts to get out of his $44 billion deal to buy Twitter. But bots aren’t just a challenge for Twitter.

LinkedIn, often thought of as a tamer social platform, is not immune to inauthentic behavior, which experts say can be hard to detect and is often perpetrated by sophisticated and adaptable bad actors. The professional networking site has in the past year faced criticism over accounts with artificial intelligence-generated profile photos used for marketing or pushing cryptocurrencies, and other fake profiles listing major corporations as their employers or applying for high-profile job openings.

Now, LinkedIn is rolling out new features to help users evaluate the authenticity of other accounts before engaging with them, the company told CNN Business, in an effort to promote trust on a platform that is often key to job searching and making professional connections.

“While we continually invest in our defenses” against inauthentic behavior, LinkedIn product management vice president Oscar Rodriguez said in an interview, “from my perspective, the best defense is empowering our members on decisions about how they want to engage.”

LinkedIn, which is owned by Microsoft, says it already removes 96% of fake accounts using automated defenses. In the second half of 2021, the company removed 11.9 million fake accounts at registration and another 4.4 million before they were ever reported by other users, according to its latest transparency report. (LinkedIn does not disclose an estimate for the total number of fake accounts on its platform.)


Verify, verify

Starting this week, however, LinkedIn is rolling out to some users the option to verify their profile using a work email address or phone number. That verification will be incorporated into a new “About this Profile” section that will also show when a profile was created and last updated, giving users additional context about an account they may be considering connecting with. If an account was created very recently and has other potential red flags, such as an unusual work history, it could be a sign that users should proceed with caution when interacting with it.
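
Taken together, those profile signals amount to a simple caution check. The following is a minimal, hypothetical Python sketch of how a screening tool might weigh them; the field names, thresholds, and two-flag rule are illustrative assumptions, not anything LinkedIn has published.

```python
from datetime import date

def proceed_with_caution(profile_created: date,
                         unusual_work_history: bool,
                         is_verified: bool,
                         today: date) -> bool:
    """Toy heuristic: count the profile-level red flags described above."""
    flags = 0
    if (today - profile_created).days < 30:   # account created very recently
        flags += 1
    if unusual_work_history:                  # e.g. implausible employer claims
        flags += 1
    if not is_verified:                       # no work email / phone verification
        flags += 1
    return flags >= 2                         # assumed threshold: two or more flags

# Example: a week-old, unverified profile with an odd work history
print(proceed_with_caution(date(2022, 10, 20), True, False, date(2022, 10, 27)))  # True
```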

The verification option will be available to a limited number of companies at first, but will become more widely available over time, and the “About this Profile” section will roll out globally in the coming weeks, according to the company.

The platform will also begin alerting users if a message they have received seems suspicious, such as messages that invite the recipient to continue the conversation on another platform like WhatsApp (a common move in cryptocurrency-related scams) or that ask for personal information.

“No single one of these signals by itself constitutes suspicious activity … there are many perfectly good and well-intended accounts that have joined LinkedIn in the past week,” Rodriguez said. “The general idea here is that if a member sees one or two or three flags, I want them to enter into a mindset of, thinking for a moment, ‘Hey, am I seeing something suspicious here?’”
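
For illustration, the message-level red flags described here can be approximated with a few keyword checks. The patterns below are toy assumptions made for this sketch, not LinkedIn's actual detection rules.

```python
import re

# Hypothetical patterns inspired by the red flags above: off-platform
# invitations, requests for personal information, and crypto pitches.
SUSPICIOUS_PATTERNS = {
    "off_platform_invite": re.compile(r"\b(whatsapp|telegram|signal)\b", re.I),
    "personal_info_request": re.compile(r"\b(passport|bank account|ssn|date of birth)\b", re.I),
    "crypto_pitch": re.compile(r"\b(crypto|bitcoin|usdt|investment opportunity)\b", re.I),
}

def message_flags(text: str) -> list[str]:
    """Return the names of any heuristics this message trips."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items() if pattern.search(text)]

print(message_flags("Let's move this to WhatsApp, I have a great crypto investment for you"))
# ['off_platform_invite', 'crypto_pitch']
```

As Rodriguez notes, a single hit proves nothing on its own; the point is that an accumulation of flags should prompt a second look.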

The approach is relatively unusual among social media platforms. Most, including LinkedIn, allow users to file a report when they suspect inauthentic behavior but don’t necessarily offer clues about how to detect it. Many services also offer verification options only for celebrities and other public figures.


Targeting AI fakes

LinkedIn says it has also improved its technology to detect and remove accounts using AI-generated profile photos.

The technology used to create AI-generated images of fake people has advanced significantly in recent years, but there are still some telltale signs that an image may have been created by a computer. For example, the person may be wearing only one earring, have eyes centered perfectly on their face or have strangely coiffed hair. Rodriguez said the company’s machine learning model also looks at smaller, harder-to-perceive signals, sometimes at the pixel level, such as how light is dispersed throughout the image.
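
One line of published research on such pixel-level signals looks at an image's frequency spectrum, where some generated images leave unusual high-frequency artifacts. The sketch below (using NumPy and Pillow, with a made-up cutoff and threshold) illustrates that idea; it is not LinkedIn's model, and a real detector would be a trained classifier.

```python
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str, cutoff: float = 0.75) -> float:
    """Share of spectral energy beyond `cutoff` of the image's Nyquist radius."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # normalised distance from centre
    return float(spectrum[radius >= cutoff].sum() / spectrum.sum())

# Hypothetical usage with an assumed threshold:
# if high_freq_energy_ratio("profile_photo.jpg") > 0.02:
#     print("unusual high-frequency energy; worth a closer look")
```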

Even third-party experts say detecting and removing bot and fake accounts can be a difficult and highly subjective exercise. Bad actors may use a mix of computers and human management to run an account, making it harder to tell if it’s automated; computer systems can rapidly and repeatedly create numerous fake accounts; a single human could simply be using an otherwise real account to perpetuate scams; and the AI used to detect inauthentic accounts is not always a perfect tool.

With that in mind, LinkedIn’s updates are designed to give users more information as they navigate the platform. Rodriguez said that while LinkedIn is starting with profile and message features, it plans to expand the same kind of contextual information to other key decision-making points for users.

“This journey of authenticity is really significantly bigger than issues around fake accounts or bots,” Rodriguez said. “Fundamentally, we live in a world that is ambiguous and the notion of what is a fake account or real account, what is a good investment opportunity or job opportunity, are all ambiguous decisions.”

The job hunting process always involves some leaps of faith. With its latest updates, however, LinkedIn hopes to remove a little of the unnecessary uncertainty about which accounts to trust.

The-CNN-Wire™ & © 2022 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.




