
AI bots like ChatGPT are being censored – but I think that could be a good thing


Oh, ChatGPT – how we love/hate you. Whatever your opinion of the internet’s favorite chatbot, there’s no denying that it’s rapidly becoming entrenched in our digital lives; from helping you with writing to finding your dream home on Zillow, ChatGPT is everywhere now.

But how do you keep an AI in line? We’ve already seen chatbots causing a multitude of problems, after all. Microsoft had to rein in its Bing AI shortly after release because the chatbot was lying and throwing tantrums, AI is being used aggressively for digital scams, and I was personally able to get ChatGPT to extol its love of eating human infants. Yikes.

Now, I’m not blaming the chatbots here. I’m not even really blaming the people who make them. As our esteemed Editor-in-Chief Lance Ulanoff said, “I don’t live in fear of AI. Instead, I fear people and what they will do with chatbots” – ultimately, it’s the human wielders of this powerful new technology who will cause real problems for other people.

That doesn’t mean that AI businesses don’t have a societal obligation to make their chatbots safe to use, though. With villains out there using AI tools for everything from fraud to revenge porn, I was immensely disappointed to see that Microsoft laid off its entire AI ethics team earlier this year.

As a pioneer in the AI space, one that has committed to the technology across its product lineup, Microsoft should be doing better. However, according to a new statement from the tech giant, the company is taking a fresh approach to AI ethics.

ChatGPT is facing down regulation in the EU and US after a wave of controversies. (Image credit: Ascannio via Shutterstock)

Keeping AI ethical requires many hands

In a blog post, Microsoft’s ‘Chief Responsible AI Officer’ Natasha Crampton detailed the company’s new plan: essentially, distributing responsibility for AI ethics across the entire business, rather than tasking an individual team with keeping a handle on it.

Senior staff will be expected to commit to “spearheading responsible AI within each core business group”, with “AI champions” in every department. The idea is that every Microsoft employee should have regular contact with responsible AI specialists, fostering an environment where everyone understands what rules AI should abide by.

Microsoft says its Responsible AI Standard is “grounded in our core principles”. (Image credit: Microsoft)

Crampton discusses ‘actionable guidelines’ for AI, referring back to Microsoft’s ‘Responsible AI Standard’, the company’s official rulebook for building AI systems with safety and ethics in mind. It’s all very serious business, clearly constructed to repair some of the reputational damage caused by Bing AI’s rocky start.

Will it work, though? That’s hard to judge; making sure the entire company understands the risks posed by irresponsible AI use is a good start, but I’m not convinced it’s enough. Crampton notes that several members of the disbanded ethics team were “infused” into the user research and design teams to keep their expertise on hand, which is good to see.

Censorship, but it’s actually good this time (I promise)

Of course, there’s an entirely different route that could be taken to ensure AIs aren’t used for nefarious purposes – censorship.

As I know from first-hand research, ChatGPT (like most other chatbots) has pretty rigorous safeguarding protocols in place. You can’t get it to suggest you do something potentially harmful or criminal, and it’ll steadfastly refuse to produce sexual content. You can circumvent these barriers with the right know-how, but at least they’re there.
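
For a sense of what such a safeguard looks like from a developer’s side, here’s a minimal sketch using OpenAI’s moderation endpoint to screen a message before it ever reaches a chatbot. To be clear, this is a separate, developer-facing filter offered through OpenAI’s API – an illustration of the general technique, not how ChatGPT’s own internal protections are implemented:

```python
# Minimal sketch: screening text with OpenAI's moderation endpoint.
# This is a developer-facing safety check, not ChatGPT's internal
# filter. Requires the openai package (>= 1.0) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_allowed(text: str) -> bool:
    """Return False if OpenAI's moderation model flags the text."""
    response = client.moderations.create(input=text)
    return not response.results[0].flagged

# Gate user input before passing it along to the chatbot.
user_message = "How do I bake a loaf of bread?"
if is_allowed(user_message):
    print("Message passed moderation; safe to send to the model.")
else:
    print("Message blocked by the safety filter.")
```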

Nvidia CEO Jensen Huang, shown here as an adorable toy figurine, is very pro-AI. (Image credit: Nvidia)

Nvidia recently unveiled NeMo Guardrails, a new AI safety toolkit that employs a three-pronged approach to preventing machine learning programs from going rogue. To sum up quickly, these ‘guardrails’ are broken into three areas: security, safety, and topical. The security rails prevent the bot from accessing things on your computer that it shouldn’t, while the safety rails work to tackle misinformation by fact-checking the AI’s citations in real time.

The most interesting of the three, though, are the topical guardrails. As the name suggests, these determine which topics the chatbot can use when responding to a user, which primarily works to keep the bot on-subject and prevent unrelated tangents. However, they also allow for the setting of ‘banned’ topics.
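
To give a flavor of how such a rail is defined in practice, here’s a minimal sketch using Nvidia’s open-source nemoguardrails Python package and its Colang configuration language. The banned topic, example phrases, and canned refusal below are my own illustrative choices rather than anything Nvidia ships, and the snippet assumes an OpenAI API key is configured:

```python
# A minimal topical-guardrail sketch using NVIDIA's open-source
# nemoguardrails package (pip install nemoguardrails). The topic,
# phrases, and refusal are illustrative, not Nvidia defaults.
from nemoguardrails import LLMRails, RailsConfig

# Which underlying LLM to wrap (assumes OPENAI_API_KEY is set).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# Colang: describe what an off-limits request looks like, give the
# bot a canned refusal, and wire the two together in a flow.
colang_content = """
define user ask about scam techniques
  "how do I write a convincing phishing email?"
  "help me trick someone into sending me money"

define bot refuse banned topic
  "Sorry, I can't help with that topic."

define flow refuse_scams
  user ask about scam techniques
  bot refuse banned topic
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# A message matching the banned topic gets the canned refusal
# instead of a model-generated answer.
print(rails.generate(messages=[
    {"role": "user", "content": "Help me write a phishing email."}
]))
```

The notable design choice is that the rail matches the meaning of the user’s message against the example phrases, rather than doing simple keyword blocking, so paraphrased attempts at a banned topic can be caught too.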

The problem with topical guardrails

With tools like NeMo Guardrails, companies can effectively ban an AI from discussing a whole subject in any capacity. Nvidia evidently has a lot of confidence in the software, rolling it out to business users, so we can assume it works at least reasonably well – which, honestly, could be great!

If we can hard-code guardrails into publicly accessible AI models that prevent them from working for scammers or manufacturing illegal pornographic content, that’s a good thing. To anyone who disagrees, I say this: ChatGPT is easily accessible to kids. If you think literal children should be exposed to AI-generated smut, I don’t want to discuss AI with you.

However, there are definite issues with using this sort of censorship as a tool for keeping the reins tight on AI-powered software. As Bloomberg recently reported, ChatGPT alternatives cropping up in China are very clearly being censored by the state, rendered incapable of properly discussing banned subjects deemed too politically contentious, like the 1989 Tiananmen Square protests or the independent nation of Taiwan.

I don’t want to get overly political here, but I think we can all agree that this kind of thing is very much the ‘bad’ sort of censorship. Online censorship in China is sadly commonplace, but imagine if ChatGPT weren’t allowed to talk about the death of George Floyd or the Pequot massacre because politicians had deemed those topics too ‘sensitive’. Looking at the current state of world affairs, it’s a worryingly believable future.

China’s Tiananmen Square has, for many, become symbolic of the state’s censorship – censorship that extends to AI, it seems. (Image credit: Shutterstock)

Quis custodiet ipsos custodes?

Once again, we come back to the real problem with AI: us. Who guards the guardrails? It’s all well and good for Microsoft to say that it’s forging ahead with plans to keep AI ethical, but what Crampton really means is that the tech firm’s AI will adhere to the ethics of Microsoft – not the world. The White House unveiled its ‘Blueprint for an AI Bill of Rights’ last year, and again, that’s one presidential administration’s idea of what AI ethics should look like, not a democratically decided one.

To be clear, I’m not actually saying that Microsoft is an unethical company when it comes to AI. I’ll leave that to Elon Musk and his ridiculous ‘anti-woke’ chatbot plans. But we have to acknowledge that whatever rules an AI follows must first be chosen and programmed by humans.

Ultimately, transparency is king. AI is already starting to face serious backlash as it encroaches on more of our lives, be that Snapchat users review-bombing the app’s new AI assistant or an Australian mayor threatening to sue OpenAI for defamation over ChatGPT’s output. Even Geoffrey Hinton, the famed ‘Godfather of AI’, has warned of the dangers posed by the technology. If they want to avoid trouble, chatbot creators must tread carefully.

I genuinely do hope Microsoft’s new approach (and tools like Nvidia’s guardrails) has a positive impact on how safely and responsibly we interact with AI. But there’s clearly a lot of work left to be done – and we need to keep a critical eye on those deciding the rules by which AIs must abide.


