The New AI Is Magic – by Jonathan V. Last


Before we start: I’m coming to Seattle!

On January 21 I’ll be in the Emerald City with my best friends, Sarah Longwell and Tim Miller, to tape a live show for The Next Level.

If you’re in the area, come out and hang with me. I’ve gotten to know a lot of you over email and through the comments. I’d love to meet you IRL. And I can promise it will be a great time.

Details and tickets here.

Moving on: Every week I highlight three newsletters that are worth your time.

If you find value in this project, do two things for me: (1) Hit the Like button, and (2) Share this with someone.

Most of what we do in Bulwark+ is only for our members, but this email will always be free for everyone.

What one AI art engine produces from the prompt: “AI generating world in violent swirl” (The Bulwark / Midjourney)

Why is conservatism so marbled with scams? Richard Hanania has some thoughts:

Spend some time consuming conservative media, and you’ll see it’s all just one scam after the other: “commemorative” gold coins, something about refinancing your house, Ben Carson brain pills. Often, there is barely any line between the straight reporting and scams, with the theme of the news coverage blending right into the messages of sponsors, both spatially and psychologically, as can be seen in two recent emails from Breitbart and Daily Caller.

The reader needs to squint in order to be able to tell which are the news stories and which are the ads, particularly when you cater to a demographic composed of individuals at the stage of life when their vision is deteriorating.

My favorite story in the genre of conservative scams is how Newt Gingrich after he left Congress would ask businesses to pay an annual $5,000 fee for him to name them an “Entrepreneur of the Year.” This would pop up in the news when the awards would occasionally go to a strip club or adult video store, at which point they would be kicked out of the club and refunded their money. During the Obamacare debate, Gingrich repurposed the scam so that for $5,000 doctors would get a plaque declaring them a “Champion of Medicine.” Potential recipients were told by fax that they were among an elite 100 physicians selected, without informing them that hundreds, if not thousands, got the same message. None of this mattered when Gingrich ran for president in 2012 and ended up with the fourth most delegates in the Republican primary. The best reporting on conservatives scamming their own supporters is, of course, published by liberals.

The question isn’t why corrupt people exist on the Right. Rather, it’s why there are so few norms on the political Right against telling blatant lies to your followers and scamming them. Or, to put it another way, why aren’t conservative journalists ashamed to see their work being used to sell overpriced gold bars to retirees who don’t know any better?

Read the whole thing. (I had not heard the Gingrich story. Holy forking shirtballs.)

Hanania’s theory is that it’s all about “oppositional culture,” in which one group views itself as low-status and thus configures itself in such a way as to prize rejection of the high-status tribe as its single most important value.

So why didn’t conservative journalists root out the scammers in their midst? Because they didn’t want to give the mainstream media the satisfaction. They were, first and foremost, arrayed in opposition to the “MSM,” über alles.

I don’t know how much of this is new and how much is just a repackaged version of the near out-group / far out-group dynamic which seems to drive the conservative worldview.

For instance, have you clocked the conservative reaction to the trade of a Russian prisoner for WNBA player Brittney Griner?

The old-school, jingoist, Patriot response to having a U.S. woman taken hostage and imprisoned by an anti-American fascist would have been to celebrate Griner’s release with some full-on Hulk Hogan flag waving.

Griner is the in-group. Putin is the out-group. Simple.

But instead, there was a bizarre, polarized response to Griner’s release, with many conservatives suggesting not just that the hostage trade was ill-advised, or a bad deal—but that Griner deserved to be imprisoned in Russia because she had flouted Putin’s sacred rule of law.

In this view, Griner—a black lesbian with liberal politics—is a member of the near out-group and so is marked for special contempt. Meanwhile, Putin—a Russian autocrat who hates gays—is a member of some distant, far out-group, and so is either an NPC or even an ally, since he shares some of the same antipathies as American conservatives.

AI is getting scary good. Except that maybe it’s just scary? I read this post by Ben Thompson about the ChatGPT AI and all I could think of was the paperclip problem:

It happened to be Wednesday night when my daughter, in the midst of preparing for “The Trial of Napoleon” for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT, which had just been announced by OpenAI a few hours earlier:

This is a confident answer, complete with supporting evidence and a citation to Hobbes’ work, and it is completely wrong. Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy — the natural state of human affairs — was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes’ younger contemporary John Locke . . .

It was dumb luck that my first ChatGPT query ended up being something the service got wrong, but you can see how it might have happened: Hobbes and Locke are almost always mentioned together, so Locke’s articulation of the importance of the separation of powers is likely adjacent to mentions of Hobbes and Leviathan in the homework assignments you can find scattered across the Internet. Those assignments — by virtue of being on the Internet — are probably some of the grist of the GPT-3 language model that undergirds ChatGPT; ChatGPT applies a layer of Reinforcement Learning from Human Feedback (RLHF) to create a new model that is presented in an intuitive chat interface with some degree of memory (which is achieved by resending previous chat interactions along with the new prompt).

Read the whole thing.
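Thompson’s parenthetical about memory is worth pausing on, because the mechanism is simpler than it sounds: the model itself is stateless, and the chat interface fakes an ongoing conversation by resending the earlier turns along with each new prompt. Here’s a minimal sketch of that pattern in Python; the `generate` function is a hypothetical stand-in for a real language-model call, not any actual API:

```python
# A minimal sketch of chat "memory" via prompt resending.
# `generate` is a hypothetical stand-in for a real language-model call.
def generate(prompt: str) -> str:
    return f"(model reply to a {len(prompt)}-character prompt)"

history: list[tuple[str, str]] = []  # (user_turn, assistant_turn) pairs

def chat(user_message: str) -> str:
    # Replay the whole conversation so far, then append the new message.
    # The model is stateless; all "memory" lives in the prompt itself.
    prompt = ""
    for user_turn, assistant_turn in history:
        prompt += f"User: {user_turn}\nAssistant: {assistant_turn}\n"
    prompt += f"User: {user_message}\nAssistant:"
    reply = generate(prompt)
    history.append((user_message, reply))
    return reply

print(chat("Who was Thomas Hobbes?"))
print(chat("What did he argue in Leviathan?"))  # prompt now includes turn one
```

One consequence of this design: every turn makes the prompt longer, so a long enough conversation eventually pushes the earliest turns out of the model’s context window, and the bot “forgets” how the chat began.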

AI has suddenly popped out of the bottle. There’s this new ChatGPT. There are the AI art programs, like DALL-E and Midjourney. There’s a photo editing app called Lensa, which looked at a bunch of my pictures and then created its own pictures of me. This one is pretty interesting:

This one creeped me the hell out. (Count the fingers.)

There’s even AI audio editing coming down the pike, and I’m told that it basically functions like magic.

Remote-control helicopters have been around as a hobby for decades, but starting around 2005, the combination of accelerometers and RC controllers made drone copters a cheap and accessible toy. By 2015, anyone who wanted a drone quadcopter could have one. Today you can get a quadcopter that looks like science fiction for $50. The drone is revolutionizing photography, cinematography, warfighting, and logistics.

It is hard to overstate how far and fast the drone went once it hit the point where it was usable enough to be accessible to the masses.

I wonder if AI has arrived at the moment where drones were in 2005.

Here’s Casey Newton thinking through where AI and ChatGPT go:

It’s instructive, I think, to compare this experience to Google, which for two decades now has been the default destination for many of the (non-creative) queries that early adopters are now running through ChatGPT. Google can answer plenty of questions perfectly fine, but it won’t spot errors in your code, it won’t write a recipe for you, and aside from suggesting a word or two, it won’t write emails or documents for you, either.

The company’s mission statement is famously “to organize the world’s information and make it universally accessible and useful.” In practice what it does is to present a ranked list of websites you can visit, interspersed with various kinds of ads, while occasionally extracting excerpts from those websites and presenting them on the search engine results page.

This has been an incredible business for Google, even as its utility for the average user has been flat to declining for some time now. Search engine optimization has rendered entire categories of search all but unusable. And even in cases where Google knows where to send you, the user is still usually at least a click away from the answers they seek.

For example, today I typed this into Google: “What are some styles of shoes that every man should have in his wardrobe?” It returned a bullet-pointed excerpt of a blog post; a list of five questions that “People also ask” (did I ask the wrong question?); and then a list of blog posts from various men’s fashion sites.

I entered the same query into ChatGPT and got this:

Every man should have a few staple shoes in his wardrobe that can be worn for various occasions. Some essential styles of shoes that every man should have are:

  • Oxford shoes: These are classic formal shoes with a sleek, refined silhouette and lace-up closure. They are versatile and can be worn with suits for work or dressed down with jeans for a more casual look.

  • Loafers: Loafers are a type of slip-on shoe that can be dressed up or down. They are comfortable and versatile, making them a great choice for both casual and formal occasions.

  • Sneakers: Every man should have a pair of comfortable, stylish sneakers in his wardrobe. They are perfect for casual occasions and can be worn with a variety of outfits, from jeans and a t-shirt to chinos and a button-up shirt.

  • Chelsea boots: Chelsea boots are a timeless style of ankle boot with a sleek silhouette and elastic side panels. They are versatile and can be worn with both casual and formal outfits.

It’s also a good idea to have a pair of dress shoes, such as wingtips or monk strap shoes, for more formal occasions. And for colder weather, a pair of sturdy, waterproof boots is a must-have.

Looking at the search results page compared to the chat transcript, Google feels positively prehistoric to me. The search box effectively returned an entire project — read these blog posts that we have ranked according to whatever arcane signals; dive into related questions that people are also asking — while ChatGPT3 just told me what I wanted to know.

I don’t know about you, but this looks like asteroid-level disruption to me:

This has significant downstream consequences for lots of companies. Much of the modern digital publishing sphere is built on companies publishing cheap “how to,” e-commerce, and “embedded YouTube clip” content and monetizing it through Google searches.

A world in which all of these are just presented to users at the point of search is one that could once again send the news media into an economic tailspin.

But that’s not the really scary stuff.

Here’s more from Newton:

There’s the way ChatGPT and similar tools can infinitely generate cheap, convincing text — and, just as importantly, infinite variations on that text — for use in influence operations, coordinated harassment campaigns, spam, and other harms. Platforms have historically struggled to determine to a high degree of accuracy which of their users are real and which are bots; when bots can be made to use tools like this, then, the potential for harm is real.

Finally, there’s the basic unknowability of what ChatGPT is really doing. For as great of an advancement as ChatGPT appears to be, it’s important to remember that there’s no real technological breakthrough here that made the bot appear to be smarter. Rather, OpenAI simply trained its LLM on far more parameters than other public models have to date. At some point, training AI models on exponentially more parameters than their predecessors caused an exponential leap in their abilities. But the mechanism through which that leap took place is still unknown to us — and it’s why no one who has built one of these things can tell you with any real specificity why it answered any particular question the way it did.

You should read the whole thing and subscribe.

This last point is something I fixate on a lot. Many years ago Charles Krauthammer wrote an essay about Deep Blue’s defeat of Garry Kasparov:

For me, the scariest moment of the match occurred when Murray Campbell, one of the creators of Deep Blue, was asked about a particular move the computer made. He replied, “The system searches through many billions of possibilities before it makes its move decision, and to actually figure out exactly why it made its move is impossible. It takes forever. You can look at various lines and get some ideas, but you can never know for sure exactly why it did what it did.”

You can never know for sure why it did what it did. The machine has already reached such a level of complexity that its own creators cannot trace its individual decisions in a mechanistic A to B to C way. It is simply too complicated. Deep Blue’s actions have already eclipsed the power of its own makers to fully fathom. Why did Blue reposition its king’s rook on move 23 of Game Two? Murray Campbell isn’t sure. Why did Adam eat from the apple? Does his maker know?
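If you want a concrete picture of why the trace is impossible, consider the shape of the algorithm. A chess engine like Deep Blue picks its move with a recursive minimax search: it scores an enormous number of leaf positions and propagates the best values back up the tree, and only the winning score survives. Here’s a toy, fully hand-built illustration in Python (the little game tree is made up for the example, not a real chess position):

```python
# Toy minimax over a hand-built game tree: nested lists are internal
# nodes, bare numbers are leaf evaluations. A real engine searches
# billions of leaves with a hand-tuned evaluation function.

def minimax(node, maximizing: bool) -> float:
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Root is a maximizing node with three candidate "moves".
tree = [
    [3, [5, 1]],       # move 0
    [[6, 2], 4],       # move 1
    [0, [7, [8, 1]]],  # move 2
]

scores = [minimax(child, maximizing=False) for child in tree]
best = max(range(len(tree)), key=lambda i: scores[i])
print(f"move {best} chosen, score {scores[best]}")
# Only the score survives; every rejected alternative -- the "why" --
# is discarded as the recursion unwinds.
```

Scale that up to billions of leaves per move and Campbell’s answer follows: you can inspect individual lines after the fact, but the decision itself leaves no legible record.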

Which is precisely where we are with all of the new AIs.

Call it what you will. To me, it’s the definition of magic. And magic is, inherently, dangerous because it runs counter to rationality. And reason is one of the safeguards of civilization.

If you find this newsletter valuable, please hit the like button and share it with a friend. And if you want to get the Newsletter of Newsletters every week, sign up below. It’s free.

But if you’d like to get everything from Bulwark+ and be part of the conversation, too, you can do the paid version.


