Gerd Gigerenzer on How to Stay Smart in a Smart World


0:37

Intro. [Recording date: July 8, 2022.]

Russ Roberts: Today is July 8th, 2022. My guest is Gerd Gigerenzer. Gerd was last here in December of 2019, talking about his book Gut Feelings. His newest book is our topic for today: How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. Gerd, welcome back to EconTalk.

Gerd Gigerenzer: I’m glad to be back and to talk to you again.

Russ Roberts: My pleasure.

1:03

Russ Roberts: You write a lot about artificial intelligence and you say at one point that AI–artificial intelligence–lacks common sense. Explain.

Gerd Gigerenzer: Yeah. Common sense has been underestimated in psychology, in philosophy, everywhere else. It's a great contribution of AI to make us realize how difficult common sense is to model.

So, what that means is that, for instance, AlphaZero can beat every human in chess and Go, but it doesn't know that there is a game called chess or Go. A deep neural network, in order to learn to distinguish pictures of, say, school buses from other objects on the street, needs 10,000 pictures of school buses.

If you have a four-year-old and point to a school bus, you may have to point another time, and then the kid has gotten it. It has a concept of a school bus.

So, what I'm saying is: artificial intelligence, as in deep neural networks, has a very different kind of intelligence that does not much resemble human intelligence. The basic thing to understand is that deep neural networks are statistical machines that perform a very powerful search for correlations. That's not the greatest ability of the human mind. We are strong in causal stories: we invent them, we look for them.

A little child just asks, 'Why? Why? Why? Why do I have to eat broccoli? Why are the neighbors so much richer than we are?' It wants causal stories.

Another aspect of human intelligence is intuitive psychology. How can a deep neural network know about these things?

And, finally, there's intuitive physics. Even young children understand that an object that disappears behind a screen is not gone. How does a neural network know that? It's very difficult. It's a big challenge to get common sense into neural networks.

3:50

Russ Roberts: So, a big issue in computer science–we’ve talked about it many times on this program over the years–is that: Is the brain a computer? Is the computer a brain? They both have electricity. They both have on/off switches.

There is a tendency in human thought, which is utterly fascinating and I think underappreciated, that we tend to use whatever is the most advanced technology as our model for how the brain works. It used to be a clock. It was other things in the past. Now, of course, it’s a computer. And, there is a presumption that when a computer learns to recognize the school bus, it’s mimicking the brain. But, as you point out, it’s not mimicking the brain.

Russ Roberts: But, there may be some things that we call artificial intelligence that are brain-like and others that are not. What are your thoughts on the limits of that process? There’s a lot of nirvana, utopian thinking about what computers will be capable of in the coming years. Are you skeptical of those promises?

Gerd Gigerenzer: There's certainly a lot of marketing hype out there. When IBM [International Business Machines] had this great success with Watson in the game Jeopardy!, I was amazed–everyone was amazed. But it's a game–again, a well-defined structure. And even the rules of Jeopardy! had to be adapted to the capabilities of Watson. Then the CEO [Chief Executive Officer], Ginni Rometty, announced, 'Now, the Moonshot'–not going to the moon, but into healthcare. Not because Watson knew anything about healthcare, but because that's where the money was. And then, naive heads of clinics bought Watson's advice.

Watson for Oncology was the first product, for cancer treatment–only for its users to find out that some of the recommendations were dangerous, even deadly. And then, IBM clarified that Watson is at the level of a first-year medical student.

Here we have an example of a general principle: If the world is stable, like a game, then algorithms will most likely beat us–perform much better. But if there's lots of uncertainty, as in cancer treatment or investment, then you need to be very cautious. The claims are probably overstated–in that case, by the PR [Public Relations] department of, yeah, of IBM.

Russ Roberts: But, isn’t the hope that, ‘Okay, Watson today is a first-year medical student, but give it enough data, it’ll become a second-year medical student. And in a few years, it’ll be the best doctor in the world.’ And we can all go to it for diagnosis. We’ll just do a body scan, or our smart watch will tell Watson something about our heartbeat, etc. It will be able to do anything better than any doctor. And you won’t have to wait in line, because it can do this instantly.

Gerd Gigerenzer: That’s rhetoric. If you read Harari, or many other prophets of AI, that’s what they preach.

Now, I have studied psychology and statistics, and I know what a statistical machine can do.

A deep neural network is about correlations; it's a powerful version of a non-linear multiple regression, or a discriminant analysis. Nobody has ever talked about multiple regression as intelligence. They can do something else. We should not let ourselves be bluffed into the story of super-intelligence.

So, the real prospect is: deep neural networks can do something that we cannot do. And we can do something that they cannot do.

If we want to invest in better AI, smarter AI, we should also invest in smarter people. That's what we really need.

So: smarter doctors, more experts who can tell the difference–and not wasting lots of money on projects that don't work, like IBM's oncology effort. IBM also offered Watson to bankers for investment. If Watson could invest–if it were the great investor–then IBM wouldn't be in the financial trouble it is.
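To make concrete Gigerenzer's earlier point–that a deep neural network is, mechanically, a powerful form of non-linear regression–here is a minimal sketch, assuming NumPy. The toy data, the single hidden layer, and plain gradient descent are illustrative assumptions, not anything from the conversation: the network simply fits weights to correlations between inputs and outputs.

```python
# A tiny one-hidden-layer network fit by gradient descent: at bottom,
# non-linear regression. Nothing here is "understanding"–just weights
# adjusted to match correlations between inputs and outputs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=(200, 1))  # noisy non-linear target

W1, b1 = rng.normal(size=(16, 1)), np.zeros((16, 1))  # hidden layer
W2, b2 = rng.normal(size=(1, 16)), np.zeros((1, 1))   # output layer

lr = 0.05
for _ in range(2000):
    H = np.tanh(W1 @ X.T + b1)      # hidden activations, shape (16, 200)
    pred = W2 @ H + b2              # predictions, shape (1, 200)
    err = pred - y.T
    # Gradients of the squared error (constant factors folded into lr)
    dW2 = err @ H.T / len(X)
    db2 = err.mean(axis=1, keepdims=True)
    dH = (W2.T @ err) * (1 - H**2)  # chain rule through tanh
    dW1 = dH @ X / len(X)
    db1 = dH.mean(axis=1, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

print("final mean squared error:", float((err**2).mean()))
```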

8:54

Russ Roberts: What I love about that insight is that it focuses on what distinguishes where artificial intelligence–or at least computers at this stage–can be extremely powerful versus not. And that's stability.

There's a more general principle–and I think it's in your book; certainly it's in your other books or in other people's books–which is: fundamentally, when we're looking at correlations in big data, we're presuming that the past will tell us what the future will be like. And sometimes it can, because it's stable.

The environment is stable enough that whatever patterns were revealed in the past will persist in the future.

But in most human environments, they don't. And so, about the promise of big data–let me say two things.

Past EconTalk guest Ed Leamer likes to say, 'We are storytelling, pattern-seeking animals.'

And we are good at patterns and causation–sometimes we're correct–but the computer doesn't have any common sense to examine whether a correlation is just a correlation or a causation.

Gerd Gigerenzer: So, to the general point–I've been studying simple heuristics that make us smart. You probably know the story of Harry Markowitz, who got his Nobel Prize for an optimization model that tells you how to diversify your money across N assets.

But when he invested his own money for the time after retirement, did he use his Nobel Prize-winning optimization method? No, he didn't. He used a simple heuristic.

A heuristic is a rule of thumb, and this one is called: invest equally. It's called one over N, where N is the number of assets or options. If you have two, 50/50. If you have three, a third each. That's a heuristic.

And in a world of calculable risk, as it's called in decision theory–that is, a stable world, yeah?–that would be stupid.

But, in the real world of finance, studies have shown it often outperforms Markowitz optimization, including modern Bayesian variants.

The general lesson is: There's a difference between stable worlds and uncertain, unstable worlds.

And particularly, if the future is not like the past, then Big Data doesn't help you. In finance, with Markowitz optimization, you need lots of data to estimate all these parameters. The heuristic, 1 over N, needs no data on the past. It's the opposite of Big Data.
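A minimal sketch of that contrast, assuming NumPy: the 1/N heuristic needs no historical data at all, while a Markowitz-style minimum-variance portfolio must estimate a covariance matrix from past returns. The simulated returns below are hypothetical, purely to show the difference in what each method demands.

```python
import numpy as np

def one_over_n(n_assets):
    """The 1/N heuristic: split money equally across all assets."""
    return np.full(n_assets, 1.0 / n_assets)

def min_variance_weights(past_returns):
    """Markowitz-style minimum-variance weights (no short-sale constraint).
    Requires estimating a covariance matrix from past data."""
    cov = np.cov(past_returns, rowvar=False)
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()

rng = np.random.default_rng(0)
past = rng.normal(0.001, 0.02, size=(250, 3))  # hypothetical daily returns, 3 assets

print(one_over_n(3))               # [0.333 0.333 0.333] -- no estimation at all
print(min_variance_weights(past))  # hinges on noisy covariance estimates
```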

Russ Roberts: Well, except for the problem, you’ve got to figure out N. N doesn’t come–the number of assets is not given.

Gerd Gigerenzer: That’s true.

Russ Roberts: That’s another problem.

Gerd Gigerenzer: Yeah. But, that’s the same thing for Markowitz optimization.

Russ Roberts: Yeah. For sure. For sure.

12:16

Russ Roberts: Now, you are a strong and, I think, eloquent promoter of human abilities and a counterweight to the view that we're going to be dominated by machines–that they're going to take over, because they'll be able to do everything. So, we're kind of remarkable: our brains are really amazing. And yet at the same time, there's a paradox in your book, which is that you're very worried about the ability of tech companies to use Big Data to manipulate us. How do you resolve that paradox?

Gerd Gigerenzer: So, the statement that you made before is just right on the point. So, it’s not about AI by itself. It’s about the people behind AI and their motives.

We usually talk about whether AI will be omniscient, or AI will be just an assistant tool; but we need to talk about those behind it. What’s the motive? So, that is what really worries me.

It's certainly the case that we are in a situation today where a few tech companies–mostly a few relatively young white males who are immensely rich–shape the emotions and the values, and also control the time, of almost everyone else. And that worries me.

You are a free market person. And I, also, am a person who tries to believe in people’s abilities. But we need to be aware that the opposite won’t happen.

So, here is one thing we might think about to improve the situation: Google gets 80% of its revenue from advertising; Facebook, 97%. And that means the user is no longer the customer.

So, in the book, How to Stay Smart, I use an analogy: the free coffee house. Imagine in your hometown there is a coffee house that offers free coffee. Soon, all the other coffee houses will be bankrupt.

We all go there and enjoy our time. But the tables are bugged, and on the walls are video cameras which record everything we say, to whom we say it, and when; and all of that gets analyzed. And, in the coffee house, there are salespeople who interrupt us all the time in order to make us buy personalized products.

The sales people are the customers of this coffee house. We, who enjoy our coffee–we are the product being sold: precisely, our time, our attention.

So that’s roughly how the business model of Facebook and others functions.

And it also gives us an idea about a solution. Namely: Why don't we bring real coffee houses back, where we can pay rather than being the product?

16:14

Russ Roberts: The problem with that–and by the way, you know, long-time listeners can go back to my earliest episodes on this topic, where I started off extremely skeptical and not worried at all; then a little bit worried; and now, today, I'm somewhat worried.

And, listeners will recognize my metaphor of the repair person who comes to your house to fix your washing machine. Does it for free. But, while he is there, he takes a lot of photographs of what you bought and what’s on your shelves and says, ‘Oh, by the way, you don’t mind if I use these to sell to my friends? Because they want to know what you buy and what you’re interested in. What books are on your shelf and the receipt that you have here for this product you bought.’

And, there is something creepy about it. The creepy part for me is that most people don't think about it. They don't realize that there are cameras in the coffee house. They don't realize that everything they say is being recorded–who they're talking to, what the topics are, and so on.

On the other hand, you could argue–and sometimes I argue like this, because it's interesting and it may be true–'Okay. So those salespeople interrupt my conversation every once in a while. They don't literally shut me up. They just hold an ad next to my friend's head and distract me from the conversation we're having in the coffee house.'

And I find that somewhat annoying. But actually, it’s kind of useful, because sometimes it’s something I actually want, because they know a lot about me. And, the coffee is free.

So, you’re telling me, I need to go to this coffee house over here where I don’t get interrupted. Okay. That’s nice. But, the coffee is $5 a cup. What’s scary about it, to you?

Russ Roberts: I think there is stuff to be scared about. I'm being a bit rhetorical here. But I'm increasingly scared, so take your shot.

Gerd Gigerenzer: So, there are two kinds of personal information that need to be distinguished. One is collecting information about what books you buy and recommending other books to you.

The other is collecting all the information about you that one can–including whether you're depressed today, whether you are pregnant, whether you have had heart failure or have cancer–and using that information to target you at the right moment with the right advertisement.

So, that’s the part that we do not need.

And also, think of some countries' histories. I'm living in Berlin, and East Germany had the Stasi.

Russ Roberts: The secret police.

Gerd Gigerenzer: If the Stasi had had these methods, they would have been over-enthusiastic.

So, we see something similar in China and other countries.

And, the final point I want to make: what people underestimate is how closely tech companies are intertwined with governments. So, they say, 'Oh, it doesn't matter whether Zuckerberg knows what I'm doing, as long as the government doesn't know.' Uh-uh. Snowden showed, a few years ago, how close the connection is in the United States. In the United Kingdom, there's Karma Police. And in many other countries.

So, then–let me make another point. What would it cost us to get freedom and privacy back? I made a little calculation.

If you take Facebook–now the Meta Corporation–and you would reimburse Zuckerberg for his entire revenue, it would come to about $2 per month, per person. That's all.

And for those countries that cannot afford the $2, the rest of us pay $4.

And that would solve the problem. So there is a solution for that, if you want to go there. The question is how we get there.
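A back-of-the-envelope sketch of the kind of calculation Gigerenzer describes. He doesn't state his inputs here, so the revenue and user figures below are hypothetical round numbers, chosen only to show the arithmetic.

```python
# Replace the ad revenue with user fees: fee = revenue / (users * 12 months).
annual_ad_revenue = 84e9   # assumed yearly revenue to be replaced, in dollars
monthly_users = 2.9e9      # assumed number of monthly users

fee = annual_ad_revenue / (monthly_users * 12)
print(f"${fee:.2f} per user per month")  # ~$2.41 with these assumed inputs
```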

20:44

Russ Roberts: The reason I'm a little bit skeptical of that is that I pay $5 a month for an app that is helping me with my Hebrew. I pay $5 a month for a lot of things, by the way–there are a lot of Substack and Patreon accounts I pay $5 a month to. So, I have a lot of those–$60 a year each. Somebody has decided that $4.99 is pretty easy for people to swallow. And, by the way, when it says '$60 a year,' sometimes I go, 'Oh, that's a lot,' but '$4.99 a month'–that's nothing.

Russ Roberts: Anyway, I have a bunch of those. And, you're suggesting that Zuckerberg could have twice the money–twice the revenue he has now–if he charged people $4 a month instead of $2 a month.

Now, economics predicts that, in general, when you make people pay $4 for something that they used to get for free, you won’t have 2 billion users. You’re going to have fewer. But, as long as you still have a billion, you’re suggesting that going to $4 a month would have such an enormous effect on their user base that Zuckerberg won’t do it voluntarily: we’d have to impose something through regulation.

Gerd Gigerenzer: Yeah. That’s the problem, yeah, how to get there. And I see the problem. But I’m just saying that it wouldn’t be–in terms of the contribution of individuals to get their privacy back–it wouldn’t be much. It’s just a coffee.

Russ Roberts: But, most people don’t care.

Gerd Gigerenzer: Yeah. That’s the problem.

Russ Roberts: Would you argue–are you arguing that they should care? I think you’re arguing they should care, because they don’t realize–I think a lot of people don’t realize what they’re actually being surveilled about, how widespread it is. But, you’re also arguing that even if they knew–and I think many people do know, we’ll talk about that maybe in a minute–they go, ‘Eh, what’s the big deal? I get a lot of products that I’m interested in. It’s actually pretty good.’

They don't realize, potentially, that the products they see in their search engine aren't really the ones that they want. They're the ones that the real customers have lined up–and we recently had a conversation about this with the head of Neeva, who used to work on ads at Google.

So, let me try a different way to get at a better situation, see what you think. This is the way an economist might think about it. So, that coffee shop, the real problem is there’s only one free coffee shop and everybody is in that coffee shop.

So, when I'm on Twitter, I've got my followers; I've accumulated them over years. I go to the new coffee shop that competes with it: it's empty. It's very hard for a new coffee shop to start up. One way to think about how to improve the situation is: what we'd like is for there to be a lot of coffee shops.

And one of the coffee shops–the coffee is free, so are the pastries, it’s fantastic quality. The problem is that they force you to give blood, when you come in. They do an MRI (Magnetic Resonance Imaging). They know everything–like you say, they know all your mental states. It’s very invasive.

But, there's another coffee shop down the street where it's not free. It might be a subscription model: once you are in the coffee shop, you can have as much coffee as you like. There's a third coffee shop where you pay by the cup, because some people don't drink enough to make the full subscription worth it.

The problem is I can’t find my friends in those coffee shops.

What I would suggest, for those people–you, and maybe me–who are worried about surveillance and government intervention against us, tyranny: let's find a way that I can port my friends to a different coffee shop without having to start from scratch.

Possible?

Gerd Gigerenzer: I mean, humans have imagination and we could find a way to get there. It’s just, we also need people who want that.

And, as you hinted before, there's the so-called Privacy Paradox, which is that, in many countries, people say their greatest concern about digital life is that they don't know where their data is going and what's done with it.

If that’s the greatest concern, then you would expect that they would be willing to pay something. That’s the economic view. You pay something for that.

Germany is a good case, because in Germany we had the East German Stasi. And we had another history before that–the Nazis, who would have been delighted by such a surveillance system.

And, so Germans would be a good candidate for a people who are worried about their privacy and would be willing to pay.

That’s what I thought.

So, I have now done three surveys since 2018, the last one this year–representative samples of all Germans over 18. And I asked them the question: 'How much would you be willing to pay for all social media if you could keep your data?'

So, we are talking about the data about whether you are depressed, whether you’re pregnant, and all those things that they really don’t need.

And, so: ‘How much are you willing to pay to get your privacy back?’

Seventy-five percent–75%–of Germans said ‘Nothing.’ Not a single Euro.

And, the others were willing to pay something, yeah.

So, if you have that situation–where people say, 'Oh, my greatest worry is about my data,' and at the same time, 'No, I'm not paying anything for that'–then that's called the Privacy Paradox.

26:52

Russ Roberts: So, I found that fascinating. I want to give you what came to my mind and let you react to it. So, at night, when I get ready for bed, I close the curtains. I don’t want people looking at me as I get ready for bed.

I suppose I would feel differently if there were a camera taking a photo of my pre-sleep preparations–if no one could see my face, it went out onto the Internet, and no one could identify me. It was just my body, but not obviously mine. And the only ones who look at it are machines that say, 'Wow, he's fatter than I would've thought. Let's send him some ads for weight loss.' Or, 'Let's send him some ads for books about exercise or dieting.'

Again, I might be excited to get those. It might be wonderful. The real problem would be if the books they send me are really bad books. But people have paid to get those ads in front of me, and I'm stuck looking at them, and I don't realize that, and so on.

I think the Privacy Paradox comes from the fact that, when you tell me my data is available on the web, I think, 'Well, but no one person is really looking at it.' They can, though: there are individuals who could look at it.

Russ Roberts: But, so, Mark Zuckerberg does have my data. Not mine so much–I'm rarely on Facebook–but Facebook users'.

But, I assume he doesn’t spend each night going through it, going, ‘Wow, I can’t believe how fat he is.’

So, you know–I don't have a smart scale. But if I had a smart scale, they'd really know exactly how much I weigh. And they'd know that my shoes were artificially making me look 5'7" instead of 5'6", and so on.

But, I think most of us assume that it's anonymous, more or less.

And that's the problem: it doesn't have to be, and we kind of ignore the possibility that it might not really be anonymous.

Gerd Gigerenzer: I see people sleepwalking into surveillance. So, for instance, in the studies we have done, most people are not aware that a smart TV may record every personal conversation people have in front of it, whether in the living room or in the bedroom. At least in the German data, 85% are not aware of that. It can be found somewhere in the end-user notices–but who reads those things?

Also, for instance, think about how surveillance is already in a child's life. Remember Mattel's Barbie? The first Barbie was modeled after a cartoon character from a German tabloid, the Bild-Zeitung, and it had totally unrealistic long legs and a tailored figure. The result was that quite a few little girls found their own bodies not right. Then came a second, talking version, which could utter sentences like, 'Math is hard. Let's go shopping.'

So, the little girls got a second message: they're not up to math; they are consumers. And the 2015 generation, called Hello Barbie–which got the Big Brother Award–can actually hold a conversation with the little girl. But the little girl doesn't know that all the hopes and fears and anxieties she entrusts to the Barbie doll are recorded and sent off to third parties, analyzed by algorithms for advertising purposes.

And also, the parents can buy the recordings on a daily or weekly basis to spy on their child.

Now, two things may happen, Russ. One is the obvious: maybe when the little girl is a little bit older, she will find out, and the trust in her beloved Barbie doll is gone–and maybe in her parents, too.

But what I think is the even deeper consequence is this: the little girl may not lose trust. The little girl may think that being surveilled, even secretly, is just how life is.

And so, here is another dimension: the potential of algorithms for surveillance changes our own values. We are no longer so concerned about privacy. We still say we are concerned, but not really. And then, we'll get a new generation of people.

32:42

Russ Roberts: Yeah; I don't know–I mean, it sounds horrible. And tied to government–authoritarian government–it's potentially terrifying. I do think the smart TV is a great example of this privacy paradox, the way I'm thinking about it: 'Okay, it hears what I say in the bedroom; but it doesn't know it's really me. No one's paying attention; it's just an algorithm that analyzes it.' But, first of all, that's just today. And, I think it is a very dangerous thing for all kinds of obvious reasons.

I think the other thing, though, is–the movie Minority Report, which is in my top 10 movies alongside The Lives of Others, which is about the Stasi, by the way. I’ll just throw that in as a bonus. But, Minority Report was very prescient. A lot of it is about the dangers of smart technology, artificial intelligence used to predict guilt before a crime. It has this idea of precognition–that it knows what you’re going to do because it has enough information about you to forecast.

And of course, your point, which is deep and true, is that we’re human beings. We’re not chess boards.

But, one of the things about that movie is that those kinds of movies usually rely on the fact that there’s some corner of existence that’s still private. In that movie, there’s a sequence where the hero is able to do something outside of the surveillance world. There’s an underground, there’s a corner, there’s a place.

And, the reality, though, is that if that world were here, there would be no such place. And, I think that’s the world we ought to be worried about.

In the movie, there's a corner because otherwise the plot won't work, it's not interesting, and the hero gets killed–and that's the end of the story. But, in real life, you really don't want everything to be like the Barbie–I don't think–listening to you, recording, and someone else knowing everything about you that you're unaware of. It seems horrible.

Gerd Gigerenzer: Yeah. And Minority Report also illustrates another twist: predicting whether someone will commit another crime is a situation of high uncertainty, where algorithms are actually not good.

So, we know that recidivism predictions can be made by simple heuristics with two or three variables just as well as by the secret COMPAS [Correctional Offender Management Profiling for Alternative Sanctions] algorithm or others. The two or three variables are: previous offenses and age–maybe a third one, gender. Okay. That is something I find very interesting because of the two sides. There are the enthusiasts, who tell us that soon there will be a super-intelligence into which we can upload our brains–a super-intelligence that, for whatever reason, wants our brains uploaded. That's a Californian dream of eternal life.

And there's also the other side–this great book by Shoshana Zuboff about surveillance capitalism. Both sides assume that the algorithms are, or will be, perfect. And that's only true in a stable world. In astronomy, they will be very useful. But elsewhere it's not the case, and I don't see a way to get there. They will improve a little bit, but they will not get there.

And then we have a situation where we are–our behavior is predicted and controlled by algorithms, which are actually not very good.

But still, we submit to the recommendations. On YouTube today, some 75% of all videos watched are no longer chosen by the viewers; they are recommended by an algorithm or just auto-played.

And that’s why I think an important partial solution is: Make people smart. Open their eyes and make them think about what’s happening.
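For concreteness, here is a minimal sketch of the kind of two- or three-variable recidivism heuristic Gigerenzer mentions above. The cues (prior offenses, age) come from the conversation; the thresholds are hypothetical, chosen only to illustrate how frugal such a rule is next to a black-box model like COMPAS.

```python
def predict_reoffense(age: int, prior_offenses: int) -> bool:
    """A frugal two-cue rule: flag as risky if young or with many priors.
    Thresholds here are illustrative assumptions, not validated values."""
    return age < 25 or prior_offenses >= 3

print(predict_reoffense(age=22, prior_offenses=0))  # True  (young)
print(predict_reoffense(age=40, prior_offenses=1))  # False
```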

37:25

Russ Roberts: Yeah; I've always liked that solution, which you could call more information, raising awareness. A simple way to describe it: it's called education. I've spent a good chunk of my life thinking about, say, confirmation bias and similar problems. And, when you make people aware of them, it's pretty cool. It's a good thing to be aware of–that you're easily fooled.

I think you quoted Richard Feynman: 'The first principle is that you must not fool yourself–and you are the easiest person to fool.' So, the more we make people aware of that, you think, the better the world would be.

I’ve become a little bit skeptical of people’s desire for truth. I think they like comfort more than they like truth.

And so, education–here I am, the president of a college, and I run a weekly podcast that tries to educate people–but it's a quixotic mission, I'm afraid. It may not be the road to real success.

But, I would say it's the only road I want to go down–and I think it's the right road: to encourage people to be aware of these things and to be more sensitive to them. [More to come, 38:39]


