This week the Verge’s podcast Decoder interviewed former U.S. president Barack Obama for a discussion on “AI, free speech, and the future of the internet.”
Obama warns that future copyright questions are just part of a larger issue. “If AI turns out to be as pervasive and as powerful as its proponents expect — and I have to say the more I look into it, I think it is going to be that disruptive — we are going to have to think about not just intellectual property; we are going to have to think about jobs and the economy differently.”
Specific issues may include the length of the work week and the fact that health insurance coverage is currently tied to employment — but it goes far beyond that:
The broader question is going to be what happens when 10% of existing jobs now definitively can be done by some large language model or other variant of AI? And are we going to have to reexamine how we educate our kids and what jobs are going to be available…?
The truth of the matter is that during my presidency, there was I think a little bit of naivete, where people would say, you know, “The answer to lifting people out of poverty and making sure they have high enough wages is we’re going to retrain them and we’re going to educate them, and they should all become coders, because that’s the future.” Well, if AI’s coding better than all but the very best coders? If ChatGPT can generate a research memo better than the third-, fourth-year associate — maybe not the partner, who’s got a particular expertise or judgment? — now what are you telling young people coming up?
While Obama believes in the transformative potential of AI, “we have to be maybe a little more intentional about how our democracies interact with what is primarily being generated out of the private sector. What rules of the road are we setting up, and how can we make sure that we maximize the good and maybe minimize some of the bad?”
AI’s impact will be a global problem, Obama believes, which may require “cross-border frameworks and standards and norms.” (He expressed a hope that governments can educate the public on the idea that AI is “a tool, not a buddy.”) During the 44-minute interview Obama predicted AI will ultimately force a “much more robust” public conversation about rules needed for social media — and that at least some of that pressure could come from how consumers interact with companies. (Obama also argues there will still be a market for products that don’t just show you what you want to see.)
“One of Obama’s worries is that the government needs insight and expertise to properly regulate AI,” writes the Verge’s editor-in-chief in an article about the interview, “and you’ll hear him make a pitch for why people with that expertise should take a tour of duty in the government to make sure we get these things right.”
You’ll hear me get excited about a case called Red Lion Broadcasting v. FCC, a 1969 Supreme Court decision that said the government could impose something called the Fairness Doctrine on radio and television broadcasters because the public owns the airwaves and can thus impose requirements on how they’re used. There’s no similar framework for cable TV or the internet, which don’t use public airwaves, and that makes them much harder, if not impossible, to regulate. Obama says he disagrees with the idea that social networks are something called “common carriers” that have to distribute all information equally.
Obama also applauded the Executive Order issued by the White House last month, a hundred-page document which Obama calls important as “the beginning of building out a framework.”
We don’t know all the problems that are going to arise out of this. We don’t know all the promising potential of AI, but we’re starting to put together the foundations for what we hope will be a smart framework for dealing with it… In talking to the companies themselves, they will acknowledge that their safety protocols and their testing regimens may not be where they need to be yet. I think it’s entirely appropriate for us to plant a flag and say, “All right, frontier companies, you need to disclose what your safety protocols are to make sure that we don’t have rogue programs going off and hacking into our financial system,” for example. Tell us what tests you’re using. Make sure that we have some independent verification that right now this stuff is working.
But that framework can’t be a fixed framework. These models are developing so quickly that oversight and any regulatory framework is going to have to be flexible, and it’s going to have to be nimble.