From Dark Forces To Creative Liberation, What’s Ahead?


Generative AI is enjoying an ongoing moment with this month’s release of GPT-4 and its potential impact on, well, everything. We have only lightly scratched its surface, with plenty more to follow from the virtual chatbots currently capturing our imaginations.

When it comes to AI, several questions surface quickly: which jobs, and the people doing them, might be replaced; what IP and copyright issues are at stake; what ethics and governance models (or lack thereof) are in place; and who or what decides how deeply embedded these systems will become in our everyday lives.

AI is still in its infancy, yet it has the potential to transform critical systems such as healthcare, agriculture and scientific research in the most astonishing ways. It could equally develop in ways that feel disturbing and devaluing, through a combination of unintended consequences and naive or bad actors in positions of power over its development.

Whether you’re among the 35% of US businesses that IBM says are already directly using AI, the 42% who are exploring it, or even the remaining 23% who are looking the other way for now, it’s a near-future reality.

With this in mind, I sat down for an exchange with Gilles Babinet, Vice President of the French National Digital Council, representing France for Digital at the EU. His role is to explore the potential, impact and boundaries of new technologies, and to assess the opportunities and threats coming down the line. Here we explore the possibilities and pitfalls of what could be ahead.

Let’s start with AI and its applications such as ChatGPT. At what stage of development would you say we are today?

GB: ChatGPT is the ‘aha’ moment for Western countries, just as it was for the Chinese and wider Asian market when millions of people watched the computer program AlphaGo beat top Go player Lee Sedol in a tournament in South Korea in 2016. It came as a shock to the market and triggered an AI rush.

But large language models (LLMs) and reinforcement learning, which are the basis of ChatGPT, are not recent. Language models were already in use back in the ’90s, while reinforcement learning has roots going back to the ’80s. The main breakthrough has been to bring these to scale and to shift people’s understanding so there is no doubt about how powerful AI can be.

What’s your view on AI accelerating out of the realm of science, research and academia – and into the well-funded, high speed, commercial business world with lots of media and consumer interest?

GB: For years, decades even, these technologies were too complex to be broadly used. Now, they are becoming simpler, usable out-of-the-box, and therefore are much more likely to be tried and implemented on a far larger scale.

But with that come huge risks, from disinformation and financial scams (search ‘AI love scam’ or ‘AI CEO fraud’) to the erosion of trust in one another; even democracy itself may be put at risk by this new technology.

As Henry Kissinger and Eric Schmidt have co-written, this technology may be the most challenging issue our modern world has ever faced.

My main concern regards the hacking of our free will, the commodification of our lives, and the risk that AI could be used to bend reality at the expense of democracy and isolate us from one another.

SA: On the flip side, we’ve seen positive (as well as negative) examples of what happens when scientific research and discoveries move into commercial applications. I’m thinking of CERN and its development of the Large Hadron Collider to explore particle physics, whose discoveries and technologies have gone on to be used in medical diagnosis and therapy, including cancer treatments.

It feels to me like positive benefits are within reach if we can define the right boundaries. But that’s a big if.

So, back to the dark side. Now we’re pretty terrified; can you elaborate?

GB: Until now, we have been struggling to deal with relatively unsophisticated technologies such as spam and phishing. Now, imagine being on a real video call with someone who has the same face and voice as someone you know (your mother, a friend, your boss) but who is entirely fake.

One may think this is science fiction, but it has already happened, and with the dissemination of AI technologies it will become even easier for anyone to fake or impersonate someone.

With so much to make us anxious, does more excite you about its potential for positive change than cause fear?

GB: It is simply impossible to say which side, positive versus negative, will prevail – there are simply too many variables.

Will nations be able to regulate? Will there be new, effective tools to detect AI scams? Will Moore’s Law, whereby the number of transistors in a microchip doubles every two years, keep running for decades or come to an end?
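(As a quick aside on the arithmetic behind Moore’s Law as stated here – a doubling every two years – the compounding is easy to sketch. The starting count below is invented purely for illustration.)

```python
def transistors(n0, years):
    """Project a transistor count forward under Moore's Law:
    the count doubles once every two years."""
    return n0 * 2 ** (years / 2)

# From an illustrative 1 billion transistors, a decade of
# two-year doublings means 2**5 = 32x growth.
print(transistors(1e9, 10))  # 3.2e10
```

Five doublings in ten years is why even a few more decades of Moore’s Law – or its end – changes every forecast in this space.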

The one sure thing is that no one can now stop these technologies from rolling out.

Therefore, we must increase the number of talented people, researchers and startups entering the industry, so that we have more forces to counter the invisible and dark ones when they appear.

SA: Trust is clearly critical here and with geopolitical tensions being more sharply defined along pro vs anti West ideals and dominance, it’s unclear how we’ll reach a global digital governance model the world can adhere to.

One model of ‘cyber sovereignty’ is forming that could challenge the openness and interoperability of digital economies, versus the lighter-touch regulatory model favoured by democracies such as those within the European Union. We’re a long way from an ideal common standard, with no clear path forward.

We’re clearly at an important juncture where the balance of AI’s negative or positive impact on society could go either way. When you’re making recommendations to government on the future role of technology, what positive role do you believe AI could play both in our economies and in society?

GB: We urgently need to have new productivity tools as there are so many challenges to fix (environment, underdeveloped countries, etc) that won’t be solved with our current technologies – and AI is very likely to bring these rapidly.

However, the risks are high, and we therefore need to have strong, effective regulations.

While many states and regulators are starting to realize how dangerous some social media can be – both to democracy and to people’s mental health, especially that of youngsters – they are not yet acting quickly enough.

So, what should they be doing? And what model of regulation do you think is needed – industry self-regulation, government, or a mix of the two?

GB: The fact that this represents a paradigm shift plays out in just how hard it is for regulation to deal with the newcomers. There are permanent concerns regarding antitrust, data regulation, data misuse, national security, etc. in China, the US and Europe. Who would have thought that TikTok could become a national security concern for both the US and Europe?

In the longer run, I can see regulation moving toward two new territories: (i) crowd-regulation (already emerging in the Digital Services Act in Europe), where it would be the users who declare issues such as improper content on social media; and (ii) AI-based regulation to counter the platforms’ own AI.

It is very easy to tweak a platform’s content regulation, and most of the time it goes unnoticed. Only permanent, automated oversight that can detect algorithm modifications would be effective enough to match the power of the big tech platforms.

More generally, all this would work only if there is a way to educate as many people as possible on both the risk and opportunity of these technologies… and to hope for the best.

This is such an important point – as we become absorbed in AI’s potential we mustn’t lose sight of the impact on wider society and individuals. Can you elaborate on what forms of education you believe are needed?

GB: I am currently working on this with the French digital minister. We believe it can start very simply, by answering people’s basic concerns: will my work be disintermediated by AI? Will AI become sentient? And then progressively open up more complex issues: let people understand what a convolutional neural network (a deep learning architecture) is, and maybe even become experts themselves.

We need to pave the way toward this with a lot of steps that let anyone reach the highest level of expertise possible.

And what about the impact of AI on culture? What is your view of how it is already and will increasingly impact creativity and art?

GB: On the one hand, I can conceive there will be lots of legal issues, in particular around publishing rights, that could ignite the same type of trials (probably even bigger) as the ones we had in the early 2000s with Napster and similar P2P platforms.

But, on the other hand, I can see AI being a huge boost to artists’ creativity. In most cases, I can see AI increasing the potential for human innovation and creativity.

SA: I’d like to imagine that over time you might develop your own personal AI ‘creative partner’. There are two jobs for AI: one that helps us be more organized and productive by automating the things that contribute to creativity but aren’t creative in themselves; and another that is more of a muse-and-magician role, helping to conjure up unexpected ideas and journeys yet to be imagined.

I’d love to hear a bit more on that creative impact. In your view, what are some of the best ways we’re using AI in this way now? And how do you see this developing in the future?

GB: Just try DALL-E. That is incredible. I asked for images for the cover of my next book and after only one hour, it came back with some options that were so good that I wondered how to show them to the graphic designer who had tried to make a few initial drafts. What if this very same designer had themselves used DALL-E? She would certainly have come up with far better versions than my attempts.

SA: I agree that many of the tools now available at our fingertips are incredible and the way we can increasingly access them is liberating.

Culturally, I think this is very exciting, as how we interpret and interact with the world as human beings is so socially constructed and nuanced. We’ve been forced into a common standard based on technology constraints for so long now, and it will be exciting to see how this changes as we and technology evolve together in ways that are conceptually different and differently constructed.

We also see a lot of progress and real world applications relating to genomics, biometrics, blockchain, virtual and augmented reality, 3D printing, robotics, etc. What excites you most about how you see these technologies increasingly integrated and interacting in our lives?

GB: I wonder whether we could even confine the impact of AI to a limited number of fields. I can see it being applied to technologies, arts, sciences, and beyond. In agriculture, for example, I can see huge productivity gains.

A farmer has to deal with a huge number of factors at any one time and these change throughout the farming process: temperature, moisture, soils, pricing, seeds, fertilizers, mechanical processes, to name a few. There are hundreds of them – and too many to be efficiently optimized.

When we bring AI to this example, it can be very good at dealing with multifactorial processes as well as sequential ones (which change over time). The potential is for a more productive agriculture, which is greener and uses much less fertilizer, creating fewer externalities, and potentially creating positive outcomes such as carbon capture.
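(To make the multifactorial point concrete: even with just two of the hundreds of factors a farmer juggles, finding the best combination is a search problem. The sketch below is purely illustrative – the yield function, its coefficients and the input ranges are invented, not from any real agronomic model.)

```python
import itertools

def yield_estimate(water_mm, fertilizer_kg):
    """Toy (invented) yield model: output rises with both inputs,
    with diminishing returns and a penalty for over-fertilizing."""
    return (10 * water_mm ** 0.5
            + 8 * fertilizer_kg ** 0.5
            - 0.05 * fertilizer_kg ** 1.5)

# Exhaustive grid search over two factors; a real AI system would
# optimize hundreds of factors, sequentially across a season,
# where brute force is impossible and learned models take over.
grid = itertools.product(range(10, 101, 10), range(0, 101, 10))
best = max(grid, key=lambda p: yield_estimate(*p))
print(best)  # (100, 50): more water always helps here, but
             # fertilizer peaks well below the maximum dose
```

The interesting outcome of even this toy search is the point GB makes: the optimum uses notably less fertilizer than the maximum available, which is exactly the kind of greener, lower-externality result he describes.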

This potential to solve the pressing global problems facing humanity is our huge hope. Is AI already being used in this way in agriculture?

GB: So far there is no mass-market approach to this, but the research coming from different universities is promising. MIT, Carnegie Mellon and Technion are among the universities working in the field. Some companies, such as John Deere, are also strong proponents of AI’s potential to transform the sector. Although it is difficult to assess the precise potential, one can easily conceive that robotisation and AI will have a massive impact on agriculture within 10 years.

SA: Like many others, I’m increasingly involved in the care of ageing parents, and it’s easy to underestimate how severe loneliness can be for those in more individualistic (versus community-based) societies or subcultures. Its implications for health and quality of life are huge, and in this regard I see a very practical role for AI combined with robotics to fill a much-needed gap – whether that’s supporting, stimulating and keeping safe people with dementia, or keeping alive the affection and memories of someone loved over a lifetime who is no longer there.

It’s these kinds of applications that I think could have a very positive and human impact on our lives and how we integrate technology into them.

It’s a fascinating area to focus attention, with huge implications – and shows the force for good that AI could be in the world and in our future. Will you personally be a part of that?

GB: I would be happy if I could have helped AI to be a key element of the decarbonisation solution to solve the environmental crisis. I believe it is going in the right direction, but there is a lot riding on finding the right balance between innovators and regulators – and that’s not easy at a global scale.

SA: Thank you so much for sharing your thoughts and for all the amazing work you’re doing as we navigate this tipping point and move boldly into the future.
