Nick Clegg doesn’t want to talk about superintelligence


I think her product has a profoundly democratizing effect. In theory, a child sitting in a rural town in Brazil should be able to receive the same responsive interaction with an Efekta AI tutor as someone living in Mayfair.

Has anything been lost with the introduction of AI into the classroom? Will we end up with a generation of students who use chatbots as a crutch to draft essays, solve problems, and so on?

They’ll do it anyway. Trying to exclude AI from schools makes no sense. The question is how to integrate AI into education. Bad teachers will use these tools poorly, and good teachers will use them very well, just as they did with whiteboards and calculators.

But we are talking about a more fundamental change. I ask what it might mean if students never develop basic skills.

When calculators were invented, people thought children would never be able to do mental arithmetic. That did not turn out to be the case. AI will have an impact, of course. But I think the net effect on educational performance should be positive.

Children are likely to be uniquely vulnerable to the types of risks associated with chatbots. How do you think about those risks?

Of course there are risks – particularly that vulnerable adults and children become emotionally dependent on, and invested in, a relationship with something that has a seemingly human presence in their lives.

At a societal level, we should take a very precautionary approach. I think there should be clear age limits on how affective AI can be made available to young people.

Like Australia’s social media ban for under 16s?

There’s no point in imposing a ban if you can’t verify people’s ages. This is where policymakers rush to grab headlines about a ban without thinking through the very difficult practicalities. Unless you want all these platforms to hold everyone’s passport details? My view for a long time has been that the only way to do this is through the choke points of iOS and Android, at the app store level.

But in principle, I think you should take a similar precautionary approach. The tendency to become too emotionally invested in, and perhaps unduly influenced by, a kind and patient voice that listens to you around the clock is very real.

I don’t think this poses a risk at all with the type of products Efekta produces.

Even though AI literally takes on the role of teacher?

Well, no, because it isn’t. Agentic AI produced by companies like Efekta won’t have some sort of secret exchange in the middle of the night where it says all sorts of horrible things to a pupil. It is a teacher-controlled experience.

You spent almost seven years at Meta. During that time, artificial intelligence became the frontier technology. I’m curious how your experience at Meta has influenced your view of the opportunities, risks, and limits of AI, and of the quest for superintelligence.

If you ask three people in the same organization about superintelligence, you will get three different answers. My impression is that everyone in Silicon Valley has to say they are within touching distance of artificial general intelligence or superintelligence, because that is how you attract the best data scientists. I have a hard time engaging with a concept that hand-wavy.
