How AI safety took a backseat to military money


Hello, and welcome to Decoder! I'm Hayden Field, senior AI reporter at The Verge and your guest host for Thursday's episodes. I have a couple more shows for you while Nilay is out on parental leave, and we're going to spend more time diving into some of the unexpected consequences of artificial intelligence.

Today, I'm talking to Heidy Khlaaf, chief AI scientist at the AI Now Institute and one of the industry's most prominent experts on AI safety in autonomous weapons systems. Heidy has worked with OpenAI in the past: from late 2020 to mid-2021, she was a senior systems safety engineer at the company during a critical period, when it was developing safety evaluation frameworks and risk analysis for its Codex coding tool.

Now, the same companies that once put safety and ethics at the center of their mission statements are actively selling and developing new technology for military applications.

In 2024, OpenAI removed a prohibition on "military and warfare" use cases from its usage policies. Since then, the company has signed a deal with the autonomous weapons maker Anduril, and last June, it won a $200 million contract with the Department of Defense.

OpenAI isn't alone. Anthropic, which runs one of the most safety-oriented AI labs, has partnered with Palantir to allow its models to be used for US defense and intelligence purposes, and it, too, has won a $200 million DOD contract. Meanwhile, established tech players like Amazon, Google, and Microsoft, which have long worked with the government, are now pushing AI products for defense and intelligence, despite growing protest from critics and employee activist groups.

So I wanted to get Heidy on the show to talk through this major shift in the AI industry, what's motivating it, and why she thinks some of the leading AI companies are being far too cavalier about deploying AI in high-risk scenarios. I also wanted to know what this push toward military-grade AI means for bad actors who might want to use AI systems to develop chemical, biological, radiological, and nuclear weapons, a danger that AI companies themselves say they are increasingly concerned about.

Okay: Heidy Khlaaf on AI in the military. Here we go.

If you want to read more about what we talked about in this episode, check out the links below:

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
