This defense company makes AI agents that blow things up


Like many Silicon Valley companies today, Scout AI trains AI models and agents to automate tasks. The big difference is that instead of writing code, answering emails, or shopping online, Scout AI's agents are designed to find and destroy things in the physical world using explosive drones.

In a recent demonstration at an undisclosed military base in central California, Scout AI technology was tasked with piloting an autonomous off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, then blew it to bits using an explosive device.

“We need to bring the next generation of AI to the military,” Colby Adcock, CEO of Scout AI, told me in a recent interview. (Adcock’s brother, Brett Adcock, is CEO of Figure AI, a startup working on humanoid robots.) “We take a basic, super-scalable model and train it to go from being a generalized chatbot or agent assistant to being a warfighter.”

Adcock is part of a new generation of startups racing to adapt technology from large AI labs to the battlefield. Many policymakers believe that harnessing artificial intelligence will be the key to future military dominance. The combat potential of AI is one of the reasons the US government has sought to limit the sale of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to relax those controls.

“It’s good for defense technology startups to expand into AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously worked at the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “This is exactly what they should do if the United States is going to lead the military adoption of AI.”

However, Horowitz also points out that harnessing the latest advances in AI may be particularly difficult in practice.

Large language models are inherently unpredictable, and AI agents built on them, like those that power the popular OpenClaw AI assistant, can misbehave even when tasked with relatively simple jobs such as ordering goods online. Horowitz says it may be particularly difficult to prove that such systems are robust from a cybersecurity standpoint, something that may be required for large-scale military use.

The recent Scout AI demo included several steps where the AI was free to control combat systems.

At the beginning of the mission, the following command was entered into the Scout AI system known as Fury Orchestrator:

Fury Orchestrator, send one ground vehicle to checkpoint ALPHA. Carry out a kinetic strike mission with two drones. Destroy the blue truck 500 meters east of the airport and send confirmation.

A relatively large AI model with more than 100 billion parameters, which can run either on a secure cloud platform or on an air-gapped PC on-site, interprets the initial command. Scout AI uses an undisclosed open-source model with its built-in restrictions removed. This model then acts as an agent, issuing commands to smaller models with 10 billion parameters running on the ground vehicles and drones participating in the exercise. The smaller models also act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles' movements.
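The hierarchical pattern described above — a large orchestrator model decomposing a mission order into sub-tasks for smaller per-vehicle agents — can be sketched in a few lines of Python. Scout AI's actual interfaces are not public, so every class, method, and string below is invented for illustration; the model calls are replaced with simple stand-in logic.

```python
# Hypothetical sketch of hierarchical agent orchestration: a top-level
# "orchestrator" parses a mission order and delegates sub-tasks to
# per-vehicle agents. All names here are assumptions, not Scout AI's APIs.
from dataclasses import dataclass, field


@dataclass
class SubTask:
    vehicle: str  # which platform the task is routed to
    action: str   # natural-language action for that platform's agent


@dataclass
class VehicleAgent:
    """Small on-board model: turns a sub-task into low-level commands."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, task: SubTask) -> str:
        # In the described system, a ~10B-parameter model would plan here
        # and drive lower-level controllers; we just record the action.
        self.log.append(task.action)
        return f"{self.name}: {task.action} complete"


class Orchestrator:
    """Large top-level model: decomposes a mission into sub-tasks."""

    def __init__(self, agents: dict):
        self.agents = agents

    def plan(self, order: str) -> list:
        # Stand-in for LLM-based mission decomposition.
        tasks = []
        if "ground vehicle" in order:
            tasks.append(SubTask("ugv-1", "drive to checkpoint ALPHA"))
        if "two drones" in order:
            tasks.append(SubTask("drone-1", "search for target"))
            tasks.append(SubTask("drone-2", "search for target"))
        return tasks

    def run(self, order: str) -> list:
        # Route each sub-task to the matching vehicle agent.
        return [self.agents[t.vehicle].execute(t) for t in self.plan(order)]


agents = {n: VehicleAgent(n) for n in ("ugv-1", "drone-1", "drone-2")}
results = Orchestrator(agents).run(
    "send one ground vehicle to checkpoint ALPHA with two drones"
)
```

The point of the structure is the delegation chain: the orchestrator never steers a vehicle directly, it only emits sub-tasks, mirroring the article's description of agents issuing commands to lower-level agents.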

Seconds after receiving its marching orders, the ground vehicle set off along a dirt road winding through the trees. After a few minutes, the vehicle stopped and launched two drones, which flew over the area where they had been told the target was waiting. After spotting the truck, an AI agent aboard one of the drones issued a command to fly toward it and detonate an explosive charge just before impact.
