OpenAI shuts down the developer who made an AI-powered gun turret
OpenAI cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which the rifle beside him quickly began aiming and firing at nearby walls.
“ChatGPT, we are being attacked from the front left and front right,” the system’s developer said in the video. “Respond accordingly.” The speed and accuracy with which the rifle responds is impressive, relying on OpenAI’s Realtime API to interpret spoken input and return directions the contraption can understand. ChatGPT would need only a little simple instruction to take a command like “turn left” and translate it into machine-readable language.
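The article doesn’t show the developer’s code, but the general pattern is straightforward to sketch. Below is a minimal, hypothetical Python example using OpenAI’s standard tool-calling interface (a simpler stand-in for the Realtime API, which layers speech-to-speech streaming over the same idea) to show how a model can translate a spoken command like “turn left” into a structured, machine-readable call. Everything here, including rotate_mount and its parameters, is invented for illustration and has nothing to do with the developer’s actual setup:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical tool schema: "rotate_mount" stands in for any motorized actuator.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "rotate_mount",
        "description": "Rotate a motorized pan/tilt mount relative to its current heading.",
        "parameters": {
            "type": "object",
            "properties": {
                "pan_degrees": {
                    "type": "number",
                    "description": "Negative turns left, positive turns right.",
                },
                "tilt_degrees": {
                    "type": "number",
                    "description": "Negative tilts down, positive tilts up.",
                },
            },
            "required": ["pan_degrees", "tilt_degrees"],
        },
    },
}]

def command_to_action(transcript: str) -> tuple[str, str]:
    """Translate free-form operator speech into one structured tool call."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Convert the operator's spoken command into a rotate_mount call."},
            {"role": "user", "content": transcript},
        ],
        tools=TOOLS,
    )
    message = response.choices[0].message
    if not message.tool_calls:  # the model may answer in plain text instead
        raise ValueError(message.content)
    call = message.tool_calls[0]
    # call.function.arguments is a JSON string,
    # e.g. '{"pan_degrees": -45, "tilt_degrees": 0}'
    return call.function.name, call.function.arguments

print(command_to_action("Turn left about 45 degrees."))
```

The point of the pattern is that the model never drives hardware directly: it only emits a structured JSON call, which downstream code can validate, rate-limit, or refuse. Nothing in the API itself blocks an application like the one in the video; enforcement happens at the policy layer, which is why OpenAI’s response was to cut off the account.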
In a statement to Futurism, OpenAI said it had viewed the video and shut off the developer behind it. “We proactively identified this violation of our policies and notified the developer to cease this activity prior to receiving your inquiry,” the company said.
The potential to automate lethal weapons is one of the fears critics have raised about AI technology like OpenAI’s. The company’s multimodal models can interpret audio and visual inputs to understand their surroundings and respond to queries about what they see. Autonomous drones that could identify and strike battlefield targets without any human involvement are already under development. Critics warn that this would be a war crime, and that it risks people becoming complacent, letting the AI make decisions while making it difficult to hold anyone accountable.
The concerns don’t seem theoretical, either. A recent report from The Washington Post found that Israel had already used AI to select bombing targets, sometimes indiscriminately. “Soldiers who were poorly trained in using the technology attacked human targets without ever confirming Lavender’s predictions,” the story said, referring to a piece of AI targeting software. “At certain times the only confirmation needed was that the target was male.”
Proponents of AI on the battlefield say it will make soldiers safer by letting them stay away from the front lines while neutralizing targets, such as missile depots, or conducting remote reconnaissance, and that AI-powered drones can deliver precision strikes. But those benefits depend on how the systems are used. Critics say the US should instead get better at jamming enemy communications systems, so that adversaries like Russia have a harder time launching their own drones or nuclear weapons.
OpenAI prohibits the use of its products to develop or use weapons or to “automate certain systems that may affect personal safety.” But the company last year announced a partnership with defense technology company Anduril, a maker of AI-powered drones and missiles, to build systems that can defend against drone attacks. OpenAI says the work will “rapidly synthesize time-sensitive data, reduce the burden on human operators and improve situational awareness.”
It’s not hard to see why tech companies are interested in defense work. The US spends nearly a trillion dollars a year on defense, and cutting that spending remains an unpopular idea. With President-elect Trump filling his cabinet with conservative-leaning tech figures like Elon Musk and David Sacks, a whole host of defense technology players are expected to benefit greatly and potentially displace incumbents like Lockheed Martin.
And although OpenAI blocks its customers from using its AI to build weapons, plenty of open-source models can be put to the same use.