Human misuse will make AI more dangerous

OpenAI CEO Sam Altman expects AGI, or artificial general intelligence (AI that outperforms humans at most tasks), around 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has said that he is "losing sleep over the threat of AI danger." Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have concluded that simply building bigger and more powerful chatbots will not lead to AGI.

However, in 2025 AI will still pose a huge risk: not from artificial superintelligence, but from human misuse.

These could be unintended misuses, such as lawyers over-relying on AI. After the launch of ChatGPT, for example, a number of lawyers were disciplined for using AI to generate erroneous court briefs, apparently unaware of chatbots' tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel's costs after she included fictitious AI-generated cases in a court filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for providing false citations. In Colorado, Zachariah Crabill was suspended for a year for citing fictitious cases generated by ChatGPT and blaming a "legal intern" for the mistakes. The list is growing fast.

Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's "Designer" AI tool. While the company had guardrails to avoid generating images of real people, misspelling Swift's name was enough to get around them. Microsoft has since fixed this error. But Taylor Swift is just the tip of the iceberg, and non-consensual deepfakes are spreading widely, in part because open-source tools for creating them are publicly available. Legislation under way around the world seeks to combat deepfakes in the hope of limiting the harm. Whether it will be effective remains to be seen.

In 2025 it will become even more difficult to distinguish what is real from what is made up. AI-generated audio, text, and images are already remarkably convincing, and video will be next. This could lead to the "liar's dividend": those in positions of power dismissing evidence of their misconduct by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk might be a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla Autopilot, leading to a crash. An Indian politician claimed that audio clips in which he admitted to corruption in his political party had been doctored (the audio in at least one of the clips was verified as genuine by the press). And two defendants in the January 6 riots claimed that the videos in which they appeared were deepfakes. Both were found guilty.

Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products under the label of "AI." This can go very wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts job candidates' suitability from video interviews, but a study found that the system can be fooled simply by wearing glasses or by swapping a plain background for a bookshelf, showing that it relies on superficial correlations.

There are also dozens of applications in healthcare, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify child welfare fraud. It wrongly accused thousands of parents, often demanding repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.

In 2025 we expect AI risks to arise not from AI acting on its own, but from what humans do with it. That includes cases where AI seems to work well and is relied on too heavily (lawyers using ChatGPT); cases where it works well and is misused (non-consensual deepfakes and the liar's dividend); and cases where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a huge task for companies, governments, and society. It will be hard enough without being distracted by science fiction.
