2024 showed that it is indeed possible to rein in AI
Almost all the big news about AI this year was about how fast the technology is developing, the damage it's causing, and speculation about how soon it will advance past the point where humans can control it. But in 2024, governments also made significant progress in regulating algorithmic systems. Here's a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and international levels.
State
US state legislatures took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, like establishing study committees, while others would have imposed serious civil liability on AI developers in the event their creations cause catastrophic harm to society. The majority of the bills failed to pass, but several states enacted meaningful laws that could serve as models for other states or for Congress (assuming Congress ever gets its act together).
As AI-generated content flooded social media ahead of the election, politicians in both parties rallied behind anti-deepfake laws. More than 20 states now have bans on deceptive AI-generated political ads in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.
Not surprisingly, given that it's the tech industry's backyard, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damage caused by their systems. The bill passed both houses of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.
Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills bans the distribution of an AI-generated likeness of a dead person without prior consent and mandates that agreements for AI-generated likenesses of living people clearly state how the content will be used.
Colorado passed the first law of its kind in the US, requiring companies that develop and use AI systems to take reasonable steps to ensure the tools aren't discriminatory. Consumer advocates called the legislation an important baseline. Similar bills are likely to be hotly debated in other states in 2025.
And in a middle finger to both our future robot overlords and the planet, Utah enacted a law prohibiting any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.
Federal
Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers accomplished very little.
Federal agencies, on the other hand, spent the year busily working toward the goals laid out in President Joe Biden's 2023 executive order on AI. And several regulators, most notably the Federal Trade Commission and the Department of Justice, cracked down on misleading and harmful AI systems.
The agencies' work to comply with the AI executive order wasn't particularly sexy or headline-grabbing, but it laid important foundations for governing public and private AI systems going forward. For example, federal agencies began hiring AI talent and developed standards for responsible model development and harm mitigation.
And in a big step toward increasing the public's understanding of how the government uses AI, the Office of Management and Budget required (most of) its fellow agencies to disclose critical information about the AI systems they use that could affect people's rights and safety.
On the enforcement side, the FTC launched Operation AI Comply, targeting companies that use AI in deceptive ways, such as writing fake reviews or dispensing legal advice, and it sanctioned AI weapons-detection company Evolv for making misleading claims about what its product can do. The agency also settled an investigation into facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of race and gender bias, and barred drugstore chain Rite Aid from using facial recognition for five years after an investigation found the company had used the tools to discriminate against shoppers.
Meanwhile, the DOJ joined state attorneys general in a lawsuit accusing real estate software company RealPage of running a massive algorithmic price-fixing scheme that raised rents across the country. It also won several antitrust lawsuits against Google, including one finding that the company holds a monopoly on internet search, a ruling that could significantly shift the balance of power in the burgeoning AI search industry.
Global
In August, the European Union's AI Act entered into force. The law, which already serves as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and to meet certain standards for training data quality and human oversight. It also bans certain other AI systems outright, such as algorithms that could be used to assign social scores to a country's residents that are then used to deny them rights and privileges.
In September, China released a major AI safety governance framework. Like similar frameworks published by the US National Institute of Standards and Technology, it is non-binding but creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.
One of the most interesting pieces of AI legislation comes from Brazil. At the end of 2024, the country's Senate passed a comprehensive AI safety bill. It faces a challenging road ahead, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted works were included in their training data, and creators would have the power to bar their work from being used to train AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.
Like the EU's AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.