While efforts to regulate the creation and use of artificial intelligence (AI) tools in the United States have been slow to make gains, President Joe Biden’s administration has sought to outline how AI should be used by the federal government and how AI companies should ensure the safety and security of their tools.
The incoming Trump administration, however, has a very different view on how to approach AI and could end up reversing some of the progress that has been made in recent years.
President Biden signed an executive order in October 2023 aimed at promoting the “safe, secure, and trustworthy development and use of artificial intelligence” within the federal government. President-elect Donald Trump has vowed to repeal that executive order, saying it hinders innovation.
Biden also got seven leading AI companies to agree to voluntary guidelines for how AI should be safely developed going forward. Beyond that, there are no federal regulations that specifically address AI, and experts say the Trump administration will likely take a more hands-off approach to the industry.
“I think the biggest thing we’re going to see is the massive repeal of the kind of initial steps that the Biden administration has taken toward meaningful regulation of AI,” says Cody Venzke, a senior policy adviser in the ACLU’s National Political Advocacy Department. “I think there’s a real threat that we’re going to see AI growth without significant guardrails, and it’s going to be a bit of a free-for-all.”
Unbridled growth is what the industry has seen so far, and it has led to something of a wild west in AI. That has caused problems, including the spread of deepfake pornography and political deepfakes, with no lawmakers restricting how the technology can be used.
A major concern of the Biden administration, and of those in the tech policy space, has been how generative artificial intelligence can be used to run disinformation campaigns, including deepfakes: fraudulent videos that show people saying or doing things they never did. This type of content can be used to try to influence election results. Venzke says he doesn’t expect the Trump administration to focus on preventing the spread of misinformation.
Venzke says AI regulation isn’t necessarily a major focus for the Trump administration, but it is on its radar. This week, Trump picked Andrew Ferguson to head the Federal Trade Commission (FTC), and Ferguson will likely roll back industry regulation.
Ferguson, an FTC commissioner, has said he will aim to “end the FTC’s attempt to become an AI regulator,” Punchbowl News reported, and has argued that the FTC, an independent agency accountable to the US Congress, should be fully accountable to the Oval Office. He has also suggested that the FTC investigate companies that refuse to advertise alongside hateful and extremist content on social media platforms.
Venzke says Republicans believe Democrats want to regulate AI to make it “woke” — meaning it would acknowledge things like the existence of transgender people or human-caused climate change.
However, artificial intelligence doesn’t just answer questions and generate images and videos. Kit Walsh, director of AI and access-to-knowledge legal projects at the Electronic Frontier Foundation, tells Al Jazeera that AI is being used in many ways that threaten people’s individual liberties, including in court cases, and that it must be regulated to prevent harm.
While people may assume that computerised decision-making eliminates bias, it can actually perpetuate bias if the AI is built on historical data that is itself biased. For example, an AI system created to determine who gets parole could draw on data from cases in which Black Americans received harsher treatment than white Americans.
“The biggest issues in AI right now are its use to inform decisions about people’s rights,” says Walsh. “This covers everything from predictive policing to deciding who gets government housing or health benefits. It’s also the private use of algorithmic decision-making for hiring and firing or housing and so on.”
Walsh says there is a lot of “technological optimism and solutionism” among some of the people Trump is recruiting for his administration, and that they may end up trying to use AI to promote “government efficiency”.
This is the stated goal of people like Elon Musk and Vivek Ramaswamy, who will head what appears to be an advisory committee called the Department of Government Efficiency.
“It’s true that you can save money and lay off some employees if you’re okay with the less precise decisions that come with AI tools. And that might be a path someone takes in the interest of reducing government spending. But I would not recommend it, because it will hurt people who rely on government agencies for essential services,” Walsh says.
If Trump’s first term as US president, from 2017 to 2021, is any indication of what to expect, his administration will likely spend far more time on deregulation than on creating new regulations. That includes regulations related to the creation and use of AI tools.
“I would like to see reasonable regulation that paves the way for socially responsible development, deployment and use of AI,” says Shyam Sundar, director of the Penn State Center for Socially Responsible Artificial Intelligence. “At the same time, regulation should not be so tough that it stifles innovation.”
Sundar says the “new revolution” brought about by generative AI has created “a bit of a Wild West mentality among technologists.” Future regulations, he says, should focus on creating guardrails where necessary and promoting innovation in areas where AI can be useful.