With Musk in tow, Trump eyes changes to government policies on AI and its dangers

London CNN  — 

Donald Trump is poised to enter the White House for the second time. His agenda will include overseeing the development of artificial intelligence, potentially the most powerful technology of our time.

The president-elect has promised to “slash excess regulations” and tapped tech billionaire Elon Musk, another critic of government rules, to help lead the effort. More specifically, the Republican Party, in its election platform, said it would repeal a sweeping executive order signed by President Joe Biden that set out actions to manage AI’s national security risks and prevent discrimination by AI systems, among other goals.

The Republican document said the executive order contained “radical leftwing ideas” that hindered innovation.

Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute at Oxford University, is watching what happens next closely. AI is replete with risks that “needed addressing yesterday” through robust regulation, she told CNN.

Here are some of the dangers of unrestricted AI.

Discrimination

For years, AI systems have demonstrated their ability to reproduce society’s biases — for example, about race and gender — because those systems are trained on data about past actions by humans, many of whom hold these biases. When AI is used to decide whom to hire or whether to approve a mortgage, the result can often be discriminatory.

“Bias is inherent in those technologies because they look at historical data to try to predict the future… they learn who has been hired in the past, who has gone to prison in the past,” said Wachter. “And so, very often and almost always, those decisions are biased.”

Without solid guardrails, she added, “those problematic decisions of the past will be transported into the future.”

The use of AI in predictive law enforcement is one example, said Andrew Strait, an associate director at the Ada Lovelace Institute, a London-based non-profit researching AI safety and ethics.

Some police departments in the United States have used AI-powered software trained on historical crime data to predict where future crimes are likely to occur, he noted. Because this data often reflects the over-policing of certain communities, Strait said, the predictions based on it cause police to focus their attention on those same communities and report more crimes there. Meanwhile, other areas with potentially the same or higher levels of crime are policed less.
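
To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from the article or from any real department’s software; the district names, crime rates and patrol shares are invented. Two districts have identical underlying crime rates, but because offenses are only recorded where officers patrol, a model that simply allocates the next patrols in proportion to past reports never corrects the initial skew.

# Toy illustration (hypothetical figures): "historical crime data" here means
# crimes that were recorded, which depends on where police already patrol.
TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}  # identical true rates
patrol_share = {"district_a": 0.70, "district_b": 0.30}     # historically skewed patrols
POPULATION = 10_000

for round_num in range(1, 6):
    # Offenses only enter the data set when officers are present to record them.
    reported = {
        district: int(POPULATION * TRUE_CRIME_RATE[district] * patrol_share[district])
        for district in patrol_share
    }
    # Naive "predictive" step: send the next round of patrols in proportion
    # to past reports, a stand-in for the software described above.
    total = sum(reported.values())
    patrol_share = {district: reported[district] / total for district in reported}
    rounded = {district: round(share, 2) for district, share in patrol_share.items()}
    print(f"round {round_num}: reports={reported}, next patrol share={rounded}")

Running this prints the same 70/30 split every round: even though the two districts’ true crime rates are equal, district_a keeps generating more reports simply because more officers are there to record them, and the model keeps sending them back.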

Disinformation

AI is capable of generating misleading images, audio and videos that can be used to make it look like a person did or said something they didn’t. That, in turn, may be used to sway elections or create fake pornographic images to harass people, among other potential abuses.

AI-generated images circulated widely on social media ahead of the US presidential election earlier this month, including fake images of Kamala Harris, re-posted by Musk himself.

In May, the US Department of Homeland Security said in a bulletin distributed to state and local officials, and seen by CNN, that AI would likely provide foreign operatives and domestic extremists “enhanced opportunities for interference” during the election.

And in January, more than 20,000 people in New Hampshire received a robocall — an automated message played over the phone — that used AI to impersonate Biden’s voice, advising them against voting in the presidential primary race. Steve Kramer, who worked for the longshot Democratic primary campaign of Rep. Dean Phillips against Biden, admitted he was behind the robocalls. Phillips’ campaign denied having any role in them.

In the past year, too, targets of AI-generated, nonconsensual pornographic images have ranged from prominent women like Taylor Swift and Rep. Alexandria Ocasio-Cortez to girls in high school.

Dangerous misuse and existential risk

AI researchers and industry players have highlighted even greater risks posed by the technology, ranging from chatbots like ChatGPT giving users easy access to comprehensive information on how to commit crimes, such as exporting weapons to sanctioned countries, to AI systems breaking free of human control.

“You can use AI to build very sophisticated cyber attacks, you can automate hacking, you can actually make an autonomous weapon system that can cause harm to the world,” Manoj Chaudhary, chief technology officer at Jitterbit, a US software firm, told CNN.

In March, a report commissioned by the US State Department warned of “catastrophic” national security risks presented by rapidly evolving AI, calling for “emergency” regulatory safeguards alongside other measures. The most advanced AI systems could, in the worst case, “pose an extinction-level threat to the human species,” the report said.

A related document said AI systems could be used to implement “high-impact cyberattacks capable of crippling critical infrastructure,” among a litany of risks.

What’s next?

In addition to the executive order, the Biden administration last year secured pledges from 15 leading tech companies to bolster the safety of their AI systems, though the commitments are entirely voluntary.

And Democrat-led states like Colorado and New York have passed their own AI laws. New York City, for example, requires companies that use AI to help screen job candidates to have the tool independently audited for bias.

A “patchwork of (US AI regulation) is developing, but it’s very fragmented and not very comprehensive,” said Strait at the Ada Lovelace Institute.

It’s “too soon to be sure” whether the incoming Trump administration will expand those rules or roll them back, he noted.

However, he worries that a repeal of Biden’s executive order would spell the end of the US government’s AI Safety Institute. The order created that “incredibly important institution,” Strait told CNN, tasking it with scrutinizing risks emerging from cutting-edge AI models before they are released to the public.

It’s possible that Musk will push for tighter regulation of AI, as he has done previously. He is set to play a prominent role in the next administration as the co-lead of a new “Department of Government Efficiency,” or DOGE.

Musk has repeatedly expressed his fear that AI poses an existential threat to humanity, even though one of his firms, xAI, is itself developing a generative AI chatbot.

Musk was “a very big proponent” of a now-scrapped bill in California, Strait noted. The bill was aimed at preventing some of the most catastrophic consequences of AI, such as those from systems with the potential to become uncontrollable. Gavin Newsom, the Democratic governor of California, vetoed the bill in September, citing the threat it posed to innovation.

Musk is “very concerned about (the) catastrophic risk of AI. It is possible that that would be the subject of a future Trump executive order,” said Strait.

But Trump’s inner circle is not limited to Musk and includes JD Vance. The incoming vice-president said in July that he was worried about “pre-emptive overregulation attempts” in AI, as they would “entrench the tech incumbents that we already have and make it actually harder for new entrants to create the innovation that’s going to power the next generation of American growth.”

Musk’s Tesla (TSLA) can be described as one of those tech incumbents. Last year Musk razzle-dazzled investors with talk of Tesla’s investment in AI and, in its latest earnings release, the company said it remained focused on “making critical investments in AI projects” among other priorities.
