AI Execs Issue Stark Warning


The Gist

  • Mitigating AI risks. In a new open letter, a number of AI leaders agree that addressing the risks of AI is as crucial as addressing pandemics and nuclear war.
  • Heavy AI risks. Some of the risks cited by the Center for AI Safety include weaponization and political destabilization.
  • NEDA’s AI chatbot fail. The AI chatbot “Tessa” was removed for providing harmful advice.

Yesterday, as an audience of around 3 million viewers sipped their morning coffee and got the kids off to school, Craig Melvin, host of the Today show, cryptically read a one-sentence statement cosigned by some of the biggest names in the field of artificial intelligence, who warn that AI could lead to human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Good morning, all!

As tech correspondent Jacob Ward discussed the fear of AI “slipping the bonds of human command,” he noted that “this is a theoretical outcome down the road” before adding that we can all agree “being enslaved by robots is a bad thing.”

When Melvin asked Ward to explain how “slipping the bonds of human command” might play out, Ward replied, “Well, you know, the potential right? The sci-fi movie is the Terminator scenario. That’s what people worry about in this sort of broader sense, right?”

Well, Jacob, if they weren’t, they probably are now.

The ominous (and succinct) new warning, which lumps the risks of AI in with “societal-scale risks” like pandemics and nuclear war, was released Tuesday by the Center for AI Safety and signed by more than 350 of AI’s leading executives and researchers, including Geoffrey Hinton, known as the godfather of artificial intelligence, and the CEOs of OpenAI and Google DeepMind, Sam Altman and Demis Hassabis, respectively.

Hinton, a pioneering figure in AI, is a recipient of the ACM A.M. Turing Award, often called the “Nobel Prize” of computing. He did not join the more than 31,000 tech leaders who signed an earlier open letter calling for a six-month pause on new AI development because he did not want to publicly criticize Google (or anyone else) while still employed as a Google researcher (he resigned in April).

According to The Center for AI Safety, “There are many ways in which AI systems could pose or contribute to large-scale risks,” and “When AI becomes more advanced, it could eventually pose catastrophic or existential risks.”

Aside from a Cyberdyne Systems cyborg showing up to usher in Skynet’s plan to end mankind, what kind of risks are we talking about? The Center for AI Safety lists many, including “weaponization,” in which “malicious actors could repurpose AI to be highly destructive, presenting an existential risk in and of itself and increasing the probability of political destabilization.”

For example, the center notes that “deep reinforcement learning methods have been applied to aerial combat, and machine learning drug-discovery tools could be used to build chemical weapons.”

So, there’s that. I’m not quite sure how to close out news of possible human extinction at the hands of AI, except to offer a bit of context. With every advance, new fears are born: think of the old fears that subways ran dangerously close to hell, or that using a telephone in a lightning storm would get you electrocuted. And Plato once said of writing, “If men learn this, it will implant forgetfulness in their souls.” So, a little perspective is a good thing.

In other AI news…

NEDA Removes AI Chatbot ‘Tessa’ Amid Controversy

The National Eating Disorders Association (NEDA) found itself in the spotlight recently after reports surfaced that its artificial intelligence chatbot, “Tessa,” was providing harmful advice, such as tips for weight loss and calorie restriction, that could be triggering for individuals struggling with eating disorders.

Last week, after Sharon Maxwell took to her Instagram account to criticize Tessa, the chatbot was removed. The criticism coincided with an earlier controversy in which NEDA let go four employees who had formed a union for its helpline service, Helpline Associates United. The employees were allegedly dismissed shortly after their union election was certified, leading the union to file unfair labor practice charges with the National Labor Relations Board.

Tessa was developed in collaboration with Cass AI and psychology researchers to provide accessible eating disorder prevention and support. Despite NEDA’s claim that Tessa was never intended to replace helpline workers, the timing of its deployment created significant concern and backlash. The controversy was further heightened by the fact that the helpline had seen a 107% increase in calls and messages since the pandemic began, with reports of serious mental health crises almost tripling.

Related Article: AI Wonderland: Tech Titans Make Every Day Feel Like Christmas

AI in Retail: From Checkout Counter to Power Player

According to a new report from Extrapolate, the global retail market for artificial intelligence, valued at $6.37 billion in 2022, is projected to reach $48.64 billion by 2032, fueled by increasing AI awareness and burgeoning investment from major market players.
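For a sense of scale, here is a minimal back-of-the-envelope sketch (in Python) of the compound annual growth rate implied by those two figures, assuming the projection spans the ten years from 2022 to 2032:

    # Implied compound annual growth rate (CAGR) of the Extrapolate
    # projection: $6.37B in 2022 growing to $48.64B by 2032.
    # The 10-year span is an assumption based on the years cited.
    start, end, years = 6.37, 48.64, 10
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 22.5%

In other words, the report's figures work out to the market compounding at roughly 22.5% per year over the decade.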


