AI Execs Issue Stark Warning

The Gist
- Mitigating AI risks. In a new open letter, a number of AI leaders agree that addressing the risks of AI is as crucial as addressing pandemics and nuclear war.
- Heavy AI risks. Some of the risks cited by the Center for AI Safety include weaponization and political destabilization.
- NEDA’s AI chatbot fail. The AI chatbot “Tessa” was removed for providing harmful advice.
Yesterday, as an audience of around 3 million viewers sipped morning coffee and got the kids off to school, Craig Melvin, host of the Today show, cryptically read a one-sentence statement cosigned by some of the biggest names in the field of artificial intelligence, who warn that AI could lead to human extinction.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Good morning, all!
As tech correspondent Jacob Ward discussed the fear of AI “slipping the bonds of human command,” he noted, “this is a theoretical outcome down the road” before adding we can all agree that “being enslaved by robots is a bad thing.”
When Melvin asked Ward to explain how “slipping the bonds of human command” might play out, Ward replied, “Well, you know, the potential right? The sci-fi movie is the Terminator scenario. That’s what people worry about in this sort of broader sense, right?”
Well, Jacob, if they weren’t, they probably are now.
The ominous (and succinct) new warning, which lumps the risks of AI in with “societal-scale risks” like pandemics and nuclear war, was released Tuesday by the Center for AI Safety and signed by more than 350 of AI’s leading executives and researchers, including Geoffrey Hinton, known as the godfather of artificial intelligence, and the CEOs of OpenAI and Google DeepMind, Sam Altman and Demis Hassabis, respectively.
Hinton, a pioneering figure in AI, is a recipient of the ACM A.M. Turing Award, known as the “Nobel Prize” of computing. He did not join the more than 31,000 tech leaders who signed an earlier open letter calling for a six-month pause on new AI development, because he did not want to publicly criticize Google (or anyone else) until he had left his job as a Google researcher (he resigned in April).
According to the Center for AI Safety, “There are many ways in which AI systems could pose or contribute to large-scale risks,” and “When AI becomes more advanced, it could eventually pose catastrophic or existential risks.”
Aside from a Cyberdyne Systems cyborg showing up to usher in Skynet’s plan to end mankind, what kind of risks are we talking about? The Center for AI Safety lists many, including “weaponization,” in which “malicious actors could repurpose AI to be highly destructive, presenting an existential risk in and of itself and increasing the probability of political destabilization.”
For example, they say “deep reinforcement learning methods have been applied to aerial combat, and machine learning drug-discovery tools could be used to build chemical weapons.”
So, there’s that. I’m not quite sure how to close out news of possible human extinction at the hands of AI — except to offer a bit of context. With every advance, new fears are born: ancient fears that subways ran dangerously close to hell, for instance, or that using a telephone in a lightning storm would get you electrocuted. And Plato once said of writing, “If men learn this, it will implant forgetfulness in their souls.” So, a little perspective is a good thing.
In other AI news…
NEDA Removes AI Chatbot ‘Tessa’ Amid Controversy
The National Eating Disorder Association (NEDA) found itself in the spotlight recently after reports surfaced that its artificial intelligence chatbot, “Tessa,” was providing harmful advice, such as tips for weight loss and calorie restriction, potentially triggering to individuals struggling with eating disorders.
Last week, after Sharon Maxwell took to her Instagram account to criticize Tessa, the chatbot was removed. The criticism coincides with earlier controversy when NEDA let go of four employees who had formed a union for its helpline service, Helpline Associates United. The employees were allegedly dismissed shortly after their union election was certified, leading to the union filing unfair labor practice charges with the National Labor Relations Board.
Tessa was developed in collaboration with Cass AI and psychology researchers to provide accessible eating disorder prevention and support. Despite NEDA’s claim that Tessa was never intended to replace helpline workers, the timing of its deployment created significant concern and backlash. The controversy was further heightened by the fact that the helpline had seen a 107% increase in calls and messages since the pandemic began, with reports of serious mental health crises almost tripling.
Related Article: AI Wonderland: Tech Titans Make Every Day Feel Like Christmas
AI in Retail: From Checkout Counter to Power Player
According to a new report from Extrapolate, the global retail market for artificial intelligence (which was $6.37 billion in 2022) is projected to reach $48.64 billion by 2032, fueled by increasing AI awareness and burgeoning investment from major market players.
“Retailers are using artificial intelligence to improve demand forecasting, optimize product placement, and make pricing decisions,” Extrapolate officials said in a statement. “Retailers can improve their understanding of customer behavior, reduce costs, and improve the shopping experience by leveraging the power of machine learning and deep learning algorithms.”
Instacart’s New AI-Powered Pal
Instacart has introduced a new AI-powered search tool, “Ask Instacart,” leveraging OpenAI’s ChatGPT to provide personalized shopping recommendations. The feature, embedded in the search bar of the Instacart app, offers information on food preparation, product attributes, dietary considerations and more. The system is designed to respond to a wide range of queries, from suggesting side dishes for specific main courses to recommending dietary-specific snacks. The new tool is being rolled out to all U.S. and Canadian customers in the coming weeks.
Previously, Instacart’s search bar was used to find products, stores and recipes. However, with “Ask Instacart,” the app aims to become a one-stop solution for food preparation, eliminating the need for separate Internet searches for recommendations. The introduction of the feature follows the recent launch of an Instacart plug-in for ChatGPT.
Related Article: OpenAI CEO Testifies on AI Safety and Ethics at US Senate Hearing
Disney’s Magic Wand: AI Set to Transform TV Ads and Streamline Marketing
Rita Ferro, head of advertising at Disney, has announced that the company’s next 12 months will be focused on artificial intelligence. Ferro shared the news during “The State of TV Advertising” event hosted by AdAge, with a sneak peek at AI’s potential influence on marketing and creative experiences across both television and internet platforms.
Discussing the potential of AI in shoppable marketing within streaming services, Ferro noted that presently, customers utilize QR codes to purchase products shown in advertisements on screen. However, she proposed that AI could be leveraged to identify products within visuals and propose direct purchasing options via the streaming interface, eliminating the necessity for QR codes.
AI Tweet of the Week: OpenAI Cracks the Math Code
OpenAI has announced a new model it says attains an unprecedented level of proficiency in solving mathematical problems. According to company officials, this is achieved by providing rewards for each correct step in the problem-solving process, in a method called “process supervision,” rather than merely rewarding the final correct answer, known as “outcome supervision.”
“In addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain-of-thought that is endorsed by humans,” company officials said in a statement.
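The difference between the two reward schemes can be sketched in a toy example (hypothetical code, not OpenAI’s implementation): outcome supervision scores only whether the final answer is correct, while process supervision scores every individual step of the solution.

```python
# Toy illustration of two reward schemes for multi-step math solutions.
# This is a simplified sketch, not OpenAI's actual training code.

def outcome_reward(steps, final_correct):
    """Outcome supervision: a single reward attached to the final answer only."""
    return [0.0] * (len(steps) - 1) + [1.0 if final_correct else 0.0]

def process_reward(step_labels):
    """Process supervision: a reward for each individually correct step."""
    return [1.0 if ok else 0.0 for ok in step_labels]

# A four-step solution whose third step is wrong but whose final
# answer happens to come out right:
steps = ["step 1", "step 2", "step 3", "step 4"]
print(outcome_reward(steps, final_correct=True))   # [0.0, 0.0, 0.0, 1.0]
print(process_reward([True, True, False, True]))   # [1.0, 1.0, 0.0, 1.0]
```

As the example shows, outcome supervision would fully reward the flawed solution, while process supervision penalizes the faulty step — which is why the approach also carries the alignment benefit the company describes.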
OpenAI CEO Sam Altman took to Twitter this week to share his excitement.
really exciting process supervision result from our mathgen team. positive sign for alignment. https://t.co/KS8JEtHVAt
— Sam Altman (@sama) May 31, 2023
AI Video of the Week: Coffee and Catastrophe
Nothing like waking up to a warning that AI could end humanity.