Generative AI Platforms: Navigating the Truth

The Gist

  • Search shift. The search engines that originally set out to organize the world’s information and make it universally accessible and useful may be drifting into the information interpretation business.
  • Bias overcompensation. The recent, highly publicized breakdown of Google’s Gemini platform illustrates how generative AI platforms are beginning to overcompensate for biases inherited from their data and algorithms.
  • AI challenge. AI bias will remain a challenge for generative AI platform providers and may require rethinking the model to provide citations and parameterization that give users more transparency.
  • Truth ambassadors. There’s a tremendous opportunity for generative AI builders to become ambassadors of the truth by ensuring high-quality data sources and increasing transparency by providing citations to source information.

There’s a concerning shift occurring in the information platforms we use every day. The search engines that originally set out to organize the world’s information and make it universally accessible and useful may be drifting into the information interpretation business. This trend is particularly evident in generative AI platforms, which are increasingly influencing how information is presented and understood.

What Has Happened to Search?

This shift is becoming even more evident in the AI age, where generative AI platforms appear to be creating an interpretation layer between our prompts and the underlying data sources in order to filter responses. The motivation is supposedly to “protect” users from biases, controversial content and inaccuracies.

However, these corrections may in fact introduce new biases and inaccuracies that reflect the biases of the organizations building the platforms. There have recently been highly publicized examples of generative AI platforms “over-correcting” their responses to compensate for anticipated biases in the training data.

This is a very dangerous line to cross.

The original search service indexed massive amounts of online content and offered an intuitive, easy-to-use interface for information retrieval. It essentially provided direct access to others’ information, ranked by relevance, without any interpretation.

What has emerged is an information interpretation service. This is a very different product and paradigm from the original mission. These generative AI platforms are now acting as an aggregation, synthesis, filtering and interpretation layer. While this may be useful on some levels, it has obvious implications.
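
To make the shift concrete, here is a minimal sketch in Python of the difference between the two paradigms. Everything in it is hypothetical: the index, the guideline list and the function names are placeholders for illustration, not any vendor’s actual pipeline. Classic retrieval returns ranked source links directly, while a generative interpretation layer aggregates, filters and synthesizes before the user sees anything.

```python
# Minimal illustration of "retrieval" vs. an "interpretation layer".
# Everything here is hypothetical: the index, the guidelines and the
# function names are placeholders, not any real platform's internals.

def retrieve_sources(prompt: str) -> list[str]:
    """Classic search: return the raw documents ranked by relevance."""
    index = {
        "founding fathers": ["history_archive_01.html", "history_archive_02.html"],
        "search engines": ["web_history_overview.html"],
    }
    return [doc for topic, docs in index.items() if topic in prompt.lower() for doc in docs]

def interpretation_layer(prompt: str, sources: list[str]) -> str:
    """Generative AI behavior: aggregate, synthesize and filter the sources.
    Whatever guidelines are applied here are invisible to the user, which is
    exactly where unintended bias or over-correction can creep in."""
    guidelines = ["avoid controversy", "rebalance representation"]  # opaque to the user
    return f"Synthesized answer from {len(sources)} sources, adjusted per {guidelines}"

prompt = "Tell me about the founding fathers"
links = retrieve_sources(prompt)              # the user can inspect these directly
answer = interpretation_layer(prompt, links)  # the user sees only this synthesis
print(links)
print(answer)
```

The point is not the toy logic but where the judgment happens: in the first function the user decides what is relevant and true; in the second, the platform decides on the user’s behalf.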

The fundamental question is how do we get the truth?

Related Article: Overcoming AI Bias in CX With Latimer

Continued Challenges for the Google AI Team

When OpenAI launched ChatGPT back in November 2022, Google was widely criticized for being sidelined in the AI race. The expectation was that Google, with access to much of the world’s information and search data, was positioned to be a prime leader in that race.

The justification was that Google had superior technology but was overly cautious about its business, reputation and customer relationships, actively weighing how any potential backlash could damage its brand.

But then Google appeared to be back on the AI path, first releasing Bard and then rebranding its entire AI suite as Gemini. However, just as things seemed to be moving along, disaster struck.

Gemini began to produce controversial answers to basic questions: for instance, generating pictures of America’s founding fathers and the Pope with genders and ethnicities that are historically inaccurate. Other responses raised further concerns, such as describing India’s Prime Minister Modi as a “fascist” and drawing comparisons between Elon Musk and Hitler. This set off a string of accusations calling Gemini “ultra-woke.”

Google’s CEO Sundar Pichai responded in an internal email calling the Gemini responses “completely unacceptable” and promised that Gemini would be rolled back out only after a clear set of corrective actions. The company had to pause parts of the platform to deal with the concerns. The fixes are expected to involve structural changes, updated product guidelines, improved launch procedures, better testing and red-teaming.

The Gemini controversy quickly hit Alphabet’s valuation, wiping out more than $90 billion in market value as shares fell roughly 4%.

Related Article: 4 Tips for Taming the Bias in Artificial Intelligence

Generative AI Platforms: Information Filtering Was Never a Goal

The concept of organizing complex information and providing access through a centralized retrieval system is not new.

In 1945, Vannevar Bush, an American engineer, described an information retrieval system that would allow a user to access a great expanse of information from a single desk. The system, called the memex, was first described in an Atlantic Monthly article titled “As We May Think.” The memex was intended to help users overcome the ever-increasing difficulty of locating information in ever-growing centralized indices of scientific work.

The first search engine was actually not created by Google. It was called Archie, and it was created in 1990 by Alan Emtage, a student at McGill University in Montreal. Rather than crawling the web, Archie indexed FTP sites, making it the first tool to index online content.

Early Internet location services like WHOIS were also leveraged as search and retrieval platforms for users, servers and even information. Some of these services predated the debut of the Web in December 1990, with WHOIS dating back to 1982.

It’s remarkable to note that prior to September 1993, the World Wide Web was indexed entirely by hand: Tim Berners-Lee manually maintained the list of web servers.

Google arrived on the scene in the late 1990s and revolutionized how search ranking worked, competing with the era’s proliferation of search engines such as Yahoo!, AltaVista, Excite, Lycos and Microsoft’s MSN. Many of those search platforms have since been acquired or have vanished, but Google still maintains more than 80% of the overall search market share.

Search bias has certainly crept in over time, motivated by commercial, political and social influences. For instance, the ability to purchase certain search keywords to improve placement biases the native search results, and a search engine’s decision not to index certain politically motivated sites restricts access to information.

The important point to note is that none of these platforms were initially designed with the intent of filtering or interpreting information. Access to “raw” information and the ability to search was paramount. The goal was to present the most relevant information to a user based on the search ranking algorithms. The user was then empowered to navigate the search list and select which links appeared most relevant to their search intent.
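
A minimal sketch of both points above, using made-up documents, URLs and boost values and assuming a naive keyword-count relevance model: the original paradigm simply ranks documents by relevance and hands the user the list, while a purchased keyword boost quietly reorders the same results.

```python
# Toy ranking example: relevance-only ordering vs. a paid keyword boost.
# Documents, URLs and boost values are made up for illustration.

def keyword_score(query: str, text: str) -> int:
    """Naive relevance: count how many query terms appear in the document."""
    return sum(term in text.lower() for term in query.lower().split())

def rank(query: str, docs: dict[str, str], paid_boost: dict[str, int] | None = None) -> list[str]:
    """Return URLs ordered by relevance score plus any purchased boost."""
    paid_boost = paid_boost or {}
    scored = {url: keyword_score(query, text) + paid_boost.get(url, 0)
              for url, text in docs.items()}
    return sorted(scored, key=scored.get, reverse=True)

docs = {
    "https://example.org/founding-history": "founding fathers constitution history archive",
    "https://example.com/sponsored-books":  "buy founding era history books and deals",
}

print(rank("founding fathers history", docs))                                               # relevance only
print(rank("founding fathers history", docs, {"https://example.com/sponsored-books": 5}))   # paid bias
```

In the first call the user still decides which link to open; the second call shows how a commercial signal can silently change what appears first, even though the underlying documents have not changed at all.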

Related Article: Addressing AI Bias: A Proposed 8th Principle for ‘Privacy by Design’
