In the @techpolicypress article “‘AI’ Hurts Consumers and Workers – and Isn’t Intelligent”, @alexhanna and @emilymbender make the following arguments against generative AI:
- Generative AI is bad for consumers because it creates lower quality products/services that are erroneous and expose consumers to implicit racism, sexism, bias, and misinformation.
- Inferior generative AI-based products will create a 2-tier society, where the Haves will hire actual humans, while the “Have Nots” will be subjected to those inferior generative AI-based products and services.
- Generative AI cannot complete the tasks set out for it; its output needs to be verified by a human. This will lead to mass layoffs, with former employees rehired as contractors to verify the output and actions of generative AI.
Their clarion call: investors and businesses should resist the generative AI hype, and the government should seek to regulate generative AI.
We at Gleen disagree. Here’s why.
Good Chatbots Shouldn’t Hallucinate
Hanna & Bender use the word “erroneous,” but what they really mean is that generative AI chatbots hallucinate: they make things up and can produce responses that are factually incorrect. That’s because large language models (LLMs) work entirely by predicting the next most likely word in a sentence.
While we agree that hallucination is a problem endemic to LLMs, it’s important to draw a distinction between LLMs – like GPT-4, Claude, and Llama – and chatbots (like ChatGPT and Gleen AI). A chatbot is an application that can use an LLM to generate responses. But a well-designed and well-implemented chatbot can and should be free of hallucination.
Take Gleen AI for example. Gleen AI is a chatbot system designed specifically to prevent hallucination and maximize the accuracy and relevance of every generative AI response. Gleen AI does utilize LLMs, but LLMs only make up 20% of the Gleen AI tech stack. The remaining 80% of the stack is focused on carefully managing the inputs to and outputs from the LLM to avoid hallucination and ensure relevance.
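To make the idea of “managing the inputs to and outputs from the LLM” concrete, here is a minimal sketch of a retrieval-grounded pipeline. This is an illustration of the general technique, not Gleen’s actual implementation: the chatbot only hands the LLM curated context, and when no relevant context exists, it refuses rather than letting the LLM guess.

```python
import re
from dataclasses import dataclass

@dataclass
class Document:
    text: str

def words(s: str) -> set[str]:
    """Lowercase word set, used for a toy relevance measure."""
    return set(re.findall(r"\w+", s.lower()))

def relevance(query: str, doc: Document) -> float:
    """Toy relevance score: fraction of query words present in the document."""
    q = words(query)
    return len(q & words(doc.text)) / max(len(q), 1)

def answer(query: str, knowledge_base: list[Document], llm,
           min_relevance: float = 0.5) -> str:
    """Answer only from retrieved context; refuse when nothing relevant exists."""
    best = max(knowledge_base, key=lambda d: relevance(query, d), default=None)
    if best is None or relevance(query, best) < min_relevance:
        return "I don't have that information."  # a fallback beats a hallucinated guess
    # The LLM is constrained to the curated context, not the open internet.
    prompt = f"Answer using ONLY this context:\n{best.text}\n\nQuestion: {query}"
    return llm(prompt)

kb = [Document("The product processes 1,200 transactions per second.")]
echo_llm = lambda p: p.split("context:\n")[1].split("\n")[0]  # stand-in for a real LLM
print(answer("How many transactions per second?", kb, echo_llm))
# → The product processes 1,200 transactions per second.
print(answer("What is the capital of France?", kb, echo_llm))
# → I don't have that information.
```

In a production system the toy word-overlap score would be replaced by embedding-based retrieval, but the architectural point is the same: the application around the LLM, not the LLM itself, decides what the model is allowed to say.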
Good Chatbots Should Be Free from Bias & Misinformation
As the old saying goes in AI: garbage in, garbage out. Because LLMs consume large swathes of content from the public internet, and the public internet can be biased – e.g., sexist, racist, xenophobic, hetero-normative, etc. – LLMs and generative AI can expose unwitting consumers to bias and misinformation, according to Hanna & Bender.
Again, we need to draw a distinction between LLMs and well-designed chatbots. LLMs may in fact expose people to bias, e.g., if the majority of the content consumed by an LLM is sexist, then there is a greater likelihood that the next word predicted by an LLM will be sexist.
A well-designed chatbot, however, should not use the entire internet as its knowledge base, but only a carefully curated one. As long as that knowledge base does not contain biases or misinformation, a well-designed chatbot’s responses should be free from bias and misinformation.
For example, suppose an enterprise wants to build a chatbot to support its products and services. That chatbot should only consume documents about those products and services. A well-designed chatbot will reflect only the biases and “misinformation” contained in those documents, no more, no less – e.g., if the product documentation says the product can process 1,200 transactions per second, the chatbot should say the same, regardless of whether that figure is accurate.
Similarly, if the product documents fed to a chatbot are free of gender, racial, or religious bias, its answers should be free of such bias as well.
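The curation step above can be sketched in a few lines. This is a hypothetical illustration (the source allowlist and document format are assumptions, not any specific product’s API): only documents from approved sources ever enter the chatbot’s knowledge base, so open-web content never reaches the model.

```python
# Assumed allowlist of trusted documentation sources.
ALLOWED_SOURCES = {"docs.example.com", "help.example.com"}

def curate(documents: list[dict]) -> list[str]:
    """Keep only documents from approved sources; the chatbot never sees the rest."""
    return [d["text"] for d in documents if d["source"] in ALLOWED_SOURCES]

docs = [
    {"source": "docs.example.com", "text": "The product handles 1,200 transactions per second."},
    {"source": "random-forum.example", "text": "Unvetted claim scraped from the open web."},
]
print(curate(docs))
# → ['The product handles 1,200 transactions per second.']
```

Real pipelines would add review and moderation steps on top of source filtering, but the principle holds: the knowledge base, not the LLM’s training data, determines what the chatbot can assert.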
A 2-Tier Society of Haves and Have Nots Is Overly Dystopian
The argument that generative AI will create a 2-tier society – where the have nots must deal with inferior generative AI-based products and services while the haves hire live humans – rests on 2 fundamental assumptions:
1. Generative AI will lead only to inferior products and services; and
2. Humans will always outperform generative AI.
First, we at Gleen believe that generative AI will lead to superior, not inferior, products and services. We’ve already argued above that generative AI (specifically, delivered through chatbots) can be free from error and bias. In addition, generative AI chatbots can be available 24/7/365, can chat in a hundred different languages, and can perform some tasks that humans currently perform.
In customer support especially, generative AI can dramatically decrease time to first response and, in many or most cases, dramatically decrease time to resolution as well.
Second, humans will not always outperform generative AI. If you hire a live personal assistant, you’re subjecting yourself to that person’s biases (both conscious and unconscious), and you’re also subjecting yourself to that person’s error rate and speed of execution. No human is perfect. Furthermore, no human can process information as fast as a machine.
Wait time matters. In the healthcare industry, for example, time spent waiting for an appointment negatively impacts not only patient satisfaction and perceived quality of care, but also actual health outcomes: 30% of patients have left before seeing a doctor because of excessive wait times.
Everyone hates wasting time. In the customer support world, faster response times invariably lead to more satisfied customers and superior products and services.
Generative AI as a Complement to Human Work
Rather than replacing human workers, generative AI can and should be viewed as a tool to augment human capabilities. Generative AI can automate routine and mundane tasks, freeing up human workers to engage in more complex, creative, and meaningful tasks. With appropriate training and upskilling, workers can leverage generative AI technologies to increase their productivity and job satisfaction.
It’s important to think of generative AI not as a threat, but as a partner in the evolution of work, which can enhance human potential and the human condition.
In other words, whom would you rather have as a personal assistant:
- A generative AI; or
- A live human; or
- A live human that’s proficient at using generative AI?
We’re Gleen, and our mission is to delight our customers’ customers.