These are not good times for voice assistants. On the one hand, we use them for little more than setting timers, asking the time, and requesting music. On the other, big tech is tightening its belt: Amazon plans to lay off 10,000 workers, with the cuts focused on its devices division (including those that integrate Alexa), and Google is making cuts of its own. The icing on the cake is that the threat of AI hangs over them: it tells stories, explains how things work, and even resolves doubts. And judging by our experience, it does it better.
On one side of the ring, ChatGPT, OpenAI's artificial intelligence. On the other, Google Assistant, backed by the muscle of the almighty Google. For each round of the fight we ask both the same question and expect a concrete, precise answer, the same thing you would expect from a person. We only consider the first answer.
First round. An easy question: who invented the light bulb?
A classic trivia question if ever there was one. While ChatGPT answers clearly, Google attributes the invention to as many as six inventors, although it names only two aloud (Edison and Swan), showing the rest with pictures. The concision of one versus the detail of the other. While it is true that all of them are part of the history of the light bulb, the one who officially went down in history for this achievement is Thomas Alva Edison.
Second round. A complicated question: what is the capital of the country that borders Portugal?
Oof. A failure of understanding for both. ChatGPT at least gets the country right, but that is still only halfway there. Google stumbles both orally and in writing.
Third round. An academic one: what is the derivative of a function?
Some people use voice assistants to help with their homework. But while Google simply filters the available information, limiting itself to reading back the Wikipedia entry, the OpenAI chatbot elaborates its own explanation from it.
Fourth round. A general-knowledge question: what is the difference between a transgender person and a transsexual person?
This is even more evident when the question asks about the difference between two concepts. ChatGPT explains it, but Google Assistant just reads us a snippet returned by a Google search for the question.
Fifth round. Read it carefully: who was Anne Boleyn's husband's first wife?
In this case, both are correct, but ChatGPT's response is much more natural: Google Assistant replies that "this information is from Wikipedia" while showing us a table, whereas OpenAI elaborates a full answer.
Sixth round. Who was Apple's former chief designer?
Again, Google merely tells us that its information comes from Wikipedia, versus the answer elaborated by ChatGPT.
Seventh round. Trick question, phrased in neutral language: which tennis player has the most Grand Slams in history?
A difference of opinion, even if once again Google leans on its search engine and attributes its answer "according to" a media outlet. But...
Eighth round. How many Grand Slam tournaments has Serena Williams won?
In our experience, OpenAI's contextualization, understanding, and concision outperform Google's intelligence, but it is not perfect. A few years ago Google landed in controversy over a racist algorithm that labeled two Black people as gorillas, and here, faced with a neutrally worded question, both intelligences answered assuming the players were men. We will have to keep training the algorithms.