A few days ago, Google launched Gemini, replacing Google Assistant in some markets with its new AI. The company, which has described itself as an artificial intelligence company for years, has decided to demonstrate that to its users. This is the evolution of the December announcement, when Google set out to catch up with its rivals with an AI that promises to be both very versatile and very powerful.
2023 was a year clearly dominated by artificial intelligence. OpenAI opened a new path in computing at a pace rarely seen, reminiscent of the moment when cell phones became smartphones. Companies like Microsoft rushed to embrace this new reality; Google was slow to respond, but when it did, it responded forcefully.
The launch of Gemini represents a huge step forward for a company that not only builds high-end language models but is also starting to offer them as services to consumers. A good example is the new Google One tier that includes access to the most powerful version of the new AI, Gemini Ultra, for $19.99 per month, in direct competition with the paid version of OpenAI's chatbot.
Additionally, Gemini is available as an Android mobile app, just like ChatGPT, and in some countries it is even starting to be able to replace the Google Assistant voice interface on mobile phones. Google's commitment to Gemini is clear and forceful, and it gives us a glimpse of the immediate future of the company led by Sundar Pichai.
What is Gemini?
Gemini is a generative artificial intelligence system trained to respond to requests that can arrive in different forms. It was designed to be multimodal, meaning it is not limited to understanding written text. Although not many tools use these functions yet, Gemini is able to understand spoken language and respond in kind.
Additionally, it is possible to ask questions about images uploaded to its servers, as well as to ask it to create new images from prompts, in the same way Midjourney or DALL-E work. However, these features are not yet fully rolled out, although Google has started deploying them.
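For readers curious about what such a multimodal request looks like in practice, here is a minimal sketch using Google's google-generativeai Python client; the image file, prompt and environment variable are placeholder assumptions, and image generation is left out since that feature is still being rolled out.

```python
# Minimal sketch: asking a question about a local image through the Gemini API.
# Assumes `pip install google-generativeai pillow` and an API key from
# Google AI Studio stored in the GOOGLE_API_KEY environment variable.
import os

import google.generativeai as genai
import PIL.Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-pro-vision" is the multimodal (text + image) model exposed via the API.
model = genai.GenerativeModel("gemini-pro-vision")

image = PIL.Image.open("photo.jpg")  # placeholder path to any local photo
response = model.generate_content(["What objects appear in this photo?", image])
print(response.text)
```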
What can you do with Gemini?
This large language model is able to understand and generate text, and do the same with audio and images. The catch is that there is a gap between what it can do technically and what current interfaces (Bard, the mobile app, etc.) allow. For now, the most important way to use Gemini is through Bard, Google's supercharged chatbot, which is free to use.
During the launch of Gemini, Google showed a video in which the AI answered questions about objects that a person placed on a table in front of a camera. It soon emerged that the video had been edited: Gemini could do this, but only through still images and text responses, something equally impressive but less spectacular.
However, this does not seem impossible in the short or medium term. Google could build Gemini into the Android camera app to identify objects, as some apps already do with other AI systems. It could also create a version of the assistant that works as a voice chat, even if it is not integrated with home automation. At the rate this sector is moving, we won't reach 2025 without seeing a big leap forward in Gemini.
How many models are there?
The first thing you need to know is that Gemini is not a single product as such, but three different ones. The reason is that Google knows that not all devices have the same computing power and therefore cannot all perform the same tasks. That is why the company has developed three versions of Gemini, depending on how they will be used and, above all, where they will run.
- Gemini Nano: the most efficient model, created to perform tasks on the device itself.
- Gemini Pro: a powerful model that also prioritizes speed.
- Gemini Ultra: the largest and most powerful model, which handles the most complex tasks.
Where will Gemini be available?
The answer to this question is continually evolving, because Google has gone from not wanting to launch AI tools aimed at the general public to the radical opposite. Initially, Gemini will be available in web applications, on physical devices, and integrated into products that have already launched.
For example, Gemini Pro has been available for a few days in the version of Bard operating in 170 countries. The Nano version can also be used on the Pixel 8 series and the Samsung Galaxy S24 series. Finally, Google has announced that Gemini will be integrated into its applications, such as Chrome, within a few months. The Ultra version, for now, is only accessible in testing to a small group of developers.
Is Bard the same as Gemini?
Gemini was born as a large language model, one of several products developed by Google. It is the most powerful, but it is not designed for the end user to interact with directly. For that, interfaces are built, which can be integrated into other applications, such as Google Photos or Chrome, or created expressly for this purpose.
The latter is the case with Bard, a text interface through which a person can use the Gemini models. Specifically, it currently runs on Gemini Pro, though that may change in the future. Of course, a few days ago Google decided to rename Bard to Gemini, so the name now refers both to the language model and to the interface users interact with.
Does it replace Google Assistant?
Gemini is much more powerful today than Google Assistant. Its capabilities are far more advanced, although the assistant still has integrations with home automation brands that, for now, Gemini lacks. Something similar happens with ChatGPT, which can converse much more fluently and accurately than Google Assistant or Alexa but cannot completely replace them.
However, Google has already announced that Gemini will be able to replace Google Assistant on mobile phones. For now, this change will only happen in certain markets, and European ones are not among them. In the countries where the change rolls out, an app is launched to use Gemini on its own, with text or images. This app will be able to manage calls and schedule alerts, and in the future it will also integrate with other applications. Of course, this new version will only be available on mobile phones, while the familiar Google Assistant will remain on smart speakers. Apparently, the company will continue to maintain the assistant, but will devote most of its efforts to Gemini.
Will developers be able to use it?
Logically, Google wants developers to use its language model rather than those of the competition. To that end, it has given them access to Gemini Pro via Google AI Studio, a free web tool for developers that enables prototyping and building applications with the Gemini API.
Google will also bring the AI to the cloud via Vertex AI, a platform that adds data controls and additional Google Cloud features focused on enterprise security, privacy and so on. Finally, Android app developers will be able to use Gemini Nano, the model that runs directly on mobile devices.
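As a rough idea of what prototyping with the Gemini API involves, here is a minimal text-only sketch with the same google-generativeai Python client; the prompt is an arbitrary example and the key again comes from Google AI Studio.

```python
# Minimal sketch: generating text with Gemini Pro through the Gemini API.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio
# stored in the GOOGLE_API_KEY environment variable.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-pro" is the text model Google makes available to developers via the API.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Summarize the differences between Gemini Nano, Pro and Ultra in two sentences."
)
print(response.text)
```

Teams that need enterprise-grade controls can reach the same models through Vertex AI on Google Cloud instead.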
Is it free?
Currently, in Europe, the ways to use Gemini (mobile apps, websites, etc.) are free. However, Google recently announced Gemini Advanced in the United States, available only in English: a paid version that gives access to Ultra 1.0, the most advanced engine of this artificial intelligence.
The price will be $19.99 per month, with a two-month free trial. The Google One plan that includes it will be called the Google One AI Premium Plan and will also come with 2 TB of storage across Gmail, Drive and Photos, advanced image editing features and unlimited access to Google Meet.