Google’s Android Auto Is Bringing Gemini To Your Vehicle

Ahead of its 2025 I/O developer conference, Google announced during The Android Show that it will bring its generative AI, Gemini, to all vehicles that support Android Auto in the coming months.

In a blog post, the company says that driving will become “more productive — and fun” with the addition of Gemini to Android Auto and, later this year, to vehicles running Google’s integrated operating system, Google Built-In.

In a virtual briefing with reporters prior to the conference, Patrick Brady, vice president of Android for Cars, stated, “We believe that this is going to be one of the biggest changes in the in-vehicle experience that we’ve seen in a very, very long time.”

There are two primary ways that Gemini will appear in the Android Auto experience.

First, Gemini will function as a far more capable smart voice assistant. Drivers will be able to ask Gemini to play music, send texts, and perform essentially all of the functions Google Assistant already handles. The difference is that Gemini’s natural-language skills will let users give those commands without having to be so robotic.

Along with handling translation for the user, Gemini may also “remember” details such as whether a contact prefers to receive text messages in a specific language. Gemini will also be able to find good restaurants along a planned route, one of the most frequently shown in-car tech demos. Naturally, Gemini will be able to dig through Google listings and reviews to answer more targeted queries, Brady said.

The other primary way Gemini will appear is through a feature Google calls “Gemini Live,” in which the AI is effectively always listening and ready to have in-depth conversations about … anything. Brady said those conversations could range from “Roman history” to spring-break travel plans to brainstorming meals a 10-year-old would enjoy.

If any of that sounds a little bothersome, Google doesn’t think so, Brady said. He asserted that Gemini’s natural-language capabilities will “reduce cognitive load,” making it simpler to tell Android Auto to perform specific tasks with less fuss.

It’s a daring claim to make at a time when consumers are demanding that automakers abandon touchscreens and return to physical knobs and buttons, a demand that many of those manufacturers are beginning to heed.

Many details are still being worked out. For now, Gemini will rely on Google’s cloud to function, both in cars with Google Built-In and in those running Android Auto. However, Brady said Google is collaborating with automakers “to build in more compute so that [Gemini] can run at the edge.” That would improve performance as well as reliability, which is difficult to guarantee in a moving vehicle that may be connecting to different cell towers every few minutes.

Modern cars also generate a great deal of data from onboard sensors and, in some models, exterior and interior cameras. Asked whether Gemini might make use of that multi-modal data, Brady said “we’ve been talking about that a lot” but that Google has “nothing to announce.”

He stated, “We definitely think there’s some really, really interesting use cases in the future here as cars have more and more cameras.”

With support for more than 40 languages, Gemini on Android Auto and Google Built-In will be available in all countries where the company’s generative AI model is already offered.
