{"id":814889,"date":"2024-12-26T02:18:50","date_gmt":"2024-12-26T08:18:50","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2024\/12\/26\/google-gemini-everything-you-need-to-know-about-the-generative-ai-models\/"},"modified":"2024-12-26T02:18:50","modified_gmt":"2024-12-26T08:18:50","slug":"google-gemini-everything-you-need-to-know-about-the-generative-ai-models","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2024\/12\/26\/google-gemini-everything-you-need-to-know-about-the-generative-ai-models\/","title":{"rendered":"Google Gemini: Everything you need to know about the generative AI models"},"content":{"rendered":"<div>\n<p id=\"speakable-summary\">Google\u2019s trying to make waves with Gemini, its flagship suite of generative AI models, apps, and services. But what\u2019s Gemini? How can you use it? And how does it\u00a0stack up to other generative AI tools such as OpenAI\u2019s <a href=\"https:\/\/techcrunch.com\/2024\/09\/06\/chatgpt-everything-to-know-about-the-ai-chatbot\/\">ChatGPT<\/a>, Meta\u2019s <a href=\"https:\/\/techcrunch.com\/2024\/09\/08\/meta-llama-everything-you-need-to-know-about-the-open-generative-ai-model\/\">Llama<\/a>, and Microsoft\u2019s <a href=\"https:\/\/techcrunch.com\/2024\/08\/17\/microsoft-copilot-everything-you-need-to-know-about-microsofts-ai\/\">Copilot<\/a>?<\/p>\n<p>To make it easier to keep up with the latest Gemini developments, we\u2019ve put together this handy guide, which we\u2019ll keep updated as new Gemini models, features, and news about Google\u2019s plans for Gemini are released.<\/p>\n<h2 id=\"h-what-is-gemini\">What is Gemini?<\/h2>\n<p id=\"speakable-summary\">Gemini is Google\u2019s\u00a0<a href=\"https:\/\/www.wired.com\/story\/google-deepmind-demis-hassabis-chatgpt\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">long-promised<\/a>, next-gen generative AI model family. 
Developed by Google\u2019s AI research labs DeepMind and Google Research, it comes in four flavors:<\/p>\n<ul>\n<li><strong>Gemini Ultra<\/strong><\/li>\n<li><strong>Gemini Pro<\/strong><\/li>\n<li><strong>Gemini Flash<\/strong>, a speedier, \u201cdistilled\u201d version of Pro. It also comes in a slightly smaller and faster version, called Gemini Flash-8B.<\/li>\n<li><strong>Gemini Nano<\/strong>, two small models:\u00a0<strong>Nano-1<\/strong>\u00a0and the slightly more capable\u00a0<strong>Nano-2<\/strong>, which is meant to run offline<\/li>\n<\/ul>\n<p>All Gemini models were trained to be natively multimodal \u2014 that is, able to work with and analyze more than just text. Google says they were pre-trained and fine-tuned on a variety of public, proprietary, and licensed audio, images, and videos; a set of codebases; and text in different languages.<\/p>\n<p>This sets Gemini apart from models such as\u00a0<a href=\"https:\/\/techcrunch.com\/2022\/08\/25\/googles-new-app-lets-you-experimental-ai-systems-like-lamda\/\">Google\u2019s own LaMDA<\/a>, which was trained exclusively on text data. LaMDA can\u2019t understand or generate anything beyond text (e.g., essays, emails, and so on), but that isn\u2019t necessarily the case with Gemini models.<\/p>\n<p>We\u2019ll note here that the\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/06\/26\/this-week-in-ai-the-fate-of-generative-ai-is-in-the-courts-hands\/#:~:text=The%20suits%20add%20to%20the,of%20training%20if%20they%20wish.\">ethics and legality<\/a>\u00a0of training models on public data, in some cases without the data owners\u2019 knowledge or consent, are murky. 
Google has an\u00a0<a href=\"https:\/\/cloud.google.com\/blog\/products\/ai-machine-learning\/protecting-customers-with-generative-ai-indemnification\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">AI indemnification policy<\/a>\u00a0to shield certain Google Cloud customers from lawsuits should they face them, but this policy contains carve-outs. Proceed with caution \u2014 particularly if you\u2019re intending on using Gemini commercially.<\/p>\n<h2 id=\"h-what-s-the-difference-between-the-gemini-apps-and-gemini-models\">What\u2019s the difference between the Gemini apps and Gemini models?<\/h2>\n<p>Gemini is separate and distinct from the Gemini apps on the web and mobile (<a href=\"https:\/\/techcrunch.com\/tag\/bard\/\">formerly Bard<\/a>).<\/p>\n<p>The Gemini apps are clients that connect to various Gemini models and layer a chatbot-like interface on top. Think of them as front ends for Google\u2019s generative AI, analogous to\u00a0<a href=\"https:\/\/techcrunch.com\/tag\/chatgpt\/\">ChatGPT<\/a>\u00a0and Anthropic\u2019s\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/10\/anthropics-claude-sees-tepid-reception-on-ios-compared-with-chatgpts-debut\/\">Claude family of apps<\/a>.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"800\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/06\/gemini-mobile-app-google.jpg?w=680\" alt=\"Google Gemini mobile app\"  ><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Gemini on the web lives\u00a0<a rel=\"nofollow\" href=\"https:\/\/gemini.google.com\/\">here<\/a>. On Android, the\u00a0<a href=\"https:\/\/play.google.com\/store\/apps\/details?id=com.google.android.apps.bard&#038;hl=en_US\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Gemini app<\/a>\u00a0replaces the existing Google Assistant app. 
And on iOS, the\u00a0<a href=\"https:\/\/support.google.com\/gemini\/answer\/14554984?hl=en&#038;co=GENIE.Platform%3DiOS\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Google and Google Search apps<\/a>\u00a0serve as that platform\u2019s Gemini clients.<\/p>\n<p>On Android, it also recently became possible to bring up the Gemini overlay on top of any app to ask questions about what\u2019s on the screen (e.g., a YouTube video). Just press and hold a supported smartphone\u2019s power button or say, \u201cHey Google\u201d; you\u2019ll see the overlay pop up. <\/p>\n<p>Gemini apps can accept images as well as voice commands and text \u2014 including files like PDFs and soon videos, either uploaded or imported from Google Drive \u2014 and generate images. As you\u2019d expect, conversations with Gemini apps on mobile carry over to Gemini on the web and vice versa if you\u2019re signed in to the same Google Account in both places.<\/p>\n<h2 id=\"h-gemini-advanced\">Gemini Advanced<\/h2>\n<p>The Gemini apps aren\u2019t the only means of recruiting Gemini models\u2019 assistance with tasks. Slowly but surely, Gemini-imbued features are\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/06\/25\/google-brings-its-gemini-ai-to-gmail-via-a-sidebar-that-can-help-you-write-and-summarize-emails\/\">making their way<\/a>\u00a0into staple Google apps and services like Gmail and Google Docs.<\/p>\n<p>To take advantage of most of these, you\u2019ll need the Google One AI Premium Plan. Technically a part of\u00a0<a href=\"https:\/\/one.google.com\/about\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Google One<\/a>, the AI Premium Plan costs $20 per month and provides access to Gemini in Google Workspace apps like Docs, Maps, Slides, Sheets, Drive, and Meet. 
It also enables what Google calls Gemini Advanced, which brings the company\u2019s more sophisticated Gemini models to the Gemini apps.<\/p>\n<p>Gemini Advanced users get extras here and there, too, like priority access to new features, the ability to run and edit Python code directly in Gemini, and a larger \u201ccontext window.\u201d Gemini Advanced can remember the content of \u2014 and reason across \u2014 roughly 750,000 words in a conversation (or 1,500 pages of documents). That\u2019s compared to the 24,000 words (or 48 pages) the vanilla Gemini app can handle.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"1868\" height=\"884\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/07\/Screenshot-2024-07-28-at-2.53.42\u202fPM.jpg?w=680\" alt=\"Screenshot of a Google Gemini commercial\"  ><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Gemini Advanced also gives users access to Google\u2019s <a href=\"https:\/\/techcrunch.com\/2024\/12\/11\/gemini-can-now-research-deeper\/\">new Deep Research feature<\/a>, which uses \u201cadvanced reasoning\u201d and \u201clong context capabilities\u201d to generate research briefs. After you prompt the chatbot, it creates a multi-step research plan, asks you to approve it, and then Gemini takes a few minutes to search the web and generate an extensive report based on your query. 
It\u2019s meant to answer more complex questions such as, \u201cCan you help me redesign my kitchen?\u201d<\/p>\n<p>Google also offers Gemini Advanced users <a href=\"https:\/\/techcrunch.com\/2024\/11\/19\/googles-gemini-chatbot-now-has-memory\/\">a memory feature<\/a> that allows the chatbot to use your old conversations with Gemini as context for your current conversation.<\/p>\n<p>Another Gemini Advanced exclusive is trip planning in Google Search, which creates custom travel itineraries from prompts.\u00a0Taking into account things like flight times (from emails in a user\u2019s Gmail inbox), meal preferences, and information about local attractions (from Google Search and Maps data), as well as the distances between those attractions, Gemini will generate an itinerary that updates automatically to reflect any changes.\u00a0<\/p>\n<p>Gemini across Google services is also available to corporate customers through two plans, Gemini Business (an add-on for Google Workspace) and Gemini Enterprise. Gemini Business starts at $6 per user per month, while Gemini Enterprise \u2014 which adds meeting note-taking and translated captions as well as document classification and labeling \u2014 is generally more expensive, but is priced based on a business\u2019s needs. (Both plans require an annual commitment.)<\/p>\n<p>In Gmail, Gemini lives in a <a href=\"https:\/\/techcrunch.com\/2024\/08\/29\/gmail-users-on-android-can-now-chat-with-gemini-about-their-emails\/\">side panel<\/a> that can write emails and summarize message threads. You\u2019ll find the same panel in Docs, where it helps you write and refine your content and brainstorm new ideas. Gemini in Slides generates slides and custom images. 
And Gemini in Google Sheets tracks and organizes data, creating tables and formulas.<\/p>\n<p>Google\u2019s AI chatbot <a href=\"https:\/\/techcrunch.com\/2024\/10\/31\/google-maps-is-getting-new-ai-features-powered-by-gemini\/\">recently came to Maps<\/a>, where Gemini can summarize reviews about coffee shops or offer recommendations about how to spend a day visiting a foreign city.<\/p>\n<p>Gemini\u2019s reach extends to Drive as well, where it can summarize files and folders and give quick facts about a project. In Meet, meanwhile, Gemini translates captions into additional languages.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"1600\" height=\"1190\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/06\/gemini-gmail.png?w=680\" alt=\"Gemini in Gmail\"  ><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/help-me-write-chrome-gets-a-built-in-ai-writing-tool-powered-by-gemini\/\">Gemini recently came to Google\u2019s Chrome browser<\/a>\u00a0in the form of an AI writing tool. 
You can use it to write something completely new or rewrite existing text; Google says it\u2019ll consider the web page you\u2019re on to make recommendations.<\/p>\n<p>Elsewhere, you\u2019ll find hints of Gemini in Google\u2019s\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/04\/09\/googles-gemini-comes-to-databases\/\">database products<\/a>,\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/04\/09\/google-injects-generative-ai-into-its-cloud-security-tools\/\">cloud security tools<\/a>,\u00a0and <a href=\"https:\/\/techcrunch.com\/2024\/04\/08\/google-rolls-out-gemini-in-android-studio-for-coding-assistance\/\">app development platforms<\/a>\u00a0(including\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/google-launches-firebase-genkit-a-new-open-source-framework-for-building-ai-powered-apps\/\">Firebase<\/a>\u00a0and\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/project-idx-googles-next-gen-ide-is-now-in-open-beta\/\">Project IDX<\/a>), as well as in apps like\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/google-photos-introduces-an-ai-search-feature-ask-photos\/\">Google Photos<\/a>\u00a0(where Gemini handles natural language search queries), <a href=\"https:\/\/techcrunch.com\/2024\/08\/07\/youtube-is-testing-a-feature-that-lets-creators-use-google-gemini-to-brainstorm-video-ideas\/\">YouTube<\/a> (where it helps brainstorm video ideas), and the\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/06\/06\/googles-ai-powered-notebooklm-expands-to-india-uk-and-over-200-other-countries\/\">NotebookLM note-taking assistant<\/a>.<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/04\/09\/google-launches-code-assist-its-latest-challenger-to-githubs-copilot\/\">Code Assist<\/a>\u00a0(formerly\u00a0<a href=\"https:\/\/cloud.google.com\/duet-ai?hl=en\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Duet AI for Developers<\/a>), Google\u2019s suite of AI-powered assistance tools for code completion and generation, is offloading heavy 
computational lifting to Gemini. So are Google\u2019s\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/04\/09\/google-injects-generative-ai-into-its-cloud-security-tools\/\">security products underpinned by Gemini<\/a>, like\u00a0Gemini in Threat Intelligence, which can analyze large portions of potentially malicious code and let users perform natural language searches for ongoing threats or indicators of compromise.<\/p>\n<h2 id=\"h-gemini-extensions-and-gems\">Gemini extensions and Gems<\/h2>\n<p>Announced at Google I\/O 2024,\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/googles-gemini-updates-how-project-astra-is-powering-some-of-i-os-big-reveals\/\">Gemini Advanced users can create Gems<\/a>, custom chatbots powered by Gemini models. Gems can be generated from natural language descriptions \u2014 for example, \u201cYou\u2019re my running coach. Give me a daily running plan\u201d \u2014 and shared with others or kept private.<\/p>\n<p>Gems are <a href=\"https:\/\/techcrunch.com\/2024\/08\/28\/google-says-its-fixed-geminis-people-generating-feature\/\">available<\/a> on desktop and mobile in 150 countries and most languages. 
Eventually, they\u2019ll be able to tap an expanded set of integrations with Google services, including Google Calendar, Tasks, Keep, and YouTube Music, to complete custom tasks.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"450\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/08\/ezgif-3-2a05707c0a.gif?w=680\" alt=\"Gemini Gems\"><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Speaking of integrations, the Gemini apps on the web and mobile can tap into Google services via what Google calls \u201cGemini extensions.\u201d Gemini today integrates with Google Drive, Gmail, and YouTube to respond to queries such as \u201cCould you summarize my last three emails?\u201d Later this year, Gemini will be able to take additional actions with Google Calendar, Keep, Tasks, YouTube Music and Utilities, the Android-exclusive apps that control on-device features like timers and alarms, media controls, the flashlight, volume, Wi-Fi, Bluetooth, and so on.<\/p>\n<h2 id=\"h-gemini-live-in-depth-voice-chats\">Gemini Live in-depth voice chats<\/h2>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/googles-gemini-updates-how-project-astra-is-powering-some-of-i-os-big-reveals\/\">An experience called Gemini Live<\/a> allows users to have \u201cin-depth\u201d voice chats with Gemini. It\u2019s available in the Gemini apps on mobile and the <a href=\"https:\/\/techcrunch.com\/2024\/08\/13\/made-by-google-2024-all-of-googles-reveals-from-the-pixel-9-iineup-to-gemini-ais-addition-to-everything\/\">Pixel Buds Pro 2<\/a>, where it can be accessed even when your phone\u2019s locked.<\/p>\n<p>With Gemini Live enabled, you can interrupt Gemini while the chatbot\u2019s speaking (in one of several new voices) to ask a clarifying question, and it\u2019ll adapt to your speech patterns in real time. 
At some point, Gemini is supposed to gain visual understanding, allowing it to see and respond to your surroundings, either via photos or video captured by your smartphones\u2019 cameras.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"450\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/08\/ezgif-7-612379e706.gif?w=680\" alt=\"Gemini Live\"><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Live is also designed to serve as a virtual coach of sorts, helping you rehearse for events, brainstorm ideas, and so on. For instance, Live can suggest which skills to highlight in an upcoming job or internship interview, and it can give public speaking advice.<\/p>\n<p>You can read our <a href=\"https:\/\/techcrunch.com\/2024\/08\/19\/gemini-live-could-use-some-more-rehearsals\/\">review of Gemini Live here<\/a>. Spoiler alert: We think the feature has a ways to go before it\u2019s super useful \u2014 but it\u2019s early days, admittedly.<\/p>\n<h2 id=\"h-image-generation-via-imagen-3\">Image generation via Imagen 3<\/h2>\n<p>Gemini users can generate artwork and images using Google\u2019s built-in <a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/google-launches-new-video-and-image-generation-tools\/\">Imagen 3<\/a> model. <\/p>\n<p>Google says that Imagen 3 can more accurately understand the text prompts that it translates into images versus its predecessor,\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/12\/13\/google-debuts-imagen-2-with-text-and-logo-generation\/\">Imagen 2<\/a>, and is more \u201ccreative and detailed\u201d in its generations. 
In addition, the model produces fewer artifacts and visual errors (at least according to Google), and is the best Imagen model yet for rendering text.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/08\/An-animated-image-of-a-tiny-dragon-hatching-from-an-egg-in-a-sunlit-meadow-surrounded-by-curious-glowing-butterflies.-Vibrant-colors-detailed-scales.png?w=680\" alt=\"Google Imagen 3\"  ><figcaption><span>A sample from Imagen 3.<\/span><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Back in February, Google\u00a0was forced to <a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/google-gemini-image-pause-people\/\">pause<\/a>\u00a0Gemini\u2019s ability to generate images of people after users complained of\u00a0<a href=\"https:\/\/www.theguardian.com\/technology\/2024\/feb\/28\/google-chief-ai-tools-photo-diversity-offended-users\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">historical<\/a>\u00a0<a href=\"https:\/\/www.theverge.com\/2024\/2\/21\/24079371\/google-ai-gemini-generative-inaccurate-historical\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">inaccuracies<\/a>. 
But in August, the company reintroduced people generation for certain users, specifically English-language users signed up for one of Google\u2019s paid Gemini plans (e.g., <a href=\"https:\/\/techcrunch.com\/2024\/06\/28\/what-is-google-gemini-ai\/\">Gemini Advanced<\/a>) as part of a pilot program.<\/p>\n<h2 id=\"h-gemini-for-teens\">Gemini for teens<\/h2>\n<p>In June, Google introduced a teen-focused <a href=\"https:\/\/techcrunch.com\/2024\/06\/24\/google-is-bringing-gemini-access-to-teens-using-their-school-accounts\/\">Gemini experience<\/a>, allowing students to sign up via their Google Workspace for Education school accounts.<\/p>\n<p>The teen-focused Gemini has \u201cadditional policies and safeguards,\u201d including a tailored onboarding process and an \u201cAI literacy guide\u201d to (as Google phrases it) \u201chelp teens use AI responsibly.\u201d Otherwise, it\u2019s nearly identical to the standard Gemini experience, down to the \u201cdouble check\u201d feature that looks across the web to see if Gemini\u2019s responses are accurate.<\/p>\n<h2 id=\"h-gemini-in-smart-home-devices\">Gemini in smart home devices<\/h2>\n<p>A growing number of Google-made devices tap Gemini for enhanced functionality, from the <a href=\"https:\/\/techcrunch.com\/2024\/08\/06\/chromecast-is-dead-meet-google-tv-streamer\/\">Google TV Streamer<\/a> to the <a href=\"https:\/\/techcrunch.com\/2024\/08\/13\/google-gemini-is-the-pixel-9s-default-assistant\/\">Pixel 9 and 9 Pro<\/a> to the <a href=\"https:\/\/techcrunch.com\/2024\/08\/06\/after-nine-years-googles-nest-learning-thermostat-gets-an-ai-makeover\/\">newest Nest Learning Thermostat<\/a>.<\/p>\n<p>On the Google TV Streamer, Gemini uses your preferences to curate content suggestions across your subscriptions and summarize reviews and even whole seasons of TV.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"8256\" height=\"5504\" 
src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/08\/Google-TV-Streamer-set-up.jpg?w=680\" alt=\"Google TV Streamer set up\"  ><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>On the latest Nest thermostat (as well as Nest speakers, cameras, and smart displays), Gemini will soon bolster Google Assistant\u2019s conversational and analytic capabilities.<\/p>\n<p>Subscribers to Google\u2019s <a href=\"https:\/\/techcrunch.com\/2019\/10\/15\/google-overhauls-nest-aware-cloud-recording-plan\/\">Nest Aware<\/a> plan later this year will get a preview of new Gemini-powered experiences like AI descriptions for Nest camera footage, natural language video search and recommended automations. Nest cameras will understand what\u2019s happening in real-time video feeds (e.g., when a dog\u2019s digging in the garden), while the companion Google Home app will surface videos and create device automations given a description (e.g., \u201cDid the kids leave their bikes in the driveway?,\u201d \u201cHave my Nest thermostat turn on the heating when I get home from work every Tuesday\u201d).<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"2880\" height=\"1600\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/09\/Screenshot_2024-09-09_at_8.41.01a_\u00afPM-transformed.png?w=680\" alt=\"Google Gemini in smart home\"  ><figcaption><span>Gemini will soon be able to summarize security camera footage from Nest devices.<\/span><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Also later this year, Google Assistant will get a few upgrades on Nest-branded and other smart home devices to make conversations feel more natural. 
Improved voices are on the way, in addition to the ability to ask follow-up questions and \u201c[more] easily go back and forth.\u201d<\/p>\n<h2 id=\"h-what-can-the-gemini-models-do\">What can the Gemini models do?<\/h2>\n<p>Because Gemini models are multimodal, they can perform a range of multimodal tasks, from transcribing speech to captioning images and videos in real time. Many of these capabilities have reached the product stage (as alluded to in the previous section), and Google is promising much more in the not-too-distant future.<\/p>\n<p>Of course, it\u2019s a bit hard to take the company at its word. Google\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/02\/10\/google-is-losing-control\/\">seriously underdelivered<\/a>\u00a0with the original Bard launch. More recently, it ruffled feathers<a href=\"https:\/\/techcrunch.com\/2023\/12\/07\/googles-best-gemini-demo-was-faked\/\">\u00a0with a video purporting to show Gemini\u2019s capabilities<\/a>\u00a0that was more or less aspirational \u2014 not live.<\/p>\n<p>Also, Google offers no fix for some of the\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/02\/11\/googles-and-microsofts-chatbots-are-making-up-super-bowl-stats\/\">underlying problems<\/a>\u00a0with generative AI tech today, like its\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/06\/06\/study-finds-ai-models-hold-opposing-views-on-controversial-topics\/\">encoded<\/a>\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/03\/12\/google-gemini-election-related-queries\/\">biases<\/a>\u00a0and tendency to make things up (i.e.,\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/09\/04\/are-language-models-doomed-to-always-hallucinate\/\">hallucinate<\/a>). 
Neither do its rivals, but it\u2019s something to keep in mind when considering using or paying for Gemini.<\/p>\n<p>Assuming for the purposes of this article that Google is being truthful with its recent claims, here\u2019s what the different tiers of Gemini can do now and what they\u2019ll be able to do once they reach their full potential:<\/p>\n<h3 id=\"h-what-you-can-do-with-gemini-ultra\">What you can do with Gemini Ultra<\/h3>\n<p>Google says that\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/02\/08\/google-goes-all-in-on-gemini-and-launches-20-paid-tier-for-gemini-ultra\/\">Gemini Ultra<\/a>\u00a0\u2014 thanks to its multimodality \u2014 can be used to help with things like physics homework, solving problems step-by-step on a worksheet, and pointing out possible mistakes in already filled-in answers.<\/p>\n<p>Ultra can also be applied to tasks such as identifying scientific papers relevant to a problem, Google says. The model can extract information from several papers, for instance, and update a chart from one by generating the formulas necessary to re-create the chart with more timely data.<\/p>\n<p>Gemini Ultra technically supports image generation. But that capability hasn\u2019t made its way into the productized version of the model yet \u2014 perhaps because the mechanism is more complex than how apps such as ChatGPT generate images. 
Rather than feed prompts to an image generator (like\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/11\/06\/openai-launches-dall-e-3-api-new-text-to-speech-models\/\">DALL-E 3<\/a>, in ChatGPT\u2019s case), Gemini outputs images \u201cnatively,\u201d without an intermediary step.<\/p>\n<p>Ultra is available as an API through Vertex AI, Google\u2019s fully managed AI dev platform, and AI Studio, Google\u2019s web-based tool for app and platform developers.<\/p>\n<h3 id=\"h-gemini-pro-s-capabilities\">Gemini Pro\u2019s capabilities<\/h3>\n<p>Google says that Gemini Pro is an improvement over LaMDA in its reasoning, planning, and understanding capabilities. The latest version,\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/02\/15\/googles-new-gemini-model-can-analyze-an-hour-long-video-but-few-people-can-use-it\/\">Gemini 1.5 Pro<\/a> \u2014 which powers the Gemini apps for Gemini Advanced subscribers \u2014 exceeds even Ultra\u2019s performance in some areas.<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/googles-generative-ai-model-can-now-analyze-hours-of-video\/\">Gemini 1.5 Pro is improved in a number of areas<\/a>\u00a0compared with its predecessor, Gemini 1.0 Pro, perhaps most obviously in the amount of data that it can process. Gemini 1.5 Pro can take in up to 1.4 million words, two hours of video, or 22 hours of audio and can reason across or answer questions about that data (<a href=\"https:\/\/techcrunch.com\/2024\/06\/29\/geminis-data-analyzing-abilities-arent-as-good-as-google-claims\/\">more or less<\/a>).<\/p>\n<p>Gemini 1.5 Pro became generally available on <a href=\"https:\/\/techcrunch.com\/2023\/12\/13\/google-brings-gemini-pro-to-vertex-ai\/\">Vertex AI<\/a> and AI Studio in June alongside a feature called code execution, which aims to reduce bugs in code that the model generates by iteratively refining that code over several steps. 
(Code execution also supports Gemini Flash.)<\/p>\n<p>Within Vertex AI, developers can customize Gemini Pro to specific contexts and use cases via a fine-tuning or \u201cgrounding\u201d process. For example, Pro (along with other Gemini models) can be instructed to use data from third-party providers like Moody\u2019s, Thomson Reuters, ZoomInfo and MSCI, or source information from corporate datasets or Google Search instead of its wider knowledge bank. Gemini Pro can also be connected to external, third-party APIs to perform particular actions, like automating a back-office workflow.<\/p>\n<p>AI Studio offers templates for creating structured chat prompts with Pro. Developers can control the model\u2019s creative range and provide examples to give tone and style instructions \u2014 and also tune Pro\u2019s safety settings.<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/04\/09\/with-vertex-ai-agent-builder-google-cloud-aims-to-simplify-agent-creation\/\">Vertex AI Agent Builder<\/a>\u00a0lets people build Gemini-powered \u201cagents\u201d within Vertex AI. For example, a company could create an agent that analyzes previous marketing campaigns to understand a brand style and then apply that knowledge to help generate new ideas consistent with the style.\u00a0<\/p>\n<h3 id=\"h-gemini-flash-is-lighter-but-packs-a-punch\">Gemini Flash is lighter but packs a punch<\/h3>\n<p>While the first version of Gemini Flash was made for less demanding workloads, the newest version, <a href=\"https:\/\/techcrunch.com\/2024\/12\/11\/gemini-2-0-googles-newest-flagship-ai-can-generate-text-images-and-speech\/\">2.0 Flash<\/a>, is now Google\u2019s flagship AI model. Google calls Gemini 2.0 Flash its AI model for the agentic era. 
The model can natively generate images and audio, in addition to text, and can use tools like Google Search and interact with external APIs.<\/p>\n<p>The 2.0 Flash model is faster than Gemini\u2019s previous generation of models and even outperforms some of the larger Gemini 1.5 models on benchmarks measuring coding and image analysis. You can try an experimental version of 2.0 Flash in the web version of Gemini or through Google\u2019s AI developer platforms, and a production version of the model should land in January.<\/p>\n<p>The earlier Gemini 1.5 Flash, an offshoot of Gemini Pro that\u2019s small and efficient and built for narrow, high-frequency generative AI workloads, is multimodal like Gemini Pro, meaning it can analyze audio, video, images, and text (though, unlike 2.0 Flash, it can only generate text). Google says that Flash is particularly well-suited for tasks like summarization and chat apps, plus image and video captioning and data extraction from long documents and tables.<\/p>\n<p>Devs using Flash and Pro can optionally leverage context caching, which lets them store large amounts of information (e.g., a knowledge base or database of research papers) in a cache that Gemini models can quickly and relatively cheaply access. Context caching is an additional fee on top of other Gemini model usage fees, however.<\/p>\n<h3 id=\"h-gemini-nano-can-run-on-your-phone\">Gemini Nano can run on your phone<\/h3>\n<p>Gemini Nano is a much smaller version of the Gemini Pro and Ultra models, and it\u2019s efficient enough to run directly on (some) devices instead of sending the task to a server somewhere. 
So far, Nano powers a couple of features on the\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/06\/11\/googles-june-pixel-drop-brings-gemini-nano-ai-model-to-pixel-8-and-8a-users\/\">Pixel 8 Pro, Pixel 8<\/a>, Pixel 9 Pro, Pixel 9\u00a0and\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/01\/17\/samsungs-galaxy-s24-will-feature-google-gemini-powered-ai-features\/\">Samsung Galaxy S24<\/a>, including Summarize in Recorder and Smart Reply in Gboard.<\/p>\n<p>The Recorder app, which lets users push a button to record and transcribe audio, includes a Gemini-powered summary of recorded conversations, interviews, presentations, and other audio snippets. Users get summaries even if they don\u2019t have a signal or Wi-Fi connection \u2014 and in a nod to privacy, no data leaves their phone in the process.<\/p>\n<figure><img loading=\"lazy\" decoding=\"async\" width=\"507\" height=\"1080\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/06\/Pixel8Pro_Recorder-Summaries.jpg?w=319\" alt  ><figcaption><span><strong>Image Credits:<\/strong>Google<\/span><\/figcaption><\/figure>\n<p>Nano is also in Gboard, Google\u2019s keyboard replacement. There, it powers a feature called Smart Reply, which helps to suggest the next thing you\u2019ll want to say when having a conversation in a messaging app such as WhatsApp.<\/p>\n<p>In the Google Messages app on supported devices, Nano drives Magic Compose, which can craft messages in styles like \u201cexcited,\u201d \u201cformal,\u201d and \u201clyrical.\u201d<\/p>\n<p>Google says that a future version of Android will tap Nano to\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/google-will-use-gemini-to-detect-scams-during-calls\/\">alert users to potential scams during calls.<\/a>\u00a0The <a href=\"https:\/\/techcrunch.com\/2024\/08\/13\/pixel-phones-get-an-ai-powered-weather-app\/\">new weather app<\/a> on Pixel phones uses Gemini Nano to generate tailored weather reports. 
And TalkBack, Google\u2019s accessibility service, employs Nano to\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/15\/the-top-ai-announcements-from-google-i-o\/\">create aural descriptions of objects<\/a>\u00a0for low-vision and blind users.<\/p>\n<h2 id=\"h-how-much-do-the-gemini-models-cost\">How much do the Gemini models cost?<\/h2>\n<p>Gemini 1.0 Pro (the first version of Gemini Pro), 1.5 Pro, and Flash are available through Google\u2019s Gemini API for building apps and services \u2014 all with free options. But the free options impose usage limits and leave out certain features, like context caching and <a href=\"https:\/\/cloud.google.com\/vertex-ai\/generative-ai\/docs\/multimodal\/batch-prediction-gemini\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">batching<\/a>.<\/p>\n<p>Gemini models are otherwise pay-as-you-go. Here\u2019s the base pricing \u2014 not including add-ons like context caching \u2014 as of September 2024:<\/p>\n<ul>\n<li><strong>Gemini 1.0 Pro:<\/strong>\u00a050 cents per 1 million input tokens, $1.50 per 1 million output tokens<\/li>\n<li><strong>Gemini 1.5 Pro:\u00a0<\/strong>$1.25 per 1 million input tokens (for prompts up to 128K tokens) or $2.50 per 1 million input tokens (for prompts longer than 128K tokens); $5 per 1 million output tokens (for prompts up to 128K tokens) or $10 per 1 million output tokens (for prompts longer than 128K tokens)<\/li>\n<li><strong>Gemini 1.5 Flash:<\/strong>\u00a07.5 cents per 1 million input tokens (for prompts up to 128K tokens), 15 cents per 1 million input tokens (for prompts longer than 128K tokens), 30 cents per 1 million output tokens (for prompts up to 128K tokens), 60 cents per 1 million output tokens (for prompts longer than 128K tokens)<\/li>\n<li><strong>Gemini 1.5 Flash-8B:<\/strong> 3.75 cents per 1 million input tokens (for prompts up to 128K tokens), 7.5 cents per 1 million input tokens (for prompts longer than 128K tokens), 15 cents per 1 million output tokens (for prompts 
up to 128K tokens), 30 cents per 1 million output tokens (for prompts longer than 128K tokens)<\/li>\n<\/ul>\n<p>Tokens are subdivided bits of raw data, like the syllables \u201cfan,\u201d \u201ctas,\u201d and \u201ctic\u201d in the word \u201cfantastic\u201d; 1 million tokens is equivalent to about 700,000 words. <em>Input<\/em> refers to tokens fed into the model, while <em>output<\/em> refers to tokens that the model generates.<\/p>\n<p>Ultra and 2.0 Flash pricing has yet to be announced, and Nano is still in\u00a0<a href=\"https:\/\/ai.google.dev\/gemini-api\/docs\/get-started\/android_aicore\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">early access<\/a>.<\/p>\n<h2 id=\"h-what-s-the-latest-on-project-astra\">What\u2019s the latest on Project Astra?<\/h2>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/googles-gemini-updates-how-project-astra-is-powering-some-of-i-os-big-reveals\/\">Project Astra<\/a> is Google DeepMind\u2019s effort to create AI-powered apps and \u201cagents\u201d for real-time, multimodal understanding. In demos, Google has shown how the AI model can simultaneously process live video and audio. Google released an app version of Project Astra to a small number of trusted testers in December but has no plans for a broader release right now.<\/p>\n<p>The company <a href=\"https:\/\/techcrunch.com\/2024\/12\/12\/google-wants-to-sell-those-project-astra-ar-glasses-some-day-but-it-wont-be-today\/\">would like to put Project Astra in a pair of smart glasses<\/a>, and in December it gave prototype glasses equipped with Project Astra and augmented reality capabilities to a few trusted testers. However, there\u2019s no clear product yet, and it\u2019s unclear when Google would actually release one.<\/p>\n<p>Project Astra is still just that, a project, and not a product. 
However, the demos of Astra reveal what Google would like its AI products to do in the future.<\/p>\n<h2 id=\"h-is-gemini-coming-to-the-iphone\">Is Gemini coming to the iPhone?<\/h2>\n<p>It might.\u00a0<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2024\/03\/17\/apple-is-reportedly-exploring-a-partnership-with-google-for-gemini-powered-feature-on-iphones\/\">Apple has said that it\u2019s in talks to put Gemini and other third-party models to use<\/a>\u00a0for a number of features in its <a href=\"https:\/\/techcrunch.com\/2024\/09\/09\/what-is-apple-intelligence-when-is-coming-and-who-will-get-it\/\">Apple Intelligence<\/a> suite. Following a\u00a0keynote presentation at WWDC 2024, Apple SVP Craig Federighi\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/06\/10\/apple-confirms-plans-to-work-with-googles-gemini-in-the-future\/\">confirmed plans to work with third-party models<\/a>,\u00a0including Gemini, but he didn\u2019t divulge any additional details.<\/p>\n<p><em>This post was originally published February 16, 2024, and has since been updated to include new information about Gemini and Google\u2019s plans for it.<\/em><\/p>\n<\/div>\n<p> Kyle Wiggers, Maxwell Zeff<br \/><a href=\"https:\/\/techcrunch.com\/2024\/12\/12\/what-is-google-gemini-ai\/\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google\u2019s trying to make waves with Gemini, its flagship suite of generative AI models, apps, and services. But what\u2019s Gemini? How can you use it? And how does it\u00a0stack up to other generative AI tools such as OpenAI\u2019s ChatGPT, Meta\u2019s Llama, and Microsoft\u2019s Copilot? 
To make it easier to keep up with the latest Gemini<\/p>\n","protected":false},"author":1,"featured_media":814890,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1957,496],"tags":[19332,5220],"class_list":{"0":"post-814889","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-gemini","8":"category-google","9":"tag-gemini","10":"tag-google"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/814889","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=814889"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/814889\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/814890"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=814889"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=814889"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=814889"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}