
Google I/O: Google is all in on generative AI

5-10-2023 – Google’s annual developer conference, Google I/O, kicked off with the 10 AM keynote. Google I/O is a highly anticipated event where tech enthusiasts, software developers, and innovators gather to learn what Google has in store for the future of technology.

This year there were a few leaks surrounding the Pixel Fold and PaLM 2, but everyone expected the focus to be on AI. The event features a range of interesting sessions and multiple keynotes beyond the overall presentation given by Sundar Pichai; the opening keynote is a good way to gauge the company’s focus.

Sundar Pichai gives opening remarks at Google’s annual 2023 developer conference, Google I/O

This year, the first words out of Google’s CEO’s mouth outlined three new AI features across three of its most popular products:

  1. The upcoming rollout of “Compose” for Gmail. Compose lets you ask for a specific email reply and have a full email drafted for you; the example given was automatically asking an airline for a refund. The demo was not live but was very impressive, showing how Google can expand on Smart Compose, its existing autocomplete and autosuggest feature.
  2. Immersive View for routes in Google Maps. Google is combining AI with its Street View imagery to give an accurate feel for a particular route; the feature will initially be available in 15 cities. Google imagines people using it as part of their typical planning process, helping them navigate or weigh other aspects of a route.
  3. Google Photos is getting smarter with some help from AI. Google announced several new editing features coming to Photos, including Magic Eraser and Magic Editor, which will bring powerful photo editing to Google apps and phones with a single click.

Besides these core AI feature updates to its existing products, Google announced several more standalone AI products and features:

PaLM 2 & Bard

PaLM 2 will be available in Google Cloud in several model sizes, allowing developers to easily deploy large language models in different contexts; it is currently in preview with select customers. Google has also developed specialized versions for security scanning, medical subject matter expertise (focused on scans and testing), and coding.

Notable features of PaLM 2 and Bard:

  1. It will be able to run offline and in smaller model sizes – the smallest, “Gecko”, is designed for mobile devices.
  2. It is multilingual, though for now it can only take commands in English, Japanese, and Korean; the short-term goal is 40 languages.
  3. PaLM 2 readily takes in context and adapts. Google allows feedback loops to be incorporated, and the model can use your proprietary data.
  4. Bard now runs entirely on PaLM 2, and Bard itself is leaving preview to become publicly available in 180 countries.
  5. Bard now has a dark theme – this got the loudest cheers of the opening.
  6. Bard can respond with more than text, including tables and images. Bard’s coding chops looked pretty incredible in the non-live demos.
  7. You can export anything from Bard directly to Sheets or Docs, and, in theory, export code directly to Replit and publish to the cloud in just a few clicks.
  8. Google announced a partnership with Adobe to bring the Firefly image generation engine to Bard.

Sidekick is Google’s answer to Microsoft’s Copilot

Google announced Sidekick, which works in Google Workspace to proactively offer suggestions, including recommendations on what you can ask for help with. Sidekick pops out from a side panel and can help you compose a document, do research, or even build tables.

Microsoft should have undoubtedly brought back Clippy as the face of Copilot, and Google can do better than “Sidekick”.

Search is going conversational

The biggest changes to search concern search reformulations and the number of searches that generate zero click-throughs. The previews at the conference were notably limited and did not even show where advertising spots would appear – odd, given that advertising is the product’s core revenue model. Two things are clear, though:

  1. Google wants you to search more complex questions and be able to refine your searches without starting over. For example, instead of searching for “good summer activities to do with kids” and then separately “good summer activities to do with people who have limited mobility”, Google wants you to search “where should I go to get outside in the summer with my kids and my parents who cannot walk that well”.
  2. Google is not holding back answers: instead of short knowledge snippets, the new version of search takes over the page to help people immediately get the answer they want. Google is also adding more conversational and feedback elements to improve answers over time. While Google’s answers cite sources, it seems likely – especially once ads are in the mix – that a smaller share of traffic will leave Google’s search results pages for organic destinations.

Google Labs

Google announced that Google Labs is “launching” (or “relaunching”) to the public and will serve as the place to sign up for experiences that are in preview. This seems small but important: generative AI companies have benefited from dedicated tester groups and effective feedback loops more than the industry initially expected.



Lucas Barnes

Lucas Barnes writes opinion and covers news for Culturalist Press on technology and politics. Lucas has a BA in History from the University of San Diego and has worked in the technology industry for over a decade.