Everything Google announced at its Google I/O keynote
Onstage at Google I/O in Mountain View, CEO Sundar Pichai announced that the company surpassed 2 billion monthly active users on the Android platform earlier this week, continuing its reign as the world's most popular mobile operating system.
The company has added nearly 400 million users to its mobile operating system since September 2015, when it last gave an update. By comparison, Apple announced in January of last year that 1 billion devices were running iOS.
Pichai also detailed how quickly the company has grown its Google Photos platform. Google has been tweaking the service constantly and is continuing to see major traffic on it. The product now has over 500 million monthly active users, who upload 1.2 billion photos to the service every day.
Many of today's numbers were in the billions: Google currently has seven unique products with over 1 billion monthly active users each.
The company announced several new efforts. Google.ai is a spinoff division encompassing machine learning systems, research tools and applied AI to inform all of its work, including an AI that can build other AIs. Google Lens is a new recognition engine that enables intelligent mixed reality, performing text and object recognition and feeding the results into other apps to act upon, such as pointing the camera at a router's serial number to automatically pull up related links.
And lastly, it announced Google Jobs, a platform that applies the company's contextual intelligence to make it easier to find the jobs you're looking for.
On the stage of Google I/O, CEO Sundar Pichai announced Google.ai, a new initiative to democratize the benefits of the latest in machine learning research. Google.ai will serve as a center of Google’s AI efforts — including research, tools and applied AI.
The new site will host research from Google and its Brain Team. It also lets anyone quickly access fun experiments that highlight the company's progress in the field. These include AutoDraw, which makes it possible for unskilled artists to put their ideas on paper; Duet, which can play along with piano players; and Quick, Draw!, a game where an AI tries to guess your drawings. A selection of videos and posts about Google's AI-first efforts also lives on the site.
Google's TensorFlow has played a pivotal role in making machine learning accessible to a greater number of developers. But new research comes out of universities and private research labs every day, and Google wants to help make that accessible too.
Google announced the next generation of its custom Tensor Processing Unit (TPU) machine learning chips at Google I/O today. These chips, which are designed specifically to speed up machine learning tasks, are supposed to be more capable than CPUs or even GPUs at these workloads, and are an upgrade from the first generation the company announced at last year's I/O.
And speed up they have: Google claims each second-generation TPU can deliver up to 180 teraflops of performance. We will have to wait and see what real-world benchmarks look like, but the chips are a step forward in more than raw speed. The first-generation TPU could only handle inference; the new one can also be used for training machine learning models, bringing a significant part of the machine learning workflow onto a single, powerful chip.
Google's Scott Huffman said that 70 percent of Google Assistant requests are already in natural language, not the typical keyword queries you'd usually use in Google Search.
Huffman noted that the Assistant will get more conversational in the coming months, letting you have conversations about the things you see, for example. It will be integrated with the new Google Lens and that product's built-in image recognition technology, so you'll be able to use the Assistant to easily talk about things around you. That capability won't roll out for a few months, though.
Google Assistant is considered more powerful than the current version of Siri: it lets you ask more complicated queries, has third-party integrations and lets you control your connected devices, and the company just announced new partnerships with third-party companies.
Also new today, you can now type your queries instead of speaking out loud. This could be useful if you have a burning question and somebody is sleeping next to you.
Google took a lot of time to address how it was shaping notifications and responses on Home to add greater flexibility and utility to voice interactions.
Proactive Assistance lets Google Home talk to you and give you updates without you having to prompt it. Have an event in your calendar and traffic is getting bad? Home can let you know that you might need to leave a bit earlier. Home will also surface reminders on its own, without you having to ask whether there's anything it needs to remind you of.
Perhaps one of the most interesting evolutions to come to Home was the addition of Visual Responses to the platform. Voice assistants can't do everything, and sometimes a picture is worth a thousand words. If the information Google Home needs to convey would be better expressed on a screen, you can throw it onto your phone or onto a TV with Chromecast support.
The company also announced that it will offer photo books and will integrate Google Lens into Google Photos.
Suggested Sharing is a lot like the functionality today included in Facebook’s standalone private sharing app, Facebook Moments.
The Google Photos app will identify who from your Google Contacts is in your photo, and then give you a nudge to share your photos with them from the app.
Google’s Anil Sabharwal showed how this feature would work during an onstage demo at Google’s developer conference, Google I/O.
It's already been available for a few months as a developer preview, but now the rest of us can finally get our hands on the upcoming version of Android. The Android O Beta starts shipping today; point your browser to android.com/beta.
The company used the opportunity to show a number of features that have already been available in the developer preview. Notifications have gotten a number of key updates, including the addition of Notification Dots, a little circle that sits in the corner of an app icon to let users know that app has a new notification tied to it. A long press pops up a preview window, similar to iOS, so users never have to leave the home screen to view it.
Android O will also boot faster, deliver better battery life and be more secure.
Google announced that it is making Kotlin, a statically typed programming language for the Java Virtual Machine, a first-class language for writing Android apps. Kotlin’s primary sponsor is JetBrains, the company behind tools like IntelliJ. It’s 100 percent interoperable with Java, which until now was Google’s primary language for writing Android apps (besides C++).
The company also said today that it will launch a foundation for Kotlin (together with JetBrains). JetBrains open-sourced Kotlin back in 2012, and version 1.0 launched just over a year ago. Google's own Android Studio, it's worth noting, is based on JetBrains' IntelliJ Java IDE, and the next version of Android Studio (3.0) will support Kotlin out of the box.
Because Kotlin is interoperable with Java, you could already write Android apps in the language, but now Google is putting its weight behind it. Kotlin includes support for a number of features that Java itself doesn't currently offer.
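To give a sense of what that means in practice, here is a minimal sketch of a few well-known Kotlin features that plain Java lacks, such as data classes, null-safe types and string templates. The `User` and `greeting` names are illustrative, not from any announced API.

```kotlin
// A data class auto-generates equals(), hashCode(), toString() and copy(),
// which in Java would require dozens of lines of boilerplate.
// The String? type marks email as nullable; a plain String could never be null.
data class User(val name: String, val email: String? = null)

fun greeting(user: User): String =
    // The Elvis operator (?:) supplies a fallback when email is null,
    // and "${...}" is a string template, replacing Java-style concatenation.
    "Hello, ${user.name} <${user.email ?: "no email"}>"

fun main() {
    println(greeting(User("Ada", "ada@example.com")))  // Hello, Ada <ada@example.com>
    println(greeting(User("Alan")))                    // Hello, Alan <no email>
}
```

Because Kotlin compiles to the same JVM bytecode as Java, a class like `User` can be used directly from existing Java code in the same Android project.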
Google noted in a later keynote that this is only an additional language, not a replacement for its existing Java and C++ support.
It's worth noting that the Kotlin announcement garnered what was likely the loudest applause of Google's I/O keynote today.
There are 2 billion Android devices currently in use around the world. Google is now thinking about the next 2 billion devices. In order to do this, Google has a new project called Android Go. It’s a lightweight version of the upcoming version of Android (Android O) with optimized apps and Play Store.
Google is focusing on devices with very low specs, users with limited connectivity and multilingual users. Android Go can run on devices with less than 1GB of memory, and the Play Store will highlight apps that run well on these cheap devices.
These apps should be less than 10MB, work well when you’re not connected to the internet and support devices with slow systems-on-a-chip and little RAM.
Sameer Samat talked about Chrome’s data saver as an essential feature to load more pages with a minimal amount of cell data. But the company doesn’t plan to stop there.
Google has announced it’s working on a service to offer detailed indoor location positioning using its Tango 3D sensing computer vision tech.
“One thing we’ve seen clearly is that AR is most powerful when it’s tightly coupled to the real world, and the more precisely the better,” said Clay Bavor, speaking at Google’s I/O conference today. “That’s why we’ve been working with the Google Maps team on a service that can give devices access to very precise location information indoors.”
Bavor described the feature as “kind of like GPS” but instead of talking to satellites — which isn’t necessarily viable given indoor reception issues — the cameras on a Tango device triangulate position based on “distinct visual features in the environment”.
Google announced today it’s launching a jobs search engine in the U.S. The service will focus on all types of jobs – from entry-level and service industry positions to high-end professional jobs. It will also leverage Google technologies like machine learning and A.I. to better understand how jobs are classified and related, among other things.
Google CEO Sundar Pichai gave a brief preview of the job search engine, called “Google for Jobs,” at Google’s developer conference I/O this afternoon.
“46% of U.S. employers say they face talent shortages and have issues filling open job positions,” explained Pichai. “While job seekers may be looking for openings right next door – there’s a big disconnect here…We want to better connect employers and job seekers through a new initiative, Google for Jobs.”
In a few weeks, Google will begin to recognize when U.S. users are typing job search queries into Google Search, and will then highlight jobs that match the query. However, Google is not necessarily taking on traditional job search service providers with this launch – instead, it’s partnering with them.