
Head of Product

Our team is driven by the belief that apps have drastically improved the way we live, work, learn, and socialize, keeping us connected to each other and plugged into the information we need. Now more than ever, we understand the importance of supporting our developer community by ensuring you have the technology and resources you need to keep your business up and running. Whether you’re a high-growth startup or a global enterprise, we’re still here to help you build and operate your app.

Laura Willis
Developer Marketing
Android Developer Challenge

Last month, our friends at Android launched the Android Developer Challenge, and asked you to submit your ideas focused on helpful innovation, powered by on-device machine learning.

ML Kit for Firebase helps power many of these experiences, including adidas’ new in-store shopping experience for their London store. Shoppers can scan products on their phones while they are in the store and the app lets them check stock and request their size without the need for queues.

If you’ve used ML Kit for Firebase to create a great user experience, or if you’ve got a great idea for how you might use it, submit your idea by December 2! You can also check out more examples on the Android Developers blog.

Francis Ma
Head of Product

This week, we’re returning to Google I/O for the 4th year in a row to share how we’re making Firebase better for all app developers, from the smallest one-person startup to the largest enterprise businesses. No matter how many times we take the stage, our mission remains the same: to help mobile and web developers succeed by making it easier to build, improve, and grow your apps. Since launching Firebase as Google’s mobile development platform at I/O 2016, we’ve been continuously amazed at what you’ve built with our tools. It is an honor to help you on your journey to change the world!

For example, in Uganda, a start-up called Teheca is using Firebase to reduce the mortality rate of infants and new mothers by connecting parents with nurses for post-natal care. Over in India where smartphones are quickly replacing TVs as the primary entertainment source, Hotstar, India’s largest video streaming app, is using Firebase with BigQuery to transform the viewing experience by making it more social and interactive. Here’s how they’re doing it, in their own words:

Stories like these inspire us to keep making Firebase better. In fact, we’ve released over 100 new features and improvements over the last 6 months! Read on to learn about our biggest announcements at Google I/O 2019.

Simplifying machine learning for every app developer

New translation, object detection and tracking, and AutoML capabilities in ML Kit

Last year, we launched ML Kit, bringing Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package. It came with a set of ready-to-use on-device and cloud-based APIs with support for custom models, so you could apply the power of machine learning to your app, regardless of your familiarity with ML. Over the past few months, we’ve expanded on these by adding solutions for Natural Language Processing, such as Language Identification and Smart Reply APIs. Now, we’re launching three more capabilities in beta: On-device Translation API, Object Detection & Tracking API, and AutoML Vision Edge.

The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast, dynamic translation of text in your app into 58 languages. The Object Detection & Tracking API lets your app locate and track, in real-time, the most prominent object in a live camera feed. With AutoML Vision Edge, you can easily create custom image classification models tailored to your needs. For example, you may want your app to be able to identify different types of food, or distinguish between species of animals. Whatever your need, just upload your training data to the Firebase console and you can use Google’s AutoML technology to build a custom TensorFlow Lite model for you to run locally on your user's device. And if you find that collecting training datasets is hard, you can use our open source app which makes the process simpler and more collaborative.
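
To give a feel for how lightweight the On-device Translation API is, here is a minimal Kotlin sketch based on the beta SDK surface (exact class and method names may differ in later releases); it downloads the English-to-Spanish model if needed, then translates a string entirely on-device:

val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.ES)
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// Make sure the offline translation model is on the device before translating.
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        translator.translate("Firebase makes app development easier.")
            .addOnSuccessListener { translatedText ->
                Log.i(TAG, "Translated text: $translatedText")
            }
            .addOnFailureListener { e -> Log.e(TAG, "Translation error", e) }
    }
    .addOnFailureListener { e -> Log.e(TAG, "Model download error", e) }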

Customers like IKEA, Fishbrain, and Lose It! are already using ML Kit’s capabilities to enhance their app experiences. Here’s what they had to say:

"We’re working with Google Cloud to create a new mobile experience that enables customers, wherever they are, to take photos of home furnishing and household items and quickly find that product or similar in our online catalogue. The Cloud Vision Product Search API provided IKEA a fast and easy way to index our catalogue, while ML Kit’s Object Detection and Tracking API let us seamlessly implement the feature on a live viewfinder on our app. Google Cloud helps us make use of Vision Product Search and we are very excited to explore how this can help us create a better and more convenient experience for our customers.”
- Susan Standiford, Chief Technology Officer of Ingka Group, a strategic partner in the IKEA franchise system and operating IKEA in 30 markets.
“Our users are passionate about fishing, so capturing and having access to images of catches and species information is central to their experience. Through AutoML Vision Edge, we’ve increased the number of catches logged with species information by 30%, and increased our species recognition model accuracy from 78% to 88%.”

- Dimitris Lachanas, Android Engineering Manager at Fishbrain
“Through AutoML Vision Edge, we were able to create a highly predictive, on-device model from scratch. With this improvement to our state-of-the-art food recognition algorithm, Snap It, we’ve increased the number of food categories our customers can classify in images by 21% while reducing our error rate by 36%, which is huge for our customers.” - Will Lowe Ph.D., Director of Data Science & AI, Lose It!

Providing deeper insight into speed & performance of web apps

Performance Monitoring now supports web apps

Native mobile developers have loved using Firebase Performance Monitoring to find out what parts of their app are running slower than they expect, and for which app users. Today, we’re excited to announce that Performance Monitoring is available for web apps too, in beta, so web developers can understand how real users are experiencing their app in the wild.

Once you add a few lines of code to your site, the Performance Monitoring dashboard will track and visualize high-level web metrics (like page load and network stats) as well as more granular metrics (like time to first paint and first input delay) across user segments. The Performance Monitoring dashboard also gives you the ability to drill down into these different user segments by country, browser, and more. Now, you can get deep insight into the speed and performance of your web apps and fix issues fast to ensure your end users have a consistently great experience. By adding web support to one of our most popular tools, we’re reaffirming our commitment to make app development easier for both mobile and web developers.

Firebase Performance Monitoring dashboard

Enhancing user segmentation capabilities for better personalization & analysis

Brand new audience builder in Google Analytics for Firebase

Google Analytics for Firebase provides free, unlimited, and robust analytics so you can measure the things that matter in your app and understand your users. A few weeks ago, we announced advanced filtering in Google Analytics for Firebase, which allows you to filter your Analytics event reports by any number of different user properties or audiences at the same time.

Today, we’re thrilled to share that we’ve completely rebuilt our audience system from scratch with a new interface. This new audience builder includes new features like sequences, scoping, time windows, membership duration, and more to enable you to create dynamic, precise, and fresh audiences for personalization (through Remote Config) or re-engagement (through Cloud Messaging and/or the new App campaigns).

For example, if you wanted to create a "Coupon users" audience based on people who redeem a coupon code within your app, and then complete an in-app purchase within 20 minutes, this is now possible with the new audience builder.
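
Audiences like this are assembled in the Firebase console rather than in code, but they are built on top of the events your app already logs. Here is a minimal Kotlin sketch of logging the two events behind the example above, assuming a valid context (the event and parameter names are hypothetical, chosen only for illustration):

val analytics = FirebaseAnalytics.getInstance(context)

// Logged when the user redeems a coupon code (hypothetical event name).
analytics.logEvent("redeem_coupon", Bundle().apply {
    putString("coupon_code", "SPRING20")
})

// Logged later, when the user completes the purchase (hypothetical event name).
analytics.logEvent("in_app_purchase_completed", Bundle().apply {
    putDouble("value", 9.99)
    putString("currency", "USD")
})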

The new audience builder in Google Analytics for Firebase

Other exciting announcements from I/O

In addition to the three big announcements above, we’ve also made the following improvements to other parts of Firebase.

Support for collection group queries in Cloud Firestore

In January, we graduated Cloud Firestore - our fully-managed NoSQL database - out of beta into general availability with lower pricing tiers and new locations. Now, we’ve added support for Collection Group queries. This allows you to search for fields across all collections of the same name, no matter where they are in the database. For example, imagine you had a music app which stored its data like so:

Cloud Firestore data storage structure flowchart example with artists in tier one and songs in tier two

This data structure makes it easy to query the songs by a given artist. But until today, it was impossible to query across artists — such as finding the longest songs regardless of who wrote them. With collection group queries, Cloud Firestore now can perform these searches across all song documents, even though they're in different collections. This means it’s easier to organize your data hierarchically, while still being able to search for the documents you want.
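
For instance, here is a minimal Kotlin sketch of that "longest songs" query (it assumes each song document has numeric "duration" and string "title" fields, and like other Firestore queries it requires a matching index, which the console will prompt you to create):

val db = FirebaseFirestore.getInstance()

// Query every "songs" subcollection in the database, regardless of artist.
db.collectionGroup("songs")
    .orderBy("duration", Query.Direction.DESCENDING)
    .limit(10)
    .get()
    .addOnSuccessListener { snapshot ->
        for (doc in snapshot.documents) {
            Log.i(TAG, "${doc.getString("title")}: ${doc.getLong("duration")} seconds")
        }
    }
    .addOnFailureListener { e -> Log.e(TAG, "Collection group query failed", e) }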

Cloud Functions emulator

We’ve also been steadily improving our tools and emulator suite to increase your productivity for local app development and testing. In particular, we’re releasing a brand new Cloud Functions emulator that can also communicate with the Cloud Firestore emulator. So if you want to build a function that triggers upon a Firestore document update and writes data back to the database you can code and test that entire flow locally on your laptop, for much faster development.

Configurable velocity alerts in Crashlytics

Firebase Crashlytics helps you track, prioritize, and solve stability issues that erode app quality, in real time. One of the most important alerts within Crashlytics is the velocity alert, which notifies you when an issue suddenly increases in severity and impacts a significant percentage of your users. However, we recognize that every app is unique and the one-size-fits-all alerting threshold might not be what’s best for you and your business. That’s why you can now customize velocity alerts and determine how often and when you want to be alerted about changes to your app’s stability. We’re also happy to announce that we’ve expanded Crashlytics to include Unity and NDK support.

Velocity alert settings

Improvements to Test Lab

Firebase Test Lab makes it easy for you to test your app on real, physical devices, straight from your CLI or the Firebase console. Over the past few months, we’ve released a number of improvements to Test Lab. We’ve expanded the types of apps you can run tests on by adding support for Wear OS by Google and Android App Bundles. We’ve also added ML vision to Test Lab’s monkey action feature so we can more intelligently simulate where users will tap in your app or game. Lastly, we’ve made your tests more reliable with test partitioning, flaky test detection, and the robo action timeline, which tells you exactly what the crawler was doing while the test was running.

Greater control over Firebase project permissions

Security and data privacy remain among our top priorities. We want to make sure you have control over who can access your Firebase projects, which is why we’ve leveraged Google Cloud Platform’s Identity & Access Management controls to give you finer-grained permission controls. Right from the Firebase console, you can control who has access to which parts of your Firebase project. For example, you can grant access to a subset of tools so team members who run notification campaigns aren’t able to change your Firebase database’s security rules. You can go even further and use the GCP console to create custom roles permitting access to only the actions your team members are required to take.

More open-sourced SDKs

To make Firebase more usable and extensible, we’re continuing to open source our SDKs and accepting contributions from the community. We are committed to giving you transparency and flexibility with the code you integrate into your mobile and web apps. Most recently, we open sourced our C++ SDK.

Recapping a few updates from Cloud Next 2019

In case you missed the news at Cloud Next 2019, here’s a quick recap of the updates we unveiled back in April:

  • Firebase Hosting and Cloud Run integration: This integration combines Firebase Hosting's global CDN and caching features with Cloud Run's fully managed stateless containers. Now, it’s easier than ever to add performant server-side rendering for your websites in any language you want, without having to provision or manage your own servers.
  • Paid enterprise-grade support: The Google Cloud Platform (GCP) support plan includes support for Firebase products, which is a new option for our larger customers who are interested in a more robust, paid support experience. As a reminder, free community support isn’t going anywhere!

Update on Fabric migration

In addition to making Firebase more powerful, we’ve also been hard at work bringing the best of Fabric into Firebase. We know many of you have been waiting for more information on this front, so we have outlined our journey in more detail here.

Onwards

We’re continuing to invest in Firebase and as always, we welcome your feedback! With every improvement to Firebase, we aim to simplify your app development workflows and infrastructure needs, so you can stay focused on building amazing user experiences. To get a sneak peek at what’s next, join our Alpha program and help us shape the future!

Christiaan Prins
Product Manager
Max Gubin
Software Engineer

Today we are announcing the release of two new features to ML Kit: Language Identification and Smart Reply.

You might notice that both of these features are different from our existing APIs that were all focused on image/video processing. Our goal with ML Kit is to offer powerful but simple-to-use APIs to leverage the power of ML, independent of the domain. As such, we are excited to expand ML Kit with solutions for Natural Language Processing (NLP)!

NLP is a category of ML that deals with analyzing and generating text, speech, and other kinds of natural language data. We're excited to start out with two APIs: one that helps you identify the language of text, and one that generates reply suggestions in chat applications. Both of these features work fully on-device and are available on the latest version of the ML Kit SDK, on iOS (9.0 and higher) and Android (4.1 and higher).

Generate reply suggestions based on previous messages

A new feature popping up in messaging apps is to provide the user with a selection of suggested responses, either as actions on a notification or inside the app itself. This can really help a user respond quickly when they are busy, and it's also a handy way to initiate a longer message.

With the new Smart Reply API you can now quickly achieve the same in your own apps. The API provides suggestions based on the last 10 messages in a conversation, although it still works if only one previous message is available. It is a stateless API that runs fully on-device, so we neither keep message history in memory nor send it to a server.

textPlus app providing response suggestions using Smart Reply

We have worked closely with partners like textPlus to ensure Smart Reply is ready for prime time and they have now implemented in-app response suggestions with the latest version of their app (screenshot above).

Adding Smart Reply to your own app is done with a simple function call (using Kotlin in this example):

val smartReply = FirebaseNaturalLanguage.getInstance().smartReply
smartReply.suggestReplies(conversation)
        .addOnSuccessListener { result ->
            if (result.status == SmartReplySuggestionResult.STATUS_NOT_SUPPORTED_LANGUAGE) {
                // The conversation's language isn't supported, so
                // the result doesn't contain any suggestions.
            } else if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
                // Task completed successfully
                // ...
            }
        }
        .addOnFailureListener {
            // Task failed with an exception
            // ...
        }

After you initialize a Smart Reply instance, call suggestReplies with a list of recent messages. The callback provides the result, which contains a list of suggestions.
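
Here is a rough sketch of how that list of recent messages might be assembled, based on the beta SDK surface (the message text and remote user ID are placeholders; timestamps are in milliseconds):

val conversation = listOf(
    FirebaseTextMessage.createForRemoteUser(
        "Are we still on for dinner tonight?",
        System.currentTimeMillis() - 60_000, // sent a minute ago
        "remote_user_123"                    // hypothetical remote user ID
    ),
    FirebaseTextMessage.createForLocalUser(
        "Sorry, running a bit late!",
        System.currentTimeMillis()
    )
)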

For details on how to use the Smart Reply API, check out the documentation.

Tell me more ...

Although, as a developer, you can just pick up this new API and easily integrate it into your app, it may be interesting to reveal a bit about how it works under the hood. At the core of Smart Reply is a machine-learned model that is executed using TensorFlow Lite and has a modern, state-of-the-art architecture based on SentencePiece text encoding[1] and Transformer[2].

However, as we realized when we started development of the API, the core suggestion model is not all that’s needed to provide a solution that developers can use in their apps. For example, we added a model to detect sensitive topics, so that we avoid making suggestions in response to profanity or in cases of personal tragedy/hardship. Also, we included language identification, to ensure we do not provide suggestions for languages the core model is not trained on. The Smart Reply feature is launching with English support first.

Identify the language of a piece of text

The language of a given text string is a subtle but helpful piece of information. A lot of apps have functionality that depends on the language: think of features like spell checking, text translation, or Smart Reply. Rather than asking a user to specify the language they use, you can use our new Language Identification API.

ML Kit recognizes text in 110 different languages and typically only requires a few words to make an accurate determination. It is fast as well, typically providing a response within 1 to 2 ms across iOS and Android phones.

Similar to the Smart Reply API, you can identify the language with a function call (using Kotlin in this example):

val languageIdentification =
    FirebaseNaturalLanguage.getInstance().languageIdentification
languageIdentification
    .identifyLanguage("¿Cómo estás?")
    .addOnSuccessListener { identifiedLanguage ->
        Log.i(TAG, "Identified language: $identifiedLanguage")
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "Language identification error", e)
    }

The identifyLanguage function takes a piece of text and its callback provides a BCP-47 language code. If no language can be confidently recognized, ML Kit returns a code of und for undetermined. The Language Identification API can also provide a list of possible languages and their confidence values.
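
If you want that full ranked list instead of the single best guess, a sketch along these lines should work (property names follow the beta SDK and may differ in later releases):

languageIdentification
    .identifyPossibleLanguages("Hands down the best pizza in town")
    .addOnSuccessListener { identifiedLanguages ->
        for (language in identifiedLanguages) {
            Log.i(TAG, "${language.languageCode} (confidence: ${language.confidence})")
        }
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "Language identification error", e)
    }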

For details on how to use the Language Identification API, check out the documentation.

Get started today

We're really excited to expand ML Kit to include Natural Language APIs. Give the two new NLP APIs a spin today and let us know what you think! You can always reach us in our Firebase Talk Google Group.

As ML Kit grows, we look forward to adding more APIs and categories that enable you to provide smarter experiences for your users. With that, please keep an eye out for some exciting ML Kit announcements at Google I/O.

Christiaan Prins
Product Manager

If you're building or looking to build a visual app, you'll love ML Kit's new face contour detection. With ML Kit, you can take advantage of many common Machine Learning (ML) use-cases, such as detecting faces using computer vision. Need to know where to put a hat on a head in a photo? Want to place a pair of glasses over the eyes? Or maybe just a monocle over the left eye. It's all possible with ML Kit's face detection. In this post we'll cover the new face contour feature that allows you to build better visual apps on both Android and iOS.

Detect facial contours

With just a few configuration options you can now detect detailed contours of a face. Contours are a set of over 100 points that outline the face and common features such as the eyes, nose and mouth. You can see them in the image below. Note that as the subject raises his eyebrows, the contour dots move to match it. These points are how advanced camera apps set creative filters and artistic lenses over a user's face.

Setting up the face detector to detect these points only takes a few lines of code.

lazy var vision = Vision.vision()
let options = VisionFaceDetectorOptions()
options.contourMode = .all
let faceDetector = vision.faceDetector(options: options)

The contour points can update in real time as well. To achieve an ideal frame rate, the face detector is configured with fast mode by default.

When you're ready to detect points in a face, send an image or a buffer to ML Kit for processing.

faceDetector.process(visionImage) { faces, error in
  guard error == nil, let faces = faces, !faces.isEmpty else { return }
  for face in faces {
    if let faceContour = face.contour(ofType: .face) {
      for point in faceContour.points {
        print(point.x) // the x coordinate
        print(point.y) // the y coordinate
      }
    }
  }
}

ML Kit will then give you an array of points that are the x and y coordinates of the contours in the same scale as the image.
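
Face contour detection is available on Android too. Here is a rough Kotlin equivalent of the Swift snippets above, sketched against the firebase-ml-vision API surface of the time (names may differ in later SDK versions; visionImage is assumed to already wrap a bitmap or camera frame):

val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
    .build()
val faceDetector = FirebaseVision.getInstance().getVisionFaceDetector(options)

faceDetector.detectInImage(visionImage)
    .addOnSuccessListener { faces ->
        for (face in faces) {
            val faceContour = face.getContour(FirebaseVisionFaceContour.FACE)
            for (point in faceContour.points) {
                Log.i(TAG, "x=${point.x}, y=${point.y}")
            }
        }
    }
    .addOnFailureListener { e -> Log.e(TAG, "Face detection error", e) }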

Detect the location of facial features

The face detector can also detect landmarks within faces. A landmark is just an umbrella term for facial features like your nose, eyes, ears, and mouth. We've dramatically improved its performance since launching ML Kit at I/O!

To detect landmarks configure the face detector with the landmarkMode option:

lazy var vision = Vision.vision()
let options = VisionFaceDetectorOptions()
options.landmarkMode = .all
let faceDetector = vision.faceDetector(options: options)

Then pass an image into the detector to receive and process the coordinates of the detected landmarks.

faceDetector.process(visionImage) { faces, error in
  guard error == nil, let faces = faces, !faces.isEmpty else { return }
  for face in faces {
    // check for the presence of a left eye
    if let leftEye = face.landmark(ofType: .leftEye) {
      // TODO: put a monocle over the eye [monocle emoji]
      print(leftEye.position.x) // the x coordinate
      print(leftEye.position.y) // the y coordinate
    }
  }
}

We can't wait to see what you'll build with ML Kit

Hopefully these new features can empower you to easily build smarter features into your visual apps. Check out our docs for iOS or Android to learn all about face detection with ML Kit. Happy building!

Sachin Kotwani
Product Manager

In today's fast-moving world, people have come to expect mobile apps to be intelligent - adapting to users' activity or delighting them with surprising smarts. As a result, we think machine learning will become an essential tool in mobile development. That's why on Tuesday at Google I/O, we introduced ML Kit in beta: a new SDK that brings Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package on Firebase. We couldn't be more excited!

Machine learning for all skill levels

Getting started with machine learning can be difficult for many developers. Typically, new ML developers spend countless hours learning the intricacies of implementing low-level models, using frameworks, and more. Even for the seasoned expert, adapting and optimizing models to run on mobile devices can be a huge undertaking. Beyond the machine learning complexities, sourcing training data can be an expensive and time consuming process, especially when considering a global audience.

With ML Kit, you can use machine learning to build compelling features, on Android and iOS, regardless of your machine learning expertise. More details below!

Production-ready for common use cases

If you are a beginner or want to implement a solution quickly, ML Kit gives you five ready-to-use ("base") APIs that address common mobile use cases:

  • Text recognition
  • Face detection
  • Barcode scanning
  • Image labeling
  • Landmark recognition

With these base APIs, you simply pass in data to ML Kit and get back an intuitive response. For example: Lose It!, one of our early users, used ML Kit to build several features in the latest version of their calorie tracker app. Using our text recognition API together with a custom-built model, their app can quickly capture nutrition information from product labels and log a food's contents from an image.

ML Kit gives you both on-device and Cloud APIs, all in a common and simple interface, allowing you to choose the ones that fit your requirements best. The on-device APIs process data quickly and will work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give a higher level of accuracy.
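
To give a feel for how small that interface is, here is a hedged Kotlin sketch of calling the text recognition API on-device (method names follow a later revision of the SDK, and visionImage is assumed to already wrap a bitmap or camera frame):

val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

recognizer.processImage(visionImage)
    .addOnSuccessListener { result ->
        // result.text is the full recognized string; result.textBlocks has the structure.
        Log.i(TAG, "Recognized text: ${result.text}")
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "Text recognition error", e)
    }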

See these ready-to-use APIs in the Firebase console:

Heads up: We're planning to release two more APIs in the coming months. The first is a smart reply API that allows you to support contextual messaging replies in your app, and the second is a high-density face contour addition to the face detection API. Sign up here to give them a try!

Deploy custom models

If you're seasoned in machine learning and you don't find a base API that covers your use case, ML Kit lets you deploy your own TensorFlow Lite models. You simply upload them via the Firebase console, and we'll take care of hosting and serving them to your app's users. This way you can keep your models out of your APK/bundles, which reduces your app's install size. Also, because ML Kit serves your model dynamically, you can always update your model without having to re-publish your apps.

But there is more. As apps have grown to do more, their size has increased, harming app store install rates and potentially costing users more in data overages. Machine learning can further exacerbate this trend, since models can reach tens of megabytes in size. So we decided to invest in model compression. Specifically, we are experimenting with a feature that allows you to upload a full TensorFlow model, along with training data, and receive in return a compressed TensorFlow Lite model. The technology behind this is evolving rapidly and so we are looking for a few developers to try it and give us feedback. If you are interested, please sign up here.

Better together with other Firebase products

Since ML Kit is available through Firebase, it's easy for you to take advantage of the broader Firebase platform. For example, Remote Config and A/B Testing let you experiment with multiple custom models. You can dynamically switch values in your app, making it a great fit for swapping which custom model your users get on the fly. You can even create population segments and experiment with several models in parallel.
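
As a sketch of what that might look like in practice (the parameter key and model name are hypothetical, and the method names follow a more recent Remote Config SDK):

val remoteConfig = FirebaseRemoteConfig.getInstance()

// "active_model_name" is a hypothetical Remote Config parameter key.
remoteConfig.setDefaultsAsync(mapOf("active_model_name" to "food_classifier_v1"))

remoteConfig.fetchAndActivate()
    .addOnCompleteListener {
        // A/B test variants can be served different values for this parameter,
        // so each variant loads a different hosted custom model.
        val modelName = remoteConfig.getString("active_model_name")
        Log.i(TAG, "Using custom model: $modelName")
    }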


Get started!

We can't wait to see what you'll build with ML Kit. We hope you'll love the product as much as many of our early customers already do.

Get started with the ML Kit beta by visiting your Firebase console today. If you have any thoughts or feedback, feel free to let us know - we're always listening!

Francis Ma
Head of Product

It’s hard to believe that it’s only been two years since we expanded Firebase at I/O 2016 from a set of backend services to a full app development platform. In the time since then, it’s been humbling to watch the developer community embrace Firebase. We now have 1.2 million apps actively using Firebase every month!

No matter how much we grow, our mission remains the same: to help mobile app teams be successful across every stage of your development cycle, from building your app, to improving app quality, to growing your business.

Having such an amazing developer community is both a huge honor and a huge responsibility. Thank you for trusting us with your apps. It’s inspiring to hear the stories about what you’ve built with Firebase and your success is the reason we’re excited to come to work everyday!

Today, we’re announcing a number of improvements to Firebase. Let’s take a look.

Introducing ML Kit into public beta

Machine learning just got easier for mobile developers. We’re excited to announce ML Kit, an SDK available on Firebase that lets you bring powerful machine learning features to your app whether it's on Android or iOS, and whether you're an experienced ML developer or you're just getting started.

ML Kit comes with a set of ready-to-use APIs for common use cases: recognizing text, detecting faces, scanning barcodes, labeling images and recognizing landmarks. These APIs can run on-device or in the cloud, depending on the functionality. The on-device APIs process data quickly and will work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give a higher level of accuracy. You can also bring in your own TensorFlow Lite models for advanced use-cases, and ML Kit will take care of the hosting and serving, letting you focus on building your app.

These five APIs are just the first step. We'll be rolling out more in the future and if you want to be involved as an early tester, please visit our signup form to join the waiting list.

Whether you're building on Android or iOS, you can improve the experience for your users by leveraging machine learning. And with ML Kit, we hope to make it easy for developers of all experience levels to get started today. Visit our docs to learn more.

Improving Performance Monitoring

At I/O last year, we launched Performance Monitoring into beta to help you gain insight into your app's performance so you can keep it fast and responsive. Since then, we've seen tremendous adoption. Some of the largest apps in the world — like Flipkart, Ola, and Swiggy — have started using Performance Monitoring and we now report 100 billion performance metrics every day, helping developers improve their app's quality and make their users happy!

Now that the SDK is battle-tested, we've decided to graduate Performance Monitoring out of beta. With this change comes a couple of improvements that you'll see rolling out into the console today.

First, you'll now see an issues feed at the top of the Performance Monitoring dashboard. This feed gives you a quick and easy look at any performance issues occurring across your app, as well as Firebase's opinion on the severity of the issue.

Second, you can now easily identify parts of your app that stutter or freeze. Performance monitoring identifies rendering issues, telling you how many frames are dropped per screen in your app, so you can quickly troubleshoot the issue. If you have apps in the Play store, this is a great way to get detailed information on rendering issues reported in Android vitals, without writing additional code. You can get started with Performance Monitoring today by visiting our documentation.
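
Most of these metrics are collected automatically, but you can also instrument specific flows yourself with custom traces. A minimal Kotlin sketch (the trace name, metric name, and loadFeed() helper are hypothetical):

val trace = FirebasePerformance.getInstance().newTrace("load_feed")
trace.start()

val items = loadFeed()                                      // hypothetical app function
trace.incrementMetric("items_loaded", items.size.toLong())  // attach a counter to the trace

trace.stop()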

Better analytics and access management controls

With Google Analytics for Firebase, you've always been able to see analytics for each of your project's apps. Last year, we added the ability to see your data in real time, with the addition of the StreamView and DebugView reports. Now, you'll notice that we've added real time cards throughout your Analytics reports to give you a better idea of what your users are doing right now.

Analytics is also getting two more upgrades with the addition of project-level reporting and flexible filters. Project-level reporting lets you see what's happening across all the apps in a project, so you have a more holistic view of your app business, while flexible filters allow you to slice your data more precisely to produce key insights. These updates will be rolling out in the coming weeks.

We're launching another update to the Firebase console today: improved identity and access management. This will allow you to more easily invite others to collaborate on your projects and control what they have access to, all from within the Firebase console.

Expanding Firebase Test Lab to iOS

At Firebase, it's always been vitally important to us to build products that work for development on both Android and iOS. That's why it's particularly exciting to announce that we're expanding Test Lab to include iOS, in addition to Android.

Test Lab provides you with physical and virtual devices that allow you to run tests to simulate actual usage environments. With the addition of Test Lab for iOS, we help you get your app into a high quality state - across both Android and iOS - before you even release it.

Test Lab for iOS will be rolling out over the coming months. If you want to be an early tester of the product, you can sign up using this form to get on the waiting list today.

Just the beginning

It’s been an amazing journey at Firebase so far and we believe that we’re only getting started. By continuing to deepen our integrations with Google Cloud Platform, we aim to make it easy for you to leverage the enormous scale of Google’s infrastructure. We’re also immensely excited about the possibilities that machine learning holds for empowering developers like you. Predictions and ML Kit are the first two steps, but there’s much more we hope to do.

Thank you, as always, for being part of the journey with us. To hear about many of these announcements and more in detail, you can check out our YouTube playlist for recordings of all our talks at Google I/O. If you’re not already part of the Firebase Alpha program, please join and help shape the future of the platform. We can’t wait to see what you build next.