Google I/O 2018
This year, POSSIBLE Mobile had the opportunity to send sixteen people from our talented team of developers, designers, QA analysts, and more to Google I/O 2018. It was an exciting time for all of us to learn and play around with Google’s latest and greatest technology. We attended a wide variety of sessions covering concepts like AR and VR, Voice, Kotlin, Flutter, and more. From seeing significant Google Assistant improvements, to hearing how Android is taking another step in guiding developers on best practices for building apps, this year’s conference did not disappoint. Here are some of our favorite announcements made at Google I/O 2018:
Android Jetpack
Inspired by the Android support library, Android Jetpack offers developers new guidance on architecture best practices for their next Android app. By combining the architecture components announced last year with new tools like WorkManager and App Actions and Slices (more on these below), Jetpack gives developers a common infrastructure so they can focus on the aspects of their app that make it unique.
WorkManager: This helps developers simplify how and when to run asynchronous tasks. It combines task chaining with the ability to constrain when a task runs based on certain conditions, helping avoid expensive, poorly timed operations and guaranteeing that tasks complete even when the app has been killed.
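Chaining and constraints might be sketched like this against the current androidx.work API (the alpha shown at I/O differed slightly); the two workers and the surrounding context variable are invented for illustration:

```kotlin
// Hypothetical sketch using WorkManager (androidx.work).
class CompressWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // ... compress pending photos ...
        return Result.success()
    }
}

class UploadWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // ... upload the compressed photos ...
        return Result.success()
    }
}

// Only upload when the device is charging and on an unmetered network.
// WorkManager persists the request, so the chain survives the app being killed.
val constraints = Constraints.Builder()
    .setRequiredNetworkType(NetworkType.UNMETERED)
    .setRequiresCharging(true)
    .build()

WorkManager.getInstance(context)
    .beginWith(OneTimeWorkRequestBuilder<CompressWorker>().build())
    .then(OneTimeWorkRequestBuilder<UploadWorker>().setConstraints(constraints).build())
    .enqueue()
```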
Navigation Architecture Component: This was introduced to help eliminate the current pain points of handling the back stack. Developers now define a navigation graph of destinations (fragments and activities), which eliminates the need for complicated navigation handling inside the app code.
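In practice, navigating between destinations becomes a single call; in this hypothetical sketch, the fragment, button ID, and action ID are all invented, and the action would be declared in a navigation graph resource (e.g. res/navigation/nav_graph.xml):

```kotlin
// Hypothetical sketch using the Navigation component (androidx.navigation).
class HomeFragment : Fragment() {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        view.findViewById<Button>(R.id.detail_button).setOnClickListener {
            // The component manages the back stack; no manual
            // fragment transactions are needed.
            it.findNavController().navigate(R.id.action_home_to_detail)
        }
    }
}
```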
App Actions & Slices
For POSSIBLE Mobile, App Actions and Slices are among the most exciting new features in Jetpack. Slices let users interact with an app’s content without ever entering the app’s full experience: small pieces of an app’s functionality are surfaced in Google Search and the Assistant, suggested based on what the user searched for. For instance, a sports app could implement App Actions so that when a user searches for a specific team or game, they are shown the score and prompted to open the app directly to watch that game. With Slices, developers can add images, text, video, and interactive controls so users can quickly access the content they’re looking for. While Search and the Google Assistant will be the leading surfaces early on, Google hopes Slices will eventually be integrated into more places like the home screen or other apps.
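A Slice for the sports-score example might look roughly like this; this is a hypothetical sketch against the androidx.slice builders (whose exact signatures have shifted across alpha releases), and the provider name, team names, and score are all invented:

```kotlin
// Hypothetical sketch of a SliceProvider using the androidx.slice builders.
class ScoreSliceProvider : SliceProvider() {
    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        // A single row with a title and subtitle; tapping the Slice could
        // deep-link straight into the live game in the full app.
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Broncos 24, Raiders 17")      // invented score
                    .setSubtitle("Tap to watch the game live")
            )
            .build()
    }
}
```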
Google Assistant & “Duplex”
Believe it or not, the Google Assistant will soon be able to make phone calls for a user’s next haircut appointment or dinner reservation. This AI announcement drew one of the loudest ovations at Google I/O 2018. While this is an exciting and incredibly impressive technology breakthrough from Google, POSSIBLE Mobile will be following these new additions closely, as they raise many ethical questions.
AR in Google Maps
In a future release of Google Maps, users will begin to see arrows directing them where to walk as they look at their real-world surroundings through the phone’s camera, eliminating the need to wander until the blue dot moves along the suggested path. During the Google I/O 2018 demo, a fox was shown guiding the user to their next location in the app. It’s unclear whether this fox will make it into the actual release, but we’re hoping so.
Android Things 1.0
Android Things is Google’s managed OS that allows developers to efficiently write code for Internet of Things (IoT) devices by connecting them to existing Android development tools and APIs. For example, when connecting a device to a temperature sensor, the SDK allows a developer to quickly read the temperature without having to do any low-level programming. Android Things helps prototype ideas quickly without investing too much time and money.
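The temperature example could be sketched with one of the community contrib drivers; this is a hypothetical sketch assuming a BMP/BME280 sensor and the com.google.android.things.contrib.driver.bmx280 driver, and the I2C bus name is board-specific:

```kotlin
// Hypothetical sketch: reading a temperature sensor on Android Things
// via the contrib Bmx280 driver, with no low-level I2C code.
import com.google.android.things.contrib.driver.bmx280.Bmx280

fun readTemperatureOnce(): Float {
    val sensor = Bmx280("I2C1")  // bus name depends on the board
    sensor.temperatureOversampling = Bmx280.OVERSAMPLING_1X
    val celsius = sensor.readTemperature()
    sensor.close()
    return celsius
}
```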
ML Kit
ML Kit gives developers a set of APIs for bringing machine-learning features like image labeling, face detection, and text recognition into their apps. Using ML Kit in more apps will allow developers to provide users with more immersive experiences. Developers who want to build their own models can train them with TensorFlow and then use ML Kit as their API layer.
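Image labeling, for example, is a few lines; this hypothetical sketch assumes the firebase-ml-vision dependency introduced at I/O 2018 and an on-device model:

```kotlin
// Hypothetical sketch of on-device image labeling with ML Kit.
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun labelImage(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    FirebaseVision.getInstance().visionLabelDetector
        .detectInImage(image)
        .addOnSuccessListener { labels ->
            // Each label carries a description and a confidence score.
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Labeling failed", e) }
}
```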
ARCore
New additions to Google’s augmented reality platform, ARCore, include Cloud Anchors, Augmented Images, WebXR, and Sceneform. Cloud Anchors add the ability to connect multiple phones to a shared AR “field” for gaming or socializing with nearby users in the same “field.” Augmented Images bring image recognition to augmented reality, allowing developers to extrapolate information from things like movie posters and artwork to present useful data to the user, such as movie ratings or artist bios. Google will soon release WebXR on Chromebooks, bringing mixed reality (MR) experiences to the web browser, and Sceneform lets developers create 3D scenes without having to use OpenGL.
Based on our successful track record of implementing Google technology and writing high-quality Android apps, POSSIBLE Mobile is one of the chosen Google Certified Developer Agencies. If you’re wondering how the new changes out of Google I/O 2018 will affect your app, or how they can help you better serve your customers, please don’t hesitate to reach out. These new announcements are going to increase users’ expectations for how your app should run, and it’s crucial you don’t fall behind. Contact us at email@example.com, and one of our Android Directors will be in touch.
There are many more changes coming to Android that we can’t wait to work with, test, and integrate into our clients’ apps!