
Google I/O 2017 Recap



Google I/O is an annual developer conference where Google reveals upcoming technologies and initiatives. Google I/O 2017 is notable for its push to diversify the company and dive deeper into machine learning. Nearly 90% of Google's profits currently come from advertising, leaving its revenue stream at considerable risk; as the old proverb says, "Don't put all your eggs in one basket." In recent years, customers have come to value privacy more and more, a shift that threatens Google's advertising business. It is clear that Google must diversify its revenue and reduce its reliance on advertising. It comes as no surprise, then, that there were few announcements about the future of Chrome, search engine features, or ad features. Meanwhile, new features and initiatives for Android signal Google's continued commitment to the smartphone industry. This year, Google made significant advancements in machine learning, branching out into fields like self-driving cars. Google's investment in machine learning will clearly play an enormous role in its future as it becomes more than an ad-dependent company.

From software that can identify objects from visual information to a revamp of the Android OS that saves significant battery life, the announcements at Google I/O 2017 have been remarkable. One of the most notable initiatives Google pushed forward this year is Google Lens, which uses, in the words of Google CEO Sundar Pichai, "a set of vision based computing capabilities" to identify objects from photographs, in real time, in real life. The machine learning integration behind this initiative is astounding: object recognition used to require significant computing resources and even more time, given the complexity of machine vision and of recognizing and correlating objects. Now, any smartphone can leverage Google's machine learning algorithms through Google Lens to identify and interact with objects. During a demo, Google showed how Lens can identify a business from a photo of its storefront and display the name, rating, and general cost, among other business information, on the smartphone. Google aims to release Google Lens soon, as it refines the product's accuracy and features.
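
Google did not expose Lens as a public API at launch, but its existing Cloud Vision API gives a rough sense of what this kind of server-side image labeling looks like to a developer. The sketch below, written in Kotlin (now a first-class Android language, as discussed later), uses the Cloud Vision Java client library; the image path and the environment-based credential setup are assumptions for illustration, not part of Lens itself.

import com.google.cloud.vision.v1.AnnotateImageRequest
import com.google.cloud.vision.v1.Feature
import com.google.cloud.vision.v1.Image
import com.google.cloud.vision.v1.ImageAnnotatorClient
import com.google.protobuf.ByteString
import java.nio.file.Files
import java.nio.file.Paths

fun main() {
    // Load a storefront photo from disk (the file name is an assumption for this sketch).
    val bytes = ByteString.copyFrom(Files.readAllBytes(Paths.get("storefront.jpg")))

    // Ask the Cloud Vision service to label whatever it recognizes in the image.
    val request = AnnotateImageRequest.newBuilder()
        .setImage(Image.newBuilder().setContent(bytes))
        .addFeatures(Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION))
        .build()

    // The client picks up credentials from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    ImageAnnotatorClient.create().use { client ->
        val response = client.batchAnnotateImages(listOf(request)).responsesList.first()
        for (label in response.labelAnnotationsList) {
            println("${label.description} (score: ${"%.2f".format(label.score)})")
        }
    }
}

The heavy lifting, training and serving the recognition models, happens on Google's servers; the phone only supplies the image, which is what makes this kind of capability available to any smartphone.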

While Google Lens aims to make a profound impact in the machine learning sector, Google is also taking it directly into the home by upgrading Google Home to integrate more deeply with the user's household. Google plans to implement a set of features it calls "Proactive Assistance". One of these features allows Google Home to control lighting, such as lamps and other indoor lights, in order to get the user's attention. For example, the device may dim the kitchen lights while the user is cooking to alert them to an upcoming appointment. Furthermore, the device can now be paired with a TV, giving Google Home a screen on which to display visuals and information at the user's request. The most notable new feature is hands-free calling orchestrated by voice recognition, which is especially useful in busier households where multiple people share one Google Home. While these new features certainly benefit users and improve the Google Home experience, they also blur the line between the digital and the physical. The brief history of cyberattacks has demonstrated that the physical world can be manipulated and damaged by digital means. As Google Home becomes more integrated into the household, it gains access to private and sensitive home infrastructure, lighting being one example, and a potential attack might disable lighting or TV access via Google Home. With little defensive capability built in, the security of Google Home and these features remains to be tested.
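
Google has not published how Proactive Assistance is implemented, but the behavior demoed above can be pictured as a simple rule evaluated over a calendar and a lighting controller. The Kotlin sketch below is purely hypothetical: LightController, Appointment, and proactiveReminder are illustrative names invented for this example, not any Google API.

import java.time.Duration
import java.time.LocalDateTime

// Hypothetical interface standing in for a smart-home integration layer.
interface LightController {
    fun dim(room: String, level: Int) // level: 0..100 percent brightness
}

data class Appointment(val title: String, val startsAt: LocalDateTime)

// Dim the kitchen lights when an appointment is imminent, mimicking the
// "dim the lights to get the user's attention" behavior from the keynote demo.
fun proactiveReminder(lights: LightController, next: Appointment, now: LocalDateTime) {
    val untilStart = Duration.between(now, next.startsAt)
    if (!untilStart.isNegative && untilStart <= Duration.ofMinutes(10)) {
        lights.dim(room = "kitchen", level = 30)
        println("Reminder: '${next.title}' starts in ${untilStart.toMinutes()} minutes.")
    }
}

Even in this toy form, the security concern is visible: whatever component evaluates such rules necessarily holds write access to physical devices in the home.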

Now, with over 2 billion active Android devices in the world, Google has bolstered its beloved mobile OS. Codenamed Android O, the next release seeks to dramatically increase Android performance across the spectrum. With Android O, Google aims to improve power and performance through an effort it calls "Vitals". Google claims that Android boot times will be nearly twice as fast and that all apps will run faster by default; while bold and dazzling, these claims have yet to be tested. Staying true to that goal, Google seems to have finally decided to fix a big problem in the Android OS: the allocation of resources, specifically RAM, to background applications. In the past, Android allowed background applications to keep running and consuming memory and battery. Most of that background execution was unnecessary, and it often caused a huge battery drain and slower performance from the reduced available RAM. Now, Vitals will place more restrictions on background location updates and background execution, in what Google calls "wise limits", to free up RAM and protect battery life. The most concrete change is tooling that tracks how frequently apps exhibit issues and provides feedback on how to resolve or mitigate those problems. On the developer side, Google has decided to officially support Kotlin, a concise programming language for multiplatform applications, on Android. Kotlin is fully interoperable with Java and also compiles to JavaScript and native targets. Notably, this is the first time Google has adopted a new first-class language for Android; its significance may not yet be fully recognized, but it marks Google's commitment to further Android development and support.
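
Under these background limits, apps can no longer keep long-running services alive freely; the recommended pattern is to hand deferrable work to the system's scheduler so it can batch execution and reclaim RAM. Here is a minimal Kotlin sketch using Android's existing JobScheduler API; SyncJobService and the particular constraints are illustrative choices, and a real app would also declare the service in its manifest with the BIND_JOB_SERVICE permission.

import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

// A scheduled job that replaces a free-running background service, letting the
// system batch work and reclaim RAM while the app is in the background.
class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean {
        // Do quick, synchronous work here; returning false tells the system the
        // job is already done. Longer work would run on a worker thread, return
        // true here, and call jobFinished(params, false) when it completes.
        return false
    }

    override fun onStopJob(params: JobParameters): Boolean = false // do not reschedule
}

// Schedule the job instead of starting a background service directly.
fun scheduleSync(context: Context) {
    val job = JobInfo.Builder(/* jobId = */ 1, ComponentName(context, SyncJobService::class.java))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // defer until Wi-Fi
        .setRequiresCharging(true)                              // defer until charging
        .build()
    val scheduler = context.getSystemService(Context.JOB_SCHEDULER_SERVICE) as JobScheduler
    scheduler.schedule(job)
}

Because the work runs only when the declared constraints are met, the app consumes no RAM or battery while idle, which is exactly the behavior the "wise limits" push is meant to encourage.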

Another Android-targeted initiative Google has launched is a set of new features for Google Photos. Using machine learning, and following sharing habits popularized by Instagram and Snapchat, Google Photos can now suggest which photos the user should share and whom to share them with. For large collections of photos, which Google calls libraries, users can share entire libraries as well. During Google I/O 2017, Google also announced the integration of Google Lens into Google Photos; for example, Lens can identify a flower in a picture the user takes at a community garden. Moreover, Google now offers Photo Books, which turn digital photos into physical albums. These albums are "beautiful, high-quality, with a clean and modern design", according to Anil Sabharwal, the head of Google Photos, who presented the new features. Google emphasizes that Photo Books are easy to make, since they can be created directly from an Android device. Photo Books has officially launched in the US, with prices starting at $9.99.

With powerful new initiatives that delve deep into machine learning, and new and improved features that supplement existing Google infrastructure, Android in particular, Google I/O 2017 has made a decisive statement about the company's future. In the long term, Google seeks to be a machine learning company as it moves away from its reliance on, and reputation as, a search engine and advertising business. With a strong push into the machine learning field and further initiatives to consolidate the Android ecosystem, this year's Google I/O has made countless headlines in tech news. As Google works to diversify its revenue streams, it is clear that Google I/O 2017 marks the beginning of another venture into machine learning.

 
