
Ok Google, what’s the future of smart speaker applications?

The last few months have not been kind to some of the world's largest technology players. Meta, the owner of Facebook, WhatsApp, and Instagram, recently saw its value tumble by over $80bn, almost as much as its IPO value in 2012. The recent decline in advertiser spending has been triggered in part by the global economic downturn, and Meta's problems have been compounded by negative press surrounding the Metaverse, one of the company's largest investments. Meanwhile, Amazon faces a falling share price, and Elon Musk appears to have blundered his way into overpaying for Twitter, despite trying to pull out of the deal.

Even Alphabet, Google's parent company, has not been immune to these trends, with profits down 27% in the third quarter of 2022. In light of this, there have been reports that Google is undergoing a degree of restructuring. Allegedly, Google is refocusing on the hardware side of its business, largely at the expense of Google Assistant. This makes sense on paper: Apple, which makes the majority of its revenue from hardware sales, has proven more resilient than rivals that depend on cloud and advertising revenue.

Google seems to be planning to cut the range of devices and platforms supported by Assistant, reportedly scaling back support on devices not directly made or sold by Google. Devices such as Google Home, Google Pixel, and Nest would still be supported, but devices not sold through the Google store would likely be excluded. It's not clear at this point exactly which devices would lose support, but third-party products such as cars with Assistant integrated into the dashboard, smart TVs, and smartwatches not manufactured by a Google partner would probably be the first targets. Others might include certain mobile devices built for emerging markets that integrate with Assistant but don't run Android, such as the Nokia 2720, the Jio Phone, and phones running KaiOS.

Throughout the year, there have been other signs that Google's commitment to the Assistant project has been wavering. Back in June this year, Google announced the deprecation of Conversational Actions, a service that allowed developers to build bespoke applications for smart speaker devices, including the Nest Hub. Instead, Google is directing developers towards App Actions, a platform that allows them to integrate functions from their apps into Assistant for Android without any need for training data or labelling. However, the move will significantly limit the ability of third-party developers to create bespoke Assistant actions unless they fall within the permitted domains, such as smart home services, and even then they will be limited to the built-in intents that Google considers appropriate. So far, the domains listed are Communications, Finance, Food and drink, Games, Health and fitness, Productivity, Shopping, Social, Transportation, and Travel.

The exact reason behind Google's decision to shut down Conversational Actions hasn't been made explicit. But according to Android Central, Google is saying that the benefit of moving to App Actions is that "developers can focus on growing engagement with their Android apps instead of building a completely separate voice-only experience from scratch". This might suggest that third-party developers found the process of creating conversational experiences too complex and time-consuming.

This suggests that Google is shifting its strategic priorities away from allowing third parties to build their own Assistant actions.

What is Google Assistant, and why is it being scaled back?

In case you need a refresher, Google Assistant is Google's virtual assistant application that integrates with mobile and smart devices. Allowing for interactions over voice or text, Google Assistant enables users to perform many everyday tasks conversationally: answering questions, running Google searches, sending messages, setting alarms, and playing music. In a sense, Google Assistant aims for a very broad kind of artificial intelligence, one that can understand and respond to a vast range of user requests.

The functionality of Google Assistant can be expanded through the App Actions platform, which lets app developers expose basic functions from their apps so that they can be invoked via Assistant. For example, a user could ask to order a pizza, and the App Action would launch the relevant app on the device and take the user directly to the appropriate screen. You won't be able to extend that into a multi-turn conversation and order the pizza within that conversation; you'll just be able to open a supported app and shortcut to an area the developer deems relevant.
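To make this concrete, here is a minimal Kotlin sketch of the app-side handling, assuming a hypothetical pizza-ordering app whose shortcuts.xml declares a capability for the built-in intent actions.intent.ORDER_MENU_ITEM, mapped to a deep link such as https://example.com/order?item={menuItem}. The class and helper names are illustrative, not part of Google's API.

```kotlin
// Illustrative sketch only: assumes a capability for actions.intent.ORDER_MENU_ITEM
// is declared in shortcuts.xml and mapped to https://example.com/order?item={menuItem}.
import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class OrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleAssistantDeepLink(intent)
    }

    private fun handleAssistantDeepLink(intent: Intent) {
        // When the user says "order a margherita pizza", Assistant launches this
        // activity with the deep link; the spoken item arrives as a query parameter.
        val item = intent.data?.getQueryParameter("item") ?: return
        showOrderScreen(item)
    }

    private fun showOrderScreen(item: String) {
        // App-specific navigation into the ordering flow would go here (omitted).
    }
}
```

Note that everything after the deep link is ordinary app UI: the conversation with Assistant ends the moment the app opens, which is exactly the limitation described above.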

One of the issues with Google Assistant and Google Home is the fragmented ecosystem. Because app integrations are developed by individual app developers, rather than by Google itself, the consistency of conversational experiences can vary widely. Moreover, the set of functions actually developed by Google has been shrinking for some time. For instance, Google decommissioned the Android Auto driving mode for smartphone screens back in July and replaced it with Google Assistant's Driving Mode. However, few users made the switch, preferring instead to use the driving features already built into Google Maps. It only took three months for Google to announce that Driving Mode is also being shut down, with service ending this month. Ultimately, actions like this erode consumer confidence in Assistant, as its feature set keeps changing.

Other Google Assistant functionality has been on the chopping block this year. In March, Google killed off its personalised information service Google Assistant Snapshot, which delivered weather, traffic, and sports news tailored to individual users. Likewise, location-based reminders were removed from the service in June. Google is already notorious for shutting down projects, with the average lifespan of a Google product being just four years. Combined with the deprecation of Conversational Actions, the shrinking of Assistant's feature set looks set to continue for the foreseeable future.

So why is Google shifting its focus with Assistant? We can only speculate at this point, but poor profitability is probably a factor. While the broad aim behind Google Assistant has been to dominate the virtual assistant market, it has never been clear how Google plans to translate that large market share into a profitable service. The service has never been well suited to delivering ads to its users, and advertising, of course, makes up the majority of Google's revenue. With Google's share price and profits feeling the squeeze, it's likely that unprofitable products like Assistant will face stricter internal scrutiny. In these circumstances, it's understandable that Google would want to refocus Assistant support around the hardware devices it actually sells.

What does this signal for conversational AI?

Thanks to Google Assistant, we have much to be grateful for. Consumers are more comfortable than ever with the idea of interacting with their smart devices using voice. Google has also used the data gathered from Assistant to improve its Speech-to-Text API (which requires no training by the developer), which will in turn help strengthen Dialogflow, Google's no-code chatbot platform for businesses.
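That Speech-to-Text API is also available to developers directly through the Google Cloud client libraries. Below is a minimal Kotlin sketch, assuming application-default credentials are configured and that sample.wav is a local 16 kHz LINEAR16 recording; the file name is illustrative.

```kotlin
// Minimal sketch: transcribe a short local audio file with Google Cloud Speech-to-Text.
// Assumes application-default credentials and a 16 kHz LINEAR16 WAV named "sample.wav".
import com.google.cloud.speech.v1.RecognitionAudio
import com.google.cloud.speech.v1.RecognitionConfig
import com.google.cloud.speech.v1.SpeechClient
import com.google.protobuf.ByteString
import java.nio.file.Files
import java.nio.file.Paths

fun main() {
    val audioBytes = ByteString.copyFrom(Files.readAllBytes(Paths.get("sample.wav")))

    SpeechClient.create().use { client ->
        val config = RecognitionConfig.newBuilder()
            .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build()
        val audio = RecognitionAudio.newBuilder().setContent(audioBytes).build()

        // No model training or labelling is needed: the pretrained service
        // returns ranked transcription alternatives directly.
        val response = client.recognize(config, audio)
        response.resultsList.forEach { result ->
            println(result.alternativesList.first().transcript)
        }
    }
}
```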

Although it may be a controversial claim, we think the sunsetting of Conversational Actions indicates that developers have found it too difficult to create sophisticated conversational experiences. These difficulties may stem from the limitations of many virtual assistant development tools on the market and from the high level of expertise such experiences demand. The Google Assistant director of product management may have hinted at this when she said, "Developers [for Google Assistant] find it challenging [when] you have to start from scratch every time."

Alexa suffers from similar challenges

Amazon Alexa faces similar issues. As a writer from Infoworld recently commented, Amazon may boast that there are "more than 900,000 registered Alexa developers" building "over 130,000 Alexa skills", but "it's still the case that it's virtually impossible to actually use more than a small handful of those skills". These problems have plagued Alexa for some time. Most customers only use Alexa to play music or set a timer while they cook, and an internal document from 2019 suggests that Amazon's drive to add features has not led to an overall increase in user engagement.

More recently, Amazon has announced plans to let businesses and customers use Alexa to initiate the customer support process. Businesses that have published Skills in the Alexa store will be able to connect customers with a human agent via their phone when self-service options have been exhausted (you might have expected this hand-off to happen through the device itself). An optimistic view of this new triage functionality is that the Alexa platform is being extended to accommodate deeper commercial integration. A pessimistic view is that the containment rate of Alexa Skills is low and that the applications developed for the platform do not offer the depth necessary to meet customer needs.

Update on 17th November 2022: Since the publication of this blog, it has been reported that Amazon intends to lay off a portion of the staff working on Alexa, including some of those working on Alexa skills. This would suggest that Amazon is suffering from similar economic pressures to Google and that, likewise, their voice assistant programme is undergoing some degree of restructuring. While it’s unlikely that Amazon Alexa will disappear entirely, we could see fewer new Alexa devices and products in the future.

Could there be a solution?

To simplify the support for voice in developer apps, Google could hide the complexity of intent and entity creation and remove the need to train the associated semantic models. This would also remove the need for developers to be experts in AI and computational linguistics. Google could also provide off-the-shelf ontological NLP intent structures for a limited set of domains, with ready-made speech-to-text support.

To simplify the bespoke design of voice apps on smart speakers, a solution might be to limit the supported domains and provide out-of-the-box AI for those domains. The need to create, populate, and configure training data for a speech-driven application would again be removed in favour of ready-made intents for particular domains.
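As a thought experiment, the developer-facing side of such an out-of-the-box, domain-limited model might look something like the sketch below. Every name in it is hypothetical; the point is only to illustrate an arrangement in which the platform owns the intents, entities, and speech recognition, and the developer supplies nothing but fulfilment logic.

```kotlin
// Hypothetical sketch of a domain-limited, platform-owned intent model.
// None of these types correspond to a real Google or Amazon API.
enum class Domain { FOOD_AND_DRINK, TRANSPORTATION, SHOPPING }

data class BuiltInIntent(val domain: Domain, val name: String)

class AssistantIntegration {
    private val handlers = mutableMapOf<BuiltInIntent, (Map<String, String>) -> String>()

    // Developers register a handler per platform-provided intent; they never
    // define intents, entities, or training phrases themselves.
    fun onIntent(intent: BuiltInIntent, handler: (Map<String, String>) -> String) {
        handlers[intent] = handler
    }

    // The platform would call this after doing speech-to-text and slot filling.
    fun dispatch(intent: BuiltInIntent, slots: Map<String, String>): String =
        handlers[intent]?.invoke(slots) ?: "Sorry, that isn't supported."
}

fun main() {
    val integration = AssistantIntegration()
    val orderFood = BuiltInIntent(Domain.FOOD_AND_DRINK, "ORDER_MENU_ITEM")

    integration.onIntent(orderFood) { slots ->
        "Added ${slots["item"]} to your basket."
    }

    // Simulates the platform having already parsed "order a margherita pizza".
    println(integration.dispatch(orderFood, mapOf("item" to "margherita pizza")))
}
```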

Google appears to be planning to implement both of these solutions, but this will limit the potential scope of Assistant’s interactions. It will be interesting to see if Alexa follows suit.

Sources

‘$80bn wiped from value of Facebook and Instagram owner Meta’ (The Guardian, 2022)

‘Mark Zuckerberg’s Metaverse Legs Demo Was Staged With Motion Capture’ (Forbes, 2022)

‘Amazon Shares Skid on Weak Outlook Amid Recession Fears’ (Wall Street Journal, 2022)

‘Musk says excited by Twitter deal despite overpaying’ (Reuters, 2022)

‘Google profits plummet 27 percent in Q3 2022 earnings report’ (ArsTechnica, 2022)

‘Report: Google “doubles down” on Pixel hardware, cuts Google Assistant support’ (ArsTechnica, 2022)

‘Apple stock closes out its best day since 2020’ (CNBC, 2022)

‘Google is sunsetting Conversational Actions in favor of App Actions’ (XDA, 2022)

‘Google’s Hardware Ambitions Will Shrink Assistant Investment: Report’ (voicebot.ai, 2022)

‘Google to sunset Assistant’s Conversational Actions as focus shifts to App Actions on Android’ (Android Central, 2022)

‘Facing Threat From Apple, Google Tries New Hardware Playbook’ (The Information, 2022)

‘Smart home fragmentation is keeping me from trying new gadgets’ (Android Central, 2022)

‘Is Amazon Alexa a success?’ (Infoworld, 2022)

‘Amazon’s Alexa Stalled With Users as Interest Faded, Documents Show’ (Bloomberg, 2021)