Duplex is out for Public Testing
This week at an event in Mountain View, Google demoed the public testing release of Duplex to a group of reporters. The demo took place at Oren’s Hummus Shop in Mountain View, but no video recording was allowed, so we mostly have an account of what happened through the reporters’ stories. After the software says hello to the person on the other end of the line, it immediately identifies itself: “Hi, I’m the Google Assistant, calling to make a reservation for a client. This automated call will be recorded.” (The exact language of the disclosure varied slightly in a few of the different demos.)
Google said it will disclose that the call is being recorded “in states that legally require” that disclosure. Eleven states, including California, Illinois, and Florida, “require the consent of every party to a phone call or conversation in order to make the recording lawful,” according to the Digital Media Law Project. Thirty-eight states and the District of Columbia have one-party consent laws. For calls between states, the stricter law applies: for instance, California requires all-party consent, but New York does not.
- All the articles I read talked about impressive demos and a human-like voice. This is what terrified some people when we first saw the demo on stage at Google I/O.
- Part of the press event this week was about reiterating the AI manifesto Sundar Pichai shared a few weeks ago about AI for good. Google pointed out how it wants to be respectful of the businesses it works with as the service is rolled out for testing to a small number of trusted users.
- Aside from learning a lot about what conversations to book a restaurant table or a hair appointment look like (these seemed to be the scenarios tested both at I/O and this week), Google is also gathering information about the businesses that it can use to update its search engine and Maps listings.
- I find it interesting that some reporters mentioned Google Assistant sounded almost apologetic when the human sounded confused or flustered. This would indicate that Google is starting to use some sentiment analysis in the exchange. Sentiment analysis, and the ability of the bot to adjust to what the human is feeling, is a bit of a holy grail when it comes to a natural exchange. To be honest, reading sentiment is something most call center operators fail to do in an exchange, which makes most calls frustrating.
- Of course, most of the reporters at the event tried to fool Google Assistant by giving answers that are possible but unlikely — for instance, responding to a request for a table by saying the restaurant is booked for a private event. In most cases, Google Assistant would try to understand before handing the call over to a human, who would help terminate it. One reporter mentioned that Google Assistant actually ended the call rather than handing over to a human. While some would jump to the conclusion that such instances show Duplex is a failure, I actually find the ability Google Assistant seems to have to recognize its shortcomings and turn to a human for help to be quite smart.
- There is a part of me that wishes all digital assistants would recognize their limitations and turn to a human who has an answer, rather than say “sorry, I cannot help with that!”
- Aside from serving as a safety net, the call center linked to Duplex will help with transcribing the conversations.
- I am personally excited about the prospect of trusting digital agents to perform time-saving tasks for me, whether via voice or text. Duplex shows us the future, and like any other technology, it could be exploited to worsen rather than improve our lives. Google’s cautious approach is the sensible way to roll this out. There is much more work to do in this space, but learning in real life is irreplaceable. In a way, I see this Duplex testing the same way I see driverless cars testing on the road with a human guardian.
- Lastly, if you are in the Bay Area and you have never been to Oren’s Hummus Shop you must go!
Amazon’s new Show Mode Charging Dock
The new Show Mode Charging Dock for the Fire HD 8 and Fire HD 10 provides an easy way to charge the tablets while propping them up at the same time. Thanks to the new Show Mode feature, which will be rolling out in a software update starting on July 2nd, the dock also turns the tablets into an Echo Show of sorts, with an always-on time, weather, and news ticker display and hands-free control via the Alexa voice assistant.
The dock uses magnets to align the tablet, which lives in an included case, with pogo pins that provide charging. A kickstand on the back lets you adjust the viewing angle of the tablet when it’s in the dock. Amazon will be selling the Show Mode Charging Dock for $39.99 (for the Fire HD 8) and $54.99 (for the Fire HD 10), with shipments expected to start on July 12th. Preorder customers will save $5 off the regular price of each version. In addition, Amazon will sell bundles that include the tablet and the charging dock for $109.98 for the Fire HD 8 and $189.98 for the HD 10.
- This seems like a no-brainer for Amazon. If you have an Amazon Fire tablet, why not access it for searches, playing music, and so on by voice with Alexa? Why not turn it into an Echo Show of sorts?
- The smaller dock costs as much as an Echo Dot, which will make some people question the investment, especially given that the Fire is not equipped with far-field mics. But Amazon is all about choice! And for some users who already have a tablet, investing $39 to make their Fire more useful might seem like a better ROI than investing in a speaker. Or they might just want to have an Echo Show experience without spending as much money.
- The lack of far-field mics will not be an issue for users who want to use their Fire tablet for step-by-step cooking tutorials or to get video feeds as part of their morning briefings. As I have noticed with my Echo Show, when I engage with it to take advantage of the screen, I am much closer to it than when I engage with an Amazon Echo, a Google Home, or a HomePod.
- Some PC manufacturers have experimented with adding support for Alexa or Google Assistant to their tablets in the hope of making them more attractive. I want to be clear that this move by Amazon does not come across as an endorsement of that thinking. There is a difference between providing an extra use for something you already own, or an accessory to something you are planning to buy, and thinking that accessory or feature will be a key purchase driver. Amazon’s Show Mode hits the former, not the latter. In other words, buyers have already made the decision to buy a Fire tablet, and then they evaluate the dock as an added bonus.
- There is clearly value for Amazon in adding a screen to Alexa, and I wonder whether this move, as well as the Fire TV Cube, offers a lower barrier to entry than the Echo Show when it comes to understanding how useful the device might be for a buyer. This is because both the TV and the tablet have their own primary function that is not centered on Alexa. Purchasing the Echo Show might feel more like a gamble for some buyers. Experiencing the video-plus-Alexa combination through Show Mode and the Fire TV Cube might, however, increase the appeal of an Echo Show for some consumers.