One of the best new features in the iPhone 6s line is Siri’s ‘always on’ “Hey Siri…” capability. All you need to do to get Siri’s attention and ask a question is say, “Hey Siri…” You’ll hear the familiar beep tone that tells you Siri is ready to listen. If you don’t follow up with a question right away, Siri beeps again and says, “I’m listening.” I like that, and it works well on the iPhone, so we’re likely to see, or rather hear, Siri on other devices in the future.
At the most basic level, Siri on the iPhone and iPad is a speech-recognition system tied into a growing number of activity functions, ranging from a simple ‘What time is it?’ to weather forecasts, sports scores, app launching, making phone calls, and much more. Google’s speech-recognition system is even more advanced than Siri’s and works on a Google Nexus smartphone without an internet connection.
There are two ways to look at how Siri is used. The first is voice commands, which perform some kind of useful action and deliver a result. The second is dictation, whereby Siri listens to what you say and types it out on the screen as you say it. Either way, Siri requires an internet connection; Google’s new speech-recognition system is small, fast, and does not.
That may not be much of an issue down the road, since nearly all of our devices have or will have internet access, but you can see the value right away. Siri on the Watch extends Siri’s convenience beyond the iPhone, but it is somewhat anemic because Siri on the Watch depends on both the iPhone and an internet connection to work properly. That makes Siri’s initial responses on Watch very slow; slow to the point of “I won’t bother with that again.”
Imagine a Siri with a small application footprint that works in real time and offers both speech recognition and voice commands– without an internet connection. Future Watch models will have more memory, so Siri could reside entirely on the Watch and still interact with apps, notifications, alerts, and alarms– and dictation– from the iPhone. Users would get faster responses from Siri and faster answers to queries, and both would improve usability and value.
Here are three issues that concern me about an ‘always on’ Siri and Apple’s approach to voice recognition. First, it’s magic. Talk to your wrist, your Mac’s screen, or your iPhone and get a response. The future– albeit somewhat anemic– is here. Second, there’s the plague of increased chatter as more of us begin talking to our devices: in public, where everyone can hear both query and response, and in private, where Siri– or whatever voice-recognition system is used; Google could have a better name– becomes a surrogate friend, a digital confidant.
Does anyone else see a problem with that?
Finally, if you’ve used Google’s Google Now or Microsoft’s Cortana, or even heard Alexa on Amazon’s Echo device, you might conclude that Apple’s Siri, the first voice-recognition system to go mainstream, has fallen behind the times. Apple heralded Siri as the future with great pomp and circumstance, yet it still doesn’t do much that’s worthwhile, and in many ways it doesn’t do as well as the competition.
That leads me to believe that Apple, as large and prosperous as the company has become, is juggling too many balls and doesn’t have the engineering talent or management skill to keep them all growing and advancing as needed. In voice recognition, Apple has fallen behind Google, Microsoft, and even Amazon.
I don’t mind Siri being on all the time, and I certainly look forward to an integrated Siri on future versions of the Mac, but it would be even better if Siri were more than ear candy.