The iPhone's Siri voice assistant may be able to tell Zooey Deschanel whether it's raining outside even when she can't tell for herself, but its overall accuracy isn't that great, according to one analyst's tests.
The tests were conducted by Gene Munster of Piper Jaffray, not your usual suspect for hardware testing, since Munster is a financial analyst on Wall Street. Munster assembled a team of testers who made 800 Siri queries in a quiet room, then 800 more outside on a busy street.
Munster found that in the quiet-room tests, Siri comprehended 89% of the questions and answered 68% of them accurately. In the outdoor tests, comprehension dropped to 83% and the accuracy of its answers to only 62%, and Siri failed to answer a number of the test questions outright.
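To make the reported figures concrete, here is a minimal sketch of how comprehension and accuracy rates like Munster's might be tallied over a batch of test queries. The `score` function and the synthetic 800-query sample below are illustrative assumptions, not his raw data.

```python
# Sketch of tallying voice-assistant test results.
# Each result is a pair: (query was comprehended, answer was accurate).

def score(results):
    """Return comprehension and accuracy percentages for a list of
    (comprehended: bool, accurate: bool) query outcomes."""
    total = len(results)
    comprehended = sum(1 for c, _ in results if c)
    accurate = sum(1 for _, a in results if a)
    return {
        "comprehension_pct": round(100 * comprehended / total, 1),
        "accuracy_pct": round(100 * accurate / total, 1),
    }

# Illustrative sample: 800 indoor queries at roughly the reported rates
# (712/800 comprehended = 89%, 544/800 answered accurately = 68%).
indoor = [(i < 712, i < 544) for i in range(800)]
print(score(indoor))  # {'comprehension_pct': 89.0, 'accuracy_pct': 68.0}
```

The same tally applied to the outdoor batch would yield the lower 83% and 62% figures the report describes.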
Siri's biggest challenge is context, with the Cinderella question the most obvious example: context and the relationships between words seem to be its weak spot. It also doesn't handle poor sentence structure well, as the Peyton Manning question showed. Still, Munster wrote in his report, "With the iOS 6 release in the fall, we expect Siri to improve meaningfully while reducing its reliance on Google."
And Siri should improve as users learn how to word their questions and Apple tweaks it further, said Jim McGregor, principal analyst with Tirias Research.
"Siri is like any voice application, it learns over time and use just like the user. The same could actually be said about many smartphone text editors. The user becomes accustomed to the quirks of the software and the software isn't really an AI, but it should have algorithms for the most common commands of the user. It's like anything else on these mobile devices, the more you use it, the more accustomed you are to it," he said.
All Rights Reserved, Copyright 2000 - 2013, TechTarget