Open Voice #70: Where is Bixby? 100K Alexa skills, better routines for Amazon & Google, (ugly) MSFT earbuds, no BBC on TuneIn, Alexa device strategy, Annoying car assistant, Sony getting in the game
Hi {{first_name}}, If we were human, then 70 would mean old! Instead we are still like Voice: young and…
Oh, wait, one thought: where is the Samsung Bixby news? It has been quiet. Is something cooking, something stuck… or is the attention elsewhere? As soon as we know, so will you!
Amazon now also allows skills to be included in routines (a list of actions completed with a single command), very much like the Action Blocks Google announced.
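For illustration only, here is a purely hypothetical sketch of the idea as structured data; this is not Amazon's or Google's actual API or routine format, just a way to picture one spoken trigger firing an ordered list of actions that can now include third-party skills.

```python
# Purely hypothetical data structure (NOT Amazon's actual routines API):
# a routine pairs one spoken trigger with an ordered list of actions,
# which can now also include third-party skills.
routine = {
    "name": "Good morning",
    "trigger_phrase": "Alexa, start my day",
    "actions": [
        {"type": "smart_home", "command": "turn on the kitchen lights"},
        {"type": "skill", "target": "a third-party news briefing skill"},
        {"type": "music", "command": "play my morning playlist"},
    ],
}

# The assistant would run each action in order once the trigger is heard.
for action in routine["actions"]:
    print(f"{action['type']}: {action.get('command', action.get('target'))}")
```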
Amazon Echo owners shouldn’t have panicked when their favorite BBC radio stations didn’t work as planned last Monday morning. They shook off one more set of chains.
It says a lot about the voice industry that an independent “top ten” can only reliably be compiled from star ratings, not actual usage. Rating volumes can be solicited directly through marketing and in-skill voice rating requests (which Amazon can add and remove). https://t.co/zcPFSYH0Iq
Last week, Cerence formally became a separate company as a spin-off from publicly traded Nuance. The company has 1,300 employees, 700 of whom are focused on research and development. This is huge.
Or at least it is when you want to raise money on the stock market using the Voice opportunity.
Microsoft launched its Surface Earbuds. With 24-hour battery life and integration with the Office suite, they don’t come cheap: at $249 they are quite expensive now that Amazon has also launched its own offering. We’re curious to see where Microsoft will take the product.
Hey Google. HEY Google! HEY GOOGLE!!!!! Living room lamp on. Living room lamp ON! LAMP LIVING ROOM ON!!!!!!! LIVING ROOM LAMP ON!!!!!! LIVING ROOM!!!!!! LAMP!!!!!! ON!!!!!!!!!! GODDAMN GOOGLE, TURN THAT FUCKING LAMP ON!!!!!!!!!!
Researchers have successfully shrunk a giant language model for use in commercial applications. Not once, but twice. It’s a significant improvement over the BERT model Google presented a year ago. So what does this mean? It could bring offline assistant capabilities closer, likely also on smaller wearable devices.
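As a hedged illustration (the item above doesn’t name a specific model), DistilBERT is one publicly available distilled BERT variant. The sketch below, assuming the Hugging Face transformers library, shows how such a shrunken encoder can be loaded and run; models of this size are the kind of thing that could eventually power offline, on-device assistants.

```python
# Illustrative only: loading DistilBERT, a publicly available distilled BERT
# variant, via the Hugging Face transformers library. Smaller models like this
# are what could make offline, on-device assistants feasible.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

# Encode a typical smart-home utterance.
inputs = tokenizer("turn on the living room lamp", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token embeddings that a small on-device intent classifier could consume.
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```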
Here you have the usual Brian Roemmele post (it’s lengthy, and at points too speculative). What I like is how he makes the product launch by Amazon explicit with the far/mid/near field analogy. To me it felt like Amazon was going after the body, searching for a more intimate place to be available, like many wearables, but the near field framing makes a lot of sense. And also - thinking through these frames might help come up with better use cases for voice services.