At a recent press event on Alexa’s latest features, Alexa’s head scientist, Rohit Prasad, mentioned multistep requests in one shot, a capability that lets you ask Alexa to do multiple things at once. For example, you might say, “Alexa, add bananas, peanut butter, and paper towels to my shopping list.” Alexa should intelligently figure out that “peanut butter” and “paper towels” are two items, not four, and that “bananas” is a third.
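To see why this is harder than it looks, here is a toy sketch (my own illustration, not Amazon’s model). Splitting on commas and “and” handles the easy cases, but breaks compound names like “salt and vinegar chips”. The second function resolves that with a greedy longest match against a tiny hypothetical lexicon of known item names, a crude stand-in for what a learned chunker has to do:

```python
import re

def naive_split(phrase):
    # Split on commas and "and": works for simple lists, but wrongly
    # splits compound item names like "salt and vinegar chips".
    parts = re.split(r",|\band\b", phrase)
    return [p.strip() for p in parts if p.strip()]

# Hypothetical lexicon of known item names, just for illustration.
KNOWN_ITEMS = {"salt and vinegar chips", "peanut butter",
               "paper towels", "bananas"}

def lexicon_split(phrase):
    # Greedy longest match against the lexicon: prefer the longest
    # span of words that names a known item, then move past it.
    words = re.sub(r",", " ", phrase).split()
    items, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):
            candidate = " ".join(words[i:j])
            if candidate in KNOWN_ITEMS:
                items.append(candidate)
                i = j
                break
        else:
            i += 1  # skip connectives like "and"
    return items

print(naive_split("salt and vinegar chips and bananas"))
# ['salt', 'vinegar chips', 'bananas']  <- wrong
print(lexicon_split("salt and vinegar chips and bananas"))
# ['salt and vinegar chips', 'bananas']  <- right
```

Of course, a fixed lexicon does not scale to open-ended shopping lists, which is exactly why the problem calls for a trained model rather than rules.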
If you are curious about spoken language understanding, this article may interest you.
The Amazon Alexa team has been researching how to understand multiple items in a single utterance (like the example above). They recently published a paper at SLT (Spoken Language Technology) 2018 titled “Parsing Coordination for Spoken Language Understanding”, which you can find here.
I’m not an expert in this field, but I found the paper quite interesting, particularly how they adapted the model to incorporate domain-specific ontologies in a generic context. The field is still quite young, so I’m glad to see so much research in it; in the future, I believe it will greatly simplify our interaction with technology.