Determining whether current voice recognition engines use past conversations with the user

Looking for documents describing implementation details of Google Assistant, Siri or Alexa. Specifically, need to know whether the speech engines rely on past conversations with the user when interpreting user voice queries.

Shaun Vermaak, Technical Specialist IV, commented:
whether the speech engines rely on past conversations with the user when interpreting user voice queries.
No, they do not rely on the individual user's past conversations; the engines are trained on other sources of collected voice data.

In terms of picking up a new language, Acero explains that the process starts by bringing in real people who can speak the new language to read various paragraphs and word lists, spanning different dialects and accents.

You do get technologies such as Dragon NaturallySpeaking, whose accuracy increases with end-user voice training, but that is not the case with Google Assistant, Siri or Alexa.
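To make the point above concrete, here is a minimal sketch of what a typical cloud speech-to-text request looks like. The payload shape mirrors the Google Cloud Speech-to-Text v1 REST API (the `speech:recognize` method); field names are from memory and worth verifying against the current reference, but the key observation holds: the request carries only audio and static decoding configuration, with no reference to the user's conversation history.

```python
import base64
import json

def build_recognize_request(audio_bytes: bytes, language: str = "en-US") -> dict:
    # Shape modeled on the Google Cloud Speech-to-Text v1 REST API.
    # Note what is NOT here: no user identifier tied to conversation
    # history, no transcript of past queries -- only the audio content
    # and fixed decoding parameters.
    return {
        "config": {
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000,
            "languageCode": language,
        },
        "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
    }

req = build_recognize_request(b"\x00\x01" * 8000)
print(sorted(req.keys()))  # ['audio', 'config']
```

Each call is stateless from the caller's point of view: two requests with identical audio are interchangeable, regardless of who spoke or what was said before.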

Qian Bao, Digital Media Specialist and Web Designer, commented:
Do check out the public SDKs released by Amazon, Apple and Google for their respective voice assistants. Those docs may give you a clue about what's going on with their implementation.

Amazon Alexa SDK:
Google Assistant SDK:
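As a hedged illustration of what those SDK docs show: a custom Alexa skill never receives raw audio at all. The sketch below follows the general shape of the Alexa Skills Kit request JSON (field names from memory; verify against the current ASK reference). By the time a skill runs, speech recognition is already finished and the skill gets a resolved intent, with no API for reading the user's earlier conversations.

```python
import json

# Abbreviated Alexa skill request body (illustrative; follows the
# documented general shape for custom skills, details unverified).
request_body = {
    "session": {"user": {"userId": "amzn1.ask.account.EXAMPLE"}},
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "GetWeatherIntent", "slots": {}},
    },
}

# The skill sees a resolved intent name, never raw audio and never a
# transcript of the user's past queries.
print(request_body["request"]["intent"]["name"])  # GetWeatherIntent
```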

From the following article...

As virtual assistants, through machine learning, begin to understand emotion and state of mind through tone of voice and user behaviour patterns, suggestions, recommendations, and references will be less grounded to explicit intent by the user. A transition that will elevate the extent to which we will be marketing to machines rather than humans in the future.
cyber-33 (Author) commented:
Thank you!

Ri Ho commented:
None of the above provide voice recognition. Don't believe it? Have two different speakers ask, "Siri, what is my name?"
Shaun Vermaak, Technical Specialist IV, commented:
That is not what the OP was asking.
Ri Ho commented:
"when interpreting user voice queries"
What is a voice query. Does this mean 'find a matching voice' in order to identify the speaker; similar to 'use sql to perform a data query'.

It is not clear to me that any one on this site understands the tech distinction between voice and speech recognition. Hence the ambiguity in the request and my response.
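The distinction can be made concrete with a toy illustration (hypothetical data, not a real engine): the same audio clip can answer two different questions.

```python
SAMPLE = "clip-001"  # hypothetical identifier for one recorded utterance

# Speech recognition asks: WHAT was said?
TRANSCRIPTS = {"clip-001": "what is my name"}

# Voice (speaker) recognition asks: WHO said it?
SPEAKERS = {"clip-001": "alice"}

print(TRANSCRIPTS[SAMPLE])  # what is my name
print(SPEAKERS[SAMPLE])     # alice
```

The original question reads as being about the first (interpreting queries), while "Siri, what is my name?" tests the second.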

BTW: need your insight with SAPI.