With apologies to President Bill Pullman, let’s declare our independence from smart speakers. The voice interface, and the skills you’ve developed using it, are wonderful resources that are currently stuck in kitchens and dens all over the world. It’s time to free them to be carried with your users wherever they are, not wherever Amazon and Google allow them to be. Declare skill independence with Spokestack!
NLU-capable computers are everywhere — your smartphone, your smartwatch, your laptop, your TV — but nobody is building NLU-enabled voice assistants for them. Smart speakers, on the other hand, are essentially NLU-only skill/action platforms that artificially limit what your voice assistants can accomplish. Why should you have to choose between running free of platform restrictions and having an NLU? That’s why we built Spokestack. Spokestack runs your independent voice assistants on mobile platforms like Android and iOS and provides integrated ASR and TTS services, all in convenient, cross-platform, open-source libraries with a simple, consistent API. You can take your independent voice assistants with you as you walk away from the demolished alien mothership instead of leaving them stuck in the house on a smart speaker!
We built Spokestack when we realized that NLU, ASR, and TTS on mobile platforms were all siloed between service providers, none of which considered the developer experience of creating complete, independent voice assistants. On top of that, every service provider has a business model that incentivizes it to push everything to the cloud. So we built the Spokestack NLU service: state-of-the-art TensorFlow intent and slot classification models, familiar to Alexa and Dialogflow developers, that run entirely on the mobile device to deliver speedy, privacy-preserving results. Originally, Spokestack powered only our own multi-modal, cross-platform independent voice assistants, but since January we’ve been focused on creating a simple way for all developers — mobile, voice, smart speaker, and front-end — to make their own independent voice assistants for mobile platforms.
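To make the intent-and-slot idea concrete, here is a toy illustration in plain Python. It is not the Spokestack API — the function, intent name, and slot names are made up for demonstration — but it shows the shape of the structured result an intent/slot NLU produces from a raw utterance.

```python
# Illustrative only: a toy, rule-based stand-in for an intent/slot NLU.
# A real model (like Spokestack's on-device TensorFlow NLU) learns these
# mappings from training data; this hard-codes one pattern to show the
# shape of the output.

def classify(utterance: str) -> dict:
    """Return a hypothetical intent name plus extracted slot values."""
    tokens = utterance.lower().split()
    # Toy rule: "play <genre> in the <room>" -> PlayMusic intent
    if tokens[:1] == ["play"] and "in" in tokens:
        sep = tokens.index("in")
        return {
            "intent": "PlayMusic",
            "slots": {
                "genre": " ".join(tokens[1:sep]),
                "room": " ".join(tokens[sep + 1:]).removeprefix("the "),
            },
        }
    return {"intent": "Unknown", "slots": {}}

print(classify("Play jazz in the kitchen"))
# -> {'intent': 'PlayMusic', 'slots': {'genre': 'jazz', 'room': 'kitchen'}}
```

The point is that your app receives a structured intent plus slots it can act on directly, and with on-device classification, the raw utterance never has to leave the phone.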
Unlike smart speakers, independent voice assistants can take advantage of multiple interface modalities, letting your users get the job done with whatever interaction they find convenient. Visual, haptic, and now voice user interfaces are all accessible to apps using Spokestack.
What do you gain from that? You, not the smart speaker platform, control and learn from the NLU classification of user utterances. When you’re stuck on Alexa, do you get the raw utterances your users speak inside your skill, even the misclassified or misunderstood ones? With Spokestack, you control your data and your users’ data.
If you’ve ever read the EULA for smart speaker platforms, you know how rapacious they are with your data. With Spokestack, you keep the most useful bits of your data instead of feeding it (for free) to the FAANG.
To further that mission of creating a complete developer experience for voice apps, we’re excited to announce that you can now export your smart speaker skill to run on-device on the major mobile platforms and declare its independence from smart speakers!
You can still leverage existing ASR and TTS services, which are the parts that actually benefit from scale and are difficult to DIY. Spokestack provides a seamless, unified API across mobile platforms (iOS, Android, and React/React Native) that makes converting your skill into an independent voice app easy.
Finally, like President Bill Pullman, you must have the hubris to believe.
We are fighting for our right to live. To exist. And should we win…the day the world declared in one voice…Today we celebrate our Independence Day!
While it’s fun to cheekily reference a cheesy scifi movie to help illustrate what our technology company can help with, please take a bit of time to learn about celebrating Independence Day during the eras of Emancipation and Reconstruction!
Originally posted June 29, 2020