Many of us at Roger Wilco have been heads down on a really exciting project with our friends at RoboKind and our partners at IBM. We can’t get into too much detail about the end product and application yet, but the key integration we’re working on in the meantime lets anyone who knows how to work with Watson Conversation Service build conversational models there, with Milo acting as the conduit for the conversation.
The first conversational model we built was based on the RoboKind FAQ. We figured: what better ability to give Milo than the ability to explain himself?
More on the tech behind how we did it after the video.
So, for the geeks, the tech:
- We’re using a Raspberry Pi to host Node-RED; this is what the microphone is attached to. The final application for this demo will live in a potentially noisy place, so we’ll need to engineer the audio capture better than is possible with Milo’s on-board mic.
- The Node-RED flow takes the audio and sends it to Watson Speech to Text…
- … which sends the transcribed text on to Watson Conversation Service (a sketch of both calls appears after this list).
- Rebeca built a conversational model in Conversation Service out of the two R4A FAQs; it spits back the various responses.
- From there, the text is fed back to Milo via the internal API (a hypothetical sketch of that hand-off also follows the list).
- Milo comes with built-in Text to Speech models and his own custom voice.
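To make the middle of that flow concrete, here’s a minimal Python sketch of the two Watson calls. It talks to the REST endpoints directly rather than through Node-RED nodes, and the endpoint URLs, version date, and credential handling are assumptions based on the Watson APIs of this era, not a copy of our production flow:

```python
import requests

# Assumed era-appropriate Watson REST endpoints -- treat this whole
# sketch as illustrative, not as our exact Node-RED flow.
STT_URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
CONV_URL = ("https://gateway.watsonplatform.net/conversation/api"
            "/v1/workspaces/{workspace_id}/message")


def speech_to_text(wav_path, username, password):
    """Send a WAV clip to Watson Speech to Text and return the transcript."""
    with open(wav_path, "rb") as audio:
        resp = requests.post(
            STT_URL,
            auth=(username, password),
            headers={"Content-Type": "audio/wav"},
            data=audio,
        )
    resp.raise_for_status()
    results = resp.json()["results"]
    return results[0]["alternatives"][0]["transcript"] if results else ""


def ask_conversation(text, workspace_id, username, password, context=None):
    """Send the transcript to Watson Conversation Service; return its reply."""
    resp = requests.post(
        CONV_URL.format(workspace_id=workspace_id),
        auth=(username, password),
        params={"version": "2017-05-26"},  # assumed API version date
        json={"input": {"text": text}, "context": context or {}},
    )
    resp.raise_for_status()
    body = resp.json()
    # Conversation returns a list of output lines plus updated dialog
    # context, which you pass back in on the next turn.
    return " ".join(body["output"]["text"]), body["context"]
```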
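The last hop, handing the reply to Milo so his built-in Text to Speech can voice it, goes through RoboKind’s internal API, which isn’t public. Everything below (the endpoint, port, and JSON payload shape) is a hypothetical placeholder showing where that call would slot in:

```python
import requests

# Milo's internal API is not public: MILO_SAY_URL and the payload shape
# below are hypothetical placeholders, not RoboKind's real interface.
MILO_SAY_URL = "http://milo.local:8080/api/say"


def say_through_milo(reply_text):
    """Hand the Conversation reply to Milo; his built-in TTS voices it."""
    requests.post(MILO_SAY_URL, json={"text": reply_text}, timeout=5)


# Wiring the whole pipeline together (credentials and workspace ID are
# placeholders you would supply from your own Bluemix service instances):
transcript = speech_to_text("question.wav", STT_USER, STT_PASS)
reply, context = ask_conversation(transcript, WORKSPACE_ID, CONV_USER, CONV_PASS)
say_through_milo(reply)
```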
Stay tuned for more fun as we hone this flow even further and make it a more natural conversational experience.