mark.ie: A Very Simple PoC of Using Voice to Admin a Drupal Website


I was playing around with the SpeechRecognition API last night and thought, “wouldn’t it be cool if we could use voice to administer a website?”. So I put together this tiny proof-of-concept module for Drupal.

markconroy
Tue, 05/14/2019 – 12:10

Here’s a short video of it in action.

Ok, that looks pretty cool. Show me the code.

```javascript
window.SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.interimResults = true;

recognition.addEventListener('result', e => {
  // Stitch the recognised fragments together into one transcript string.
  const transcript = Array.from(e.results)
    .map(result => result[0].transcript)
    .join('');
  const statement = e.results[0][0].transcript;
  console.log(statement);
  if (statement === "voice admin new page") {
    window.location.href = "/node/add/page";
  } else if (statement === "voice admin new article") {
    window.location.href = "/node/add/article";
  } else if (statement === "voice admin log out") {
    window.location.href = "/user/logout";
  } else if (statement === "voice admin go home") {
    window.location.href = "/en";
  }
});

// When we stop talking, start the process again, so it'll record when we
// start talking again.
recognition.addEventListener('end', () => recognition.start());

recognition.start();
```

WTF? That code is crap. You have hard-coded what you want the site to do. That’s not going to work for all sites, only for your specific use case.

Yep, that’s true at the moment. Like I say, it’s just a proof of concept. We’d need a settings page where users can decide whether they want it enabled, and a better parsing system that listens for “voice admin” and only then starts recording. It would then need to patch together the fragments of speech that follow to construct the action you want to perform. After that, it would be really cool if, on the settings page, users could type in their own commands and the responses that SpeechRecognition should listen for and perform.
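As a rough idea of what that parsing step could look like, here’s a minimal sketch that replaces the hard-coded if/else chain with a configurable command map. Everything here is an assumption for illustration: the `COMMANDS` map, the `TRIGGER` phrase, and the `parseCommand` helper are hypothetical names, not part of the module.

```javascript
// Hypothetical command map: on a real settings page, users would
// define these phrase → path pairs themselves.
const COMMANDS = {
  'new page': '/node/add/page',
  'new article': '/node/add/article',
  'log out': '/user/logout',
  'go home': '/en',
};

// The trigger phrase that must come before any command.
const TRIGGER = 'voice admin';

// Return the target path for a spoken statement, or null if the
// statement doesn't start with the trigger phrase or isn't a
// recognised command.
function parseCommand(statement, commands = COMMANDS) {
  const normalised = statement.trim().toLowerCase();
  if (!normalised.startsWith(TRIGGER)) {
    return null;
  }
  const action = normalised.slice(TRIGGER.length).trim();
  return commands[action] || null;
}
```

Inside the `result` event listener, the redirect would then become something like `const path = parseCommand(statement); if (path) { window.location.href = path; }`, so adding a new command is just another entry in the map rather than another `else if` branch.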

I think all this is very possible, and probably not too much work. It would be great to see more progress on this.

If you’d like to create a pull request, the code is available on GitHub.