I'm curious about the current state of Web Speech API. This is a W3C specification that was implemented in both Firefox and Chrome and includes Automatic Speech Recognition.
There is a demo page at https://mdn.github.io/dom-examples/web-speech-api/speech-color-changer/
To use this in Firefox, I have to change two settings in about:config to True:
media.webspeech.recognition.enable
media.webspeech.recognition.force_enable
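For reference, the browser-side usage is the same regardless of which backend does the transcription. A minimal dictation sketch (Chrome exposes the constructor with the webkit prefix; Firefox needs the two prefs above enabled):

```javascript
// Minimal Web Speech API dictation sketch (browser-only).
// Chrome exposes the constructor as webkitSpeechRecognition;
// Firefox (with the about:config flags enabled) exposes SpeechRecognition.
const SpeechRecognition =
  typeof window !== "undefined"
    ? window.SpeechRecognition || window.webkitSpeechRecognition
    : undefined;

function startDictation(onText) {
  if (!SpeechRecognition) {
    throw new Error("Web Speech API not available in this environment");
  }
  const rec = new SpeechRecognition();
  rec.lang = "en-US";
  rec.continuous = false;      // stop after one utterance
  rec.interimResults = false;  // deliver only final results
  rec.onresult = (event) => onText(event.results[0][0].transcript);
  rec.onerror = (event) => console.error("Recognition error:", event.error);
  rec.start();
  return rec;
}

// Usage: startDictation((text) => console.log("Heard:", text));
```

Note there is nothing in this API surface that lets the page choose the recognition endpoint; that choice is entirely the browser's.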
There used to be a setting for pointing the actual STT processing at a specific endpoint, but it no longer shows up in my list of settings. I could add it manually, but I have no idea whether the browser would respect it or what the endpoint should look like. In any case, whatever endpoint Firefox currently uses no longer seems to work, so I just get an "Error occurred in recognition: network" error.
In Chrome it works, but it is most likely sending my raw audio to Google's cloud STT service for the actual transcription. I'd like to be able to control the endpoint so I can use a local service backed by VOSK, Whisper, or some other locally hosted STT engine.
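Short of patching the browser, the workaround I can see is to bypass the Web Speech API entirely: capture mic audio with getUserMedia/MediaRecorder and POST it to the local engine yourself. A rough sketch, where the URL and the JSON response shape are assumptions that would need to match whatever your local VOSK/Whisper wrapper actually exposes:

```javascript
// Sketch: capture mic audio and send it to a locally hosted STT service,
// skipping the browser's built-in recognition backend entirely.
// LOCAL_STT_URL and the {text: ...} response shape are hypothetical --
// adjust both to match your local VOSK/Whisper server.
const LOCAL_STT_URL = "http://localhost:8080/transcribe";

async function recordAndTranscribe(ms = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const stopped = new Promise((resolve) => (recorder.onstop = resolve));
  recorder.start();
  setTimeout(() => recorder.stop(), ms); // record for a fixed window
  await stopped;
  stream.getTracks().forEach((t) => t.stop()); // release the microphone

  const blob = new Blob(chunks, { type: recorder.mimeType });
  const resp = await fetch(LOCAL_STT_URL, { method: "POST", body: blob });
  const { text } = await resp.json(); // assumed response shape
  return text;
}
```

The obvious downside versus a real SpeechRecognition backend is that this records fixed-length blobs rather than streaming with endpoint detection, which is where a WebRTC-based approach could do better.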
I'm thinking about implementing something along those lines using WebRTC, but if anyone is already working on this, I'd rather join that project than start my own.
Is anyone working on this?