I'm confused by the different elements of HA's voice assistant sentences.
- What's the difference between a `conversation` and an `intent_script`? Per HA's custom sentence example, a `conversation` has an `intents` sub-element, and an `intent_script` doesn't. Does a `conversation`'s `intents` merely declare the element that will respond to the sentence, while an `intent_script` is purely the response (i.e., does an `intents` entry point to an `intent_script`)?
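For concreteness, here's the pairing I'm picturing - a minimal sketch based on my reading of the docs (the `YearOfVoice` name is from HA's example; whether the two sections really link up through the matching intent name is exactly what I'm asking):

```yaml
conversation:
  intents:
    YearOfVoice:
      - "how is the year of voice going"

intent_script:
  YearOfVoice:
    speech:
      text: "Great, over 40 languages and counting!"
```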
- HA then explains that while the example above defined the `conversation` and `intent_script` in `configuration.yaml`, you can also define `intents` in `config/custom_sentences/`. Should you use both of these methods simultaneously, or will that cause conflicts or degrade performance? I wouldn't think you should define the same sentence in both places, but the data structures in their two examples are different, so is one better than the other?
In `configuration.yaml`:

```yaml
conversation:
  intents:
    YearOfVoice:
      - "how is the year of voice going"
```
In `config/custom_sentences/en`:

```yaml
intents:
  SetVolume:
    data:
      - sentences:
          - "(set|change) {media_player} volume to {volume} [percent]"
          - "(set|change) [the] volume for {media_player} to {volume} [percent]"
```
- Then they say `responses` for existing intents can be customized as well in `config/custom_sentences/`. What's the difference between a `response` and an `intent_script`? It seems like `intent_script` can only be defined in `configuration.yaml` and `responses` can only be defined in `config/custom_sentences/` - is that right?
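If it helps, this is roughly what I understand a response customization in `config/custom_sentences/en/` to look like (my own sketch pieced together from the docs, so the exact layout may be off):

```yaml
language: "en"
responses:
  intents:
    YearOfVoice:
      default: "It's going great, thanks for asking!"
```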
Thanks for any clarification you can share.
First of all: it increases performance tremendously. For comparison:
So running it on a cheap 100€ used GPU can get you results where Alexa, Siri and Google have to respect you in terms of accuracy and speed. This is a gamechanger for me. I already installed 3 M5Stack ATOM ECHOs in my Home and more will soon come in. It's incredibly accurate and quick.
The important part is to pick the correct docker image. The default one that's available at rhasspy doesn't have GPU support.
Now, getting it running is actually pretty easy. First go to this link and download all the files. You have to build a custom docker image with those files. I have no idea how to do that with barebones Docker as I am using Portainer for everything. In Portainer you have to do:
Next you go:
That will spin up a docker-compose with the local custom image you just built, running faster-whisper, which speaks the Wyoming protocol used by Home Assistant and can run on an NVIDIA GPU with CUDA acceleration.
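For reference, a compose file along these lines should do it - this is a minimal sketch, so the image name, model choice, flags and data path are placeholders for whatever you built and picked:

```yaml
version: "3.8"
services:
  faster-whisper:
    image: wyoming-whisper-gpu:latest   # placeholder name for the custom image built above
    command: --model medium-int8 --language en --device cuda   # flags are illustrative, check the image docs
    ports:
      - "10300:10300"   # Wyoming protocol port that Home Assistant connects to
    volumes:
      - ./whisper-data:/data   # persist downloaded models on the host
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```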
As you can see in the docker-compose, it will expose port 10300. Next:
It will automagically know that it's whisper and will be fully integrated into your system. You can now add it into your voice assistant.
If you look at the logs of your new docker container you can see every voice command that is sent to your new whisper.
I finally got around to trying this. It's super easy and significantly improved response time. I will add that the last step is to configure the Voice Assistant you're using in Home Assistant to use the new entity you just added as the "Speech to Text" engine.
Thanks, @RandomLegend@lemmy.dbzer0.com!
Ah yes, that final step I forgot.
Awesome that it works for you!