this post was submitted on 22 Nov 2023

Home Assistant


Everything Home Assistant. Questions, projects, news, you name it.


I'm confused by the different elements of HA's voice assistant sentences.

  1. What's the difference between a conversation and an intent_script? Per HA's custom sentence example, a conversation has an intents sub-element, and an intent_script doesn't. Does a conversation's intents entry merely declare which intent will match the sentence, while the intent_script defines the response (i.e., does an intents entry point to an intent_script)?

  2. HA then explains that while the example above defines the conversation and intent_script in configuration.yaml, you can also define intents in config/custom_sentences/. Should you use both of these methods simultaneously, or will that cause conflicts or degrade performance? I wouldn't think you should define the same sentence in both places, but the data structures of their two examples are different - is one better than the other?

In configuration.yaml:

conversation:
  intents:
    YearOfVoice:
      - "how is the year of voice going"
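
If I understand the docs' example correctly, that conversation block gets paired with an intent_script of the same intent name, which defines the actual response - something like this (the speech text here is just my placeholder, not from the docs):

intent_script:
  YearOfVoice:
    speech:
      text: "It's going great!"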

In config/custom_sentences/en:

intents:
  SetVolume:
    data:
      - sentences:
          - "(set|change) {media_player} volume to {volume} [percent]"
          - "(set|change) [the] volume for {media_player} to {volume} [percent]"
  3. Then they say responses for existing intents can also be customized in config/custom_sentences/. What's the difference between a response and an intent_script? It seems like an intent_script can only be defined in configuration.yaml and responses can only be defined in config/custom_sentences/ - is that right?
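
If I'm reading the docs right, customizing a response looks something like this, in config/custom_sentences/en/responses.yaml (the intent name and wording here are just my example):

language: "en"
responses:
  intents:
    HassTurnOn:
      default: "Turned on {{ slots.name }}"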

Thanks for any clarification you can share.

[–] mike_wooskey@lemmy.d.thewooskeys.com 0 points 11 months ago (1 children)

What's involved in running whisper on a computer other than the Home Assistant computer? I'm guessing it's relatively easy to install, hopefully in Docker. How do you tell HA to use that whisper?

Also, it's a bit surprising that moving the voice recognition to a GPU on a (presumably) more powerful computer doesn't improve HA performance.

[–] RandomLegend@lemmy.dbzer0.com 0 points 11 months ago* (last edited 11 months ago) (1 children)

First of all: it increases performance tremendously. For comparison:

  • RPi 4B
      • tiny-int8 -- WER 40% -- processing time ~5s
      • base-int8 -- WER 70% -- processing time ~10s
      • medium-int8 -- impossible

  • HP EliteDesk 800 G5
      • tiny-int8 -- irrelevant
      • base-int8 -- WER 70% -- processing time ~2s
      • medium-int8 -- WER 95% -- processing time ~8s

  • External server with GTX 1660
      • medium-int8 -- WER 95% -- processing time ~0.5s

So running it on a cheap 100€ used GPU can get you results where Alexa, Siri and Google have to respect you in terms of accuracy and speed. This is a gamechanger for me. I've already installed 3 M5Stack ATOM ECHOs in my home and more will come soon. It's incredibly accurate and quick.

The important part is to pick the correct docker image. The default one that's available at rhasspy doesn't have GPU support.

Now, getting it running is actually pretty easy. First go to this link and download all the files. You have to build a custom Docker image with those files. I have no idea how to do that with barebones Docker as I am using Portainer for everything. In Portainer you have to:

  1. "Images" in the navigation menu
  2. "+ Build new image" at the right of the header of your images list
  3. Name it wyoming-whisper
  4. Copy and paste the content of the "Dockerfile" you downloaded earlier into the "Web Editor"
  5. Under "Upload", click "Select Files" and select the Makefile and run.sh
  6. Click "Build the image"

Next:

  1. "Stacks" in your navigation menu
  2. "+ Add stack" at the right side
  3. Give it a name (e.g. whisper)
  4. Copy the content of docker-compose.example.yml from the files you downloaded earlier into the web editor
  5. Click "Deploy the stack"

That will spin up a Docker Compose stack with the local custom image you just built, running faster-whisper, which is compatible with the Wyoming protocol in Home Assistant and can run on an NVIDIA GPU with CUDA acceleration.
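
For reference, the docker-compose.example.yml looks roughly like this (the exact model, paths and options may differ in the file you downloaded - treat this as a sketch and adapt it to your setup):

version: "3"
services:
  wyoming-whisper:
    image: wyoming-whisper   # the custom image you built above
    restart: unless-stopped
    ports:
      - "10300:10300"
    command: --model medium-int8 --language en
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]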

As you can see in the docker-compose, it will expose port 10300. Next:

  1. Go into Home Assistant
  2. Open Integrations
  3. Click on Wyoming
  4. Add a device
  5. Input the IP of your external GPU server and port 10300

It will automagically know that it's whisper and will be fully integrated into your system. You can now add it into your voice assistant.

If you look at the logs of your new docker container you can see every voice command that is sent to your new whisper.

[–] mike_wooskey@lemmy.d.thewooskeys.com 1 points 9 months ago (1 children)

I finally got around to trying this. It's super easy and significantly improved response time. I will add that the last step is to configure the Voice Assistant you're using in Home Assistant to use the new entity you just added as the "Speech to Text" engine.

Thanks, @RandomLegend@lemmy.dbzer0.com!

[–] RandomLegend@lemmy.dbzer0.com 1 points 9 months ago

Ah yes, I forgot that final step.

Awesome that it works for you!