I have input_text.event_1 with the value “birthday”, input_text.event_2 with the value “christmas”, input_date.event_1 with the value “1/1/2000”, and input_date.event_2 with the value “12/25/2024”. How do I configure the voice assistant to recognize a phrase like “what’s the date of birthday” and return “1/1/2000”?
I’m guessing there’s some combination of templating and “lists”, but there are too many variables for me to continue guessing: conversations, intents, sentences, slots, lists, wildcards, yaml files…
I’ve tried variations of this in multiple files:
```yaml
language: "en"
intents:
  WhatsTheDateOf:
    - "what's the date of {eventname}"
    data:
      - sentences:
          - "what's the date of {eventname}"
lists:
  eventname:
    wildcard: true
    - "{{ states('input_text.event_1') }}"
    - "{{ states('input_text.event_2') }}"
```
Should it be in conversations.yaml, intent_scripts.yaml, or a file in custom_sentences/en? Or does the “lists” section go in one file and “intents” in another? In the intent, do I need to define my sentence twice?
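For context, my best reading of the documented custom_sentences layout is something like the two snippets below, though I haven’t confirmed this works (the WhatsTheDateOf name and the in/out values are just my guesses at the structure). First, the sentence and list definitions:

```yaml
# custom_sentences/en/events.yaml (my guess at the layout, unverified)
language: "en"
intents:
  WhatsTheDateOf:
    data:
      - sentences:
          - "what's the date of {eventname}"
lists:
  eventname:
    values:
      - in: "birthday"
        out: "input_date.event_1"
      - in: "christmas"
        out: "input_date.event_2"
```

Then, the response to the intent, where the matched slot value should arrive as a template variable:

```yaml
# configuration.yaml (unverified)
intent_script:
  WhatsTheDateOf:
    speech:
      text: "{{ states(eventname) }}"
```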
I’d appreciate any help. I feel like once I see the YAML of a way that works, I’ll be able to examine it and understand how to make my own variations work in the future.
Thanks, @[email protected]. I didn’t know you could use special sentence syntax in automations. That’s pretty helpful because an action can be conditional, and I think you can even make them conditional based on which specific trigger fired the automation.
It still seems odd that I’d have to make a separate automation for each helper I want to address (or a separate automation condition for each), as opposed to giving the spoken command a “variable” and then using that variable to determine which input helper’s value to return. But if that’s possible, maybe it’s just beyond my skill level.
I can think of a couple of ways you could have it be one automation. The first is to have multiple triggers with different ids and use the choose action to select the response based on the trigger id.
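To sketch that first way (untested, using the input_date helpers from your first post):

```yaml
# One automation, multiple conversation triggers distinguished by id
trigger:
  - platform: conversation
    command: "what is the date of [my] birthday"
    id: birthday
  - platform: conversation
    command: "what is the date of christmas"
    id: christmas
condition: []
action:
  - choose:
      - conditions:
          - condition: trigger
            id: birthday
        sequence:
          - set_conversation_response: "{{ states('input_date.event_1') }}"
      - conditions:
          - condition: trigger
            id: christmas
        sequence:
          - set_conversation_response: "{{ states('input_date.event_2') }}"
```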
The other way, which I’m a bit less sure about, is passing the name of the input_date helper through to the response with a wildcard. You’d probably have to set {{ trigger.slots.event }} as a variable and match it to an alias or an entity_id.
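Something like this is what I have in mind for the second way, though I haven’t tried it (the event_map variable and its entries are just an illustration):

```yaml
# One wildcard trigger; map the spoken word to an entity_id in a template
trigger:
  - platform: conversation
    command: "what is the date of [my] {event}"
condition: []
action:
  - variables:
      event_map:
        birthday: input_date.event_1
        christmas: input_date.event_2
      # Look up the spoken word; entity is none if it isn't in the map
      entity: "{{ event_map.get(trigger.slots.event | lower | trim) }}"
  - set_conversation_response: >-
      {% if entity %}{{ states(entity) }}{% else %}I don't know when
      {{ trigger.slots.event }} is{% endif %}
```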
Yes, @[email protected], now that I know I can use sentence syntax in automations, I have built one automation to handle my specific needs. But each trigger is a hardcoded value instead of a “variable”. For example, trigger 1 is sentence = “what is the date of my birthday”, and I conditionally trigger an action to speak the value of input_date.event_1, because I know that’s where I stored the date for “my birthday”.

What would be awesome is your second suggestion: passing the name of the input_date helper through to the response with a wildcard. I can’t figure out how to do that. I’ve tried defining and using slots, but I just don’t understand the syntax. Which file do I define the slots in, and what is the syntax?
I did just check to see if you can pass along wildcards in an automation, which you can! I used this automation:
```yaml
alias: sentence test
description: ""
trigger:
  - platform: conversation
    command:
      - When is [my] {date}
condition: []
action:
  - set_conversation_response: curses, that damnable {{ trigger.slots.game }}
    enabled: false
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ 'birthday' in trigger.slots.date }}"
        sequence:
          - set_conversation_response: >-
              curses, that damnable {{ trigger.slots.date }}! It completely
              slipped my mind
      - conditions:
          - condition: template
            value_template: "{{ 'christmas' in trigger.slots.date }}"
        sequence:
          - set_conversation_response: sir you know when {{ trigger.slots.date }} is!
```
This should give you a framework to build on. It looks like when you don’t define a list of slots in an intent, it just passes the wildcard along in a slot.
That is HUGE! Thank you, @[email protected]! This makes customizing conversations from automations so much more powerful and flexible!
No problem! I’ve been puttering around trying to figure this out and this post gave me the push I needed lol
I’m embarrassed but very pleased that your example also taught me about set_conversation_response! I had been using tts.speak, which meant I had to define a specific media player, which wasn’t always what I wanted to do. This is great!