Releases: valentinfrlch/ha-llmvision
New Action: Data Analyzer
Changelog
Features
- Added a new action: `data_analyzer`. It provides a seamless connection between image input and Home Assistant sensors. `data_analyzer` can read the latest value from charts, gauges or any image and update a sensor value. Monitor how many cars are parked, add your old power meter to Home Assistant and much more! (See the sketch below for an example call.)
- Supports `number`, `text`, `boolean` and `select` sensors.
- See the docs: Data Analyzer Docs
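A minimal, hypothetical sketch of what calling the new action could look like; parameter names other than the action itself (e.g. `sensor_entity`) are assumptions for illustration, so check the Data Analyzer docs for the actual schema:

```yaml
# Hypothetical sketch: read an analog power meter from a camera image and
# write the value to a number helper. Parameter names such as sensor_entity
# are assumptions, not confirmed by this release note.
action: llmvision.data_analyzer   # `service:` on older Home Assistant versions
data:
  provider: 0123456789abcdef0123456789abcdef  # placeholder provider config entry
  image_entity:
    - camera.power_meter
  message: "Read the current consumption value shown on the meter."
  sensor_entity: input_number.power_meter_reading  # assumed name for the target sensor
```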
Blueprint
Tip
The blueprint needs to be re-imported to update! Your automations will remain.
If you get an error message after the update, select the devices you wish to be notified on again and save the automation.
- The blueprint now supports multiple devices!
- When `important` is set to true, critical notifications will now be delivered even if you have Do Not Disturb on. This will only be used for extremely important events like suspicious activity.
Bug fixes
- Blueprint: Fixed a bug where the live preview was shown even though snapshot mode was selected.
- Changed default event title from "Nothing seen" to "Unknown object seen".
Beta 3 - Data Analyzer
Changelog
Features
- Added a new action: `data_analyzer`. It provides a seamless connection between image input and Home Assistant sensors. `data_analyzer` can read the latest value from charts, gauges or any image and update a sensor value. Monitor how many cars are parked, add your old power meter to Home Assistant and much more!
- Supports `number`, `text`, `boolean` and `select` sensors.
- See the docs: Data Analyzer Docs
- The blueprint now supports multiple devices! Send event notifications to all household members.
- When `important` is set to true, critical notifications will now be delivered even if you have Do Not Disturb on. This will only be used for extremely important events like suspicious activity.
Bug fixes
- Blueprint: Fixed a bug where the live preview was shown even though snapshot mode was selected.
- Changed default event title from "Nothing seen" to "Unknown object seen".
Event Memory
Changelog
Tip
Make sure to reimport the blueprint to get the latest features!
Blueprint
- `frigate_url` is no longer required. Make sure you have the Frigate Integration for Home Assistant installed.
- You can now choose between snapshot and live preview for 'Camera' mode.
- Camera entities now also apply to the 'Frigate' mode. You will only get notifications for camera entities specified. (This will not work correctly if you've changed the entity_id of your cameras)
Features
- LLM Vision can now remember events! Set up 'Event Calendar' in the LLM Vision integration. This will expose remembered events as a calendar. You can now even ask Assist things like 'Has my delivery arrived yet?' or 'Has the garbage been collected already?'. See the wiki for more information.
- #75 `expose_images`: saves analyzed images in /www/llmvision so they can be used for automations more easily.
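For illustration, a minimal sketch of enabling the new parameter on an analyzer call; everything except `expose_images` is a placeholder:

```yaml
# Sketch only: expose the analyzed frames under /www/llmvision so a follow-up
# notification can attach them. The other parameters are placeholders.
action: llmvision.image_analyzer
data:
  provider: 0123456789abcdef0123456789abcdef  # placeholder provider config entry
  image_entity:
    - camera.front_door
  message: "Describe what is happening at the front door."
  expose_images: true  # saves analyzed images to /www/llmvision
```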
Bug fixes
- Fixed a blocking call in the video pipeline
- #76 Fixes a bug where a custom endpoint would be ignored for custom OpenAI compatible providers.
Thanks to @helicalchris and @smkrv for testing!
v1.3 Remember Events
Changelog
Blueprint
- You can now choose between snapshot and live preview for 'Camera' mode.
- Camera entities now also apply to the 'Frigate' mode. You will only get notifications for camera entities specified. (This will not work correctly if you've changed the entity_id of your cameras)
Features
- LLM Vision can now remember events! Set up 'Event Calendar' in the LLM Vision integration. This will expose remembered events as a calendar. You can now even ask Assist things like 'When was the package delivered?' or 'Has the garbage been collected already?'.
- #75 `expose_images`: saves analyzed images in /www/llmvision so they can be used for automations more easily.
Bug fixes
- Fixed a blocking call in the video pipeline
- #76 Fixes a bug where a custom endpoint would be ignored for custom OpenAI compatible providers.
v1.3-beta.1
Changelog
Blueprint
- You can now choose between snapshot and live preview for 'Camera' mode.
- Camera entities now also apply to the 'Frigate' mode. You will only get notifications for camera entities specified. (This will not work correctly if you've changed the entity_id of your cameras)
Features
- LLM Vision can now remember events! Set up 'Event Calendar' in the LLM Vision integration. This will expose remembered events as a calendar. You can now even ask Assist things like 'When was the package delivered?' or 'Has the garbage been collected already?'.
- #75 `expose_images`: saves analyzed images in /www/llmvision so they can be used for automations more easily.
Bug fixes
- Fixed a blocking call in the video pipeline
v1.2.1 Better Video Preprocessing
Changelog
Blueprint
- Added option to only send notifications for important events using AI
Features
- `video_analyzer` and `stream_analyzer` now have a `max_frames` parameter. The most relevant frames will then be picked accordingly. This replaces the `interval` parameter, which will be deprecated in the next release. Make sure you update your automations before then!
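A hedged sketch of switching an existing `video_analyzer` call from `interval` to `max_frames`; the surrounding parameter names (e.g. `video_file`) are assumptions:

```yaml
# Sketch: let the integration pick the five most relevant frames instead of
# sampling at a fixed interval. Only max_frames is taken from this changelog;
# the other parameter names are assumed for illustration.
action: llmvision.video_analyzer
data:
  provider: 0123456789abcdef0123456789abcdef  # placeholder provider config entry
  video_file: /media/recordings/driveway_event.mp4  # assumed parameter name
  message: "Summarize what happens in this clip."
  max_frames: 5  # replaces the soon-to-be-deprecated interval parameter
```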
Bug fixes
- If an image fetch fails, the fetch will be retried. This could occur when the camera stream was not preloaded.
- Fixes a blocking call in video_analyzer
- Fixes a bug that would resize images even when `target_width` wasn't set.
- #72 Fixed max_tokens for Ollama
- Removed an error log that was triggered when a temporary directory could not be deleted because it did not exist.
- Removed unnecessary info level logging
Deprecations
- The `interval` parameter will be removed in v1.2.2.
v1.2.0 - Stream Analyzer and new Provider Configurations
⚠️ Breaking Changes
⚠️ Unfortunately, due to the changes in how provider configurations are stored, providers may have to be set up again!
- `provider` now takes a config entry instead of a string. Use the UI to pick your provider!
- `include_filename` is now a required parameter. Make sure you include it in all your scripts and automations!
Changelog
Features
- v1.2 adds `stream_analyzer`: it records for a set duration and analyzes frames at a given interval (much like `video_analyzer`). This is faster as it avoids writing the files to disk and reading them again.
- The setup has been rewritten: you can now have multiple configurations for each provider, and a restart is no longer necessary to delete configurations. This could be useful if you host multiple models on different servers (e.g. a small LLM on a Raspberry Pi and a larger model on a PC).
⚠️ Not all provider configurations will migrate automatically, so you will have to do the setup again.
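To make the changes concrete, here is a rough sketch of a `stream_analyzer` call under the new scheme; `duration` and the other surrounding names are assumptions, while the config-entry `provider` and the required `include_filename` come from this release:

```yaml
# Sketch: record the doorbell stream briefly and analyze the frames.
# duration is an assumed parameter name; provider as a config entry and the
# now-required include_filename are described in this release's notes.
action: llmvision.stream_analyzer
data:
  provider: 0123456789abcdef0123456789abcdef  # config entry picked via the UI
  image_entity:
    - camera.doorbell
  duration: 30  # assumed: seconds to record before analysis
  message: "Who or what is at the door?"
  include_filename: false  # now a required parameter in every call
```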
Bug fixes & Other small improvements
- The action call UI has been updated with better descriptions and examples.
- Config entries are now correctly unloaded when removing them, a restart is no longer necessary.
- Default values have been updated to provide better results
New Provider: Groq
Changelog
The default model for Ollama has been changed from `llava` to the more efficient `llava-phi3` model. You will need to download this model to use it. You can still use `llava` by setting the `model` parameter.
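If you want to keep using `llava`, a call like the following sketch should work; parameter names besides `model` are placeholders, and note that in this release `provider` was still a plain string (it becomes a config entry in v1.2.0):

```yaml
# Sketch: override the new llava-phi3 default and keep using llava.
# Only the model parameter is taken from this note; the rest is illustrative.
action: llmvision.image_analyzer
data:
  provider: Ollama  # still a string here; a config entry from v1.2.0 onwards
  model: llava      # override the new llava-phi3 default
  image_entity:
    - camera.backyard
  message: "What do you see?"
```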
Features
- #65 Support for Groq (`image_analyzer` only), thanks to @walloutlet
Bug fixes
- Fixed error parsing for multiple error messages
v1.1.1 Animated GIF support
Changelog
Features
- #54 This release adds support for animated GIFs through `video_analyzer`. Previously, only static GIFs were supported or only the first frame was used.