{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":276242473,"defaultBranch":"master","name":"bi-aidetection","ownerLogin":"VorlonCD","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2020-07-01T00:57:59.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/7512113?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1670511712.3659","currentOid":""},"activityList":{"items":[{"before":"0eb7d17c89603689a43ab6779ea574cec68baa0b","after":"3e87ba0eed59872ea3c0b68dec07fe8b7a44abc2","ref":"refs/heads/master","pushedAt":"2024-06-01T22:55:29.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Support CPAI Mesh / Queueing! Settings > AI SERVERS > Edit server > \"Allow AI Server based queue\". Because CodeProject.AI manages its own queue it can handle concurrent requests (Unlike Deepsack), we ignore the fact that it is \"in use\" and just keep giving it as many images as we get to process. So far this actually seems to work really well. It should prevent some cases of the default 100 image queue error from happening. Note: When you enable this it will be more rare that a server OTHER THAN THE FIRST is used. If you still want other AI servers to be used by AITOOL there are a few things you can do:\n\n1) Reduce AI SERVER > Edit URL > Max AI server queue length setting. CPAI defaults to 1024, so if, for example, you dropped that down to 4, it would only try the next server in line when the queue was above 4. You will have to test in your environment to see if this makes sense as it may not.\n2) Reduce 'AI Server Queue Seconds larger than'. If its queue gets too high you can force it to go to the next AITOOLS server in the list.\n3) Reduce 'Skip if AITOOL Img Queue Larger Than' setting. 
If the AITOOL image queue is larger than this value, and the AI server has at least 1 item in its queue, skip to the next server to give it a chance to help lower the queue.\n4) In AITOOL > Settings, enable the \"queued\" checkbox. This way AITOOL will take turns and always use the server that was used the longest ago. This may not be ideal if some of the servers are much slower than others.\n\nTip: In the CPAI settings web page, enable MESH and make sure it can talk to the other servers you may have configured. (All have to be on the same network with open/forwarded UDP ports; docker-to-docker-to-physical instances may take some work to get to see each other.) This way, CPAI will do the work of offloading to the next server in line!\n\nTip: For faster queue processing, enable as many modules as you can (YOLOv5 6.2, YOLOv5.NET, YOLOv8, etc.). It will help spread the workload out, so in some cases you don't even need more than one CPAI server.\n\nTip: If you use IPCAM Animal and a few others as 'linked servers', you will get errors if you have anything other than YOLOv5 6.2 enabled, because those models have not been built for the other modules yet. I haven't found a good way around this yet.\n\nTip: If the MESH cannot see DOCKER or VM based instances of CPAI servers, edit your C:\\ProgramData\\CodeProject\\AI\\serversettings.json file and manually add the servers it cannot automatically find. For example:\n\n\"KnownMeshHostnames\": [ \"prox-docker\", \"pihole\"],\n\n* Some new columns in the Edit AI URL screen related to queue time, min, max, etc.: AIQueueLength, AIQueueLengthCalcs, AIQueueTimeCalcs, etc. Some other regular AITOOL stats may not be as accurate when you enable 'Allow server based queue'\n* Update setup to only check for .NET 8 rather than 6\n* Implement a new, easier-to-use version of the Threadsafe classes. 
This should also shrink the JSON settings file a bit and make the code easier to read.\n* If you enable 'Ignore if offline' for a CPAI server that is running in mesh mode and mesh returns an error (i.e. the mesh computer was turned off, for example), you will not see an error.\n* Fixed a bug where, when using linked servers, there could be duplicate or disabled URLs in the list, slowing down the overall response time.\n* Gotham City's corruption problem is still a work in progress. I'm Batman.","shortMessageHtmlLink":"* Support CPAI Mesh / Queueing! Settings > AI SERVERS > Edit server >…"}},{"before":"a073f46a131a5081a6268b914dfbc9948c82aa04","after":"0eb7d17c89603689a43ab6779ea574cec68baa0b","ref":"refs/heads/master","pushedAt":"2024-03-31T14:44:20.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Significantly reduced memory usage. It turns out I've always been mistakenly caching every single image processed in memory for at least an HOUR, which can be a large amount of RAM with multiple cameras, 4K, etc. https://github.com/VorlonCD/bi-aidetection/issues/345","shortMessageHtmlLink":"* Significantly reduced memory usage. It turns out I've always been m…"}},{"before":"e7f9e6efdba55587cddc086553aa917aa5df7a2b","after":"a073f46a131a5081a6268b914dfbc9948c82aa04","ref":"refs/heads/master","pushedAt":"2024-03-09T22:52:25.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Moved to .NET 8. This should perhaps speed things up here and there. 
You may need to install from here: https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/runtime-desktop-8.0.1-windows-x64-installer\n* Fixed issue where an AI Server may be stuck \"InUse\" if you close AITOOL while it is in use.\n* The next few items can help reduce annoying or unnecessary errors sent to Telegram:\n* Fix Blue Iris HttpClient trigger timeout error in cases where multiple triggers have been made in 'queued' mode. (Prevent concurrent trigger calls to the same servername/port; the trigger call to Blue Iris will be skipped if it is currently already working on a trigger. Why does it take so long? Sometimes it takes over a minute to get done triggering?)\n* HTTP Trigger timeout defaults to 120 seconds rather than 55\n* Move trigger URL calls to after sound and a few other things, for less delay with other notifications such as sound when running in a home office (Blue Iris can take forever to finish its HTTP trigger, so we don't wait for it now)\n* IgnoreConnectionError: JSON setting to ignore errors (not send them via Telegram, etc.) if we can't even get a ping response from the server. This might be useful when a computer is only on certain hours of the day, without setting a specific URL time range.\n* Checkbox in URL edit screen \"Ignore if offline\". When this is enabled, it will ping the server first, and if the ping fails it will silently ignore and skip the URL. This is useful when a server is a machine that may go to sleep and the time schedule is not consistent enough. In AI Servers list > LastSkippedReason column, you will see \"NotOnline\" when this happens. 
LastSkippedReason shows most recent reasons first.\n* AI Server list tweaked to provide more debug info about why a server was not used, and made more threadsafe\n* AI Server list: double-click an AI server to edit it\n* AI Server list: up/down now keeps the selection\n* AI Server list now correctly refreshes every 1 second so you can see it working live\n* CodeProject.AI now sometimes returns a \"the request timed out\" error. If you disable the \"Error\" checkbox when editing a SERVER, it will prevent this from being an error that is sent via Telegram, etc. (Because I was starting to find it annoying.) I think it happened when a custom IPCAM model and the regular model were being accessed at the same time.\n* We used to look for Debug:, error:, warn:, etc. anywhere within the log line. Now it's only within the first 6 chars, not including dots, dashes, or spaces. This should keep less serious things from being sent to Telegram, etc.\n* Fix to fully respect unchecking AUTO ADD on the deepstack tab https://github.com/VorlonCD/bi-aidetection/issues/334\n* Made the SQL database history connection a little more reliable. If it gets an error on the initial connection it will try to revert to a backup copy. 
If that fails, it will delete the database and recreate it.\n* Exit option on the right-click tray icon menu\n* New AI icon generated by ChatGPT (Of course!)\n* For refinement servers, you can now use 'Animal', 'Person', 'people', or 'Vehicle' in addition to actual object names\n* Pause and resume options on the right-click tray icon menu\n* Better error checking when trying to activate the Blue Iris window as non-admin\n* Fix crash on camera properties if you don't have a camera selected\n* Fix triggering object list sometimes being empty\n* Fix ObjectListView issue: .NET 7/8 changed virtualListSize to _virtualListSize\n* Code cleanup, update to latest NuGet packages, fixing a few security issues\n* Added more return properties for the CodeProject.AI response (not used yet)\n* Added Action time min/max ms to status bar\n* Is it worth trying to integrate other cloud vision AI tools when we have a decent local CodeProject.AI now? \"Google.Cloud.Vision\", \"Azure AI Vision\", \"Imagga\", etc.?","shortMessageHtmlLink":"* Moved to .NET 8. This should perhaps speed things up here and there…"}},{"before":"6101905cbf5b341490feb34fcea48fa9ebb7a929","after":"e7f9e6efdba55587cddc086553aa917aa5df7a2b","ref":"refs/heads/master","pushedAt":"2023-06-07T21:38:21.373Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Tweak to how I get the local IP address list\n* Force all enabled relevant objects to the top of the list\n* Change default object list for new cameras (No longer contains Meat Popsicle :) )\n* Fix Save button on Prediction Tolerances form - https://github.com/VorlonCD/bi-aidetection/issues/333\n* Checkboxes in AI Server list for easy enable/disable.\n* AI Server list now has all enabled servers first, disabled last, and a few columns were reordered\n* Prediction Details screen now works correctly when you multi-select more than one item from the history list.","shortMessageHtmlLink":"* 
Tweak to how I get the local IP address list"}},{"before":"287b7a74e6ba54a10ee03c3f9d6ba0383ade3fc5","after":"6101905cbf5b341490feb34fcea48fa9ebb7a929","ref":"refs/heads/master","pushedAt":"2023-06-05T13:44:50.306Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Fix Telegram pause/resume message\n* Added Ambulance as a known vehicle","shortMessageHtmlLink":"* Fix Telegram pause/resume message"}},{"before":"11cd8bacd134e449a745f428dece44159be2dcfd","after":"287b7a74e6ba54a10ee03c3f9d6ba0383ade3fc5","ref":"refs/heads/master","pushedAt":"2023-06-04T15:22:15.726Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Fix error message when no CodeProject.AI ALPR plates are found","shortMessageHtmlLink":"* Fix error message when no CodeProject.AI ALPR plates are found"}},{"before":"6966394cde39ea917c0c78932cf160c7b0be0f49","after":"11cd8bacd134e449a745f428dece44159be2dcfd","ref":"refs/heads/master","pushedAt":"2023-06-03T23:03:02.022Z","pushType":"push","commitsCount":1,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"* Support adding all CodeProject.AI models\n* Fix CodeProject.AI ALPR plate detection so it actually shows the plate number found. (\"Sighthound_vehicle\" seems to work much better than CP ALPR though??) 
https://github.com/VorlonCD/bi-aidetection/discussions/332\n* Added name property to URL servers","shortMessageHtmlLink":"* Support adding all CodeProject.AI models"}},{"before":"f6f168b7db877bdd01d769128fa7802904ad7a30","after":"6966394cde39ea917c0c78932cf160c7b0be0f49","ref":"refs/heads/master","pushedAt":"2023-05-29T22:13:53.926Z","pushType":"push","commitsCount":3,"pusher":{"login":"VorlonCD","name":null,"path":"/VorlonCD","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7512113?s=80&v=4"},"commit":{"message":"Minor","shortMessageHtmlLink":"Minor"}}],"hasNextPage":false,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"Y3Vyc29yOnYyOpK7MjAyNC0wNi0wMVQyMjo1NToyOS4wMDAwMDBazwAAAARZ7O5h","startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wNi0wMVQyMjo1NToyOS4wMDAwMDBazwAAAARZ7O5h","endCursor":"Y3Vyc29yOnYyOpK7MjAyMy0wNS0yOVQyMjoxMzo1My45MjY1MDFazwAAAAM2iRux"}},"title":"Activity · VorlonCD/bi-aidetection"}