The Ultimate Home Surveillance System – Free Local AI Person Detection
Today on The Hookup I'm going to show you the inner workings of my locally controlled, non-cloud-based surveillance system, including free AI person detection and 24/7 secondary-resolution recording. Even if you're not interested in the ultimate surveillance system, stick around for some crazy Home Assistant tricks that 99% of users don't know about.
You've got a ton of options when it comes to video surveillance: dedicated NVR packages, IP cameras with on-device recording, user-friendly wireless cameras, and many more. Every system has its merits and use cases, but one of the questions I get asked most often on my channel is "What's the BEST solution for video surveillance?" So today I'm going to show you my system, and then I'm going to show you how you can do it yourself.
Here it is: my PC-based Blue Iris surveillance system that records all 9 of my outdoor cameras 24 hours a day, 7 days a week at low 640×480 resolution. Locally processed AI computer vision checks each standard motion event for relevant objects. Personally, I use it to detect people and cars, but you might have a use for some of the other available detection objects like dogs, cats, or even bears. When a relevant object is detected, the system starts recording at full 4K resolution and sends an MQTT message to Home Assistant to be used in automations. Using this system I can sort through my alerts without dealing with a bunch of false motion events like moving shadows and blowing trees; every clip corresponds to a positive AI detection event, which I've found to be over 98% accurate, meaning fewer than 1 out of 50 events is a false trigger. With this method my 4-terabyte surveillance-grade hard drive gives me over 30 days of 24/7 recording at 640×480, with all relevant motion recorded at 4K. In contrast, recording 24/7 in 4K would fill a 4-terabyte hard drive in less than 5 days and significantly shorten the lifespan of the drive.
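A quick sanity check on those storage numbers. The per-camera bitrates below are assumptions for illustration (roughly 8 Mbps for a 4K H.265 main stream and roughly 0.5 Mbps for a 640×480 substream; your cameras will vary), but the arithmetic shows why the two recording strategies differ so much:

```python
def days_of_storage(drive_tb: float, cameras: int, mbps_per_camera: float) -> float:
    """Days of 24/7 recording a drive holds at a given per-camera bitrate."""
    # Convert megabits per second to bytes per day across all cameras.
    bytes_per_day = cameras * (mbps_per_camera * 1e6 / 8) * 86_400
    return drive_tb * 1e12 / bytes_per_day

# 9 cameras recording 4K around the clock on a 4 TB drive:
print(round(days_of_storage(4, 9, 8.0), 1))
# The same 9 cameras at the 640x480 substream bitrate:
print(round(days_of_storage(4, 9, 0.5), 1))
```

With these assumed bitrates, the 4K case comes out to roughly 5 days while the substream case comes out to well over 30, which lines up with the behavior described above.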
One of my favorite parts of a Blue Iris system is the excellent web-based interface that lets you monitor your cameras and view recorded footage right in Chrome without any plugins or special settings. Blue Iris UI3 is light on resources and is hands down the best way to remotely monitor your security system from a PC. I actually prefer the web interface to the full Blue Iris program for monitoring and reviewing footage; I only open the main Blue Iris program to change settings or add cameras. In the Blue Iris UI3 interface I can view high resolution live feeds, review all the AI motion events from a specific camera, or quickly scrub through the lower resolution 24/7 recording. Each motion event generates a thumbnail preview just by hovering over it, making it by far the easiest way to review security footage. I also love the server status system monitor that tells me the CPU and memory usage of my server.
Because the motion alerts are around 98% accurate, I can reliably use them in automations, like sending notifications if there's a person at the door or someone is in the back yard while we're not home. I use AI person detection to turn the patio light on and off reliably, and I even have a text-to-speech event that warns trespassers that they are being recorded, and that I have been notified of their presence, if they are detected in the back yard while the house alarm is armed.
The best part is that all of this is locally controlled and hosted, 100% functional without an internet connection, and doesn't rely on any cloud services.
Sound interesting? Here’s how it works.
Before we get started I have to address the elephant in the room: the biggest hurdle for many people with this system is not price or technical ability, but the fact that Blue Iris is a Windows-based program and cannot run on Linux. I'd like to be able to tell you that a Linux version is on its way, but that's just not the case; Blue Iris is, and will remain, Windows-only for the foreseeable future.
I personally run my Blue Iris system on the same computer that handles my Plex server and hosts the virtual machine where I run Home Assistant: a 6th-generation Intel Core i7 with 32 gigabytes of RAM, two 4-terabyte spinning hard drives, and a 256-gigabyte solid state boot drive. Mine was a hand-me-down, but you could build something similar for around $400-500 using eBay or Facebook Marketplace.
For reference, here's what the resource usage looks like running two transcoded Plex streams, the Home Assistant VM, and AI person detection on all 9 of my cameras.
The other common complaint about Blue Iris relates to feature creep, which makes the number of settings and options completely overwhelming for a new user. I'm not going to cover every setting in this video, but I'll hit the important ones. To that end, the most important setting, and the one that will make or break your Blue Iris experience, is something called direct-to-disc recording. This means that Blue Iris takes the exact RTSP stream from your camera and records it without modifying the resolution, bitrate, or frame rate, so your system doesn't waste any processing power re-encoding the video stream.
If you don't use this setting, your computer will absolutely not be able to handle 9 camera streams. But I mentioned that we need access to both a low resolution stream for 24/7 recording and a high resolution stream for motion events, so how is that possible without re-encoding?
Well, almost all security cameras can output both a main stream and a secondary substream, so we're going to use the main stream for our high resolution recordings and the substream for our 24/7 recordings. Every camera sets this up a little differently, but here are the settings I use on my Annke C800, which I chose as the best all-around 4K camera.
After logging in to the camera's web interface, click Video/Audio and select main stream from the stream type dropdown. For this stream I want the maximum resolution at 15 frames per second with a variable bitrate, and I'm going to use H.265+ encoding, which results in much smaller files. Save that profile, then select sub-stream in the dropdown. For this stream I want 640×480 resolution, 15 frames per second, and a variable bitrate at only medium quality, with H.265 encoding. The low resolution stream produces very small files, which is great for both 24/7 recording and AI computer vision, which can sometimes struggle with larger images.
The last thing to do is a bit of googling to find your specific camera manufacturer's RTSP URLs for the main stream and substream, which all differ slightly. For instance, these are the URLs for Annke cameras.
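As an illustration only (verify against your manufacturer's documentation), many Hikvision-derived cameras, Annke included, follow a URL pattern like the one sketched below, where channel 101 is typically the main stream and 102 the substream. The credentials and IP address here are placeholders:

```python
def rtsp_url(user: str, pw: str, ip: str, channel: int) -> str:
    """Build a Hikvision-style RTSP URL; channel 101 = main, 102 = substream."""
    return f"rtsp://{user}:{pw}@{ip}:554/Streaming/Channels/{channel}"

main_stream = rtsp_url("admin", "hunter2", "192.168.1.50", 101)
sub_stream = rtsp_url("admin", "hunter2", "192.168.1.50", 102)
print(main_stream)
print(sub_stream)
```

You can paste a URL built this way into VLC's "open network stream" dialog to confirm it works before giving it to Blue Iris.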
With our feeds correctly set up, we can add them to Blue Iris and configure them for computer vision.
And by the way, I'm not taking any credit for this computer vision stuff; the ipcamtalk user GentlePumpkin is the mastermind behind all of this. A link to his awesome how-to post is in the description.
As I mentioned, starting out in Blue Iris can be overwhelming, so let's cover some of the most basic settings first. Go to main menu, then general settings. In the clips and archiving tab, at the very least you need to set up a new footage folder to store your video files. You should limit the total amount of drive space that can be occupied by footage, and when that limit is reached you can choose to delete clips or archive them in a different location, like a NAS drive, using the stored folder. For this setup you'll also need a folder for still images to be used with computer vision, so select Aux 7, rename it aiinput, and give it a small size limit like 1 gigabyte, since it will only contain low resolution images.
While we're in setup, click over to the web server tab. Here you need to decide which port to use for your Blue Iris web server. I recommend changing it from port 80, which is often used by other services; I personally use port 81. Check the box that says "Use UI3 for non-IE browsers", which enables the awesome web-based UI, and make sure the correct IP address is listed for local internal LAN access. If you want Blue Iris to be accessible from outside your network you can also set up remote access, but the most secure approach is to skip external access and instead use a VPN to simulate local access when you're away. Last, hit advanced and set authentication to be required from non-LAN connections only, meaning that if you log into Blue Iris from within your network you won't need to enter a username and password. Also deselect "use secure session keys and login page" and hit okay.
Last, click on users and create a new user with a secure password and admin privileges to use with your computer vision system.
The next step is to add a camera. The first camera we're going to add is our low resolution 24/7 recording substream camera. Click main menu, then add new camera. On the next screen give the camera a descriptive name; I'm going to call mine annkesd. Select "network IP" as the camera type and check "enable motion detector" and "Direct to disc recording". Hit okay and the camera configuration window will appear. This is where you'll put the URL for the camera's substream RTSP feed, along with the camera's username and password. I'm not 100% sure what the setting does, but I like to check the box that says "Limit decoding unless required", because that sounds like a pretty good thing to do!
In the general window you can assign this camera to as many groups as you’d like. I recommend at least adding it to a group called “substreams” that you can use to comb through any of the 24/7 recordings that you need to.
In the trigger window, click the configure button next to motion sensor and increase the detection sensitivity; we want to err on the side of too much detection, since the computer vision will be checking every event anyway. Deselect the "Capture an alert list image" box and set the break time to 4 seconds. In the record tab, select continuous from the dropdown and leave the folder as new. Also check the box that says "JPEG snapshot each", change the time period to 5 seconds and the quality to 100%, select your aiinput folder, and then check the "only when triggered" box. This will take a new still image every 5 seconds while motion is detected and save it into the computer vision folder for analysis. I like to have my continuous recording divided into 1-hour files, but that's really up to you. The last thing to do on this page is to deselect the box that includes the JPEGs in the all clips timeline.
Last, head over to the alerts tab and select "Never" from the dropdown box, and you're done with your 24/7 standard definition recording stream with JPEG capture on motion. You'll repeat this exact process for each of your cameras. Luckily, when you create a new camera Blue Iris lets you copy the settings of an existing camera, which significantly shortens the process, since you'll only need to update the camera stream URL and camera name.
To set up the full resolution cameras, go to main menu, then add camera. Use the same name as before, but with hd instead of sd, and this time select only direct to disc, not motion detection. For this camera you'll enter the URL for your camera's main high resolution stream, and again check that mysterious "limit decoding unless required" box.
Under trigger, the only box you want checked is "capture an alert list image", and under record you'll select "when triggered" and give yourself at least 5 seconds of pre-trigger video buffer. As far as I can tell, the pre-trigger video is stored in RAM, so this shouldn't be too taxing on your drive, but if you have a low-RAM system with 4K cameras you might run into issues. I've found that 5 seconds of pre-trigger is enough for cameras that will be spotting people, but I use 10 seconds for car detection.
Repeat this process to add each of your high resolution camera streams.
You may have noticed that we set these cameras to record on trigger but then disabled motion detection, which is the primary way to trigger a camera. That's because the AI program is going to be in charge of triggering the camera, so let's get that set up.
The AI program has two components: the computer vision server that processes images via an API, and a custom-built program made by GentlePumpkin that watches the aiinput folder for specific files and sends those images to the computer vision server for processing.
The computer vision server we're going to use is called DeepStack by DeepQuest AI, and it has both paid and free tiers. We're going to use the free version, but you still need to register on their website to get an activation code. Even though registration is required, all images are processed locally and no cloud services are used. Once you sign up with DeepStack using the link in the description, you can obtain your activation key from the web interface by clicking on dashboard.
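To give a feel for the API side of this: DeepStack's object detection endpoint (POST /v1/vision/detection, with the JPEG sent as a form field named "image") returns JSON shaped like the sample below. The filtering function is a minimal sketch of what a tool like AI Tool does with that response; the confidence values and helper name are made up for illustration:

```python
# A sample of the JSON a DeepStack detection call returns (values invented).
sample_response = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.91,
         "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
        {"label": "dog", "confidence": 0.34,
         "x_min": 300, "y_min": 40, "x_max": 380, "y_max": 120},
    ],
}

def relevant_objects(response: dict, wanted: set, min_conf: float = 0.5) -> list:
    """Return labels that should trigger the HD camera: only wanted object
    types, and only above the lower confidence limit."""
    return [p["label"] for p in response.get("predictions", [])
            if p["label"] in wanted and p["confidence"] >= min_conf]

print(relevant_objects(sample_response, {"person", "car"}))  # the dog is ignored
```

This is also why the confidence limits in AI Tool matter: anything below the lower limit is dropped before a trigger is ever sent.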
You have two main options for installing the DeepStack server: as a Windows program, or as a Docker container. The HUGE downside of the Windows program is that there's no way to auto-start it after a reboot, because it can't be run as a service and you need to click specific buttons to start the server. So to get a consistent and reliable system, we need to use the Docker container. You could install Docker for Windows to accomplish this, but if you're already running Home Assistant in a virtual machine there's another option: you can install additional Docker containers in your Home Assistant instance, since HassOS uses Docker. You just need to access it by installing the Portainer addon from the community addons repository.
A quick warning: this gives you way more access than you're used to having in Home Assistant, and the likelihood of breaking everything is high if you mess something up, so make sure you have a backup of your virtual machine, and know that this is not for the faint of heart.
After installing Portainer, toggle the switch at the top to turn off safe mode and hit start. After giving it a few seconds to load, hit open web UI to get to the Portainer interface.
Click on primary, then containers, and add container.
Name your container something descriptive like deepstack, and for the image, we're going to pull it from Docker Hub by entering deepquestai/deepstack.
We need to send commands to this server, so we need to map a port: hit manual network port publishing. I used port 83, so click publish new network port and map port 83 on the host to port 5000 on the container.
Under advanced container settings, go to volumes, hit map additional volumes, type in /datastore, and select localstorage – local from the dropdown. Then go to env and add an environment variable named VISION-DETECTION, in all caps with a dash, and set it equal to True. Last, click on restart policy and set it to always, then hit deploy container. If everything went well, you should now have a computer vision server running on the IP address of your Home Assistant virtual machine on port 83 that will automatically start any time the virtual machine is running.
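If you're running Docker somewhere other than Portainer, the container setup above should be roughly equivalent to a single docker run command like this (the container name and port 83 are the choices from this walkthrough, not requirements):

```shell
# Run DeepStack detached, map host port 83 to the container's 5000,
# enable object detection, persist /datastore, and restart with the host.
docker run -d --name deepstack --restart always \
  -p 83:5000 \
  -e VISION-DETECTION=True \
  -v localstorage:/datastore \
  deepquestai/deepstack
```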
Access your DeepStack server by putting that address in a web browser; you'll need to enter your activation code from the DeepStack website dashboard. The activation code expires after two years, but it's not clear what happens at that point… hopefully you'll just need to paste in an updated activation code.
Next we need to set up the program that serves your Blue Iris images to the vision server. Grab the zip file for the latest version of the AI Tool from the link in the description and unzip it somewhere permanent on your Blue Iris computer. You'll set this up as a Windows service later so it automatically starts when your computer reboots, but for now let's just get it configured, so double-click on aitool.exe to start it.
Under the settings tab, go to input path and browse to the aiinput folder that you set up in Blue Iris. For the DeepStack URL, enter the IP address of your virtual machine followed by :83, the port the service is running on. Then set up a camera by clicking add camera. Give it a name, and where it says "input file begins with", put in the name of the low resolution camera we set up in Blue Iris, annkesd in this case. If you're not sure what this will be, just look at the aiinput folder to see what prefixes your files have.
Select which objects you'd like to detect and leave your confidence limits at 0 and 100%; we'll adjust these later when we have some data. I choose not to use the cooldown timer, but you can set it to analyze only one image from a camera in a given period of time, which is useful if you have limited system resources and are noticing high CPU usage. The last thing you need is your trigger URL. This uses the Blue Iris API to trigger a specific camera by sending the "trigger" command with the camera name, username, and password as parameters. If you don't get this URL right, nothing is going to work, so check it by putting it in a normal web browser: if it's correct you should get a response like this, and your HD camera should start recording. Hit save and test it out. You should see the overview page switching between "running" and "processing image". If it seems to be stuck on the processing image step, you likely have an issue with your DeepStack server URL, or your DeepStack server may not be running.
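As a sketch of what that trigger URL looks like, here's how it can be assembled from the pieces described above. The host, port, camera name, and credentials are placeholders from this walkthrough; substitute your own:

```python
from urllib.parse import urlencode

def trigger_url(host: str, port: int, camera: str, user: str, pw: str) -> str:
    """Build the Blue Iris admin trigger URL that AI Tool calls."""
    params = urlencode({"camera": camera, "user": user, "pw": pw})
    return f"http://{host}:{port}/admin?trigger&{params}"

url = trigger_url("192.168.1.10", 81, "annkehd", "aiuser", "secret")
print(url)  # paste this into a browser to confirm the HD camera triggers
```

The user here should be the admin-privileged account you created earlier specifically for the computer vision system.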
In the history tab you can see the images being fed to the AI server and what objects were detected, along with how confident the AI was that it was actually that object type. If you're noticing too many false positives, you can raise the lower confidence limit, but I'd recommend getting a feel for each camera and the confidence levels it generates before tweaking those values.
A problem I personally ran into was detection of objects that weren't the cause of the motion. For instance, I wanted to detect cars with my second-story front yard camera, but it always detected the cars parked in my neighbor's driveway, resulting in a lot of false motion. The good news is that it's pretty easy to make a mask for one specific area, which blocks triggers from objects in that area. There's a good tutorial for doing that in GentlePumpkin's post, but the result is that the computer vision still works; it just doesn't trigger the camera if the object is in a masked zone.
I recommend checking your history tab for the first couple of days you're running the server to see what issues the computer vision might have with your specific setup. When you feel confident in the accuracy of the detection, you can set AI Tool up to run as a background service that starts immediately whenever your computer restarts; again, the exact method is outlined in detail in GentlePumpkin's post.
Now that your motion detection won't be tricked by shadows, spider webs, rain, or moving trees, you can reliably use that data in automations. Blue Iris can send and receive MQTT messages, allowing you to get this information into Node-RED or even set up binary sensors in Home Assistant. To set up MQTT in Blue Iris, click on main menu, then digital IO and IoT, then hit the configure button in the MQTT section and put in the credentials for your MQTT broker. To actually send a message, go to the settings of the full resolution camera you want to monitor, click on alerts, select "fire when this camera is triggered", and then under actions, go to on alert and add an MQTT topic and payload. Hit okay, and you can test it by hitting the lightning bolt icon.
To set up a motion sensor in Home Assistant, add an entry to the configuration file under binary_sensor, set the device_class to motion, and put in the MQTT topic for that camera. Since we're only sending positive motion events, we also need an "off_delay" of a certain number of seconds. You can tweak this to your liking, but I use 30 seconds without a new motion event to revert back to the "clear" status.
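As a hypothetical example (the sensor name, topic, and payload here are placeholders and must match whatever you configured in Blue Iris), the configuration.yaml entry might look like this, using the MQTT binary sensor platform syntax from the era of this video:

```yaml
binary_sensor:
  - platform: mqtt
    name: "Back Yard Person"
    state_topic: "blueiris/annkehd/motion"   # topic set in the Blue Iris alert action
    payload_on: "on"                          # payload set in the Blue Iris alert action
    device_class: motion
    off_delay: 30                             # revert to "clear" after 30s of no events
```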
Once it's in Home Assistant, you can use this binary motion sensor to do any number of things, like sending actionable notifications, warning intruders, or controlling outdoor lights only when it's dark and a human has been detected. The possibilities are endless.
I'm not sure how many people are going to make it to this point in the video, because setting up a system like this can feel overwhelming at times, but in my opinion this is the absolute best system possible for keeping things local, relatively inexpensive, and reliable. If you have questions or need help, leave a comment or come join the thousands of home automation and security enthusiasts in The Hookup home automation Facebook group.
If you'd like to help support my channel and the videos I make, consider becoming a patron or check out the other links in the description. If you enjoyed this video, please hit that thumbs up button and consider subscribing, and as always, thanks for watching The Hookup.