It’s summertime and, as usual here in Italy, that means vacation!
Vacation means leaving your home and hoping that everything will be OK when you get back.
As I wrote here (only in Italian, I’m afraid), I’m using Arlo and some other cameras to keep an eye on my house while I’m away.
The major issue with motion-detection-based systems is the “false alarm” event.
To avoid false alarms, I used ImageAI to analyze the video and send me a notification only if a *REAL* person is detected in the frames.
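Just to give an idea of what that filter looks like, here is a minimal sketch using ImageAI’s VideoObjectDetection (the model file, paths and thresholds below are placeholders, and the exact API depends on your ImageAI version):

```python
from imageai.Detection import VideoObjectDetection

# Load a pre-trained RetinaNet model (the .h5 path is a placeholder)
detector = VideoObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath("resnet50_coco_best_v2.1.0.h5")
detector.loadModel()

# Only the "person" class matters: cats, cars and shadows are ignored
custom = detector.CustomObjects(person=True)

def per_frame(frame_number, output_array, output_count):
    # output_count is a dict like {"person": 2}
    if output_count.get("person", 0) > 0:
        print(f"Person in frame {frame_number}: time to send a notification!")

detector.detectCustomObjectsFromVideo(
    custom_objects=custom,
    input_file_path="camera_clip.mp4",        # placeholder clip from the camera
    output_file_path="camera_clip_detected",  # ImageAI adds the extension
    frames_per_second=20,
    minimum_percentage_probability=40,
    per_frame_function=per_frame,
)
```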
This summer, I was thinking about improving this system to make it more real-time. How? Of course, with an Nvidia GPU!
I was looking for some benchmarks to understand how much faster my algorithm would run on a GPU.
My search came up empty: nobody on “the internet” was nerdy enough to post their own benchmarks for people like me 🙁
So here I am!
Testing!
I tested the same algorithm (this one) on CPU and GPU using a real HD video from one of my cameras (the clip is 2m:02s long).
On my CPU, an Intel i7-8650U, the process eats 4 of my 8 cores for 2 minutes!
Why only 4 cores? Because I’m running other VMs on the same system, so I’d like to keep some spare resources.
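By the way, if you want to make sure TensorFlow (the backend ImageAI runs on) doesn’t grab every core, capping its thread pools is enough. This is just a sketch with arbitrary numbers, not necessarily my exact setup:

```python
import tensorflow as tf

# Must be called before any model is loaded.
# Roughly: "use about 4 cores for the heavy math, 2 for scheduling ops".
tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(2)
```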
So, here we go with the GPU: a GeForce MX130.
The result: 00m:53s!
The average processing time per frame on the GPU was about 100 ms.
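If you want to reproduce the per-frame measurement, a rough sketch could look like this (names are placeholders, and per_frame is the same kind of callback shown above, not necessarily what I used):

```python
import time
import tensorflow as tf

# Sanity check: this should list the MX130 if the GPU is visible to TensorFlow
print(tf.config.list_physical_devices("GPU"))

frame_times = []
_last_tick = time.perf_counter()

def per_frame(frame_number, output_array, output_count):
    """Record how long each frame took end-to-end."""
    global _last_tick
    now = time.perf_counter()
    frame_times.append(now - _last_tick)
    _last_tick = now

# ...pass per_frame as per_frame_function to detectCustomObjectsFromVideo()...
# then, after the run:
# avg_ms = sum(frame_times) / len(frame_times) * 1000
# print(f"average per-frame time: {avg_ms:.0f} ms")
```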
I know what you’re thinking now: 53 seconds versus 120 seconds isn’t a huge improvement (roughly a 2x speedup), but:
1) before this, the GPU in this Proxmox system was just a useless piece of hardware;
2) even without any speed improvement, using the GPU instead of the CPU means leaving CPU and RAM resources free for other processes.
I’m really excited about this piece of software!