Multi-Camera People Detection Algorithm
Extracts image features from a live video feed to find a specific person from a group of people.
My original plan was to develop an algorithm that tracked motorcycles racing around a track and autonomously switched between the available cameras based on which had the best view (i.e., the most motion). Recently in Ferguson, MO, police were seizing people's cameras and preventing them from recording the events going on. This infringed on their freedom of the press, letting the PD do whatever it wanted without public knowledge. Without cameras rolling, protesters aren't able to hold the police accountable for anything that happens during a protest.
I wanted to develop an algorithm (expanding on my open-sourced code for the motorcycle racing algorithm) that lets anyone with a camera start recording; each camera that has a specific person or group of people in view then gets the chance to live-stream its input. The algorithm runs on one person's phone, which other users can join to start broadcasting their video streams.
My implementation during the hackathon covered the initial user selection and the comparison/search algorithm that detects that person within a group of people. It is a full-body feature-extraction pipeline using OpenCV and SURF features to compare different images. The input is a live broadcast from my webcam, and the output is a live video of everything going on at any time.
My future plans include expanding my motorcycle selection algorithm to use the new method for finding users, and porting everything to mobile platforms for release to the public. I'm planning to make this a free application so that everyone can use it without worry. This gives the general population the ability to be the big brother again and hold leadership and police accountable for their actions.