For those who have heard of OpenTLD: how does it alternate between tracking different objects? It can only track one object at a time, but if I had two or more objects trained in the same video feed, how does OpenTLD decide which one to track? In all the sample videos, the user manually drew a bounding box around the object to be tracked, and afterwards it was tracked automatically.
Is OpenTLD considered only an object tracker, and not an object recognition system? I'm slightly confused about this distinction.
For my applications, I'm fine with tracking/detecting one object at a time, but only if I have the option of switching over to track another object.
For example, in a Haar-like feature setup:

1. I have a cup and a book, each trained using several positive and negative samples.
2. On starting up my Haar recognition software, it picks up both the cup and the book and highlights them with the correct labels.
Ideally, what I think/hope/wish OpenTLD does is:

1. Using the compiled exe, bound the cup in the video; it is tracked and learned.
2. Next, bound the book in the video; it is tracked and learned.
3. In a video feed containing both the book and the cup, I tell the program to list all the objects it can detect in the live feed.
4. The program tells me that it detects the cup and the book, and gives me the option to track one of them.
Is this feasible?
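To make the wished-for workflow concrete, here is a minimal, purely illustrative sketch of "detect everything, then track one": each trained object gets its own detector model, every frame is scanned with all of them, and the user then selects exactly one detection to hand to the tracker (mirroring TLD's single-object limitation). The detector internals are stubbed out, and names like `StubDetector` and `MultiObjectSwitcher` are hypothetical — this is not the OpenTLD API, just the control flow being asked about.

```python
class StubDetector:
    """Stands in for one trained per-object model (e.g. one TLD model per object)."""

    def __init__(self, label, box):
        self.label = label
        self.box = box  # pretend this is where the detector finds the object

    def detect(self, frame):
        # A real detector would scan `frame`; the stub just returns its box.
        return self.box


class MultiObjectSwitcher:
    """Runs every registered detector on a frame, but tracks only one target."""

    def __init__(self):
        self.detectors = {}
        self.active = None  # label of the single object currently being tracked

    def register(self, detector):
        self.detectors[detector.label] = detector

    def detect_all(self, frame):
        # Step 3 of the wished-for workflow: report every known object in view.
        return {
            label: det.detect(frame)
            for label, det in self.detectors.items()
            if det.detect(frame) is not None
        }

    def track(self, label):
        # Step 4: the user picks ONE detection; tracking stays single-object.
        if label not in self.detectors:
            raise KeyError(f"no trained model for {label!r}")
        self.active = label


switcher = MultiObjectSwitcher()
switcher.register(StubDetector("cup", (10, 20, 40, 40)))
switcher.register(StubDetector("book", (100, 50, 60, 80)))

frame = None  # placeholder for a video frame
print(sorted(switcher.detect_all(frame)))  # ['book', 'cup']
switcher.track("cup")
print(switcher.active)  # cup
```

The point of the sketch is that nothing stops you from keeping several trained models side by side and running all of them for detection, as long as the tracking stage itself is restricted to whichever single object you last selected.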