This is not an "issue" so much as a question. First of all, thank you for sharing this code. I have been testing it on my images, and it works better right out of the box than any other segmentation method I've tried. It's pretty amazing.

I am working with images and video captured from drones used to inspect power grid structures (flying around telephone poles, transmission towers, etc.). When a drone slowly circles a telephone pole filming its components, adjacent video frames make it look as if the foreground objects (i.e. objects on the pole, like transformers and insulators) are shifted slightly relative to the background. In other words, the video is essentially performing the "shifting" that your code simulates. The same could be said of any video showing an object moving relative to the background, like a car, as long as the foreground object isn't changing too much (shape, size, angle) from frame to frame.

So I was imagining that an approach similar to yours could be used for unsupervised segmentation of these foreground objects, given such video. Let's assume the object-tracking problem is solved, i.e. I can extract from the video a sequence of frames in which the same foreground object appears at roughly the same size and shape in the center of the frame, and only the background shifts from frame to frame. Before I start adapting your method to this scenario, I was wondering whether you had any thoughts on it. Is there an obvious way to do this? Or an obvious reason I shouldn't? Is this already a solved problem? Thanks for your time ;)
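For concreteness, here is a minimal sketch of the inter-frame-shift idea described above (this is not your method, just an illustration of the premise): estimate the global background translation between two adjacent frames by phase correlation, align one frame's background to the other, and flag pixels that still disagree as foreground candidates. The function names `estimate_shift` and `foreground_mask` are made up for this sketch, and it assumes the background motion is a pure integer translation; real drone footage would presumably need subpixel registration, a homography, or dense optical flow instead.

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the dominant (background) translation taking frame b onto frame a,
    via phase correlation: the normalized cross-power spectrum of a shifted pair
    peaks at the displacement."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-8)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # Map wrapped peak coordinates to signed shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def foreground_mask(a, b, thresh=0.1):
    """Align b's background to a, then difference: pixels that still disagree
    after background alignment are foreground candidates (the static object
    'moves' in the aligned frame while the background cancels)."""
    dy, dx = estimate_shift(a, b)
    b_aligned = np.roll(b, (dy, dx), axis=(0, 1))  # circular shift: a toy stand-in for warping
    return np.abs(a - b_aligned) > thresh
```

On a synthetic pair — a random background shifted by a few pixels, with a bright square held fixed in both frames — the recovered mask lights up only around the square, which is the behavior the question is counting on. Averaging such masks over many frame pairs would be the natural next step toward a stable segmentation.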