Steps to Develop a Platform for Robust Vision

MQP Team:

  • Samir Zutshi  

Advisor:

  • Eduardo Torres-Jara  

The scope of this work centers on real-time, active vision systems. An active system is desirable because it allows algorithms that are less dependent on illumination, which is a prerequisite for robust vision. From an optical point of view there are four degrees of freedom: zoom, focus, light filtering, and motion. Motion is the focus here because motion-based cues are less dependent on illumination for applications such as edge detection. The work presented is based on image subtraction between two black-and-white frames to detect change; applied in a loop, this reveals any motion occurring in the camera's field of view. Extending the concept, the detection is performed by moving the camera itself rather than by waiting for motion in the environment. This effectively produces edges of the area in view and yields more useful information about objects in the scene than a Canny rendering of the same view. Because real-time performance is a concern, techniques such as integral images are used to cluster the edges, since conventional clustering can be expensive. A platform with one degree of freedom has been built and is intended to be tested with the techniques described above. Results are presented as frame-by-frame visual comparisons while the camera follows a simple motion path. The difficulties of real-time operation are what this work is meant to highlight, and the techniques presented attempt to address some of them with the goal of finding an alternative, efficient solution.
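
As a rough illustration of the frame-differencing and integral-image ideas described above, the following Python/OpenCV sketch subtracts consecutive grayscale frames, thresholds the difference, and sums difference pixels per block with a summed-area table. The threshold value, block size, and camera index are illustrative assumptions, not values or code from the project itself.

    # Sketch of frame differencing plus integral-image block sums (assumed parameters).
    import cv2
    import numpy as np

    def motion_edges(prev_gray, curr_gray, thresh=25):
        # Absolute difference of two grayscale frames, binarized.
        # When the camera itself moves between frames, the difference image
        # highlights intensity boundaries in the scene, i.e. edge-like structure.
        diff = cv2.absdiff(prev_gray, curr_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask

    def block_density(mask, block=16):
        # Cheap clustering of difference pixels via an integral image.
        # The summed-area table gives the sum over any rectangle in O(1),
        # so per-block densities can be computed without rescanning pixels.
        integral = cv2.integral(mask // 255)  # shape (H+1, W+1)
        h, w = mask.shape
        densities = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                total = (integral[y + block, x + block] - integral[y, x + block]
                         - integral[y + block, x] + integral[y, x])
                densities.append(((x, y), int(total)))
        return densities

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)  # camera index is an assumption
        ok, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mask = motion_edges(prev, curr)
            cv2.imshow("motion edges", mask)
            prev = curr
            if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                break
        cap.release()
        cv2.destroyAllWindows()

In a looped setting like this, moving the camera along a simple path makes the per-frame difference image trace out scene edges, and the block densities give an inexpensive stand-in for clustering the resulting edge pixels.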
