
Kamera görüntülerinden gidilen yolun kestirimi

Moving path estimation from video sequences

Journal Name:

Publication Year:

Abstract (2. Language): 
The aim of this study is to determine the 2D trajectory of a moving object from images obtained by a camera mounted on the object. First, consecutive images from the camera are analyzed with the SIFT algorithm, a step that identifies and computationally describes the key points (interest points) used to generate the 2D trajectory. The ability to detect and match features across multiple views of a scene is a crucial first step in many computer vision algorithms for dynamic scene analysis. State-of-the-art methods such as SIFT perform successfully on typical images taken by a digital camera or camcorder; however, they often fail to generate an acceptable number of features on medical images, because such images usually contain large homogeneous regions with little color and intensity variation. Feature detection and description are among the most important steps in many computer vision algorithms, since distinctive image features can be used to establish matches across multiple images in a video sequence. Next, the length and direction of the object's motion are determined from histograms of the coordinate differences between the SIFT key points and their matched counterparts in the following frame. Experiments conducted in real environments with a mobile phone camera showed that the trajectory can be determined successfully when the objects surrounding the moving camera are static; when the environment contains other moving objects, the estimated trajectory is contaminated by noise. This noise arises because key points detected on another moving object are matched to key points on the same object a few seconds later, after it has moved to a new position. The accuracy of the method can be evaluated by comparing the net cumulative error of the visual odometry results with results obtained from wheel odometry. In other words, visual odometry uses an image sequence to estimate the motion of a robot and, optionally, the structure of the world; the low cost, small size, and high information content of cameras make them ideal sensors for robot platforms. Within this framework we also compare different approaches and show that relative orientation is superior to absolute orientation for pose estimation. We test the algorithm in outdoor and indoor environments and present results demonstrating its effectiveness. The SIFT method is used to detect identical points in successive frames, and histograms of the coordinate differences of the matched points are used to estimate the path. The resulting path is represented in two-dimensional space by a sequence of directed line segments connecting the matched (identical) points. The SIFT method has been used effectively on 2D grayscale images to identify and match invariant features.
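A minimal Python sketch of the pipeline described above, assuming OpenCV's SIFT implementation (cv2.SIFT_create) and a brute-force matcher; the ratio-test threshold, histogram bin width, and minimum-match count are illustrative assumptions, not values reported in the paper:

import cv2
import numpy as np

def frame_displacement(img1, img2, bin_width=2.0, min_matches=10):
    # Detect SIFT key points and descriptors in two consecutive grayscale frames.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return None  # too few features, e.g. a blurred frame

    # Match descriptors and keep only distinctive matches (Lowe's ratio test).
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_matches:
        return None

    # Coordinate differences of the matched key points.
    dx = np.array([kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in good])
    dy = np.array([kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1] for m in good])

    # Dominant shift per axis: the centre of the most populated histogram bin.
    def hist_mode(d):
        edges = np.arange(d.min(), d.max() + bin_width, bin_width)
        if len(edges) < 2:
            return float(d.mean())
        counts, edges = np.histogram(d, bins=edges)
        i = int(np.argmax(counts))
        return float((edges[i] + edges[i + 1]) / 2.0)

    return hist_mode(dx), hist_mode(dy)

def estimate_path(frames):
    # Chain per-frame displacements into a 2D path of directed segments.
    path = [(0.0, 0.0)]
    for prev, curr in zip(frames, frames[1:]):
        step = frame_displacement(prev, curr)
        if step is not None:
            x, y = path[-1]
            path.append((x + step[0], y + step[1]))
    return path

In this sketch the histogram mode is taken as the per-frame shift rather than an average over all matches; this design choice keeps a handful of mismatches, for example on other moving objects, from dominating the estimate.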
Furthermore, the SIFT method works well for object recognition problems in which a training image of the object of interest is given. The number of features extracted, or the number of matches between consecutive views, is a good indicator of the success of the algorithm: if the number of extracted features or matches between consecutive frames suddenly drops to a very low value, the image is very likely blurred. Image blur is most often caused by fast robot rotation; once the rotation ends, image quality recovers and a large number of features and matches can again be extracted. Blurred images caused by rotation typically affect only two or three frames, so when a low number of features is detected, visual odometry can be paused until a good image arrives; once a good image is obtained, there is a very high probability that the following frames will also be usable. The goal of the experiment is to compare the experimentally estimated path with the theoretical path of an object carrying a camera. The experimental results show that path estimation from successive images is applicable in static environments, whereas in dynamic environments the generated paths are very noisy. The merit of the proposed approach is that it retains the benefit of fast feature extraction methods and performs the more expensive robust image matching only when needed.
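The blur-handling heuristic in this paragraph can be sketched as a simple filter over the frame stream; the MIN_FEATURES threshold below is an assumed illustrative value, not one reported in the paper:

import cv2

MIN_FEATURES = 50  # assumed threshold for a usable frame

def usable_frames(frames):
    # Skip frames whose SIFT feature count drops sharply (likely motion blur
    # from fast rotation) and resume once the feature count recovers.
    sift = cv2.SIFT_create()
    for frame in frames:
        keypoints = sift.detect(frame, None)
        if len(keypoints) >= MIN_FEATURES:
            yield frame
        # else: pause odometry for this frame and wait for a good image

Combined with the earlier sketch, estimate_path(list(usable_frames(frames))) would chain displacements only between frames that pass the feature-count check.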
Abstract (Original Language): 
The aim of this study is to determine the path travelled in 2D by a moving object using images taken from a camera mounted on the object. The basic approach, and the method used here to estimate the travelled path from camera images, is to extract common points from the image frames and to locate the moving object by finding the displacement of these common points. As the method, key points, the outputs of the SIFT (Scale-Invariant Feature Transform) algorithm, are first obtained and matched across consecutive images taken from the camera. Then, using histograms of the coordinate differences of the matched key points, the magnitude and direction of the distance travelled by the object between the two images are determined, and the travelled path is represented in 2D space as a sequence of directed line segments. Experiments carried out in real environments with a mobile phone camera showed that the travelled path can be determined successfully in static environments, whereas in dynamic environments (where there are moving objects other than the one carrying the camera) it is determined with noise. The cause of the noise is that key points are found on the moving objects and are matched to key points found on the same moving objects in the following image. The accuracy of the method can be measured by comparing the cumulative error totals of the results obtained with visual odometry and those obtained with wheel odometry.
Pages: 5-11

REFERENCES

Haehnel, D., Schulz, D., and Burgard, W., 2002, Map building with mobile robots in populated environments, International Conference on Intelligent Robots and Systems (IROS), 496-501.
Maimone, M., Cheng, Y., and Matthies, L., 2007, Two Years of Visual Odometry on the Mars Exploration Rovers, Journal of Field Robotics, 24(3), 169-186.
Lowe, D. G., 2004, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60(2), 91-110.
Serce, H., Bastanlar, Y., Temizel, A., and Yardimci, Y., 2008, On Detection of Edges and Interest Points for Omnidirectional Images in Spherical Domain, SIU 2008, 20-22 April, Didim, Turkey.
