Order Motion components by velocity (Motion detection using Emgu)

I am trying to create an ordered list of Motion Components, ordered by motion. For example, in a street-view camera I would like the vehicles first, followed by bicycles, followed by pedestrians. Is this possible using the MotionDetector sample in Emgu?

    if (motionPixelCount < comp.area * 0.05) continue;

This seems to be some kind of thresholding, but what does motionPixelCount really mean? Could it be a proxy for the speed/velocity of the motion component, or something else entirely? It is being compared to the component's area.

tl;dr: how do I determine the velocity/speed of the motion components relative to the rest of the objects?

Best answer

The summary for the function MotionInfo is as follows:

    public void MotionInfo(Rectangle motionRectangle, out double angle, out double motionPixelCount);
    //
    // Summary:
    //     Given a rectangle area of the motion, output the angle of the motion and the
    //     number of pixels that are considered to be motion pixels
    //
    // Parameters:
    //   motionRectangle:
    //     The rectangle area of the motion
    //
    //   angle:
    //     The orientation of the motion
    //
    //   motionPixelCount:
    //     Number of motion pixels within the silhouette ROI

It is determined by checking the number of pixels within the motion's "Rectangle" ROI that are considered motion pixels. This thresholding prevents thousands of results in which only 1 or 2 pixels appear to have moved. However, if you know that small, spurious movements are unlikely in your scene, you can relax or remove this threshold.
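As a rough illustration, here is a sketch of the per-component loop along the lines of the Emgu 2.x MotionDetector sample that the question's line comes from. The _motionHistory field is assumed to have been updated with the current foreground mask, minArea is an illustrative minimum-area threshold, and the exact API differs between Emgu versions:

    // Sketch of the per-component loop (Emgu 2.x style API).
    using (MemStorage storage = new MemStorage())
    {
        Seq<MCvConnectedComp> motionComponents = _motionHistory.GetMotionComponents(storage);

        foreach (MCvConnectedComp comp in motionComponents)
        {
            // Reject components with a tiny bounding area outright.
            if (comp.area < minArea) continue;

            // Angle and motion pixel count for this component's rectangle.
            double angle, motionPixelCount;
            _motionHistory.MotionInfo(comp.rect, out angle, out motionPixelCount);

            // The 5% rule: skip components where only a handful of pixels inside
            // the bounding rectangle actually moved. Relax or remove this if small
            // genuine motions matter in your scene.
            if (motionPixelCount < comp.area * 0.05) continue;

            // ... draw or record the component here ...
        }
    }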

We can use this data to work out the speed and velocity of the detected components, but we need some additional information. We need to know how many pixels correspond to a real-world measurement of an object at a given distance from the camera. I have discussed this problem in more detail elsewhere. If we know the size of the object being tracked and its distance from the camera, this becomes fairly easy.
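As an example of what that calibration might look like (a minimal sketch; the reference object, its assumed real-world width and the measured pixel width are all hypothetical values, not anything Emgu provides):

    // Hypothetical calibration: a reference object of known real-world width,
    // measured once in the image at the distance objects will pass the camera.
    double referenceWidthMm = 1800.0;    // assumed real-world width, e.g. a typical car
    double referenceWidthPixels = 240.0; // width of that same object in the image

    // Millimetres represented by one pixel at that working distance.
    double mmPerPixel = referenceWidthMm / referenceWidthPixels;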

If we have an object at a known, fixed distance, we can convert its pixel area into a real-world size and separate the objects by that size. This is easier than the full speed/velocity calculation because it does not need an accurate distance measurement for every object, but the more accurately the object area is measured, the more accurate the tracking results will be. Assume we have obtained such a pixel-to-millimetre measurement.

The following call gives you the overall motion:

double overallAngle, overallMotionPixelCount;
_motionHistory.MotionInfo(motionMask.ROI, out overallAngle, out overallMotionPixelCount);

So:

Speed = (overallMotionPixelCount * mm_per_pixel_measurement) / Time_Between_Frames
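Transcribed literally into C# (a sketch only: mmPerPixel comes from the calibration above and frameRate is an assumed capture rate, neither is an Emgu property; the formula uses the motion pixel count as a rough distance proxy, exactly as written above):

    // Literal transcription of the formula above; a rough estimate, not a
    // calibrated measurement.
    double frameRate = 25.0;                           // assumed capture rate in fps
    double timeBetweenFramesSeconds = 1.0 / frameRate;
    double speedMmPerSecond =
        (overallMotionPixelCount * mmPerPixel) / timeBetweenFramesSeconds;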

Velocity, if my physics is still up to scratch, is essentially speed with a direction, and it requires you to track the object over time, so:

Velocity = (overall_distance_moved * mm_per_pixel_measurement) / (Sum_of_frames_Tracked / Frame_Rate)

Velocity = (overall_distance_moved * mm_per_pixel_measurement) / (time_object_was_tracked_for)
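To connect this back to the original question (an ordered list of motion components), here is a hedged sketch of the bookkeeping: keep a small record per tracked object, accumulate how far its centre moves between frames, then order the objects fastest-first using the formula above. Everything here (TrackedObject, trackedObjects, mmPerPixel, frameRate) is illustrative and not part of the Emgu API:

    // Illustrative per-object record for tracking over time; not part of Emgu.
    // (Assumes using System.Collections.Generic; using System.Drawing; using System.Linq;)
    class TrackedObject
    {
        public PointF Centre;         // last known centre of the component, in pixels
        public double DistancePixels; // total distance the centre has moved, in pixels
        public int FramesTracked;     // number of frames the object has been matched
    }

    // Once each frame has matched components to objects and updated
    // DistancePixels / FramesTracked, order the objects fastest-first.
    List<TrackedObject> fastestFirst = trackedObjects
        .Where(t => t.FramesTracked > 1)
        .OrderByDescending(t =>
            (t.DistancePixels * mmPerPixel) / (t.FramesTracked / frameRate))
        .ToList();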

Having said all that, this is difficult to do accurately, which is why alternative technologies such as sonar or infra-red are used when speed needs to be tracked precisely.

One interesting example, in the UK at least, is the combination of painted lines on the road and speed cameras. The distance between the lines must be within 5 mm of a specified measurement, and the car's speed is then calculated against those references from two photographs of the same car. It is a crude way of giving the image some depth data. These camera types are being replaced, because if the lines are not within that 5 mm tolerance the speed cannot be measured accurately and the reading will not stand up in court.

OK, I have gone a little off track, but unless you are tracking objects of a known size at a known distance, you cannot turn apparent (pixel) distance into real distance without some depth data, either from a specialised depth camera or from two cameras. (Note that one camera and two frames can also work if you know how far the camera has moved between the frames and there is no rotation.)

I hope this helps,

Cheers,

Chris

Other answers

No other answers yet.



