Learning phase of Viola Jones / AdaBoost

I am working on the Viola Jones algorithm.

Based on my understanding, here is the algorithm in pseudocode:

# learning phase of Viola Jones
foreach feature # these are the patterns, see figure 1, page 139
    # these features are moved over the entire 24x24 sample pictures
    foreach (x,y) so that the feature still matches the 24x24 sample picture
        # the features are scaled over the window from [(x,y) - (24,24)]
        foreach scaling of the feature
            # calc the best threshold for a single, scaled feature
            # for this, the feature is put over each sample image (all 24x24 in the paper)
            foreach positive_image
                thresh_pos[this positive image] := HaarFeatureCalc(position of the window, scaling, feature)
            foreach negative_image
                thresh_neg[this negative image] := HaarFeatureCalc(position of the window, scaling, feature)
            #### what's next?
            #### how do I use the thresholds (pos / neg)?
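
One standard way to use the collected thresh_pos / thresh_neg values is to train a decision stump per feature: sort the values and scan once for the threshold and polarity with the lowest weighted error. A minimal sketch in Python (best_stump and all names here are illustrative assumptions, not from the paper):

import numpy as np

def best_stump(values_pos, values_neg, w_pos, w_neg):
    # values_pos / values_neg: HaarFeatureCalc outputs on the positive /
    # negative samples for ONE feature; w_pos / w_neg: current sample weights
    values  = np.concatenate([values_pos, values_neg])
    labels  = np.concatenate([np.ones(len(values_pos)),    # 1 = face
                              np.zeros(len(values_neg))])  # 0 = non-face
    weights = np.concatenate([w_pos, w_neg])

    order = np.argsort(values)
    values, labels, weights = values[order], labels[order], weights[order]

    total_pos = weights[labels == 1].sum()
    total_neg = weights[labels == 0].sum()
    seen_pos = seen_neg = 0.0
    best = (np.inf, None, None)   # (weighted error, threshold, polarity)

    for v, y, w in zip(values, labels, weights):
        # error if every sample below the current value is called "face"
        err_face_below = seen_neg + (total_pos - seen_pos)
        # error if every sample below the current value is called "non-face"
        err_face_above = seen_pos + (total_neg - seen_neg)
        err, polarity = min((err_face_below, +1), (err_face_above, -1))
        if err < best[0]:
            best = (err, v, polarity)
        if y == 1:
            seen_pos += w
        else:
            seen_neg += w
    return best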

This follows on from this SO question: Viola-Jones' face detection claims 180k features

This algorithm calls a function named HaarFeatureCalc, which I think I have understood:

function: HaarFeatureCalc
    threshold := (sum of the pixels in the sample picture that are white in the feature pattern) -
        (sum of the pixels in the sample picture that are grey in the feature pattern)
    # this is calculated with the integral image, described in 2.1 of the paper
    return the threshold
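
The rectangle sums above are what the integral image from section 2.1 of the paper makes cheap: after one pass over the image, any rectangle sum costs four array lookups. A minimal sketch in Python/NumPy (the two-rectangle layout and all names are illustrative assumptions):

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[0:y, 0:x]; one row/column of zero padding
    # so that empty prefixes need no special case
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # sum over the rectangle with top-left (x, y) and size w x h,
    # computed from four lookups in the integral image
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    # a horizontal two-rectangle feature: white half minus grey half
    white = rect_sum(ii, x, y, w // 2, h)
    grey = rect_sum(ii, x + w // 2, y, w // 2, h)
    return white - grey

For example, ii = integral_image(sample) followed by two_rect_feature(ii, 0, 0, 24, 24) evaluates one such feature over a whole 24x24 window.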

Any mistakes so far?

The learning phase of Viola Jones basically finds out which features/classifiers are the most decisive. I don't understand how the AdaBoost described in the paper works.

Question: what does the AdaBoost from the paper look like in pseudocode?

Answer

Viola Jones:

  • First you hard-code the ~180k candidate classifiers (features)
  • Initially, all the training samples have the same weight
  • Each classifier (feature) is then applied to every training sample you have
  • Each feature classifies a training sample as either a face or not a face
  • Based on these classifications you calculate the error of every feature
  • After calculating the error of each feature, you pick the feature whose error is farthest from 50%: if one feature has 90% error and another has 20% error, you select the one with 90% error (inverting its output turns it into a classifier with only 10% error) and add it to the strong classifier
  • You then update the weights of each training sample
  • You repeat the process until the strong classifier you have built reaches a good accuracy on your validation data (see the sketch below)
  • AdaBoost is the technique of building a strong classifier out of such weak classifiers
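
Put together, one round of the boosting loop described in the bullets above could look like the sketch below (Python; a brute-force threshold search is used for readability, the sorted scan shown earlier in the question is the faster way, and the beta/alpha update follows the discrete AdaBoost of the paper; all names are illustrative assumptions):

import numpy as np

def adaboost_round(feature_values, labels, weights):
    # feature_values: (n_features, n_samples) array of precomputed
    #                 HaarFeatureCalc outputs
    # labels: 1 for faces, 0 for non-faces; weights: per-sample weights
    weights = weights / weights.sum()            # normalize every round

    best = None
    for f, values in enumerate(feature_values):
        for thresh in np.unique(values):
            for polarity in (+1, -1):
                # stump: predict "face" when polarity*value < polarity*thresh
                preds = (polarity * values < polarity * thresh).astype(int)
                err = weights[preds != labels].sum()
                if best is None or err < best[0]:
                    best = (err, f, thresh, polarity, preds)

    err, f, thresh, polarity, preds = best
    beta = err / (1.0 - err)                       # update factor from the paper
    alpha = np.log(1.0 / beta)                     # this stump's vote weight
    weights = weights * beta ** (preds == labels)  # shrink correct samples
    return (f, thresh, polarity, alpha), weights

Each call returns the next weak classifier (feature index, threshold, polarity, vote weight alpha) and the reweighted samples; the strong classifier is the alpha-weighted vote of the accumulated stumps.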
