What are Eigenfaces generated from?


I'm working with eigenfaces for a facial recognition program I am writing. I have a couple questions about how eigenfaces are actually generated:

  1. Are they generated from a lot of pictures of different people, or a lot of pictures of the same person?

  2. Do these people need to include the people you want to recognize? If not, then how would any type of comparison be made?

  3. Is an eigenface determined for every image you provide, or do multiple pictures go towards creating one eigenface?

This is all about the generation or learning phase of the eigenfaces. Thanks for any help or pointing me in the right direction!


I actually find the description of Eigenfaces on Wikipedia quite useful. To answer your questions:

  1. Yes, you should take pictures from many different people.
  2. No, the eigenfaces basically give you a way to describe other faces. You can think of the eigenfaces as a basis in a vector space. You have to make sure that you can describe the face that you want to recognise with the eigenfaces that you have. If you only use Caucasian faces to determine the eigenfaces, you might have problems describing a variety of Asian faces with them and vice versa.
  3. The eigenfaces are computed from the whole set of images at once: every training picture contributes to every eigenface, and a set of N images yields at most N - 1 meaningful eigenfaces (see the training sketch below).
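
To make this concrete, here is a minimal sketch of the training phase in NumPy. It is not from the original question; the image size, the random placeholder data, and the choice of k are assumptions for illustration:

```python
import numpy as np

# Suppose `faces` is an (N, H*W) array: N registered, grayscale training
# images, each flattened into a row vector. Random data stands in for
# real face images here (an assumption for this sketch).
N, H, W = 50, 64, 64
faces = np.random.rand(N, H * W)

mean_face = faces.mean(axis=0)
centered = faces - mean_face          # PCA works on mean-centered data

# SVD of the centered data matrix; the rows of Vt are the principal
# components, i.e. the eigenfaces, ordered by variance explained.
# N images give at most N - 1 meaningful eigenfaces.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 20                                # keep the top-k eigenfaces
eigenfaces = Vt[:k]                   # shape (k, H*W); reshape a row to (H, W) to view one
```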

Edit: Answering the question that Kevin added in a comment on the question:

The idea behind using eigenfaces is that you can express an image of a face by mixing eigenfaces together. Let's suppose you have three eigenfaces ef_1, ef_2, ef_3 and an image of a face f_1 = a_1 * ef_1 + a_2 * ef_2 + a_3 * ef_3. The eigenfaces do not change regardless of which face you want to express with them; the coefficients a = (a_1, a_2, a_3), however, are characteristic of the face. This is what you would use to compare two faces.
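
Continuing the training sketch above (it reuses the `eigenfaces` and `mean_face` arrays from that snippet; the distance threshold is an arbitrary assumption), the comparison might look like this:

```python
def coefficients(face):
    """Project a flattened face onto the eigenfaces, returning its
    coefficient vector a = (a_1, ..., a_k)."""
    return eigenfaces @ (face - mean_face)

# Faces are compared through their coefficient vectors, not pixel by pixel.
coeffs_a = coefficients(faces[0])
coeffs_b = coefficients(faces[1])
distance = np.linalg.norm(coeffs_a - coeffs_b)
same_person = distance < 4.0          # the threshold must be tuned on real data
```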

But in order to get to the stage where you can use eigenfaces, you first have to align (register) an observed face with the eigenfaces, which is not trivial and a completely different topic (see pxu's answer).

P.S.: I recommend that you keep an eye on Area 51: Computer Vision, a Stack Overflow sister site about computer vision that is in the making.


  1. Many different people are necessary: the training set needs enough support to cover all the faces you may encounter.
  2. No need for that, although you do need to cover all the relevant dimensions. A good analogy is barycentric coordinates, which describe the location of a point in a triangle as a weighted average of the vertices. If you don't have sufficient support (for example, you only have two points), then no matter how you play with the weighted average, you can't describe points that lie off the line through them (see the numerical sketch after this list). This is essentially bjoernz's point about Caucasian vs. Asian faces. Note that this analogy is a gross simplification: the weights in eigenfaces behave more like PCA or Fourier coefficients.
  3. Each image can be re-expressed in terms of the eigenfaces, i.e. turned into a vector of principal-component coefficients; the eigenfaces themselves are computed from the whole set.
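
Here is a tiny NumPy illustration of the support argument from point 2; the vertices and weights are made up for the example:

```python
import numpy as np

# Triangle vertices; any point inside the triangle is a weighted
# average of all three (the weights sum to 1).
v1, v2, v3 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
p = 0.5 * v1 + 0.25 * v2 + 0.25 * v3
print(p)                              # [0.25 0.25]

# With only two vertices, every weighted average stays on the segment
# between them: the y-coordinate is always 0, so p is unreachable no
# matter how the weights are chosen.
for w in np.linspace(0.0, 1.0, 5):
    print(w * v1 + (1 - w) * v2)      # all of these lie on the line y = 0
```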

Nota bene: you need very good registration of the faces. Eigenfaces notoriously lacks translation and rotation invariance, so your results are likely to be terrible unless you register well. The original Turk and Pentland paper was groundbreaking not just because of the technique, but also because of the scale and quality of the data set they gathered, which enabled the technique.
