How Does Facial Detection Actually Work?

Ahmed Belhaj
7 min read · Jan 30, 2022


And how does it compare to facial recognition? Are they the same thing?

The answer is no — facial detection and facial recognition are not the same thing. All facial recognition systems use facial detection, but not all facial detection systems use facial recognition. Although they sound similar, they are pretty different.

Facial Detection

What is it?

Facial detection is a computer technology used to identify the presence of human faces in digital images or videos. This is made possible through machine learning algorithms. The main purpose of facial detection is to find any and all human faces within an image or video, even if the faces are in the background or mixed among a multitude of other objects. The proper term for this type of task is object-class detection, since the goal is to find every instance of the specified object, even when it varies in size or location.

What is the identification process?

The main focus of facial detection is the front of the human face. More specifically, the process starts with the detection of the eyes before the presence of a human face can be confirmed. The eyes happen to be one of the easiest features to detect, so once the algorithm finds a pair of them, it will then try to find all other possible facial regions. These facial regions may include the eyebrows, irises, nose, nostrils, and mouth or mouth corners. Once the facial regions have been confirmed, additional tests may be applied in order to further validate that the detected item is indeed a face. As there are many kinds of facial detection software out there, the tests run may vary from one implementation to another. Any face candidate could also be tested and measured for symmetry. While the test for symmetry can help conclude whether a human face was found, it can also help identify facial features that need to be further verified.
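
As a concrete illustration of this "find a face, then confirm it by its parts" idea, here is a minimal sketch using OpenCV's bundled Haar cascade classifiers. It assumes the opencv-python package is installed and that "group.jpg" is a local photo you supply; candidate face regions are only kept when at least two eyes are found inside them.

```python
import cv2

# OpenCV ships these Haar cascade files; cv2.data.haarcascades points to them.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

image = cv2.imread("group.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 1: find candidate face regions at varying sizes and locations.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    # Step 2: look for eyes inside the candidate to help confirm it is a face.
    eyes = eye_cascade.detectMultiScale(roi)
    if len(eyes) >= 2:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)
```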

An image or video may be normalized so the human face can be properly detected. Normalization can happen either before or after a face is thought to be detected. This is because an image may have uneven illumination, meaning the lighting would need to be corrected. Normalization can also help fix any shearing effect that may be interfering with proper facial detection; this distortion is usually caused by head movement.
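
A small sketch of what illumination normalization can look like in practice, assuming OpenCV is installed and a local image named "face.jpg" exists. Both the global and adaptive equalization calls shown here are standard OpenCV functions; they are just one possible choice of normalization.

```python
import cv2

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization spreads pixel intensities out,
# reducing the impact of uneven lighting across the whole image.
equalized = cv2.equalizeHist(gray)

# CLAHE (adaptive equalization) often works better when only
# part of the face is poorly lit.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
adaptive = clahe.apply(gray)

cv2.imwrite("face_equalized.jpg", equalized)
cv2.imwrite("face_clahe.jpg", adaptive)
```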

A Deeper Dive into the Face Detection Methods

The way a face is detected can vary depending on the kind of algorithm being used. There are four main types of face detection methods:

  • Knowledge Based
  • Feature Based
  • Template Matching
  • Appearance Based

Knowledge Based Method

The knowledge-based method of face detection depends on a set of rules derived from real human knowledge about what makes up a face. This isn't the best method, as it can be very difficult to write a set of rules that covers the diversity of human faces. The rules supplied could be either too specific or too general, which can lead to false positives or false negatives.
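
To make the idea of rule-based detection concrete, here is a toy sketch (using only NumPy) of the kind of hand-written rules such a system might apply to a grayscale candidate window. The bands and thresholds are invented for illustration, which is exactly why this approach struggles: real faces often break whatever rules you write.

```python
import numpy as np

def looks_like_face(window: np.ndarray) -> bool:
    """Toy rule-based check on a grayscale candidate window (illustrative only)."""
    h, w = window.shape
    eyes = window[h // 5: 2 * h // 5, :]        # band where eyes usually sit
    cheeks = window[3 * h // 5: 4 * h // 5, :]  # band where cheeks usually sit

    # Rule 1: the eye band is typically darker than the cheek band.
    darker_eyes = eyes.mean() < cheeks.mean()

    # Rule 2: a frontal face is roughly left/right symmetric.
    left, right = window[:, : w // 2], window[:, w - w // 2:]
    asymmetry = np.abs(left.astype(float) - np.fliplr(right).astype(float)).mean()
    symmetric = asymmetry < 40  # arbitrary threshold: too strict for some
                                # faces, too loose for some non-faces

    return darker_eyes and symmetric

# Quick smoke test on a random patch (almost certainly not a face).
print(looks_like_face(np.random.randint(0, 255, (100, 100), dtype=np.uint8)))
```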

Feature Based Method

The feature-based method of face detection works by locating faces through their structural features. This method generally performs well: a classifier is first trained on those features, and it can then tell the difference between facial and non-facial regions.
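
A brief sketch of one classic feature-based detector in practice: dlib's frontal face detector, which pairs HOG features with a trained linear classifier. It assumes the dlib and opencv-python packages are installed and that "photo.jpg" is a local image.

```python
import cv2
import dlib

# dlib's built-in detector: HOG features + a trained linear classifier.
detector = dlib.get_frontal_face_detector()

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The second argument upsamples the image once so smaller faces
# still produce enough gradient structure to be found.
rects = detector(gray, 1)

for r in rects:
    cv2.rectangle(image, (r.left(), r.top()), (r.right(), r.bottom()),
                  (255, 0, 0), 2)

cv2.imwrite("hog_detected.jpg", image)
```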

Template Matching Method

The template matching method of face detection compares any input image or video against templates. These templates are either previously defined with a standard face pattern or built from parameters saved from previous correlations. While this method is likely the simplest to implement, it can be prone to errors in face detection.
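
Here is a minimal template matching sketch with OpenCV. "scene.jpg" and the small cropped face "face_template.jpg" are hypothetical local files, and the 0.7 score cut-off is an arbitrary choice; faces that differ from the template in pose, scale, or expression will score poorly, which is where this method tends to fail.

```python
import cv2

scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("face_template.jpg", cv2.IMREAD_GRAYSCALE)
th, tw = template.shape

# Slide the template over the scene and score every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

if max_val > 0.7:  # arbitrary threshold, a common source of detection errors
    x, y = max_loc
    cv2.rectangle(scene, (x, y), (x + tw, y + th), 255, 2)
    cv2.imwrite("template_match.jpg", scene)
    print(f"Possible face at ({x}, {y}) with score {max_val:.2f}")
else:
    print("No region matched the template closely enough.")
```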

Appearance Based Method

The appearance-based method of face detection probably has the best performance of the methods mentioned so far. This approach mainly uses statistical analysis and machine learning to find the characteristics that define a face. It is also commonly used within face recognition to extract facial features.
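
As one example of the appearance-based approach, the sketch below runs OpenCV's DNN module with a pretrained SSD face detector. The two model files ("deploy.prototxt" and "res10_300x300_ssd_iter_140000.caffemodel") are assumptions: they are distributed with OpenCV's samples and have to be downloaded separately.

```python
import cv2
import numpy as np

# Pretrained SSD face detector (model files assumed to be downloaded locally).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

image = cv2.imread("photo.jpg")
h, w = image.shape[:2]

# The network expects 300x300 BGR input with these mean values subtracted.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()  # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        # Coordinates come back normalized to [0, 1]; scale to pixels.
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype(int).tolist()
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imwrite("dnn_detected.jpg", image)
```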

Facial Recognition

What is it?

While facial detection is capable of finding and confirming that a face is present, facial recognition goes further and identifies whose face it is. It is a biometric technology that tries to figure out who the facial candidate belongs to, meaning it tries to match a human face from an image or video to a specific person. Usually, the application compares the captured human face against every face stored in a database. While it isn't completely accurate, it can provide an identification that has an extremely strong chance of being the correct match.

How does it do this?

Facial recognition works by pinpointing and measuring the facial features of the person in whatever video or image was given. The information gathered is then compared against known faces, which are generally saved in some sort of database or storage. This helps confirm the identity of the candidate, or whether they are even present in the database to begin with. As there is a multitude of facial recognition applications, the process by which human faces are identified may differ depending on the technology being used and for what purpose.
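
A compact sketch of the compare-against-a-database step, using the open-source face_recognition package. The package being installed, along with the local files "alice.jpg" (a known person) and "unknown.jpg" (a newly captured face), are all assumptions made for the example.

```python
import face_recognition

# A tiny "database": one known person's face encoding.
known_image = face_recognition.load_image_file("alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode whatever faces appear in the newly captured image.
unknown_image = face_recognition.load_image_file("unknown.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

for encoding in unknown_encodings:
    # Smaller distance means more similar; the library's default
    # matching tolerance is around 0.6.
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    verdict = "Match" if distance < 0.6 else "No match"
    print(f"{verdict} (distance {distance:.2f})")
```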

Some people classify recognition algorithms in two ways: holistic or feature-based. Holistic algorithms try to recognize the entire human face as one component. Feature-based algorithms, meanwhile, go further and divide the human face into separate components. These types of algorithms not only analyze the components individually but also keep track of the spatial relationships between them.

Generally, recognition algorithms follow one of two main approaches: geometric or photometric. The geometric approach looks at distinguishing features, while the photometric approach is more statistical in that it distills images into values. These values are then compared against a template in order to eliminate variances.
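
To show what "distinguishing features" can mean in the geometric approach, here is a toy worked example: describe a face by ratios of distances between a few landmarks, so the description does not depend on how large the face appears in the image. All the coordinates are made up, and the second face is simply the first one scaled up.

```python
import numpy as np

def geometric_signature(landmarks: dict) -> np.ndarray:
    """Return scale-invariant distance ratios between a few facial landmarks."""
    eye_l = np.array(landmarks["left_eye"])
    eye_r = np.array(landmarks["right_eye"])
    nose = np.array(landmarks["nose_tip"])
    mouth = np.array(landmarks["mouth_center"])

    eye_span = np.linalg.norm(eye_r - eye_l)  # used as the unit of scale
    return np.array([
        np.linalg.norm(nose - eye_l) / eye_span,
        np.linalg.norm(nose - eye_r) / eye_span,
        np.linalg.norm(mouth - nose) / eye_span,
    ])

face_a = {"left_eye": (30, 40), "right_eye": (70, 40),
          "nose_tip": (50, 60), "mouth_center": (50, 80)}
face_b = {"left_eye": (60, 80), "right_eye": (140, 80),
          "nose_tip": (100, 120), "mouth_center": (100, 160)}

# Identical ratios suggest the two images could show the same face.
print(np.allclose(geometric_signature(face_a), geometric_signature(face_b)))
```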

How is Facial Recognition Implemented in the World?

Facial recognition has many use cases and can be extremely helpful. Not only can it make the life of any average, modern phone user a little bit more convenient — it is also well known for assisting the police or the FBI in catching criminals.

Many phone users today use both facial detection and facial recognition through different phone features. Modern Samsung and Apple devices can organize photos by face, sometimes automatically. A qualifying phone can automatically detect faces in any photo taken, which then allows the user to organize photos by face or even search for a specific face as if it were a tag. The device does this by comparing a newly captured face against known people the user has labeled manually, or that the device may have added automatically because they appear often in the user's gallery. Many phone users also use facial recognition to unlock their phones. This is a popular and well-known iPhone feature, Face ID, and similar functionality is available on Samsung devices. For both of these features, the technology first detects a face and then tries to recognize it, hence the use of both facial detection and facial recognition.

It is known that law enforcement agencies use facial recognition to help find criminals. If the police have an image of a suspect or arrestee, they may run it through local, state, and even federal databases to see if the person is recognized. If they are, it means they already have a history with law enforcement. If no results are found, it is possible the facial recognition failed, but it is more likely that this is the first time the individual has been run through the system.

Additional Facial Detection Use Cases

While facial recognition uses facial detection in order to work, there are other technologies that also rely on facial detection to complete their functionality.

Facial Motion Capture

Facial motion capture is a process that uses face detection to convert the movements of a person's face into a digital database. This database is then used to create computer graphics (commonly known as CG or CGI) and computer animation, which are commonly used to create movies, games, and even real-time avatars. The data saved in the database usually consists of coordinates or relative positions of reference points on the person's face.
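
As a rough sketch of what saving reference points per frame might look like, the snippet below uses MediaPipe's Face Mesh to record landmark coordinates from a webcam. The mediapipe and opencv-python packages being installed, and a webcam being available at index 0, are assumptions for this example.

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
capture = cv2.VideoCapture(0)

recording = []  # one entry per frame: a list of (x, y, z) reference points

while len(recording) < 100:  # capture roughly 100 frames of motion
    ok, frame = capture.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        recording.append([(p.x, p.y, p.z) for p in landmarks])

capture.release()
print(f"Captured {len(recording)} frames of facial motion data")
```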

A similar process is facial expression capture. It works by taking human faces as input and recognizing emotions, which are then used to drive the corresponding computer-generated characters.

Face Detection in Augmented Reality

Just as face detection is a fundamental part of many technologies, it also plays a big part in facial augmented reality. Face-based AR works by detecting facial features and mapping AR elements onto them as needed. Once the presence of a human face is detected through facial detection, the AR software generates a 3D face mesh that can then modify the face in real time.
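
A very simplified, 2D stand-in for that idea: detect a face with OpenCV and paste a transparent PNG over the upper part of the face. "selfie.jpg" and "sunglasses.png" are hypothetical local files, and real AR face tracking builds a full 3D mesh and follows it frame by frame rather than pasting onto a flat rectangle.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
overlay = cv2.imread("sunglasses.png", cv2.IMREAD_UNCHANGED)  # BGRA with alpha

frame = cv2.imread("selfie.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    # Resize the overlay to the face width and place it over the eye region.
    glasses = cv2.resize(overlay, (w, h // 3))
    gh, gw = glasses.shape[:2]
    roi = frame[y + h // 4: y + h // 4 + gh, x: x + gw]

    # Alpha-blend the overlay onto the face region.
    alpha = glasses[:, :, 3:4] / 255.0
    roi[:] = (1 - alpha) * roi + alpha * glasses[:, :, :3]

cv2.imwrite("ar_selfie.jpg", frame)
```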

One of the applications most famous for its AR features is Snapchat. Facebook, Apple, and Google are also prominent in the AR face tracking field. This is because AR face tracking has many use cases, such as advertising, retail, makeovers, and more.

AR face tracking is also a good way to market. It is commonly seen in Snapchat ad filters, a new and fun way for brands to get content out to consumers. It is a memorable way to introduce advertisements; for example, it offers a new way to promote movies by releasing filters that turn the user into a movie character. It is even possible for consumers to use AR face tracking to try out products they may want to buy. The marketplace is increasingly letting consumers virtually try on a product before making a purchase, like the website Topology Eyewear, a company that lets users try on a pair of glasses using AR. Makeup companies can do the same thing by creating beauty filters featuring specific products.

