In this article, we test several pre-trained Face Recognition APIs on a range of relevant use cases.
In recent years, computer vision has become one of the most popular applications of Artificial Intelligence. This popularity is due to the huge diversity of applications and needs: medical imaging, industry, transport, surveillance, security, etc. Nowadays, almost every field uses cameras and pictures in its activities.
Computer vision includes various functionalities:
This table is not an exhaustive list of all computer vision functionalities. Many solutions combine several of these features.
It is very important to distinguish between pre-trained APIs and AutoML APIs:
This article focuses on pre-trained Face Recognition APIs. The aim is to show which problems can be solved with this kind of API. Who are the main providers on the market? What is the optimal process when using pre-trained APIs?
For our study of Face Recognition pre-trained APIs, we selected 5 providers whose APIs offer high performance according to many blog articles and rankings.
This is the pool of provider APIs we are going to test. It is worth noting that other solutions exist, including open-source ones: we can mention ChoochAI, Facex, Clarifai, Eyedea, Sightcorp, Eyerecognize, Ayonix, Microsoft Azure, etc.
As mentioned previously, face detection APIs are used in hundreds of fields, for a wide variety of use cases. In this article, we test different face recognition APIs with various types of pictures representing common use cases.
We chose 3 use cases from different fields, each represented by one picture, and we analyze these features:
Face attributes depend on the provider: they can include age, gender, smile, glasses, emotions, ethnicity, etc.
For each use case, we tested the Face Detection API of the 5 providers with one picture. Of course, for a real project you will need to test on a representative sample of your database (not just one picture) to get an accurate view of each provider's performance.
Beyond comparing results from the different APIs, Eden AI provides the Genius functionality, which returns a smart combination of all results. For our examples, we will see what we can get with this combined result.
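To give an idea of what a combined result can look like, here is a minimal Python sketch that merges face attributes returned by several providers (majority vote for categorical attributes, average for age). It is only an illustration under these assumptions, not the actual Genius algorithm.

```python
# Illustrative sketch only: a naive combination of face attributes predicted by
# several providers (majority vote for categorical attributes, mean for age).
# This is NOT the actual Genius algorithm.
from collections import Counter
from statistics import mean

def combine_predictions(predictions):
    """predictions: list of dicts like {"gender": "male", "age": 27, "glasses": True}"""
    genders = [p["gender"] for p in predictions if "gender" in p]
    glasses = [p["glasses"] for p in predictions if "glasses" in p]
    ages = [p["age"] for p in predictions if "age" in p]
    return {
        "gender": Counter(genders).most_common(1)[0][0] if genders else None,
        "glasses": Counter(glasses).most_common(1)[0][0] if glasses else None,
        "age": round(mean(ages), 1) if ages else None,
    }

# Example with three hypothetical provider outputs for the same face
print(combine_predictions([
    {"gender": "male", "age": 25, "glasses": True},
    {"gender": "male", "age": 31, "glasses": False},
    {"gender": "male", "age": 29, "glasses": True},
]))
# -> {'gender': 'male', 'glasses': True, 'age': 28.3}
```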
The API response is a text response (usually in JSON format) that will then be used to develop applications. For our example, the way to proceed is:
1- Benchmark Face Detection / Analysis APIs available on the market
2- Choose the API provider that best fits your project OR combine the results of multiple providers' APIs
3- Integrate the final API into your project / software
Finally, depending on the project, visual results with bounding boxes drawn on the pictures may or may not be useful. For the benchmark, however, this is the best and fastest way to visualize and compare performance.
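As an example of how such a response can be exploited, here is a minimal Python sketch (using Pillow) that parses a hypothetical JSON face detection response and draws the bounding boxes on the picture. The field names and the relative-coordinate format are assumptions; they have to be adapted to each provider's actual schema.

```python
# Minimal sketch: parse a (hypothetical) face detection JSON response and draw
# bounding boxes on the picture with Pillow, to visually compare providers.
# The "faces" / "bounding_box" field names and relative coordinates are assumptions.
import json
from PIL import Image, ImageDraw

def draw_faces(image_path, response_json, output_path):
    image = Image.open(image_path)
    draw = ImageDraw.Draw(image)
    width, height = image.size
    for face in json.loads(response_json).get("faces", []):
        box = face["bounding_box"]  # assumed relative coordinates in [0, 1]
        left, top = box["left"] * width, box["top"] * height
        right, bottom = left + box["width"] * width, top + box["height"] * height
        draw.rectangle([left, top, right, bottom], outline="red", width=3)
    image.save(output_path)

# Example with a fake single-face response
sample = '{"faces": [{"bounding_box": {"left": 0.41, "top": 0.22, "width": 0.18, "height": 0.30}}]}'
draw_faces("use_case_1.jpg", sample, "use_case_1_boxes.jpg")
```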
Google, BetafaceAPI, AWS and Face++ provide APIs for multiple computer vision functionalities. They also provide a graphical interface, but only for testing on a few pictures.
The first test concerns a photo taken during a trade show where we exhibited the Eden AI solution. The difficulty here is that the faces are masked.
Google and Amazon results:
BetafaceAPI result:
Samy Melaine (on the left, CTO of Eden AI):
5oclock shadow: yes (4%), age: 47 (60%), arched eyebrows: no (92%), attractive: no (86%), bags under eyes: no (57%), bald: no (3%), bangs: no, beard: yes (39%), big lips: no (72%), big nose: no (22%), black hair: no (24%), blond hair: no, blurry: yes (32%), brown hair: no (61%), bushy eyebrows: no (2%), chubby: no (14%), double chin: no (44%), expression: neutral (77%), gender: male, glasses: no, goatee: yes (10%), gray hair: no (35%), heavy makeup: no (97%), high cheekbones: no (87%), mouth open: no (54%), mustache: yes (17%), narrow eyes: yes (4%), oval face: no (19%), pale skin: no (21%), pitch: -8.27, pointy nose: no (55%), race: white (89%), receding hairline: no (5%), rosy cheeks: no, sideburns: no (10%), straight hair: no (36%), wavy hair: no (88%), wearing earrings: no (88%), wearing hat: no (2%), wearing lipstick: no, wearing necklace: no, wearing necktie: no (9%), yaw: -1.05, young: no (12%),
Taha Zemmouri (on the right, CEO of Eden AI):
5oclock shadow: no (60%), age: 17 (60%), arched eyebrows: no, attractive: no (45%), bags under eyes: no (96%), bald: no (93%), bangs: no (33%), beard: no (70%), big lips: no (69%), big nose: no (91%), black hair: no (18%), blond hair: no, blurry: yes (30%), brown hair: no (8%), bushy eyebrows: yes (14%), chubby: no (89%), double chin: no, expression: neutral (77%), gender: male (47%), glasses: no, goatee: no, gray hair: no, heavy makeup: no (90%), high cheekbones: no (77%), mouth open: no (55%), mustache: no (96%), narrow eyes: no (2%), oval face: yes (15%), pale skin: no (8%), pitch: -7.32, pointy nose: no (46%), race: asian (96%), receding hairline: no, rosy cheeks: no, sideburns: no, straight hair: no (40%), wavy hair: no (36%), wearing earrings: no (66%), wearing hat: yes (15%), wearing lipstick: no (93%), wearing necklace: no (98%), wearing necktie: no (77%), yaw: 14.14, young: yes (91%),
Face++ result:
Imagga result:
Use case n°1 review:
This use case is based on a photo of two members of our team at a trade fair. The difficulty was that they are wearing masks, yet all providers detected the two faces except Microsoft Azure.
Imagga, BetafaceAPI, AWS, GCP and Face++ predict the right gender for both faces. Concerning Samy's glasses, Face++ is the only one that detected them; the others did not, and Imagga does not provide this attribute.
For the age, Taha is 27 and Samy is 28. Imagga only predicts age categories, and it predicts babies for one and kids for the other, which is wrong. Google does not predict age. Here are the predictions from BetafaceAPI, AWS and Face++:
No provider manages to get good precision for age on this picture, but Face++ gives the most relevant results.
This second use case concerns a more classic photo taken at a checkout in a supermarket.
Google and Amazon results:
Eden AI: GCP and AWS face detection API
BetafaceAPI result:
Face++ result:
Imagga result:
Use case n°2 review:
For this use case, the picture quality is not optimal. BetafaceAPI does not detect all 4 faces. Face++ and AWS found the three male faces and the one female face. Imagga found 2 males and 2 females. Face++ detected the woman's glasses. We can also note that Face++ returned relevant smiling values (neutral and negative), and GCP detected the man's headwear.
This last use case is a photo from a running race, with the peculiarity of containing a lot of faces.
Google and Amazon results:
BetafaceAPI result:
Face++ result:
Imagga result:
Use case n°3 review:
Betaface found 1 face, Google found 9 faces, Face++ found 16 faces and Imagga found 20 faces. AWS comes close to detecting every face in the picture, with 33 faces found. AWS seems very powerful at detecting all the faces in a picture, even though Imagga also performs well. It is also worth noting that AWS and Face++ detected the eyeglasses on the runners at the front.
Concerning the costs of the APIs, prices are defined by volume thresholds with degressive rates: the more images you process, the lower the unit price.
We consider a company that needs to process 2M images per month:
BetaFace and Imagga do not offer pay-per-use pricing, only monthly subscriptions. For this volume of data, the customer would have to contact them directly.
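As a purely illustrative sketch of how such degressive pricing adds up for 2M images per month, the snippet below computes a monthly bill from tiered rates. The tier boundaries and unit prices are hypothetical placeholders, not the actual rates of any provider mentioned in this article.

```python
# Illustrative only: computing a monthly bill under degressive (tiered) pricing.
# The tier sizes and unit prices below are hypothetical placeholders.
def monthly_cost(n_images, tiers):
    """tiers: list of (images_in_tier, price_per_image); the last tier size may be inf."""
    cost, remaining = 0.0, n_images
    for tier_size, unit_price in tiers:
        billed = min(remaining, tier_size)
        cost += billed * unit_price
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# Hypothetical tiers: first 1M images at $0.0010 each, everything above at $0.0006
hypothetical_tiers = [(1_000_000, 0.0010), (float("inf"), 0.0006)]
print(f"${monthly_cost(2_000_000, hypothetical_tiers):,.2f}")  # -> $1,600.00
```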
We intentionally chose 3 pictures corresponding to very different use cases: one with masks, one with low quality and faces in profile, and one with many faces to detect. We noticed that each provider has specific strengths and weaknesses. Some APIs are better at face detection, others at detecting glasses or headwear, others at predicting age or gender, others at emotions, etc.
For GCP and AWS, we don’t need to use their API directly. In fact, the Eden AI Face Detection API allows to get the 2 providers APIs results (and also Microsoft Azure result) with only one simple request. With few lines of code, we can have access to the results from multiple providers. Imagga, BetafaceAPI and Face++ are not implemented on AI-Compare for the moment, so we use their API or interface directly.
With Eden AI, you get fast access to results from various providers, which gives you a better idea of which solution best fits your needs.
The decision-making process is as follows:
First, you run your data on Eden AI to benchmark the solutions available on the market. Then you have 3 options:
a. You find a result that leads you to choose one API that meets your target performance.
b. Several providers give good results, so you use the Genius functionality to combine their strengths and obtain a combined result that is better than any single provider's.
c. Pre-trained APIs cannot provide good results for your project:
This process helps guarantee that you make the right choice for your project to succeed. Eden AI is simply a tool that lets you run such a benchmark very easily and quickly. Finally, it is possible to use the Eden AI API for the entire project, avoiding accounts and billing with many providers while keeping the flexibility of not being tied to a single provider.
If you are a solution provider and want to integrate Eden AI, contact us at contact@ai-compare.com.
You can directly start building now. If you have any questions, feel free to schedule a call with us!