Last year Apple released a new framework for machine learning called CoreML, which lets us build intelligent apps. For this, we need a CoreML model to do the intelligent processing — the model is like the heart of the machine learning process in your app. Apple provides only a few ready-made models to integrate machine learning into your app.
You can also get some useful models from this website. These limited pre-built models let you achieve some things with machine learning in your app, but we need something better to achieve big things like facial recognition.
Okay, let’s come to our topic: how to do facial recognition on iOS? When talking about facial recognition using machine learning, we first need an efficient, highly trained model to analyse and process our input data (an image). You have to work hard to train such a model for good performance and efficiency.
Developing and training a model for good performance is a tedious and tiring process. So instead of creating a new model from scratch, it is better to go with a pre-built model that offers good efficiency and performance along with continuous learning capability. The AWS Rekognition service is one of the best options for visual analysis of images and videos.
Introduction to AWS Rekognition service:
AWS Rekognition is a deep learning-based visual analysis service for analysing, searching and verifying millions of images and videos. It makes it much easier to add image and video analysis to your app. You just provide the image or video you need analysed; the service does all the heavy lifting and returns every possible response from the content it analysed.
AWS Rekognition integrates easily with other AWS services like AWS S3 and AWS Lambda, so you can develop a scalable and reliable visual analysis application.
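To give a feel for the request/response shape, here is a hedged sketch of Rekognition's DetectFaces operation using Python's boto3 (the iOS SDK we will use later exposes the same operation). The region, the helper names and the sample response values are assumptions for illustration; a real call needs configured AWS credentials.

```python
# Sketch: calling DetectFaces and reading back basic face attributes.
# Helper names and the sample values below are illustrative, not official.

def detect_faces(image_bytes):
    """Call Rekognition DetectFaces with ALL attributes (needs AWS credentials)."""
    import boto3  # AWS SDK for Python
    client = boto3.client("rekognition", region_name="us-east-1")
    response = client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],
    )
    return response["FaceDetails"]

def summarize_face(face):
    """Reduce one FaceDetail entry to a short human-readable summary."""
    gender = face["Gender"]["Value"]
    beard = "with beard" if face["Beard"]["Value"] else "no beard"
    # Emotions come back scored; pick the highest-confidence one.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]
    return f"{gender}, {top_emotion.lower()}, {beard}"

# Illustrative FaceDetail fragment (structure per the docs, values made up):
sample = {
    "Gender": {"Value": "Male", "Confidence": 99.1},
    "Beard": {"Value": True, "Confidence": 90.2},
    "Emotions": [
        {"Type": "HAPPY", "Confidence": 95.0},
        {"Type": "CALM", "Confidence": 4.0},
    ],
}
print(summarize_face(sample))  # Male, happy, with beard
```

The same response structure drives the attribute analysis (gender, emotion, beard, age range) discussed below.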
Uses of AWS Rekognition service:
- Searchable image and video libraries
- Face based user verification
- Sentiment and demographic analysis
- Facial recognition
- Unsafe content detection
- Celebrity detection
- Text detection
Common benefits of using AWS Rekognition service:
- Integrate powerful image and video recognition into your apps
- Deep learning-based image and video analysis
- Scalable image analysis
- Integrate with other AWS services
- Low cost
The main goal of this series of blogs is to identify your app’s users by their picture (a modern way of logging in) instead of a traditional username and password. So I don’t want to explain every detail of the AWS Rekognition service. If you are interested, feel free to check out the links below for more details.
AWS Rekognition service docs Link.
AWS Rekognition service console Link.
Key points about the AWS Rekognition service (we will mostly discuss image analysis):
- In AWS Rekognition you can search for a user using their live photo. For this, you need to create a face collection and add the face of every new user when they sign up for your app.
- Image analysis can report basic attributes of a face in the input image, such as whether the person is male or female, happy or sad, with or without a beard, etc.
- AWS provides iOS and Android SDKs for the Rekognition service.
- As of now, this service is available only in the following regions:
- EU (Ireland)
- US East (N. Virginia)
- US East (Ohio)
- US West (Oregon)
- You can send the input image directly in the API request, or send an AWS S3 object reference to the Rekognition service.
- Your input image to AWS Rekognition should be in .png or .jpg format.
- If there are many faces in the input image, Rekognition will detect up to the 100 largest faces.
- You can set a similarity threshold for the compare/search face actions. For every face search, the service returns a confidence value for each potential match, so you can treat a search as successful only when the confidence exceeds a chosen value (for example, greater than 99%).
- AWS Rekognition does not save the actual image in the face collection. Instead, it analyses the input image, extracts feature vectors from the face, and stores only those vectors in the collection. Misuse of users’ faces is not possible, since only mathematical values are stored.
- You can index a user’s face and later find that user based on the index.
- You can delete faces from a collection using the face ID if needed. This is useful when a user changes their profile picture or deletes their account.
- Since machine learning is all about prediction, you can’t rely on it completely. To make sure the prediction is correct, add validation when the user sets a profile image: it should not be a celebrity image, it should not be an image of an object like a flower, car or bike, it should contain a person’s face, the estimated age of the face should approximately match the DOB if you collected it from the user, and so on.
- AWS Rekognition has two types of API. In our case, we need the storage-based APIs.
- Storage-based APIs — for image analysis, these include the following operations: index faces, list faces, search faces by image, search faces and delete faces.
- Non-storage-based APIs — for image analysis, these include the following operations: detect labels, detect faces, compare faces, detect moderation labels, recognise celebrities and detect text.
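The storage-based workflow from the points above (index a face at sign-up, search by a live photo with a similarity threshold, delete on account changes) can be sketched as follows, using Python's boto3 for brevity; the iOS SDK exposes equivalent operations. The collection name "app-users", the region and the helper names are assumptions, and real API calls need AWS credentials.

```python
# Sketch of the storage-based face workflow. Collection name, region and
# helper names are assumptions; the threshold logic mirrors the key point
# above about only accepting matches over a chosen confidence value.

COLLECTION_ID = "app-users"  # assumed collection name

def index_user_face(image_bytes, user_id):
    """Add a new user's face to the collection at sign-up (IndexFaces)."""
    import boto3
    client = boto3.client("rekognition", region_name="us-east-1")
    response = client.index_faces(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": image_bytes},
        ExternalImageId=user_id,  # ties the stored face back to your user record
        MaxFaces=1,
    )
    return response["FaceRecords"]

def search_user(image_bytes, threshold=99.0):
    """Look a user up by a live photo (SearchFacesByImage)."""
    import boto3
    client = boto3.client("rekognition", region_name="us-east-1")
    return client.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,
        MaxFaces=1,
    )

def delete_user_faces(face_ids):
    """Remove faces when a user changes their picture or deletes the account."""
    import boto3
    client = boto3.client("rekognition", region_name="us-east-1")
    client.delete_faces(CollectionId=COLLECTION_ID, FaceIds=face_ids)

def best_match(search_response, min_similarity=99.0):
    """Accept the search only if the top match clears the similarity bar."""
    matches = search_response.get("FaceMatches", [])
    if not matches:
        return None
    top = max(matches, key=lambda m: m["Similarity"])
    return top["Face"]["FaceId"] if top["Similarity"] >= min_similarity else None

# Illustrative SearchFacesByImage response fragment (values made up):
sample = {"FaceMatches": [{"Similarity": 99.4, "Face": {"FaceId": "face-123"}}]}
print(best_match(sample))        # face-123
print(best_match(sample, 99.9))  # None
```

Note how the threshold check lives in your own code as well as in the `FaceMatchThreshold` parameter, so you stay in control of how strict the login match must be.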
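The profile-image validation mentioned above can also be sketched. This hedged example works on the FaceDetails list returned by DetectFaces; the five-year age tolerance and the helper name are assumptions, and a celebrity check would use the separate RecognizeCelebrities operation.

```python
# Sketch: validating a candidate profile picture against the rules above
# (exactly one face, estimated age roughly matching the user's DOB).
# The tolerance and helper name are illustrative choices, not official.

def validate_profile_image(face_details, user_age=None):
    """Return (ok, reason) for a candidate profile picture.

    face_details is the FaceDetails list from DetectFaces; user_age is
    computed from the DOB if you collected it from the user.
    """
    if len(face_details) != 1:
        return False, "image must contain exactly one face"
    face = face_details[0]
    age = face.get("AgeRange")
    if user_age is not None and age is not None:
        # Allow some slack around Rekognition's estimated age range.
        if not (age["Low"] - 5 <= user_age <= age["High"] + 5):
            return False, "estimated age does not match the user's DOB"
    return True, "ok"

# Illustrative FaceDetails fragment (values made up):
faces = [{"AgeRange": {"Low": 25, "High": 35}}]
print(validate_profile_image(faces, user_age=30))  # (True, 'ok')
print(validate_profile_image([], user_age=30))     # fails: no face found
```

Rejecting images of objects (flowers, cars, bikes) falls out naturally from the "exactly one face" rule, since DetectFaces returns an empty list for them.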
In the next blog, we will start integrating the AWS Rekognition service with an iOS app to identify users, using the AWS Rekognition iOS SDK.