On 6 and 7 June Madrid’s La Nave pavilion hosted the 5th OpenExpo Europe, Europe’s biggest Open-Source, Free Software and Open World Economy (Open Data and Open Innovation) trade fair, attracting a turnout this year of 5000 attendees and 400 companies from Spain and around the world.
GMV joined in this event together with other firms like Google, Microsoft, Oracle and Red Hat, taking an active part not only in the congress, where José Carlos Baquero, Manager of GMV’s Big Data and Artificial Intelligence Division, gave a talk, but also by running a stand at the fair.
On this stand GMV’s team staged a demo using some of the Amazon services that its artificial-intelligence team is currently working with. This demonstration included a raffle of the new Amazon Echo. Those who wished to take part simply had to follow these steps:
- Post a selfie in a public Twitter account with the hashtag #gmvopenexpo2018.
- Drop into GMV’s stand, where our team photographed each person and immediately identified him or her, displaying on screen, as proof, the tweet that person had previously posted. The participant was then instantly entered into the raffle.
This facial recognition feat was achieved mainly by means of an Amazon service called Amazon Rekognition. This service provides several different video- and image-analysis features, including:
- Analysis of facial features in images and videos.
- Object, scene, and activity detection in images and videos.
- Facial recognition in images.
- Celebrity recognition in videos and images.
- Unsafe video and image content detection.
- Recognition of text in images.
To find out more about this Amazon service, see their website: https://aws.amazon.com/rekognition/?nc1=h_ls
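As a rough sketch of how these features are exposed through the AWS SDK for Python (boto3), the snippet below runs face analysis on a local image. The `summarize_faces` helper, the region, and the sample file name are illustrative assumptions, not details taken from GMV’s demo:

```python
def summarize_faces(response):
    """Reduce a Rekognition detect_faces response to (confidence, bounding box) pairs."""
    return [
        (face["Confidence"], face["BoundingBox"])
        for face in response.get("FaceDetails", [])
    ]

if __name__ == "__main__":
    import boto3  # AWS SDK for Python; requires credentials to be configured

    rekognition = boto3.client("rekognition", region_name="eu-west-1")
    with open("selfie.jpg", "rb") as img:
        response = rekognition.detect_faces(
            Image={"Bytes": img.read()},
            Attributes=["DEFAULT"],  # bounding box, landmarks, pose, confidence
        )
    for confidence, box in summarize_faces(response):
        print(f"face at {box} with confidence {confidence:.1f}%")
```

The same client also exposes the other features listed above (`detect_labels`, `recognize_celebrities`, `detect_text`, and so on), all following this request/response pattern.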
To complete the process, various other Amazon and external services were used. The process was divided into two separately run sub-processes.
The first, using Tweepy, a Python library for accessing the Twitter API, monitored the tweets posted with the chosen hashtag (#gmvopenexpo2018). If a tweet contained an image, the image was sent to an AWS Lambda function (useful for serverless computation), where Amazon Rekognition was invoked to detect whether a person appeared in the image and, if so, to save the facial data, along with the tweet metadata, in a database.
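This first sub-process might look roughly like the sketch below, which uses Tweepy’s streaming API (as it stood in Tweepy 3.x) to watch the hashtag and pull photo URLs out of each tweet. The listener class name, the `forward_to_lambda` stub, and the credential placeholders are assumptions for illustration; in the real demo the image went on to a Lambda function that called Rekognition and stored the face data:

```python
def image_urls(status_json):
    """Extract photo URLs from a tweet's media entities, if any."""
    media = status_json.get("extended_entities", {}).get("media", [])
    return [m["media_url_https"] for m in media if m.get("type") == "photo"]

def forward_to_lambda(url, status_json):
    # Placeholder: the demo forwarded the image to an AWS Lambda function,
    # which invoked Amazon Rekognition and stored face data plus tweet metadata.
    print(f"would forward {url} for tweet {status_json.get('id_str')}")

if __name__ == "__main__":
    import tweepy  # Twitter API client; needs app credentials

    class HashtagListener(tweepy.StreamListener):
        def on_status(self, status):
            for url in image_urls(status._json):
                forward_to_lambda(url, status._json)

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    stream = tweepy.Stream(auth, HashtagListener())
    stream.filter(track=["#gmvopenexpo2018"])
```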
The second process analyzed the images from the stand webcam. To run this process automatically, an initial real-time face detection was performed with OpenCV, a popular image-analysis library. As soon as a person was detected in front of the camera, a photo was taken, and this image was compared by Amazon Rekognition against the database images to recognize the face. If a database match was found, the data associated with that face was retrieved, including the original tweet, which was then shown on screen as proof of the facial recognition.
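This second sub-process could be sketched as follows: OpenCV’s Haar-cascade detector watches the webcam, and when a face appears, the frame is sent to Rekognition’s `search_faces_by_image` against a face collection. The collection name, the similarity threshold, and the `best_match` helper are illustrative assumptions:

```python
def best_match(response, min_similarity=90.0):
    """Return the ExternalImageId of the closest collection match, or None."""
    matches = [
        m for m in response.get("FaceMatches", [])
        if m["Similarity"] >= min_similarity
    ]
    if not matches:
        return None
    top = max(matches, key=lambda m: m["Similarity"])
    return top["Face"].get("ExternalImageId")

if __name__ == "__main__":
    import boto3
    import cv2  # OpenCV, used for local real-time face detection

    rekognition = boto3.client("rekognition", region_name="eu-west-1")
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # A face is in front of the camera: encode the frame and query Rekognition
            _, jpeg = cv2.imencode(".jpg", frame)
            response = rekognition.search_faces_by_image(
                CollectionId="openexpo-selfies",  # assumed collection name
                Image={"Bytes": jpeg.tobytes()},
            )
            match = best_match(response)
            if match:
                print(f"recognized participant: {match}")  # then retrieve their tweet
                break
    camera.release()
```

Keeping the cheap Haar-cascade detection local and calling Rekognition only when a face is actually present avoids sending every webcam frame to the cloud.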
The results were positive, both as an eye-catching showcase and at a practical level: despite this being such a simple demo, it showed that a technology like Amazon Rekognition can deliver in a few minutes a result that might otherwise take days of work to implement from scratch. This is where the real potential of these tools lies, saving an appreciable amount of time in certain cases and showing how important solutions of this type may prove in a business context.
Author: Jorge Moreno
The author’s views are entirely his own and may not reflect the views of GMV