The importance of AI ethics in identifying persons and categorizing images
From an artificial intelligence standpoint, vision and image-recognition APIs are now mature enough to be deployed in advanced manufacturing and machine-vision processes. We can now speak confidently about tasks such as detecting complex faults, classifying textures and materials, reading characters, verifying assemblies, and localizing misshapen parts. Image-analysis software now offers real-time solutions to complex vision challenges. Image-recognition procedures let us interpret what the vision system records, then classify that information and use it to optimize an industrial production chain, or to meet needs in any other sector, in ways that traditional vision could not.
Image recognition works by building a neural network that processes an image's pixels, layer by layer, to extract increasingly abstract features. Like all artificial intelligence, this technology calls for training to improve the functions offered and the model's precision. For that reason these networks are normally fed with as many labeled images as possible.
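The training idea described above can be sketched at its smallest scale with a single artificial neuron: a weighted sum over pixel values passed through a sigmoid, with the weights nudged after every labeled example. Everything below (the 3×3 toy "images", their labels, and the learning rate) is a hypothetical, minimal stand-in for a real network and dataset.

```python
import math
import random

def bar_image(row):
    """A 3x3 image flattened to 9 pixels: one bright row, the rest dark."""
    img = [0.0] * 9
    for col in range(3):
        img[row * 3 + col] = 1.0
    return img

# Toy labeled dataset: label 1 = bar in the top row, label 0 = bottom row.
data = [(bar_image(0), 1), (bar_image(2), 0)]

# One neuron: 9 weights (one per pixel) and a bias, randomly initialized.
random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(9)]
b = 0.0

def predict(img):
    """Weighted sum of pixels + bias, squashed to (0, 1) by a sigmoid."""
    z = sum(wi * xi for wi, xi in zip(w, img)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Training: repeatedly adjust weights to shrink the prediction error,
# which is why such models are fed with as many images as possible.
lr = 0.5
for _ in range(1000):
    for img, label in data:
        err = predict(img) - label     # gradient of log-loss w.r.t. z
        for i in range(9):
            w[i] -= lr * err * img[i]  # pixels that were "on" get blamed
        b -= lr * err

print(round(predict(bar_image(0))))  # top bar    -> 1
print(round(predict(bar_image(2))))  # bottom bar -> 0
```

After training, the weights over the top row turn positive and those over the bottom row negative; real networks learn millions of such weights across many layers, but the feedback loop is the same.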
The Artificial Intelligence and Big Data Division of GMV’s Secure e-Solutions sector developed a demo for OpenExpo Europe, Europe’s biggest B2B technological-innovation congress, to show what artificial intelligence can do on the strength of image processing. The demo built a database from images that attendees uploaded to Twitter with a specific hashtag; when an uploader later passed GMV’s stand, the system recognized them, displayed their tweet, and matched each person with the character from a famous medieval-fantasy TV drama series who most resembled them.
Artificial intelligence is giving rise to new tools and spectacular applications, outperforming humans in image classification and detection tasks. But it is important to take algorithmic bias into account: the algorithms used might make decisions that perpetuate past discrimination in society, or even generate new forms of it. This was precisely the issue addressed by José Carlos Baquero, Artificial Intelligence and Big Data Manager of GMV’s Secure e-Solutions, in his congress paper.
Baquero’s talk advocated transparency and clear explanation of training models in the search for fair algorithms and a responsible use of artificial intelligence. This calls for ingenious techniques that correct the bias lying deep in the data and force models to make more impartial predictions. Concern about the transparency and fairness of machine learning is growing; it is a problem we need to analyze to ensure a fairer and more promising future.
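One simple family of techniques for correcting bias buried in the data is reweighing: each training example gets a weight so that a sensitive attribute (here, a group membership) becomes statistically independent of the label before any model is trained. The groups, labels, and counts below are hypothetical; the weight formula, expected frequency under independence divided by observed frequency, follows the standard reweighing approach.

```python
from collections import Counter

# Hypothetical training records: (group, label). The skew -- group "a"
# mostly labeled 1, group "b" mostly labeled 0 -- mimics the historical
# discrimination an unweighted model would learn to perpetuate.
records = ([("a", 1)] * 40 + [("a", 0)] * 10 +
           [("b", 1)] * 10 + [("b", 0)] * 40)

n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(y for _, y in records)
pair_counts = Counter(records)

def weight(group, label):
    """Reweighing factor: P(group) * P(label) / P(group, label).

    Over-represented (group, label) pairs get weight < 1; rare pairs
    get weight > 1, so the weighted data shows no group/label link.
    """
    expected = (group_counts[group] / n) * (label_counts[label] / n)
    observed = pair_counts[(group, label)] / n
    return expected / observed

for g in ("a", "b"):
    for y in (0, 1):
        print(g, y, round(weight(g, y), 3))
# a 0 2.5
# a 1 0.625
# b 0 0.625
# b 1 2.5
```

With these weights, each (group, label) cell contributes the same weighted mass (here, 25 each), so a model trained on the reweighted data can no longer use group membership as a proxy for the label; this is one of the "ingenious techniques" the paragraph above alludes to, not GMV's specific method.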