GMV recognized for its capability of identifying and mitigating Artificial-Intelligence bias

07/02/2019
GMV’s capability of identifying and mitigating Artificial-Intelligence bias earns 2nd prize in the LUCA Challenge

Artificial Intelligence is making increasing inroads into today’s society, and pundits say it is only likely to make further headway in the future, until it finally becomes part of our daily decision-making procedures or even replaces them. We are talking about everyday cases such as granting a mortgage, assessing the likelihood of a criminal reoffending, or deciding on the best way of distributing medical resources. These developments have sparked an ethical debate about leaving certain decisions up to technology, especially since recent studies and publications have pinpointed discriminatory bias in these smart systems. This debate has in turn led to social concern about the ethical use of data, over and above its privacy and security. To confront this problem, Telefónica’s Data Unit (LUCA) organized an international challenge to encourage the reasonable use of Artificial Intelligence.

A real passion for taking on new challenges and seizing every chance to innovate is hard-wired into GMV’s mindset, so the company didn’t hesitate to take up LUCA’s challenge. GMV’s Artificial Intelligence and Big Data team, made up of Alexander Benítez, Paloma López de Arenosa, Antón Makarov and Inmaculada Perea, and led by José Carlos Baquero, presented a proposal that was awarded 2nd prize in the challenge. “As a society we have to progress towards a less discriminatory world. Machine learning offers us a perfect chance to do so. Every day more and more decisions are delegated to machines, so we are duty bound to pay due heed to how these machines learn, just as we do when bringing up children. It is in our power to make sure these algorithms are fair and guarantee we are all treated equally,” argues Antón Makarov, GMV Data Scientist.

The team’s work involved analyzing an open data set from Spain’s National Statistics Institute (Instituto Nacional de Estadística, INE) on salaries in Spain, which shows a gender-based salary gap, with men more likely to reach highly paid positions. The analysis first showed that this inequality persists even when gender information is removed from the data. A model was then trained on this data, demonstrating that it learns the bias: if this first salary-forecasting model were used to make decisions about the people concerned, it would produce discriminatory outcomes. The team then put forward a solution that lessens the bias in the data and trains a new model on it, generating fairer predictions while hardly affecting performance, thus reducing the gender discrimination. “We have replicated the experiment using different algorithms and obtained similar results. This proves that the bias is learned regardless of the classifier used. Luckily, there is a growing volume of research into this matter and better bias-mitigation algorithms are now being developed, meaning the future bodes well,” says Alexander Benítez, GMV Data Scientist.
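The workflow described above (train a classifier on salary data with the gender column removed, measure the disparity it still learns through proxy features, then reweigh the training data and retrain) can be sketched in a few lines of Python. The snippet below is only an illustrative sketch, not GMV’s actual submission: it uses synthetic stand-in data rather than the real INE microdata, scikit-learn’s LogisticRegression rather than whichever classifiers the team used, and the classic Kamiran–Calders reweighing scheme as one possible bias-mitigation step.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical proxy features correlated with gender, so the bias can survive
# even after the gender column itself is excluded from the model inputs.
gender = rng.integers(0, 2, n)                    # 0 = women, 1 = men
hours = rng.normal(35 + 4 * gender, 5, n)
tenure = rng.normal(8 + 2 * gender, 4, n)
high_salary = ((0.08 * hours + 0.10 * tenure + 0.8 * gender
                + rng.normal(0, 1, n)) > 4.5).astype(int)

df = pd.DataFrame({"hours": hours, "tenure": tenure,
                   "gender": gender, "high_salary": high_salary})

X = df[["hours", "tenure"]]                       # gender deliberately left out
y = df["high_salary"]
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, df["gender"], test_size=0.3, random_state=0)

def parity_gap(y_pred, group):
    # Difference in positive-prediction rate between the two gender groups.
    rates = pd.Series(y_pred).groupby(np.asarray(group)).mean()
    return abs(rates[1] - rates[0])

# 1) Baseline: the model picks up the bias even without the gender feature.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline accuracy  :", base.score(X_te, y_te))
print("baseline parity gap:", parity_gap(base.predict(X_te), g_te))

# 2) Mitigation: reweigh each training sample so that every (group, label)
#    pair carries weight P(group) * P(label) / P(group, label), i.e. the
#    reweighing preprocessing scheme of Kamiran & Calders.
joint = pd.crosstab(g_tr, y_tr, normalize=True)
weights = np.array([joint.loc[g].sum() * joint[l].sum() / joint.loc[g, l]
                    for g, l in zip(g_tr, y_tr)])

fair = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=weights)
print("mitigated accuracy  :", fair.score(X_te, y_te))
print("mitigated parity gap:", parity_gap(fair.predict(X_te), g_te))

Running the sketch prints the accuracy and the demographic-parity gap before and after reweighing; on data of this kind the gap typically shrinks while accuracy changes only slightly, which mirrors the behavior the team describes.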

GMV’s proposal therefore sheds light on the possible ethical consequences of improper use of data and represents a great stride towards a less discriminatory world, one in which machines making important decisions about individual rights do so while guaranteeing that each individual is treated fairly.

