Sharing solutions to the challenge of fairness in predictive models of artificial intelligence
Universidad CEU San Pablo has hosted the first conference on artificial intelligence and ethics under the title “Looking for an ethical algorithm”. Insightful papers were presented by outstanding experts in the field, including José Carlos Baquero, GMV’s Big Data and Artificial Intelligence Manager, who spoke about his AI experience and explained how to achieve fairer, bias-free algorithms.
From credit requests to online dating, machine learning models are automating our day-to-day decision making. Nonetheless, over and above the positive impact of artificial intelligence on business models, we also have to bear in mind its negative externalities (fairness, responsibility, transparency and ethics) in terms of the algorithms used for these decision-making processes.
In his speech José Carlos Baquero, Big Data and Artificial Intelligence Manager of GMV’s Secure e-Solutions sector, stressed the importance of fairness as one of the mainstays of ethical artificial intelligence. He invited his audience to reflect on this, while also presenting the latest techniques for mitigating emergent discrimination in our models. These techniques highlight interpretability and transparency, subjecting complex models to rigorous interrogation, while at the same time making our predictions more robust and fairer by adjusting the optimization of the objective functions and adding fairness constraints.
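One way to adjust the optimization as described above is to add a fairness penalty to the training objective. The sketch below is illustrative only, not the speaker’s actual method: it trains a logistic regression on synthetic data (all variable names and the penalty weight `lam` are assumptions) and penalizes the demographic-parity gap, i.e. the difference in mean predicted probability between the two groups of a sensitive attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature x is shifted by the sensitive attribute a,
# so an unconstrained model will treat the two groups differently.
n = 2000
a = rng.integers(0, 2, n)                      # sensitive attribute (0/1)
x = rng.normal(a * 1.0, 1.0, n)                # group-shifted feature
X = np.column_stack([x, rng.normal(size=n)])
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, a, lam=0.0, lr=0.1, steps=2000):
    """Logistic regression with an optional demographic-parity penalty.

    lam scales |E[p | a=1] - E[p | a=0]|, pushing the mean predicted
    probabilities of the two groups toward each other.
    """
    Xb = np.column_stack([X, np.ones(len(X))])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        grad = Xb.T @ (p - y) / len(y)          # cross-entropy gradient
        # Gradient of the parity gap |mean_1(p) - mean_0(p)|.
        gap = p[a == 1].mean() - p[a == 0].mean()
        dp = p * (1 - p)
        g1 = (Xb[a == 1] * dp[a == 1, None]).mean(axis=0)
        g0 = (Xb[a == 0] * dp[a == 0, None]).mean(axis=0)
        grad += lam * np.sign(gap) * (g1 - g0)
        w -= lr * grad
    return w

Xb = np.column_stack([X, np.ones(len(X))])
for lam in (0.0, 2.0):
    w = train(X, y, a, lam=lam)
    p = sigmoid(Xb @ w)
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    acc = ((p > 0.5) == y).mean()
    print(f"lam={lam}: parity gap={gap:.3f}, accuracy={acc:.3f}")
```

Raising `lam` shrinks the gap between the groups, typically at some cost in raw accuracy, which is the fairness/performance trade-off discussed below.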
It is, however, clear that building impartial predictive models is not a simple matter of removing sensitive attributes from the training data. Ingenious techniques are required to correct the deep-lying bias in the data and force models to make more impartial predictions. Furthermore, making impartial predictions comes at a cost: some degradation of our model’s predictive performance.
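The point that dropping sensitive attributes is not enough can be seen in a small sketch (synthetic data and names such as `zip_code` are assumptions for illustration): a model that never sees the sensitive attribute still discriminates when another feature acts as a proxy for it.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000
a = rng.integers(0, 2, n)               # sensitive attribute (never given to the model)
zip_code = a + rng.normal(0, 0.3, n)    # proxy feature strongly correlated with a
y = (zip_code + rng.normal(0, 0.3, n) > 0.5).astype(int)

# "Fairness through unawareness": the decision rule uses only the proxy.
pred = (zip_code > 0.5).astype(int)

rate_1 = pred[a == 1].mean()            # positive-decision rate, group 1
rate_0 = pred[a == 0].mean()            # positive-decision rate, group 0
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"positive rate, group 0: {rate_0:.2f}")
```

Although the sensitive attribute was removed, the two groups receive sharply different positive-decision rates, because the proxy carries the same information; this is why deeper correction techniques are needed.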
In short, if we analyze and come to a better understanding of both the predictive model and the machine-learning process behind it, we will be able to head off problems and build a sense of fairness into the predictive models of artificial intelligence. It is a case of taking impartiality seriously and making sure our predictions do not unfairly harm our environment, in the interests of getting the very best out of artificial intelligence.