Cristian Randieri is a Professor at eCampus University, Kwaai EMEA Director, Intellisystem Technologies Founder and a C3i official member.
Artificial intelligence will continue to revolutionize the world at an unstoppable and ever-increasing pace, but without adequate human oversight it poses multiple risks, amplifying inequalities and systemic distortions. Bias in AI models and the possibility of intentional manipulation raise ethical and strategic questions for companies and institutions, which are called upon to analyze the roots of these problems, their consequences and the strategies available to mitigate their effects.
Bias In Artificial Intelligence: An Invisible Risk
The standard literature defines cognitive bias as a systematic distortion affecting decision-making processes and cognitive understanding, often resulting from limited data sets, unconscious assumptions and decision protocols that favor certain viewpoints. At its root, however, bias is a human factor: an intrinsic characteristic of the way our brain processes information, often the product of cognitive shortcuts (heuristics) that help us make decisions quickly. It arises whenever a person evaluates the current situation through the lens of past experience, glossing over differences so that the criteria adopted in a similar past situation can be reused. Omitting those differences can be enough to invalidate the final evaluation, producing systematic distortions in reasoning that influence the resulting judgments and decisions. AI systems inherit the same flaw when a model is trained on asymmetrical data or when structural defects exist in its design, resulting in unfair outcomes. Facial recognition is a prime example: An algorithm trained primarily on images of light skin tones shows better accuracy for that group at the expense of individuals with darker skin tones.
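The mechanism is easy to demonstrate in miniature. The following is a minimal, self-contained sketch (a toy one-feature threshold classifier with invented numbers, not any real recognition system): a decision threshold tuned on a training set dominated by one demographic group transfers poorly to an underrepresented group whose score distribution is shifted.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical score distributions: group B's raw scores are shifted,
    # so a single global threshold tuned on group A will not transfer.
    shift = 0.0 if group == "A" else 0.6
    return [(shift + label + random.gauss(0, 0.4), label) for _ in range(n)]

# Asymmetrical training set: 95% group A, 5% group B.
train = (sample("A", 0, 475) + sample("A", 1, 475)
         + sample("B", 0, 25) + sample("B", 1, 25))

# "Training": pick the threshold that maximizes accuracy on the skewed data.
threshold = max((t / 100 for t in range(-100, 200)),
                key=lambda t: sum((x >= t) == bool(y) for x, y in train))

# Evaluate on balanced per-group test sets: the majority group wins.
acc = {}
for g in ("A", "B"):
    test = sample(g, 0, 500) + sample(g, 1, 500)
    acc[g] = sum((x >= threshold) == bool(y) for x, y in test) / len(test)

print(f"group A accuracy: {acc['A']:.2f}, group B accuracy: {acc['B']:.2f}")
```

Nothing in the code singles out group B; the accuracy gap emerges purely from who is underrepresented in the training data, which is exactly why such bias is an invisible risk.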
The problem extends beyond technology to become a serious matter of social justice and inclusion. Algorithmic bias stems from choices made during an algorithm's creation, whether those choices are intentional or not, and it can have severe consequences in areas such as lending, hiring and the criminal justice system. Data that appears neutral can still produce discriminatory outcomes, which underscores the need for greater transparency and accountability in how these technologies are built and used to ensure everyone gets a fair shot. User-induced bias is another aspect of the discussion.
When people interact with AI, their behavior tends to reinforce existing biases. This is evident on social media, where algorithms infer users' interests and serve them like-minded content. The process leads to the formation of what have been called "filter bubbles," which deepen divisions and increase polarization in society.
Corrupt AI: When Manipulation Takes Over
Whereas bias typically develops unintentionally, AI corruption is a deliberate attack on the integrity of models. Among the manipulation techniques used against AI systems, the greatest danger comes from data poisoning, in which attackers insert false information into training datasets to alter the algorithm's behavior, for instance to inflate artificial credit scores. Backdoors are intentional weaknesses built into a model that enable malicious actors to control algorithmic decisions, with significant risks in fields such as security and justice. Adversarial attacks manipulate input data so that a system generates wrong results, for example to evade anti-fraud detection.
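To make data poisoning concrete, here is a minimal sketch under stated assumptions: a toy nearest-centroid classifier (invented numbers, not a real fraud or credit model) whose training set an attacker salts with mislabeled points, dragging one class centroid toward the other class's region and degrading accuracy on clean test data.

```python
import random

random.seed(1)

def centroid_classifier(train):
    # "Train" by computing the mean feature value of each class,
    # then classify a point by its nearest class centroid.
    centroids = {}
    for label in (0, 1):
        xs = [x for x, y in train if y == label]
        centroids[label] = sum(xs) / len(xs)
    return lambda x: min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

def accuracy(clf, test):
    return sum(clf(x) == y for x, y in test) / len(test)

def draw(mean, label, n):
    return [(random.gauss(mean, 0.5), label) for _ in range(n)]

clean = draw(0.0, 0, 200) + draw(2.0, 1, 200)
test  = draw(0.0, 0, 500) + draw(2.0, 1, 500)

# Poisoning: the attacker injects points far from class 0's true mean,
# all labelled 0, shifting its centroid (and the decision boundary)
# toward class 1's region.
poisoned = clean + [(3.0, 0)] * 80

acc_clean = accuracy(centroid_classifier(clean), test)
acc_poisoned = accuracy(centroid_classifier(poisoned), test)
print(f"clean: {acc_clean:.2f}, poisoned: {acc_poisoned:.2f}")
```

Only the training data was touched; the learning procedure itself is unchanged, which is what makes poisoning hard to detect after the fact.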
Model corruption can also be inflicted by altering a model's core parameters directly: selective adjustments to the algorithm's internals that, without overtly introducing bias, quietly favor certain population groups. This can occur in recruitment, where certain groups end up unfairly preferred for employment.
These problems create trust issues with artificial intelligence and lead to ethical and operational challenges that require innovative solutions as responses. The identification of these problems is essential for keeping AI systems fair and trustworthy.
The Motivations Behind AI Corruption
What might cause someone to tamper with an AI system? The motivations vary: economic, political or even implicit biases that people may not be fully aware of. In the financial services industry, for instance, AI can be used to rig the market through high-frequency trading algorithms or to mislead other investors about stock prices. In insurance, biased AI algorithms can deny coverage to people deemed high-risk, a practice that disproportionately harms marginalized groups.
From a political standpoint, the misuse of AI can impact any platform that is designed to manage information online. A clear example is how the algorithms used in social media can be manipulated to change people’s perceptions, support certain ideologies, or spread false information, as evidenced by the Cambridge Analytica scandal. Additionally, it’s worth noting that sometimes, the personal biases of those who create these technologies can unintentionally seep into the systems they design.
Conclusions
Artificial intelligence is a powerful innovation, but without specific and adequate oversight, it can amplify injustices and undermine trust in its models. For artificial intelligence to continue to be a tool for equitable and inclusive progress, it is essential to adopt mitigation strategies that include greater transparency in development processes, diversifying data sets, conducting independent audits and implementing specific regulations that effectively prevent this technology’s misuse.
Only with a deep collective commitment between companies, governments and civil society will it be possible to ensure that such a powerful and widespread tool as AI continues to assert itself as a driver of innovation without compromising the highest human values of justice and equity.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.