
Responsible Artificial Intelligence: Raising our super baby to become an upright citizen.
Introduction
Utopia is the top genre on my Netflix. I find it fascinating to watch writers' ideas of what the future holds for the human race play out on screen. Will our lives be filled with smart technology? Will an AI robot end the human race as we know it to start a 'superior' race? These are the questions that plague my mind after I step out of the Netflix Utopia world and find AI systems in the real world that look like early-stage versions of what is portrayed in movies. I then console myself with the thought that this fear-mongering is all someone's imagination. But I still cannot help but wonder if we are steadily moving towards AI that we cannot understand or control.
The power of AI for good cannot be overstated and its capacity to advance the human race is amazing. Artificial Intelligence is making inroads into fields like accessibility for disabled persons, robotics, (and my personal favourite) reviving dying languages, to name a few. Properly harnessed, AI will be of immense help to the human race and make our lives better.
Needless to say, every good thing can be used for harm, and Artificial Intelligence is no exception. The use of deepfakes and the serious concerns regarding our personal data being used to train AI models without our consent are just a few examples of AI being misused.
As with every advancement, there is an adjustment period, and that is where we are with AI. Although AI research dates back to the 1950s, the field has boomed in recent years and can be likened to an impressionable infant that we have to raise for our mutual benefit. This is where responsible AI practices come in to help us shape the development of the field.
What is Responsible AI?
The concept of responsible AI requires that all involved in the lifecycle of AI systems, from development to decommissioning, take measures to ensure that AI systems are fair and equitable and do not perpetuate aspects of humanity we are desperately trying to change. According to the World Economic Forum, “Responsible AI means AI development and deployment that is valid and reliable, safe, fair, secure and resilient, accountable and transparent, explainable and interpretable.” We want AI systems to continue to be awesome while replicating the better nature of humans.
The Responsible AI Institute outlines the tenets of responsible AI models as:
- Compliant with necessary regulatory and ethical standards
- Reproducible in terms of performance and results
- Explainable and transparent in their development and impact
- Routinely monitored and updated to maintain quality
- Properly cataloged and documented following standardized policies
We will consider these tenets below.
Compliant with necessary regulatory and ethical standards
In April 2024, an AI chatbot developed by New York City (USA) to assist small business owners was found to have encouraged users to break employment laws in various ways, including taking a cut of workers’ tips. AI companies have also been accused of copyright violations, with Google being sued for allegedly using personal data and copyrighted material to train its AI systems.
Legislation governing AI is gaining momentum, for example the Cyberspace Administration of China (“CAC”) published the “Interim Measures for the Management of Generative Artificial Intelligence Services,” the European Union passed the AI Act which complements the General Data Protection Regulation (GDPR), and the United States of America (USA) has the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
While legislation is necessary, effective lawmaking requires a sufficient understanding of a system in order to proscribe specific conduct. As it stands, we do not understand how AI works well enough to craft effective legislation governing the field. It follows that we cannot look to the law alone when it comes to AI systems; ethical considerations must also be taken into account. For example, in 2018, it was revealed that Amazon’s AI recruiting tool was biased against women. Such a tool reinforces discrimination against women, a problem we are trying to solve, not worsen.
Another ethical consideration is the predicted mass layoffs in companies after the adoption of AI systems. During a networking event I attended, one of the speakers expressed the guilt he experienced after selling machinery to a Japanese company, which led to the layoff of thousands of workers. I was struck by his story and considered how we have to retain our humanity amid the AI buzz. The swift deployment of AI that can lead to job instability has proven disastrous not only for employees but for companies. For example, Zillow’s use of AI to estimate the value of homes led to massive losses and to plans to lay off over 2,000 employees. This shows that AI systems must be rolled out carefully and with deep consideration, because a failure can have catastrophic consequences for businesses and employees alike.
Reproducible in terms of performance and results
Every child wants to change the world when they grow up. This is because there’s a lot to change. AI models are trained on data of the world that we agree needs to change. It follows that without intervention, AI will replicate our less-than-desired attributes and may even take them further down the dark road. For example, in 2016, Microsoft’s AI chatbot, Tay, published racist tweets on X (formerly Twitter).
To prevent such occurrences in the future, the datasets used to train AI models and the processes the models use to reach their results must be reproducible. In the Tay example above, Microsoft could go back and examine the data on which the model was trained to identify the problems and fix them. When AI systems are developed in a reproducible manner, developers can tweak them to increase efficiency, saving time and effort.
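In software terms, reproducibility starts with pinning down every source of randomness in a run. As a minimal, hypothetical sketch (in Python, chosen only for illustration), a toy “training” routine that fixes its random seed returns the same result every time it is rerun on the same data:

```python
import random

def train_toy_model(data, seed=42):
    """Toy 'training' run: shuffle the data and derive a score from it.
    Fixing the seed makes the whole run reproducible end to end."""
    rng = random.Random(seed)   # seeded RNG: same seed -> same shuffle
    shuffled = data[:]
    rng.shuffle(shuffled)
    # stand-in for a learned parameter that depends on the data order
    return sum(i * x for i, x in enumerate(shuffled))

data = [3, 1, 4, 1, 5, 9, 2, 6]
run_a = train_toy_model(data, seed=42)
run_b = train_toy_model(data, seed=42)
assert run_a == run_b  # identical seed and data -> identical result
```

Real training pipelines have many more sources of nondeterminism (hardware, parallelism, library versions), but the principle is the same: record the data snapshot and the seeds, and an audit can replay the run.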
Explainable and transparent in their development and impact
While we can’t all be AI engineers, we know that, by virtue of our use of technology, our data has been used to train some AI systems in one way or another. Responsible AI requires that AI systems be transparent and explainable. This means that people using AI should understand how the system works. It is for this reason that Article 7 of the GDPR requires that data subjects consent to the processing of their personal data and retain the right to withdraw that consent.
In addition, the impact of the AI systems being employed should be explained to users. One less discussed aspect of the use of Artificial Intelligence is its energy requirements. According to the UN Environment Programme (UNEP), a request made to ChatGPT takes ten times the energy required for a Google search. This is in addition to the massive amounts of water needed to cool the data centres that host AI systems, potentially adding to the global potable water problems we are already facing. It is important that these considerations are not swept under the rug so that we can find more sustainable options.
Routinely monitored and updated to maintain quality
The world is constantly evolving and our needs change over time. This may lead to an AI system becoming obsolete because of outdated datasets or changes in circumstances. Can you imagine if ChatGPT had been trained on archaic English? Every response would be replete with ‘thou’s and ‘thus’es, which would defeat the purpose of the system when users cannot understand the answers to their queries. Another example is the effect the passing of a new law will have on a legal AI system. If the model is not updated with the changes, it will mislead people, who may then make decisions against their interests. This is why it is important to conduct regular checks to ensure that AI systems continue to function properly and remain relevant to the populace being served.
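In practice, routine monitoring is often automated: a scheduled check compares live input data against the data the model was trained on and flags the model for review when they drift apart. The sketch below is hypothetical and simplified; the drift measure (relative change in the mean) and the 20% tolerance are illustrative assumptions, not a standard:

```python
def needs_retraining(baseline_mean, recent_values, tolerance=0.2):
    """Flag a model for review when recent input data drifts from the
    training-time baseline by more than `tolerance` (relative change)."""
    recent_mean = sum(recent_values) / len(recent_values)
    drift = abs(recent_mean - baseline_mean) / abs(baseline_mean)
    return drift > tolerance

# training data averaged 100; live traffic now averages ~151
print(needs_retraining(100.0, [140, 155, 150, 160]))  # drifted -> True
print(needs_retraining(100.0, [98, 102, 101, 99]))    # stable  -> False
```

A real system would track many features and use proper statistical tests, but even a crude check like this turns “routinely monitored” from a slogan into an alert someone can act on.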
Properly cataloged and documented following standardized policies
I love cooking and there is nothing more frustrating than experimenting with ingredients to create an amazing meal, but you cannot remember the process you used to achieve such greatness. A few years ago, I made mouth-watering lentil tofu kebabs. At the time, I was visiting a friend and used whatever she had in her spice cabinet to make the kebabs. When I got home, I tried to replicate the recipe but to date, I have not been able to achieve that first magic. This is what we are trying to avoid in Artificial Intelligence systems.
Sometimes, an amazing system is developed, but the dataset on which the model was trained is not stored properly, or the workings of the model are not properly documented. This means that if you want to reproduce the same results – like I have been desperately trying with my lentil tofu kebabs – you are not sure how. Responsible AI requires proper documentation so that if a new system has to be created based on the original system, we do not have to go back to the drawing board.
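One common documentation practice is to keep a “model card” alongside the model itself: a small, structured record of where the data came from, how it was processed, and what the model’s limits are. The sketch below is hypothetical; all the names and field values are illustrative, not any official schema:

```python
import json

# A hypothetical model card: the minimum facts someone would need
# to audit or rebuild the system later. All values are illustrative.
model_card = {
    "name": "example-risk-scorer",
    "version": "2.1.0",
    "training_data": "customer-records snapshot, 2024-01-15",
    "preprocessing": ["lowercase text", "drop rows with missing income"],
    "metrics": {"accuracy": 0.91, "false_positive_rate": 0.04},
    "known_limitations": ["underrepresents applicants under 25"],
    "owner": "ml-platform-team",
}

# Persisting the card next to the model keeps the recipe from being lost.
card_json = json.dumps(model_card, indent=2, sort_keys=True)
print(card_json)
```

Stored with the model under version control, a record like this is the written-down recipe my kebabs never had.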
Conclusion
We should think of AI as a Boss Baby with an unimaginably high IQ. We are not in competition with it because we have our unique strengths and weaknesses. What we should be focusing on is shaping it to work with us to create the better world we strive for. This is why we must govern Artificial Intelligence.