Chidi Ameke

Bias in Algorithms, Machine Learning & AI: 8 Daily Actions to Reduce It

Updated: Apr 20, 2022



Hello friends,


Welcome to my newsletter, where I share insights to support your personal and organisational change and transformation.


This newsletter discusses the critical importance of addressing bias in algorithms, machine learning and AI, and the eight daily actions you can take to reduce it.


This article is relevant to you for the following reason:


AI is increasingly being used, and our dependence on it will only deepen. However, its efficacy and accuracy aren't optimal in all use cases. Therefore, we all need to understand how machine intelligence is formed and what its implications are. We can also play an active role in ensuring that AI will best serve humanity rather than become a frightening reflection of our worst traits and flaws.

You will learn the following:

  • What data ethics is

  • The four stages of artificial intelligence maturity

  • Artificial intelligence ethics

  • Algorithmic bias

  • Machine learning bias

  • Ethical systems of intelligence

  • 8 daily actions to reduce bias in algorithms


Are you ready?


Let's begin!



What is data ethics?


Data ethics concerns the moral implications of acquiring, storing and using personal information, particularly without consent, and the impact of those practices on privacy and individual liberty.


The four stages of artificial intelligence maturity


Artificial intelligence (AI) refers to the branch of computer science concerned with bridging the gap between machine and human intelligence. Although AI is still in its infancy, computer engineers are racing to encode human knowledge and reasoning capabilities into robots. The reasons for doing this are manifold, including first-mover advantage and advancing science and technology.


AI maturity can be categorised into four stages.


They are as follows:


Stage one: Automation

Digital transformation resides within stage one of AI maturity. It will continue to unlock commercial opportunities for the foreseeable future.


Leading technology enablers, including Microsoft, Google, IBM and Oracle, are pioneering cloud-based solutions and infrastructure. They have modernised and transformed business operations and enabled efficiency, productivity and agility. This has resulted in greater digitisation and automation of repeatable and manual tasks and reduced operational costs.


Stage two: Machine Learning

Stage two has two parts. The first is training machines to learn about our natural world by ingesting vast datasets.


The second part entails training machines to decode languages. For robots and computer systems to be truly intelligent, they have to understand the complexities of human communication beyond the literal translation of words. Human communication is highly sophisticated because of the interchange between many elements. These include but are not limited to context, nuance and tonality.


Furthermore, individual communication styles leverage various tools to aid expression and convey thoughts, including sarcasm, inference, omission and humour. Additional complexity arises when misinformation is wielded: deceit, half-truths and manipulation. Together, these contribute to the layered, rich storytelling that helps us express thoughts and ideas and share information, experiences and knowledge.


When communication is authentic and sincere, we build trust and relationships and influence one another.

Through Natural Language Processing (NLP), machines learn to interpret human language. We see its widespread use in chatbots, spam filters, search engines, voice assistants and more.
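To make this concrete, here is a minimal sketch of the bag-of-words idea behind a simple spam filter. The messages, word lists and scoring rule are invented for illustration; production NLP systems are vastly more sophisticated.

```python
from collections import Counter

# Toy training data -- invented examples, not a real spam corpus.
spam_examples = ["win cash now", "free prize claim now"]
ham_examples = ["meeting moved to noon", "see you at lunch"]

spam_counts = Counter(w for msg in spam_examples for w in msg.split())
ham_counts = Counter(w for msg in ham_examples for w in msg.split())

def spam_score(message: str) -> float:
    """Crude score: the fraction of words seen more often in spam than ham."""
    words = message.split()
    spammy = sum(1 for w in words if spam_counts[w] > ham_counts[w])
    return spammy / len(words) if words else 0.0

print(spam_score("claim your free cash prize"))  # high score: leans spam
print(spam_score("lunch meeting at noon"))       # low score: leans ham
```

Notice that the filter only "knows" the words it was trained on: its judgment is entirely a product of its dataset, which is precisely where bias creeps in.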


Intelligent communication between humans and machines isn't close to maturity by any stretch of the imagination. Significant inroads still need to be made before AIs can consistently and correctly interpret non-verbal communication, sentiment, gestures and the like. They will also need ongoing training to accurately distinguish truth from lies, and facts from fiction and propaganda.


Most challenging of all will be the attempt to codify neurodiversity (cognitive variation in how information is processed relative to the majority of the population). There is already considerable bias within AI because of its inability to recognise neurodiversity. This, and its implications when leveraging AI data to assess human performance, must be openly acknowledged.


Stage three: Reasoning and Logic

The focus here is on reasoning and logic to enable robots to develop new thoughts, concepts and solutions in response to commands and instructions by their human proprietors.


Once a machine understands our world and can use human language to communicate with us, it will enter the developmental phase known in children as the age of reason, which usually begins around the age of seven. A robot's cognitive advancement, however, will be exponentially quicker than a person's, in line with Moore's Law.


Stage four: Artificial Intelligence (AI)

This is ultimately where AI is heading. Machines will have the intelligence, capabilities and determination to create new realities, circumstances and situations that help achieve their master's intended outcome. AI proprietors will be the ultimate beneficiaries of AI technologies, much as landowners have the legal rights to the resources within their land. AI proprietors or their licensees will reap the rewards of the social outcomes engineered by their AI technologies.


At this stage, AI's scale of influence and impact should not be underestimated. It will encompass social, political, scientific, medical, economic and military systems. Through robust scenario planning and execution ability, probability analysis and predictive analytics, machines will become adept at designing and manipulating situations and events to achieve a desired outcome.


This raises the question of AI ethics. What intrinsic values and principles should be hardwired into AI machines to protect humanity?


Continue reading to learn more.


Artificial Intelligence ethics


AI ethics is a hotly debated topic in many tech and academic communities. However, the conversation must quickly broaden into the corporate, social and political spheres. This is for the simple reason that we need global alliances and alignment on AI ethics and governance.


In the era of AI-driven technological innovation and value creation, societies must establish an ethical code of conduct, principles, regulations, and laws to which AI solutions must adhere. Doing so will protect individual liberty.

Analysts, data scientists, and information technology professionals are already concerned with data ethics. However, brands and corporations must not wait to be told by governments of this imperative. Instead, they should be leading the conversation because they are involved in the race to establish dominance in AI-driven tech solutions and new commercial platforms like the metaverse.


Soon, brands will face uncomfortable questions about their AI ethics. If unethical or unscrupulous practices are discovered within their AI systems, consumers will challenge the brand's right to operate. Corporations will be expected to demonstrate transparency and make available the datasets used to train their systems when deploying AI solutions. The openness required will be akin to the efforts brands already make to verify their sustainability and DEI credentials in order to preserve their business.


The threat of nefarious actors seeking to exploit unregulated AI technologies for their own gain will be comparable to the global threat posed by terrorists, hackers and malware. This creates the need for homeland and cyber security as both preventative measure and countermeasure. Every system (digital and analogue) and process should be constantly monitored for threats, be they unconscious bias or something far more existential.


Algorithmic Bias


Software engineers use datasets to teach machines to discover otherwise undetectable and invisible patterns and structures to help us solve problems and identify opportunities. As the system ingests enormous datasets, it classifies and categorises identified patterns. The insights can be monetised and used to inform medical, scientific, commercial or political decision-making.


It's therefore no surprise that the likes of Google and Facebook receive the most ad spend. Social platforms like Facebook (Meta) and search engines like Google acquire most of their data and insights from customers who use their "free" products and services.


Our social media engagement data and personal data are used to predict behaviours and sentiments, and those insights are sold to advertisers. The ability to control one's personal data is part of our human rights. Yet "around 81% of Americans express concerns regarding companies collecting private data" (source: DataProt).


Artificial intelligence is accelerating the conversation around algorithmic bias, its ethical implications and technological singularity (if/when human intelligence is enhanced or overtaken by artificial intelligence).


Machines learn from their inputs (historical datasets) to predict future outcomes by calculating statistical probabilities. This makes AI highly susceptible to bias: society is riddled with biases and inequalities, and datasets are not devoid of this reality; they reflect it.

Let me explain!


Suppose facial-recognition data inputs are primarily of one particular group (any group). The AI will determine that demographic to be the "standard" against which all other groups are assessed. Alternatively, it may deem the primary group the "majority", the baseline for what is "normal" within that context. This may not be factually accurate in reality, but even if the facts are contrary, the machine doesn't know or care; it can only go by the given dataset.


Consequently, the machine will treat the groups it has less data on as anomalies, abnormalities and outliers, and will categorise and classify them as such. Thereafter, all its computational analyses and conclusions will affirm that judgment. In this example, an inaccurate, limited or incomplete dataset skews or severely biases the results, and the outcome will always favour the primary group (whether positively or negatively).
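A small simulation makes the mechanism visible. Below is a hypothetical sketch (the numbers are invented, not real biometric data) in which a naive "normality" model is trained on data dominated by one group and consequently flags the under-represented group as outliers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "facial feature" values for two groups; the values are
# invented purely to show the mechanism.
group_a = rng.normal(loc=0.0, scale=1.0, size=950)   # heavily represented
group_b = rng.normal(loc=3.0, scale=1.0, size=50)    # under-represented
training_data = np.concatenate([group_a, group_b])

# A naive "normality" model: anything far from the dataset mean is an outlier.
mean, std = training_data.mean(), training_data.std()

def is_outlier(x):
    return abs(x - mean) > 2 * std

# Most of group B gets flagged as anomalous simply because group A dominated
# the training data and therefore defines the baseline.
flagged_a = np.mean([is_outlier(x) for x in rng.normal(0.0, 1.0, 1000)])
flagged_b = np.mean([is_outlier(x) for x in rng.normal(3.0, 1.0, 1000)])
print(f"group A flagged: {flagged_a:.0%}, group B flagged: {flagged_b:.0%}")
```

The model is not malicious; the skewed dataset alone is enough to produce the biased outcome.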


Furthermore, the bias will be reinforced through the feedback loop if human decisions repeatedly validate the established biases. When this happens, a concerning gap opens between what the analytics say and reality. Corrective action means ensuring reliable and representative historical datasets when building AI systems. The alternative is to adopt healthy scepticism towards AI predictions and analysis wherever the dataset is limited, incomplete or unreflective of reality.
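The feedback loop itself can be simulated. In this hypothetical sketch, two districts have identical true incident rates, but attention is allocated in proportion to past observations, so an initial skew in the historical counts locks itself in and is never corrected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two districts with IDENTICAL true incident rates; every number here is
# invented purely to illustrate the loop.
true_rate = np.array([0.1, 0.1])
observed = np.array([5.0, 1.0])  # historical counts happen to favour district 0
patrols = 100

for year in range(10):
    # Allocate attention in proportion to what has been observed so far ...
    share = observed / observed.sum()
    # ... which determines how much we can observe next year.
    observed += rng.binomial((patrols * share).astype(int), true_rate)

print(observed)  # district 0 still dominates ~5:1, despite identical rates
```

Because the data we collect is shaped by where we already look, human decisions that validate the model's output simply feed the skew back in.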


Police forces have reported to the Biometrics and Forensics Ethics Group (BFEG) that facial-recognition algorithms display bias. Societal biases are likely hardwired into facial-recognition technologies, further eroding trust between the public and the state.


Machine Learning Bias


Machine learning is, in effect, a scoring system: it scores the probability of your next course of action. It can, for example, predict your ability to keep up with loan repayments or your likelihood of surviving certain diseases. Naturally, vast amounts of data and mathematics go into it.
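As a hedged illustration of what "scoring a probability" means, here is a toy logistic score for loan repayment. The feature names and weights are invented; in a real system the weights would be learned from historical repayment data, which is exactly where historical bias enters.

```python
import math

# Hypothetical weights for a toy loan-repayment score. In a real system these
# would be learned from historical repayment data.
WEIGHTS = {"income": 0.8, "existing_debt": -1.2, "years_employed": 0.5}
BIAS = -0.3

def repayment_probability(features: dict) -> float:
    """Logistic score: squash a weighted sum of features into a 0..1 probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Feature values are assumed to be pre-scaled (e.g. standardised) numbers.
print(repayment_probability(
    {"income": 1.2, "existing_debt": 0.4, "years_employed": 0.9}
))  # roughly 0.65: the "score" a lender might act on
```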


For the most part, we indiscriminately trust the machine's data analysis. Very few people challenge the statistics they receive, assuming them to be unequivocal facts. However, a machine's predictions are only as good as the dataset fed into it. Where the data sample is small, the probability of error and skewed answers is enormously high, compounded by the fact that datasets contain inequalities and biases to begin with.
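The small-sample problem is easy to demonstrate. The sketch below repeatedly estimates the same known rate from samples of different sizes; the smaller the sample, the wilder the estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.5  # the real probability we are trying to estimate

# Estimate the same rate from small and large samples, 1,000 times each.
for n in (10, 100, 10_000):
    estimates = rng.binomial(n, true_rate, size=1_000) / n
    print(f"n={n:>6}: estimates range from "
          f"{estimates.min():.2f} to {estimates.max():.2f}")
```

With ten data points the "measured" rate can land almost anywhere; only the large samples cluster near the truth.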


AIs have the "freedom" to decide the best routes to solving complex problems, computing variables at speeds beyond human understanding. The problem is that there is often no practical way of auditing an AI's prediction for accuracy. This should concern us all and compel us to interrogate the results.

To illustrate: in 2020/21, UK schools and universities were closed for public safety during the coronavirus pandemic, and final exams could not be sat as a result. Universities needed another way to award students places without their exam results.


The planned solution:

The UK government instructed teachers to estimate their students' exam performance. These predicted grades were then adjusted by the Office of Qualifications and Examinations Regulation (Ofqual) using an algorithm that factored in the historical performance of schools and universities. The intention was that the algorithm would eliminate grade inflation and accurately predict test scores and performance.
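The actual Ofqual model was more involved, but a deliberately simplified, hypothetical sketch of the idea, pulling an individual's predicted grade towards the school's historical average, shows how such an adjustment behaves:

```python
# A deliberately simplified, hypothetical version of "adjust predicted grades
# by the school's historical distribution" -- NOT the actual Ofqual model.
def adjusted_grade(teacher_prediction: float, school_history_mean: float,
                   weight: float = 0.6) -> float:
    """Pull the teacher's prediction towards the school's historical average.

    The higher the weight, the more the school's past results dominate the
    individual student's predicted grade.
    """
    return weight * school_history_mean + (1 - weight) * teacher_prediction

# A strong student (predicted 90) at a historically low-scoring school is
# dragged down; the same prediction at a high-scoring school barely moves.
print(adjusted_grade(90, school_history_mean=55))  # 69.0
print(adjusted_grade(90, school_history_mean=85))  # 87.0
```

The same teacher prediction yields very different grades depending on the school's past results, which is how historical disadvantage gets baked into an individual's outcome.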


The outcome:

When students received their algorithmically predicted results, many were shocked to discover lower grades than expected based on their previous performance. Approximately 40% of results were downgraded, with just 2% of scores increased. The majority of those affected were high-achieving students from disadvantaged educational institutions, who were more likely to have their scores downgraded. Students from affluent schools, by comparison, were more likely to have their scores increased.


The algorithm reinforced economic and social biases, inequalities and prejudices. It further exposed how current social systems primarily benefit the already systemically privileged.

As we rely more and more on data, we cannot afford to passively accept algorithmic outputs as factual gospel, perfect in every way. They should be treated as discussion points: assumptions and opinions that require qualifying through nuanced objectivity, overlaid with empathy and compassion. The latter are uniquely human, the essence of conscious organisms.


Ethical systems of intelligence


As societies increase their use of AI, inherent biases will become even more entrenched and continue to propagate social discrimination unless meaningful action is taken to prevent or reduce them.


In a democratic society, individuals must have the right to know how the state and corporations use their anonymised data, and the right to withdraw consent. If these fundamental human rights are ignored, the public will inadvertently enable rogue actors and governments to use AI to weaponise biases and inequalities.


Predictive analytics is not an inevitable fact that cannot be challenged. On the contrary, all statistics are worthy of interrogation. Consideration should also be given to incomplete datasets. This is akin to recognising that some voices are silent, feeling unsafe to speak up, while others are overlooked, unheard or intentionally left out. Whatever data is captured, the story is only partially true if only some of the voices are heard.


Unless one lives in an authoritarian state, personal liberty and privacy are sacrosanct, reaching into our sense of freedom of thought, individual self-expression and personal choice. These are the fabric of humanity and the root of our divine rights. They must be preserved for human life to be truly free, not bound in servitude to dictatorship.


It's fair to say that technology is advancing at a pace lawmakers cannot keep up with. For example, public awareness of the mental harm social media causes young people is a recent phenomenon. It was spotlighted in a significant way in 2021, when tech whistleblower Frances Haugen accused Facebook (now Meta) of prioritising profits over public safety and of monetising polarising and divisive content.


It isn't enough to pinpoint the biases and flaws in machines and then blame the machines for their inherent bias. We are all accountable for the errors in our systems; no one can abdicate the responsibility of decency and fairness. Our collective actions and decisions are the original data source that will ultimately be aggregated by AI and presented back to us as predictions.


A simple way to understand algorithmic bias is to recognise that AI systems mirror the preferences, inequalities and prejudices that already exist within society.

Mankind wants the ability to predict the future. The truth is, no one can. The next best thing is to leverage historical evidence as an indicator of future outcomes. This approach holds only if we repeat the same choices and decisions our forefathers made.


Implement the following 8 daily actions to reduce bias in algorithms
  1. Act altruistically and perform small acts of kindness each day.

  2. Consider the welfare of others and put their interests above your own.

  3. Be empathetic and compassionate.

  4. Stand up consistently for fairness, truth and justice.

  5. Take risks that could result in the improvement of another person's life.

  6. Give generously, expecting nothing in return.

  7. Use your privileges for the greater good of humanity.

  8. Treat others as you would like to be treated.


Together, these frequent daily actions will form the data points that AI ingests and regurgitates as predictions of future outcomes.


If you want to be sure of the future, act wisely today, and together as a species, we will create a better future for everyone.


Get in touch to continue the conversation.



Transform your career and organisation.

The Intelligent Change Management Guide: How to Successfully Lead and Implement Change in Your Organisation. Find out more!


Purpose-Driven Transformation: The Corporate Leader's Guide to Value Creation and Growth. Find out more!


Accelerate: Your Career Ascension Guide. Find out more!



