Does Artificial Intelligence still have biases?

It is not surprising that many companies are turning to artificial intelligence (AI) technologies, such as machine learning, to review large quantities of data. Whether it is reviewing financial documents to decide whether you qualify for a loan, scanning legal contracts for flaws, or helping determine whether you suffer from schizophrenia, you have probably already been touched by artificial intelligence. But is it foolproof, or even impartial?

The risks of bias vary for each corporation, industry, and organization, and bias can find its way into artificial intelligence systems through several routes. It can be inserted into an AI system deliberately, for example via a stealth attack, or accidentally, which can make it hard to ever detect. It may also stem from people who supply data that reflects their own biased thinking, or from bias in data sampling. There are also long-tail biases, which arise when the training data misses rare categories entirely; a rough way to audit for this is sketched below.
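As a minimal illustration of the sampling and long-tail problems just described, one can audit the category distribution of a training set before fitting anything. This sketch assumes a simple list of category labels; the label names and the 1% threshold are invented for illustration.

```python
from collections import Counter

def audit_category_coverage(labels, min_share=0.01):
    """Flag categories whose share of the training data falls below
    min_share -- a crude proxy for long-tail underrepresentation."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {category: (count / total, count / total < min_share)
            for category, count in counts.items()}

# Hypothetical training labels: "cat" dominates, "lynx" sits in the long tail.
labels = ["cat"] * 950 + ["dog"] * 45 + ["lynx"] * 5
for category, (share, underrepresented) in audit_category_coverage(labels).items():
    flag = "UNDERREPRESENTED" if underrepresented else "ok"
    print(f"{category}: {share:.1%} ({flag})")
```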

Bias in the data leads to bias in the artificial intelligence model, but what is more dangerous is that the model can exacerbate that bias. For example, a team of researchers found that women appeared in 67% of the photos of people cooking, yet the algorithm identified 84% of the cooks as women. Deep learning algorithms (another AI technology) are rapidly being used to make life-impacting decisions in areas such as employee recruiting, criminal justice, and medical diagnosis. If algorithms make incorrect decisions because of AI bias in these situations, the results can be catastrophic in the long run.
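The cooking example can be made concrete: bias amplification is often measured by comparing the rate of an attribute in the ground-truth labels with its rate in the model's predictions. A minimal sketch, using made-up binary labels that mirror the 67%-to-84% figures rather than the researchers' actual dataset:

```python
def amplification(true_attr, predicted_attr):
    """Compare a group's share in the ground truth with its share in the
    model's predictions; a positive gap means the model amplifies the
    skew already present in the data."""
    data_share = sum(true_attr) / len(true_attr)
    model_share = sum(predicted_attr) / len(predicted_attr)
    return data_share, model_share, model_share - data_share

# Hypothetical cooking-image labels: 1 = woman, 0 = man.
truth = [1] * 67 + [0] * 33   # 67% of cooks in the data are women
preds = [1] * 84 + [0] * 16   # the model predicts "woman" 84% of the time
data_share, model_share, gap = amplification(truth, preds)
print(f"data: {data_share:.0%}, model: {model_share:.0%}, amplification: {gap:+.0%}")
```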

For example, ProPublica, a nonprofit news organization, critically evaluated AI-powered risk assessment software known as COMPAS in 2016. COMPAS was used to estimate the probability that a prisoner or convicted criminal, if released, would commit more crimes. The false-positive rate (people labeled "high-risk" who did not re-offend) was found to be almost twice as high for black defendants (a 45 percent error rate) as for white defendants (a 24 percent error rate). Beyond this, artificial intelligence tools have in some instances misidentified individuals because of their race, gender, or ethnicity. When the Beauty.AI website used AI robots as beauty contest judges that same year, people with light skin were judged far more attractive than people with dark skin.
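The disparity ProPublica reported is a per-group false-positive rate: among people who did not re-offend, the fraction the tool had labeled high-risk. A minimal sketch of that computation, run on invented records rather than the COMPAS data itself:

```python
def false_positive_rate(records, group):
    """FPR within a group: share predicted high-risk (1) among those
    whose actual outcome was no re-offense (0)."""
    negatives = [r for r in records if r["group"] == group and r["reoffended"] == 0]
    if not negatives:
        return float("nan")
    flagged = sum(1 for r in negatives if r["predicted_high_risk"] == 1)
    return flagged / len(negatives)

# Tiny invented dataset for illustration only.
records = [
    {"group": "A", "reoffended": 0, "predicted_high_risk": 1},
    {"group": "A", "reoffended": 0, "predicted_high_risk": 1},
    {"group": "A", "reoffended": 0, "predicted_high_risk": 0},
    {"group": "B", "reoffended": 0, "predicted_high_risk": 1},
    {"group": "B", "reoffended": 0, "predicted_high_risk": 0},
    {"group": "B", "reoffended": 0, "predicted_high_risk": 0},
    {"group": "B", "reoffended": 0, "predicted_high_risk": 0},
]
for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(records, g):.0%}")
```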

It is important to uncover unintentional bias in artificial intelligence and to align technology resources with business policies and principles on diversity, fairness, and inclusion. According to PwC's 2020 AI Predictions, 68 percent of organizations still need to address fairness in the AI systems they build and deploy.

Machine learning and deep learning models are mostly constructed in three phases: training, validation, and testing. While bias can creep in well before the data is gathered, and at several other stages of the deep-learning pipeline, it is in the training phase that bias shapes the model itself. Parametric algorithms such as linear regression, linear discriminant analysis, and logistic regression are typically vulnerable to high bias. Because these techniques are so useful, tackling AI bias is likely to become trickier as artificial intelligence systems grow more reliant on deep learning and machine learning. The sketch below makes the three phases concrete.
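The sketch splits a synthetic dataset into training, validation, and test sets and fits a logistic regression, one of the parametric, high-bias models mentioned above. It assumes scikit-learn is available; the data and split ratios are chosen only for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real labeled dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Training / validation / test split (60 / 20 / 20).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Logistic regression is parametric: its rigid linear form gives it high
# bias, so systematic skew in the training data maps directly into its
# learned weights.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
print("test accuracy:", model.score(X_test, y_test))
```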
Opting for more diverse data will no doubt mitigate AI bias, but it is not sufficient on its own, even though it adds more data touchpoints and metrics that serve different goals and insights. Meanwhile, the presence of proxies for protected social classes makes it difficult to create a deep learning model, or any other AI model, that is conscious of all possible sources of bias; a simple first check for such proxies is sketched below.
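One common first check for the proxy problem is to measure how strongly each supposedly neutral feature correlates with a protected attribute: a high correlation means the model can reconstruct the attribute even when it is dropped. A minimal sketch, with hypothetical feature names, synthetic data, and an arbitrary 0.3 threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)  # hypothetical binary protected attribute
features = {
    "zip_code_income": protected * 1.5 + rng.normal(0, 1, n),  # acts as a proxy
    "years_experience": rng.normal(10, 3, n),                  # roughly independent
}

# Flag features whose correlation with the protected attribute is high
# enough that dropping the attribute itself would not remove the signal.
for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    flag = "possible proxy" if abs(r) > 0.3 else "ok"
    print(f"{name}: corr = {r:+.2f} ({flag})")
```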
