However, when AI makes the wrong decisions, what will happen?
To help answer this question and more, we sat down with Ada Lopez, Senior Manager, Product Diversity Office at Lenovo.
Ada manages the Lenovo Product Diversity Office team and is responsible for the daily operations of the Diversity by Design Process. In addition to leading the team, she works with development teams across the company to implement inclusive and accessible design practices. With over 18 years of professional experience as a teacher and as both a product and project manager, Ada understands the importance of creating processes that increase diversity and inclusion: processes that are supported through education and employee empowerment, and championed by leaders.
UNPACKING THE BIAS
Bias, particularly gender bias, is a serious problem in AI systems, causing harm that includes discrimination, reduced transparency, and security and privacy threats. In the worst case, wrong AI decisions can damage careers, destroy reputations, or even cost lives. AI will never reach its full potential as a tool for the greater good if this bias problem isn't addressed.
To be used as a tool for the greater good, AI will have to be trained on the right data sets. The language in most data sources, from online news articles to books, causes data to skew towards men. For example, according to research, AI trained on Google News data tends to associate men with roles such as ‘captain’ and ‘financier’, whilst women are associated with ‘receptionist’ and ‘homemaker’.
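The kind of association the research describes can be measured with cosine similarity against a "gender direction" in embedding space. The sketch below uses tiny hand-made vectors purely for illustration; real systems would load embeddings learned from a corpus such as Google News, and the numbers here are invented.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional embeddings (illustrative only; not real learned vectors).
embeddings = {
    "he":           [0.9, 0.1, 0.2],
    "she":          [0.1, 0.9, 0.2],
    "captain":      [0.8, 0.2, 0.5],
    "receptionist": [0.2, 0.8, 0.5],
}

# A simple gender direction: the difference between "he" and "she".
gender_axis = [a - b for a, b in zip(embeddings["he"], embeddings["she"])]

def gender_lean(word):
    """Positive score leans male, negative leans female."""
    return cosine(embeddings[word], gender_axis)

print(f"captain:      {gender_lean('captain'):+.2f}")       # positive
print(f"receptionist: {gender_lean('receptionist'):+.2f}")  # negative
```

In corpora skewed towards men, occupation words pick up exactly this kind of directional lean, which is how a model comes to pair 'captain' with men and 'receptionist' with women.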
Many AI systems are developed by male teams and trained on biased data, which creates serious problems for women. For instance, credit card companies have appeared to offer more generous credit to men. This is how AI's wrong decisions can damage people's wellbeing.
Additionally, only 22% of AI and data science professionals are women, according to World Economic Forum research. Non-binary and transgender identities have also made gender an even more complex topic, creating more potential for bias in various forms.
AI can help solve previously intractable problems, such as cancer and climate change, but only if the bias issue is addressed; otherwise AI risks being seen as untrustworthy and ultimately irrelevant. AI professionals need to confront and understand bias to ensure these tools remain useful, and to avoid another 'AI Winter' like the one in the 1970s, when interest in the technology dried up.
TURNING DATA INTO VALUE
Going forward, businesses will increasingly rely on AI technology to turn their data into value. According to Lenovo’s Data for Humanity report, 88% of business leaders say that AI technology will be an important factor in helping their organisation unlock the value of its data over the next five years.
So how will business leaders deal with the problem of bias? For the first time in history, we have this powerful technology that is entirely created from our own understanding of the world.
AI is a mirror that we hold up to ourselves.
We shouldn’t be shocked by what we see in this mirror. Instead, we should use this knowledge to change the way we do things. That starts with ensuring that the way our organisations work is fair in terms of gender representation and inclusion – but also by paying attention to how data is collected and used.
Whenever you start collecting data, processing it, or using it, you risk inserting bias. Bias can creep in anywhere: if there is more data for one gender, for example, or if questions were written by men. For business leaders, thinking about where data comes from, how it’s used, and how bias can be combatted will become increasingly important.
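One concrete first step is checking group representation before any model is trained. The helper below is a minimal sketch with an invented sample; the `threshold` value and field name are assumptions for illustration, not an established standard.

```python
from collections import Counter

def representation_report(records, field="gender", threshold=0.6):
    """Return each group's share of the records, plus any group
    whose share exceeds `threshold` and so warrants review."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    skewed = [g for g, s in shares.items() if s > threshold]
    return shares, skewed

# Hypothetical training sample, skewed towards men.
sample = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
shares, skewed = representation_report(sample)
print(shares)   # {'male': 0.7, 'female': 0.3}
print(skewed)   # ['male'] -- flagged for review
```

A check like this catches the "more data for one gender" problem at collection time, long before a skewed model reaches production.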
Technical solutions will also play an important part. Data scientists don't have the luxury of going through every line of text used to train a model.
There are two solutions to this: one is to have many more people test the model and spot problems. The better solution is to have more efficient tools for finding bias, either in the data the AI is fed or in the model itself. With ChatGPT, for example, researchers use a machine learning model to annotate potentially problematic data. The AI community needs to focus on this. Tools that provide greater transparency in the way AI works will also be important.
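One simple bias-detection tool of the kind described is a demographic parity check: compare the rate of favourable outcomes across groups. The sketch below applies it to invented credit decisions echoing the earlier credit card example; the data and the function name are illustrative assumptions, not a real audit.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns per-group approval rate and the parity gap
    (difference between the highest and lowest rates)."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical credit decisions, for illustration only.
decisions = ([("male", True)] * 80 + [("male", False)] * 20 +
             [("female", True)] * 55 + [("female", False)] * 45)
rates, gap = approval_rates(decisions)
print(rates)                       # {'male': 0.8, 'female': 0.55}
print(f"parity gap: {gap:.2f}")    # 0.25
```

A large gap doesn't prove discrimination on its own, but it flags exactly the kind of model output that deserves human review.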
DEALING WITH BIAS
It also helps if we consider the broader context.
The tools we use today are already creating bias in the models we will apply in the future.
We might think that we have ‘solved’ a bias issue now, but in 50 years, for example, new tools or pieces of evidence might change completely how we look at certain things. This was the case with the history of Rett syndrome diagnosis, where data was primarily collected from girls. The lack of data on boys with the disorder introduced bias into data modelling several years later and led to inaccurate diagnoses and treatment recommendations for boys.
Similarly, in 100 years, humans might work for only three days a week. That would mean that data from now is skewed towards a five-day way of looking at things. Data scientists and business leaders must take context into account. Understanding social context is equally important for businesses operating in multiple territories today.
Mastering such issues will be one of the touchstones of responsible AI. For business leaders using AI technology, being conscious of these issues will grow in importance, along with public and regulatory interest. By next year, 60% of AI providers will offer a means to deal with possible harm caused by the technology alongside the tech itself, according to Gartner.
Business leaders must plan thoroughly for responsible AI and create their own definition of what this means for their organisation, by identifying the risks and assessing where bias can creep in. They need to engage with stakeholders to understand potential problems and determine how to move forward with best practices. Using AI responsibly will be a long journey, and one that will require constant attention from leadership.
The rewards of using AI responsibly, and rooting out bias wherever it creeps in, will be considerable, allowing business leaders to improve their reputation for trust, fairness and accountability, while delivering real value to their organisation, to customers and to society as a whole.
Businesses need to deal with this at board level to ensure bias is dealt with and AI is used responsibly across the whole organisation. This could include launching their own Responsible AI board to ensure that all AI applications are evaluated for bias and other problems. Leaders also need to address the broader problem of women in STEM, particularly in data science. Women – especially those in leadership roles – will be central to solving the issue of gender bias in AI.
LEVERAGING AI’S FULL POTENTIAL
To unlock the value of data, it is vital for forward-thinking organisations to understand the problem of gender bias and work towards effective ways of dealing with it.
Business leaders should think carefully about how data is used within their organisation, utilising tools to detect bias and ensure transparency, and taking a thoughtful approach to how AI is deployed. It is also important to take a broader view of where data originates, how it is used, and the steps needed to avoid bias. These are essential considerations for businesses wanting to unlock the value of their data and build an inclusive future where AI is leveraged to its fullest potential.