
AI for Good: The Need for Human Scrutiny


ARTICLE SUMMARY

Solange Sobral, Partner and Executive Vice President at CI&T, looks at why businesses must prioritise ethics as a key part of their mission when building AI-powered products, as fears grow about its use, accuracy, and implications.

Solange Sobral is a Partner and Executive Vice President at global digital specialist CI&T.


She currently leads CI&T's operations in EMEA, promoting innovation and digital transformation for large organisations. In addition, Solange leads the company's global ESG committee and supports the creation of humanised and inclusive management programmes. She is a passionate proponent of Diversity and Inclusion in the workplace and has dedicated herself to demonstrating both the cultural and business value of an inclusive workplace.

It isn’t a stretch to say that artificial intelligence (AI) is perhaps the most disruptive breakthrough in the history of computing.

A year on from the launch of ChatGPT, AI is already at the top of the investment agenda for many businesses, with three-quarters (76%) now using generative AI or expecting to do so in the next 12-18 months. AI's potential seems almost limitless, and today's transformational improvements to productivity and efficiency are likely only the beginning.

However, when we talk about the power of AI, it’s common for people to fear that it’s set to ‘take over’, like Skynet from the Terminator movies. AI technology is decades away from that being even a remote possibility, which gives us plenty of time to put procedures in place that ensure AI remains a tool for humans and a force for good.

Let’s explore how the coming months and years will shape AI development. From ethics to regulation, if we can deploy AI safely and responsibly, we can begin to enjoy its incredible capabilities today—and set up our organisations for long-term success tomorrow. 

The race to regulate

Now that AI has broken into the mainstream, its expansion seems unstoppable. And while developers are unlikely to slow their efforts, humans are an adaptive species, so it’s time to work out how we’ll live with and properly regulate AI.

The world's first global AI safety summit, hosted by the UK in November, highlighted the need for universal collaboration on this. Bringing together international governments, leading AI companies, research experts and civil society groups, it was widely regarded as a success, resulting in a joint declaration by 28 countries to understand and agree on the opportunities, risks and need for international action on frontier AI, as well as the establishment of the world's first AI Safety Institute in Britain.

These agreements should help spark further innovation in AI while ensuring it is deployed safely and transparently. In the years ahead, we can expect new ideas that help businesses operate faster and scale further. With watertight rules and processes, AI won't simply take jobs, as many fear; it should free us humans to take on more impactful, fulfilling tasks. At the same time, the technology will enable personalised customer experiences across industries, creating new demand and different kinds of opportunities and jobs. But first, there's another big issue that developers must address.


Removing human error from AI ethics

An important element to consider as part of any new regulation is ethics. AI models are usually trained on human-generated datasets, which carry their own biases and prejudices. Without proper care, these faults can contaminate the outputs of AI technologies.

“Part of the problem is that companies haven’t built controls for AI bias into their software-development life cycles, the same way they have started to do with cybersecurity,” Todd Lohr, Technology Consulting Leader at KPMG, told the Wall Street Journal. Meanwhile, other issues such as plagiarism and copyright infringement are also yet to be resolved.
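To make that idea concrete, the sketch below (not from the article) shows one way a simple bias control could be wired into a development life cycle as an automated test: it compares the rate of positive model outcomes across groups in an evaluation set and fails if the gap is too large. The column names, the tiny in-line dataset and the 10% threshold are purely illustrative assumptions; real checks would run against genuine model predictions with a fairness metric and threshold chosen for the application.

```python
# Minimal, illustrative sketch of an automated bias check that could run in CI.
# All names and values here are assumptions for illustration, not a standard.
import pandas as pd


def selection_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())


def test_model_outcomes_are_roughly_balanced():
    # In practice this would load real model predictions on a held-out evaluation set.
    eval_df = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b"],
        "approved": [1,   0,   1,   1,   0,   1],
    })
    gap = selection_rate_gap(eval_df, "group", "approved")
    # Fail the build if the outcome-rate gap between groups exceeds 10%.
    assert gap <= 0.10, f"Outcome-rate gap between groups is {gap:.0%}"
```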

Fortunately, developers are already working on fixes. OpenAI itself has announced it is building a tool that identifies whether a piece of text was produced by AI, to help fight issues like plagiarism, misinformation, and human impersonation. If we can safely navigate past these problems, we can firmly focus on the technology’s benefits.
 
After all, it's important to remember that several other technological disruptions have raised similar concerns, such as the internet, smartphones, data science applied to understanding customer behaviour, the cloud, and so on. As a society, we managed to navigate their initial difficulties and then concentrated on harnessing these revolutions for good. There's no reason we can't do the same with AI. We can establish a safe and productive partnership in which humans and AI work together as a team, with humans holding final responsibility for every revision and outcome. Ultimately, the decision to press the button and release any solution created by such a hybrid group should always rest with a human.

The positive power of AI 

Our society is becoming increasingly complex, so we need equally sophisticated tools. When used correctly, AI will become a key accelerator of public improvements worldwide.
 
According to a new report from McKinsey and Harvard University, AI could improve the healthcare industry's clinical operations and boost quality and safety by crafting personalised treatment and medication plans that transform patient care. AI-powered efficiencies could even help the industry save hundreds of billions of dollars per year in healthcare spending.
 
Meanwhile, Insider Intelligence has estimated that AI could cut the financial sector's costs by $447 billion in 2023 through task automation, fraud detection, and personalised wealth-management insights.

And though some are worried about AI's possible role in academic plagiarism, AI algorithms in education can also analyse student data and adapt to individual learning styles, giving each student feedback and recommendations tailored to their needs and helping them reach their potential.

Our collective responsibility 

We're living through the exciting early days of a technology that can bring spectacular improvements to how we live and work. At the same time, we must acknowledge that it also brings real threats. Now is the time to ensure that humans are the guarantors of AI, not the other way around.
 
As Deloitte AI Institute Global Leader and Humans For AI Founder Beena Ammanath said, “What we need is more participation from the entire constellation of people interested in the future of humanity — which is all of us. The potential of AI is enormous, and what is needed is not just intent, but imagination and collaboration. Businesses today can help drive more momentum for social betterment by leading cross-industry conversations and pursuing AI deployments for the public good.”
 
Ultimately, AI is the latest addition to a vast arsenal of technologies ready to help humans solve problems. However, it’s also incumbent upon us—business leaders, industries, governments, and more—to take proactive measures that reduce the risks of misuse and unintended consequences. When this technology is used fairly and responsibly, we have the power to change people’s lives for good. What better business mission is there to uphold than that?
