What is ChatGPT’s place in the professional world?
ARTICLE SUMMARY
In this article, Emma Wright from Harbottle & Lewis takes a look at artificial intelligence, ChatGPT in particular, and its role in the professional world.
Emma is a senior commercial and regulatory lawyer focused on technology. She leads the highly rated Technology, Data and Digital team at Harbottle & Lewis (which she established on her move from Deloitte), where she acts for suppliers and customers of all sizes, across a variety of tech verticals, on their commercial partnerships with both the private and public sectors and on business arrangements including the handling of data and wider regulatory compliance. She is recognised as a ‘Leading Individual’ by legal directories in her fields of expertise. She is also Director and Counsel of the Interparliamentary Forum on Emerging Technologies, a not-for-profit which she co-founded with British MP Darren Jones and which has built a global network of legislators focused on sharing knowledge and collaborating on the regulation of emerging technology. She sits on the Global Forum of Women 4 Ethical AI, a network of 17 global experts focused on the adoption of ethical AI, and is working with UNESCO and legislators from across the world on the implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence at country level. Emma was listed by Computer Weekly as one of the UK’s Top 20 most influential women in tech in 2022, and she sits as a Non-Executive Director at tech scale-up Playfinder.
From my own experience as a lawyer and technologist, I have been experimenting with some of the technology aimed at the legal profession for the last five years or so, and had reached the conclusion that it still had some way to go.
ChatGPT has changed that. For those who don’t know, ChatGPT is a generative AI large language model that has been trained on the vast swathes of data publicly available on the internet. It’s a type of artificial intelligence. More importantly, it’s a type of artificial intelligence that is free to use and reached 1 million users in 5 days – around 250 times faster than Netflix, which took roughly three and a half years to hit the same milestone. Access to artificial intelligence has suddenly become democratised, and the speed at which it has improved seems to have taken a lot of people by surprise.
At first glance, ChatGPT has the ability to hold a conversation with the user and generate human-like responses. Users are writing code, brainstorming ideas, simplifying complex concepts, translating, and drafting essays, CVs and cover letters. All of these tasks crop up to a greater or lesser extent in many office-based roles and can be extremely time consuming unless the writer has a template or a starting point prepared previously, so you can understand the appeal (setting aside the question of who owns the intellectual property in the output, a legal position that will take some time to resolve).
However, ChatGPT simply produces a response based on the data and information it has been trained on. As we know, there is a lot of readily accessible imperfect information and data on the internet. Seemingly accurate information or plausible responses generated by ChatGPT can be shown to be false, misleading, inaccurate or impractical once interrogated. This highlights the importance of critical thinking and data evaluation skills, which are essential components of any learning path to becoming a data scientist. Even though ChatGPT is limited to data generated up to 2021, understanding how to assess the quality of information remains crucial.
From my own perspective, incorrect or inaccurate information is rarely helpful – particularly when it is presented as fact without any indication of the sources it draws on or any warning that it may be inaccurate. This issue has the potential to turn the education system on its head – or does it? Perhaps now, more than ever, we need students who don’t accept outputs at face value and who have the confidence to scrutinise them.
The most significant risk, and one that has concerned me for several years and inspired me to co-found the Interparliamentary Forum on Emerging Technologies, is the ability of these large language models, like any AI, to present outcomes that appear to be properly considered yet are based on biased datasets – reinforcing and amplifying the biases that already exist within our society, particularly where the AI is used as intended: as a general purpose technology.
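To make that mechanism concrete, here is a deliberately simple sketch in Python. The dataset, groups and decisions are all invented for illustration only; the point is merely that a model which faithfully learns from skewed historical data will reproduce the skew, while its answers still look considered.

    from collections import Counter

    # Invented historical decisions: group A was approved far more often than group B.
    history = ([("A", "approved")] * 80 + [("A", "rejected")] * 20
               + [("B", "approved")] * 30 + [("B", "rejected")] * 70)

    def train(records):
        """Learn, for each group, the most common historical outcome."""
        outcomes = {}
        for group, decision in records:
            outcomes.setdefault(group, Counter())[decision] += 1
        return {group: counts.most_common(1)[0][0] for group, counts in outcomes.items()}

    model = train(history)
    print(model)  # {'A': 'approved', 'B': 'rejected'} - the skew in the data becomes the answer

Nothing in the model’s output signals that the pattern it has learned is a historical bias rather than a sound judgement, which is exactly the problem when such systems are deployed as general purpose tools.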
ChatGPT goes into some detail as to how it is trained – Reinforcement Learning from Human Feedback (RLHF), combining supervised fine-tuning with ‘a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO)’ – and one might read this as taking much needed steps to build trust in the AI, with human intervention at key points.
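For readers who want a feel for what those stages involve, the following is a minimal Python sketch of the three steps RLHF describes: learning from human-written demonstrations, learning a reward signal from human rankings, and then optimising against that reward. Every prompt, answer and score here is a hypothetical stand-in, and a single greedy selection stands in for the PPO updates – this illustrates the idea, it is not OpenAI’s actual pipeline.

    # Stage 1: supervised fine-tuning - learn from human-written demonstrations.
    demonstrations = [
        ("What is a contract?", "A contract is a legally binding agreement between parties..."),
    ]

    def supervised_fine_tune(model, demos):
        """Toy stand-in: nudge the model towards the human-written answers."""
        for prompt, ideal_answer in demos:
            model[prompt] = ideal_answer
        return model

    # Stage 2: reward modelling - humans rank candidate answers, and a reward
    # model learns to score new answers the way those humans did.
    def reward(answer, human_rankings):
        return human_rankings.get(answer, 0.0)

    # Stage 3: reinforcement learning (PPO in ChatGPT's case) - generate candidates,
    # score them with the reward model, and update the policy to prefer higher scores.
    def reinforce(model, prompt, candidates, human_rankings):
        best = max(candidates, key=lambda answer: reward(answer, human_rankings))
        model[prompt] = best
        return model

    model = supervised_fine_tune({}, demonstrations)
    rankings = {"A plausible but wrong answer": 0.1, "An accurate, caveated answer": 0.9}
    model = reinforce(model, "Is this clause enforceable?", list(rankings), rankings)
    print(model)

What the sketch makes visible is that humans appear at two decisive points – writing the demonstrations and ranking the candidates – so the breadth of those humans’ perspectives shapes what the model learns to prefer.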
However, this is where we return to the fundamental issue: the importance of diversity within our tech industry, in heritage, experiences and views. We must find a way to inspire the women of today to build a collective technological future in which everyone can thrive, and those already in the industry need to be supported, and in turn to support other women, to remain, to progress and to receive the credit they deserve for their achievements within the sector.