Beyond the Hype: AI Startup Founders Warn About the Risks of ChatGPT-like Models
Imagine applying for a job and encountering an AI-based recruitment process. If the system is trained on reliable data sets, you can expect a fair outcome. However, there is also a dark side to AI.
If the AI is trained on biased data sets, it could make decisions that perpetuate or even amplify existing inequalities. For example, if an ML algorithm is trained on data that reflects historical hiring practices, it might learn to prioritize male candidates over female candidates, leading to gender discrimination.
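To make that mechanism concrete, here is a minimal sketch of how historical bias can leak into a model. Everything in it (the synthetic data, the features, the numbers) is invented for illustration and is not drawn from any real recruitment system.

```python
# Hypothetical sketch: a model trained on biased historical hiring decisions
# learns to reproduce that bias. All data and numbers below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two equally qualified groups: skill scores drawn from the same distribution.
skill = rng.normal(0.0, 1.0, n)
is_male = rng.integers(0, 2, n)

# Historical hiring labels: past recruiters favored male candidates,
# so the "ground truth" the model learns from is already biased.
hired = (skill + 0.8 * is_male + rng.normal(0.0, 0.5, n)) > 0.8

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# The model assigns a large positive weight to the gender feature,
# reproducing the historical discrimination at prediction time.
print(dict(zip(["skill", "is_male"], model.coef_[0].round(2))))
```

Note that dropping the gender column would not fully solve the problem: correlated features (a proxy such as career gaps or hobby keywords) can let the model rediscover the same pattern indirectly.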
While these models are incredibly powerful and can make decisions that often outperform human experts, they come with a significant downside: they are largely opaque, so it can be difficult, if not impossible, to understand how they arrive at their decisions.
And these are just some of the milder examples of the potential dangers the technology poses.
The Recursive had insightful conversations with a group of innovative AI startup founders about the dark side of AI, delving into their concerns about the revolutionary technology they’re crafting, as well as their sincere desire to educate people on how to skilfully navigate it.
The dangers of discriminatory and biased AI models
The loudest ethical concerns around large language models such as ChatGPT currently center on whether such models can discriminate and generate content that is simply dangerous. However, there is a lot more to it, according to Croatian mathematician and entrepreneur Sinisa Slijepcevic.
“I’d go back to a story I’m fascinated by, and it comes from the UK. When COVID-19 happened, there were suddenly no A-Level exams, which are the basis for graduation and a precondition for entering any university. And then somebody in the UK Ministry of Education came up with a model that would predict each student’s final grade, based on all the grades that person had received in the past and on everything else we could think of,” Slijepcevic, who is also the CEO and founder of data analytics and ML startup Cantab PI, tells The Recursive.
What happened next is that the model turned out to be hugely discriminatory: if you went to a good school, the model pushed your grades up by default; if you went to a bad school, it pushed them down, the Croatian mathematician explains.
“This is super dangerous. So whoever had this idea, first, it was not thought through and it wasn’t very good. Because this is not only about discrimination – it is about developing predictive models without standard use cases. You could do a lot of damage by deploying something that you do not fully understand,” he says.
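To illustrate the failure mode Slijepcevic describes, here is a toy sketch, not the actual UK model, of what happens when a prediction leans on a school’s historical average. The blending weight and all the numbers are invented for illustration.

```python
# Toy illustration (not the real UK grading model) of the failure mode above:
# anchoring a student's predicted grade to their school's historical average
# pulls strong students at weak schools down, and weak students at strong
# schools up. All numbers are invented.

def predict_grade(student_prior: float, school_avg: float, w: float = 0.6) -> float:
    """Blend a student's own track record with the school's historical average.
    The higher the weight w on the school average, the more the prediction
    discriminates by school rather than by the individual."""
    return (1 - w) * student_prior + w * school_avg

# The same student track record (75), placed in two different schools:
print(predict_grade(75, school_avg=85))  # strong school: pushed up to 81.0
print(predict_grade(75, school_avg=55))  # weak school: pushed down to 63.0
```

The same individual gets two very different predictions purely because of where they studied, which is exactly the kind of damage that deploying a model without understanding it can cause.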