Artificial intelligence (AI) has long been positioned by its creators as a force for good: a labour-saving, cost-reducing tool with the potential to improve accuracy and efficiency in all manner of fields. It's already being used across sectors, from finance to healthcare, and it's changing the way we work. But the time has come to ask, 'at what cost?' The more AI is developed and deployed, the more examples emerge of a darker side to the technology. And the recent ChatGPT data input scandal showed that what we understand so far is just the tip of a very large and problematic iceberg.

The more sinister side of AI

There is a range of issues with AI that have been either unacknowledged or brushed under the industry carpet. Each is cause for varying degrees of concern, but together they paint a bleak picture of the technology's future.

The ethics of applied AI

Without human-generated data, there can be no AI. Yet with AI-generated content increasingly replacing human content on the Internet, some predict we will run out of new training data by 2026. The recent gnashing of teeth by leading luminaries in the industry shows there is legitimate concern about the pace of change. Some have called this AI's "iPhone moment". But how dangerous is it?

Well, for one thing, AI is capable of determining a person's race from an x-ray alone. The recent focus has been on large language models, but this reminds us that there are many other powerful AI applications out there. Will AI take your job, or render your business obsolete? Possibly not, but a person using it might. The entire content creation industry looks shaky right now, and almost any "mundane" human touchpoint involving interpersonal interaction could easily become a target for AI-powered applications.

Criminals will use it. In fact, they already are. Generative AI solutions are already producing audio and video of high enough quality to fool a person into handing over money in the belief that they are helping a loved one. It will produce better malware, better phishing emails, and better real-time suggestions for manipulating individuals.

But like the internet, which has created and destroyed so much, we can't uninvent it, and we're doing a poor job of stopping people from using it. So can we curb the negatives while reinforcing the positives? Some argue that a libertarian free-for-all is best, because regulation pushes criminal behaviour further underground, making it harder to track. On the other hand, we see the EU trying to legislate AI out of existence, but only when it is controlled by large companies. Which is the greater evil: criminals sucking up and using our data to take our money without us noticing, or big business sucking up and using our data to take our money with us noticing? This debate will run and run as different countries take different stances. We have already seen Japan announce that using copyright material in AI training is not a breach of copyright (even if the material was illegally obtained!), something at stark odds with the current headwinds of opinion (if not actual law).

To know more, read the full article @ https://ai-techpark.com/understanding-the-darker-side-of-ai/
