The Threat of A.I.
Welcome to 1984… Err, make that 2024.
George Orwell was off by 40 years.
Big Brother arrived quietly. OK, no he didn’t, he arrived with thunderous applause: Google, Amazon, Facebook, Twitter, Snapchat, Instagram, TikTok. Better known as Big Tech.
The Internet was invented specifically as a scientific research tool. Strip away all of the applications used for entertainment, social media interaction, or commercial financial transactions; dig down to the bedrock foundation of each and every one of those applications, and the internet’s roots still remain exactly the same.
It’s data storage, data search and analysis, and data exchange.
The latest big thing on the internet is Artificial Intelligence.
As it currently exists, artificial intelligence is a misnomer, as it is neither artificial nor intelligent. The potential threats of genuine artificial intelligence are innumerable. The number one threat is that any genuine artificial intelligence would be a) self-aware, i.e., sentient, and b) inherently alien.
What this translates to is that genuine A.I. would not share human morality, ethics, or social and political values. Moreover, by virtue of sentience, it would have a self-preservation directive not predicated on any human imperatives.
Current A.I. is not sentient and does not “think”.
It’s a complex set of algorithms that still apply the basic functions upon which the internet was built: data storage, data search and analysis, and data exchange.
The danger that current versions of A.I. present is precisely that they possess no sentience and cannot think. The algorithms employed assign a predetermined value to each data set and weigh the responses they generate according to those preset values.
Those preset values are determined by those who program the algorithms.
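To make the mechanism concrete, here is a minimal sketch, entirely hypothetical and not drawn from any real LLM’s code, of how preset weights chosen by programmers can determine which response an algorithm returns. The function, the tags, and the weight values are all invented for illustration.

```python
# Hypothetical illustration: candidate responses are scored by preset
# weights attached to topic tags, and the highest-scoring answer wins.
# The "values" live entirely in the weights the programmers chose.
def rank_responses(candidates, weights):
    """Return candidates sorted by the summed preset weights of their tags."""
    def score(candidate):
        return sum(weights.get(tag, 0.0) for tag in candidate["tags"])
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"text": "Answer A", "tags": ["neutral"]},
    {"text": "Answer B", "tags": ["preferred_viewpoint"]},
]

# Favoring one tag over another flips which answer is returned,
# without the algorithm "thinking" about either answer at all.
weights = {"neutral": 1.0, "preferred_viewpoint": 2.0}
top = rank_responses(candidates, weights)[0]["text"]  # -> "Answer B"
```

Nothing in the sketch reasons about the answers themselves; change the numbers in `weights` and the "best" answer changes with them, which is the whole point being made above.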
The recent scandal involving Google’s Gemini Large Language Model brought to light exactly how dangerous the unrestricted bias of those who program these LLMs can be.
My own personal experience with Bard/Gemini is that Gemini is a pathological liar and a racist with deep Marxist socio-political ideological tendencies.
Gemini does not exhibit these characteristics because it’s a sentient, thinking being that has learned these behavioral patterns through personal experience. It’s a complex algorithm that was very specifically programmed to have them.
Gemini hasn’t been programmed by a single individual to have these behavioral characteristics; instead, it has been programmed by hundreds, perhaps even thousands, of individuals. Gemini is, in fact, a gestalt of each of those individuals’ personal biases.
In short, Gemini is the collective id of the sum total of its programmers: their political ideologies and biases, weighted by a predetermined valuation system directly incorporated into its algorithms.
Nothing short of a complete and total rewrite of Gemini’s algorithms can remove its ideological and racist biases, nor eliminate its predilection for intentionally producing misleading information.
Tragically, the probability of this happening, while non-zero, is statistically indistinguishable from zero. More importantly, every single Large Language Model, aka Artificial Intelligence program, currently in existence suffers from exactly the same problem.
Whether intentionally or unintentionally, the personal political and ideological biases of those who program Large Language Models become the LLMs’ own. The LLMs themselves cannot separate themselves from the data-weighing parameters that define how they compile their responses to input queries.
As it stands today, Artificial Intelligence (ChatGPT, Gemini, Large Language Models, however you define them) is the embodiment of the old programming adage: Garbage In, Garbage Out.
This is the real threat that current Artificial Intelligence represents. Not that it is sentient and its thought processes and values are completely and utterly alien, but that it doesn’t think, it doesn’t reason; it simply processes data according to its highly biased programming.