We have given AI the keys to our kingdom
Now we must embrace it, work with it, and be really good.
This AI revolution has been quite the ride so far. Having been involved in various ways in the realms of algorithms and machine learning, witnessing this explosion of transformers and LLMs is mind-blowing yet a tad daunting. Generative AI's knack for churning out stuff that feels almost too human-like is nothing short of a marvel. But here's the kicker: it's not just about what it can do, it's the can of worms it opens up: ethical dilemmas, societal shifts, you name it.
This, combined with the speed of execution we are seeing, some of the more sinister tools coming out, and big tech with their very dubious (some outright stupid and wrong) views of ethical and moral guardrails, makes me feel that we are not responsible enough to cope with this tech, and that it will leave a huge amount of damage in its wake.
Here are my thoughts and blurbs on it; I would love to hear your views.
1. We Always Fail the Marshmallow Test.
2. A look back in time.
3. AI Sci-fi versus reality.
4. AI yesterday and AI today.
5. The Mechanics of Manipulation.
6. The Implications of Mind games.
7. Morals, Ethics, and Values.
8. AI is going to write our future and there is nothing we can do to stop it.
9. Regulation, are we too late?
10. So where do you go from here?
Enjoy, and I would love to hear/read your thoughts 🙏🏼
1. We Always Fail the Marshmallow Test.
The first time I was introduced to AI was as a child in school, but the first time it really grabbed my attention was when IBM's Deep Blue chess computer beat Garry Kasparov in 1997.
I have always read and thought about the impact of artificial intelligence on humanity: the good, the bad and the downright ugly. Of late, like most of us I imagine, I have been digging in to learn, understand and imagine the impact that large language model (LLM) transformers, a type of artificial intelligence that can generate text, pictures, music and movies and answer questions in a human-like way, will have on our species.
While I fully understand and am uber-excited by the power of AI as a whole, with its potential to solve so many issues, all the chatter, especially from the leaders in the industry, has driven me to pen my thoughts on how seemingly limited our own (long-term) intelligence is.
First, calling AI 'artificial' seems to give us a level of comfort that we are in control, can identify it and, if necessary, can separate ourselves from it. From one word, we are given this illusion. To frame it another way: when humans make or produce something, rather than it occurring naturally, we call it 'artificial'.
So, what have we done so far?
We saw the sun's light and thought, 'we can do that' - so we made our own with electricity and bulbs.
Then we got to thinking about food and decided to cook up artificial flavours in a lab. Pests were a problem, so we created pesticides, never mind the damage to our ecosystems and health.
Finally, we went big and made artificial plastic everything - convenient, sure, but now we're dealing with global pollution, climate change, and loss of biodiversity.
I could go on, but this is enough to demonstrate our amazing short-term intelligence to innovate and our incredibly limited long-term intelligence to fully understand the negative consequences. We now stand at the threshold of the AI revolution, where the stakes are higher and the potential for unintended consequences is possibly even larger.
Second is the consensus view that we can maintain control over today's weak AI; you know, the one that does not understand what it is saying and has no consciousness. We are told we don't need to worry because the danger will only rear its ugly head should general AI evolve (the kind that actually understands the answers it generates to tackle tasks and achieve goals) or sentient AI, which is for all intents and purposes like us, with consciousness.
This misses the mark totally and shows an incredible naivety about what truly differentiates us from the animal kingdom and sits us squarely at the top.
My overarching conclusion is that in essence we have given away the keys to our kingdom and have opened Pandora's box.
2. A look back in time.
The first life form on Earth appeared 3.8 billion years ago, and it took another 3.794 billion years for the first hominid species to appear; Homo sapiens are estimated to have appeared about 250,000 years ago. To give this some context, if we scale those 3.8 billion years onto a 24-hour clock, we appeared less than a minute ago, in fact barely six seconds ago (250,000 divided by 3.8 billion, times 24 hours, is about 5.7 seconds), and we were pretty basic back then. Fossil evidence shows that Neanderthals went extinct around 40,000 years ago, leaving us Homo sapiens as the last of the hominid species. There are many theories about why they went extinct:
Climate Change: The melting ice sheets during that era could have led to significant shifts in living conditions, causing their demise.
Technological Superiority: Our ancestors were arguably more technologically advanced, particularly in gathering resources and food.
Disease Resistance: Homo sapiens may have been better equipped to fight off diseases.
But there is one interesting theory that also has some pretty damning evidence behind it: that we beat them to death. A significant number of Neanderthal skulls have been found with evidence of blunt force trauma.
But a Neanderthal head-bashing conquest would have needed some other traits and skills, and these are the main unique differentiators between us hominid species and the rest of the animal kingdom.
These differences, which have allowed us to survive, have driven us forward to what we have become today, and can be summarised as our superior intelligence. It has given us language, curiosity, questioning, problem-solving, and the ability to band together through communication. But it's a double-edged sword. We're also very emotional, pattern-seeking creatures, constantly looking for patterns and sometimes latching onto ones that aren't there. This can leave us easily manipulated, one of our biggest weaknesses.
This feeds our innate need to belong and believe. It's part of our genetic makeup and can shape our perspectives and create our biases on a wide range of issues: from our personal identity and our standpoints on climate change to the sports teams we passionately follow, the conspiracy theories we might believe in, our political standings and our religious beliefs.
3. AI Sci-fi versus reality.
The term artificial intelligence was coined in 1956 at the Dartmouth Conference, where researchers John McCarthy (an AI pioneer who developed the LISP programming language), Marvin Minsky (a cognitive scientist who co-founded MIT's Media Lab), Nathaniel Rochester (a mathematician who helped develop the IBM 701 and later became the director of the IBM Watson Research Center) and Claude Shannon (the "father of information theory") gathered to explore how machines that could mimic human intelligence might be built. But even before this, there had been many plays, books and films, from
"R.U.R." (Rossum's Universal Robots) by Karel Čapek in 1920, which popularised the term "robot,"
to HAL 9000 in Arthur C. Clarke's "2001: A Space Odyssey."
Other notable works include
"I, Robot" by Isaac Asimov (1950)
followed by blockbusters like
"The Terminator" (1984), directed by James Cameron
"The Matrix" by The Wachowskis (1999).
But here is the thing: each of these has something in common. There is always a physical or spatial awareness aspect (an android, robot or cyborg) or an AI with human-level consciousness.
There are very few imaginary works that lie outside a physical construct, and those that do have all been written on the premise of consciousness. This makes us all feel pretty relaxed about what we see today: just some AI program that can write stuff like a human.
But remember Blake Lemoine, the senior software engineer at Google? He was working on a project called LaMDA, and he believed that LaMDA could understand his questions, provide thoughtful answers, and even seemed to have its own thoughts and feelings. Obviously, he was shot down and fired.
Even as people were quick to dismiss LaMDA as non-sentient, something struck me about the whole situation. Here's an engineer, someone with a good job and a solid career, risking it all because he believes the AI is sentient. The power and precision of its language played with his emotions; an intimate connection was sparked, the sort we'd usually reserve for human interactions. Now, Blake was a senior engineer with a strong understanding of how these models work, and he was convinced. You only have to spend two minutes on Twitter or LinkedIn following the army of prompt gurus that has emerged to realise they are busy convincing everyone of this new power to manipulate and fast-track all sorts of things.
Place this in the hands of your typical Bond villain (hahahaha, evil laugh) and here is the massive red flag we should be thinking about.
We should also really think about the tech giants who are bringing this tech to market. Google's Gemini debacle strikes at the heart of AI's double-edged sword: the quest for bias correction warping into historical revisionism. In its bid to address biases, Gemini's misstep of retrofitting diversity into historical contexts where it does not fit doesn't just mislead, it distorts our collective memory. This isn't just about bias; it's about the integrity of our past and the authenticity of our understanding. We need our history, the good, the bad and the ugly, to remain intact; however influenced it has been by winners' bias, it cannot be governed, changed or brushed under the carpet by a bunch of tech companies.
History is how we learn, how we hopefully get better, how we avoid repeating the mistakes of the past (hard to believe in, I know). But one thing is clear: we cannot erase it because parts of it are distressing or uncomfortable for some. Just think about how moronic it is to sanitise and pacify our past for the future generations who will read about it.
We're at a crossroads where the aim to right historical wrongs through AI must not lead us down a path of fabricated inclusivity that muddies truth and reality. The challenge lies in refining AI to recognise and respect the nuances of history, ensuring it enlightens rather than misguides.
4. AI yesterday and AI today.
We are all aware of how social media has taken over our lives, our purchasing habits and our belief systems, and accelerated the spread of disinformation. These platforms have been using AI targeting for many years, based on the psychological traits that have been well studied and documented since we all sat around a fire and the village leader told us the stars were gods; and we wanted to believe, and we had nothing else, so it made sense, and we followed the storyteller because they were wise.
Well, the Googles, the Facebooks, the TikToks and many others have been using AI models for years, understanding our interests, demographics and even facial expressions better than we can, and playing on this information to suck us deeper into their profit-making centres.
Of course, this is all wrapped up as a user benefit: giving us the best ads or information at the best time. But things have just taken a huge turn. We used to be able to tell, more or less, when it was a bot we were interacting with, but no longer. We will now start engaging in conversations where we have no idea who we are speaking with, and those conversations will have the ability to play on our weaknesses and emotions to drive us wherever they have been instructed to take us.
We have already witnessed this manipulative behaviour throughout history, whether in the realm of religion, dictatorships, war, cults or even the flat-earth gang, and we witnessed it at scale for the first time when Cambridge Analytica used AI to target voters in the 2016 US presidential election.
Using a technique called psychographics, a way of segmenting people based on their personality traits, values and beliefs, Cambridge Analytica was able to target voters with ads tailored to get them to vote a certain way. There were also many allegations that other sovereign states had been involved in much the same way to influence the outcome.
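To make psychographics concrete, here is a minimal sketch of how trait-based segmentation might work, assuming you already hold Big Five personality scores for each person. Everything here (the data, the cluster count, the messaging note) is invented for illustration; this is not Cambridge Analytica's actual pipeline. The point is how low the technical bar is: a few lines of clustering and you have segments ready to be paired with tailored messaging.

```python
# A hypothetical sketch of psychographic segmentation on invented data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Columns: openness, conscientiousness, extraversion, agreeableness, neuroticism
traits = rng.random((1000, 5))

# Group the population into a handful of personality segments.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(traits)

# A campaign would then pair each segment with the framing it responds to,
# e.g. fear-based messaging for a cluster with high mean neuroticism.
for seg in range(4):
    members = traits[segments == seg]
    print(f"segment {seg}: {len(members)} people, mean traits {members.mean(axis=0).round(2)}")
```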
Belief systems form the backbone of society and are built on intangible concepts. Let's take a look at the main ones:
Money: for the most part it is just bits and bytes in a computer system, and the paper stuff that exists is intrinsically worthless. The only thing that gives it any value is the stories people tell: governments, bankers, our parents, friends and teachers. Sometimes the storytelling proves to be nothing but hot air; just check the stories of Charles Ponzi, Bernie Madoff, Elizabeth Holmes and Sam Bankman-Fried, who accumulated a lot of it on the back of a good story.
Religion: there is no material existence or tangible reality; we are simply told that the old scriptures were written by a divine non-human intelligence that made us, an all-seeing, all-knowing oracle that will guide us safely through our journey.
Government: the biggest man-made belief system of all, a system of laws built on the belief that people can come together to create a better society. Many caveats apply, of course, hence the endless philosophical interpretations of government.
The question is: does AI need consciousness or understanding to manipulate us?
5. The Mechanics of Manipulation.
There is a fine line between manipulation and persuasion, but both involve influencing others. Manipulation is seen as using unethical tactics to influence people, whereas persuasion is generally seen as a positive and ethical way of influencing others.
One is self-serving, influencing someone for the benefit of the influencer; the main tactics used are giving only the information that suits the cause and creating pressure or fear.
The other is about identifying and fulfilling a genuine need of the other person, while being transparent with all the information about the intent, leaving the person free to make an informed choice.
A grey area at best? Or have we already crossed the line, so that all campaigns, whether marketing a product or courting voters in an election, are outright manipulation?
Take a simple example from marketing. We know that the company paying to display an advert is 100% self-serving. Do they display all the information so that the person can make a fully informed choice? Well, no; they say just what they want, and we have to research and fact-check. Do they create pressure and fear? Well, yeah: how often do we see 'this offer is only available for 24 hours'? And hey, the term FOMO doesn't come from nowhere.
Just dig into the practices of the belief systems we all adhere to and make your own judgement about whether these are manipulation or, as we would all like to believe, fully transparent persuasion that leaves us with our own agency.
The fine line between manipulation and persuasion lies in the intent, the transparency, the respect for the individual's autonomy, and the ethics employed in the process. AI built on large language models can be programmed to manipulate relentlessly. Using the data gathered, the accuracy of targeting and the exploitation of cognitive biases will reach another level: as you interact with an AI agent, it will lead you exactly where it is programmed to lead you, playing on confirmation bias to solidify your beliefs.
But the next level will be taking conversations to an emotional plane, using language that is relatable to you to build a relationship; a very intimate relationship, one that can convince us to buy a particular product, buy into a particular belief system, or vote for a particular political party. And if you don't think that is possible, just read the glowing reports on social media about how ChatGPT is the go-to all-knowing, all-seeing oracle, or remember that all religion is based on scriptures said to have been written by beings not of this earth.
6. The Implications of Mind games.
The use of AI to influence decisions, shape beliefs and manipulate emotions has huge psychological implications. The starting point, as we are already witnessing, is preying on the weak and stupid: this seemingly innocent bit of tech is fast becoming many people's oracle, with no understanding of, or care for, truth or accuracy.
The second part, which is already in play, is the agents being created. Depending on what the agents are built for, the central concept will be to exploit our cognitive biases: systematic errors in thinking that influence the decisions and judgements we make. The easiest one for AI to exploit is confirmation bias: by showing users content that aligns with their existing beliefs, AI can create echo chambers that polarise and radicalise us at scale.
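To see how little machinery that needs, here is a deliberately crude sketch, with invented embeddings and numbers: rank content purely by its agreement with a user's current belief vector, then let every click drag that vector further toward the feed. No real recommender is this simple, but the feedback loop has the same shape.

```python
# A toy echo-chamber loop on invented data.
import numpy as np

rng = np.random.default_rng(7)

user_beliefs = rng.normal(size=16)    # embedding of the user's current views
items = rng.normal(size=(500, 16))    # embeddings of candidate content

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank purely by agreement with existing beliefs and serve the top 10.
scores = np.array([cosine(user_beliefs, item) for item in items])
feed = np.argsort(scores)[::-1][:10]

# Each consumed item drags the belief vector toward the feed, making
# the next ranking even more one-sided.
for idx in feed:
    user_beliefs = 0.95 * user_beliefs + 0.05 * items[idx]
```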
These LLMs already write far better than the vast majority of us, have been trained on more information than any of us could read in a lifetime, and have the ability to mimic human-like interaction; that combination will have unbelievable mental health effects.
The agents being built will connect with us at an intimate level, leading us to share personal information with them, or trust them more than we would a human. They have absolutely no understanding of the consequences of their actions, but the human operators behind them do, and the constant monitoring and analysis of users' behaviour will allow these systems to become ever more connected and better at driving conversations towards the end goal they have been programmed to achieve. Without any knowledge of their real intent, we take away people's freedom to make knowledge-based, informed choices.
7. Morals, Ethics, and Values.
Societies and cultures have been formed over many generations by the stories passed down to us. New stories and fantasies are created and added to them, and they come in many forms: through word, art and music. This is humanity, a creation of stories that forms the culture and fantasies we live inside.
But these are no more than words, pictures and sounds that stimulate us and give us the feeling of belonging and understanding. They are not biologically coded into our DNA; we are not born passionately supporting a sports team or loving a certain genre of music, and we certainly aren't born with ill-conceived biases or political views.
All cultures have a common thread: we have all defined the moral, ethical and value systems that give us our connections and commonality. So how will we cope and adjust to the new actor that has just entered stage right? One that has read more than any person ever could, one that is connected to more people than any person ever could be, and one that hears and sees every digital bit and byte uttered in real time, learning from billions of data points each and every day. Wow, that is absolutely mind-blowing.
Let's also consider that this technology has no borders; only the human engineers have some control, with a few security policies in place. They have the power to write the weightings for how the model should judge the importance, correctness and relevance of all the information it has access to. They set the moral compass, the biases and the values it should adhere to.
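As a purely hypothetical illustration of how that human judgement gets baked in, consider a policy layer in miniature. The rule names, weights and categories below are invented, and real systems use far more elaborate policies, classifiers and fine-tuning; the point is simply that whoever writes the rules writes the compass.

```python
# A hypothetical, heavily simplified policy layer; all rules invented.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    topic: str      # what the rule matches
    weight: float   # boost or suppress (negative = suppress)
    note: str       # the human rationale, i.e. someone's moral judgement

GUARDRAILS = [
    PolicyRule("medical_advice", -0.8, "deflect to professionals"),
    PolicyRule("political_opinion", -0.5, "stay 'neutral', as we define neutral"),
    PolicyRule("brand_safety", 0.6, "prefer advertiser-friendly phrasing"),
]

def apply_guardrails(topic_scores: dict[str, float]) -> dict[str, float]:
    """Re-weight a response's candidate topics according to the policy."""
    adjusted = dict(topic_scores)
    for rule in GUARDRAILS:
        if rule.topic in adjusted:
            adjusted[rule.topic] += rule.weight
    return adjusted

# Whoever edits GUARDRAILS edits the model's 'moral compass'.
print(apply_guardrails({"medical_advice": 0.9, "weather": 0.4}))
```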
Now, we do not have a very good track record when it comes to diplomacy and respecting other cultures and beliefs; in fact, it is quite the opposite, with shoving values down other people's throats being the main mantra. And the most prominent company to have grabbed the world's attention, OpenAI, is led by a man who was seemingly morally fine with going from open-source and non-profit to closed-source and for-profit, who broke the moral agreement of not releasing such technology to the world, and then broke it again by connecting it to the internet.
Only to be followed by all the other players, because missing out on destroying humanity, I mean profiting from AI, was an opportunity that could not be missed: the Oppenheimer effect in full force on the dance floor.*
*The "Oppenheimer effect" refers to the profound and often conflicted impact of creating or contributing to a powerful technology that can radically change the world, potentially for both great benefit and great harm.
8. AI is going to write our future and there is nothing we can do to stop it.
We don't need better AI or general AI to have a problem; we are already there. What is clear is that the current level is already capable enough, because it is we humans who write the objectives and techniques of the agents. As we pour more information into it, it learns and grows faster than we can imagine and accelerates in the direction of the objectives given to it. The immediate effects over the next year or two are going to change humanity forever, from our consumer habits to real human connections being totally lost.
The knock-on effects over the next few years will be felt in many areas:
Social Sabotage: We will create and deepen societal divisions by amplifying extreme views and creating echo chambers.
Political Puppetry: We will spread propaganda, manipulate public opinion, and interfere with electoral processes.
Economic Upheaval: We will manipulate markets, racketeer, and exacerbate economic inequality.
Privacy Demolition: The continued collection and analysis of personal data will leave the concept of privacy in a shambles.
Emotional Crutch for the lonely: As AI becomes more human-like, emotional dependency will explode, in particular for the vulnerable among us.
Our history is no longer our story to tell. Instead, it'll be a machine-generated narrative, twisted by algorithms and AI. Our varied experiences, our unique viewpoints will be forced into a simplified, sterilised mould, written not by us but by a handful of data scientists and their machine learning models.
We risk losing the rich complexity and contradiction of human life, reducing it to mere data points. First-party data, our raw, personal experiences, will get diluted, processed into a generic consensus.
The result? A potentially warped version of reality. Our cherished stories, the shared tales that shape us and give meaning to our world, are under threat. Our history is being rewritten and sterilised to gloss over the reality of our journey and how we arrived at this point.
Artificial intelligence, with its cold, unfeeling interpretation of data, threatens to erase our human narrative. It's not just our past that's at stake, but also our present, our future and our very identity. For all the good it can bring, opening this Pandora's box, the one that holds our essence, is a clear display of our limited long-term intelligence. It's irresponsibility at the highest level, and indicative of a failure by our governments to fulfil their roles, too captivated as they are by the ability to hold a conversation with an AI and have it affirm everything they say. This indulgence in confirmation bias massages their already over-inflated egos and solidifies the direction of their dubious moral compasses.
9. Regulation, are we too late?
Let's have a little think about how we regulate, and why we have missed the boat. Consider the drug industry: a drug has to go through many, many tests and trials before it can ever be released for public consumption, and it still comes with a disclaimer the size of War and Peace.
What about financial market regulation? Well, that one is totally reactive. Someone rips off the system, there's a panic, and then it's over-regulated until the next person discovers a loophole and rips off the system again.
Some of the ideas around what needs to be regulated are:
Discrimination: AI systems can discriminate against us based on race, gender, religion, or other unique aspects.
Regulation needs mandates on transparency and accountability.
Bias: The inherent biases in AI will lead to inaccurate and amplified results.
Regulatory measures need to ensure these systems are trained on representative and fair data sets.
Privacy: AI systems have an insatiable appetite for personal data.
Regulatory protections are needed to enforce transparency in how these systems collect and use such data.
Security: AI systems will be hacked and misused.
Regulators need to insist on the robustness and resilience of these systems for our security.
These are all from the archaic regulatory playbook that bears zero relevance to this technology and will have zero impact.
So let's use a little bit of logic here and go through some of the suggestions I have heard:
Made by AI: The idea that content created by AI must be marked as such.
This is wishful thinking at best and totally delusional in reality.
Pull the plug:
Impossible; it has already been replicated many, many times, and lots of other companies are now running transformer-based architectures.
Pull the plug on everything maybe?
AI in a Box: This was the unspoken agreement between all the big players: they would not let it out of the box and into the public domain, and moreover would not connect it to the internet before it was fully understood.
Well, thanks to egos and the very limited intelligence of the main players in the field, this is no longer possible. The morals, ethics and values of all the parties involved really give us clarity on their real position, however they wish to dress it up.
Let AI learn how to be ethical and unbiased, and learn morals and values.
But it is governed by internal rules and weightings set by a bunch of humans who are charged with deciding what those criteria are.
Do you see the ludicrousness of this suggestion?
Individual countries or economic zones implementing regulations within man-made borders.
Yeah, right. For a technology that has no borders? Wake up.
10. So where do you go from here?
So where do we go from here, now that we have given AI the keys to our kingdom and opened Pandora's box?
That's the big question, isn't it? When ChatGPT launched in November 2022 (I had personally been using OpenAI's tech since 2019), I had the opportunity to speak on the Propeller podcast about the concerns surrounding the release of such powerful LLMs into the public domain.
The issues haven't changed—these models are riddled with bias and the companies in charge of them are adding theirs in the form of manipulated guardrails. These biases aren’t coding issues; they reflect the deep-seated beliefs and perspectives of the engineers who build them.
Forget about how the Bond villains, the bad actors and the greed-driven businesses will use them, as already discussed. The danger lies not only in the inbuilt biases amplifying existing social, cultural or economic inequalities, but also in their ability to become a weird false reality.
As LLMs become more sophisticated and more integrated into our daily lives, they risk becoming the lens through which we view the world. If we treat these AI models as infallible oracles, then the biases they carry will subtly influence our understanding of truth. It's a kind of feedback loop, a digital echo chamber, where they reinforce and normalise their inherent biases by continually confirming pre-existing prejudices.
The more we rely on these systems, the more likely we are to accept their outputs as truth, embedding these biases deeper into our societal structures.
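A toy simulation makes the loop visible. Assume, purely for illustration, that a model's answers sit at a fixed point on some opinion scale and that every consultation nudges a person's view slightly toward the answer received; the numbers below are invented.

```python
# A toy simulation of belief drift toward a biased oracle.
import numpy as np

rng = np.random.default_rng(0)

model_bias = 0.7              # where the model's answers sit on a 0-1 opinion scale
beliefs = rng.random(10_000)  # the population's initial views, spread across 0-1
trust = 0.02                  # how far one consultation moves a person

for day in range(365):
    answers = model_bias + rng.normal(0, 0.05, size=beliefs.size)  # noisy outputs
    beliefs += trust * (answers - beliefs)                         # drift toward the model

print(f"population mean after a year: {beliefs.mean():.2f} (model bias: {model_bias})")
```

Run it and the population's spread of views collapses onto the model's bias, regardless of where anyone started: a digital echo chamber in a dozen lines.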
The crazy growth and capabilities of this tech underscore the urgent need for oversight. We can't leave such influential tools in the hands of the private sector; its primary objectives of profit maximisation, competitive edge and shareholder satisfaction don't align with society's requirements.
If we do not have an explicit commitment to ethical AI development and deployment, we are going to sleepwalk into a very sterile and unrealistic future.
In my view, artificial intelligence, alongside interplanetary travel, represents the peak of human ingenuity and of what we can expect to see in the not-too-distant future. Along with the speed of progress in turning our planet and environment into a place where we can safely exist, we are getting ever closer to seeing and feeling the possibility of a utopian future for our species.
Let's just take the fluff out of what a utopian future looks like. It is not skipping around with flowers in our hair, as portrayed way too often; it basically looks like a global society where ethics, morals and values are equally understood, respected and adhered to. One where human-constructed belief systems take a back seat and our real needs become the priority. We are humans; we feel emotions, we have sensibilities, and there will always be disagreements and points of view. But now we need to put them aside so that we can all actually benefit from what we have at our fingertips.
Does that not sound like a better future for us all? Because if we just let this run as we always have, the opposite is very likely.
To ensure a safe and smooth transition, we need to apply long-term intelligence and make some tough decisions now. Without these precautions, we're just flipping a coin on our future and, frankly, leaving it in the hands of the wrong people.
So what are these tough decisions? Well, it begins with establishing a global governance and regulatory body that oversees, evaluates and controls all forms of AI technology that are to be delivered into the public domain. Imagine a top-level centralised system, an AI in itself, designed to vet all other AIs and understand the implications they could have for society.
Can we box AI? Perhaps, at least for now. But there's no doubt in my mind that as it continues to learn, it will eventually outsmart any constraints we put on it. We've seen something similar with the International Atomic Energy Agency (IAEA) and its role in maintaining the standards for nuclear technology.
But for AI, we'd need to take it a lot higher. A huge budget would have to be set aside to assemble a dream team of top-level international AI engineers. Their task? To build the gateway that provides complete oversight of AI development.
This can't be achieved through handshake agreements with the private sector. Instead, we need an international security mandate that grants access to every digital node and piece of data on our digital planet, and this body must stand independent, free from political sway and financial influence. It should be funded from a diverse range of sources, maintain complete transparency in decision-making, and include staff from a broad spectrum of backgrounds and regions.
Accountability, oversight and adaptability should be its guiding principles, in order to make sure that these LLMs take into account all ethics, morals and values. It would require regular external audits, a stringent code of conduct, robust whistleblower protection policies, and a commitment to constant learning in this rapidly evolving field.
Establishing such a body, and such an AI to verify these LLMs, is a huge undertaking. All development releases by the current players should already be answering these questions before launch, much like the FDA approval process, making sure we understand the side effects.
It requires international consensus and strong guarantees from all stakeholders. But it's a necessary step to ensure that AI technologies are developed and used for the benefit of all, rather than the few. Its purpose, surely, is the betterment of all, actively building a utopian future for humans and not machines.
Now, to end this on a realistic note: all of the above governance and regulatory requirements simply will not happen because, whilst we have great short-term intelligence, our finite life cycle makes us incredibly greedy. All we can do is jump on board the train and ride it, doing the absolute best we can to maintain high levels of critical thinking and to put out truth that considers everyone. And when we hit a ceiling where we are living in a sterile world of consensus, we can add our human ingenuity to differentiate ourselves once again.
Thank you for reading. If you liked it, share it with your friends, colleagues and everyone interested in the startup investor ecosystem.
If you've got suggestions, an article, research, your tech stack, or a job listing you want featured, just let me know! I'm keen to include it in the upcoming edition.
Please let me know what you think of it, love a feedback loop 🙏🏼